Disclosure of Invention
The intelligent XR multi-mode interactive venue panoramic projection system is designed to solve the above technical problems. The system realizes a brand-new exhibition paradigm by innovatively integrating technologies such as virtual reality, motion capture, panoramic projection, and artificial intelligence.
The invention provides an intelligent XR multi-mode interactive venue panoramic projection system, which comprises:
an intelligent data terminal for:
receiving image information captured by a camera device in the venue;
identifying and analyzing the image information according to a preset identification algorithm;
generating a display instruction;
a display interaction module, in communication connection with the intelligent data terminal, for:
responding to the display instruction generated by the intelligent data terminal;
controlling display content and display modes;
a VR terminal, comprising a user client, a motion capture card and virtual reality glasses, for:
receiving data transmitted by a motion capture device and a visual tracker;
displaying virtual scene data;
providing data to a visual rendering module;
an intelligent service background, connected with the VR terminal through a network, comprising:
a motion data processing module for acquiring motion and position data from the motion capture card and decoding and processing the data;
a virtual reality module for updating the virtual scene according to the motion data;
a data communication and control module for performing data communication with the VR terminal and controlling the display screen;
a panoramic projection display system comprising a plurality of projectors for:
forming a curved screen or a spherical screen in an exhibition hall;
determining the content projected onto the display screen according to the real-time motion and position of the user.
An image pickup device for capturing image information is arranged above the venue, and the projection area serves as a display area. The image pickup device is triggered to capture image information when a user stands in a user identification area, and the boundary points of the user identification area are projected by a first projection device on the ceiling of the venue.
Preferably, the user identification area comprises a first area and a second area, and the line connecting the first area and the second area forms a rectangular boundary line. The rectangular area comprises a plurality of unit blocks: the first area is the unit block corresponding to the user's standing position, and the second area is a walkable area formed by splicing a plurality of unit blocks. The first area is located above the image identification area, and the image identification area corresponds to the projection area of a venue display.
Preferably, when a user stands in the second area, a standing signal is generated by sensing the user's standing posture through a pressure sensor in the venue floor. The venue floor comprises foot blocks that cover the unit blocks of the second area. The standing signal is sent wirelessly to the camera device, and the camera device captures image information based on the standing signal.
Preferably, when a user walks from the first area to the second area, the camera device captures image information in real time. At least two projection areas are arranged in the venue, and together they cover all exhibition positions of the venue. The intelligent data terminal is provided with a display area, within which the first area and the second area are set. When the user walks between the first area and the second area, the standing and walking areas projected by the first projection device change correspondingly, and the first projection device cooperates with the camera device to capture and process real-time image information.
Preferably, the display interaction module performs interaction control based on the display instruction generated by the intelligent data terminal. The interaction control covers both display content and display mode, and produces different interaction results according to the analysis information received by the intelligent data terminal; the interaction results comprise a first display mode, a second display mode and a third display mode. The display content is related to the image information and comprises static display, stereoscopic dynamic display and interactive display; the display modes comprise holographic projection, panoramic stereoscopic projection, video projection, naked eye 3D image projection and AR projection.
Preferably, a corresponding display mode is selected according to the display content identified for the user: when the display content is static display, one of holographic projection and video projection is selected; when the display content is stereoscopic dynamic display or interactive display, one of panoramic stereoscopic projection, naked eye 3D image projection and AR projection is selected. The display content is generated according to analysis information, and the analysis information is calculated and analyzed from the image information by a calculation module preset in the intelligent data terminal.
Preferably, the first area is smaller than the second area. The first area corresponds to the venue display and to a user standing area; the second area corresponds to a user walking area. The size of the unit blocks is adjusted according to the venue display to form a standing area corresponding to the first area and a walkable area corresponding to the second area. The unit block size and arrangement are projected by the first projection device in a preset splicing mode, and the venue display is characterized by a display size and a display interval.
Preferably, the display area of a user standing in the walking area differs from that of a user standing in the first area. The separation between the first area and the second area is represented by a projection, made by a first projection device in the venue, between the two areas; this projection may be static or dynamic. When the user stands in the second area, a dynamic image is displayed in the second area by the first projection device; when the user walks between the first area and the second area, a dynamic image is likewise displayed in the second area, and the user standing in the walking area is represented by a display area that moves within the second area.
Preferably, when the user is in the first area:
when judging the display mode for the user, the venue display corresponding to the position where the user stands is taken as the projection area, and the display information for the user is acquired from the cloud according to that venue display;
the intelligent data terminal receives image information captured by the in-venue camera device and identifies and analyzes it according to a preset identification algorithm. The image information comprises an image of the upper part of the standing user's body; this upper-body image is a complete image of the user, comprising the trunk, upper body and neck, or the trunk, upper body, neck and face;
the intelligent data terminal identifies and analyzes the image information according to the preset identification algorithm and judges whether the user is standing according to the height of the user's head; if so, the head height is compared with the boundary of the standing area, and the display mode is determined according to the different heights.
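The head-height comparison described above can be sketched as follows. The boundary height and the mapping from height bands to the three display modes are illustrative assumptions; the specification does not give numeric values:

```python
def is_standing(head_height_m: float, boundary_height_m: float) -> bool:
    """Standing test: the head is above the standing-area boundary height."""
    return head_height_m > boundary_height_m

def select_display_mode(head_height_m: float) -> str:
    """Map head height to one of the three display modes.
    The band edges (1.2 m, 1.6 m) are placeholder assumptions."""
    if head_height_m < 1.2:
        return "first display mode"
    if head_height_m < 1.6:
        return "second display mode"
    return "third display mode"
```

In practice the boundary height would be derived from the projected boundary of the standing area rather than hard-coded.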
Preferably, the identification algorithm includes:
determining, according to the image information acquired by the intelligent data terminal, whether the image information contains user information, and if so, acquiring the image features of the user to identify the user's gender, age and race;
the intelligent data terminal identifies and analyzes the image information according to the preset identification algorithm, and displays the boundary of the user's position through a second projection device in the venue based on a standing signal for that position, the standing signal being acquired through a pressure sensor in the venue.
The beneficial effects of the invention are mainly represented in the following aspects:
Firstly, through the cooperative work of the intelligent data terminal and the display interaction module, the system can analyze the characteristics and behaviors of the user in real time and provide highly personalized display content. This not only greatly improves the user experience, but also effectively improves the information transfer efficiency of the exhibition.
Secondly, the combination of the VR terminal and the intelligent service background realizes seamless fusion of the virtual and the real. The user can experience rich virtual content through the virtual reality glasses in a real exhibition hall environment, while the system adjusts the virtual scene according to the user's real-time position and motion, creating unprecedented immersion.
In addition, the panoramic projection display system solves the problems of monotonous content and lack of interaction in traditional display modes. Through the cooperation of multiple projectors, the system can create a continuous curved or spherical projection screen in an exhibition hall and adjust the projected content in real time according to the user's position. This dynamic panoramic projection not only enhances the visual impact of the exhibition, but also provides an ideal solution for multiple people visiting simultaneously.
Another important contribution of the present invention is that accurate user identification and tracking is achieved. Through the ingenious design of the user identification area and the dynamic projection boundary, the system can accurately capture the position and the behavior of the user, and lays a foundation for providing personalized content. The design also solves the problem of insufficient precision of the traditional motion capture system in a large open space.
It is worth mentioning that the system of the present invention excels when multiple people visit at the same time. By intelligently allocating different display areas and personalized content, the system can satisfy the demands of multiple users simultaneously while also promoting interaction and exchange among them. This not only improves engagement with the display, but also enhances its social properties.
From the technical realization point of view, the invention successfully integrates a plurality of complex technologies through modularized design and advanced data processing algorithm. The integration not only improves the expandability and maintainability of the system, but also realizes the synergistic effect of various technical advantages. For example, real-time processing of motion capture data enhances the realism of virtual reality, while panoramic projection provides a wider presentation space for virtual content.
In general, the intelligent XR multi-mode interactive venue panoramic projection system successfully solves a number of challenges faced by current exhibition technology through innovative technology integration and smart interactive design. It not only brings an unprecedented immersive and personalized experience to spectators, but also points the direction for digital transformation of the exhibition industry. The system marks the entry of exhibition technology into a new era, promises to thoroughly change the way people visit exhibitions, and brings revolutionary transformation to fields such as cultural diffusion, science and technology exhibition, and commercial promotion.
Detailed Description
Referring to fig. 1-6, the present invention provides an intelligent XR multi-modal interactive venue panoramic projection system. The system aims to provide a highly interactive and immersive experience environment for a venue, and intelligent presentation and personalized interaction of exhibition contents are realized by combining advanced technologies such as virtual reality, motion capture and panoramic projection.
Specifically, the system of the invention comprises an intelligent data terminal 1, a display interaction module 2, a VR terminal 3, an intelligent service background 4 and a panoramic projection display system 5. The modules work cooperatively to create an interactive exhibition space integrating reality and virtual for the user.
First, the intelligent data terminal 1 is the core processing unit of the system. It is mainly responsible for receiving the image information captured by the in-venue camera device 6 and analyzing and identifying that information using a preset identification algorithm. Based on the analysis result, the intelligent data terminal 1 generates a corresponding presentation instruction. For example, when the camera device 6 captures an observer standing in front of a certain exhibit, the intelligent data terminal 1 may recognize characteristics such as the observer's age range and sex, and generate instructions for display content and manner suited to that observer.
Secondly, the display interaction module 2 is in communication connection with the intelligent data terminal 1. Its main function is to respond to the display instructions generated by the intelligent data terminal 1 and to control the display content and display mode. For example, according to an instruction of the intelligent data terminal 1, the display interaction module 2 may choose to use holographic projection to present a 3D model of an artwork, or initiate a video presentation explaining the principles of a technological invention.
Next, the VR terminal 3 is the key component that implements the virtual reality experience. It comprises a user client 31, a motion capture card 32 and virtual reality glasses 33. The VR terminal 3 receives data transmitted by the motion capture device 7 and the visual tracker 8, including the user's motion and location information. Based on these data, the VR terminal 3 can update and display virtual scene data in real time, providing the user with an immersive virtual experience. The VR terminal 3 is also responsible for providing data to the visual rendering module and ensuring high-quality presentation of virtual scenes.
The intelligent service background 4 is connected with the VR terminal 3 through a network and is the data processing and control center of the whole system. It comprises three core sub-modules: a motion data processing module 41, a virtual reality module 42 and a data communication and control module 43. The motion data processing module 41 is responsible for acquiring motion and position data from the motion capture card 32 and decoding and processing these data. For example, it may use a Kalman filtering algorithm to smooth motion data, reducing the effects of jitter and noise. The processed data are passed to the virtual reality module 42, which updates the virtual scene in real time based on them. The data communication and control module 43 is responsible for data communication with the VR terminal 3 and controls the content output to the display screen 9.
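As an illustration of the smoothing step, a minimal scalar Kalman filter of the kind the motion data processing module 41 might apply to each tracked coordinate is sketched below. The noise variances are placeholder values, not parameters from the specification:

```python
class Kalman1D:
    """Scalar Kalman filter for smoothing one motion-capture coordinate."""

    def __init__(self, q: float = 1e-3, r: float = 1e-2):
        self.q = q            # process noise variance (placeholder)
        self.r = r            # measurement noise variance (placeholder)
        self.x = 0.0          # state estimate
        self.p = 1.0          # estimate variance
        self.initialized = False

    def update(self, z: float) -> float:
        """Fold one noisy measurement z into the smoothed estimate."""
        if not self.initialized:
            self.x, self.initialized = z, True
            return self.x
        self.p += self.q                    # predict: variance grows
        k = self.p / (self.p + self.r)      # Kalman gain
        self.x += k * (z - self.x)          # correct toward the measurement
        self.p *= (1.0 - k)                 # variance shrinks after correction
        return self.x
```

A real motion pipeline would run one filter per axis (or a vector-state filter with a velocity model); this sketch only shows the predict-correct cycle that suppresses jitter.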
The panoramic projection display system 5 is another important component of the present invention. It comprises a plurality of projectors 51 which are carefully arranged to form a continuous curved or spherical screen in the booth. Preferably, the system uses 8-10 single projectors to achieve seamless stitching through edge blending techniques. The panoramic projection display system 5 is unique in that it is capable of dynamically adjusting the content projected onto the display screen 9 in accordance with the user's real-time actions and position. This means that as the user moves through the exhibition hall, the projected content changes, creating an immersive sensation for the user.
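Seamless stitching by edge blending can be illustrated with a simple linear intensity ramp across the overlap between two adjacent projectors. This is only a sketch: production blending normally applies gamma-corrected curves rather than a straight line:

```python
def blend_weight(x: float, overlap_start: float, overlap_end: float) -> float:
    """Linear edge-blend ramp for the left-hand projector of a pair:
    full intensity in its exclusive zone, fading to zero across the overlap.
    x, overlap_start and overlap_end are positions along the screen."""
    if x <= overlap_start:
        return 1.0
    if x >= overlap_end:
        return 0.0
    return (overlap_end - x) / (overlap_end - overlap_start)

def neighbour_weight(x: float, overlap_start: float, overlap_end: float) -> float:
    """Complementary ramp for the right-hand projector, so that the
    combined intensity at every point of the overlap sums to 1."""
    return 1.0 - blend_weight(x, overlap_start, overlap_end)
```

With 8-10 projectors, each adjacent pair would get its own overlap interval, and each projector's output image is multiplied by its weight before display.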
One key innovation of the present invention is its user identification and interaction mechanism. When the user is in the projection area, the image pickup device 6 provided above the venue is triggered to capture image information of the user. The projection area here generally corresponds to the booth area. In order to accurately identify the location of the user, the system is provided with a first projection device arranged on the ceiling of the venue for projecting the boundary points of the user identification area. The design not only improves the recognition accuracy of the system, but also provides visual feedback for users to know when the users enter the interaction area.
Further, the user identification area of the present invention is skillfully divided into a first area and a second area. The connection line between the first area and the second area forms a rectangular boundary line, and the rectangular area comprises a plurality of unit blocks. Wherein the first area corresponds to one unit block as a user's standable area. The second area is formed by splicing a plurality of unit blocks to form a walking area. An advantage of this design is that it can meet both static presentation and dynamic interaction requirements. For example, the system may trigger a detailed exhibit introduction when the user is standing in a first area, and may provide a broader overview or guidance when the user is moving in a second area.
Notably, the first region is purposely disposed above the image recognition region. The image recognition area here corresponds to the projection area of the venue display. This layout design ensures that the system is able to accurately capture whole body images of standing users for more accurate identification and analysis.
In order to further improve the interaction accuracy of the system, pressure sensors are arranged in the floor of the second area. These pressure sensors sense the standing position of the user and generate a standing signal when the user stands in the second area. Specifically, the venue floor is provided with foot blocks that cover the unit blocks of the second area. When the user steps on these foot blocks, the pressure sensor is activated, and a standing signal is then sent wirelessly to the camera device 6, triggering the capture of image information. This design not only improves the response speed of the system, but also enhances the interaction accuracy.
For example, suppose the trigger threshold of the pressure sensor is set to 50 N. When a pressure exceeding this threshold is detected, the system assumes that a user is standing in the area. The threshold is selected based on ergonomic studies, so that actual human standing can be effectively distinguished from other possible sources of interference.
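The threshold test itself is simple enough to sketch directly. The reading format below, a mapping from foot-block identifiers to measured force, is a hypothetical interface, not one defined in the specification:

```python
TRIGGER_THRESHOLD_N = 50.0  # trigger threshold named in the description

def triggered_blocks(readings_n: dict) -> list:
    """Return the identifiers of the foot blocks whose measured force
    exceeds the trigger threshold, i.e. where a standing signal fires."""
    return [block for block, force in readings_n.items()
            if force > TRIGGER_THRESHOLD_N]
```

Each identifier returned would then be sent wirelessly to the camera device 6 as a standing signal.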
In general, the intelligent XR multi-modal interactive venue panoramic projection system of the invention achieves a highly personalized and interactive exhibition experience by comprehensively utilizing a number of advanced technologies. The system not only accurately identifies and tracks the user's position and actions, but also dynamically adjusts the display content and mode according to the user's characteristics and behaviors. This innovative interaction mode provides a brand-new display model for modern exhibitions and is expected to remarkably improve observer engagement and satisfaction.

The system of the present invention further optimizes the user's mobile experience in the exhibition hall. The imaging device 6 captures image information in real time as the user walks from the first area via the second area. This real-time tracking ensures that the system continually responds to the user's movements, thereby providing a seamless interactive experience.
Preferably, the system of the present invention provides at least two projection areas within the venue, which together cover all of the venue's booths. This design ensures that the user enjoys a full visual experience regardless of where in the venue they are. For example, in a 1000 square meter exhibition hall, two 500 square meter projection areas may be provided, covering the front and rear halves of the venue respectively.
A significant feature of the present invention is that the intelligent data terminal 1 is configured with a presentation area. In this display area, a first area and a second area are likewise provided. The first projection device dynamically adjusts the projected user stance and walk regions as the user walks in an area between the two regions. The dynamic adjustment enables the system to accurately track the movement track of the user, and provides more personalized display content for the user.
For example, suppose a user moves from exhibit A to exhibit B, a distance of approximately 10 meters. During this movement, the first projection device may update the projection area once per second, ensuring that the user is always in the optimal interaction position. This frequent updating creates a fluent mobile experience for the user, as if the entire exhibition hall were changing with their steps.
It is noted that the first projection device is closely matched with the image pickup device 6, so as to realize real-time image information capturing and processing. The cooperative working mode greatly improves the response speed and accuracy of the system.
The system of the present invention is also unique in terms of presentation interactions. The display interaction module 2 performs interaction control based on the display instruction generated by the intelligent data terminal 1, and the control comprises adjustment of display content and display modes. The system controls different interaction results according to the analysis information received by the intelligent data terminal 1, wherein the results comprise a first display mode, a second display mode and a third display mode.
In particular, the presentation content is closely related to the image information. The system of the present invention supports a variety of presentation content types, including static presentation, stereoscopic dynamic presentation, and interactive presentation. Correspondingly, the display modes are various, including holographic projection, panoramic stereo projection, video projection, naked eye 3D image projection, AR projection and the like.
This varied presentation content and manner enables the system of the present invention to accommodate a variety of different display requirements. For example, for a static artwork, the system may choose to use holographic projection to reveal its three-dimensional structure, while for a highly interactive technical exhibit, the system may use AR projection to allow the user to explore portions of the exhibit through gesture operations.
In a preferred embodiment of the present invention, the system selects a corresponding display mode according to the result of user identification. Specifically, when the display content is static display, the system can select between holographic projection and video projection, and when the display content is stereoscopic dynamic display or interactive display, the system can select one of panoramic stereoscopic projection, naked eye 3D image projection and AR projection.
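The selection rule of this preferred embodiment can be sketched as a lookup. The `prefer_interactive` tie-break used to choose within each candidate set (for instance, favouring AR projection for younger audiences) is an illustrative assumption; the specification only states that one mode is chosen from each set:

```python
def select_mode(content_type: str, prefer_interactive: bool = False) -> str:
    """Choose a display mode per the described rules: static content picks
    between holographic and video projection; stereoscopic dynamic or
    interactive content picks among the three stereoscopic modes."""
    if content_type == "static":
        return "video projection" if prefer_interactive else "holographic projection"
    if content_type in ("stereoscopic dynamic", "interactive"):
        return ("AR projection" if prefer_interactive
                else "panoramic stereoscopic projection")
    raise ValueError(f"unknown content type: {content_type!r}")
```

Naked eye 3D image projection would be chosen by an analogous tie-break within the same candidate set.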
The intelligent display mode selection greatly improves the look and feel and interactivity of the display. For example, for a young audience aged between 20-30 years, the system may tend to select more interactive and more technological AR projections, while for an older audience, the system may select more intuitive and easier to understand video projections.
It should be noted that the display content is generated according to analysis information, and the analysis information is obtained by performing calculation and analysis by a calculation module preset by the intelligent data terminal 1 according to the image information. This process involves complex image recognition and machine learning algorithms, ensuring a high degree of personalization and adaptability of the presentation content.
In another embodiment of the invention, the first area is smaller than the second area. This design is based on an in-depth analysis of user behavior patterns. The first area corresponds to the venue display and the user's standing area, while the second area corresponds to the user's walking area. This arrangement fully takes into account the user's activity pattern in the exhibition hall: the user will typically stand for a period of time in front of a certain exhibit and then move to the next.
The system adjusts the sizes of the unit blocks according to the specific situation of the venue to form a standable area corresponding to the first area and a walkable area corresponding to the second area. The unit block size and arrangement adopt a preset splicing mode and are projected through the first projection device. For example, in a typical arrangement, the unit block of the first area may be 2 m × 2 m, and the unit blocks of the second area may be 1 m × 1 m. This differentiated design better matches users' activity demands in the different areas.
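The splicing of unit blocks into areas can be sketched as tiling a rectangle; the dimensions below reproduce the 2 m and 1 m example, and the origin-coordinate representation of each block is an illustrative choice:

```python
import math

def build_grid(width_m: float, depth_m: float, block_m: float):
    """Tile a rectangular area into square unit blocks of side block_m,
    returning the (x, y) origin of each block in metres."""
    cols = math.floor(width_m / block_m)
    rows = math.floor(depth_m / block_m)
    return [(c * block_m, r * block_m)
            for r in range(rows) for c in range(cols)]

# First area: a single 2 m x 2 m standing block.
first_area = build_grid(2.0, 2.0, 2.0)
# Second area: a 4 m x 3 m walkable strip of 1 m x 1 m blocks.
second_area = build_grid(4.0, 3.0, 1.0)
```

The first projection device would then project the boundary of each block origin in this preset arrangement.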
The system of the present invention also takes into account features of venue display including display size and display spacing. For example, for a large exhibit, the system may allocate a larger first area, while for the aisle between the exhibits, the system adjusts the size and shape of the second area accordingly. This flexible layout adjustment ensures that the system can accommodate a variety of different exhibition hall layouts, providing an optimal user experience.
In general, the intelligent XR multi-mode interactive venue panoramic projection system provides a brand-new interactive paradigm for modern exhibitions through advanced user tracking, intelligent display selection and flexible spatial layout design. It not only accurately captures the user's behaviors and preferences, but also dynamically adjusts the display content and mode according to this information, creating a highly personalized and immersive exhibition experience. This innovative exhibition mode is expected to remarkably improve observer engagement and satisfaction, and to bring revolutionary transformation to the exhibition industry.

The system of the present invention is further innovative in terms of the user interaction experience. Specifically, the system carefully designs the display effects for users in different areas so as to enhance their spatial perception and interaction experience.
In a preferred embodiment of the invention, the display area of the user standing on the user walking area (i.e. the second area) is significantly different from the display area of the user standing on the first area. This differentiated design is intended to provide a clear sense of spatial positioning to the user, while also directing the user's attention to the particular presentation.
To achieve visual separation between the first area and the second area, the system of the present invention ingeniously utilizes a first projection device within the venue. In particular, the first projection device projects the display area of the standing user within the second area, which projection may be in a static or dynamic manner. For example, when the user is standing in the second area, the system may project a circle of light under the user's foot, the color of the circle may change over time, or the circle may change in size depending on the speed of movement of the user. Such dynamic visual feedback not only enhances the user's immersion, but also provides intuitive location information to the user.
When the user moves in the second area, the system of the invention can display dynamic images in the second area through the first projection device. The dynamic display can reflect the moving track of the user in real time, and creates an interactive carpet effect for the user. For example, the system may leave a string of lighted footprints on the path the user moves, which may fade over time, creating a dynamically changing visual effect.
Further, when the user moves from the first area to the second area or walks between the two areas, the system displays a dynamic image in the second area through the first projection device. The display mode can clearly mark the walking path of the user and provide useful space information for other observers. For example, the system may project a colored band of light on the path traveled by the user, and the color of the band of light may vary depending on the speed of movement or dwell time of the user.
The system of the present invention is also deeply optimized in terms of user identification and information processing. When the user is in the first area, the intelligent data terminal 1 will first receive the image information, then calculate the height of the user according to the information, and judge the display mode of the user. The process involves complex image processing and machine learning algorithms that can quickly and accurately identify user features and make corresponding presentation decisions.
For example, assume that the system detects an adult male user about 180cm in height standing in a first area. Based on this information, the system may determine that the user is suitable for receiving deeper presentation. Then, the system takes a venue display corresponding to the position where the user stands as a projection area, and acquires corresponding display information from the cloud. This approach ensures a high degree of relevance and personalization of the presentation content.
The intelligent data terminal 1 adopts an advanced recognition algorithm when processing image information. This algorithm is capable of extracting from the image an image of the upper part of the user's body, including the torso, upper body and neck, and sometimes the face. Based on this information, the system is able to identify the gender, age, race, etc. characteristics of the user. For example, the system may use a deep learning model to analyze facial features of the user to estimate the age range and gender of the user. Such accurate user portrayal can help the system provide more personalized presentation content.
In addition to user feature identification, the system of the present invention can also identify whether the location where the user is standing is a standable area and the venue exhibits corresponding to that area. This spatial awareness capability enables the system to provide the user with exhibit information that is highly correlated to where it is located.
In judging the posture of the user, the system of the invention adopts a judging method based on head height. Specifically, the system compares the head height of the user with the boundary height of the area in which the user is standing. If the user's head height is significantly higher than the boundary height, the system determines that the user is standing. This method is simple and reliable, and can rapidly judge the user's posture so that the display mode is adjusted in time.
The system of the present invention also includes a complex set of recognition algorithms. The algorithm will first determine whether the image information contains user information. If a user is detected, the algorithm further extracts the image features of the user and identifies the gender, age and race information of the user. The algorithm then combines these characteristics of the user with a comparison of the user's head height to the standing area boundary height to determine the exhibit information on the venue's display corresponding to the user.
For example, if the system identifies a female user between 30-40 years old, about 165cm in height, standing in front of a particular booth in an artistic exhibition, the system may choose to play an audio presentation of the background of the creation of the artwork while projecting an animation of the creation process of the artwork on the ground in front of the user.
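The decision flow of this recognition algorithm can be sketched as below. The feature extraction itself, a learned model in the description, is stubbed out, and the `UserFeatures` record and boundary-height parameter are hypothetical names introduced for illustration:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class UserFeatures:
    """Features the recognition algorithm is said to extract (hypothetical record)."""
    gender: str
    age_range: str
    head_height_m: float

def recognize(features: Optional[UserFeatures], boundary_height_m: float) -> dict:
    """Decision flow: no user detected -> empty result; otherwise report the
    extracted features and whether the user is standing, judged by comparing
    head height with the standing-area boundary height."""
    if features is None:                     # image contains no user information
        return {"user_detected": False}
    return {
        "user_detected": True,
        "standing": features.head_height_m > boundary_height_m,
        "gender": features.gender,
        "age_range": features.age_range,
    }
```

The resulting record would then be matched against the venue display at the user's position to select the exhibit information to present.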
Finally, the system of the present invention also utilizes a pressure sensor within the venue to acquire the user's standing signal. This signal triggers a second projection device within the venue to display the boundary of the location of the user. The method not only improves the response speed of the system, but also provides clear space positioning feedback for the user.
In general, the intelligent XR multi-modal interactive venue panoramic projection system of the present invention creates a highly intelligent, personalized exhibition experience by comprehensively utilizing advanced image processing, machine learning, projection techniques and sensor techniques. The system can accurately identify the user characteristics, track the user position in real time and dynamically adjust the display content and mode according to the information. The innovative exhibition mode not only can remarkably improve the user experience, but also provides new possibility for digital transformation of the exhibition industry.
It should be noted that the above-mentioned embodiments are only preferred embodiments of the present invention, and are not intended to limit the present invention, and any modifications, equivalent substitutions, improvements, etc. within the principle of the present invention should be included in the protection scope of the present invention.