
CN110703916A - Three-dimensional modeling method and system - Google Patents

Three-dimensional modeling method and system

Info

Publication number
CN110703916A
CN110703916A
Authority
CN
China
Prior art keywords
sub
data
mode
calling
mold
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910938301.1A
Other languages
Chinese (zh)
Other versions
CN110703916B (en)
Inventor
李小波
甘健
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hengxin Oriental Culture Ltd By Share Ltd
Original Assignee
Hengxin Oriental Culture Ltd By Share Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hengxin Oriental Culture Ltd By Share Ltd
Priority to CN201910938301.1A
Publication of CN110703916A
Application granted
Publication of CN110703916B
Active legal status
Anticipated expiration


Classifications

    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/017: Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 90/00: Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/30: Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Human Computer Interaction (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application discloses a three-dimensional modeling method and system. The method comprises: creating a plurality of sub-modes adapted to the virtual mold, together with a storage path corresponding to each sub-mode; creating region call data corresponding to each sub-mode and storing the region call data under the sub-mode's storage path; and calling a sub-mode for use according to the accessed real-scene mold. The technical effect of the application is that three-dimensional virtual reality is combined with real objects and dynamic operation gestures, improving the user's sense of real use.

Description

Three-dimensional modeling method and system
Technical Field
The application relates to the field of computers, in particular to a three-dimensional modeling method and a system thereof.
Background
Accurate and efficient reconstruction of three-dimensional models from the real world has attracted increasing interest. Building on such models, virtual reality (VR) technology has emerged: VR uses a computer to simulate and generate a virtual three-dimensional world, providing the user with visual, auditory, tactile and other sensory simulation, so that the user can observe objects in the three-dimensional space freely and in real time, as if present in the scene.
However, existing three-dimensional virtual reality technology only lets a user observe and simulate a pre-constructed three-dimensional world. How to combine three-dimensional virtual reality with existing real objects and with the user's operation gestures, so as to give the user a realistic sense of use, remains an open problem.
Disclosure of Invention
The aim of the application is to provide a three-dimensional modeling method and system that combine three-dimensional virtual reality with real objects and dynamic operation gestures, improving the user's sense of real use.
To achieve the above aim, the present application provides a three-dimensional modeling method, comprising: creating a plurality of sub-modes adapted to the virtual mold, and a storage path corresponding to each sub-mode; creating region call data corresponding to each sub-mode and storing the region call data under the sub-mode's storage path; and calling a sub-mode for use according to the accessed real-scene mold.
Preferably, the sub-steps of creating a plurality of sub-modes adapted to the virtual mold are as follows: classifying the virtual molds, and creating a plurality of usage modes according to the classes of the virtual molds; creating a plurality of sub-modes in each usage mode; and creating a corresponding storage path for each sub-mode.
Preferably, the sub-steps of creating the region call data are as follows: acquiring the basic data of each sub-mode; processing the basic data to obtain region data, and storing the region data in a region data comparison library; acquiring or simulating a plurality of operation gestures, acquiring their coordinate data, judging the coordinate position of each operation gesture through the region data comparison library, presetting the activity state of the virtual mold according to that position, and creating a plurality of dynamic virtual molds corresponding to the activity states of the virtual mold; and creating a plurality of region call files, and storing each dynamic virtual mold in the corresponding region call file.
Preferably, the sub-steps of processing the basic data to obtain region data and storing the region data in the region data comparison library are as follows: dividing the coordinate data of the virtual mold into a plurality of first use areas; dividing the virtual space coordinate data into a plurality of second use areas corresponding to the first use areas, and a third use area; and creating a region data comparison library, and storing the coordinate data of the first, second, and third use areas in it.
Preferably, the sub-steps of calling the corresponding sub-mode for use according to the accessed real-scene mold are as follows: acquiring the identification information of the real-scene mold; and calling the corresponding sub-mode for use according to the identification information.
Preferably, the sub-steps of calling the corresponding sub-mode for use according to the identification information are as follows: judging the class of virtual mold to be called according to the mold class in the identification information, and judging the usage mode according to the class of the virtual mold; and judging a sub-mode within that usage mode according to the specific type in the identification information, and calling the region call data of that sub-mode for use.
The application also provides a three-dimensional modeling system, which comprises at least one real-scene mold, an access device, a VR device and a somatosensory controller. The access device is connected with the real-scene mold, the VR device and the somatosensory controller respectively, and is used for executing the above three-dimensional modeling method.
Preferably, the access device comprises a recognizer, a processor and a display, the processor being connected with the display and the recognizer respectively. The recognizer is used for acquiring, according to an instruction of the processor, the identification information of a real-scene mold accessed to the access device, and sending it to the processor for processing. The processor is used for receiving the data sent by the recognizer, processing it, calling the sub-mode according to the processed data, and sending the region call data called according to the sub-mode to the display and the VR device respectively. The display is used for receiving and displaying the data sent by the processor and the VR device.
Preferably, the processor comprises a storage module, a three-dimensional modeling module, a processing module, a judging module and a calling module. The storage module is used for storing the basic data, the sub-modes and corresponding data such as the region call files. The three-dimensional modeling module is used for acquiring basic data and sending it to the processing module, and also for creating the dynamic virtual molds. The processing module is used for partitioning the acquired basic data. The judging module is used for judging the sub-mode to be used according to the identification information, judging the region call file to be called according to the coordinate position of the operation gesture, and feeding the judgment result back to the calling module. The calling module is used for calling the sub-mode and the region call file according to the judgment result.
Preferably, the real-scene mold is provided with identification information, which includes at least the mold class and the specific type of the real-scene mold.
The beneficial effects achieved by the application are as follows:
(1) The three-dimensional modeling method and system combine three-dimensional virtual reality with real objects and dynamic operation gestures, improving the user's sense of real use.
(2) With the three-dimensional modeling method and system, an operator can use multiple real-scene molds and their corresponding sub-modes for different kinds of learning and practice, at low cost and over a wide range of applications.
Drawings
To illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings needed in describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present application; those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a schematic diagram of a three-dimensional modeling system according to one embodiment;
FIG. 2 is a schematic flow chart diagram of an embodiment of a three-dimensional modeling method.
Detailed Description
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person skilled in the art from these embodiments without creative effort fall within the protection scope of the present invention.
The three-dimensional modeling method and system combine three-dimensional virtual reality with real objects and operation gestures, improving the user's sense of real use.
As shown in fig. 1, the present application provides a three-dimensional modeling system comprising at least one real-scene mold 1, an access device 2, a VR device 3 and a somatosensory controller 4. The access device 2 is connected with the real-scene mold 1, the VR device 3 and the somatosensory controller 4 respectively, and is used for executing the three-dimensional modeling method described below.
Further, the access device 2 comprises a recognizer, a processor and a display, the processor being connected with the display and the recognizer respectively.
The recognizer is used for acquiring, according to an instruction of the processor, the identification information of a real-scene mold 1 accessed to the access device 2, and sending it to the processor for processing.
The processor is used for receiving the data sent by the recognizer, processing it, calling the sub-mode according to the processed data, and sending the region call data called according to the sub-mode to the display and the VR device respectively.
The display is used for receiving and displaying the data sent by the processor and the VR device.
Further, the processor comprises a storage module, a three-dimensional modeling module, a processing module, a judging module and a calling module.
The storage module is used for storing basic data, sub-modes and corresponding data such as region calling files.
The three-dimensional modeling module is used for acquiring basic data and sending the basic data to the processing module, and is also used for creating a dynamic virtual mold.
And the processing module is used for carrying out partition processing on the acquired basic data.
The judging module is used for judging the sub-mode to be used according to the identification information, judging the region call file to be called according to the coordinate position of the operation gesture, and feeding the judgment result back to the calling module.
The calling module is used for calling the sub-mode and the region call file according to the judgment result.
The three-dimensional modeling module is also connected with the storage module; it is used for creating the virtual mold and the virtual three-dimensional space and sending them to the storage module for storage.
Further, the storage module comprises a regional data comparison library.
As shown in fig. 2, the present application provides a three-dimensional modeling method, including:
s1: and creating a plurality of sub-modes matched with the virtual mold and storage paths corresponding to the sub-modes.
Specifically, as one embodiment, a plurality of real-scene molds 1 are manufactured in advance. A real-scene mold 1 may be a musical instrument mold, a writing-and-drawing mold, a sports mold, or the like.
Further, as an example, the specific type of a musical instrument mold may be a piano, an electronic organ, a drum kit, or the like; the specific type of a writing-and-drawing mold may be a drawing board or the like; and the specific type of a sports mold may be a boxing glove, a sandbag, or the like. The three-dimensional modeling module creates a virtual mold adapted to each real-scene mold: a virtual musical instrument mold, a virtual writing-and-drawing mold, a virtual sports mold, or the like, whose specific type may correspondingly be a virtual piano, a virtual electronic organ, a virtual drum kit, a virtual drawing board, a virtual boxing glove, a virtual sandbag, and so on.
Specifically, as another embodiment, a plurality of virtual molds are created in advance by the three-dimensional modeling module, and a live-action mold adapted to the virtual mold is manufactured according to the virtual mold.
Furthermore, the live-action mold is provided with identification information.
Specifically, as one embodiment, the identification information includes at least the mold class and the specific type of the real-scene mold 1.
Wherein the mold classes at least include a striking class, a key class, a pure-gesture class, and a writing-and-drawing class; the specific types at least include a piano, an electronic organ, a drum kit, a drawing board, a boxing glove, a sandbag, and the like.
Further, the sub-step of creating a plurality of sub-patterns adapted to the virtual mold is as follows:
s110: the virtual molds are classified, and a plurality of usage patterns are created according to the classes of the virtual molds.
Specifically, as one embodiment, the processing module discriminates each virtual mold and classifies it according to the discrimination result; the classes of the virtual molds at least include a striking class, a key class, a pure-gesture class, and a writing-and-drawing class.
The plurality of usage modes at least includes a striking mode, a key mode, a pure-gesture mode, a writing-and-drawing mode, and the like.
Virtual molds of the striking class are stored under the striking mode, those of the key class under the key mode, those of the pure-gesture class under the pure-gesture mode, those of the writing-and-drawing class under the writing-and-drawing mode, and so on.
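As a hedged illustration only, step S110 amounts to a lookup from mold class to usage mode followed by grouping; every class and mode name below is an assumption invented for the example, not taken from the patent.

```python
# Sketch of step S110: grouping virtual molds into usage modes by
# class. The class and mode names are illustrative assumptions.
USAGE_MODE_BY_CLASS = {
    "striking": "striking mode",
    "key": "key mode",
    "pure gesture": "pure gesture mode",
    "writing and drawing": "writing and drawing mode",
}

def classify_molds(molds):
    """Group virtual molds into usage modes according to their class."""
    modes = {}
    for name, mold_class in molds.items():
        mode = USAGE_MODE_BY_CLASS[mold_class]
        modes.setdefault(mode, []).append(name)
    return modes
```

A dictionary keyed by usage mode keeps each mode's molds together, mirroring how the patent stores each class of virtual mold under its usage mode.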
S120: respectively creating a plurality of sub-modes in each use mode;
specifically, for example, the striking mode includes an instrument mode and a striking mode; the musical instrument modes comprise a drum set mode, a timpani mode, a hi-hat mode, a gong mode, a tambourine mode and the like. The movement pattern in the striking mode includes a boxing pattern and the like.
The key modes include a piano mode, an organ mode, and the like.
Pure gesture modes include paper folding mode, stacked wood mode, and the like.
The writing and drawing mode includes writing mode, drawing mode and the like.
S130: and respectively creating a corresponding storage path for each sub-mode.
Specifically, a corresponding storage path is created on the storage module for each sub-mode; when the data of a sub-mode needs to be called, it can be acquired directly from that storage path.
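Step S130 can be sketched as below, under the assumption (made only for illustration; the patent does not specify a layout) that each sub-mode maps to one directory under its usage mode:

```python
# Sketch of step S130: one storage path per sub-mode, so that a
# sub-mode's region call data can later be fetched directly from the
# sub-mode's own path. The directory layout is an assumption.
from pathlib import Path

def create_storage_paths(modes, root):
    """Create and return a storage path for every sub-mode."""
    paths = {}
    for usage_mode, sub_modes in modes.items():
        for sub_mode in sub_modes:
            path = Path(root) / usage_mode / sub_mode
            path.mkdir(parents=True, exist_ok=True)
            paths[sub_mode] = path
    return paths
```

Keeping the returned mapping lets later steps resolve a sub-mode to its path in one dictionary access.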
S2: and creating area calling data corresponding to the sub-mode according to the sub-mode, and storing the area calling data in a storage path of the sub-mode.
Specifically, the substeps of creating the region call data are as follows:
t1: and acquiring basic data of each sub-mode.
Wherein the basic data at least comprises: the system comprises a plurality of virtual space coordinate data, a plurality of virtual mould coordinate data, a plurality of audio data and a plurality of characters and drawing data.
Specifically, virtual mold coordinate data arranged in a virtual space and current virtual space coordinate data are derived from the three-dimensional modeling module, and the obtained virtual space coordinate data and the obtained virtual mold coordinate data are transmitted to the processing module; a plurality of audio data, a plurality of text and drawing data are obtained from an existing database.
T2: and processing the basic data to obtain area data, and storing the area data in an area data comparison library.
Further, the sub-steps of processing the basic data to obtain the region data and storing the region data in the region data comparison library are as follows:
specifically, the area data includes at least coordinate data of the first usage area, coordinate data of the second usage area, and coordinate data of the third usage area.
H1: the coordinate data of the virtual mold is divided into a plurality of first use areas.
Specifically, as one example, take a virtual piano mold as the virtual mold. After the virtual coordinate data of each key of the virtual piano mold is acquired from the three-dimensional modeling module, the area of each key is marked, either by the three-dimensional modeling module or manually by an operator, and the area of each key is taken as a first use area.
H2: the virtual space coordinate data is divided into a plurality of second use areas corresponding to the first use areas, and third use areas.
Specifically, as one example, take a virtual piano mold as the virtual mold. The coordinate data of the virtual space is acquired from the three-dimensional modeling module; the coordinate data of the space occupied by the virtual piano mold in the virtual space is obtained from the position at which the virtual piano mold is fitted into the virtual space; and the region of the virtual space corresponding to each key of the virtual piano mold is set as a second use area.
The part of the virtual space not covered by the virtual mold is the third use area.
H3: and creating an area data comparison library, and storing the coordinate data of the first use area, the coordinate data of the second use area and the coordinate data of the third use area in the area data comparison library.
Specifically, a region data comparison library is created in the storage module, and the acquired coordinate data of the first use areas, the second use areas, and the third use area are stored in the region data comparison library.
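A minimal sketch of such a comparison library, assuming each use area is stored as an axis-aligned coordinate box (a representation chosen only for illustration; the patent does not specify one):

```python
# Sketch of the region data comparison library (steps H1-H3): the
# coordinate data of every use area is kept as an axis-aligned box
# (lo, hi) of virtual-space coordinates.
def build_comparison_library(first_areas, second_areas, third_areas):
    """Store the coordinate data of the three kinds of use areas."""
    return {"first": first_areas, "second": second_areas, "third": third_areas}

def locate(library, kind, point):
    """Index of the area of the given kind containing point, else None."""
    for i, (lo, hi) in enumerate(library[kind]):
        if all(l <= c <= h for c, l, h in zip(point, lo, hi)):
            return i
    return None
```

With this layout, judging the coordinate position of an operation gesture is a single lookup per area kind.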
T3: acquiring or simulating a plurality of operation gestures, acquiring coordinate data of the operation gestures, judging the coordinate data positions of the operation gestures through a region data comparison library, analyzing and presetting the activity state of a virtual mold according to the coordinate data positions of the operation gestures, and creating a plurality of dynamic virtual molds corresponding to the activity state of the virtual mold.
Specifically, as one example, take a virtual piano mold as the virtual mold. The virtual piano mold has eighty-eight keys, and the area of each key is a first use area; that is, the virtual piano mold has eighty-eight first use areas. When the coordinate data of an operation gesture falls into one of the first use areas, the key of that area is preset to a pressed state, and a dynamic virtual piano mold is created, according to where the gesture's coordinate data falls, in which that first use area is in the pressed state and the remaining eighty-seven first use areas are in the normal, unpressed state.
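For the piano example, the preset activity state behind each dynamic virtual mold can be sketched as follows; the one-flag-per-key encoding is an assumption made purely for illustration.

```python
# Sketch of step T3 for the piano example: given the index of the
# first use area an operation gesture falls into, derive the activity
# state of the dynamic virtual mold in which exactly that key is
# pressed and the other eighty-seven keys stay unpressed.
NUM_KEYS = 88

def dynamic_mold_state(pressed_index):
    """Pressed/unpressed flags for all eighty-eight keys."""
    if not 0 <= pressed_index < NUM_KEYS:
        raise ValueError("gesture lies outside every first use area")
    return [i == pressed_index for i in range(NUM_KEYS)]
```

One such state would be precomputed per first use area, matching the patent's idea of creating the dynamic virtual molds in advance rather than at gesture time.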
T4: and creating a plurality of area calling files, and storing the dynamic virtual mold in the corresponding area calling files.
Specifically, each region call file includes at least the coordinate data of one of the first use areas and the dynamic virtual mold corresponding to that first use area.
Further, for sub-modes that require audio, the region call file also includes the audio data corresponding to the first use area.
Specifically, as one example, take a virtual piano mold as the virtual mold. The register range of the audio data used by the virtual piano mold runs from A0 (27.5 Hz) to C8 (4186 Hz); following the layout of a real piano, each register corresponds to one of the eighty-eight first use areas of the virtual piano mold.
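The quoted range is consistent with standard twelve-tone equal temperament, under which key n (counting A0 as key 0) sounds at 27.5 * 2^(n/12) Hz. The sketch below only verifies the stated endpoints; it says nothing about how the patent's audio data is actually produced.

```python
# Equal-temperament frequency of the n-th piano key, with A0 = key 0.
# Reproduces the register range quoted above: A0 at 27.5 Hz up to
# C8 (key 87) at about 4186 Hz.
def key_frequency(n):
    """Frequency in Hz of key n (0-based, A0 = 0)."""
    if not 0 <= n < 88:
        raise ValueError("a piano has 88 keys")
    return 27.5 * 2 ** (n / 12)
```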
Further, each region call file is stored under the storage path of the corresponding sub-mode.
S3: and calling a corresponding sub-mode according to the accessed real scene mold for use.
Further, the sub-steps of calling the corresponding sub-mode to use according to the accessed real-scene mold are as follows:
p1: and acquiring identification information of the live-action mold.
Specifically, the real-scene mold to be used is accessed to the access device, and the identification information of the real-scene mold is acquired through the identifier of the access device and is sent to the processor for processing.
P2: and calling the corresponding sub-mode for use according to the identification information.
Further, the sub-step of calling the corresponding sub-mode to use according to the identification information is as follows:
n1: and judging the type of the virtual mold to be called according to the type of the mold in the identification information, and judging the use mode according to the type of the virtual mold.
Specifically, the judging module receives the identification information sent by the identifier, analyzes the identification information, judges the type of the virtual mold to be called according to the type of the mold in the identification information, and judges the use mode according to the type of the virtual mold.
N2: and judging a sub-mode from the use modes according to the specific type in the identification information, and calling the area calling data of the sub-mode for use.
Specifically, the judging module judges a sub-mode from the judged using modes according to the specific type in the identification information, sends the judging result to the calling module, and the calling module calls the area calling data of the sub-mode from the storage module for use.
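Steps N1 and N2 amount to two table lookups; the sketch below uses invented class, mode, and type names purely as an illustration of the dispatch.

```python
# Sketch of steps N1-N2: resolving a real-scene mold's identification
# information (mold class + specific type) into the usage mode and
# sub-mode whose region call data should be loaded. All names are
# illustrative assumptions.
USAGE_MODE_BY_CLASS = {"key": "key mode", "striking": "striking mode"}
SUB_MODE_BY_TYPE = {"piano": "piano mode", "drum kit": "drum kit mode"}

def resolve_sub_mode(identification):
    """Map identification info to (usage mode, sub-mode)."""
    usage_mode = USAGE_MODE_BY_CLASS[identification["mold class"]]
    sub_mode = SUB_MODE_BY_TYPE[identification["specific type"]]
    return usage_mode, sub_mode
```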
Specifically, the somatosensory controller collects the data of the user's operation gestures and sends it to the processor for analysis. The processor judges the coordinate position of each operation gesture against the region data comparison library: if the gesture's coordinate data lies in the third use area, the virtual mold is in an unoperated state and no region call data needs to be called; if it lies in a second use area, the corresponding first use area is determined, and the calling module calls the corresponding region call data for use.
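The runtime rule just described (third use area: ignore; second use area: call the region call file of the matching first use area) can be sketched as follows; the box representation of the areas and all names are assumptions made for illustration.

```python
# Sketch of runtime gesture dispatch: a gesture coordinate lying in no
# second use area is treated as lying in the third use area and
# ignored; otherwise the region call file of the matching first use
# area is returned.
def in_box(point, box):
    lo, hi = box
    return all(l <= c <= h for c, l, h in zip(point, lo, hi))

def dispatch_gesture(second_areas, region_call_files, point):
    """Region call file for a gesture coordinate, or None if unoperated."""
    for i, box in enumerate(second_areas):
        if in_box(point, box):
            return region_call_files[i]  # matching first use area
    return None  # third use area: virtual mold stays unoperated
```

Ordering `second_areas` like the first use areas they correspond to makes the index of the matched box double as the index of the region call file.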
The beneficial effects achieved by the application are as follows:
(1) The three-dimensional modeling method and system combine three-dimensional virtual reality with real objects and dynamic operation gestures, improving the user's sense of real use.
(2) With the three-dimensional modeling method and system, an operator can use multiple real-scene molds and their corresponding sub-modes for different kinds of learning and practice, at low cost and over a wide range of applications.
While the preferred embodiments of the present application have been described, those skilled in the art may make additional variations and modifications to these embodiments once they learn of the basic inventive concept. The appended claims are therefore intended to be interpreted as covering the preferred embodiments together with all variations and modifications falling within the scope of the present application. It will be apparent to those skilled in the art that various changes and modifications can be made to the present application without departing from its spirit and scope; insofar as such modifications and variations fall within the scope of the claims of the present application and their equivalents, the present application is intended to include them as well.

Claims (10)

1. A three-dimensional modeling method, comprising:
creating a plurality of sub-modes matched with the virtual mold and storage paths corresponding to the sub-modes;
creating area calling data corresponding to the sub-mode according to the sub-mode, and storing the area calling data in a storage path of the sub-mode;
and calling the sub-mode for use according to the accessed real scene mold.
2. The three-dimensional modeling method of claim 1, wherein the sub-step of creating a plurality of sub-patterns adapted to the virtual mold is as follows:
classifying the virtual moulds and creating a plurality of using modes according to the classes of the virtual moulds;
respectively creating a plurality of sub-modes in each use mode;
and respectively creating a corresponding storage path for each sub-mode.
3. The three-dimensional modeling method according to claim 1, wherein the sub-step of creating region call data is as follows:
acquiring basic data of each sub-mode;
processing the basic data to obtain regional data, and storing the regional data in a regional data comparison library;
acquiring or simulating a plurality of operation gestures, acquiring coordinate data of the operation gestures, judging the coordinate data positions of the operation gestures through a regional data comparison library, analyzing and presetting the activity state of a virtual mold according to the coordinate data positions of the operation gestures, and creating a plurality of dynamic virtual molds corresponding to the activity state of the virtual mold;
and creating a plurality of area calling files, and storing the dynamic virtual mold in the corresponding area calling files.
4. The three-dimensional modeling method of claim 3, wherein the substeps of processing the base data to obtain region data and storing the region data in the region data comparison library are as follows:
dividing the coordinate data of the virtual mold into a plurality of first use areas;
dividing the virtual space coordinate data into a plurality of second use areas corresponding to the first use areas and third use areas;
and creating an area data comparison library, and storing the coordinate data of the first use area, the coordinate data of the second use area and the coordinate data of the third use area in the area data comparison library.
5. The three-dimensional modeling method according to claim 1, characterized in that the sub-steps of calling the corresponding sub-mode for use according to the accessed real-scene mold are as follows:
acquiring identification information of the live-action mold;
and calling the corresponding sub-mode for use according to the identification information.
6. The three-dimensional modeling method according to claim 5, wherein the sub-step of calling the corresponding sub-mode for use according to the identification information is as follows:
judging the type of the virtual mold to be called according to the type of the mold in the identification information, and judging the use mode according to the type of the virtual mold;
and judging a sub-mode from the use modes according to the specific type in the identification information, and calling the area calling data of the sub-mode for use.
7. A three-dimensional modeling system comprising at least one live-action mold, an access device, a VR device and a somatosensory controller, the access device being connected to the live-action mold, the VR device and the somatosensory controller, respectively, the access device being adapted to perform the three-dimensional modeling method of any of claims 1-6.
8. The three-dimensional modeling system of claim 7, wherein said access device comprises a recognizer, a processor, and a display, said processor being connected to said display and said recognizer, respectively;
the recognizer is used for acquiring identification information of a live-action mold accessed to the access device according to an instruction of the processor and sending the identification information to the processor for processing;
the processor is used for receiving the data sent by the recognizer, processing the data, calling the sub-mode according to the processed data, and respectively sending the region calling data called according to the sub-mode to the display and the VR equipment;
the display is used for receiving and displaying data sent by the processor and the VR device.
9. The three-dimensional modeling system of claim 8, wherein the processor comprises a storage module, a three-dimensional modeling module, a processing module, a discrimination module, and a calling module;
the storage module is used for storing basic data, sub-modes, region calling files and other corresponding data;
the three-dimensional modeling module is used for acquiring basic data and sending the basic data to the processing module, and is also used for creating a dynamic virtual mold;
the processing module is used for carrying out partition processing on the acquired basic data;
a judging module: the system comprises a sub-mode judging module, a region calling file judging module and a calling module, wherein the sub-mode judging module is used for judging a sub-mode to be used according to identification information, judging the region calling file to be called according to the coordinate data position of the operation gesture, and feeding back a judgment result to the calling module for calling;
the calling module is used for calling the sub-mode and the region calling file according to the judgment results.
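The judging/calling split in claims 8 and 9 can be sketched as two cooperating classes. The class names, method signatures, and sample data below are all assumptions for illustration; the patent specifies only the modules' responsibilities, not their interfaces:

```python
# Hedged sketch of the judging and calling modules from claim 9.
# Interfaces and sample data are invented; only the responsibilities come from the claim.

class JudgingModule:
    """Judges the sub-mode from identification info and the region file from a gesture."""
    def sub_mode_for(self, identification):
        return f"{identification['mold_type']}:{identification['specific_type']}"

    def region_file_for(self, gesture_point):
        # Trivial stand-in for the region data comparison library lookup.
        return "region_a" if gesture_point[0] < 10 else "region_b"

class CallingModule:
    """Fetches the stored region calling file for the judged sub-mode."""
    def __init__(self, region_files):
        self.region_files = region_files

    def call(self, sub_mode, region_file_id):
        return sub_mode, self.region_files[region_file_id]

judge = JudgingModule()
caller = CallingModule({"region_a": ["mold_state_1"], "region_b": ["mold_state_2"]})
sub_mode = judge.sub_mode_for({"mold_type": "piano", "specific_type": "grand"})
result = caller.call(sub_mode, judge.region_file_for((3, 0, 0)))
assert result == ("piano:grand", ["mold_state_1"])
```

In the claimed system the calling module's result would be sent on to the display and the VR device by the processor.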
10. The three-dimensional modeling system of claim 7, wherein the live-action mold is provided with identification information; the identification information includes at least the mold type and the specific type of the live-action mold.
CN201910938301.1A 2019-09-30 2019-09-30 Three-dimensional modeling method and system thereof Active CN110703916B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910938301.1A CN110703916B (en) 2019-09-30 2019-09-30 Three-dimensional modeling method and system thereof


Publications (2)

Publication Number Publication Date
CN110703916A true CN110703916A (en) 2020-01-17
CN110703916B CN110703916B (en) 2023-05-09

Family

ID=69197419

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910938301.1A Active CN110703916B (en) 2019-09-30 2019-09-30 Three-dimensional modeling method and system thereof

Country Status (1)

Country Link
CN (1) CN110703916B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103399873A (en) * 2013-07-10 2013-11-20 中国大唐集团科学技术研究院有限公司 Database dynamic loading management method and device of virtual reality system
WO2017031089A1 (en) * 2015-08-15 2017-02-23 Eyefluence, Inc. Systems and methods for biomechanically-based eye signals for interacting with real and virtual objects
CN108320333A (en) * 2017-12-29 2018-07-24 中国银联股份有限公司 The scene adaptive method of scene ecad virtual reality conversion equipment and virtual reality
CN109559370A (en) * 2017-09-26 2019-04-02 华为技术有限公司 A kind of three-dimensional modeling method and device



Similar Documents

Publication Publication Date Title
CN112598785B (en) Method, device and equipment for generating three-dimensional model of virtual image and storage medium
CN112560605B (en) Interaction method, device, terminal, server and storage medium
CN112669417A (en) Virtual image generation method and device, storage medium and electronic equipment
CN112818981B (en) Musical instrument playing key position prompting method and device, electronic equipment and storage medium
KR20180028717A (en) Method for providing intelligent user interface by 3D digital actor
Santini Augmented piano in augmented reality
CN114693848B (en) Method, device, electronic equipment and medium for generating two-dimensional animation
CN109564756B (en) Intelligence piano system
CN118097031B (en) Method, device, equipment and medium for constructing vegetation three-dimensional space topological structure
CN116528016A (en) Audio/video synthesis method, server and readable storage medium
CN110703916B (en) Three-dimensional modeling method and system thereof
Antoshchuk et al. Creating an Interactive Musical Experience for a Concert Hall.
Zaveri et al. Aero drums-augmented virtual drums
US20060153425A1 (en) Method of processing three-dimensional image in mobile device
US20220111290A1 (en) Haptic engine for spatial computing
CN111651054A (en) Sound effect control method and device, electronic equipment and storage medium
Bering et al. Virtual Drum Simulator Using Computer Vision
Armitage et al. mConduct: a multi-sensor interface for the capture and analysis of conducting gesture
CN118250627A (en) Vehicle projection method, device, vehicle and storage medium
JP2025517023A (en) Posture analysis device, posture analysis method and program
Shang et al. A music performance method based on visual gesture recognition
CN115811623B (en) Live broadcast method and system based on virtual image
JPH1091701A (en) Form document system
JP7474175B2 (en) Sound image drawing device and sound image drawing method
Wang et al. Virtual piano system based on monocular camera

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant