CN118196445B - Beam position identification method based on geometric information - Google Patents
Beam position identification method based on geometric information
- Publication number
- CN118196445B (application CN202410324309.XA)
- Authority
- CN
- China
- Prior art keywords
- geometric
- indoor
- model
- contour model
- beam position
- Prior art date
- Legal status
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G06V10/761—Proximity, similarity or dissimilarity measures
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F30/00—Computer-aided design [CAD]
- G06F30/10—Geometric CAD
- G06F30/13—Architectural design, e.g. computer-aided architectural design [CAAD] related to design of buildings, bridges, landscapes, production plants or roads
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T17/10—Constructive solid geometry [CSG] using solid primitives, e.g. cylinders, cubes
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2111/00—Details relating to CAD techniques
- G06F2111/20—Configuration CAD, e.g. designing by assembling or positioning modules selected from libraries of predesigned modules
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Geometry (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Evolutionary Computation (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Software Systems (AREA)
- Computer Hardware Design (AREA)
- Structural Engineering (AREA)
- Health & Medical Sciences (AREA)
- Pure & Applied Mathematics (AREA)
- Mathematical Analysis (AREA)
- General Engineering & Computer Science (AREA)
- Computational Mathematics (AREA)
- Computer Graphics (AREA)
- Civil Engineering (AREA)
- Architecture (AREA)
- Mathematical Optimization (AREA)
- Artificial Intelligence (AREA)
- Computing Systems (AREA)
- Databases & Information Systems (AREA)
- General Health & Medical Sciences (AREA)
- Medical Informatics (AREA)
- Multimedia (AREA)
- Image Analysis (AREA)
Abstract
The invention relates to the technical field of beam position recognition, and in particular to a beam position recognition method based on geometric information. Indoor image data and laser ranging data are collected and fused to generate a first indoor contour model. Image processing techniques are then applied to extract the reflection characteristics of the target surface and generate a second indoor contour model. Next, a convolutional neural network processes the model, extracting key geometric information to generate a geometric model, which is matched against a preset beam position geometric feature library to determine a similarity value. The feature library contains the geometric features of various beams, carefully designed and optimized to ensure efficient and accurate matching. Finally, based on the calculated similarity value, the system determines whether the target position is a target beam position.
Description
Technical Field
The invention relates to the technical field of beam position recognition, in particular to a beam position recognition method based on geometric information.
Background
In the field of modern construction engineering and interior design, accurately identifying beam positions inside a building is an important and challenging task. Beams are among the main load-bearing structures of a building, and their position and quality directly affect the building's safety and serviceability. Traditionally, the identification of beam positions has relied primarily on building blueprints and manual visual inspection.
However, conventional manual inspection is time consuming and error prone, especially in complex or irregular building environments. It also relies heavily on the experience and judgment of technicians, which raises the barrier to entry for the work and leads to shortages of qualified personnel. Meanwhile, conventional methods struggle to effectively integrate heterogeneous information such as image data and distance measurements. Prior approaches, such as simple image processing methods, cannot accurately handle complex geometric information, such as the relative positional relationship between a beam and other structures.
Disclosure of Invention
In order to solve the problems, the invention provides a beam position identification method based on geometric information.
In order to achieve the above purpose, the invention adopts the following technical scheme:
a beam position identification method based on geometric information is characterized by comprising the following steps:
Collecting indoor image data and laser ranging data;
the indoor image data and the laser ranging data are fused to obtain a first indoor contour model, and the wall body is judged and calculated according to the reflection characteristics of the target surface to obtain a second indoor contour model;
extracting geometric information of the second indoor contour model based on the convolutional neural network to generate a geometric model;
Matching and calculating the geometric model with a preset beam position geometric feature library to obtain a similarity value;
And carrying out beam position identification according to the similarity value.
Further, the fusing the indoor image data and the laser ranging data to obtain the first indoor contour model includes:
Respectively preprocessing indoor image data and laser ranging data;
and fusing the indoor image data and the laser ranging data through a feature fusion algorithm to obtain a first indoor contour model.
Further, the preprocessing of the indoor image data and the laser ranging data respectively includes:
denoising, scaling and graying the indoor image data;
and denoising, calibrating and filtering the laser ranging data.
Further, performing wall determination according to the reflection characteristics of the target surface comprises the following steps:
Processing the indoor image with image processing techniques, extracting the reflection characteristics of the target surface, and performing wall determination by comparing the similarity between these reflection characteristics and a known reflection feature library.
Further, the reflection feature library includes reflection features of a plurality of material surfaces.
Further, the geometric information extraction of the second indoor contour model based on the convolutional neural network comprises the following steps:
Inputting a second indoor contour model into a convolutional neural network;
and extracting and classifying the characteristics of the second indoor contour model through the convolution layer, the pooling layer and the full connection layer to obtain a geometric model.
Further, the feature extraction and classification of the second indoor contour model through the convolution layer, the pooling layer and the full connection layer comprises:
extracting local features of the second indoor contour model through the convolution layer;
performing dimension reduction and space information compression on the features extracted by the convolution layer through the pooling layer;
And classifying and connecting the features extracted by the pooling layer through the full-connection layer to obtain a geometric model.
Further, the extracting the local feature of the second indoor contour model through the convolution layer includes:
The convolution layer performs multiple convolution operations on the second indoor contour model and extracts local features, the local features comprising edges and corner points.
Further, the matching calculation of the geometric model with the preset beam position geometric feature library to obtain the similarity value includes:
Performing matching calculation between the generated geometric model and the preset beam position geometric feature library, and obtaining a similarity value by comparing the similarity between the geometric features of the geometric model and the features in the beam position geometric feature library.
Further, the beam position recognition according to the similarity value includes:
If the similarity value is greater than a preset threshold, the target position is considered to be a target beam position;
If the similarity value is smaller than the preset threshold, the target position is considered not to be a target beam position.
The invention has the beneficial effects that: image data and laser ranging data of an indoor environment are collected and combined through data fusion to generate an accurate first indoor contour model. The scheme then uses the reflection characteristics of the target surface to perform wall determination, yielding a more accurate second indoor contour model. A key step is applying a convolutional neural network to extract deep geometric information from the second indoor contour model and generate a detailed geometric model. This step benefits from the efficiency of convolutional neural networks in image processing and pattern recognition: key geometric features can be extracted from complex data, laying a foundation for subsequent beam position recognition. Finally, the generated geometric model is matched against a preset beam position geometric feature library to obtain a similarity value, and beam position identification is performed according to that value. This process not only enhances the accuracy of identification but also improves the reliability of beam position judgment. By analyzing the similarity values, the scheme can effectively identify actual beam positions whether or not they are exposed, greatly reducing dependence on human judgment and avoiding human error.
Drawings
Fig. 1 is a flow chart of steps of a beam position recognition method based on geometric information in the invention.
FIG. 2 is a flowchart illustrating steps for feature extraction and classification of a second indoor contour model by a convolution layer, a pooling layer and a full connection layer in accordance with the present invention.
Detailed Description
Referring to fig. 1-2, the present invention relates to a beam position recognition method based on geometric information, which comprises the following steps:
s1, collecting indoor image data and laser ranging data;
It should be noted that indoor image data may be collected with a high-resolution digital camera, allowing the system to photograph the indoor environment comprehensively. The camera shoots from different positions and angles, ensuring that every corner of the room is covered. For example, in a typical living room, pictures would be taken from the four corners and the center of the room to obtain a panoramic view of the walls, floor, ceiling and interior items. In addition, close-up shots may be taken of special elements in the room, such as windows, doors or special decorations, to capture more detail. A laser rangefinder is used to acquire accurate dimensional data: the distances between walls, the height of the room, the sizes of doors and windows, and the positions and sizes of objects such as furniture. For example, in the same living room, the laser rangefinder would take lateral and longitudinal measurements from one end wall to the other while recording the vertical ceiling-to-floor height; for larger furniture such as sofas and bookcases, the occupied space and the positions relative to surrounding objects are measured. The collected image data and laser ranging data must then be synchronized and calibrated to ensure that both share a consistent spatial reference. This step is critical for the subsequent generation of an accurate indoor model: through calibration, the system can relate the visual information in the images to the dimensional data provided by the laser rangefinder and ensure an accurate understanding of the indoor space.
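By way of illustration, the synchronization step could pair each laser reading with the image captured closest in time, as in the following Python sketch; the (timestamp, payload) data layout is an assumption made for the example, not something the invention prescribes.

```python
# Hypothetical synchronization helper: pair each laser reading with the image
# whose timestamp is nearest, so both share one spatial/temporal reference.
# The (timestamp_seconds, payload) tuple layout is an illustrative assumption.
def pair_by_timestamp(images, readings):
    """images/readings: lists of (timestamp_seconds, payload) tuples."""
    pairs = []
    for t_laser, distance in readings:
        t_img, img = min(images, key=lambda item: abs(item[0] - t_laser))
        pairs.append((img, distance, abs(t_img - t_laser)))  # keep the residual for calibration checks
    return pairs
```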
S2, fusing the indoor image data and the laser ranging data to obtain a first indoor contour model, and judging and calculating the wall according to the reflection characteristics of the target surface to obtain a second indoor contour model; the step S2 comprises the following steps:
S21, respectively preprocessing indoor image data and laser ranging data; the step S21 includes:
s211, denoising, scaling and graying are carried out on indoor image data;
The indoor image data is subjected to denoising, scaling and graying. Denoising aims to eliminate unwanted noise in the image, such as noise caused by poor lighting conditions or camera shake; image quality can be significantly improved using standard image processing algorithms such as median or Gaussian filters. Scaling then ensures that the image data conforms to the standard size the system processes, reducing computational complexity. Finally, the image is converted to grayscale, which simplifies subsequent analysis: a grayscale image contains only luminance information, removing the complexity of color processing.
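As a minimal sketch of this preprocessing pipeline, assuming OpenCV is available; the kernel sizes and target resolution below are illustrative choices, not values specified by the invention:

```python
# Illustrative preprocessing sketch: denoise, scale, and gray an indoor image.
import cv2

def preprocess_indoor_image(path, target_size=(1024, 768)):   # target size is an assumption
    img = cv2.imread(path)                        # load BGR image from disk
    img = cv2.medianBlur(img, 5)                  # denoise: median filter removes impulse noise
    img = cv2.GaussianBlur(img, (5, 5), 0)        # denoise: Gaussian filter smooths sensor noise
    img = cv2.resize(img, target_size)            # scale to the system's standard size
    return cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)  # gray: keep luminance only
```

Running the median filter before the Gaussian filter first removes impulse noise, then smooths what remains; other orderings are equally plausible.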
S212, denoising, calibrating and filtering the laser ranging data;
The laser ranging data is subjected to denoising, calibration and filtering. Denoising removes errors that may be introduced by equipment accuracy limitations or external environmental factors (such as air turbulence or surface reflection characteristics). Calibration ensures that the laser ranging data is consistent with the dimensions of the actual physical space, a step that is essential for establishing an accurate spatial model. Finally, filtering smooths the data and eliminates outliers, improving its reliability.
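A corresponding sketch for the ranging data, assuming NumPy and SciPy; the window size, outlier threshold, and calibration scale are illustrative assumptions:

```python
# Illustrative laser-ranging preprocessing: filter, denoise, calibrate.
import numpy as np
from scipy.signal import medfilt

def preprocess_ranges(ranges, calib_scale=1.0, z_thresh=3.0):
    r = np.asarray(ranges, dtype=float)
    r = medfilt(r, kernel_size=5)                  # filtering: smooth the distance sequence
    z = np.abs(r - r.mean()) / (r.std() + 1e-9)    # denoising: score each reading for outlier-ness
    r = r[z < z_thresh]                            # drop gross outliers
    return r * calib_scale                         # calibration: map readings to physical units
```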
S22, fusing the indoor image data and the laser ranging data through a feature fusion algorithm to obtain a first indoor contour model;
In some embodiments, key features are first extracted from the preprocessed indoor image and laser ranging data. For image data, these may include visual features such as edges, corners and textures, which describe the visual appearance of the indoor environment; for laser ranging data, spatial features such as distance, angle and spatial layout are extracted, reflecting the physical structure of the room. Feature matching is then performed: similar features in the image data and the laser ranging data are identified and aligned. For example, a corner feature in the image data (such as a corner of a room) must be matched to the spatial coordinates of the corresponding location in the laser data. This step is typically accomplished by advanced algorithms, such as feature-descriptor-based matching, to ensure that the two types of data correspond correctly in space. A fusion strategy is then applied to synthesize the matched and aligned features into a unified data set. This may involve multiple processing stages, including data interpolation, spatial reconstruction and three-dimensional modeling: interpolation fills gaps in spatial information between the image data and the laser data, while spatial reconstruction builds a coherent three-dimensional spatial model from the fused data. After feature fusion, the system performs accuracy verification, ensuring that the fused data is not only visually consistent with the actual indoor environment but also highly accurate in its spatial dimensions; the system may use known reference standards or perform field verification. The fused data is finally optimized in detail, including smoothing, edge sharpening and emphasized rendering of specific areas, which improves the visual quality of the model and makes it more realistic and detailed.
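The interpolation step of such a fusion could look like the following sketch, which spreads sparse laser depth samples over the image grid; the use of scipy.interpolate.griddata and the pixel-coordinate layout are assumptions for illustration:

```python
# Illustrative fusion step: interpolate sparse laser distances over the image
# grid so each pixel of the contour model gets a depth estimate.
import numpy as np
from scipy.interpolate import griddata

def fuse_depth(image_shape, laser_uv, laser_depth):
    """laser_uv: (N, 2) pixel coordinates of laser hits; laser_depth: (N,) distances."""
    h, w = image_shape
    grid_u, grid_v = np.meshgrid(np.arange(w), np.arange(h))
    depth = griddata(laser_uv, laser_depth, (grid_u, grid_v), method="linear")
    nearest = griddata(laser_uv, laser_depth, (grid_u, grid_v), method="nearest")
    depth = np.where(np.isnan(depth), nearest, depth)   # fill border gaps with nearest sample
    return depth   # dense depth map aligned with the image
```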
S23, processing the indoor image by an image processing technology, extracting the reflection characteristics of the target surface, and judging the wall body by comparing the similarity of the reflection characteristics with the known reflection characteristics library;
wherein the light reflecting characteristic library comprises light reflecting characteristics of the surfaces of a plurality of materials;
It should be noted that the image in the first indoor contour model is first processed further in order to more accurately identify the optical properties of various material surfaces. This includes image processing techniques such as edge detection, illumination-effect analysis and texture recognition to extract the reflection features. For example, the reflective properties of different material surfaces (such as painted walls, glass, metal or wood) under light differ, manifesting as different brightness, contrast and texture patterns in the image. The system maintains a database containing the reflection features of several material surfaces. This feature library is built by collecting and analyzing the reflection behavior of different materials under various lighting conditions; the reflection features of each material are recorded in detail, including its optical response at different illumination angles and intensities. The extracted reflection features are compared with the data in the reflection feature library, and the material type of the wall is judged by calculating similarity. This process uses sophisticated algorithms, such as pattern recognition and machine learning techniques, to ensure high accuracy and reliability. For example, if the reflection features of a portion of a wall are highly similar to the painted-wall features in the library, the system determines that the wall is a painted wall.
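A minimal sketch of this comparison, scoring an extracted reflection descriptor against a hypothetical library by cosine similarity; the material names, descriptor values, and threshold are invented for illustration:

```python
# Illustrative wall determination: compare a (brightness, contrast,
# texture-energy) reflection descriptor against a small feature library.
import numpy as np

REFLECTION_LIBRARY = {                       # all values are illustrative assumptions
    "painted_wall": np.array([0.62, 0.18, 0.05]),
    "glass":        np.array([0.88, 0.45, 0.02]),
    "wood":         np.array([0.40, 0.22, 0.30]),
}

def classify_surface(feature_vec, threshold=0.95):
    best, best_sim = None, -1.0
    for material, ref in REFLECTION_LIBRARY.items():
        sim = float(np.dot(feature_vec, ref) /
                    (np.linalg.norm(feature_vec) * np.linalg.norm(ref)))  # cosine similarity
        if sim > best_sim:
            best, best_sim = material, sim
    return (best, best_sim) if best_sim >= threshold else (None, best_sim)
```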
S3, geometric information extraction is carried out on the second indoor contour model based on the convolutional neural network, and a geometric model is generated; the step S3 comprises the following steps:
s31, inputting a second indoor contour model into a convolutional neural network;
Specifically, the second indoor contour model is first formatted to ensure that it matches the input specifications of the convolutional neural network. This typically involves resizing the image, normalizing the color and brightness levels of the image, and converting the data to the format required by the neural network model.
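Assuming a PyTorch-based network, the formatting could be sketched as follows; the 224×224 input size and [0, 1] normalization are assumptions, not values from the patent:

```python
# Illustrative input formatting: normalize, add batch/channel dims, resize.
import torch
import torch.nn.functional as F

def format_for_cnn(gray_image):                        # gray_image: HxW uint8 NumPy array
    x = torch.from_numpy(gray_image).float() / 255.0   # normalize brightness to [0, 1]
    x = x.unsqueeze(0).unsqueeze(0)                    # -> 1x1xHxW (batch, channel)
    return F.interpolate(x, size=(224, 224), mode="bilinear", align_corners=False)
```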
S32, extracting and classifying the characteristics of the second indoor contour model through the convolution layer, the pooling layer and the full-connection layer to obtain a geometric model; step S32 includes:
s321, extracting local features of a second indoor contour model through a convolution layer;
Further, the extracting the local feature of the second indoor contour model through the convolution layer includes:
The convolution layer performs multiple convolution operations on the second indoor contour model and extracts local features, the local features comprising edges and corner points.
In some embodiments, the convolution layer processes the image data in the second indoor contour model using a plurality of convolution kernels (filters). Each convolution kernel is responsible for detecting a particular type of feature in the image. In the convolution operation, these kernels slide over the entire image, computing a feature response at each location: the weights in the kernel are multiplied with the pixel values of the corresponding region of the image and summed, and the result is a feature map that highlights the particular feature in the image. After a number of convolution operations, the network can extract local features from the indoor contour model, mainly edges and corner points. Edge features are critical to understanding the structure of the indoor space, such as walls, furniture profiles, and door and window edges; corner features help identify key points of the indoor space, such as the corners of a room or of furniture. Together these features form the basic visual and spatial framework of the indoor environment. Each convolution kernel generates a feature map reflecting the distribution and intensity of the corresponding features in the image. For example, a kernel that detects vertical edges may produce a high response in the areas of walls and door frames. By combining the feature maps of the different kernels, the system obtains a comprehensive feature representation of the indoor environment.
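To make the mechanism concrete, the sketch below applies a hand-crafted Sobel-style vertical-edge kernel with a single convolution. In the invention the kernels would be learned weights, so this fixed kernel is purely illustrative:

```python
# Illustrative convolution: a fixed vertical-edge kernel standing in for a
# learned filter; high responses appear along walls and door frames.
import torch
import torch.nn.functional as F

vertical_edge = torch.tensor([[-1., 0., 1.],
                              [-2., 0., 2.],
                              [-1., 0., 1.]]).view(1, 1, 3, 3)  # 1 filter, 1 channel, 3x3

def edge_feature_map(x):                          # x: 1x1xHxW tensor from format_for_cnn
    return F.conv2d(x, vertical_edge, padding=1)  # slide the kernel over the whole image
```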
S322, performing dimension reduction and space information compression on the features extracted by the convolution layer through the pooling layer;
Specifically, the pooling layer processes the feature map generated by the convolution layer. This typically involves sliding a pooling window (usually 2×2 or 3×3) over the feature map and extracting the key information within each window using operations such as maximum (max pooling) or averaging (average pooling). For example, a max-pooling operation selects the maximum value in each window, which helps capture the most salient features in the image, such as sharp edges or corner points. Through pooling, the spatial dimension of the feature map is significantly reduced. This not only lowers the parameter count and computational requirements of the network but also helps control overfitting and improves the generalization ability of the model. Spatial information compression also ensures that key features are preserved while unnecessary detail is removed, making the network focus on global features rather than local noise.
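A one-step illustration of this dimension reduction, continuing the PyTorch assumption:

```python
# Illustrative pooling: 2x2 max pooling halves each spatial dimension while
# keeping the strongest response in every window.
import torch.nn.functional as F

def downsample(fmap):                            # fmap: 1xCxHxW feature map
    return F.max_pool2d(fmap, kernel_size=2, stride=2)
```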
S323, classifying and connecting the features extracted by the pooling layer through the full-connection layer to obtain a geometric model;
Specifically, in the fully connected layer, the features previously extracted and compressed by the convolution and pooling layers are further integrated and processed. This layer "flattens" all feature maps into a one-dimensional vector that contains the global information of the input data; this transformation allows the network to combine local features with the overall structure in preparation for final classification and identification. The fully connected layer classifies and connects the flattened feature vector through a series of weights and biases. Each neuron in the fully connected layer is connected to all neurons of the previous layer; the input data is weighted and summed, and an output is then generated by an activation function (e.g., ReLU or Sigmoid). In this process, the network recognizes the importance of different feature combinations through the learned weights, thereby classifying the input data. Through the processing of the fully connected layer, the network finally outputs a geometric model describing the second indoor contour model. This geometric model not only contains the size and shape information of the indoor space but also fuses the visual and spatial characteristics of the indoor environment, providing a basis for subsequent applications (such as three-dimensional rendering, spatial analysis or augmented reality).
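Putting the three layer types together, a hedged end-to-end sketch of such a network might look like the following; the layer widths, 224×224 input size, and number of output classes are assumptions, not values disclosed by the invention:

```python
# Illustrative feature-extraction network: stacked convolution, pooling, and
# fully connected layers producing class scores for geometric elements.
import torch.nn as nn

class GeometryNet(nn.Module):
    def __init__(self, num_classes=8):               # class count is an assumption
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),   # convolution: local edges and corners
            nn.MaxPool2d(2),                             # pooling: dimension reduction
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),                                # "flatten" feature maps into one vector
            nn.Linear(32 * 56 * 56, 128), nn.ReLU(),     # fully connected: combine features
            nn.Linear(128, num_classes),                 # scores for each geometric class
        )

    def forward(self, x):                                # x: 1x1x224x224 from format_for_cnn
        return self.classifier(self.features(x))
```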
S4, matching and calculating the geometric model with a preset beam position geometric feature library to obtain a similarity value;
Further, matching calculation is performed between the generated geometric model and the preset beam position geometric feature library, and a similarity value is obtained by comparing the similarity between the geometric features of the geometric model and the features in the beam position geometric feature library;
In some embodiments, building the beam position geometric feature library is a multi-stage process aimed at collecting, analyzing and classifying various beam geometric features so that beams can be identified quickly and accurately in subsequent engineering applications. The process begins with extensive data collection, drawing on architectural design documents, historical building databases and field measurement data. These sources are diverse, covering building types from traditional residential to modern commercial, and beam materials such as reinforced concrete, wood and steel. To ensure the comprehensiveness and practicality of the feature library, data collection focuses on capturing the details of each beam: its size, shape, location, and relationship to other structural elements. The system then extracts key geometric features through detailed analysis of the collected data, including the length, width, height, cross-sectional shape and spatial location of the beam. In particular, the system identifies beam shape features, such as straight beams, curved beams or complex geometries, as well as the spatial positioning of each beam, i.e. its specific location and orientation in the building. This stage is critical because it provides the basis for subsequent feature classification and database construction. Feature classification is a key link in building the feature library: the system classifies features according to the beam's use, material type and design characteristics. For example, load-bearing beams and decorative beams differ in function, and their geometric features are not identical; likewise, prestressed beams differ in design and structure from ordinary concrete beams. This classification improves the organization of the feature library and facilitates subsequent beam position matching and recognition. The system then normalizes and formats the extracted features: all data are converted into unified units of measure to ensure consistency, and the feature data are converted into a standardized format for storage and retrieval. Normalization ensures that data of different sources and types can be compared and analyzed within a unified framework. Finally, the system integrates the processed data into a structured database, yielding a comprehensive and easily retrievable beam position geometric feature library in which each entry contains the detailed geometric feature information of one beam together with its class label.
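A minimal sketch of what one normalized, formatted library entry could look like; every field name, unit, and sample value below is an illustrative assumption:

```python
# Illustrative feature-library entry after normalization and formatting.
from dataclasses import dataclass

@dataclass
class BeamFeature:
    beam_id: str
    category: str            # e.g. "load-bearing", "decorative", "prestressed"
    length_m: float          # all dimensions normalized to metres
    width_m: float
    height_m: float
    section: str             # cross-sectional shape, e.g. "rectangular", "T"
    position: tuple          # (x, y, z) location in the building frame
    orientation_deg: float

library = [
    BeamFeature("B-001", "load-bearing", 5.2, 0.30, 0.55, "rectangular", (0.0, 2.4, 2.8), 0.0),
    BeamFeature("B-002", "decorative",   3.0, 0.20, 0.25, "T",           (4.1, 0.0, 2.8), 90.0),
]
```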
The generated geometric model is then prepared for matching calculation. This involves further processing to ensure that its format and data structure are compatible with the data in the beam position geometric feature library. The geometric model contains key geometric information about the position, size and shape of the beams in the building space. Next, the key features used for matching are extracted from the geometric model, such as the length, width, height and cross-sectional features of each beam. After extraction, these features are normalized to ensure data consistency and accuracy in the matching calculation. Matching is then performed to compare the features of the geometric model with the features in the beam position geometric feature library. This process typically employs advanced algorithms, such as pattern recognition, machine learning or shape matching, to evaluate the similarity between the two. The matching calculation considers combinations of geometric features to ensure the accuracy and reliability of the results. During the comparison, the system computes one or more similarity values representing the degree of similarity between a particular element in the geometric model and a feature in the feature library; this computation may involve comparison in a multidimensional feature space covering shape, size, position and orientation.
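Reusing the BeamFeature fields from the previous sketch, the matching calculation could be expressed as a weighted comparison of normalized dimensions plus a cross-section check; the feature weights are illustrative assumptions:

```python
# Illustrative matching calculation: per-dimension closeness in [0, 1] plus a
# cross-section match, combined with assumed weights into one similarity value.
import numpy as np

def beam_similarity(candidate, entry, weights=(0.3, 0.2, 0.2, 0.3)):
    """candidate/entry: objects with length_m, width_m, height_m, section."""
    dims_c = np.array([candidate.length_m, candidate.width_m, candidate.height_m])
    dims_e = np.array([entry.length_m, entry.width_m, entry.height_m])
    dim_scores = 1.0 - np.abs(dims_c - dims_e) / np.maximum(dims_c, dims_e)
    section_score = 1.0 if candidate.section == entry.section else 0.0
    return float(np.dot(np.array(weights), np.append(dim_scores, section_score)))
```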
S5, beam position identification is carried out according to the similarity value;
If the similarity value is greater than a preset threshold, the target position is considered to be a target beam position;
If the similarity value is smaller than the preset threshold, the target position is considered not to be a target beam position.
Specifically, a similarity threshold for beam recognition is first set. This threshold is based on prior testing, analysis and empirical optimization, and aims to balance recognition accuracy against the false positive rate. Setting the threshold takes into account the complexity and variety of beam characteristics, ensuring that it effectively distinguishes target beam positions from non-target positions. The system evaluates the similarity values obtained from the matching calculation and compares them with the preset threshold. This step involves analyzing each potential beam position to determine whether its similarity value meets the criteria required to identify a target beam position. If the similarity value of a position is greater than or equal to the preset threshold, the system judges that position to be a target beam position: the geometry of the location matches a feature in the beam position geometric feature library to a high degree, indicating that the location is likely the beam position sought. Conversely, if the similarity value is below the threshold, the system considers the position not to be a target beam position; in this case, the geometric features of the location are insufficient to form a valid match with the beam position features in the feature library. The system finally outputs the identification result, including all positions identified as target beam positions and their corresponding similarity values. These results are of significant value to architects, structural engineers, construction teams and other professionals in building design, structural evaluation and construction planning, realizing automated identification and analysis of building beam positions.
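Tying the pieces together, a short sketch of the decision rule, reusing beam_similarity from the sketch above; the 0.85 threshold is an illustrative assumption, since the patent leaves the preset threshold to prior testing and optimization:

```python
# Illustrative decision rule: report positions whose best library similarity
# clears the preset threshold as target beam positions.
def identify_beam_positions(candidates, library, threshold=0.85):
    results = []
    for pos, feats in candidates.items():          # candidates: position -> BeamFeature-like object
        best = max(beam_similarity(feats, entry) for entry in library)
        if best >= threshold:
            results.append((pos, best))            # target beam position with its similarity value
    return results
```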
The above embodiments merely illustrate preferred embodiments of the present invention and are not intended to limit its scope; various modifications and improvements made by those skilled in the art to the technical solution of the present invention, without departing from the spirit of its design, shall fall within the scope of protection defined by the claims.
Claims (7)
1. A beam position identification method based on geometric information, characterized by comprising the following steps:
Collecting indoor image data and laser ranging data;
Fusing the indoor image data and the laser ranging data to obtain a first indoor contour model, and performing wall determination in the first indoor contour model according to the reflection characteristics of the target surface to obtain a second indoor contour model;
Extracting geometric information from the second indoor contour model based on a convolutional neural network to generate a geometric model;
Matching the geometric model against a preset beam position geometric feature library to obtain a similarity value;
Carrying out beam position identification according to the similarity value;
the geometric information extraction of the second indoor contour model based on the convolutional neural network comprises the following steps:
Inputting a second indoor contour model into a convolutional neural network;
Performing feature extraction and classification on the second indoor contour model through the convolution layer, the pooling layer and the full-connection layer to obtain a geometric model;
the feature extraction and classification of the second indoor contour model through the convolution layer, the pooling layer and the full connection layer comprises the following steps:
extracting local features of the second indoor contour model through the convolution layer;
performing dimension reduction and space information compression on the features extracted by the convolution layer through the pooling layer;
classifying and connecting the features extracted by the pooling layer through the full-connection layer to obtain a geometric model;
The extracting the local feature of the second indoor contour model through the convolution layer comprises the following steps:
The convolution layer performs multiple convolution operations on the second indoor contour model and extracts local features, the local features comprising edges and corner points.
2. The method for identifying beam positions based on geometric information according to claim 1, wherein the step of fusing the indoor image data and the laser ranging data to obtain the first indoor contour model comprises the steps of:
Respectively preprocessing indoor image data and laser ranging data;
and fusing the indoor image data and the laser ranging data through a feature fusion algorithm to obtain a first indoor contour model.
3. The beam position recognition method based on geometric information according to claim 2, wherein the preprocessing of the indoor image data and the laser ranging data respectively comprises:
denoising, scaling and graying the indoor image data;
and denoising, calibrating and filtering the laser ranging data.
4. The method for identifying beam positions based on geometric information according to claim 1, wherein performing wall determination according to the reflection characteristics of the target surface comprises:
Processing the indoor image with image processing techniques, extracting the reflection characteristics of the target surface, and performing wall determination by comparing the similarity between these reflection characteristics and a known reflection feature library.
5. The method for identifying beam positions based on geometric information according to claim 1, wherein the reflection feature library comprises reflection features of a plurality of material surfaces.
6. The method for identifying beam positions based on geometric information according to claim 1, wherein the matching calculation of the geometric model with a preset beam position geometric feature library to obtain the similarity value comprises the following steps:
Performing matching calculation between the generated geometric model and the preset beam position geometric feature library, and obtaining a similarity value by comparing the similarity between the geometric features of the geometric model and the features in the beam position geometric feature library.
7. The method for identifying beam positions based on geometric information according to claim 1, wherein the step of identifying beam positions according to the similarity value comprises the steps of:
If the similarity value is greater than a preset threshold, the target position is considered to be a target beam position;
If the similarity value is smaller than the preset threshold, the target position is considered not to be a target beam position.
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202410324309.XA | 2024-03-21 | 2024-03-21 | Beam position identification method based on geometric information |
Applications Claiming Priority (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202410324309.XA | 2024-03-21 | 2024-03-21 | Beam position identification method based on geometric information |
Publications (2)

| Publication Number | Publication Date |
|---|---|
| CN118196445A | 2024-06-14 |
| CN118196445B | 2024-09-17 |
Family
ID=91414823
Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202410324309.XA | Beam position identification method based on geometric information | 2024-03-21 | 2024-03-21 |

Country Status (1)

| Country | Link |
|---|---|
| CN | CN118196445B |
Families Citing this family (1)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN119992212B | 2025-02-11 | 2025-08-15 | 华联世纪工程咨询股份有限公司 | Arc beam identification method based on geometrical characteristic extension of lines |
Citations (2)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN116702298A | 2023-08-01 | 2023-09-05 | 全屋优品科技(深圳)有限公司 | Model construction method and system for interior decoration design |
| CN117516435A | 2023-11-02 | 2024-02-06 | 快意电梯股份有限公司 | Elevator shaft concrete ring beam recognition and measurement system and method |
Family Cites Families (6)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US9036861B2 | 2010-04-22 | 2015-05-19 | The University Of North Carolina At Charlotte | Method and system for remotely inspecting bridges and other structures |
| NL2018911B1 | 2017-05-12 | 2018-11-15 | Fugro Tech Bv | System and method for mapping a railway track |
| CN113592927B | 2021-07-26 | 2023-12-15 | 国网安徽省电力有限公司电力科学研究院 | A cross-domain image geometric registration method guided by structural information |
| US20230138762A1 | 2021-10-28 | 2023-05-04 | MFTB Holdco, Inc. | Automated Building Floor Plan Generation Using Visual Data Of Multiple Building Images |
| CN115019042A | 2022-06-09 | 2022-09-06 | 上海同岩土木工程科技股份有限公司 | High-precision identification method for tunnel structure cracks |
| CN116152219B | 2023-03-08 | 2025-06-13 | 上海市建筑科学研究院有限公司 | Concrete component damage detection and assessment method and system |
Also Published As

| Publication number | Publication date |
|---|---|
| CN118196445A | 2024-06-14 |
Similar Documents

| Publication | Title |
|---|---|
| CN110246112B | Laser scanning SLAM indoor three-dimensional point cloud quality evaluation method based on deep learning |
| Tang et al. | Automatic reconstruction of as-built building information models from laser-scanned point clouds: A review of related techniques |
| Kim et al. | Fully automated registration of 3D data to a 3D CAD model for project progress monitoring |
| US7593019B2 | Method and apparatus for collating object |
| Paparoditis et al. | Building detection and reconstruction from mid- and high-resolution aerial imagery |
| CN119355747B | Building measurement method and system based on intelligent robot |
| Valero et al. | Detection, modeling, and classification of moldings for automated reverse engineering of buildings from 3D data |
| CN118196445B | Beam position identification method based on geometric information |
| CN111612907A | Multidirectional repairing system and method for damaged ancient building column |
| Sezen et al. | Deep learning-based door and window detection from building façade |
| CN118982667A | Indoor semantic segmentation method and device using multimodal fusion of color and depth images |
| CN119544928A | Converter station valve hall unmanned inspection method, device, equipment and storage medium |
| CN120031788A | A slope crack detection method and device based on super-resolution processing |
| CN119476963A | Power operation safety risk identification method and system based on fine-grained detection |
| Wang et al. | Efficient rock-mass point cloud registration using n-point complete graphs |
| CN109583513A | Method, system and device for detecting similar frame and readable storage medium |
| CN111854651A | A real-time measurement method of indoor building area based on SLAM |
| Kuniyoshi et al. | Building Damage Visualization Through Three-Dimensional Reconstruction |
| Sequeira et al. | High-level surface descriptions from composite range images |
| Yu et al. | Evaluation of model recognition for grammar-based automatic 3D building model reconstruction |
| Sheik et al. | Automated registration of building scan with BIM through detection of congruent corner points |
| CN117726239B | Engineering quality acceptance actual measurement method and system |
| CN119067616B | A method for extracting suspicious points in construction project engineering management audit based on 3D point cloud |
| CN119245609B | Linear management method, device and medium for assembled bridge |
| Wang | Automatic As-built BIM Model Generation for MEP System Based on Laser Scanning and Depth Camera Data |
Legal Events

| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | GR01 | Patent grant | |