Project CADistency: AI as a bridge between technical drawings and CAD models

AI-powered merging of 2D and 3D information
In manufacturing, technical drawings are still used alongside CAD files, and dimensions and tolerances must be read from them manually. The CADistency research project therefore aims to achieve cost advantages through end-to-end automation by merging 2D and 3D information with the help of AI.
Background and objectives of the CADistency project
With the advancement of digitalization in industrial manufacturing, the consistent, loss-free provision of technical product information is becoming increasingly important. In practice, this information is often available in two parallel formats: as classic 2D drawings and as 3D CAD models. In order to avoid redundant data maintenance, manual transfer processes, and the resulting sources of error, the CADistency research project aims to map essential drawing content - in particular dimensions, tolerances, and surface specifications - in a structured and complete manner in the 3D model.
The project is being carried out jointly by the Institute for Data-Optimized Manufacturing (IDF) at Kempten University of Applied Sciences, which conducts research in numerous collaborations on AI-powered optimization of CAx processes, and PartSpace, a company specializing in AI-powered design data understanding and procurement solutions.
Model Based Definition
Within the framework of Model Based Definition (MBD), technical product information is no longer maintained separately in 2D drawings, but is stored directly in the 3D CAD model in a structured manner. This includes, in particular, dimensions, tolerances, surface specifications, and other production-relevant information, which is digitally embedded as Product Manufacturing Information (PMI). The aim is to provide a central, machine-readable data source that supports the continuous flow of information along the entire process chain.
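As a minimal illustration of the idea - not the actual AP 242 data model - PMI can be thought of as structured, machine-readable records attached to model geometry. The following hypothetical Python sketch uses an ad-hoc record type of our own invention:

```python
from dataclasses import dataclass

# Hypothetical, simplified PMI record. Real MBD data in STEP AP 242
# uses standardized entities, not this ad-hoc structure.
@dataclass
class PMIAnnotation:
    pmi_type: str          # e.g. "flatness", "diameter", "surface_roughness"
    value: float           # nominal value or tolerance zone width
    unit: str              # e.g. "mm"
    face_ids: list[int]    # IDs of the CAD faces the annotation applies to

# A machine-readable annotation attached directly to model geometry,
# instead of living only on a 2D drawing:
flatness = PMIAnnotation(pmi_type="flatness", value=0.05, unit="mm", face_ids=[12])
```

Because each record references concrete faces, downstream tools can query the model directly instead of interpreting a drawing view.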
In production preparation, this information can be used to automatically generate machining strategies, NC programs, or work plans. This eliminates both the manual evaluation of technical drawings and error-prone transfer processes. The consolidated data structure also creates a uniform level of information for higher-level tasks such as capacity planning, cost estimation, or process recommendations, enabling automated and AI-powered evaluations.
In addition, MBD offers significant advantages in the context of product lifecycle management (PLM): changes to product data can be managed centrally and systematically transferred to all downstream applications - from design and manufacturing to quality assurance. The reproducibility of technical decisions is enhanced, and collaboration between departments and with external partners is significantly improved thanks to the consistent database.
The technical basis for this approach is the standardized exchange format STEP AP 242, which allows geometry and PMI data to be combined in a single file. Due to its cross-manufacturer structure, STEP AP 242 is particularly suitable for loss-free data exchange between different CAD, CAM, and PLM systems. It is therefore the preferred target format for the implementation of MBD in the CADistency project.
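A STEP file (ISO 10303-21, "Part 21") groups a header and a data section in plain text. The heavily abbreviated fragment below only sketches that file layout; entity parameters are elided and the instance numbers are illustrative:

```
ISO-10303-21;
HEADER;
  FILE_SCHEMA(('AP242_MANAGED_MODEL_BASED_3D_ENGINEERING_MIM_LF'));
ENDSEC;
DATA;
  #10 = ADVANCED_FACE( ... );              /* a geometry entity */
  #20 = FLATNESS_TOLERANCE( ... #10 ... ); /* PMI referencing the face */
ENDSEC;
END-ISO-10303-21;
```

The key point is that geometry and PMI live as entities in the same file, with the PMI entity referencing the geometry it constrains.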
CNNs for technical drawings
The CADistency project uses various machine learning methods to automatically evaluate information from technical drawings and 3D CAD models. Convolutional neural networks (CNNs), which are particularly suitable for analyzing visual data, play a central role in this process.
Artificial neural networks are conceptually based on the functioning of biological nerve cells and are able to learn relevant features and relationships from training data. CNNs are an architecture developed specifically for image processing: they treat input data as a two-dimensional pixel grid and analyze it using adaptive filters that slide over the image. Each filter recognizes specific visual features - such as edges, lines, or basic geometric shapes - enabling structured evaluation of even complex drawing content.
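The filtering idea can be sketched in a few lines of plain NumPy. The kernel here is hand-set rather than learned, purely to show how a filter sliding over the pixel grid responds to a vertical line such as a contour in a drawing:

```python
import numpy as np

# Minimal sketch of what a single CNN filter does: a small kernel
# slides over the pixel grid and responds where its pattern matches.
def convolve2d(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A drawing-like image: blank background with one vertical line.
image = np.zeros((5, 5))
image[:, 2] = 1.0

# Hand-set vertical-edge kernel; in a CNN these weights are learned.
edge_kernel = np.array([[-1.0, 0.0, 1.0]] * 3)
response = convolve2d(image, edge_kernel)  # strong response at the line
```

In a real CNN, many such learned filters are stacked in layers, so later layers can combine edge responses into symbols, arrows, and text regions.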
PartSpace uses CNNs as part of its AI product PartSpace AI to automatically read technical drawings. Typical elements such as dimensions, tolerances, and symbols are reliably recognized and digitally processed in a structured manner. This preliminary work is an important component of the CADistency project: the digitalized content of the drawings can now be specifically linked to the geometric elements of the 3D CAD model. This results in a multimodal database in which information from 2D and 3D sources is merged. This enables the precise and context-related transfer of relevant drawing content into the 3D model - a key step on the path to standardized model description in the STEP AP 242 format.
GNNs for CAD models
Graph neural networks (GNNs) are designed to process structured data in the form of graphs. In a graph, nodes represent individual semantic units - in the CAD context, for example, component surfaces - and edges model their topological relationships, such as the adjacency of neighboring surfaces. Through repeated information exchange between the nodes, GNNs learn how geometric elements relate to each other and can draw conclusions about functional or manufacturing-relevant relationships.
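A single message-passing step can be sketched as follows. This is a generic scheme with random stand-in weights, not the project's actual architecture: each node (face) averages its neighbors' features and mixes the result with its own state.

```python
import numpy as np

# One generic GNN message-passing step on a face-adjacency graph.
# Weights are random stand-ins; in practice they are trained.
def message_passing_step(features, adjacency, w_self, w_neigh):
    deg = adjacency.sum(axis=1, keepdims=True)
    deg[deg == 0] = 1.0
    neigh_mean = (adjacency @ features) / deg        # aggregate neighbors
    return np.maximum(0.0, features @ w_self + neigh_mean @ w_neigh)  # ReLU

# Four faces of a toy part; an edge means two faces share a boundary.
adjacency = np.array([[0, 1, 1, 0],
                      [1, 0, 0, 1],
                      [1, 0, 0, 1],
                      [0, 1, 1, 0]], dtype=float)

rng = np.random.default_rng(0)
features = rng.normal(size=(4, 8))                   # per-face feature vectors
h = message_passing_step(features, adjacency,
                         rng.normal(size=(8, 8)), rng.normal(size=(8, 8)))
```

Stacking several such steps lets information travel across the graph, so each face's representation reflects its wider geometric context.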
In the CADistency project, GNNs are used to link the content previously extracted from the technical drawing using CNNs with the geometric structure of the 3D CAD model. For this purpose, the CAD model is interpreted as a graph whose nodes represent the surfaces of the component. The image elements extracted from the drawing - such as dimensional information or tolerance symbols - are attached to the graph as numerical features and serve as the basis for subsequent node classification: the network learns which surfaces are to be linked to which PMIs.
PartSpace already uses graph-based methods to automatically derive technological features - such as drill holes, countersinks, or bends - and convert them into work plan proposals. In parallel, the IDF has developed a graph-based AI for predicting CNC manufacturing times that achieves 58% better prediction accuracy than rule-based methods by combining structured component information with CAD data. The CADistency project pursues the same multimodal approach, in which 2D drawing information and 3D CAD structures are evaluated together. On this basis, the model can automatically identify relevant surfaces and enrich them with the correct PMI types - an essential step towards the consistent digitalization of product descriptions in STEP AP 242 format.
AI for 2D/3D matching
To integrate the content extracted from the 2D drawing into the 3D CAD model, the image features extracted by the CNNs were incorporated into the graph-based representation of the CAD model. The component surfaces were modeled as nodes and their topological neighborhoods were mapped via shared edges. A downstream graph neural network was trained on this structured input to determine, via node classification, which surfaces are to be associated with which PMI types (e.g., roundness, flatness).
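The pipeline described above can be sketched schematically. Everything here - dimensions, PMI labels, random weights - is an illustrative stand-in, not the trained model: CNN-derived drawing features are concatenated onto each face node's geometric features, and a per-node classifier scores candidate PMI types.

```python
import numpy as np

# Schematic sketch of the 2D/3D matching step, with random stand-in
# weights instead of a trained GNN and invented dimensions/labels.
rng = np.random.default_rng(1)
n_faces, geo_dim, img_dim = 6, 16, 8
pmi_types = ["none", "roundness", "flatness"]

geo_feats = rng.normal(size=(n_faces, geo_dim))   # from the CAD graph
img_feats = rng.normal(size=(n_faces, img_dim))   # from the 2D drawing (CNN)
node_feats = np.concatenate([geo_feats, img_feats], axis=1)

# Per-node classification over PMI types (softmax over a linear layer).
w = rng.normal(size=(geo_dim + img_dim, len(pmi_types)))
logits = node_feats @ w
probs = np.exp(logits - logits.max(axis=1, keepdims=True))
probs /= probs.sum(axis=1, keepdims=True)

predicted = [pmi_types[i] for i in np.argmax(probs, axis=1)]
```

The output assigns each face a PMI type, which is the node-classification target described above; in the project, the classifier sits on top of message-passing layers rather than a single linear map.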
The combination of both networks allows for a semantically precise link between the manufacturing information displayed in 2D and the corresponding geometric elements in the 3D model. A classification accuracy of over 97% was achieved on an independent validation dataset. This finding demonstrates the technical feasibility of the approach on simple, synthetically generated components and forms the basis for further generalization to more complex geometries and real design data in the further course of the project.
Outlook and conclusion
The main challenges for the further course of the project lie primarily in the increasing geometric and semantic complexity of real components. Particularly demanding is the handling of situation-dependent conventions, historical representations, and implicit drawing intentions, which require a high degree of domain-specific knowledge. Added to this is the heterogeneity of the data material in industrial practice: in addition to standardized CAD exports, there are also scanned or handwritten drawings - some of which vary greatly in quality. Such input data requires robust, error-tolerant approaches in both preprocessing and interpretation.
Despite these challenges, the results achieved so far demonstrate the technical feasibility and potential of the chosen approach. If scaling to realistic and varied components is successful, this could make a significant contribution to increasing efficiency in production preparation – through reduced manual effort, lower error rates, and shorter throughput times. In the long term, this opens up new opportunities for data-driven planning and automation processes along the entire digital process chain.
Discuss AI Solutions for Your CAD Challenges