Abstract

Additive manufacturing (AM) has revolutionized the way we design, prototype, and produce complex parts with unprecedented geometries. However, the lack of understanding of the functional properties of 3D-printed parts has hindered their adoption in critical applications where reliability and durability are paramount. This paper proposes a novel approach to the functional qualification of 3D-printed parts via physical and digital twins. Physical twins are parts that are printed under the same process conditions as the functional parts and undergo a wide range of (destructive) tests to determine their mechanical, thermal, and chemical properties. Digital twins are virtual replicas of the physical twins that are generated using finite element analysis (FEA) simulations based on the 3D shape of the part of interest. We propose a novel approach to transfer learning, specifically designed for the fusion of diverse, unstructured 3D shape data and process inputs from multiple sources. The proposed approach has demonstrated remarkable results in predicting the functional properties of 3D-printed lattice structures. From an engineering standpoint, this paper introduces a comprehensive and innovative methodology for the functional qualification of 3D-printed parts. By combining the strengths of physical and digital twins with transfer learning, our approach opens up possibilities for the widespread adoption of 3D printing in safety-critical applications. Methodologically, this work presents a significant advancement in transfer learning techniques, specifically addressing the challenges of multi-source (e.g., digital and physical twins) and multi-input (e.g., 3D shapes and process variables) transfer learning.

1 Introduction

Additive manufacturing (AM), commonly known as 3D printing, has become a popular manufacturing technique due to its ability to produce complex parts with unique geometries. The versatility of 3D printing has made it a popular choice in various industries, such as aerospace and healthcare. However, the lack of understanding of the functional properties of 3D-printed parts has hindered their adoption in critical applications where reliability and durability are essential. Destructive testing is a common method for assessing the functional properties of a part, but in many cases, it may not be possible or practical to perform destructive testing on the part to be used. Destructive testing may damage the part, rendering it unusable. In addition, destructive testing is often time-consuming and expensive, which may not be feasible for large-scale production. Furthermore, the material properties of 3D-printed parts can vary significantly depending on the printing process, material type, and post-processing techniques used.

Existing research on the functional qualification of 3D-printed parts has mainly focused on the characterization of material properties or the development of predictive models. However, these approaches are often limited in their ability to capture the complex interactions between the as-printed 3D shape, material properties, and the printing process variability.

We hypothesize that the only way to verify the functional properties of a part without destructive testing is through the use of physical and digital twins. Physical twins are printed under the same process conditions as the functional parts and undergo a wide range of tests to determine their mechanical, thermal, and chemical properties. Digital twins are virtual replicas of physical counterparts, created through finite element analysis (FEA) simulations. More formally, according to NIST [1], “A digital twin is the electronic representation—the digital representation—of a real-world entity, concept, or notion, either physical or perceived.” In our case, these replicas are generated from 3D scans of the object and enable precise assessment of the impact of shape inaccuracies on functional characteristics. Together, physical and digital twins enable accurate predictions of the functional properties of the parts, without the need for destructive testing on the part to be used. Figure 1 illustrates the concept behind the AUDIT (i.e., Functional Qualification in Additive Manufacturing via Physical and Digital Twins) framework, which combines physical and digital twins for functional qualification.

Fig. 1: Illustration of AUDIT functional qualification approach

To accurately predict functional characteristics, this paper proposes a novel transfer learning technique for data fusion between heterogeneous process data and unstructured 3D shape data. By utilizing the knowledge from digital and physical twins, as well as the process conditions, this approach enables the accurate prediction of functional characteristics.

Overall, this paper presents a comprehensive approach to the functional qualification of 3D-printed parts. This approach has the potential to significantly improve the adoption of 3D-printed parts in critical applications, where their functional properties must be accurately verified.

The remainder of the article is organized as follows. Section 2 provides a brief literature review. Then the proposed AUDIT framework for functional qualification in additive manufacturing via physical and digital twins is introduced in Sec. 3. Section 4 validates the proposed methodology by using a real-world case study of 3D-printed polylactic acid (PLA) lattice structures. Furthermore, the performance of the proposed method is compared with existing benchmark methods in terms of estimation accuracy. Finally, we conclude the article with a short discussion and an outline of future research topics in Sec. 5.

2 Literature Review

In recent years, 3D printing has emerged as a transformative manufacturing technology with a wide range of applications across industries. As the demand for functional qualification of 3D-printed parts in safety-critical applications continues to grow, researchers and practitioners have explored various approaches to address this critical aspect. This literature review examines the existing methods and their limitations, highlighting the need for further advancements in the field.

2.1 Destructive Testing for Functional Analysis in 3D Printing.

Destructive testing has long been a common practice for assessing the mechanical properties and performance of manufactured products. Mishra and Senthil [2] investigated the relationship between applied force and breaking strain of 3D-printed PLA parts using destructive testing with a universal testing machine (UTM). Zeng et al. [3] used destructive compression testing to study the behavior of honeycomb structures, revealing the correlation between material bonding and fracture location. Li et al. [4] quantitatively measured the post-yield crushing stress of honeycombs through destructive testing, enabling the evaluation of hierarchical honeycombs. Han et al. [5] conducted destructive testing on 3D-printed concrete walls to derive a calculation formula for predicting failure loads.

However, the applicability of the aforementioned methods to 3D-printed parts is limited due to the irreversible nature of the destructive testing process. Once a product is subjected to destructive testing, it becomes unusable, rendering it impractical for functional qualification. This limitation raises the need for alternative methods that can provide functional assessment without compromising the integrity of the part.

2.2 Finite Element Analysis for Functional Analysis in 3D Printing.

FEA has been widely utilized to simulate and predict the behavior of structures, including 3D-printed components. FEA offers valuable insights into the mechanical response of parts under different loading conditions. Cao et al. [6] compared experimental and simulation results for different lattice structures, investigating the effect of lattice shape parameters on stress using numerical simulations. Lesueur et al. [7] explored the effect of internal structure geometry on the yield of a structure using an FEA model, validated with experimental results.

However, FEA has its limitations, particularly when it comes to capturing the variability introduced by different process conditions in additive manufacturing [8]. Factors such as temperature, layer thickness, printing speed, and material properties can significantly influence the functional properties of 3D-printed parts. Several studies have investigated the accuracy of computer-aided design (CAD)-based compression simulations. Belhabib and Guessasma [9] found that filament-based computations closely matched the experimental deformation trends in the compression of hollow structures, but they overestimated the performance of hollow structures by an average of 43%. Abbot et al. [10] also observed significant discrepancies between simulation and physical compression test results. FEA models, with their inherent assumptions and simplifications, may not fully account for the variations of process conditions during 3D printing, limiting their accuracy in functional qualification.

2.3 Transfer Learning for Multi-Input, Multi-Source 3D Transfer Learning.

Transfer learning has emerged as a powerful technique in machine learning, enabling knowledge transfer from one domain to another. While transfer learning has shown promising results in various applications, its extension to the multi-input, multi-source setting in additive manufacturing is still limited. The lack of large-scale and diverse datasets, as well as the challenges associated with obtaining samples representative of the entire design space, pose obstacles to effectively applying transfer learning to functional qualification in 3D printing. Unlike structured data types, 3D point cloud data introduce complexities stemming from their inherent spatial and geometric attributes. The irregular nature of point clouds, their varying densities, and the incorporation of both structural and textural information necessitate specialized methodologies that can effectively capture these features. The focus on multi-source domain adaptation often revolves around structured tabular or image data [11–13]. These challenges demand a tailored approach that considers the unique characteristics of 3D data. Consequently, current transfer learning methods are not readily available to tackle the specific challenges posed by the functional qualification of 3D-printed parts.

2.4 Digital Twins and Transfer Learning in Additive Manufacturing.

In the realm of additive manufacturing, both digital twins and transfer learning have garnered attention. The literature spans diverse dimensions, ranging from initial conceptual visions endorsing the integration of digital twins in metal additive manufacturing for improved process models [14], to the exploration of optimal process conditions [15], and their subsequent adaptation to novel shapes using transfer learning [16]. However, it is important to note that these methods do not predict shape variation or the functional attributes of individual products. Instead, they focus on batch shape optimization by suggesting optimal settings for specific shapes. Generative design has been explored to generate intricate geometries via numerical simulations [17], but it fails to account for quality and structural concerns arising from process variation during printing. Additionally, a range of work has tackled defect classification [18]. However, a classification of defects may not offer the nuanced understanding needed for certain situations, such as evaluating the significance of a keyhole pore and whether it necessitates product rejection. Several comprehensive review articles [19,20] underscore the need for functional qualification of 3D-printed parts that integrates both physical and digital twins, capturing the information from complex and diverse data sources.

The limitations discussed above highlight the challenges in functional qualification for 3D-printed parts. Destructive testing is not universally applicable due to the irreversible nature of the process. FEA may not capture process variability, and transfer learning methods lack applicability in the multi-input, multi-source setting.

To overcome these limitations and address the research gaps identified in the literature review, the AUDIT framework makes several contributions. In particular, AUDIT includes:

  • Integration of Physical and Digital Twins: Introduces a methodology for functional qualification that combines physical and digital twin concepts, bridging the gap between real-world physical processes and their virtual representations through a cohesive twin system.

  • Consideration of Process Variability: Proposes a novel methodology to model and account for inherent variations in physical processes, enhancing the accuracy of predictions for functional characteristics and considering real-world manufacturing conditions and associated uncertainties.

  • Incorporation of 3D Shape Data: Provides a comprehensive representation of physical objects by integrating 3D shape data into the modeling process, enhancing the understanding of the complex relationships between 3D shape data, process settings, and functional properties in additive manufacturing.

  • Multi-Source, Multi-Input Transfer Learning: Proposes a transfer learning framework that can effectively leverage information from diverse sources and inputs, including different data types, for unsupervised transfer learning.

In conclusion, the AUDIT framework provides a comprehensive solution that addresses the limitations of functional qualification in the current literature. Its contributions pave the way for further research and development, offering a pathway to overcome these challenges and establish robust approaches for functional qualification in the field of 3D printing.

3 AUDIT Methodology

This section presents the AUDIT framework as an approach to functional qualification enabled by multi-source, multi-input transfer learning via contrastive learning with augmentations. We consider a specific data scenario, where we assume 3D measurements of the 3D-printed part are available. However, in the context of products and additive manufacturing processes, where obtaining 3D measurements can be challenging, it is possible to replace the 3D point clouds by utilizing 2D imaging data from each print layer, which can be represented as a 3D tensor or stack of 2D images. We would like to emphasize recent techniques enabling the reconstruction of 3D point cloud data from 2D image stacks through complementary data fusion with process features [21]. This promising approach addresses the discretization issue of 2D image stacks and holds potential for application in our functional qualification work. Additionally, in the literature, treating 3D point cloud data as a stack of 2D images is widely adopted and effective [22]. Leveraging the well-established image processing capabilities of convolutional neural networks (CNNs) optimized for 2D data proves advantageous in various applications involving 3D point cloud data [23]. While this representation introduces further discretization, it offers valuable benefits, such as compatibility with established data acquisition methodologies in additive manufacturing and the effectiveness of CNN methodologies, leading to promising results in practical scenarios [24]. Importantly, in certain applications, obtaining 3D scans may not be feasible, especially for extremely complex shapes. However, the shape features remain crucial for determining functional properties. In such cases, utilizing 2D image stacks might be the best possible approach to extract valuable insights and predict functional behavior effectively. Therefore, without sacrificing generality, we assume that the 3D point cloud measurements can be substituted with layer-wise 2D imaging data.

From the object of interest, which is the functional part intended for field use, a set of 3D point cloud measurements $X_S^{OoI}=\{X_{S,i}^{OoI}\}_{i=1}^{N^{OoI}}$ is available, where the subscript $S$ denotes a 3D shape, $i$ is the sample index, $N^{OoI}$ is the total number of samples, and the sample $X_i^{OoI}$ consists of a set of $n_i^{OoI}$ unstructured, varying-sized 3D measurement points (i.e., $X_i^{OoI}\in\mathbb{R}^{n_i^{OoI}\times 3}$). The object of interest represents the target domain, for which we aim to enhance prediction accuracy. Additionally, a set of process variables $X_P=\{X_{P,i}\}_{i=1}^{N^{OoI}}$ is available, which are the same for the object of interest and the physical twin, since they are printed under the same process conditions. Note that we do not assume the availability of the functional property output variables $Y^{OoI}$ (i.e., the target dataset is unlabeled) in the AUDIT framework, which enhances the practicality of this approach by eliminating the need for an extensive dataset obtained through destructive testing. The destructive testing labels of the object of interest are solely utilized to verify the model's performance.

For the digital twin, a set of 3D measurement point clouds $X_S^{OoI}=\{X_{S,i}^{OoI}\}_{i=1}^{N^{OoI}}$ of the 3D-printed part is available, which represents the shape of the part. In addition, a set of material properties $X_M^{DT}=\{X_{M,i}^{DT}\}_{i=1}^{N^{DT}}$ of the part is available too, where the subscript $M$ denotes the material. Furthermore, functional property output variables $Y^{DT}=\{Y_i^{DT}\}_{i=1}^{N^{DT}}$ are obtained via computer simulation (e.g., finite element analysis), where $Y_i^{DT}\in\mathbb{R}^{d_y^{DT}}$, and $d_y^{DT}$ denotes the dimension of the (multivariate) functional property, i.e., the dimension of the output variables. For the digital twin, the process variables $X_P$ are not available because it did not undergo a physical printing process.

The physical twin, which is manufactured under identical process conditions as the object of interest, exhibits similar design features but utilizes less material to conserve resources. However, it still allows for destructive testing to assess the impact of the process conditions. For the physical twin, a set of 3D measurement point clouds $X_S^{PT}=\{X_{S,i}^{PT}\}_{i=1}^{N^{PT}}$ and a set of process variables $X_P=\{X_{P,i}\}_{i=1}^{N^{OoI}}$ for the 3D-printed part are available. Note that the object of interest and the physical twin are printed at the same time on one print bed, so they are printed under the same process conditions. Additionally, functional property output variables $Y^{PT}=\{Y_i^{PT}\}_{i=1}^{N^{PT}}$ are obtained from physical testing procedures, where $Y_i^{PT}\in\mathbb{R}^{d_y^{PT}}$. For the physical twin, we did not measure specific material properties for each sample, and hence no material properties $X_M$ are available.

Based on this dataset, we study the problem of unsupervised domain adaptation for 3D point cloud models by adapting a 3D model $f_\theta$, parametrized by $\theta$, which is learned from multiple labeled, multi-input source domains (i.e., $\{X_S^{OoI}, X_M^{DT}, Y^{DT}\}$ and $\{X_S^{PT}, X_P, Y^{PT}\}$), to an unlabeled target domain $X_S^{OoI}$ (i.e., the object of interest). The main objective of the AUDIT framework is to improve the performance of the model $f_\theta$ on the unseen test set of the target domain, which requires careful consideration of the architecture and loss function to enable the learning of transferable features.

3.1 Architecture Design and Big Picture.

To accomplish this objective, we employ multi-input encoders for both the target (object of interest) and source domains (physical and digital twins), allowing us to learn domain-specific features. These features are then used in the discriminative head of the model, which benefits from a shared encoder structure that learns transferable features. Figure 2 presents a high-level overview of the proposed framework.

Fig. 2: Overview of the proposed AUDIT framework for functional qualification

Our architectural design hypothesis is based on the notion that 3D networks progressively process domain-specific nontransferable features while acquiring domain-invariant features. To capture these distinctions, we utilize domain-specific encoders to learn low-level features that are unique to each data domain. Subsequently, we concatenate the features from the low-dimensional feature space and perform contrastive alignment to achieve instance-level feature alignment. Finally, we incorporate a discriminative head that provides a supervision signal for both the physical and digital twin domains. A discriminative head refers to the final layers of a deep learning model that are responsible for making regression predictions or classifying inputs based on the learned features extracted by the preceding layers. This head also predicts pseudo-labels for the target domain, which represents the object of interest. These pseudo-labels are continuously updated during the joint optimization process.

Transfer learning for 3D objects is challenging due to significant geometry shifts, such as variations in density and occlusion ratios caused by diverse physical environments and sensor configurations. Unlike 2D models, which can build on backbones pre-trained on ImageNet, 3D point cloud modeling lacks a well-trained, transferable backbone. One reason is the difficulty in reducing domain shifts for low-level geometric representations in the 3D model architecture.

Our architecture addresses this challenge by leveraging domain-specific 3D encoders that learn distinct mapping functions to convert unstructured 3D point clouds into a low-dimensional feature space. In our method, “domain-specific encoders” refer to separate neural network encoders designed to extract distinctive features from different source domains (i.e., digital twin, physical twin, and object of interest domain). The encoders are domain-specific since they do not share model parameters with other domains to capture unique features present within each domain. The use of domain-specific encoders offers several advantages:

  • Separation of Domain-Specific Features: Domain-specific encoders facilitate the disentanglement of domain-specific features from shared features. By forcing the encoders to focus on capturing domain-specific characteristics, we enable the model to differentiate between features that are intrinsic to each domain and those that are shared across domains.

  • Enhanced Feature Discriminability: When the encoders are tailored to their respective domains, the learned features become more discriminative. This discriminability improves the model's ability to capture subtle differences and adapt to variations. Domain-specific encoders prevent the model from merging domain-specific attributes into a single, less informative representation.

  • Robustness to Domain Shift: Domain-specific encoders contribute to the model's robustness against domain shift. As each encoder specializes in learning domain-specific representations, the model becomes more adaptable to variations between sources and target domains. This adaptability is crucial for effective alignment in scenarios where domains exhibit dissimilarities due to changes in data collection conditions or sensing mechanisms.

In summary, domain-specific encoders in our method play a crucial role in enhancing the effectiveness of the contrastive instance alignment. This enables domain adaptation on the target domain while maintaining performance on the source domains, facilitating bidirectional knowledge sharing. A shared 3D encoder is co-trained with data samples from the source and target domains and compresses the outputs of the domain-specific encoders.
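
To make the encoder layout concrete, the following minimal PyTorch sketch shows one possible arrangement of domain-specific point cloud encoders, a shared encoder, and a discriminative regression head. Layer sizes, the PointNet-style per-point MLP with max pooling, and all names are illustrative assumptions rather than the exact AUDIT architecture, which is defined in the released source code.

import torch
import torch.nn as nn

class PointEncoder(nn.Module):
    """Domain-specific encoder: maps an unstructured point cloud (n_points, 3)
    to a fixed-length feature vector via a shared per-point MLP and max pooling."""
    def __init__(self, feat_dim=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, 128), nn.ReLU(),
            nn.Linear(128, feat_dim),
        )

    def forward(self, pts):                      # pts: (batch, n_points, 3)
        per_point = self.mlp(pts)                # (batch, n_points, feat_dim)
        return per_point.max(dim=1).values       # permutation-invariant pooling

class AuditStyleModel(nn.Module):
    """One encoder per domain, a shared encoder compressing their outputs,
    and a discriminative head that predicts the functional property."""
    def __init__(self, feat_dim=256, shared_dim=128):
        super().__init__()
        self.enc = nn.ModuleDict({
            "DT": PointEncoder(feat_dim),        # digital twin
            "PT": PointEncoder(feat_dim),        # physical twin
            "OoI": PointEncoder(feat_dim),       # object of interest
        })
        self.shared = nn.Sequential(nn.Linear(feat_dim, shared_dim), nn.ReLU())
        self.head = nn.Linear(shared_dim, 1)

    def forward(self, pts, domain):
        feat = self.shared(self.enc[domain](pts))   # transferable feature space
        return self.head(feat), feat                # prediction and feature for alignment

A forward pass such as AuditStyleModel()(torch.rand(4, 2048, 3), "PT") returns both the regression output and the feature vector that is later used for contrastive alignment.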

3.2 Selection of Physical Twin.

The design of the physical twin should be tailored to the specific attributes of the object of interest and the goals of the functional qualification. For example, in our case study, we were particularly concerned with the compressive force behavior of lattice structures. Hence, through preliminary experiments and FEA simulations, we identified high-stress regions that occur for certain printing parameters/settings under specific forces and orientations. Based on those results, we can choose a design that preserves the high-stress regions while conserving materials and reducing printing time. A practical approach to constructing the physical twins is to utilize parametric models accessible through CAD tools. These models allow us to extract and incorporate significant design characteristics, which have been identified through prior experimentation and FEA analysis. The connection between the physical twin and the object of interest stems from our machine learning model. This is in contrast to the current industrial practice of printing two identical parts, subsequently subjecting one to destructive testing. This existing approach lacks a definitive link or model to establish the equivalence and relationship between the two parts and leads to high material usage and scrap rates. Our method effectively remedies this shortcoming.

Simultaneously printing a physical twin alongside the object of interest enables us to evaluate how variations in the printing parameters influence the functionality and specifications of the object of interest. In the case study, while we have selected a quarter section of the lattice as the physical twin, the impact of the excluded links is contained within our training dataset. Through the integration of digital and physical twins into a comprehensive machine learning model, we can significantly enhance the precision of predictions for functional characteristics. This enhancement is achieved by considering process variability through the physical twin and process variables, alongside factoring in the effects of 3D shapes and materials via the digital twin. The efficacy of this approach has been demonstrated in our case study. While we acknowledge that further exploration is needed to apply these concepts to more intricate component shapes and diverse functional qualification objectives, our intention is to introduce a novel concept through our paper. We aim to address a critical problem—functional qualification in AM—which will undoubtedly require further investigation for the development of appropriate physical twin designs tailored to new objectives and components.

A promising direction for the design of physical twins is to draw inspiration from accelerated testing techniques in reliability theory. For instance, one potential strategy involves proportionally reducing the wall thickness of the part and simplifying its structures in the physical twin. However, it is crucial to incorporate uncertainty quantification methodologies to establish confidence levels for the relationships established by our model. The robustness and generalizability of our approach depend on the careful selection of physical twin (source domain) shapes that share large similarities with the object of interest. By choosing a large overlap of geometric attributes and design complexities between physical twin and object of interest, it is possible to improve the method's ability to effectively adapt to the target domain.

3.3 Contrastive Instance Alignment.

In this section, we describe the contrastive instance alignment procedure using pseudo-labels. The core idea behind the contrastive alignment is to minimize the feature distance between similar samples from different domains. To encourage the learning of domain-invariant features, we incorporate ideas from contrastive alignment learning in 2D vision, which promotes deeper features that resemble grid-based feature maps in 2D image tasks and are therefore more transferable. During training, the discriminative head minimizes the mean squared error (MSE) for the regression task using labeled samples from both source domains (i.e., physical and digital twins):
$\mathcal{L}_{discr} = \frac{1}{N^{DT}}\sum_{i=1}^{N^{DT}}\big(f_\theta(X_i^{DT}) - Y_i^{DT}\big)^2 + \frac{1}{N^{PT}}\sum_{i=1}^{N^{PT}}\big(f_\theta(X_i^{PT}) - Y_i^{PT}\big)^2$   (1)
To achieve domain adaptation, we utilize pseudo-labels, which enhance the discriminative power of the network and ensure the alignment of similar samples in the low-dimensional feature space. This approach significantly improves the model's generalization capability and its ability to address domain shift issues. Specifically, we choose the feature instance pair $(F_i^S, F_j^T)$ based on a similarity criterion as follows. In our setting, there are two sources $S$, the physical twin ($PT$) and the digital twin ($DT$); the target domain $T$ is the object of interest ($OoI$). For each source feature instance $F_i^S$, we aim to find the feature instance $F_{j^*}^T$ from the target domain that maximizes the cosine similarity:

$j^* = \underset{j}{\operatorname{arg\,max}}\ \Phi(F_i^S, F_j^T)$   (2)

where $\Phi(F_i^S, F_j^T) = \frac{F_i^S \cdot F_j^T}{\|F_i^S\|\,\|F_j^T\|}$ calculates the cosine similarity between the features of a source sample $F_i^S$ and a target candidate $F_j^T$; a minimal sketch of this pairing step is given after the list below. The decision to utilize cosine similarity was made after careful consideration of the unique characteristics of our problem domain and the intended goals of our approach:
  • Scale Invariance for Varying Point Density: 3D point clouds are inherently sparse and exhibit varying point densities across domains due to different sensor configurations. By choosing the cosine similarity, we leverage its scale-invariant nature to ensure that our instance alignment method is not affected by the overall density or magnitude of points in each point cloud.

  • Directional Information for Spatial Relationships: In the realm of 3D point clouds, capturing spatial relationships is very important. Cosine similarity considers the direction of the vectors in the high-dimensional space, allowing us to capture the alignment based on the orientations of the vectors rather than just their magnitudes. This feature becomes valuable when aligning instances to preserve spatial structures and geometric arrangements, crucial for 3D point cloud tasks like shape matching and object recognition across domains.

  • Sparse Data Handling in High Dimensions: Cosine similarity is effective when dealing with high-dimensional and sparse point cloud data. In some point cloud representations like voxels, the majority of elements in point cloud vectors might be zero, rendering traditional distance metrics less effective.
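
The pairing step of Eq. (2) reduces to a normalized matrix product. The snippet below is a minimal sketch under the assumption that source and target features are stored as row-wise tensors; all names are hypothetical.

import torch
import torch.nn.functional as F

def match_instances(feats_src, feats_tgt):
    """feats_src: (Ns, d), feats_tgt: (Nt, d); returns j* for every source instance i."""
    src = F.normalize(feats_src, dim=1)   # unit-norm rows turn dot products into cosine similarities
    tgt = F.normalize(feats_tgt, dim=1)
    sim = src @ tgt.t()                   # (Ns, Nt) matrix of Phi(F_i^S, F_j^T)
    return sim.argmax(dim=1)              # j* = argmax_j Phi(F_i^S, F_j^T), cf. Eq. (2)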

In addition to minimizing the inter-class distance between domains, we also constrain the intra-class distance between different samples within the same domain. Hence, we get the following loss functions for the contrastive alignment:
(3)
(4)
(5)
where $\tau$ denotes a tuning parameter for the strength of domain adaptation. The contrastive alignment loss $\mathcal{L}_{contr,align}$ considers the pairwise relations of samples between and within domains (source and target) to enable inter-domain transfer learning and improved discriminative performance on intra-domain tasks. Finally, by combining the loss terms in Eqs. (1) and (5), we optimize the model $f_\theta(\cdot)$ by minimizing the following loss:

$\mathcal{L}_{total} = \mathcal{L}_{discr} + \lambda\,\mathcal{L}_{contr,align}$   (6)
where λ is a tuning parameter to balance domain adaptation (i.e., contrastive alignment) and the learning of the discriminative task. Due to the sparse distribution of features in point clouds, achieving effective alignment between domains through global distribution alignment is challenging. In our experiments, we observed that using contrastive alignment alone introduces a mismatch in point density and occlusion ratio between the sample distributions of pseudo-labels and ground truths in the target domain. To address this issue, we incorporate effective augmentation through hard sample mining to further enhance domain adaptation.
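
The following sketch shows how the discriminative and alignment terms can be combined as in Eq. (6). Because the exact forms of Eqs. (3)-(5) are not reproduced here, an InfoNCE-style loss over the matched pairs from Eq. (2) with temperature tau serves only as a stand-in for the contrastive alignment term; lam and tau correspond to the tuning parameters discussed above.

import torch
import torch.nn.functional as F

def contrastive_alignment(feats_src, feats_tgt, matches, tau=0.1):
    """Stand-in alignment loss: pull each source feature toward its matched target
    feature (positive) and away from the remaining target features (negatives)."""
    src = F.normalize(feats_src, dim=1)
    tgt = F.normalize(feats_tgt, dim=1)
    logits = (src @ tgt.t()) / tau               # temperature-scaled cosine similarities
    return F.cross_entropy(logits, matches)      # matches: indices j* from Eq. (2)

def total_loss(pred_src, y_src, feats_src, feats_tgt, matches, lam=0.5, tau=0.1):
    l_discr = F.mse_loss(pred_src, y_src)        # discriminative MSE term, cf. Eq. (1)
    l_align = contrastive_alignment(feats_src, feats_tgt, matches, tau)
    return l_discr + lam * l_align               # weighted combination, cf. Eq. (6)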

3.4 Hard Sample Mining.

In the context of 3D measurements in manufacturing, variability arises from factors such as variations in point cloud density and the presence of occlusions. These factors have a significant impact on the effectiveness of contrastive instance alignment. Point cloud density can vary between the object of interest and the physical twin, with some measurement techniques producing sparse point clouds while others generate denser ones. This discrepancy poses a challenge when aligning sample distributions, as the contrastive instance alignment approach may favor densely populated areas, potentially neglecting patterns with sparse point clouds. Moreover, additive manufacturing often involves complex geometries and occlusions, making it challenging to capture complete object geometry. Consequently, pseudo-labels used for alignment may not accurately represent patterns with severe occlusions. Acknowledging and addressing these factors is essential for accurate and comprehensive 3D transfer learning in additive manufacturing and other manufacturing domains.

To address these limitations, we leverage hard sample mining as a transformation technique for point clouds that specifically considers geometry mismatches. This approach, introduced by Biehler et al. [25] for single-source transfer learning, aims to maximize network learning by generating augmented samples, denoted as $X_{i,hsm}^{OoI}$, that meet two requirements. First, $X_{i,hsm}^{OoI}$ should be more challenging than the original sample $X_i^{OoI}$, ensuring a larger discriminative loss ($\mathcal{L}_{discr}(X_{i,hsm}^{OoI}) \geq \mathcal{L}_{discr}(X_i^{OoI})$). Second, $X_{i,hsm}^{OoI}$ should not lose its 3D shape features and should describe a shape that is not too different from $X_i^{OoI}$. To control the augmentation magnitude, the difference in discriminative losses $\mathcal{L}_{discr}(X_{i,hsm}^{T}) - \mathcal{L}_{discr}(X_i^{T})$ is upper bounded by a dynamic parameter $\delta$ (i.e., $\mathcal{L}_{discr}(X_{i,hsm}^{T}) - \mathcal{L}_{discr}(X_i^{T}) \leq \delta\,\mathcal{L}_{discr}(X_i^{T})$). The underlying intuition is as follows: During the initial stages of training, when the model is fragile, it is preferable to have a smaller $\delta$. This choice ensures that the generated hard samples are not excessively challenging for the model to predict. As the model's discriminative ability improves over time, $\delta$ gradually increases, allowing for the generation of more challenging hard samples. Hence, the dynamic parameter $\delta$ is inversely proportional to the discriminative loss (i.e., $\delta = 1 + \frac{1}{\mathcal{L}_{discr}}$, ensuring $\delta \geq 1$). Consequently, the hard sample mining loss, denoted as $\mathcal{L}_{hsm}$, is computed as
(7)

To efficiently obtain hard samples, Ref. [25] proposed a novel algorithm that optimizes $\mathcal{L}_{hsm}$ without directly conducting gradient-based optimization. This algorithm combines two components: simulating object occlusions by altering the geometry of easy samples and discarding critical points along the gradient direction from existing dense point clouds. The attribution score, indicating the contribution of each point to the discriminative loss, plays a crucial role in hard sample mining. Aggregating highly scored points identifies important segments/subsets in a point cloud. By discarding points with high attribution scores, a “hard sample” is created for the model to predict. The transformed point clouds achieved through hard sample mining contribute to effective contrastive instance alignment by reducing the distribution mismatch induced by pseudo-labels in the target domain. Figure 3 illustrates the hard sample mining algorithm, involving random viewpoint selection, calculation of point attributions, and deletion of points with large attribution scores until the termination criterion $\mathcal{L}_{discr}(X_{i,hsm}^{T}) - \mathcal{L}_{discr}(X_i^{T}) > \delta\,\mathcal{L}_{discr}(X_i^{T})$ is met.
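
A simplified version of this attribution-driven point deletion is sketched below. It assumes a model that maps a single point cloud to a scalar prediction and uses the per-point gradient norm as the attribution score; the drop fraction, iteration cap, and function names are illustrative and do not reproduce the exact procedure of Ref. [25].

import torch
import torch.nn.functional as F

def mine_hard_sample(model, pts, pseudo_label, drop_frac=0.02, max_iters=20):
    """pts: (n_points, 3); pseudo_label: scalar tensor; model(pts) -> scalar prediction."""
    pts = pts.detach().clone()
    with torch.no_grad():
        base_loss = F.mse_loss(model(pts), pseudo_label)
    delta = 1.0 + 1.0 / (base_loss.item() + 1e-8)       # dynamic bound, delta >= 1
    for _ in range(max_iters):
        pts.requires_grad_(True)
        loss = F.mse_loss(model(pts), pseudo_label)
        if (loss - base_loss) > delta * base_loss:      # termination criterion from the text
            break
        grads, = torch.autograd.grad(loss, pts)         # attribution: loss gradient per point
        scores = grads.norm(dim=1)
        keep = scores.argsort()[: int(len(pts) * (1.0 - drop_frac))]
        pts = pts[keep].detach()                        # discard the most critical points
    return pts.detach()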

Fig. 3: Illustration of hard sample mining algorithm [25]

The transformed point clouds focus on effective contrastive instance alignment by reducing the distribution mismatch of the target (object of interest) domain induced by pseudo-labels.

Figure 4 illustrates the AUDIT procedure specifically designed for multi-source contrastive alignment. In the context of functional qualification in additive manufacturing, the conventional use of contrastive loss tends to effectively align easily recognizable 3D objects, such as parts with consistent point density and minimal occlusions. However, this approach often neglects the challenging samples encountered in additive manufacturing, where different parts may exhibit variations in point density and severe occlusions due to complex geometries or intricate designs. Consequently, using the conventional contrastive loss may lead to a mismatch in point density and occlusion ratio between the sample distribution of pseudo-labels and the ground truths in the object of interest (target) domain. To overcome this issue, we leverage the hard sample mining algorithm [25] in the additive manufacturing context. This algorithm transforms point clouds by considering specific geometry mismatches across the object of interest and physical twin. By addressing these challenges, the hard sample mining algorithm significantly improves the domain alignment, enabling more effective transfer learning and alignment among the object of interest and the physical and digital twins.

Fig. 4: AUDIT procedure for multi-source (digital twin, physical twin, and object of interest) contrastive alignment with hard sample augmentation

3.5 Unified AUDIT Framework.

We propose a stepwise training procedure with a warm-up process to train the AUDIT framework, as shown in Algorithm 1. Specifically, we first pre-train the source models of the physical and digital twins on the labeled source domains and use them to generate pseudo-labels on the target set of the object of interest. We then conduct hard sample mining [25] and augment the target set. Next, we warm up the model following Eq. (6), which allows a more stable convergence in the early stages of training. For the remaining epochs, we update the pseudo-labels using stepwise co-training. During this process, $f_\theta(\cdot)$ gradually adapts to the object of interest (target) domain while maintaining the in-domain performance.

The model architecture details are available from the source code of the AUDIT method, which will be open source upon paper publication. The hyperparameters are tuned using Bayesian optimization.

AUDIT algorithm for functional qualification via multi-source, multi-input transfer learning between digital and physical twins and the object of interest

Algorithm 1

Inputs:

  • Digital twin (Source 1): Labeled point cloud dataset from source domain $D^{DT}=\{(X_i^{DT}, Y_i^{DT})\}_{i=1}^{N^{DT}}$

  • Physical twin (Source 2): Labeled point cloud dataset from source domain $D^{PT}=\{(X_i^{PT}, Y_i^{PT})\}_{i=1}^{N^{PT}}$

  • Material properties for digital twin simulation: $X_M^{DT}$

  • Process measurements (identical for object of interest and physical twin): $X_P$

  • Object of interest (target): Unlabeled input point cloud from target domain $X_S^{OoI}$

  • Algorithm parameters: Network architecture and termination tolerance ε

Output:

  • Learned network weights θ of model f(·)

Algorithm:

  1. Pre-train base model

    $f_{init} = \mathrm{fit}(D^{DT}, D^{PT})$

  2. Generate pseudo-labels for target domain samples

    $\{\bar{Y}_i^{OoI}\}_{i=1}^{N^{OoI}} = \mathrm{predict}(f_{init}, \{X_i^{OoI}\}_{i=1}^{N^{OoI}})$

  3. Mine hard samples to augment the target set

    $D_{hsm_0}^{OoI} = \{(X_{i,hsm_0}^{OoI}, \bar{Y}_{i,hsm_0}^{OoI})\}_{i=1}^{N^{OoI,hsm_0}} = \mathrm{hsm}\big(\{(X_i^{OoI}, \bar{Y}_i^{OoI})\}_{i=1}^{N^{OoI}}\big)$

  4. Initialize the model with 3D base model

    $f_\theta = f_{init}$

  5. Warm start of AUDIT model

    $f_{\theta^0} = \mathrm{fit}(D^{DT}, D^{PT}, D_{hsm_0}^{OoI})$

  6. AUDIT iteration: Iteration index k

    While not converged:

    • 6.1 Update pseudo-labels:

      • $\{\bar{Y}_{i,hsm_k}^{OoI}\}_{i=1}^{N^{OoI,hsm_k}} = \mathrm{predict}(f_{\theta^0}, \{X_i^{OoI}\}_{i=1}^{N^{OoI}})$

    • 6.2 Add new hard samples to the target dataset [25]
    • 6.3 Model update: $f_{\theta^k} = \mathrm{fit}(D^{DT}, D^{PT}, D_{aug,k}^{OoI})$

    Termination check: $|f_{\theta^{k-1}} - f_{\theta^k}| \leq \varepsilon$

3.6 Discussion of Properties and Limitations.

The effectiveness of transfer learning is not always guaranteed, unless its basic assumptions are satisfied: (1) the learning tasks of the domains are related/similar; (2) the source domain and target domain data distributions are not too different; and (3) a suitable model can be applied to both domains. Violations of these assumptions may lead to negative transfer (NT), i.e., introducing source domain data/knowledge undesirably decreases the learning performance in the target domain. We would like to highlight two key properties of our approach in handling this issue: contrastive instance alignment using cosine similarity and the incorporation of hard sample mining:

  • Contrastive Instance Alignment Using Cosine Similarity: Our method employs contrastive instance alignment as a core component. By utilizing cosine similarity, we focus on aligning instances while considering their relative positions in the feature space. This approach helps mitigate negative transfer by promoting the alignment of instances that share semantic similarities from both the source and target domains. This alignment encourages these instances to group together in the feature space, effectively preserving domain-specific characteristics. The use of cosine similarity as a distance metric promotes the alignment of semantically similar instances, even in scenarios where other distance measures might not be as effective.

  • Hard Sample Mining: Negative transfer can stem from including irrelevant or conflicting source domain instances. To mitigate this, our method employs hard sample mining during the instance alignment process. Hard sample mining involves identifying challenging samples from the source domain that are difficult to align with the target domain. By focusing on these challenging instances, our approach reduces the likelihood of introducing undesirable knowledge from the source domain into the target domain. This strategy enhances the model's adaptability by prioritizing instances that contribute positively to the alignment process.

Through the synergistic application of contrastive instance alignment with cosine similarity and the incorporation of hard sample mining, our approach actively addresses the negative transfer challenge.

Acknowledging the potential scarcity of physical twins in comparison to digital twins, which may lead to imbalanced data, we emphasize the adaptability of our method with slight adjustments. These adaptations are outlined as follows:

  • Parallel Feature Learning With Imbalanced Data: Within the framework of co-training, the imbalanced domain can be considered as one view while the balanced (or artificially balanced) domain serves as the other. This preserves the original co-training mechanism while mitigating the imbalance challenge.

  • Data-Specific Sampling: When imbalances are present in source and target domains, distinct sampling strategies can be employed for each dataset. Strategies such as oversampling, undersampling, or adaptive sampling can be applied to address the imbalance in the imbalanced domain, while the balanced (or artificially balanced) domain adheres to regular co-training principles without extensive alterations.

However, we acknowledge that data imbalances can introduce biases in the alignment process. Our method is designed with a certain level of flexibility to accommodate such concerns via more advanced strategies as follows:

  • Loss Re-Weighting: Implementing loss re-weighting mechanisms assigns higher weights to instances from the minority domain. This corrective measure counteracts the impact of imbalanced data by directing the model's focus toward underrepresented instances, thus fostering a more balanced alignment.

  • Transfer Learning Techniques: The use of transfer learning techniques enables the utilization of pre-trained models or features from the imbalanced domain to initiate the alignment process. This leverages knowledge transfer from both the digital and physical twins, even when the latter's data are limited.

In conclusion, our model offers a robust approach for functional qualification, employing both physical and digital twins within the framework of multi-source, multi-input unsupervised transfer learning. Nonetheless, we recognize certain limitations that present exciting opportunities for future research endeavors.

4 AUDIT Case Study—3D-Printed Lattice Structures

We conducted a real-world case study to demonstrate the potential of the AUDIT framework for functional qualification in additive manufacturing via physical and digital twins. Our experiments use fused filament fabrication (FFF) to print PLA specimens. The printed object of interest is a body-centered cubic (BCC) lattice situated within a cubic-primitive (CP) lattice (Fig. 5(a)). The dimensions of the lattice unit cube are 5 cm × 5 cm × 5 cm. To create a physical twin, we extracted a one-fourth portion of the BCC-CP lattice structure. This approach allowed us to produce a physical twin that required less material compared to the object of interest, while still retaining similar design features (Fig. 5(b)). The physical twin and the object of interest were printed using identical process conditions. This enables us to understand how the printing conditions of both parts impact their functional properties.

Fig. 5: (a) Object of interest and (b) physical twin in the case study, and (c) experimental setup of the case study

4.1 Experimental Setup.

The specimens for the experiments have been printed using a Prusa MK3S FFF printer developed by Prusa Research, Prague, Czech Republic. The measurement setup is complemented by a FLIR T360 thermal imaging infrared camera with 1.3 MP resolution and a FARO Quantum ScanArm with laser line probe. A microcomputer is used to log the nozzle and print bed temperature. A noise detector is installed to collect acoustic emission signals of the process. The experimental setup is visualized in Fig. 5(c).

To capture the influence of process parameters on the FFF printing process, a space-filling Latin hypercube design with N = 60 samples is utilized. The corresponding process parameter ranges are reported in Table 1.

Table 1

Parameter settings for design of experiments

Process setting          Range
Printing speed           35–100 mm/s
Fan speed                0–100%
Nozzle temperature       190–240 °C
Print bed temperature    40–75 °C
Extrusion width          0.35–0.55 mm
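
A design of this kind can be generated, for example, with SciPy's quasi-Monte Carlo module; the parameter order below follows Table 1 and the seed is arbitrary.

import numpy as np
from scipy.stats import qmc

# Lower/upper bounds: printing speed (mm/s), fan speed (%), nozzle temperature (°C),
# print bed temperature (°C), extrusion width (mm), as in Table 1.
lower = np.array([35.0, 0.0, 190.0, 40.0, 0.35])
upper = np.array([100.0, 100.0, 240.0, 75.0, 0.55])

sampler = qmc.LatinHypercube(d=5, seed=0)
design = qmc.scale(sampler.random(n=60), lower, upper)   # (60, 5) matrix of process settings
print(design[:3])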

When conducting the experiments, 12 experiments failed due to improper process parameter combinations, resulting in 48 samples in total. For each of those experiments, 3D measurement point cloud data from the FARO scanner are available for both the object of interest and the physical twin. Additionally, we recorded in situ sensing data from six heterogeneous data sources spanning three data types: tabular, functional curve, and image data. The process data sources $X_P$ along with their typical dimensions are listed in Table 2, where L = 250 denotes the number of printing layers.

Table 2

Process data description

Process data $X_P$                              Data type           Data dimension
Process settings (no in situ monitoring):       Tabular             3
  • Fan speed: $X_{P,1}$
  • Extrusion width: $X_{P,2}$
  • Printing speed: $X_{P,3}$
  • Extrusion width: $X_{P,4}$
Nozzle temperature: $X_{P,5}(t)$                Functional curve    1000 × L
Print bed temperature: $X_{P,6}(t)$             Functional curve    1000 × L
Infrared image: $X_{P,7}(t)$                    Image               320 × 240 × L

In terms of data preprocessing, the functional curves of the nozzle and bed temperature are fixed to a length of 1000 using dynamic time warping. The point clouds of the object of interest $X_S^{OoI}$ and the physical twin $X_S^{PT}$ are up- or down-sampled to a fixed number of points $N_p$ = 60,000, resulting in a data dimension of $\mathbb{R}^{60{,}000\times 3}$ for each sample. Note that these measurement points are unstructured and can exhibit irregular spatial arrangements and varying densities. In contrast to structured point clouds, these measurement locations are not consistent across different samples.
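
The paper does not prescribe a particular resampling scheme; random sampling with or without replacement, as sketched below, is one simple way to bring every scanned cloud to the fixed size Np = 60,000.

import numpy as np

def resample_cloud(points, n_target=60_000, seed=0):
    """points: (n, 3) array of unstructured measurement points; returns (n_target, 3)."""
    rng = np.random.default_rng(seed)
    replace = points.shape[0] < n_target      # up-sample small clouds with replacement
    idx = rng.choice(points.shape[0], size=n_target, replace=replace)
    return points[idx]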

To model the heterogeneous input data, we use the following data-type-specific feature extractors: for the tabular data, we employ a fully-connected multi-layer perceptron (MLP) to extract features from $X_{P,1}$, $X_{P,2}$, and $X_{P,3}$. For the two functional curves (i.e., $X_{P,5}(t)$ and $X_{P,6}(t)$), we utilize the deep CNN architecture proposed by Yang et al. [26] in an autoencoder setting. As the feature extractor for the infrared images $X_{P,7}(t)$, we utilize the convolutional autoencoder structure proposed in Ref. [27]. Here we elaborate further on our rationale for choosing specific neural network architectures tailored to distinct dataset types:

  • Fully-Connected MLP: For tabular datasets, we opted for a fully-connected MLP due to its effectiveness in handling structured data. Tabular data typically consist of features organized in rows and columns, making them well-suited for MLPs. The architecture's ability to learn complex relationships among features enables accurate predictions in such scenarios.

  • Deep CNN Architecture: A deep CNN architecture was selected for functional curve datasets as it excels in processing sequential data. Functional curves represent time-series or sequential data, where the order of the data points is crucial. The hierarchical nature of CNNs allows them to capture both local and global patterns in the functional curves, making them a powerful choice for this dataset.

  • Convolutional Autoencoder Structure: For infrared image datasets, we employed a convolutional autoencoder structure. Autoencoders are particularly suitable for learning efficient representations from high-dimensional data like images. The convolutional autoencoder's ability to encode essential features and reconstruct the images with high fidelity is essential for achieving accurate predictions with infrared images.

Each chosen architecture was carefully tailored to the specific attributes and intricacies of its respective dataset. This approach aimed to maximize performance and ensure robust predictions. The neural network designs were aligned with the inherent nature of the data types, striving for optimal outcomes in each case. It is worth noting that these selections are widely acknowledged in the literature. However, we acknowledge the potential necessity of fine-tuning architectures for particular applications to further elevate performance and address unique challenges inherent to different scenarios.
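
The sketch below illustrates data-type-specific extractors of this kind in PyTorch; the layer counts, channel sizes, and per-frame treatment of the infrared images are simplifying assumptions and do not reproduce the architectures of Refs. [26] and [27].

import torch
import torch.nn as nn

class ProcessFeatureExtractor(nn.Module):
    """Tabular MLP + 1D CNN for temperature curves + 2D CNN for one infrared frame."""
    def __init__(self, n_tabular=3, feat=32):
        super().__init__()
        self.tab = nn.Sequential(nn.Linear(n_tabular, 64), nn.ReLU(), nn.Linear(64, feat))
        self.curve = nn.Sequential(                       # two curves stacked as channels
            nn.Conv1d(2, 16, kernel_size=7, stride=2), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=7, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(32, feat),
        )
        self.img = nn.Sequential(                         # single-channel infrared frame
            nn.Conv2d(1, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, feat),
        )

    def forward(self, x_tab, x_curves, x_ir):
        # x_tab: (B, 3), x_curves: (B, 2, 1000), x_ir: (B, 1, 240, 320)
        return torch.cat([self.tab(x_tab), self.curve(x_curves), self.img(x_ir)], dim=1)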

Furthermore, we conducted individual quasi-static compression tests on the object of interest and physical twin samples using a Shimadzu AG-IC 20 kN UTM. To carry out the compression test, we positioned the lattice specimens on a rigid plate, with an upper rigid plate descending to apply compression at an engineering strain rate of 0.001 s−1. Since the layer-wise fabrication process introduces anisotropy in material properties, all lattices in this study were compressed along the rise (printing) direction for consistency. The displacement recorded by the UTM and the contact force measured by the load cell attached to the upper plate were converted into engineering stress–strain curves. Additionally, we utilized a digital camera to capture optical images of the entire crushing process, enabling future analysis of the deformation mechanism. Figure 6 presents a visual representation of the destructive testing performed on one object of interest for different contact forces. The experiment began with Fig. 6(a) where no displacement was initially applied. As the experiment progressed, a gradual displacement was exerted on the top surface in a top-down direction to compress the part. In Fig. 6(b), the force reached a magnitude of 103.1 N. Continuing the experiment, both the displacement and force continued to increase. However, as the structure started to crack, the two left front struts of the CP became detached and flew away, and the front right strut also developed a crack. Consequently, the force decreased to 86.2 N, as depicted in Fig. 6(c). Subsequently, the force was absorbed by the inner lattice structure (BCC). As the back right top strut of the BCC structure also cracked, the force further dropped to 43.7 N, as illustrated in Fig. 6(d).

Fig. 6: Destructive testing of one object of interest on UTM at different levels of compressive force: (a) 0 N, (b) 103.1 N, (c) 86.2 N, and (d) 43.7 N

The primary aim of this case study is to accurately predict the maximum compressive strength of the object of interest YOoI, as it serves as a crucial indicator for assessing the functional qualification of lattice structures. The histograms and the fitted PDF shown in Fig. 7 provide evidence of substantial variation in functional performance due to different printing process conditions. Each of the 48 maximum compressive force results in Fig. 7 corresponds to a distinct set of process settings, as a space-filling design of experiments was employed. Additionally, a moderate correlation of 53.43% (Pearson) is observed between the maximum compressive force of the object of interest YOoI and its physical twin YPT, as they were printed under identical process conditions. This correlation further shows the connection between the functional properties of the object of interest and its physical twin, which is induced by the identical printing process conditions.

Fig. 7: Histograms of the distribution of maximum compressive force values for both the (a) objects of interest and (b) physical twins

In our case study, our focus was on predicting the maximum compressive strength of the object of interest—a pivotal parameter for numerous applications. However, we recognize the significance of extending our predictions to encompass other functional properties relevant in diverse applications. To enable predictions for these additional functional properties, a series of carefully designed experiments is essential. These experiments should encompass various materials, geometries, and process conditions to establish comprehensive datasets. These datasets will then serve as the foundation for training our model to make precise predictions across a range of functional properties. The accuracy of these predictions depends on the correlation among the 3D shape, digital twin, and measured process conditions with functional properties.

4.2 Finite Element Analysis Simulations for the Digital Twin.

To create a digital twin, a 3D scan of the object of interest was performed. Finite element simulations were then conducted using the 3D scan data to analyze the compressive behavior. The Ansys software, specifically the Static Structural module, was utilized. The boundary condition involved fixing the bottom surfaces of the lattice structure while applying uniform stress on the top surfaces. Table 3 presents the input parameters $X_M^{DT}$ used in the simulations, which were obtained from the literature on bulk PLA filament [28].

Table 3

Material input parameters $X_M^{DT}$ of the PLA filament

Density (kg/m³)      Elastic modulus      Poisson's ratio
ρ = 1240             E = 3500 MPa         ν = 0.35

The output of the simulation is the breaking displacement value $Y^{DT}$ at the ultimate tensile strength of PLA (42 MPa), reported with a precision of 0.01 µm. Figure 8 depicts the lattice structure of the digital twin at various times throughout the simulation.

Fig. 8: Digital twin during compression simulation in Ansys at different stress values: (a) 9.23 MPa, (b) 18.46 MPa, (c) 27.73 MPa, and (d) 36.93 MPa

We find that the small shape discrepancies from the ideal design have only a moderate impact on the functional properties, as indicated by a Pearson correlation of 22.31% between the breaking displacement YDT and the maximum compressive force of the object of interest YOoI.

A far more crucial factor influencing the functional characteristics is the 3D printing process conditions. Optimizing printing process conditions enables enhanced control over the functional properties of printed objects. Emphasizing the relationship between process conditions and functional properties paves the way for continuous improvement and design optimization in future research.

4.3 Benchmark Methods.

We evaluated the AUDIT framework against various benchmarks such as linear regression, multi-layer perceptron, supervised transfer learning (pre-training), and a data augmentation scheme called PointAugment. In the following, we give a brief overview of those benchmark methods.

Two supervised regression models, namely linear regression and MLP, are used as benchmarks. Linear regression is chosen for its simplicity, while MLP can capture nonlinear relationships. These models use a combination of features related to the 3D printing process conditions XP and functional property outcome variables from digital and physical twins to predict the maximum compressive force of the object of interest. Various combinations of input features are explored for both models to identify the optimal setup. Table 4 displays the different input feature settings for each model configuration.

Table 4

Input features for the supervised regression models across different settings

Setting 1: Input features are the process setting values (fan speed XP,1, extrusion width XP,2, printing speed XP,3, extrusion width XP,4, nozzle temperature XP,5, bed temperature XP,6); output YOoI
Setting 2: Setting 1 inputs + YPT (destructive testing result of the physical twin); output YOoI
Setting 3: Setting 1 inputs + YDT (simulated result of the digital twin); output YOoI
Setting 4: Setting 1 inputs + YPT + YDT; output YOoI
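
As an illustration of how these settings translate into models, the sketch below fits the two supervised benchmarks on each feature combination; the data, sample size, and hyperparameters are placeholders rather than the values used in the case study.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n = 48                                    # placeholder sample count
X_P  = rng.normal(size=(n, 6))            # process settings XP,1 ... XP,6
y_PT = rng.normal(size=n)                 # destructive test result of the physical twin
y_DT = rng.normal(size=n)                 # simulated result of the digital twin
y_OoI = rng.normal(size=n)                # target: max. compressive force of the object of interest

# Feature settings 1-4 from Table 4.
settings = {
    1: X_P,
    2: np.column_stack([X_P, y_PT]),
    3: np.column_stack([X_P, y_DT]),
    4: np.column_stack([X_P, y_PT, y_DT]),
}

for name, model in [("Linear regression", LinearRegression()),
                    ("MLP", MLPRegressor(hidden_layer_sizes=(64, 32),
                                         max_iter=2000, random_state=0))]:
    for s, X in settings.items():
        model.fit(X, y_OoI)
        # Training-set R^2 is printed only to illustrate the pipeline,
        # not as an evaluation metric.
        print(name, "Setting", s, "R^2 =", round(model.score(X, y_OoI), 3))
```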

Furthermore, we utilize a widely used supervised transfer learning technique called pre-training [29]. Initially, the model is pre-trained on the source dataset (digital and physical twins) and then fine-tuned on the target dataset of interest. This transfer learning approach mitigates overfitting caused by PointNet's neural network structure when dealing with smaller datasets. During pre-training, a certain percentage of the initial layers are frozen, allowing the model to leverage generalized information. The optimal number of frozen layers is determined through iterative exploration. Subsequently, the model is fitted to the object of interest (target) dataset.
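
A minimal sketch of this pre-train, freeze, and fine-tune procedure is given below, using a generic fully connected network as a stand-in for the PointNet-based architecture; the layer counts, number of frozen layers, and learning rate are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Generic stand-in for a PointNet-style regressor; the real architecture
# and layer count differ.
model = nn.Sequential(
    nn.Linear(3, 64), nn.ReLU(),
    nn.Linear(64, 128), nn.ReLU(),
    nn.Linear(128, 1),
)

# 1) Pre-train on the source data (digital and physical twins).
# ... standard supervised training loop on the source dataset ...

# 2) Freeze the first k layers so the generalized features are retained.
k = 2  # number of frozen Linear layers (tuned iteratively in practice)
frozen = 0
for module in model:
    if isinstance(module, nn.Linear) and frozen < k:
        for p in module.parameters():
            p.requires_grad = False
        frozen += 1

# 3) Fine-tune only the remaining trainable layers on the small target dataset.
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4)
```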

Finally, we compare AUDIT with PointAugment, an advanced augmentation algorithm for 3D point cloud data [30]. PointAugment generates new samples by augmenting existing ones, enriching data diversity. Unlike fixed strategies, PointAugment trains an augmentor alongside the model, using sample-aware augmentation based on geometric structure. It optimizes performance through adversarial learning and automates the augmentation process, improving dataset enrichment.

Although these benchmarks have the advantage of access to the target labels, which AUDIT's unsupervised domain adaptation does not use during training, we include them to put AUDIT's performance in context.

4.4 Case Study Prediction Results.

In this section, we compare the AUDIT framework with the benchmarks using the normalized root mean squared error (NRMSE), which allows comparison across datasets. Table 5 presents the average NRMSE from ten-fold cross-validation (CV). In particular, we used a nested cross-validation setup: the outer loop performs ten-fold cross-validation for model evaluation (43 samples (90%) for training and 5 samples (10%) for testing), while the inner loop further splits the training data into a smaller training set (34 samples, 80%) and a validation set (9 samples, 20%) for hyperparameter tuning. It is important to note that the object-of-interest labels, which AUDIT uses only for performance evaluation, are employed during training in the benchmark methods. Additionally, we report the un-normalized root mean squared error (RMSE), scaled back to the dataset being analyzed via RMSE = NRMSE × (YOoImax − YOoImin).
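
The following sketch shows the NRMSE metric and the nested cross-validation skeleton described above; the data and the inner predictor are dummies used only to make the snippet runnable.

```python
import numpy as np
from sklearn.model_selection import KFold, train_test_split

def nrmse(y_true, y_pred, y_min, y_max):
    """RMSE normalized by the overall target range, so RMSE = NRMSE * (y_max - y_min)."""
    return np.sqrt(np.mean((y_true - y_pred) ** 2)) / (y_max - y_min)

# Placeholder data standing in for the case-study samples.
X, y = np.random.rand(48, 6), np.random.rand(48)
y_min, y_max = y.min(), y.max()

# Outer loop: ten-fold CV for model evaluation.
outer = KFold(n_splits=10, shuffle=True, random_state=0)
scores = []
for train_idx, test_idx in outer.split(X):
    # Inner split: 80/20 training/validation for hyperparameter tuning.
    X_tr, X_val, y_tr, y_val = train_test_split(
        X[train_idx], y[train_idx], test_size=0.2, random_state=0)
    # ... fit the model on (X_tr, y_tr), tune on (X_val, y_val) ...
    y_pred = np.full(len(test_idx), y_tr.mean())  # dummy predictor for illustration
    scores.append(nrmse(y[test_idx], y_pred, y_min, y_max))

print("mean NRMSE:", np.mean(scores), "+/-", np.std(scores))
```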

Table 5

Case study prediction results on the object of interest (bold: best performing model)

Method                          NRMSE              RMSE
Linear regression (Setting 1)   6.066 (3.214)      1319.420 (699.081)
Linear regression (Setting 2)   5.940 (2.668)      1291.975 (580.329)
Linear regression (Setting 3)   6.065 (2.745)      1319.205 (597.204)
Linear regression (Setting 4)   5.952 (2.242)      1294.718 (487.785)
MLP (Setting 1)                 7.407 (4.363)      1611.130 (949.229)
MLP (Setting 2)                 6.961 (3.018)      1514.087 (654.766)
MLP (Setting 3)                 7.446 (2.542)      1619.608 (552.954)
MLP (Setting 4)                 2.781 (1.303)      604.947 (283.484)
Pre-training (PointNet)         20.973 (13.269)    4561.662 (2884.613)
PointAugment                    0.280 (0.076)      61.067 (16.628)
AUDIT (ours)                    0.014 (0.001)      3.002 (0.337)
Note: Average results and standard deviation in brackets from ten-fold CV. Bold signifies best performing model.

AUDIT outperforms all other models in Table 5, showcasing its potential for functional qualification in additive manufacturing. It achieves an RMSE of approximately 3 N for the maximum compressive force, demonstrating accurate prediction of the functional property.

Although the benchmark methods (e.g., linear regression, MLP, pre-training, and PointAugment) have access to target labels, they still fall short in achieving satisfactory performance due to limited sample size, high-dimensional data complexity, and the inability to consider all relevant data sources. In contrast, AUDIT surpasses them by effectively co-training with labeled source data and augmented hard samples, while also incorporating heterogeneous process conditions and material properties with its multi-input architecture.

Our framework incorporates hard sample mining as an augmentation strategy. During each training epoch, this technique generates a set of challenging “hard” samples for augmentation, enhancing the model's generalizability and preventing overfitting. Furthermore, to assess the model's ability to generalize to unseen data, we employed the nested ten-fold cross-validation methodology described above and applied early stopping: the model's performance on a validation set was monitored during training, and training was halted once validation performance began to degrade while training performance continued to improve.
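
The early-stopping rule can be illustrated with the following toy loop, in which synthetic training and validation losses stand in for the real model; the patience value and loss shapes are assumptions for illustration only.

```python
import copy
import numpy as np

rng = np.random.default_rng(0)

def train_one_epoch(state):
    """Placeholder: training loss that keeps decreasing with the epoch count."""
    state["epoch"] += 1
    return 1.0 / state["epoch"] + 0.01 * rng.random()

def validate(state):
    """Placeholder: validation loss improves, then degrades (overfitting onset)."""
    e = state["epoch"]
    return abs(e - 25) / 25 + 0.01 * rng.random()

state = {"epoch": 0}
best_val, best_state, patience, wait = float("inf"), None, 5, 0
for _ in range(200):
    train_one_epoch(state)
    val_loss = validate(state)
    if val_loss < best_val:
        best_val, best_state, wait = val_loss, copy.deepcopy(state), 0
    else:
        wait += 1
        if wait >= patience:  # validation stopped improving -> stop to avoid overfitting
            break

print("stopped at epoch", state["epoch"], "best validation loss", round(best_val, 3))
```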

However, it is essential to acknowledge the limitations of this study. The current application of this machine learning approach is limited to a small dataset and simple geometries; further work is needed to assess its generalizability to larger datasets and more complex geometries.

5 Conclusion

In conclusion, this paper introduces the AUDIT framework—a novel approach to the functional qualification of 3D-printed parts using physical and digital twins. By combining the strengths of physical and digital twins with transfer learning techniques, the AUDIT framework enables accurate predictions of the functional properties of 3D-printed parts without the need for destructive testing. The case studies on 3D-printed lattice structures highlight the potential of this approach in enhancing the functional qualification of critical 3D-printed parts. By considering real-world manufacturing process conditions and incorporating the FEA analysis of the 3D shape (digital twin), AUDIT provides a more holistic evaluation of 3D-printed functional properties. Additionally, the framework introduces transfer learning techniques for additive manufacturing processes, enabling the fusion of heterogeneous 3D shape data from multiple sources to enhance understanding of the relationships between 3D shapes, process conditions, and functional properties.

Although the framework has undergone evaluation using a dataset of 3D-printed lattice structures, there is a need for future work to expand and validate its applicability across a broader range of 3D-printed parts. This verification should encompass different printing processes and materials to ensure the framework's effectiveness in diverse scenarios. Additionally, future research should focus on improving the efficiency and sampling procedures of 3D data acquisition with intricate designs or internal structures (e.g., computed tomography scanning) to generate digital twins in those challenging applications.

Furthermore, this work has the potential to enable the development of control and compensation schemes based on the functional properties of the products. The AUDIT model establishes a link between heterogeneous process variables and functional properties, enabling inverse optimization and control of 3D printing parameters.

Overall, the AUDIT framework offers a comprehensive solution for functional qualification in 3D printing. Progress in this field holds the potential to facilitate the widespread adoption of 3D printing in safety-critical applications.

Acknowledgment

The authors would like to extend their sincere appreciation to Dr. Chuck Zhang, the Harold E. Smalley Professor, and Yifeng Wang, a Ph.D. Student, both affiliated with the H. Milton Stewart School of Industrial and Systems Engineering at Georgia Institute of Technology. Their invaluable assistance with the quasi-static compression tests conducted on the Universal Testing Machine (UTM) is deeply acknowledged.

Funding Data

  • This research was funded by the National Science Foundation (Award ID 2019378).

Conflict of Interest

There are no conflicts of interest.

Data Availability Statement

The datasets generated and supporting the findings of this article are obtainable from the corresponding author upon reasonable request.

Nomenclature

i = sample index
E = elastic modulus
L = number of printing layers
S = shape
T = target domain (object of interest)
fθ = neural network model
NOoI = number of samples from the object of interest
NPT = number of samples from the physical twin
dyDT = dimension of the (multivariate) functional property output variables
niOoI = number of measurement points in the object of interest point cloud of sample i
FiS = feature instance from the source domain
FjT = feature instance from the target domain
Xi,hsmOoI = hard sample point cloud of the object of interest
YOoImax = maximal value of the functional property output variable of the object of interest in the dataset
YOoImin = minimal value of the functional property output variable of the object of interest in the dataset
Ldiscr = discriminative loss
Linter = inter-class loss
Lintra = intra-class loss
Lcontr,align = contrastive alignment loss
Lhsm = hard sample loss
XP = process variables
YOoI = functional property output variables of the object of interest
YDT = functional property output variables of the digital twin
YPT = functional property output variables of the physical twin
XSOoI = 3D measurement point cloud of the object of interest
XMDT = material properties for the digital twin
XSPT = 3D measurement point cloud of the physical twin
δ = dynamic upper bounding parameter for hard sample mining
θ = parameters of the neural network model
λ = tuning parameter balancing domain adaptation and learning of the discriminative task
ρ = density
τ = tuning parameter for the strength of domain adaptation
ν = Poisson's ratio
3D = three-dimensional
AM = additive manufacturing
AUDIT = functional qualification in additive manufacturing via physical and digital twins
BCC = body-centered cubic
CAD = computer-aided design
CP = cubic-primitive
CV = cross-validation
DT = digital twin
FEA = finite element analysis
FFF = fused filament fabrication
MLP = multi-layer perceptron
NRMSE = normalized root mean squared error
OoI = object of interest
PDF = probability distribution function
PLA = polylactic acid
PT = physical twin
RMSE = root mean squared error
UTM = universal testing machine

References

1. Voas, J., Mell, P., and Piroumian, V., 2021, Considerations for Digital Twin Technology and Emerging Standards, National Institute of Standards and Technology, Gaithersburg, MD.
2. Mishra, P. K., and Senthil, P., 2020, “Prediction of In-Plane Stiffness of Multi-Material 3D Printed Laminate Parts Fabricated by FDM Process Using CLT and Its Mechanical Behaviour Under Tensile Load,” Mater. Today Commun., 23, p. 100955.
3. Zeng, C., Liu, L., Bian, W., Leng, J., and Liu, Y., 2021, “Compression Behavior and Energy Absorption of 3D Printed Continuous Fiber Reinforced Composite Honeycomb Structures With Shape Memory Effects,” Addit. Manuf., 38, p. 101842.
4. Li, S., Liu, Z., Shim, V. P. W., Guo, Y., Sun, Z., Li, X., and Wang, Z., 2020, “In-Plane Compression of 3D-Printed Self-Similar Hierarchical Honeycombs–Static and Dynamic Analysis,” Thin-Walled Struct., 157, p. 106990.
5. Han, X., Yan, J., Liu, M., Huo, L., and Li, J., 2022, “Experimental Study on Large-Scale 3D Printed Concrete Walls Under Axial Compression,” Autom. Constr., 133, p. 103993.
6. Cao, X., Duan, S., Liang, J., Wen, W., and Fang, D., 2018, “Mechanical Properties of an Improved 3D-Printed Rhombic Dodecahedron Stainless Steel Lattice Structure of Variable Cross Section,” Int. J. Mech. Sci., 145, pp. 53–63.
7. Lesueur, M., Poulet, T., and Veveakis, M., 2021, “Predicting the Yield Strength of a 3D Printed Porous Material From Its Internal Geometry,” Addit. Manuf., 44, p. 102061.
8. Hanon, M. M., Marczis, R., and Zsidai, L., 2021, “Influence of the 3D Printing Process Settings on Tensile Strength of PLA and HT-PLA,” Period. Polytech. Mech. Eng., 65(1), pp. 38–46.
9. Belhabib, S., and Guessasma, S., 2017, “Compression Performance of Hollow Structures: From Topology Optimisation to Design 3D Printing,” Int. J. Mech. Sci., 133, pp. 728–739.
10. Abbot, D., Kallon, D., Anghel, C., and Dube, P., 2019, “Finite Element Analysis of 3D Printed Model Via Compression Tests,” Proc. Manuf., 35, pp. 164–173.
11. Amosy, O., and Chechik, G., 2022, “Coupled Training for Multi-Source Domain Adaptation,” Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Waikoloa, HI, Jan. 4–8, pp. 420–429.
12. Ren, C.-X., Liu, Y.-H., Zhang, X.-W., and Huang, K.-K., 2022, “Multi-Source Unsupervised Domain Adaptation Via Pseudo Target Domain,” IEEE Trans. Image Process., 31, pp. 2122–2135.
13. Sun, S., Shi, H., and Wu, Y., 2015, “A Survey of Multi-Source Domain Adaptation,” Inf. Fusion, 24, pp. 84–92.
14. Gunasegaram, D. R., Murphy, A. B., Barnard, A., DebRoy, T., Matthews, M. J., Ladani, L., and Gu, D., 2021, “Towards Developing Multiscale-Multiphysics Models and Their Surrogates for Digital Twins of Metal Additive Manufacturing,” Addit. Manuf., 46, p. 102089.
15. Knapp, G., Mukherjee, T., Zuback, J. S., Wei, H. L., Palmer, T. A., De, A., and DebRoy, T., 2017, “Building Blocks for a Digital Twin of Additive Manufacturing,” Acta Mater., 135, pp. 390–399.
16. Cheng, L., Wang, K., and Tsung, F., 2020, “A Hybrid Transfer Learning Framework for In-Plane Freeform Shape Accuracy Control in Additive Manufacturing,” IISE Trans., 53(3), pp. 298–312.
17. Wang, Z., Zhang, Y., Orquera, M., Millet, D., and Bernard, A., 2023, “A New Hybrid Generative Design Method for Functional & Lightweight Structure Generation in Additive Manufacturing,” Proc. CIRP, 119, pp. 66–71.
18. Pandiyan, V., Drissi-Daoudi, R., Shevchik, S., Masinelli, G., Le-Quang, T., Logé, R., and Wasmer, K., 2022, “Deep Transfer Learning of Additive Manufacturing Mechanisms Across Materials in Metal-Based Laser Powder Bed Fusion Process,” J. Mater. Process. Technol., 303, p. 117531.
19. Tang, Y., Dehaghani, M. R., and Wang, G. G., 2022, “Review of Transfer Learning in Modeling Additive Manufacturing Processes,” Addit. Manuf., 61, p. 103357.
20. Zhang, L., Chen, X., Zhou, W., Cheng, T., Chen, L., Guo, Z., Han, B., et al., 2020, “Digital Twins for Additive Manufacturing: A State-of-the-Art Review,” Appl. Sci., 10(23), p. 8350.
21. Biehler, M., Kulkarni, A., Li, J., and Shi, J., 2023, “Multi-modal: Multi-fidelity Multi-modality 3D Shape Modeler,” submitted to IEEE Trans. Autom. Sci. Eng.
22. Law, A. C. C., Wang, R., Chung, J., Kucukdeger, E., Liu, Y., Barron, T., Johnson, B. N., et al., 2023, “Process Parameter Optimization for Reproducible Fabrication of Layer Porosity Quality of 3D-Printed Tissue Scaffold,” J. Intell. Manuf., pp. 1–20.
23. Ye, Z., Liu, C., Tian, W., and Kan, C., 2021, “In-Situ Point Cloud Fusion for Layer-Wise Monitoring of Additive Manufacturing,” J. Manuf. Syst., 61, pp. 210–222.
24. Lyu, J., Akhavan Taheri Boroujeni, J., and Manoochehri, S., 2021, “In-Situ Laser-Based Process Monitoring and In-Plane Surface Anomaly Identification for Additive Manufacturing Using Point Cloud and Machine Learning,” International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, American Society of Mechanical Engineers, Vol. 85376, p. V002T02A030.
25. Biehler, M., Kulkarni, A., Li, J., and Shi, J., 2023, “PLURAL: 3D Point Cloud Transfer Learning Via Contrastive Learning With Augmentations,” submitted to IEEE Trans. Autom. Sci. Eng., preprint.
26. Yang, J., Nguyen, M. N., San, P. P., Li, X. L., and Krishnaswamy, S., 2015, “Deep Convolutional Neural Networks on Multichannel Time Series for Human Activity Recognition,” Twenty-Fourth International Joint Conference on Artificial Intelligence, Buenos Aires, Argentina, July 25–31.
27. Valdarrama, S., 2021, “Convolutional Autoencoder for Image Denoising,” https://keras.io/examples/vision/autoencoder/, Accessed October 10, 2023.
28. Letcher, T., and Waytashek, M., 2014, “Material Property Testing of 3D-Printed Specimen in PLA on an Entry-Level 3D Printer,” ASME International Mechanical Engineering Congress and Exposition, American Society of Mechanical Engineers, Vol. 46438, p. V02AT02A014.
29. Wu, C., Bi, X., Pfrommer, J., Cebulla, A., Mangold, S., and Beyerer, J., 2023, “Sim2real Transfer Learning for Point Cloud Segmentation: An Industrial Application Case on Autonomous Disassembly,” Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Waikoloa, HI, Jan. 3–7, pp. 4531–4540.
30. Li, R., Li, X., Heng, P.-A., and Fu, C.-W., 2020, “PointAugment: An Auto-Augmentation Framework for Point Cloud Classification,” Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Snowmass Village, CO, Mar. 2–5, pp. 6378–6387.