## Abstract

Generative design (GD) techniques have been proposed to generate numerous designs at early design stages for ideation and exploration purposes. Previous research on GD using deep neural networks required tedious iterations between the neural network and design optimization, as well as post-processing to generate functional designs. Additionally, design constraints such as volume fraction could not be enforced. In this paper, a two-stage non-iterative formulation is proposed to overcome these limitations. In the first stage, a conditional generative adversarial network (cGAN) is utilized to control design parameters. In the second stage, topology optimization (TO) is embedded into cGAN (cGAN + TO) to ensure that desired functionality is achieved. Tests on different combinations of loss terms and different parameter settings within topology optimization demonstrated the diversity of generated designs. Further study showed that cGAN + TO can be extended to different load and boundary conditions by modifying these parameters in the second stage of training without having to retrain the first stage. Results demonstrate that GD can be realized efficiently and robustly by cGAN+TO.

## 1 Introduction

In design-to-manufacturing workflows, manufacturing process selection can be conducted after initial design concepts are developed to choose suitable manufacturing processes and resources [1–3]. Following process selection, generative design (GD) can be used to generate a wide variety of module and part layouts to satisfy engineering requirements. Designers can explore the design space by generating and analyzing designs to identify interesting design configurations and promising directions for further work. In this work, a deep neural network technique, generative adversarial network (GAN), is integrated with topology optimization (TO) to develop an efficient GD method. In our approach, generated designs satisfy engineering requirements that are specified in the TO problem, while avoiding the iterations between the GAN and TO solver that are necessary for previous approaches.

GANs have received much attention in the deep learning community for their capability to generate novel images and designs after training on a (typically large) dataset. A GAN is composed of two separate neural networks, a generator and a discriminator. The generator mimics existing designs seen in the training dataset when generating new designs, while the discriminator aims to distinguish between fake designs generated by the generator and real designs in the training dataset. As a result, the generator and the discriminator compete. After the training stage, the trained generator is capable of generating new designs as high in quality as the training examples. However, because a GAN learns from existing designs in the training dataset, the designs it generates will have characteristics and behaviors similar to those of the training dataset. Although GAN-generated designs may look good, they typically will not satisfy engineering requirements, since no techniques are available for incorporating those requirements in the GAN. GAN technology was selected for generative design in this work due to its ability to generate a wide variety of designs effectively and in an automated manner, assisting the designer in exploring the design space.

TO is a design optimization methodology that can make large changes in part shape to improve performance. In structural problems, the main idea of structural TO is to optimize the distribution of material to minimize a quantity related to the total energy of the system, often compliance, subject to loading and boundary conditions as well as several constraints. Constraints ensure static equilibrium, that the final part design meets a specified volume fraction requirement, and that element densities remain within bounds. More details can be found in Ref. [4] and many other TO papers. In this work, TO is utilized to evaluate and improve functionality during GD for structural design problems. The challenge addressed in this work is how to integrate TO into a GAN in a computationally efficient manner to enable a diversity of designs to be generated while satisfying functional requirements.

Past research on GD focused on two directions. The first direction required high-quality datasets for training neural networks: non-functional designs were utilized as input to the neural network, while the corresponding topology-optimized designs were utilized as output. The other direction required iteration between GAN and TO to generate novel functional designs, as shown in the top part of Fig. 1; this direction is termed the iteration method in this paper. Within that work, a non-optimized dataset was fed into TO to improve functionality. Then, the functional designs generated by TO were utilized to train the GAN to generate additional novel functional designs. Because the training dataset of the GAN had good mechanical performance, the designs generated by the GAN typically had good mechanical performance. When the functional designs generated by the GAN were not novel enough, they were sent back to TO for the next iteration, and the newly modified dataset was utilized to train the GAN again. When the generated functional designs satisfied the novelty criterion, the iterations between GAN and TO were terminated. However, this category of generative methods suffered from significant limitations. First, the iterations between GAN and TO imposed an excessive computational burden. Second, post-processing was required for the designs generated by the GAN to ensure they were of high quality and symmetric. Finally, previous work usually could not control the generated designs, and the obtained designs may be dramatically different from those desired.

In contrast to that previous work [5], this paper proposes to embed TO into GAN as one module, which is shown in the bottom part of Fig. 1. Our proposed method can generate functional designs directly without iteration between GAN and TO, significantly reducing the computational burden and accelerating the design process. This is accomplished by adding the TO objective, compliance, and a constraint (volume fraction) to the cGAN loss function. Further, no post-processing of generated designs to solve the symmetry problem (relevant to the design domains in Ref. [5] and this work) is needed to produce functional designs with good shape quality, demonstrating that our generated designs have better quality compared with previous work. In addition, the use of the conditional GAN (cGAN) technique, rather than GAN, makes it possible to control generated designs with prescribed parameters. Lastly, the previous method required multiple iterations of optimization during the gradient descent process of TO. On the contrary, our method requires only one iteration of the gradient descent process of optimization to satisfy functional requirements. As a result, the computation cost of our method is much lower compared to the previous method.

The following section provides a review of relevant literature on GANs, TO, and GD. Key limitations are identified that are addressed in this work. In Sec. 3, the framework of our method, cGAN + TO, is presented. First, a 2D wheel design problem is introduced which will be our focus in this paper. Then, to control the prescribed design parameters, cGAN is applied with these design parameters as a condition. TO is then briefly introduced including its formulation and optimization scheme. Lastly, the formulation of cGAN + TO is put forward. Since both cGAN and TO utilize gradient-based optimization methods, they can be combined as a single optimization problem and optimized together. The formulation of cGAN has been well studied in the past decade. In contrast, there is no study on how to accommodate TO into the cGAN framework.

To address problems with deteriorating image quality, inaccurate volume fraction, and other issues, a two-stage training process is investigated. The first stage trains a cGAN, similar to a traditional conditional GAN, with the goal of generating possibly non-functional designs with the given predetermined parameters. Then, the saved checkpoints of the cGAN from the first training stage are utilized to train cGAN + TO in the second training stage. The two-stage training process proves to be robust, and the generated designs are of high quality and satisfy the prescribed parameters.

To prove the strength of cGAN + TO, several experiments were conducted as reported in Sec. 4. To begin with, we compared the results generated by cGAN and cGAN + TO qualitatively and quantitatively. The qualitative study showed that cGAN + TO could generate high quality and diversity of designs without post-processing. Note that novelty is assessed qualitatively by the authors, similar to the approach taken in Ref. [5]. In the quantitative study, the Frechet Inception Distance (FID) score was utilized to estimate the quality of generated designs, and average compliance was applied for estimating the mechanical performance. The quantitative comparison between cGAN and cGAN + TO showed the advantage of cGAN + TO in generating functional designs. The convergence speed was demonstrated to be fast. In addition, to prove the robustness of cGAN + TO, different weights were utilized for different terms within the loss function of cGAN + TO and their performances were compared. In real design problems, different load and boundary conditions may be encountered. To deal with this problem, the second stage of the framework was retrained utilizing different load and boundary conditions to demonstrate an important generalization of the cGAN + TO method. Lastly, a qualitative comparison among cGAN + TO, the iteration method, and TO was conducted to show their strengths and weaknesses, and a quantitative comparison between cGAN + TO and TO was performed for estimating the quality and performance of the generated designs.

The key contribution of this paper lies in the novel architecture that integrates TO into cGAN to generate novel functional designs satisfying functional requirements and prescribed parameters while avoiding computationally expensive iterations between cGAN and TO. Although this paper focuses on 2D functional designs of wheels, our method can be applied to any 2D structural design problem and extended to 3D structural problems, including those requiring design symmetry. We demonstrate that the cGAN + TO architecture can generate a wide diversity of designs in a robust manner. Further, we demonstrate the architecture is extensible; it can deal with different load and boundary conditions with efficient retraining.

## 2 Literature Review

In this section, several research fields will be reviewed. To begin with, GAN will be reviewed, especially the Self-Attention GAN which is utilized in this paper. Then, some famous works on TO will be reviewed as this paper focuses on the application of TO. Lastly, works on generative design will be summarized that either utilized neural networks for TO or combined GAN with TO.

### 2.1 Generative Adversarial Networks.

GAN was proposed by Goodfellow et al. in 2014 [6]. A GAN is composed of two parts: a generator and a discriminator. The generator creates fake data to fool the discriminator, while the discriminator distinguishes the real data from the training dataset from the fake data generated by the generator. Through the competition between the generator and the discriminator during training, the performance of the generator improves. After training, the generator can create high-quality images. Since its introduction, GAN technology has been greatly improved from the viewpoints of quality and diversity. Some benchmarks have been established, like EBGAN [5], BEGAN [7], Self-Attention GAN [8], and BigGAN [9]. These works belong to the category of unconditional GANs, where there are no prescribed constraints on the final designs. With more well-designed techniques, larger neural network architectures, and more high-quality datasets, later works outperform earlier ones. In comparison, Pix2Pix [10] and CycleGAN [11] are types of conditional GANs, with the goal of changing image style while maintaining the original image information. StyleGAN [12] and its later modifications achieved impressive quality in generating high-resolution human face images. In this work, we apply the neural network structure of Self-Attention GAN because it utilizes the attention mechanism to allow attention-driven, long-range dependency modeling for image generation tasks, and its computation cost is affordable for our dataset.

In addition, GAN is notorious for its instability, such as mode collapse, during training. Mode collapse refers to the generated data containing only a small ratio of the original training dataset. We have also suffered from this mode collapse problem. To deal with this problem, different mechanisms were proposed to stabilize GAN like WGAN-GP [13] and LSGAN [14]. WGAN-GP penalizes the norm of the discriminator gradient with respect to its input, and as a result, it enables stable training for different GAN architectures. In this paper, WGAN-GP is utilized, and it indeed improves the mode collapse problem. Some examples of the mode collapse problem in our dataset are shown in Fig. 5.

### 2.2 Topology Optimization.

Solution methods for topology optimization problems try to find the best distribution of material within a predetermined domain, subject to some constraints [15,16]. For structural problems, “best” usually means utilizing less material while sustaining larger forces, i.e., building light, stiff structures. The most frequently utilized approach for encouraging light, stiff structures is to minimize compliance, which measures the work done by the external loads, given specific external load and boundary conditions. In addition to compliance as the objective, several constraints, such as equilibrium, volume fraction, and element density bounds, also need to be satisfied. In this work, structures are optimized using the SIMP (Solid Isotropic Material with Penalization) approach. In the SIMP method, the design variables are element densities that range from 0, meaning void, to 1, indicating solid material. Intermediate values can be interpreted as a porous material, which is typically not desired since it is not easily manufacturable. More details can be found in some educational papers [15,17] and their references.

### 2.3 Generative Design.

GD should be conducted to explore the design space before designs are sent for manufacturability analysis [18,19]. Some pioneering research on GD has utilized deep neural networks to generate functional designs. There were mainly two streams to realize automatic functional design generation. The first stream directly utilized neural networks to realize TO. The training dataset was composed of paired examples, i.e., non-functional designs as input and their corresponding functional designs as output. A neural network, usually with a U-Net structure [20], was utilized to map from non-functional designs to functional designs. TopologyGAN took advantage of various physical fields, computed on the original unoptimized material domain, as inputs to the generator of a cGAN [21]. Zhang et al. [22] proposed a neural network with a strong generalization ability for structural topology optimization, where strong generalization refers to giving a less accurate solution to topology optimization problems with boundary conditions the network was not trained on. Deng et al. [23] put forward Self-directed Online Learning Optimization (SOLO) with a deep neural network, where the output of the neural network converged to the “true global optimum” through iteration. However, this was realized by finding local optima in the current batch of data and repeating the process on the next batch until the optimal solution was achieved; in the authors' opinion, the global optimum is not guaranteed. Wang et al. [24] proposed a deep convolutional neural network with noticeable generalization to boundary conditions for structural topology optimization, i.e., obtaining a solution with a certain accuracy even when the boundary condition is not included in the training dataset. Lee and Ha [25] proposed the optimal rotor design of a type of electric motor using a deep convolutional GAN (DCGAN) on 56,400 images obtained from topology optimization. Ntintakis et al. [26] applied finite element analysis (FEA) to understand the initial designs and utilized generative design methods to create new designs, which were further evaluated by FEA. The previous works were limited to 2D design domains. Similar works have been done in the 3D domain. Zheng et al. [27] proposed a deep learning-based neural network to generate 3D structural topologies efficiently. Xiang et al. [28] built a 3D convolutional neural network (CNN) to realize topology optimization, though the resolution of the 3D model was low. Chen et al. [29] proposed PaDGAN to increase the diversity of generated designs. However, this stream usually requires a high-quality dataset for training. In addition, it cannot be extended to different resolutions, loads, and boundary conditions.

The other stream uses a deep learning network to generate designs and then uses engineering analysis or design optimization to modify the generated designs such that they satisfy problem requirements. Most past research utilized GAN to generate new designs and TO to improve the functionality of the designs. To begin with, TO modifies existing designs so that their functionality is improved. Then, the modified functional designs are utilized to train the GAN to generate new functional designs. If a GAN-generated design does not satisfy the requirements, a new iteration starts, with TO modifying the generated designs from the previous iteration. Eventually, when the GAN can generate functional designs that satisfy the requirements, the iteration stops. Oh et al. [30] generated numerous design options, which were aesthetic and functional, through the described iterations between GAN and TO. Jang et al. [31] proposed a reinforcement learning (RL)-based generative design process, with reward functions maximizing the diversity of topology designs. Yamasaki et al. [32] put forward a sensitivity-free, multi-objective structural design method using a variational autoencoder (VAE) to generate new material distributions. These works were limited to the 2D domain. Similar to the previous stream, some research has been extended to 3D. Shu et al. [33] presented a GAN model that demonstrated how to generate 3D models and evaluated them using complex simulation environments. Yoo et al. [34] proposed a deep learning-based computer-aided design (CAD)/computer-aided engineering (CAE) framework for the conceptual design phase that automatically generated 3D CAD designs and evaluated their engineering performance. However, in these methods, the solution process iterates between GAN and TO. In addition, this stream is also limited when extending to different resolutions, loads, and boundary conditions.

All the previous research, either in 2D or in 3D, focused on using deep learning to realize the TO process or using iteration between GAN and TO for generative design. In this process, TO was the core process while the neural network was an auxiliary module. However, there have not been any studies on integrating TO into GAN directly for generative design to speed up the process of generating new designs. In contrast, this work embedded TO into GAN where GAN was the core module while TO was the auxiliary module. In addition, how to extend the trained neural network to different load and boundary conditions has not been investigated. Solving the aforementioned limitations is the focus of this paper.

## 3 cGAN + TO

In this section, the details of how to embed TO into GAN are introduced. To begin with, a 2D wheel design problem is introduced. Then, TO and its optimization scheme are described. cGAN and its loss function utilized in this paper are briefly introduced. Last, TO and its embedding into cGAN are described in detail. This method of cGAN + TO can be generalized to different design problems. To solve the symmetry problem of wheel design, a symmetry enforcement technique is proposed. This symmetry enforcement technique can be generalized to all symmetry design problems.

### 3.1 Wheel Design Problem and Dataset.

The wheel design domain is represented as an image with resolution *D* = 128. Within the image, the black region is the non-design region, which does not belong to the wheel and is represented as void elements. The internal green region represents the vehicle axle and is also a non-design region; the radius of the axle region is 0.075*D*. The purple region is the rim of the wheel, which is solid and non-designable; its thickness is 0.1*D*. In between, the blue area is the design region. A distributed force is applied to the outside circumference of the rim, which includes both radial (in red) and lateral tangential (in yellow) force components. The relative magnitude of the lateral force to the radial force is defined as the force ratio, as shown in Eq. (1), where *rat*, *F*_{l}, and *F*_{r} are the force ratio, lateral force, and radial force, respectively:

$$rat = \frac{F_l}{F_r} \tag{1}$$

For simplicity, the force ratio is selected as 1, i.e., *rat* = 1. The radius of the design spoke region is 0.4*D*. During design, the elements in the design domain become either solid or void in a pattern that will form spokes. For a given volume fraction specification, many different spoke designs can satisfy the same loading and boundary conditions. The objective of this generative design problem is to generate a wide variety of spoke designs that satisfy the functional requirements and constraints.

To facilitate this study, a wheel dataset was generated through a MATLAB script and was composed of three types of wheels, as shown in Fig. 3. The first type has only straight spokes. The second type also has only straight spokes, but the spokes have subtrees. The last type of wheel design has curved spokes. For each type, 6000 images were generated, comprising a dataset of 18,000 wheels for training. For each wheel design, its volume fraction and number of spokes were recorded to facilitate the subsequent training of the cGAN. Here, the volume fraction is the ratio between the number of solid pixels and the total pixels in an image; its details will be introduced in the following section. The number of spokes is an important parameter; for example, the number of spokes is 4 in the second curved-spoke design in Fig. 3. Note that the generated wheel designs consider only aesthetics, not mechanical performance; in general, their mechanical performance can be improved through TO.
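As an illustration, the volume fraction label recorded for each dataset image can be computed as the ratio of solid pixels to total pixels. This is a minimal NumPy sketch (not the authors' original MATLAB script), assuming designs are stored as 128 × 128 arrays with 1 for solid and 0 for void:

```python
import numpy as np

def volume_fraction(design: np.ndarray) -> float:
    """Ratio of solid pixels to total pixels in a design image.

    `design` is assumed to be a 2D array with values in [0, 1],
    where 1 denotes solid material and 0 denotes void.
    """
    return float(design.sum() / design.size)

# A half-solid 128 x 128 image has volume fraction 0.5.
half = np.zeros((128, 128))
half[:64, :] = 1.0
print(volume_fraction(half))  # 0.5
```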

### 3.2 Topology Optimization.

The structural TO problem is formulated in Eq. (2):

$$\min_{x} \; c(x) = F^{T}U(x) \quad \text{s.t.} \quad K(x)U(x) = F, \quad \sum_{i=1}^{n} x_i v_i = V_0, \quad x_{min} \le x_i \le 1 \tag{2}$$

Here, $c(x) \in \mathbb{R}$ refers to the compliance of the whole structure. $K(x) \in \mathbb{R}^{m \times m}$ is the global stiffness matrix obtained by assembling the local stiffness matrices $K_i(x_i)$, where *m* refers to the total degrees-of-freedom of the structure. The global displacement vector $U(x) \in \mathbb{R}^{m}$ is derived by solving the equilibrium constraint $K(x)U(x) = F$. The second constraint is called the volume fraction constraint and fixes the ratio of the solid area to the total area. For the *i*th element, its area is $v_i = 1/n$, assuming that all elements are of the same size, so the total volume fraction is given by $\sum_{i=1}^{n} x_i v_i$. Before topology optimization, a desired volume fraction is predefined as *V*_{0}.

The formulation in Eq. (2) is frequently solved using the SIMP method to interpolate material properties. SIMP discretizes a domain into a grid of finite elements, i.e., isotropic solid microstructures. Ideally, the *i*th element is either solid (*x*_{i} = 1) or void (*x*_{i} = 0), where *x*_{i} is called the density of the *i*th element. To alleviate the discontinuity of this binary classification, a continuous density distribution is applied: for the *i*th element, the density lies between *x*_{min} and 1, i.e., *x*_{min} ≤ *x*_{i} ≤ 1, where *x*_{min} is a small positive real number (not 0) to avoid numerical issues.

The density of the *i*th element determines its Young's modulus *E*_{i}(*x*_{i}) and its local stiffness matrix $K_i(x_i)$, as shown in Eqs. (3) and (4):

$$E_i(x_i) = E_{min} + x_i^{p}\,(E_0 - E_{min}) \tag{3}$$

$$K_i(x_i) = K_{min} + x_i^{p}\,(K_0 - K_{min}) \tag{4}$$

Here, $E_{min} \in \mathbb{R}$ and $K_{min} \in \mathbb{R}^{8 \times 8}$ are the minimum Young's modulus and local stiffness matrix in the 2D domain, introduced to avoid singularity, while $E_0 \in \mathbb{R}$ and $K_0 \in \mathbb{R}^{8 \times 8}$ are the Young's modulus and local stiffness matrix of a solid element, i.e., *x*_{i} = 1. The exponent *p* is the penalty coefficient that minimizes the effect of intermediate densities and thus encourages a binary (solid/void) design; it is usually set to 3.
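The SIMP interpolation of Eqs. (3) and (4) can be sketched for the Young's modulus as follows; the numeric values of $E_0$ and $E_{min}$ are illustrative assumptions (normalized units), not values from the paper:

```python
import numpy as np

E0, Emin, p = 1.0, 1e-9, 3.0  # solid modulus, minimum modulus, SIMP penalty

def young_modulus(x: np.ndarray) -> np.ndarray:
    """Modified SIMP interpolation: E_i(x_i) = Emin + x_i**p * (E0 - Emin)."""
    return Emin + x ** p * (E0 - Emin)

# Intermediate densities are penalized: a half-dense element
# contributes only 0.5**3 = 0.125 of the solid stiffness.
x = np.array([0.0, 0.5, 1.0])
print(young_modulus(x))
```

The same power-law scaling applies element-wise to the local stiffness matrix in Eq. (4).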

Given the optimization problem in Eq. (2), a gradient-based method is utilized to find a local optimum efficiently. The most frequently utilized method is the optimality criteria (OC) method; more details can be found in Ref. [15]. Passive zones are regions with predefined pixel values that are excluded from the update. As mentioned, the wheel rim and central axle regions, as well as the region outside of the wheel, comprise the passive zones.
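A minimal sketch of one OC update step, adapted from the structure of the educational codes in Ref. [15]; the variable names and the passive-zone handling are our own simplifications, not the authors' implementation:

```python
import numpy as np

def oc_update(x, dc, dv, vol_frac, passive=None, move=0.2):
    """One optimality-criteria step: bisect on the Lagrange multiplier
    of the volume constraint until the updated mean density hits vol_frac.

    x: current densities, dc: compliance sensitivities (<= 0),
    dv: volume sensitivities, passive: optional boolean mask of
    pixels frozen at their current value (rim, axle, exterior).
    """
    l1, l2 = 0.0, 1e9
    while (l2 - l1) / (l1 + l2 + 1e-30) > 1e-3:
        lmid = 0.5 * (l1 + l2)
        # Multiplicative update, clipped to the move limit and [0, 1].
        xnew = np.clip(x * np.sqrt(np.maximum(-dc / (dv * lmid), 0.0)),
                       np.maximum(0.0, x - move), np.minimum(1.0, x + move))
        if passive is not None:
            xnew[passive] = x[passive]
        if xnew.mean() > vol_frac:
            l1 = lmid  # too much material: increase the multiplier
        else:
            l2 = lmid
    return xnew
```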

### 3.3 Conditional Generative Adversarial Network.

Generative Adversarial Networks (GANs) belong to the class of generative models, which take noise as input and output new data. In particular, Self-Attention GAN (SAGAN) is utilized in this work [8]. GANs are composed of two neural networks, as shown in Fig. 4. The first part is the generator, which generates fake data from noise by mimicking the training dataset. The second part is the discriminator, which distinguishes real data from the fake data generated by the generator. The objective of the generator is to generate high-quality data that fools the discriminator, while the discriminator identifies real data from fake data to punish the generator. In short, the generator and discriminator compete during the training process, and their performance improves as a result. When training the generator, the parameters within the discriminator are fixed, and the loss function only modifies the parameters within the generator. Similarly, when training the discriminator, the parameters within the generator are fixed, and only the parameters within the discriminator are updated.

To train a GAN, the loss function needs to be determined. The most frequently utilized loss function is called Binary Cross Entropy (BCE) loss. However, a GAN trained with BCE loss usually faces the problem of mode collapse, which refers to the generated images representing only a subset of the training dataset's variety. Although this may be enough to fool the discriminator, it cannot satisfy the objective of the generator to generate a wide variety of designs. To alleviate the mode collapse problem, the Wasserstein GAN with gradient penalty (WGAN-GP) is proposed. The problem of mode collapse is shown in Fig. 5. On the left side, all the generated wheels only have six spokes, while the training dataset has wheel designs of three to 12 spokes. On the right side of Fig. 5, the problem of mode collapse is greatly eased with the help of WGAN-GP, and the generated wheels have different numbers of spokes.
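The gradient-penalty term of WGAN-GP [13] can be sketched in PyTorch as follows; this is a generic illustration (function and variable names are our own), assuming image-shaped data with the interpolation weight α sampled per example:

```python
import torch

def gradient_penalty(discriminator, real, fake, lambda_reg=10.0):
    """WGAN-GP regularizer: penalize the discriminator's gradient norm
    at random interpolates x_hat = alpha * real + (1 - alpha) * fake."""
    alpha = torch.rand(real.size(0), 1, 1, 1)
    x_hat = (alpha * real + (1 - alpha) * fake).requires_grad_(True)
    d_hat = discriminator(x_hat)
    grads = torch.autograd.grad(outputs=d_hat, inputs=x_hat,
                                grad_outputs=torch.ones_like(d_hat),
                                create_graph=True)[0]
    # Penalize deviation of the per-example gradient norm from 1.
    grad_norm = grads.flatten(start_dim=1).norm(2, dim=1)
    return lambda_reg * ((grad_norm - 1.0) ** 2).mean()
```

This term is added to the discriminator loss during training; `create_graph=True` keeps the penalty differentiable with respect to the discriminator parameters.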

The WGAN-GP loss function is shown in Eq. (5):

$$\min_{\theta_G} \max_{\theta_D} \; E(D(x)) - E(D(G(z))) - \lambda_{reg}\, E\big[(\lVert \nabla_{\hat{x}} D(\hat{x}) \rVert_2 - 1)^2\big] \tag{5}$$

Here, *E*(·) refers to the expectation function, i.e., the mean function. *D*(·) is the discriminator, while *G*(·) is the generator. *x* refers to real data, *z* is random noise, and *G*(*z*) refers to fake data produced by the generator. $\theta_G$ represents the parameters in the generator, while $\theta_D$ refers to the parameters in the discriminator. $\hat{x}$ is the weighted sum of real data *x* and fake data *G*(*z*), with weights *α* and (1 − *α*), respectively. The last term regularizes the discriminator by penalizing deviations of its gradient norm from 1, where $\lambda_{reg}$ is a constant weight; without this term, the discriminator may become singular. $\max_{\theta_D}(\cdot)$ and $\min_{\theta_G}(\cdot)$ reflect the minmax competition between the generator and the discriminator: the generator tries to minimize the difference between real and fake data, while the discriminator tries to maximize it.

The structure of the generator and discriminator within SAGAN is shown in Fig. 4. In the generator, fully connected layers transform the original initial noise, volume fraction, and number of spokes from 1 × 128, 1 × 1 and 1 × 1 to 1 × 128, 1 × 128 and 1 × 128, respectively. After concatenation, ConvTranspose2D is applied to increase the resolution of images. Similar ideas apply to the discriminator, and Conv2D is applied to extract information from the image, volume fraction, and number of spokes data to determine whether it is real or fake. In order to increase the capability of creating new data with long-range dependencies, a self-attention module is utilized in SAGAN. In addition, the benefit of the self-attention module lies in its affordable computation cost. The self-attention module will be introduced briefly, while more details can be found in Ref. [8].
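The input stage of the generator described above can be sketched as follows; the layer and class names are hypothetical, and only the sizes (noise 1 × 128, volume fraction 1 × 1, spoke count 1 × 1, each mapped to 1 × 128 and concatenated) follow the description of Fig. 4:

```python
import torch
import torch.nn as nn

class ConditionEmbedding(nn.Module):
    """Map noise, volume fraction, and spoke count to equal-sized
    embeddings and concatenate them before the ConvTranspose2d stack."""
    def __init__(self, noise_dim=128, embed_dim=128):
        super().__init__()
        self.fc_noise = nn.Linear(noise_dim, embed_dim)
        self.fc_vf = nn.Linear(1, embed_dim)      # volume fraction V0
        self.fc_spokes = nn.Linear(1, embed_dim)  # number of spokes Ns

    def forward(self, z, vf, n_spokes):
        return torch.cat([self.fc_noise(z), self.fc_vf(vf),
                          self.fc_spokes(n_spokes)], dim=1)

emb = ConditionEmbedding()
out = emb(torch.randn(2, 128), torch.rand(2, 1),
          torch.randint(3, 13, (2, 1)).float())
print(out.shape)  # torch.Size([2, 384])
```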

Suppose the input feature to a self-attention module is $X$, and take the first self-attention module of the generator as an example. Three intermediate features, $Q$, $K$, and $V$, are extracted from $X$ by 1 × 1 convolution, i.e., 2D convolution with a kernel size of 1. $Q$, $K$, and $V$ refer to query, key, and value, respectively. $Q$ and $K$ are used to compute the correlation between pixels through matrix multiplication and SoftMax activation, yielding the attention map *β*. Then, combining the value $V$ and the attention map *β*, the output $O$ is derived through matrix multiplication. Lastly, a weighting term *γ* is predicted by the neural network, and the final result $Y$ is computed as $Y = \gamma O + X$. This self-attention module is applied in both the generator and the discriminator; a summary is shown in Fig. 6.
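A simplified sketch of this self-attention computation in PyTorch; the channel-reduction factor is an assumption, and details such as spectral normalization are omitted (see Ref. [8] for the full module):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfAttention(nn.Module):
    """SAGAN-style self-attention: Y = gamma * O + X."""
    def __init__(self, channels):
        super().__init__()
        self.q = nn.Conv2d(channels, channels // 8, 1)  # query (1x1 conv)
        self.k = nn.Conv2d(channels, channels // 8, 1)  # key
        self.v = nn.Conv2d(channels, channels, 1)       # value
        self.gamma = nn.Parameter(torch.zeros(1))       # learned blend weight

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.q(x).view(b, -1, h * w)   # B x C' x N, N = h*w pixels
        k = self.k(x).view(b, -1, h * w)
        v = self.v(x).view(b, c, h * w)
        # Attention map beta: pixel-to-pixel correlation via matmul + softmax.
        beta = F.softmax(torch.bmm(q.transpose(1, 2), k), dim=-1)  # B x N x N
        o = torch.bmm(v, beta.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * o + x

sa = SelfAttention(16)
x = torch.randn(2, 16, 8, 8)
y = sa(x)
print(torch.allclose(y, x))  # True: gamma starts at zero, so Y = X initially
```

Because *γ* is initialized to zero, the module starts as an identity mapping and gradually learns how much long-range context to mix in.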

Combining self-attention GAN with WGAN-GP, a conditional GAN can be constructed with prescribed conditions, i.e., the number of spokes and the volume fraction for wheel designs. The number of spokes, $N_s \in F[3, 12]$, is an integer ranging from 3 to 12, where *F* is the discrete uniform distribution. The volume fraction refers to the number of solid pixels divided by the total number of pixels within a design image, and as a result, it is a scalar between 0 and 1. Considering mechanical performance, the volume fraction is modeled as a uniform distribution between 0.4 and 0.6, i.e., $V_0 \in U[0.4, 0.6]$. Since random noise is the input to the GAN generator, an image generated by the cGAN is a function of the random noise, volume fraction, and number of spokes, i.e., $x = f(z, V_0, N_s)$, where $x$ refers to the generated 128 × 128 resolution wheel design.

### 3.4 cGAN + TO.

The loss function of cGAN + TO augments the cGAN loss with the TO objective, the compliance $c(x)$. The compliance is a function of the generated design image $x$, and traditional TO methods directly optimize $x$ to minimize $c(x)$. For cGAN + TO, $x = f(z, V, N_s)$, and as a result, $c(x) = c(f(z, V, N_s))$ is a function of the initial inputs to the cGAN. In this way, compliance can be added to the loss function of the cGAN and optimized. We define the compliance loss term as shown in Eq. (6).

The first constraint of TO, $K(x)U(x)=F$, represents the equilibrium of the whole structure system. This constraint models that the internal force generated by the wheel system is equivalent to the external force applied to the system. This constraint is not added to the loss function of cGAN + TO. Instead, we achieve this constraint by solving $U(x)=K(x)\u22121F$ within TO. The solved displacements $U(x)$ are further utilized for calculating the compliance of the system as $c(x)=FTU(x)$. The last constraint of TO is that its pixel values should lie in the interval [0,1]. This constraint is realized automatically by the activation function, i.e., sigmoid activation function, at the end of the neural network.
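The equilibrium solve and compliance evaluation can be sketched as follows; the 2-DOF system below is a toy illustration, whereas in practice $K$ and $F$ come from the finite element assembly:

```python
import numpy as np
from scipy.sparse import csc_matrix
from scipy.sparse.linalg import spsolve

def compliance(K: csc_matrix, F: np.ndarray) -> float:
    """Enforce equilibrium K U = F by solving for U, then return c = F^T U."""
    U = spsolve(K, F)
    return float(F @ U)

# Toy 2-DOF system: K = [[2, -1], [-1, 2]], F = [1, 0].
K = csc_matrix(np.array([[2.0, -1.0], [-1.0, 2.0]]))
F = np.array([1.0, 0.0])
print(compliance(K, F))  # U = [2/3, 1/3], so c = 2/3
```

Solving the linear system directly, rather than adding the equilibrium constraint to the loss, is exactly the strategy described above.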

Another constraint for this wheel design problem is central symmetry, since real wheel designs must be symmetric. Previous researchers solved this problem by designing a partial region of the wheel, such as one quarter of the whole wheel, and then finishing the wheel design in a post-processing step by patterning the designed region. In this paper, a different, novel method is used to achieve symmetry directly. When observing the data generated by cGAN without TO in the first stage of training, wheel designs generally had good central symmetry. However, the final outputs of cGAN + TO after the second-stage training were usually asymmetric, as shown in Fig. 7. From this, we concluded that the asymmetry was induced by TO, and we sought to minimize it.

To enforce symmetry, we modify the gradient of compliance *g*(**x**). Given the initially provided number of spokes *N*_{s}, the gradient of compliance should be unchanged by rotation through every 360°/*N*_{s}. Then, $\hat{g}(x)$ is derived by averaging over the *N*_{s} rotated gradients of compliance as in Eqs. (8) and (9), where *rotate*(·) refers to the rotation function and *i* × 360°/*N*_{s} is the angle of the *i*th rotation. Its mean over all pixels within one design, $\mu_{\hat{g}(x)}$, is subtracted to make sure that TO will not modify the volume fraction. Since $\hat{g}(x)$ is nonzero while its mean is zero, i.e., $\hat{g}(x) \neq 0$, $\mu_{\hat{g}(x)} = 0$, the total volume fraction stays constant while material is redistributed within the design to minimize compliance. In the end, the volume fraction is controlled only by the volume fraction term, as shown in Eq. (7). Thereafter, $\hat{g}(x)$, rather than *g*(**x**), is utilized in the optimization of compliance, i.e., $dc(x)/dx = \hat{g}(x)$, to ensure central symmetry. This method strongly improves the central symmetry of the generated images, and the designs generated by cGAN + TO have good central symmetry properties, as demonstrated in Sec. 4. It enables the generative design method to be applicable to real engineering problems involving symmetry.

*L*_{G} and *L*_{D} are the losses for the generator and discriminator, respectively. $L_{G}^{WGAN-GP}$ and $L_{D}^{WGAN-GP}$ in Eq. (5), *L*_{c} in Eq. (6), and *L*_{v} in Eq. (7) are the generator loss and discriminator loss for WGAN-GP, the compliance loss, and the volume fraction loss, respectively. *λ*_{1}, *λ*_{2}, and *λ*_{3} are the weights for the different loss terms.

The structure of how TO is embedded into the GAN is shown in Fig. 8. The main idea of cGAN + TO is to send the fake images generated by the generator to TO to produce two new loss terms, *L*_{v} and *L*_{c}, which direct the generator and discriminator to create functional designs. Given the fake image designs, the loss function for the volume fraction *L*_{v} can be derived directly. In comparison, obtaining the loss function of compliance *L*_{c} requires more steps. Given an image design, the stiffness matrix $K(x)$ corresponding to the design is generated. Then, for the fixed external loads, the displacement is calculated by $U(x) = K(x)^{-1}F$. The derived displacement vector is further utilized to calculate the compliance as $c(x) = F^{T}U(x)$, and symmetry is enforced for the gradient of compliance as in Eq. (8). With the penalties for compliance and volume fraction, the parameters within the generator and discriminator can be updated to directly generate functional wheel designs. The proposed cGAN + TO method can be generalized to different structural design problems. To extend it to a new structural design problem, the design domain, passive zones, load, and boundary conditions need to be determined. Then, the framework shown in Fig. 8 can be applied to the new design problem. The design domain and passive zones are utilized in the GAN for generating fake data, and the boundary conditions are applied when computing the stiffness matrix. Given the load condition, the external loads can be constructed and the corresponding displacements calculated by matrix inversion. Eventually, diverse functional designs can be generated by the trained cGAN + TO.
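The loss assembly described above can be sketched as follows. The forms of *L*_{c} (batch-mean compliance) and *L*_{v} (squared volume-fraction error), and the function names, are assumptions for illustration; the weight values match those reported for Eq. (10) in Sec. 4.1.

```python
import numpy as np

def generator_loss(x_fake, v_target, wgan_loss, compliance_fn,
                   lam1=1.0, lam2=1e-6, lam3=1e-3):
    """Combine the three penalty terms of the cGAN + TO generator loss,
    L_G = lam1 * L_G^{WGAN-GP} + lam2 * L_c + lam3 * L_v.
    `compliance_fn` stands in for the finite-element compliance c(x);
    L_c is taken as the batch-mean compliance and L_v as the squared
    volume-fraction error (assumed forms of Eqs. (6) and (7))."""
    l_c = float(np.mean([compliance_fn(x) for x in x_fake]))
    l_v = float(np.mean((x_fake.mean(axis=(1, 2)) - v_target) ** 2))
    return lam1 * wgan_loss + lam2 * l_c + lam3 * l_v
```

In an actual training loop these terms would be computed on autodiff tensors so that the compliance and volume-fraction penalties backpropagate into the generator weights alongside the adversarial loss.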

Last but not least, there exists an important difference between TO and cGAN + TO: the number of iterations of the gradient descent method during TO. This gradient descent issue is separate from the issue of iterations between the GAN and TO that was referred to above as the iteration method. In traditional TO, a design is optimized for several hundred or several thousand iterations until convergence. However, in cGAN + TO, within one epoch, each generated design is optimized by only one iteration. Then, a new epoch starts, and the new wheel designs are again optimized with only one iteration. Our results in Sec. 4 show that cGAN + TO can gradually learn to generate optimized designs directly from initial designs under this one-step optimization scheme.
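The contrast can be made concrete with a sketch of the per-epoch update; `grad_fn` is a hypothetical stand-in for the symmetrized compliance gradient, and the step size is illustrative.

```python
import numpy as np

def one_step_update(x, grad_fn, step=0.05):
    """One-iteration-per-epoch scheme: each epoch, every generated
    design receives a single gradient-descent update on compliance.
    Classical TO would instead repeat this update hundreds or
    thousands of times on one design until convergence."""
    x_new = x - step * grad_fn(x)
    return np.clip(x_new, 0.0, 1.0)   # densities must stay in [0, 1]
```

The generator thus never sees a fully converged TO result for any single design; instead, the accumulation of single steps over many epochs moves the whole generated distribution toward optimized designs.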

### 3.5 Two Stages of Training.

In this work, we apply a two-stage training scheme. In the first stage, a traditional cGAN without TO is trained with the number of spokes and volume fraction as the condition. In the second stage, the cGAN + TO is trained based on the saved checkpoints in the first stage. This two-stage training scheme was necessary to address problems encountered when training cGAN + TO directly. Specifically, the problems included low-quality image generation, mode collapse, and large volume fraction constraint violations. Our results show that the two-stage training can strongly improve the stability of cGAN + TO, and all of the work presented in this paper utilizes the two-stage training scheme.

## 4 Experiments

In this section, different experiments are conducted to test our proposed method, cGAN + TO. To begin with, the designs produced by cGAN, shown in Fig. 4, and by cGAN + TO, shown in Fig. 8, are compared. cGAN refers to the combination of SAGAN with the loss function of WGAN-GP; it does not consider mechanical performance and is the result of the first-stage training described in Sec. 3.5. In comparison, cGAN + TO refers to the combination of SAGAN, WGAN-GP, and TO; it takes mechanical performance into consideration and includes the second-stage training of the two-stage training scheme. Then, the compliance and volume fraction of cGAN + TO are tracked to show the strength of cGAN + TO compared to cGAN. We also test parameters within cGAN and TO to show their effects on the generated images. Furthermore, different load and boundary conditions are considered to extend the generalizability of our method. Lastly, a comparison of our method to previous methods is conducted qualitatively and quantitatively.

### 4.1 Qualitative Comparisons of Conditional Generative Adversarial Network and cGAN + TO.

In this section, a qualitative comparison of the results of cGAN and cGAN + TO is conducted. The cGAN method can generate novel wheel designs based on the wheel dataset. Weight values of *λ*_{1} = 1, *λ*_{2} = 10^{−6}, and *λ*_{3} = 10^{−3} were utilized for the loss function of cGAN + TO, Eq. (10). As shown in Fig. 9(a), the cGAN method can generate different wheel designs with different volume fractions and numbers of spokes. The only limitation is that some randomly generated wheel designs may have blurry pixels, which future studies will try to improve. Figure 9(b) shows the corresponding partially optimized wheels from Fig. 9(a) using cGAN + TO after 250 iterations of training. Figure 9(c) shows the corresponding optimized wheels from Fig. 9(a) using cGAN + TO after 2000 iterations of training, which represents the converged solution from the viewpoint of compliance, as discussed in Sec. 4.2. cGAN + TO maintains the same volume fraction and number of spokes as cGAN, but produces designs with different spoke shapes and better mechanical performance. In addition, the blurriness problem of cGAN is greatly reduced by cGAN + TO. However, TO is a gradient-based method, and as a result, local optima, rather than global optima, may be reached, which may decrease the diversity of the final wheel designs. To balance diversity and mechanical performance, some intermediate wheel designs may be good suggestions, as shown in Fig. 9(b). Lastly, the spokes in Fig. 9(a) can be either clockwise or counterclockwise, while the wheel designs in Fig. 9(c) all have a clockwise orientation. This effect is due to the TO loading condition with counterclockwise lateral forces, as shown in Fig. 2. From this viewpoint, cGAN + TO should produce designs with better mechanical performance than cGAN; this is investigated in Sec. 4.2.

### 4.2 Quantitative Comparison of Conditional Generative Adversarial Network and cGAN + TO.

This section quantitatively investigates the changes that take place between cGAN and cGAN + TO. TO seeks to improve the mechanical performance of designs, in this case by minimizing compliance. Figure 10(a) tracks the compliance of 64 wheel designs during the training process of cGAN + TO. In the first 500 iterations, the compliances decrease quickly. In the next 2000 iterations, the compliance of many designs is nearly constant, with small oscillations. Figure 10(b) plots the average compliance of the 64 wheels shown in Fig. 10(a). The gradually decreasing trend of compliance shows the gradual improvement of mechanical performance. The optimization process converged between 500 and 1000 iterations for all designs. The trained cGAN + TO can directly generate wheel designs that are optimized from the viewpoint of mechanical performance, as indicated in Fig. 9(c). In addition, we also tracked the mean volume fraction across the 64 wheels, which is shown in Fig. 10(c). The blue line represents the mean volume fraction across the 64 wheel examples at different numbers of iterations, and the red line is the desired mean volume fraction. From the figure, the maximum difference between the generated and desired volume fractions is limited to 0.01, which is acceptable for design in real applications. These results show that cGAN + TO can improve the mechanical performance of generated wheels while the generated wheels retain the desired volume fraction.

A frequently utilized method to evaluate generated image quality is the FID, which measures the similarity of the means and variances of two sets of images; a smaller FID implies that the generated images are closer to the training dataset. To calculate the FID score, 10,000 images were generated by cGAN and by cGAN + TO separately, and another 10,000 images were selected from the training dataset. As shown in Table 1, the FID scores are 74.24 and 151.09 for cGAN and cGAN + TO, respectively. We expected cGAN to have a lower FID score than cGAN + TO, and the result is consistent with this expectation. The wheel designs in the training dataset do not have good mechanical performance. cGAN generates images by mimicking the original training dataset, while cGAN + TO generates images considering both similarity to the training dataset and mechanical performance. Thus, the spoke shapes generated by cGAN + TO may be dramatically different from the training dataset or the generated images of cGAN, while the images generated by cGAN should be more similar to the training dataset. Another explanation is that there exists a trade-off between mechanical performance and similarity to the training dataset: cGAN + TO improves the mechanical performance while trading off similarity to the original designs. In addition, we propose to utilize average compliance to measure the mechanical performance of cGAN + TO-generated designs. The 10,000 images that were utilized for calculating FID were also used for computing compliances. The average compliances of cGAN and cGAN + TO are 8.29 × 10^{8} and 3.82 × 10^{8}, respectively, with standard deviations of 4.84 × 10^{8} and 1.72 × 10^{8} (Table 1). The significant decrease in average compliance from cGAN to cGAN + TO shows the strength of cGAN + TO in generating functional designs. The average volume fraction differences between the generated designs and the targets were 0.0197 and 0.0123 for cGAN and cGAN + TO, respectively. Diversity is another main research topic in the field of generative design. Researchers in the field of machine learning have also utilized FID to measure diversity, where a larger FID indicates that the generated designs are farther from the training dataset and more diverse. From this viewpoint, the designs generated by cGAN + TO were more diverse than those of cGAN.
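For reference, the FID between two feature distributions fitted as Gaussians can be computed as below. This is the standard Fréchet distance formula, not the authors' exact evaluation pipeline (which fits the Gaussians to deep features of the 10,000 images); the trace term is evaluated through a symmetric reformulation so that pure numpy suffices.

```python
import numpy as np

def _sqrtm_psd(a):
    """Matrix square root of a symmetric PSD matrix via eigendecomposition."""
    w, v = np.linalg.eigh(a)
    return (v * np.sqrt(np.clip(w, 0.0, None))) @ v.T

def fid(mu1, sigma1, mu2, sigma2):
    """Frechet inception distance between two Gaussians:
    ||mu1 - mu2||^2 + Tr(S1 + S2 - 2 (S1 S2)^{1/2}).
    Tr((S1 S2)^{1/2}) equals Tr((S2^{1/2} S1 S2^{1/2})^{1/2}),
    which keeps every square root symmetric PSD."""
    s2h = _sqrtm_psd(sigma2)
    covmean = _sqrtm_psd(s2h @ sigma1 @ s2h)
    diff = mu1 - mu2
    return float(diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean))
```

Identical distributions give an FID of zero, and a pure mean shift with equal covariances reduces to the squared distance between the means.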

### 4.3 Effect of Loss Function Weights.

Here, we investigate the loss function of cGAN + TO, Eq. (10). This loss function can be reformulated as $L_G = \lambda_1 L_G^{WGAN-GP} + \lambda_2 (L_c + \lambda_4 L_v)$, where *λ*_{3} = *λ*_{2}*λ*_{4}. In this way, the first term $\lambda_1 L_G^{WGAN-GP}$ becomes the penalty for the cGAN, (*L*_{c} + *λ*_{4}*L*_{v}) becomes the penalty for TO, and $\lambda_2/\lambda_1$ measures the relative importance of the cGAN and TO. For this formulation, we assume that when $\lambda_2/\lambda_1$ is large, the generated images will have better mechanical performance, while when $\lambda_2/\lambda_1$ is smaller, the generated results will be closer to the training dataset and more diverse. Similarly, when *λ*_{4} is larger, the accuracy of the volume fraction will be higher, and vice versa. Based on this new formulation, several tests on different combinations of $\lambda_2/\lambda_1$ and *λ*_{4} were conducted, as shown in Fig. 11. We tested $\lambda_2/\lambda_1 = 3 \times 10^{-7}$, 10^{−6}, and 10^{−5}, and *λ*_{4} = 100, 300, 3000, and 10,000. Results show that all generated designs with these weight values have reasonable shapes, which demonstrates the robustness of cGAN + TO. When $\lambda_2/\lambda_1$ is small, the generated images are closer to the images in the training dataset, e.g., for $\lambda_2/\lambda_1 = 3 \times 10^{-7}$ and *λ*_{4} = 100, while a larger $\lambda_2/\lambda_1$ forces the generated designs to be close to optimized wheels. Although all weights result in designs with acceptable volume fraction accuracy, a larger *λ*_{4} is less likely to produce final designs with grayscale pixels.
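The reparameterization can be checked numerically. With the Sec. 4.1 weights (*λ*_{1} = 1, *λ*_{2} = 10^{−6}, *λ*_{3} = 10^{−3}), the implied TO-internal weight is *λ*_{4} = *λ*_{3}/*λ*_{2} = 1000; the loss values below are arbitrary illustrative numbers.

```python
def generator_loss_forms(l_wgan, l_c, l_v, lam1=1.0, lam2=1e-6, lam4=1e3):
    """Evaluate Eq. (10) and its Sec. 4.3 reformulation; with
    lam3 = lam2 * lam4 the two are algebraically identical, and the
    ratio lam2 / lam1 weights the TO penalty against the cGAN loss."""
    lam3 = lam2 * lam4
    eq10 = lam1 * l_wgan + lam2 * l_c + lam3 * l_v
    refactored = lam1 * l_wgan + lam2 * (l_c + lam4 * l_v)
    return eq10, refactored
```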

### 4.4 Effects of Topology Optimization Filter Radius.

In TO, a filter is utilized to avoid the checkerboard problem; from a practical perspective, it controls the minimum feature size (more details can be found in Refs. [16,18]). The filter is a weighted combination of neighboring pixels where the size of the neighborhood is controlled by the filter radius. In general, 0.02*D* is a good suggestion, where *D* is the resolution of the image as shown in Fig. 2. In Figs. 9 and 11, all the filters utilized a radius of 0.02*D*. In this section, we explore the effects of filter radius.
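As an illustration, the sketch below implements a cone-weighted density filter of the kind standard in TO [16,18]: each pixel is replaced by a weighted average of its neighborhood, with the radius setting the minimum feature size. The linear-decay kernel and edge padding are assumptions for illustration; the paper does not specify its exact kernel.

```python
import numpy as np

def density_filter(x, radius):
    """Cone-weighted density filter: each pixel becomes a weighted
    average of its neighbours with weights w = max(0, radius - dist).
    This suppresses checkerboard patterns and enforces a minimum
    feature size; radius ~ 0.02 * D (D = image resolution) is the
    default used in the paper."""
    r = int(np.ceil(radius))
    ys, xs = np.mgrid[-r:r + 1, -r:r + 1]
    w = np.maximum(0.0, radius - np.hypot(ys, xs))   # cone-shaped kernel
    w /= w.sum()                                     # weights sum to 1
    xp = np.pad(x, r, mode="edge")                   # 'same'-size output
    out = np.zeros_like(x, dtype=float)
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            out += w[dy, dx] * xp[dy:dy + x.shape[0], dx:dx + x.shape[1]]
    return out
```

Because the weights are normalized, a uniform density field passes through unchanged, while features narrower than the radius are blurred away, which is why a larger radius yields thicker spokes.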

The results of generating wheel designs using cGAN + TO with different filter sizes are shown in Fig. 12. Consistent with previous conclusions in the TO literature [15], cGAN + TO generally produces designs with thicker spokes when trained with a larger radius and thinner spokes when trained with a smaller radius, as shown in Fig. 12. The left figure in Fig. 12 uses a smaller radius, i.e., 0.01*D*, and the right figure uses a larger radius, i.e., 0.06*D*. Though different wheel designs have different spoke thicknesses, the spoke thickness in the right figure is generally larger than in the left figure, especially when the number of spokes is large.

### 4.5 Variations of Load and Boundary Conditions.

As shown in Fig. 2, the green non-design region represents the axle. A fixed boundary condition is applied to the wheel-axle interface, meaning that the internal wheel boundary cannot move. In this section, the radius of the axle region is varied. As shown in Fig. 13, the left figure has the boundary condition *BC* = 0.15*D*, while the right figure has a larger restrained boundary condition, i.e., *BC* = 0.2*D*. The loading condition is another frequently considered parameter in design problems because different loads may be encountered in applications. As shown in Fig. 2, the relative ratio between the lateral and radial forces is an important parameter when defining loading conditions in Eq. (1). Two values of the ratio are investigated: *rat* = 1 and *rat* = −5.

The case with *rat* = 1 and *BC* = 0.15*D* is shown on the left in Fig. 13. Since the direction of the lateral forces is counterclockwise, all the generated wheels have counterclockwise-rotated spokes. In comparison, the right side of Fig. 13 shows the case with *rat* = −5 and *BC* = 0.2*D*; because the lateral force direction is reversed, all the generated wheels have clockwise-rotated spokes. Furthermore, many wheels on the right side are twisted or sheared more than wheels on the left side due to the larger lateral loads. Changing the axle size, which adjusts the fixed boundary condition, does not appear to have any effect on the generated results. These results are motivating because, with only the second stage of training, the model can accommodate different load and boundary conditions. This property greatly broadens the applicability of cGAN + TO and gives the model the potential to be utilized in real engineering design.

### 4.6 Comparison With Previous Methods.

In this subsection, a qualitative comparison between our method, the previous “iteration method” [30], and manual design space exploration using TO (simply called “TO” in the sequel) was conducted as shown in Table 2. The strengths and weaknesses of all methods will be discussed in detail.

cGAN + TO and the iteration method require a training process, while TO does not. The training process can take a long time, which is a drawback of machine learning methods. The iteration between GAN and TO should take even longer for training, because it first requires time to train the GAN and generate novel functional designs; additional time is then required to improve the functionality of the generated designs by TO. The improved designs are further utilized to train the GAN again, and this iteration process repeats until convergence. Unfortunately, Ref. [5] did not report the number of iterations or the computation times. Our cGAN + TO method does not require these iterations and therefore has a significantly reduced computational burden.

Two additional benefits of cGAN + TO exist over the iteration method. The iteration method requires optimized designs derived from TO, each of which requires hundreds or thousands of TO iterations, to train GAN, while cGAN + TO optimizes designs using only one TO iteration for training. Then, cGAN + TO generates optimized designs directly during the test phase. Finally, the iteration method requires post-processing for quality and symmetry, while cGAN + TO does not require post-processing.

During the test phase, both cGAN + TO and the iteration method will be efficient, generating diverse designs in seconds. In comparison, if only TO is used (the third method in Table 2), the optimization process of TO will take minutes or even hours to generate optimized designs. In addition, to generate diverse designs, different initial conditions should be utilized, which requires lots of trials and designer oversight of the design exploration process.

TO can accommodate different load and boundary conditions, while cGAN + TO and the iteration method can only generate functional designs for the load and boundary conditions utilized in the training process. However, cGAN + TO can accommodate new load and boundary conditions through the retraining scheme described in Sec. 4.5, whereas it may be difficult for the iteration method to accommodate new load and boundary conditions.

Then, cGAN + TO and TO are compared quantitatively, while the iteration method is omitted since it requires post-processing. The comparison between cGAN + TO and TO follows the method proposed in the review paper by Woldseth et al. [35], and the quantitative results can be found in Table 3. The comparison considers four factors: (1) data collection, (2) training cost, (3) acceleration achieved at test time, and (4) generality of the trained model. For data collection, the time costs of cGAN + TO and TO are similar, as the training dataset of cGAN + TO can be utilized as the initial conditions of TO. For generality of the trained models, cGAN + TO can be adapted to new boundary conditions by second-stage retraining without training from scratch or gathering new data. This is a great advantage of cGAN + TO compared with previous work combining NNs with TO. Though the generality of TO is better than that of cGAN + TO, future investigations can potentially close this gap. Regarding training cost, cGAN + TO requires a relatively long training time of 72 h, mainly due to the serial computation model for computing compliance through matrix inversion. If parallel computation becomes available, the training time will be greatly decreased; for example, with a batch size of 64, it would be shortened to 1.125 h. At test time, the advantages of cGAN + TO become apparent. For comparison, 6000 designs were generated by cGAN, and 6000 initial designs were sent to TO for optimization. The average time to generate a functional design was 0.066 s for cGAN + TO, while the average time for TO was 86.4 s. In addition, the average compliance of these 64 designs was computed. The average compliances of TO and cGAN + TO were 3.18 × 10^{8} and 3.79 × 10^{8}, respectively, and the standard deviations were 0.85 × 10^{8} and 1.69 × 10^{8}.
The average performance of TO is better than that of cGAN + TO for several reasons. The enforced symmetry may weaken the power of cGAN + TO in deriving the final optimized structure. In addition, the loss function of cGAN + TO includes loss terms for the GAN and the volume fraction, which undermine the power of cGAN + TO for generating optimized structures as a trade-off. However, from the authors' perspective, the results of cGAN + TO are still motivating, as the compliance has been greatly reduced from the original non-functional designs generated by cGAN. In addition, 64 designs generated by cGAN and cGAN + TO, respectively, were selected; when cGAN and cGAN + TO generated these 64 designs, the same conditions and random noise were utilized. The 64 designs generated by cGAN were further fed into TO for optimization and compared with the results of cGAN + TO. The optimized designs from TO and cGAN + TO can be found in Fig. 14. The optimized designs from TO lacked symmetry, while those from cGAN + TO have better symmetry properties. As a result, both cGAN + TO and TO have their advantages and disadvantages. In particular, previous methods combining NNs and TO cannot generalize to different load and boundary conditions; this is the main advantage of our method.

### 4.7 Discussion.

In this paper, six different investigations were conducted. The first investigation provided a qualitative comparison of the results of cGAN and cGAN + TO, where the different spoke shapes showed the influence of including TO in the solution process versus not including TO. In the quantitative comparison investigation, the compliance and volume fraction of different wheel designs were tracked; from these results, the compliance was seen to continue to decrease until convergence, while the volume fraction oscillated around the target volume fraction. This showed the strong capability of cGAN + TO in optimizing the wheel designs while maintaining the prescribed parameters. To quantitatively compare cGAN and cGAN + TO, two metrics were utilized, FID and average compliance, and the results showed that the mechanical performance of the designs generated by cGAN + TO is better than that of cGAN. The third investigation examined different weight combinations in the loss functions of cGAN and TO, and the results showed the robustness of cGAN + TO. The fourth investigation experimented with the filter radius parameter in TO. With a larger filter radius, the generated wheels typically had thicker spokes compared to the use of a smaller filter radius, which is consistent with previous studies on TO-generated feature sizes. The fifth investigation tested different combinations of load and boundary conditions, and the results showed that the cGAN + TO method can be generalized to different load and boundary conditions without having to completely retrain the system. The sixth experiment compared our method with the previous methods qualitatively and quantitatively. Though each method has different strengths and weaknesses, the strengths of our method outweigh its weaknesses, making it superior to the other methods discussed in this paper.

Results from the first two experiments showed the potential of cGAN + TO to automatically generate functional designs directly. In this work, cGAN + TO was trained using a two-stage scheme, which can be further improved. In the future, a one-stage scheme is preferred as it will be more convenient for training and applications. Another interesting direction is to enforce the volume fraction constraint in TO, rather than through a penalty term, and in this way, simplify the cGAN + TO loss function so that it includes only the cGAN loss term and compliance. Comparison with other GD methods [30] will be another interesting topic. As our method does not require iterations between TO and cGAN, it is significantly more computationally efficient than other methods with separate GAN and TO modules.

Different generative models can be utilized for generative designs. The current development of VAE and diffusion models, rather than GAN-based models, may be good directions, since they can not only generate new designs but also modify existing designs. The current 2D GD study can further be extended to the 3D domain, which should be more useful for real engineering applications, though the computation cost will be higher.

As a final comment, this work represents an initial study of generative design by cGAN + TO. Although promising, significant work remains to improve and generalize the methods. Some research issues were highlighted that should guide future investigations.

## 5 Conclusion

GD should automatically generate a wide variety of new functional designs. Previous research on GD regarded GAN and TO as two separate modules, and iterations were required between the GAN and TO modules to achieve both goals of functionality and design diversity. In this work, another direction was pursued, that of combining GAN and TO into one module and performing optimization at the same time. A new formulation scheme was proposed to embed TO into GAN by adding compliance and volume fraction loss into the GAN loss function. The combination of a conditional GAN model and TO resulted in a novel cGAN + TO architecture and GD method. A two-stage training scheme was developed with the objective of robust design generation. Qualitative and quantitative studies showed that cGAN + TO can generate diverse high-quality functional designs. Two metrics, FID and average compliance, showed the benefits of cGAN + TO over the cGAN model without TO. Results of additional investigations demonstrated that the cGAN + TO model behaves as expected for different loss function weights and filter radii. Furthermore, a trained cGAN + TO model can be applied to different load and boundary conditions without requiring complete retraining. The demonstrated robustness and generalization capabilities should significantly extend the range of potential applications of cGAN + TO.

In conclusion, deep neural networks represent a promising technique for automatically generating new functional designs with prescribed parameters. In particular, the combination of cGAN and TO successfully generated a wide range of designs that satisfy engineering requirements. The developed cGAN + TO architecture shows promising robustness and generalizability characteristics that enable it to be adapted to a wide variety of design problems.

Future research should focus on several directions. The loss function of cGAN + TO should be further simplified if possible, especially the volume fraction loss term. The robustness of training cGAN + TO should be improved so that one-stage training becomes possible. Other generative models such as VAEs and diffusion models should be investigated and compared with GAN-based models. This work should be extended to the 3D domain using appropriate 3D deep neural network architectures. Various computational techniques should be investigated to reduce computational effort, building on machine learning applications in TO, parallel computing implementations, and other approaches.

## Acknowledgment

The authors gratefully acknowledge support from the National Science Foundation (Grant Nos. CMMI-2113672 and CMMI-2229260). Any opinions, findings, and conclusions or recommendations expressed in this publication are those of the authors and do not necessarily reflect the views of the National Science Foundation.

*The code for this work will be made available*.^{1}

## Conflict of Interest

There are no conflicts of interest.

## Data Availability Statement

The authors attest that all data for this study are included in the paper.