Source: https://uxdesign.cc/a-fascinating-approach-to-deep-generative-design-d05dfcd1df48

A fascinating approach to deep generative design

Paper summary: “Deep generative design: integration of topology optimization and generative models”

Lots of wheels with different designs. All generated using GANs and TopOp

Abstract:

This paper proposes integrating deep learning with generative design! The models create new wheels that are not only aesthetically pleasing but also optimized for engineering performance. The framework combines topology optimization and generative models in an iterative fashion.

The framework can generate lots of different designs starting from only a limited set of reference designs. You can also score the generated designs with anomaly detection to quantify their novelty! Compared with previous techniques, it produces designs with better aesthetics, diversity, and robustness.

Introduction:

ML is used in a ton of areas, including engineering. For design optimization specifically, it has been applied to:

  1. Topology Optimization
  2. Shape Parameterization
  3. Computer-Aided Engineering Simulation and Meta Modeling
  4. Material Design
  5. Design Preference Estimation

We’ll get into exactly what these are in the Related Works Section. But first, what is Generative Design all about?

Generative design is about exploring the design options that satisfy certain imposed conditions. It generates lots of designs, and the designers choose the ones best suited to them. The process looks like:

  1. Define the design parameters, constraints, and goals for the topology optimization.
  2. Generate designs by running the model under different parameters
  3. Study all the options, iterate a ton, choose the best one
  4. Production (3D printing)!

But there are some problems with Generative Design: It’s not using SOTA ML, it can’t create aesthetic designs (it focuses solely on engineering performance, but aesthetics is important for consumer-facing products), and the diversity of designs is low.

What’s the solution? Generative models! Models like Variational Autoencoders (VAEs) and Generative Adversarial Networks (GANs). These methods can produce high-dimensional outputs (like images) from encodings in a low-dimensional latent space.
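As a minimal sketch of that idea (the sizes and the random linear "decoder" below are purely illustrative, standing in for a trained VAE/GAN decoder):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: a 2-D latent code decoded to a 32x32 "design" image.
LATENT_DIM, IMG_SIDE = 2, 32

# A random linear map stands in for a trained decoder network.
W = rng.normal(size=(IMG_SIDE * IMG_SIDE, LATENT_DIM))

def decode(z):
    """Map a low-dimensional latent vector to a high-dimensional image."""
    return (W @ z).reshape(IMG_SIDE, IMG_SIDE)

z = rng.normal(size=LATENT_DIM)   # sample a point in latent space
img = decode(z)
print(img.shape)  # (32, 32): a high-dimensional output from a 2-D code
```

A real generative model learns `W` (and its nonlinear layers) from data, so nearby latent points decode to similar designs.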

Applying generative models directly to engineering designs is limited — but this paper combines generative models with topology optimization to make it useful!

Here’s a list of the barriers that keep generative models from being used directly for engineering, and how topology optimization fixes each one:

  • Generative models need data, and this data is hard to come by (since it’s confidential/hard to access). So we’ll use topology optimization to generate training data for us!
  • Generative models don’t guarantee feasible engineering. But topology optimization is able to evaluate it!
  • Generative models suffer from mode collapse, where they output only one type of result, and from large variance in output quality. But we can improve the generated designs with post-processing from topology optimization.

This paper’s contribution is this novel framework of generative design by integrating topology optimization with generative models! This framework solves the problems faced in previous methods. It can generate lots of designs from a small number of previous designs and includes quantification of design attributes.

Literature Review:

Topology Optimization:

Topology optimization can be seen as a form of machine learning since it’s able to distribute materials while taking into account constraints. But it’s really computationally expensive.

One paper used a CNN encoder-decoder to generate low-resolution structures, then a conditional GAN to upscale them into high-resolution structures. Alternatively, we could use 3D CNN encoder-decoders and take their intermediate states as structural inputs.

Another paper used VAEs and style transfer to do topology optimization. We can also use active learning to constrain the NN training to create near-optimal topologies.

Shape Parameterization:

Parameterizing geometries is a hard task since the correlations between variables are too strong. But we can use VAEs to parameterize 2D shapes, or 3D meshes.

Computer-Aided Engineering Simulation and Meta Modeling:

A lot of the work has been in Computational Fluid Dynamics (CFD), since it has a high computational cost. CNNs can be used to predict the CFD response of a car. Another study used CNNs to speed up fluid simulations. And cGANs have been used to solve the steady-state problem for heat conduction and incompressible fluid flow.

Material Design:

One paper used Bayesian optimization together with GANs to map microstructures into a lower-dimensional space. Others use convolutional deep belief networks or VAEs for the microstructure data.

Design Preference Estimation:

Using a Restricted Boltzmann Machine, one paper predicted customer preferences for designs. Another model learned aesthetic appeal using Siamese NNs with cGANs.

Deep Generative Design Framework:

The whole architecture, which integrates topology optimization and generative models, looks like this:

A diagram depicting the pipeline. It’s described below
  1. Collect previous design data. This is used later in stages 2 and 7. And it’s also benchmark data to compare the newly generated designs.
  2. Designs enter here, and new designs come out. This is done through topology optimization with a multi-objective function: 1) maintain engineering performance; 2) an L1 loss against the reference. Different designs are obtained by weighing the two objectives differently (a hyperparameter).
  3. Now we have a ton of designs, but we don’t want to waste computation on redundant ones (designs that are too similar), so we filter them using a pixel-wise L1 distance. The user defines the threshold (a hyperparameter): the higher it is, the tighter the constraint, so fewer designs pass, but the survivors are more diverse. An important caveat: if one design is just a rotation of another, a pixel-wise L1 distance will still treat them as different. We could use generative models to map designs into a latent space and compare there, but this paper didn’t, because rotations didn’t occur often.
  4. If (new designs / total designs from the previous iteration) < a user-defined threshold, you exit the loop and go to stage 8. Otherwise, you continue on to stage 5.
  5. A GAN (more specifically a Boundary Equilibrium GAN, or BEGAN) is used to generate new designs. (It’s explained in detail later, in the section about generative models.)
  6. Just like stage 3, this filters out similar designs. The outputs of stage 6 are fed back into stage 2.
  7. This is where we train the novelty evaluator: an autoencoder trained on the stage-1 designs. Its reconstruction loss is used to score the novelty of new designs (details are also in the later section about generative models).
  8. We use that evaluator, along with other metrics, to score each design on things like novelty, volume, and compliance.
  9. Finally, we use those data points and visualize them on a graph. We can then choose designs based on this graph.
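Stage 3’s similarity filter can be sketched like this (a toy version: real designs would be 128 × 128 images, and the threshold value here is made up for illustration):

```python
import numpy as np

def l1_distance(a, b):
    """Mean pixel-wise L1 distance between two designs."""
    return np.abs(a - b).mean()

def filter_similar(designs, threshold):
    """Keep a design only if it is at least `threshold` away (in mean
    pixel-wise L1 distance) from every design already kept."""
    kept = []
    for d in designs:
        if all(l1_distance(d, k) >= threshold for k in kept):
            kept.append(d)
    return kept

designs = [np.zeros((4, 4)), np.zeros((4, 4)), np.ones((4, 4))]
unique = filter_similar(designs, threshold=0.5)
print(len(unique))  # 2: the duplicate all-zeros design is dropped
```

Note how a higher threshold makes the constraint tighter: fewer designs survive, but the survivors are more diverse, exactly as stage 3 describes.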

Topology Optimization:

A High-Level Overview:

Diagram of how Top Op works. Described below
Credit: Jun Wu’s Lectures

On a high level, topology optimization just iterates on the design over and over again until it converges to an optimal one. Let’s say we’re building a bridge. We first get the model to guess where we should place the material (x_e).

Then we compute the displacement (like computing the loss in ML) to see how good our design is. Afterwards, we do a sensitivity analysis (like computing the gradient for each weight in a NN) of the performance with respect to each design variable.

This sensitivity analysis tells us how important each part of the design is. The important parts get additional material added to them; the unimportant parts have material taken away.

We do this over and over again until the design has converged! Now we have an optimized design that can withstand the engineering requirements with as little material as possible!
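The loop above can be caricatured in a few lines. This is purely illustrative: the fixed `importance` vector stands in for the sensitivity analysis, which a real implementation would recompute from an FEM solve every iteration:

```python
import numpy as np

def toy_topology_optimization(importance, volfrac=0.5, step=0.1,
                              eps=0.01, max_iters=200):
    """Toy version of the iterate-until-converged loop: move material
    toward "important" elements, respect a volume budget, stop when the
    design barely changes. Real sensitivities come from an FEM solve."""
    n = importance.size
    x = np.full(n, volfrac)  # initial guess: uniform material everywhere
    for _ in range(max_iters):
        # Add material to important elements, remove it from unimportant ones.
        x_new = np.clip(x + step * (importance - importance.mean()), 0.0, 1.0)
        # Rescale to respect the volume budget, then re-clip to [0, 1].
        x_new = np.clip(x_new * volfrac * n / x_new.sum(), 0.0, 1.0)
        if np.abs(x_new - x).max() <= eps:  # termination condition
            return x_new
        x = x_new
    return x

importance = np.array([1.0, 0.2, 0.9, 0.1])
design = toy_topology_optimization(importance)
print(design)  # material concentrates on the important elements
```

The skeleton is the same as the real thing: guess, evaluate, redistribute, repeat until converged.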

The Nitty Gritties:

Topology Optimization is all about trying to find the optimum design for a specific task. In most cases, you want to maximize stiffness — otherwise known as minimizing the elastic energy of the system:

$$\min_{\mathbf{x}}\; c(\mathbf{x}) \;=\; \mathbf{U}^T \mathbf{K}\,\mathbf{U} \;=\; \sum_{e=1}^{N_e} E_e(x_e)\,\mathbf{u}_e^T \mathbf{k}_0\,\mathbf{u}_e$$
  • Where U is the displacement vector
  • K is the global stiffness matrix
  • c(x) is the compliance
  • u_e is the element displacement vector
  • k_0 is the element stiffness matrix
  • N_e is the total number of elements
  • x_e is the design variable (density)

Why does this look a bit familiar? Well, it’s because of Hooke’s law:

$$F = kx \qquad\Longrightarrow\qquad \mathbf{K}\,\mathbf{U} = \mathbf{F}$$

Cool! Do we just let it run now? Nope — we have some constraints we have to place on the design: like Volume

$$\frac{V(\mathbf{x})}{V_0} = f$$
  • V(x) is the volume that the design takes up
  • V_0 is the volume of the entire design domain
  • f is a user-defined fraction of the volume that the design is allowed to take up

So the model starts off with a play area that it can fill. When it fills the area, it won’t fill it in with strict 0s and 1s → that’s too difficult to optimize. Thus we let it learn a continuous density:

$$0 \le x_e \le 1$$

But we use Solid Isotropic Material with Penalization (SIMP) to push the model towards 0 and 1, so that we do get a clean black-and-white (0 and 1) image. It looks like this:

$$E_e(x_e) = E_{\min} + x_e^{\,p}\,\big(E_0 - E_{\min}\big)$$
  • p is the penalization factor (usually p = 3)
  • E_min is a tiny stiffness that avoids numerical instability when an element’s density reaches 0
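The (modified) SIMP interpolation is a one-liner. The defaults below (p = 3, E_min = 10⁻⁹) are the conventional values from the topology optimization literature, not necessarily this paper’s:

```python
def simp_stiffness(x_e, p=3.0, e0=1.0, e_min=1e-9):
    """Modified SIMP: interpolate element stiffness from density x_e.
    p > 1 penalizes intermediate densities, pushing designs toward 0/1;
    e_min keeps the stiffness matrix well-conditioned at x_e = 0."""
    return e_min + (x_e ** p) * (e0 - e_min)

# Intermediate densities are heavily penalized:
print(simp_stiffness(1.0))  # ~1.0   (full material)
print(simp_stiffness(0.5))  # ~0.125 (far less than half stiffness)
print(simp_stiffness(0.0))  # 1e-9   (void, but numerically stable)
```

Because half-density material gives only an eighth of the stiffness, the optimizer has little incentive to leave grey pixels around.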

Great! Now it has an objective with some constraints. But how does it iterate and create better designs? It’s time for sensitivity analysis! Just like in machine learning where we find the partial derivative of loss with respect to the parameter, we’re doing the same here:

$$\frac{\partial c}{\partial x_e} = -\,p\,x_e^{\,p-1}\,\big(E_0 - E_{\min}\big)\,\mathbf{u}_e^T \mathbf{k}_0\,\mathbf{u}_e$$

All elements have a unit volume:

$$\frac{\partial V}{\partial x_e} = 1$$

And we update the material (x_e) by:

$$x_e^{\text{new}} = \begin{cases} \max(0,\; x_e - m) & \text{if}\;\; x_e B_e^{\eta} \le \max(0,\; x_e - m)\\[2pt] \min(1,\; x_e + m) & \text{if}\;\; x_e B_e^{\eta} \ge \min(1,\; x_e + m)\\[2pt] x_e B_e^{\eta} & \text{otherwise} \end{cases} \qquad B_e = \frac{-\,\partial c / \partial x_e}{\lambda\,\partial V / \partial x_e}$$

  • m is a move limit, η a damping factor (typically 1/2), and λ a Lagrange multiplier (found by bisection) that enforces the volume constraint

And the termination condition (when it converges) is:

$$\max_e \big|\, x_e^{\text{new}} - x_e \,\big| \le \varepsilon$$
  • ε is a user-set value, usually something like 0.01

Are we done? Nope! This usually produces an ugly checkerboard pattern and other artifacts, so this paper uses a three-field SIMP projection scheme to avoid them. Here we apply density and sensitivity filters over a neighbourhood of elements (like a convolution).

Sensitivity filter:

$$\widehat{\frac{\partial c}{\partial x_e}} = \frac{1}{\max(\gamma,\, x_e)\sum_{i \in N_e} H_{ei}} \sum_{i \in N_e} H_{ei}\, x_i\, \frac{\partial c}{\partial x_i}, \qquad H_{ei} = \max\!\big(0,\; r_{\min} - \Delta(e,i)\big)$$

where N_e is the neighbourhood of element e within the filter radius r_min, Δ(e, i) is the distance between elements e and i, and γ (e.g. 10⁻³) avoids division by zero.

Density filter:

$$\tilde{x}_e = \frac{\sum_{i \in N_e} H_{ei}\, x_i}{\sum_{i \in N_e} H_{ei}}$$

Using just those two still wouldn’t be enough, because grey transition pixels remain → thus we also apply a projection filter:

$$\bar{x}_e = \frac{\tanh(\beta\eta) + \tanh\!\big(\beta\,(\tilde{x}_e - \eta)\big)}{\tanh(\beta\eta) + \tanh\!\big(\beta\,(1 - \eta)\big)}$$

where β controls how sharp the projection is and η is the projection threshold.
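The distance-weighted filtering above (shown here for the density filter) can be sketched as a small convolution-like loop, assuming unit-size square elements on a regular grid:

```python
import numpy as np

def density_filter(x, r_min=1.5):
    """Smooth densities with distance-weighted neighbourhood averaging,
    using the weights H_ei = max(0, r_min - dist(e, i)). A slow but
    readable sketch; real codes vectorize this as a convolution."""
    ny, nx = x.shape
    x_tilde = np.zeros_like(x)
    r = int(np.ceil(r_min))
    for i in range(ny):
        for j in range(nx):
            num = den = 0.0
            for di in range(-r, r + 1):
                for dj in range(-r, r + 1):
                    ii, jj = i + di, j + dj
                    if 0 <= ii < ny and 0 <= jj < nx:
                        w = max(0.0, r_min - np.hypot(di, dj))
                        num += w * x[ii, jj]
                        den += w
            x_tilde[i, j] = num / den
    return x_tilde

# A single solid pixel gets smeared over its neighbourhood,
# which is exactly what kills one-pixel checkerboard artifacts:
x = np.zeros((5, 5)); x[2, 2] = 1.0
print(density_filter(x)[2, 2])  # less than 1.0 after smoothing
```

Isolated on/off pixels can no longer survive the averaging, so checkerboards get smoothed away.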

Now we’re done with the topology optimization theory! Let’s move on to what this study did:

What This Study Did:

The entire design space is 128 × 128 pixels → but we won’t let the model put material wherever it wants. Instead, we’ll set some areas where it isn’t allowed to place anything:

Design domain that’s described below

Thus it’s only the light grey part (the spoke area) that generative design is actually allowed to play with. The forces we’ll apply are the normal force from tire pressure plus the shear force from traction, and we can change the ratio between them.

To make sure that our generator creates designs that match the reference designs, we create an objective function equal to the compliance (how stiff the wheel is against external forces) plus the L1 norm between input and output designs:

$$\min_{\mathbf{x}}\;\; c(\mathbf{x}) \;+\; \lambda \sum_{e=1}^{N_e} \big|\, x_e - x_e^{\text{ref}} \,\big|$$
  • First part = the compliance
  • Second part = the L1 loss part (with a ratio of λ to specify which one we want to weigh more)
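The combined objective is easy to sketch in code; `compliance_fn` below is a toy stand-in for the real FEM compliance computation:

```python
import numpy as np

def design_objective(x, x_ref, compliance_fn, lam=0.1):
    """Combined objective: engineering performance plus similarity to the
    reference. The weight `lam` trades stiffness against closeness to
    the reference design (its value here is illustrative)."""
    return compliance_fn(x) + lam * np.abs(x - x_ref).sum()

# Toy compliance: less material => a floppier (higher-compliance) structure.
toy_compliance = lambda x: 1.0 / (x.sum() + 1e-9)

x_ref = np.array([1.0, 0.0, 1.0, 0.0])
x     = np.array([1.0, 0.2, 0.8, 0.0])
print(design_objective(x, x_ref, toy_compliance))
```

Sweeping `lam` is exactly how stage 2 produces different designs from the same reference: a large `lam` hugs the reference, a small one chases pure stiffness.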

And our sensitivity analysis looks like this:

$$\frac{\partial}{\partial x_e}\Big(c(\mathbf{x}) + \lambda\,\big\|\mathbf{x}-\mathbf{x}^{\text{ref}}\big\|_1\Big) = \frac{\partial c}{\partial x_e} + \lambda\,\operatorname{sgn}\!\big(x_e - x_e^{\text{ref}}\big)$$

There are different parameters we can play with. In this study, there are 2 sets of parameters to adjust, each with 5 discrete levels.

2 parameters at 5 different levels

And thus we’re able to run our framework on 25 different parameter combinations!
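Enumerating the runs is trivial; the parameter names and level values below are made up for illustration:

```python
from itertools import product

# Hypothetical names for the two parameter sets, each at 5 discrete levels.
force_ratios = [0.1, 0.3, 0.5, 0.7, 0.9]          # normal vs. shear force mix
similarity_weights = [0.0, 0.25, 0.5, 0.75, 1.0]  # lambda in the objective

combos = list(product(force_ratios, similarity_weights))
print(len(combos))  # 25 parameter combinations to run the framework on
```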

Generative Models:

The true data has a distribution p_data(x), and the goal of the generator is to create data matching this distribution. The generator is fed noise and outputs data that should land in p_data(x).

While this is going on, we have a discriminator that tries to figure out which images are from the original p_data(x) distribution and which are from the generator. It’s a zero-sum game between the discriminator and the generator → both of their loss functions depend on the other’s performance (the lower your opponent’s performance, the lower your loss!).

There are tons of sub-branches of GANs → we have DCGANs (Deep Convolutional GANs), ALI (Adversarially Learned Inference), BiGANs (Bidirectional GANs), EBGANs (Energy-Based GANs), WGANs (Wasserstein GANs), InfoGANs (Information-maximizing GANs), cGANs (Conditional GANs), and BEGANs (Boundary Equilibrium GANs).

This paper uses BEGANs. BEGANs use an equilibrium concept that balances the discriminator and generator towards convergence. The discriminator is an autoencoder, and the loss is based on the Wasserstein distance → thus enabling high-resolution images to be generated.

BEGANs also match the autoencoder loss distribution rather than the data distribution directly, because the generator shares its architecture with the autoencoder’s decoder:

Breakdown of the GANs
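BEGAN’s balancing act can be sketched numerically. This follows the update rule from the BEGAN paper (the γ and λ_k values are BEGAN’s conventional defaults, not necessarily this study’s); `loss_real` and `loss_fake` stand for the discriminator-autoencoder’s reconstruction losses L(x) and L(G(z)):

```python
def began_step(loss_real, loss_fake, k, gamma=0.5, lambda_k=0.001):
    """One BEGAN bookkeeping step given the autoencoder reconstruction
    losses on real data, L(x), and on generated data, L(G(z)).
    Returns (discriminator loss, generator loss, updated k)."""
    d_loss = loss_real - k * loss_fake       # D: reconstruct real, not fake
    g_loss = loss_fake                       # G: make fakes easy to reconstruct
    balance = gamma * loss_real - loss_fake  # equilibrium term
    k_new = min(max(k + lambda_k * balance, 0.0), 1.0)  # clamp k to [0, 1]
    return d_loss, g_loss, k_new

d, g, k = began_step(loss_real=0.8, loss_fake=0.6, k=0.0)
print(d, g, k)
```

The variable `k` slowly adjusts how hard the discriminator punishes fakes, which is what keeps the two networks in equilibrium instead of one overpowering the other.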

In a similar vein, we can detect novelty using autoencoders → an autoencoder can effectively reconstruct designs it saw during training, but dissimilar data won’t be reconstructed as well. That’s why stage 7 uses an autoencoder!
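Stage 7’s idea can be sketched with a toy stand-in: here a PCA-based linear "autoencoder" (built with numpy’s SVD, purely illustrative) scores novelty by reconstruction error:

```python
import numpy as np

rng = np.random.default_rng(0)

# Training designs lie (by construction) on a 2-D manifold, standing in
# for the "previous wheel designs" the stage-7 autoencoder is trained on.
u, w = rng.random(16), rng.random(16)
training = [rng.normal() * u + rng.normal() * w for _ in range(10)]

def fit_novelty_scorer(designs, n_components=2):
    """Toy 'autoencoder' via PCA/SVD: reconstruction error = novelty."""
    X = np.stack(designs)
    mean = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    basis = vt[:n_components]
    def novelty(d):
        v = d - mean
        recon = basis.T @ (basis @ v)           # encode, then decode
        return float(np.abs(v - recon).mean())  # pixel-wise L1 error
    return novelty

novelty = fit_novelty_scorer(training)
print(novelty(training[0]))       # ~0: familiar designs reconstruct well
print(novelty(np.full(16, 5.0)))  # larger: unfamiliar designs don't
```

A real deep autoencoder plays the same role, just with a learned nonlinear manifold instead of a linear subspace.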

Results:

Different parameters end up producing different types of designs in topology optimization:

Parameter for force:


Parameter for similarity:


We can also see that different reference designs end up with different optimized designs (meaning we have lots of diversity)


Now it’s time for the GANs:


What’s amazing is that the outputs look pretty feasible — they’re all symmetric, have holes and have around the same boundary for the rim.

Let’s test out the novelty evaluation:


We can see that it’s really bad at reconstructing designs that it’s never seen before — affirming that the autoencoder works for novelty detection!

And after just 1.5 epochs (since the second epoch triggered the termination criterion), we got 2004 new designs:


Here are some examples of generated designs:


And they plotted the wheels and their factors:


From these 2D designs, we can then turn them into 3D designs!


Discussion:

Topology Optimization:

They first started off by asking: are references actually necessary? So they removed the references and ran the loop with topology optimization alone → the results were identical whenever the parameters were the same, so there was no aesthetic diversity. Sometimes it also failed to converge. So the references genuinely help.


Additionally, they validated the use of GANs (since GANs morph the fundamental topology of the wheel). If you only use topology optimization, similar reference designs will, after a few iterations, produce results that look the same as well:


BEGAN:

Now we move on to the GAN. One thing to note: the genius of the framework proposed in this paper is that the GAN doesn’t have to be perfect — since topology optimization can take over! It’s quite robust!

They tested other generative models like DCGAN and VAE. But their outputs were blurrier, less symmetrical, and lacked novelty:


BEGAN is a good choice because it creates topologically novel designs that are simple, which makes them better inputs for topology optimization.

Future work lies in training a model that can select the good designs produced by multiple generative networks, along with different latent variables.

Autoencoder:

Afterwards, they tested the effectiveness of the autoencoder by seeing how accurately it could distinguish reference designs from generated ones. It did so with 91.6% accuracy:


Conclusion:

This paper puts forth an amazing framework! It combines topology optimization with GANs in a cyclical, iterative fashion to achieve novelty, aesthetics, and engineering performance in the generated designs simultaneously. They can start off with a tiny number of designs and explode them into tons of new ones!

They also created a clever way to score the novelty of designs + compliance + cost → and then presented the scores in a clean fashion for designers to choose from. The robustness of the framework increases thanks to help from topology optimization!

Next steps are to advance to the realm of 3D (voxels), along with a recommendation system to predict the preferences of designers and consumers.

