## 2.1 Problem 🎯

When working with Physics-Informed Neural Networks (PINNs), it is no surprise that performance is highly sensitive to neural network hyperparameters such as network depth, width, and the choice of activation function.

Naturally, people applied **AutoML** (more specifically, neural architecture search) to automatically identify optimal network hyperparameters. But before we can do that, there are two issues that need to be addressed:

- How to effectively navigate the huge search space?
- How to determine the right search goal?

This last point arises because PINN training is usually considered an “unsupervised” problem: no labeled data is needed, since training proceeds by minimizing the ODE/PDE residuals.

To better understand these two issues, the authors conducted extensive experiments to investigate the sensitivity of PINN performance to network structure. Now let’s look at what they found.

## 2.2 Solution 💡

The first idea proposed in the paper is that **the training loss can be used as a surrogate for the search objective**, since it correlates strongly with the final prediction accuracy of the PINN. This addresses the issue of determining an appropriate optimization objective for the hyperparameter search.

The second idea is that **it is not necessary to optimize all network hyperparameters simultaneously**. Instead, we can adopt a **step-by-step decoupling strategy**: for example, first find the optimal activation function, then fix that choice and find the optimal network width, then fix the previous decisions and tune the network depth, and so on. In their experiments, the authors showed that this strategy is very effective.

With these two ideas in mind, let’s see how we can implement the search in detail.

First, which network hyperparameters are considered? The search space recommended in the paper is:

- **Width**: number of neurons in each hidden layer. The range considered is [8, 512], in steps of 4 or 8.
- **Depth**: number of hidden layers. The range considered is [3, 10], in steps of 1.
- **Activation function**: Tanh, Sigmoid, ReLU, and Swish.
- **Changing point**: the fraction of the total training epochs trained with Adam. The values considered are [0.1, 0.2, 0.3, 0.4, 0.5]. It is common practice in PINN training to first use Adam for some epochs and then switch to L-BFGS to continue training; this changing-point hyperparameter defines when the switch happens.
- **Learning rate**: fixed at 1e-5, as it has little effect on the final architecture search results.
- **Training epochs**: fixed at 10000, as it has little effect on the final architecture search results.
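To make this search space concrete, here is a minimal sketch of it as a plain Python dictionary (the variable names and the choice of a step of 8 for the width are illustrative assumptions, not from the paper), together with the size of the joint grid it would imply if all hyperparameters were searched simultaneously:

```python
# Illustrative sketch of the Auto-PINN search space described above.
search_space = {
    "width": list(range(8, 513, 8)),                  # neurons per hidden layer
    "depth": list(range(3, 11)),                      # number of hidden layers
    "activation": ["tanh", "sigmoid", "relu", "swish"],
    "changing_point": [0.1, 0.2, 0.3, 0.4, 0.5],      # Adam -> L-BFGS switch
}

# Size of the full joint grid if everything were searched at once.
joint_size = 1
for values in search_space.values():
    joint_size *= len(values)

print(joint_size)  # 64 * 8 * 4 * 5 = 10240 configurations
```

This combinatorial size is why the step-by-step decoupling matters: searching one hyperparameter at a time replaces a product of grid sizes with (roughly) a sum.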

Second, let’s consider the proposed procedure in detail:

- The first search target is the *activation function*. To find it, we sample the width-depth parameter space and compute the losses for all width-depth samples under each candidate activation function. These results indicate which activation function is dominant. Once decided, we fix the activation function for the subsequent steps.

- The second search target is the *width*. More specifically, we look for width intervals in which the PINN performs well.

- The third search target is the *depth*. Here, we only vary the width within the best-performing intervals identified in the previous step, and we look for the K best width-depth combinations.

- The final search target is the *changing point*. We simply search for the best changing point for each of the top-K configurations identified in the previous step.

The result of this search procedure is **K different PINN structures**. We can select the best-performing of these K candidates, or simply use all of them to build a K-ensemble PINN model.
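The decoupled procedure can be sketched in plain Python. The toy `train_loss` below stands in for actually training a PINN and returning its final training loss; the function itself and the interval/top-K parameters are illustrative assumptions, not the paper's implementation:

```python
import itertools

# Stand-in for training a PINN and reading off its final training loss.
# In Auto-PINN this would be a real training run; this toy surrogate has
# its minimum at tanh, width 64, depth 5, changing point 0.3.
def train_loss(activation, width, depth, changing_point):
    act_penalty = {"tanh": 0.0, "swish": 0.1, "sigmoid": 0.5, "relu": 0.7}
    return (act_penalty[activation]
            + ((width - 64) / 512) ** 2
            + ((depth - 5) / 10) ** 2
            + (changing_point - 0.3) ** 2)

widths = list(range(8, 513, 8))
depths = list(range(3, 11))
activations = ["tanh", "sigmoid", "relu", "swish"]
changing_points = [0.1, 0.2, 0.3, 0.4, 0.5]

# Step 1: choose the activation on a coarse sample of the width-depth space.
coarse = list(itertools.product(widths[::8], depths[::2]))
best_act = min(
    activations,
    key=lambda a: sum(train_loss(a, w, d, 0.5) for w, d in coarse),
)

# Step 2: with the activation fixed, find the best-performing widths.
width_scores = {w: train_loss(best_act, w, depths[0], 0.5) for w in widths}
good_widths = sorted(widths, key=width_scores.get)[:8]

# Step 3: search depth only over the good widths; keep the top-K pairs.
K = 3
pairs = sorted(
    itertools.product(good_widths, depths),
    key=lambda wd: train_loss(best_act, wd[0], wd[1], 0.5),
)[:K]

# Step 4: tune the changing point for each top-K configuration.
candidates = [
    min(
        ((w, d, cp, train_loss(best_act, w, d, cp)) for cp in changing_points),
        key=lambda t: t[3],
    )
    for w, d in pairs
]

print(best_act, candidates[0][:3])
```

Note how each step reuses the decisions of the previous one, so the total number of training runs grows roughly additively across steps rather than multiplicatively.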

Note that the above procedure requires specifying several parameters (e.g., the number of width intervals, the value of K), which will depend on the available tuning budget.

As for the specific optimization algorithms used in individual steps, off-the-shelf AutoML libraries can be used to accomplish the task. For example, the authors of the paper used the Tune package to perform hyperparameter tuning.

## 2.3 Why the solution might work 🛠️

By decoupling the search over the various hyperparameters, the size of the search space can be reduced significantly. This not only lowers the complexity of the search but also increases the chance of finding a (near-)optimal network architecture for the physical problem under investigation.

Also, using the training loss as the search objective is easy to implement and effective. Since the training loss (mainly the PDE residual loss) correlates strongly with the PINN’s prediction accuracy (according to the experiments in the paper), identifying an architecture that yields minimal training loss also leads to a model with high prediction accuracy.

## 2.4 Benchmark ⏱️

A total of 7 different benchmark problems were discussed in the paper. All problems are forward problems where PINNs are used to solve PDEs.

- Heat equation with Dirichlet boundary conditions. This type of equation describes the distribution of heat or temperature in a given domain over time.

- Heat equation with Neumann boundary conditions.

- A wave equation that describes the propagation of oscillations in space, such as mechanical and electromagnetic waves. Both Dirichlet and Neumann conditions are considered here.

- Burgers’ equation, used to model shock flows, wave propagation in combustion chambers, vehicle traffic flow, and more.

- An advection equation that describes the motion of a scalar field when it is driven by a vector field of known velocity.

- The advection equation with different boundary conditions.

- A reaction equation that describes chemical reactions.
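As a concrete illustration of what a PINN minimizes for the first benchmark, the sketch below evaluates the 1D heat equation residual u_t − u_xx on a known analytical solution using finite differences (the grid sizes and the choice of solution are assumptions for this example, not from the paper):

```python
import numpy as np

# 1D heat equation: u_t - u_xx = 0 on x in [0, 1], t in [0, 0.1],
# with the analytical solution u(x, t) = exp(-pi^2 t) * sin(pi x)
# satisfying the Dirichlet conditions u(0, t) = u(1, t) = 0.
x = np.linspace(0.0, 1.0, 101)
t = np.linspace(0.0, 0.1, 101)
X, T = np.meshgrid(x, t, indexing="ij")
u = np.exp(-np.pi**2 * T) * np.sin(np.pi * X)

# Finite-difference approximations of the derivatives. A PINN obtains
# these via automatic differentiation of the network output instead.
u_t = np.gradient(u, t, axis=1)
u_x = np.gradient(u, x, axis=0)
u_xx = np.gradient(u_x, x, axis=0)

# The PDE residual whose squared magnitude enters the training loss.
residual = u_t - u_xx

# Interior points only (np.gradient is less accurate at the edges).
interior = residual[2:-2, 2:-2]
print(np.abs(interior).max())  # close to zero for the exact solution
```

A PINN replaces the analytical `u` with a neural network and drives this residual (plus boundary and initial condition terms) toward zero at sampled collocation points.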

Benchmark studies have shown that:

- The proposed Auto-PINN shows stable performance for different PDEs.
- In most cases, Auto-PINN can identify the neural network architecture with the smallest error values.
- Auto-PINN requires fewer search trials than alternative approaches.

## 2.5 Strengths and Weaknesses ⚡

**Strengths** 💪

- Significantly reduced computational cost for performing neural architecture searches for PINN applications.
- Improved likelihood of identifying (near-)optimal neural network architectures for various PDE problems.

**Weaknesses** 📉

- The effectiveness of using the training loss value as a search objective may depend on the specific characteristics of the PDE problem, as benchmarks are performed only for a specific set of PDEs.
- Data sampling strategy affects Auto-PINN performance. Although the paper discusses the impact of different data sampling strategies, it does not provide clear guidance on how to choose the best strategy for a given PDE problem. This may add another layer of complexity to the use of Auto-PINN.

## 2.6 Alternatives 🔀

Conventional AutoML algorithms can also be used to solve the hyperparameter optimization problem in physics-informed neural networks (PINNs). These algorithms include *random search*, *genetic algorithms*, *Bayesian optimization*, etc.

Compared to these alternative algorithms, the newly proposed Auto-PINN is specially designed for PINN. This makes it a unique and effective solution for optimizing PINN hyperparameters.

There are several possibilities for further improvement of the proposed strategy:

- Incorporating more sophisticated data sampling strategies, such as adaptive and residual-based sampling methods, to improve search accuracy and model performance.

To learn more about optimizing the distribution of residual points, see this blog in the PINN design pattern series.

- More benchmarking on the search objective to assess whether the training loss value is indeed a good surrogate for different types of PDEs.
- Inclusion of other types of neural networks. The current version of Auto-PINN is only for Multi-Layer Perceptron (MLP) architectures. Future work may explore convolutional neural networks (CNNs) or recurrent neural networks (RNNs), which may enhance the capabilities of PINNs in solving more complex PDE problems.
- Transfer learning to Auto-PINN. For example, architectures that perform well on certain types of PDE problems can be used as starting points for a search process for similar types of PDE problems. This may speed up the search process and improve model performance.
