Machine Learning in Physics
Differential Equations & ML
Before reading this section, please make sure that you know the classical methods for solving DEs numerically (such as Euler's method).
As most modern machine learning techniques are closely intertwined with optimization, ML can be used to solve quite complex differential equations numerically once we recast them as optimization tasks. Specifically, any differential equation (or, more generally, any functional equation) of the form \[L[f] = R[f]\] where \(f\) is the unknown function and \(L[f]\), \(R[f]\) are expressions involving \(f\), can be rewritten as \[|L[f] - R[f]| = 0\] after which we solve the minimization task \[f(\textbf{x})=\underset{f}{\text{argmin}}\,|L[f] - R[f]|\] numerically. The quantity \(|L[f] - R[f]|\) is non-negative and equals zero exactly when \(f\) satisfies the original equation, so any function attaining the minimum value of 0 is a solution of our initial differential equation.

To approach this task in practice, we take a neural network or simply a parameterized function \(f(\textbf{p}, \textbf{x})\) of a suitable form and solve approximately for the set of parameters \(\textbf{p}\): \[\textbf{p}_{\text{min}} \approx \underset{\textbf{p}}{\text{argmin}}\,|L[f(\textbf{p}, \textbf{x})] - R[f(\textbf{p}, \textbf{x})]|\] We then treat the function \(\tilde{f}(\textbf{x})=f(\textbf{p}_{\text{min}}, \textbf{x})\), with the parameters fixed and only the arguments varying, as an approximation of our solution. In the language of machine learning, \(|L[f] - R[f]|\) is our loss function, and we use any minimization algorithm (such as gradient descent) to find its minimum. Since the residual \(L[f] - R[f]\) still depends on \(\textbf{x}\), in practice the loss is evaluated at a set of sample (collocation) points and aggregated, for example as a mean of squared residuals, with boundary or initial conditions enforced either by construction or by extra penalty terms.
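Here is a minimal sketch of this recipe. Everything concrete in it is an assumption chosen for illustration: the toy ODE \(f'(x) = -f(x)\) with \(f(0) = 1\), a polynomial ansatz in place of a neural network, finite differences for the derivative, and scipy's general-purpose minimizer as the optimization algorithm.

```python
import numpy as np
from scipy.optimize import minimize

# Collocation points where the residual |L[f] - R[f]| is evaluated.
xs = np.linspace(0.0, 1.0, 50)

def f(p, x):
    # Trial function: f(x) = 1 + x * (p0 + p1*x + p2*x^2 + ...).
    # The "1 + x * (...)" form builds the initial condition f(0) = 1 in.
    return 1.0 + x * np.polyval(p[::-1], x)

def dfdx(p, x, h=1e-5):
    # Derivative of the trial function via central finite differences.
    return (f(p, x + h) - f(p, x - h)) / (2.0 * h)

def loss(p):
    # Mean squared residual of L[f] - R[f] = f'(x) + f(x) over the points xs.
    residual = dfdx(p, xs) + f(p, xs)
    return np.mean(residual ** 2)

# Minimize the loss over the parameters p.
p_min = minimize(loss, x0=np.zeros(4)).x

# Fix the parameters to get the approximate solution f~(x).
f_tilde = lambda x: f(p_min, x)

# Compare against the exact solution e^{-x}.
print(np.max(np.abs(f_tilde(xs) - np.exp(-xs))))
```

Swapping the polynomial ansatz for a neural network and the finite-difference derivative for automatic differentiation gives essentially the physics-informed neural network approach; the structure of the loss stays the same.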
This idea is crucial and, in fact, not that hard to grasp. I highly recommend the Parallel Computing and Scientific Machine Learning Course and this set of links for more details and other approaches.