The author presents a theory of invertible and injective deep neural networks for likelihood estimation and uncertainty quantification.
In this presentation, an applied mathematician from South Dakota State University presents his research on the mathematical foundations of deep learning, especially at the intersection of geometry, topology, and universality. The presentation introduces some of the mathematical theory behind deep learning, aiming to give a taste of how deep learning can be formalized and how these formalisms yield interesting mathematics.

Machine learning tries to let machines 'learn' patterns in data. Deep learning is a subset of machine learning and is best defined through example: it composes simple functions to build more complex ones. A four-step workflow still guides deep learning to this day: 1) collect and clean data; 2) choose an architecture F that depends on a parameter θ; 3) train the network to find the 'right' θ ∈ Θ; and 4) measure how well F works on data that it wasn't trained on. Statistical Learning Theory (SLT) gives us a useful formalism for understanding these steps.
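The four-step workflow above can be sketched in a few lines of NumPy. This is an illustrative toy only: the data (noisy samples of sin x), the one-hidden-layer tanh architecture, and all hyperparameters are my assumptions, not anything specified in the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# 1) Collect and "clean" data: noisy samples of y = sin(x) (toy assumption).
x = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(x) + 0.1 * rng.standard_normal((200, 1))
x_train, y_train = x[:150], y[:150]
x_test, y_test = x[150:], y[150:]

# 2) Choose an architecture F(x; theta): one hidden layer of 32 tanh units.
#    Here theta = (W1, b1, W2, b2).
W1 = 0.5 * rng.standard_normal((1, 32))
b1 = np.zeros(32)
W2 = 0.5 * rng.standard_normal((32, 1))
b2 = np.zeros(1)

def F(x, W1, b1, W2, b2):
    h = np.tanh(x @ W1 + b1)
    return h @ W2 + b2

# 3) Train: full-batch gradient descent on mean-squared error over theta.
lr = 0.05
for _ in range(2000):
    h = np.tanh(x_train @ W1 + b1)          # forward pass
    pred = h @ W2 + b2
    g_pred = 2 * (pred - y_train) / len(x_train)  # dMSE/dpred
    gW2, gb2 = h.T @ g_pred, g_pred.sum(axis=0)   # backprop through layer 2
    g_h = (g_pred @ W2.T) * (1 - h**2)            # backprop through tanh
    gW1, gb1 = x_train.T @ g_h, g_h.sum(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

# 4) Measure how well F works on data it wasn't trained on.
test_mse = float(np.mean((F(x_test, W1, b1, W2, b2) - y_test) ** 2))
print(f"held-out MSE: {test_mse:.4f}")
```

The held-out error in step 4 is the quantity that Statistical Learning Theory reasons about: how well the θ found on the training set transfers to unseen data.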