The author presents a theory of invertible and injective deep neural networks for likelihood estimation and uncertainty quantification.
In this presentation, an applied mathematician from South Dakota State University presents his research on the mathematical foundations of deep learning, especially at the intersection of geometry, topology, and universality. The presentation introduces some of the mathematical theory behind deep learning, aiming to give a taste of how deep learning can be formalized and how these formalisms yield interesting mathematics. Machine learning tries to let machines 'learn' patterns in data; deep learning is a subset of machine learning, best defined through example, that composes simple functions to build more complex ones. Four steps still guide deep learning to this day: 1) collect and clean data; 2) choose an architecture F that depends on a parameter θ; 3) train the network to find the 'right' θ ∈ Θ; and 4) measure how well F works on data it wasn't trained on. Statistical Learning Theory (SLT) gives us a useful formalism for understanding these steps.
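To make the four-step recipe concrete, here is a minimal sketch in plain NumPy. The talk names no particular dataset, architecture, or hyperparameters, so the synthetic sine data, the two-layer tanh network, and the learning rate below are all illustrative assumptions, not the author's actual examples.

```python
import numpy as np

# Step 1 (hypothetical data): synthesize noisy samples of a smooth function.
rng = np.random.default_rng(0)
x = rng.uniform(-2, 2, size=(200, 1))
y = np.sin(2 * x) + 0.1 * rng.normal(size=x.shape)

# Step 4 needs data the network wasn't trained on, so split before training.
x_train, x_test = x[:160], x[160:]
y_train, y_test = y[:160], y[160:]

# Step 2: choose an architecture F(x; θ) -- a composition of two simple
# affine maps with a tanh nonlinearity in between (width 16 is arbitrary).
h = 16
theta = {
    "W1": rng.normal(scale=0.5, size=(1, h)),
    "b1": np.zeros(h),
    "W2": rng.normal(scale=0.5, size=(h, 1)),
    "b2": np.zeros(1),
}

def F(x, theta):
    z = np.tanh(x @ theta["W1"] + theta["b1"])  # first simple function
    return z @ theta["W2"] + theta["b2"]        # composed with a second

# Step 3: train -- gradient descent on (half) mean squared error, with
# gradients written out by hand for this tiny network.
lr = 0.05
n = len(x_train)
for step in range(2000):
    z = np.tanh(x_train @ theta["W1"] + theta["b1"])
    pred = z @ theta["W2"] + theta["b2"]
    err = pred - y_train                       # (n, 1) residuals
    dW2 = z.T @ err / n
    db2 = err.mean(axis=0)
    dz = (err @ theta["W2"].T) * (1 - z**2)    # backprop through tanh
    dW1 = x_train.T @ dz / n
    db1 = dz.mean(axis=0)
    theta["W1"] -= lr * dW1
    theta["b1"] -= lr * db1
    theta["W2"] -= lr * dW2
    theta["b2"] -= lr * db2

# Step 4: measure how well F works on held-out data.
test_mse = float(np.mean((F(x_test, theta) - y_test) ** 2))
print(f"held-out MSE: {test_mse:.4f}")
```

The held-out error in the final step is what SLT formalizes: training minimizes empirical risk over θ ∈ Θ, while the test set estimates how the learned F generalizes beyond the training sample.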