

The Stone-Weierstrass Theorem and Neural Networks

Date
2017-08-22, 12:00–13:00
Place
Lecture Room S W1-C-503, West Zone 1, Ito campus, Kyushu University
Speaker
Hien Nguyen (Department of Mathematics and Statistics, La Trobe University)

Abstract:
Neural networks have become a ubiquitous tool in modern artificial intelligence, data analytics, and machine learning. A major key to the success of neural networks has been their ability to learn arbitrarily complex functions using simple architectures. The Stone-Weierstrass theorem extends the famous Weierstrass approximation theorem: it states that any subalgebra of continuous functions on a compact set that contains the constant functions and separates points is uniformly dense in the class of all continuous functions on that set. Using the Stone-Weierstrass theorem, Cotter (1990, IEEE Transactions on Neural Networks) demonstrated that the function classes realized by many common architectures are uniformly dense. In a similar way, Sandberg (2001, Circuits, Systems and Signal Processing) used the Stone-Weierstrass theorem to prove the uniform denseness of Gaussian radial basis networks, another very popular architecture.
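For reference, one standard form of the theorem, stated here in our own notation rather than quoted from the cited papers, is the following real-valued version (LaTeX source shown):

% Stone-Weierstrass theorem (real version); notation ours, for reference only.
\begin{theorem}[Stone--Weierstrass]
Let $X$ be a compact Hausdorff space and let $\mathcal{A} \subseteq C(X, \mathbb{R})$
be a subalgebra of the continuous real-valued functions on $X$ that
(i) contains the constant functions, and
(ii) separates points: for all $x \neq y$ in $X$ there is $f \in \mathcal{A}$
with $f(x) \neq f(y)$.
Then $\mathcal{A}$ is uniformly dense in $C(X, \mathbb{R})$: for every
$g \in C(X, \mathbb{R})$ and every $\varepsilon > 0$ there exists $f \in \mathcal{A}$
with $\sup_{x \in X} \lvert f(x) - g(x) \rvert < \varepsilon$.
\end{theorem}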

In this talk, we will introduce the Stone-Weierstrass theorem and present its application in some of the proofs of Cotter (1990) and in the proof of Sandberg (2001). Furthermore, we demonstrate how the Stone-Weierstrass theorem can be used to prove denseness for a more modern class of networks: the mixture of experts models of Jacobs et al. (1991, Neural Computation). These results come from our recent works, Nguyen et al. (2016, Neural Computation) and Nguyen (2017, arXiv:1704.00946).
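As an illustrative sketch only (our notation, with softmax gates in the spirit of Jacobs et al. (1991); the exact parameterizations treated in the cited proofs may differ), a $K$-component mixture of experts computes

% Mixture of experts: gated combination of expert functions (illustrative notation).
\[
  f(x) \;=\; \sum_{k=1}^{K} g_k(x)\, h_k(x),
  \qquad
  g_k(x) \;=\; \frac{\exp\!\left(a_k^{\top} x + b_k\right)}
                    {\sum_{j=1}^{K} \exp\!\left(a_j^{\top} x + b_j\right)},
\]

where $h_k$ is the $k$-th expert (for example, a linear or simple network output) and the gating functions $g_k$ form a soft partition of unity, with $g_k(x) \ge 0$ and $\sum_{k} g_k(x) = 1$. The denseness results assert that, on compact sets, such mixtures can uniformly approximate any continuous target function.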