The Stone-Weierstrass Theorem and Neural Networks
Date and Time
12:00 ~ 13:00
Venue
Speaker
Abstract
Neural networks have become a ubiquitous tool in modern artificial intelligence, data analytics, and machine learning. A major key to the success of neural networks has been their ability to learn arbitrarily complex functions using simple architectures. The Stone-Weierstrass theorem extends the famous Weierstrass approximation theorem: it states that any subalgebra of continuous functions on a compact set that contains the constants and separates points is uniformly dense in the class of all continuous functions on that set. Using the Stone-Weierstrass theorem, Cotter (1990, IEEE Transactions on Neural Networks) demonstrated that many common architectures are uniformly dense. Similarly, Sandberg (2001, Circuits, Systems and Signal Processing) used the Stone-Weierstrass theorem to prove the uniform denseness of Gaussian radial basis function networks, another very popular architecture.
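For reference, a standard precise statement of the theorem in the real case reads as follows (in LaTeX notation):

\textbf{Theorem (Stone--Weierstrass).} Let $X$ be a compact Hausdorff space and let $\mathcal{A} \subseteq C(X, \mathbb{R})$ be a subalgebra that contains the constant functions and separates points, i.e., for all $x \neq y$ in $X$ there exists $f \in \mathcal{A}$ with $f(x) \neq f(y)$. Then $\mathcal{A}$ is uniformly dense in $C(X, \mathbb{R})$: for every $g \in C(X, \mathbb{R})$ and every $\varepsilon > 0$ there exists $f \in \mathcal{A}$ such that
\[
  \sup_{x \in X} \lvert f(x) - g(x) \rvert < \varepsilon .
\]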
In this talk, we will introduce the Stone-Weierstrass theorem and present its application in some of the proofs of Cotter (1990) and in the proof of Sandberg (2001). Furthermore, we demonstrate how the Stone-Weierstrass theorem can be used to prove the denseness of a more modern class of networks: the mixture-of-experts models of Jacobs et al. (1991, Neural Computation). These results come from our recent works, Nguyen et al. (2016, Neural Computation) and Nguyen (2017, arXiv:1704.00946).
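To make the mixture-of-experts architecture concrete, the following is a minimal sketch of a forward pass, assuming linear experts and a softmax gating network (the simplest form discussed in Jacobs et al., 1991); all function and variable names here are hypothetical, not from the cited works:

import numpy as np

def softmax(z):
    # Numerically stable softmax: subtract the max before exponentiating.
    e = np.exp(z - np.max(z))
    return e / e.sum()

def moe_predict(x, gate_W, gate_b, expert_Ws, expert_bs):
    """Mixture-of-experts output f(x) = sum_k g_k(x) * f_k(x), where
    g(x) = softmax(gate_W @ x + gate_b) is the gating network and
    f_k(x) = expert_Ws[k] @ x + expert_bs[k] are linear experts."""
    g = softmax(gate_W @ x + gate_b)  # gating weights, nonnegative, sum to 1
    expert_outputs = np.array([W @ x + b
                               for W, b in zip(expert_Ws, expert_bs)])
    return g @ expert_outputs         # convex combination of expert outputs

# Example: 2 experts, 3-dimensional input, scalar output.
rng = np.random.default_rng(0)
x = rng.normal(size=3)
gate_W, gate_b = rng.normal(size=(2, 3)), rng.normal(size=2)
expert_Ws = [rng.normal(size=3) for _ in range(2)]
expert_bs = [rng.normal() for _ in range(2)]
print(moe_predict(x, gate_W, gate_b, expert_Ws, expert_bs))

The gating weights form a partition of unity over the input space, which is the structural property exploited when arguing denseness for this class of models.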