Seminars for This Month: July, 1997
- 07/01/97 - Dynamics of Autoassociative Neural Networks.
- 05/06/97 - Approximating Matrix Multiplication for Pattern Recognition Tasks.
Seminars In The Past
- May, June & July, 1997
- March & April, 1997
- January & February, 1997
- December, 1996
- November, 1996
- October, 1996
Seminar Abstracts
Tuesday, July 1st, 1997
306 Soda Hall
System Basics Research Laboratory
C&C Research Laboratories
Dynamics of Autoassociative Neural Networks
In this talk, the dynamics of autoassociative analogue neural networks is analyzed theoretically. First, we show that the distance between the network state vector and the subspace $ \Pi_1 $ spanned by the given patterns decreases exponentially in time. We also show that the network dynamics in $ \Pi_1 $ is a gradient flow. Then we prove that the dynamics outside of $ \Pi_1 $ is also a gradient flow, which implies that the dynamics is not chaotic. Finally, we examine the dynamics in $ \Pi_1 $ and the dynamics toward $ \Pi_1 $ by computer simulation.
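The exponential approach to the pattern subspace can be illustrated with a small simulation. The model below is a hypothetical stand-in, not the speaker's exact network: it uses linear dynamics $\dot{x} = -x + Px$, where $P$ projects onto $\Pi_1$, so the component of the state orthogonal to $\Pi_1$ obeys $\dot{x}_\perp = -x_\perp$ and the distance to $\Pi_1$ decays like $e^{-t}$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 3 stored patterns in R^50; their span is Pi_1.
patterns = rng.standard_normal((50, 3))
Q, _ = np.linalg.qr(patterns)   # orthonormal basis for Pi_1
P = Q @ Q.T                     # orthogonal projector onto Pi_1

def distance_to_subspace(x):
    """Euclidean distance from x to Pi_1."""
    return np.linalg.norm(x - P @ x)

x = rng.standard_normal(50)
d0 = distance_to_subspace(x)

# Forward-Euler integration of dx/dt = -x + P x up to t = 5.
dt, steps = 0.01, 500
for _ in range(steps):
    x = x + dt * (-x + P @ x)

d_final = distance_to_subspace(x)
ratio = d_final / d0
print(ratio)  # roughly exp(-5), confirming exponential decay toward Pi_1
```

Writing the state as $x = x_\parallel + x_\perp$, each Euler step multiplies $x_\perp$ by $(1 - \Delta t)$ while leaving $x_\parallel$ untouched, which is the discrete analogue of the claimed exponential convergence.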
Tuesday, May 6th, 1997
306 Soda Hall
Approximating Matrix Multiplication for Pattern Recognition Tasks
UC Berkeley and AT&T Labs-Research
Many pattern recognition tasks, including estimation, classification, and finding similar objects, make use of linear models. The fundamental operation in such tasks is the computation of the dot product between a query vector and a large database of instance vectors. Often we are interested primarily in those instance vectors which have high dot products with the query. We present an algorithm based on random sampling that enables us to identify, for any given query vector, those instance vectors which have large dot products, while avoiding explicit computation of all dot products. We provide experimental results that demonstrate considerable speedups for text retrieval tasks. Our approximate matrix multiplication algorithm is applicable to products of $k\geq 2$ matrices and is of independent interest. Our theoretical and experimental analysis demonstrates that in many scenarios, our method dominates standard matrix multiplication.
Joint work with David D. Lewis.
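The core sampling idea can be sketched as follows. This is an illustrative simplification, not the speakers' full algorithm (which avoids computing all dot products across an entire database): for a nonnegative query $q$, sample coordinates $i$ with probability $q_i / \|q\|_1$ and average $\|q\|_1 \, v_i$; the result is an unbiased estimate of $q \cdot v$, and instances with large dot products stand out after relatively few samples.

```python
import numpy as np

rng = np.random.default_rng(1)

def sampled_dot(q, v, n_samples, rng=rng):
    """Unbiased estimate of q . v for a nonnegative query q,
    by sampling coordinates in proportion to the entries of q."""
    l1 = q.sum()
    p = q / l1                                   # sampling distribution over coordinates
    idx = rng.choice(len(q), size=n_samples, p=p)
    return l1 * v[idx].mean()                    # E[l1 * v_i] = sum_i q_i v_i

q = rng.random(10_000)
v = rng.random(10_000)
exact = q @ v
approx = sampled_dot(q, v, n_samples=20_000)
print(exact, approx)  # the estimate lands within a few percent of the exact value
```

Since each sample costs $O(1)$ once the distribution is built, far fewer operations than the full $O(n)$ dot product suffice when only a coarse estimate, or a ranking of high-scoring instances, is needed.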