Danilo P. Mandic is a Professor of signal processing at Imperial College London, UK, and has been working in the areas of adaptive signal processing and bioengineering. He is a Fellow of the IEEE, a member of the Board of Governors of the International Neural Networks Society (INNS), a member of the Big Data Chapter within INNS, and a member of the IEEE SPS Technical Committee on Signal Processing Theory and Methods. He has received five best paper awards in brain-computer interface research, runs the Smart Environments Lab at Imperial, and has more than 300 publications in journals and conferences. He has authored/coauthored the research monographs Recurrent Neural Networks for Prediction (Wiley, 2001), Complex Valued Nonlinear Adaptive Filters: Nonlinearity, Widely Linear and Neural Models (Wiley, 2009), and the two-volume monograph Tensor Networks for Dimensionality Reduction and Large Scale Optimisation (Now Publishers, 2016, 2017). Prof Mandic has received the President's Award for Excellence in Postgraduate Supervision at Imperial. He is a pioneer of Ear-EEG, a radically new in-the-ear-canal EEG recording system, and has extended this work to in-ear monitoring of vital signs. This work has appeared in IEEE Spectrum and MIT Technology Review and has won several awards.
Tensor Networks and their Potential Applications in Dimensionality Reduction and Blind Signal Processing
In this talk we briefly discuss tensor networks, which provide a natural sparse and distributed representation for large-scale data, and address both established and emerging methodologies for tensor-based decomposition and optimization. Our particular focus will be on low-rank tensor network representations, which allow huge data tensors to be approximated (compressed) by interconnected low-order core tensors. The usefulness of this concept is illustrated across a number of applied areas, including multiway analysis, multilinear ICA/BSS, deep learning, generalized regression, tensor canonical correlation analysis and higher-order partial least squares. Special emphasis will be given to the links between tensor networks and deep learning, and to the ability of some specific tensor networks to significantly compress both the fully connected layers and the convolutional layers of deep neural networks.
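As a small illustration of the core idea, the sketch below (my own minimal example in NumPy, not code from the talk; all function names are illustrative) factorizes a data tensor into a chain of interconnected low-order cores via sequential truncated SVDs, in the style of the tensor-train (TT) decomposition, and reconstructs an approximation from the cores:

```python
import numpy as np

def tt_decompose(tensor, max_rank):
    """Decompose a d-way tensor into a chain of 3-way cores
    G_k of shape (r_{k-1}, n_k, r_k) via sequential truncated SVDs
    (a minimal TT-SVD-style sketch)."""
    shape = tensor.shape
    d = len(shape)
    cores = []
    rank = 1
    unfolding = tensor.reshape(rank * shape[0], -1)
    for k in range(d - 1):
        U, S, Vt = np.linalg.svd(unfolding, full_matrices=False)
        r_next = min(max_rank, len(S))          # truncate to the target rank
        U, S, Vt = U[:, :r_next], S[:r_next], Vt[:r_next, :]
        cores.append(U.reshape(rank, shape[k], r_next))
        rank = r_next
        # carry the remaining factor forward and refold for the next mode
        unfolding = (np.diag(S) @ Vt).reshape(rank * shape[k + 1], -1)
    cores.append(unfolding.reshape(rank, shape[-1], 1))
    return cores

def tt_reconstruct(cores):
    """Contract the chain of cores back into a full tensor."""
    out = cores[0]
    for core in cores[1:]:
        out = np.tensordot(out, core, axes=([-1], [0]))
    return out.reshape([c.shape[1] for c in cores])

# A rank-1 test tensor is represented exactly by the low-rank cores,
# with far fewer parameters than the full 4 x 5 x 6 = 120 entries.
rng = np.random.default_rng(0)
a, b, c = rng.random(4), rng.random(5), rng.random(6)
T = np.einsum('i,j,k->ijk', a, b, c)
cores = tt_decompose(T, max_rank=2)
T_hat = tt_reconstruct(cores)
n_params = sum(core.size for core in cores)
```

Here the compression comes from storing only the small 3-way cores: their total parameter count grows linearly in the number of modes (for fixed ranks), whereas the full tensor grows exponentially, which is what makes such formats attractive for compressing, e.g., the weight tensors of neural network layers.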
- Cichocki, A., Lee, N., Oseledets, I., Phan, A. H., Zhao, Q., & Mandic, D. P. (2016). Tensor Networks for Dimensionality Reduction and Large-Scale Optimization: Part 1 Low-Rank Tensor Decompositions. Foundations and Trends® in Machine Learning, 9(4-5), 249-429. https://arxiv.org/abs/1609.00893
- Cichocki, A., Phan, A. H., Zhao, Q., Lee, N., Oseledets, I., Sugiyama, M., & Mandic, D. P. (2017). Tensor Networks for Dimensionality Reduction and Large-Scale Optimization: Part 2 Applications and Future Perspectives. Foundations and Trends® in Machine Learning, 9(6), 431-673. https://arxiv.org/abs/1708.09165
- Cichocki, A., Mandic, D., De Lathauwer, L., Zhou, G., Zhao, Q., Caiafa, C., & Phan, H. A. (2015). Tensor Decompositions for Signal Processing Applications: From Two-Way to Multiway Component Analysis. IEEE Signal Processing Magazine, 32(2), 145-163.