#### Tutorial 1: Mining Sparse Representations: Theory, Algorithms, and Applications

**Abstract:**

The objective of this tutorial is to give a comprehensive overview of the theories,
algorithms, and applications of sparse learning. The last decade has witnessed a
growing interest in the search for sparse representations of data, as the underlying
representations of many real-world processes are often sparse. For example, in
disease diagnosis, even though humans have a huge number of genes, only a small
number of them contribute to a certain disease (Golub et al., 1999; Guyon et al.,
2002). In neuroscience, the neural representation of sounds in the auditory cortex
of unanesthetized animals is sparse, since the fraction of neurons that are active
at a given instant is typically small (Hromadka et al., 2008). In signal processing,
many natural signals are sparse in that they have concise representations when
expressed under a proper basis (Candès & Wakin, 2008). Therefore, finding sparse
representations is fundamentally important in many fields of science.

This tutorial will introduce the necessary background for sparse learning,
present sparse learning techniques based on L1-norm regularization and its
variants, demonstrate successful applications of these techniques in various
domains, introduce efficient algorithms for the associated optimization problems,
and discuss recent advances and future trends in the area.
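To make the L1-regularization idea concrete, the following is a minimal sketch of one standard solver for the L1-regularized least-squares (Lasso) problem: iterative shrinkage-thresholding (ISTA), whose core step is the soft-thresholding operator. The problem sizes, regularization parameter, and iteration count below are illustrative choices, not values from the tutorial.

```python
import numpy as np

def soft_threshold(x, t):
    # Elementwise soft-thresholding: the proximal operator of the L1 norm.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, b, lam, n_iter=1000):
    """Approximately solve min_x 0.5*||Ax - b||^2 + lam*||x||_1
    by iterative shrinkage-thresholding (ISTA)."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)           # gradient of the smooth least-squares term
        x = soft_threshold(x - grad / L, lam / L)
    return x

# Illustrative setup: recover a 3-sparse vector from 50 random measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 100))
x_true = np.zeros(100)
x_true[[3, 17, 42]] = [1.5, -2.0, 1.0]
b = A @ x_true
x_hat = ista(A, b, lam=0.1)
```

Because the L1 penalty zeroes out small coefficients at every iteration, the recovered `x_hat` is itself sparse, with its largest entries on the support of `x_true`; this is the behavior that makes L1 regularization a practical surrogate for sparsity in the applications the tutorial covers.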

**Tutors' Biographies:**

Jun Liu is a Postdoctoral Associate at the Biodesign Institute at Arizona State University.
He received his Ph.D. in Computer Science from Nanjing University of
Aeronautics and Astronautics in 2007. His research areas include sparse learning,
large-scale optimization, and dimensionality reduction.

Shuiwang Ji is a Ph.D. candidate in the Department of Computer Science and Engineering
at Arizona State University. His research interests include sparse learning,
dimensionality reduction, multi-task learning, kernel methods, large-scale optimization,
and biological image informatics.

Jieping Ye is an Assistant Professor in the Department of Computer Science and
Engineering at Arizona State University. He received his Ph.D. in Computer Science
from the University of Minnesota, Twin Cities in 2005. His research interests
include machine learning, data mining, and biomedical informatics. In 2004, his
paper on generalized low rank approximations of matrices won the outstanding
student paper award at the Twenty-First International Conference on Machine
Learning.
He gave a tutorial on Dimensionality Reduction at SDM 2007.