For many feature selection problems, a human defines the features that
are potentially useful, and then a subset is chosen from the original pool
of features using an automated feature selection algorithm. In contrast
to supervised learning, unsupervised learning tasks provide no class
labels to guide the feature search. In this paper, we introduce
Visual-FSSEM (Visual Feature Subset Selection using Expectation-Maximization
Clustering), which incorporates visualization techniques, clustering, and
user interaction to guide the feature subset search and to enable a deeper
understanding of the data. Visual-FSSEM serves as both an exploratory
data analysis tool and a multivariate-data visualization tool. We
illustrate Visual-FSSEM on
a high-resolution computed tomography lung image data set.
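To make the underlying idea concrete, the following is a minimal sketch (not the authors' implementation) of a wrapper-style feature subset search around EM clustering: a greedy forward search fits a Gaussian mixture on each candidate subset and scores the resulting clusters with a scatter-separability criterion. The synthetic data, the subset-size budget `k`, and the choice of criterion are all illustrative assumptions.

```python
# Sketch of wrapper feature selection around EM clustering (illustrative
# only; not the Visual-FSSEM code). Assumes scikit-learn is available.
import numpy as np
from sklearn.mixture import GaussianMixture

def separability(X, features, n_clusters=2, seed=0):
    """Cluster the given feature subset with EM (Gaussian mixture),
    then score the subset by scatter separability trace(Sw^-1 Sb)
    computed from the EM cluster labels (higher is better)."""
    Xs = X[:, features]
    labels = GaussianMixture(n_components=n_clusters,
                             random_state=seed).fit_predict(Xs)
    overall_mean = Xs.mean(axis=0)
    d = len(features)
    Sw = np.zeros((d, d))  # within-cluster scatter
    Sb = np.zeros((d, d))  # between-cluster scatter
    for c in range(n_clusters):
        Xc = Xs[labels == c]
        if len(Xc) == 0:
            continue
        pc = len(Xc) / len(Xs)
        Sw += pc * np.cov(Xc, rowvar=False, bias=True)
        diff = (Xc.mean(axis=0) - overall_mean).reshape(-1, 1)
        Sb += pc * diff @ diff.T
    return np.trace(np.linalg.pinv(Sw) @ Sb)

def forward_select(X, k=2, n_clusters=2):
    """Greedy forward search: repeatedly add the feature whose
    inclusion maximizes cluster separability, up to k features."""
    remaining = list(range(X.shape[1]))
    selected = []
    while len(selected) < k and remaining:
        best = max(remaining,
                   key=lambda f: separability(X, selected + [f], n_clusters))
        selected.append(best)
        remaining.remove(best)
    return selected

# Toy data: two cluster-informative features plus three noise features.
rng = np.random.default_rng(0)
informative = np.vstack([rng.normal(0, 1, (50, 2)),
                         rng.normal(5, 1, (50, 2))])
noise = rng.normal(0, 1, (100, 3))
X = np.hstack([informative, noise])

print(sorted(forward_select(X, k=2)))  # the informative features
```

In the full interactive system described above, a user would inspect visualizations of the clusterings produced along such a search path and steer it, rather than relying on the criterion alone.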