Conference paper

Top-down attention with features missing at random

From

Cognitive Systems, Department of Informatics and Mathematical Modeling, Technical University of Denmark

Department of Informatics and Mathematical Modeling, Technical University of Denmark

In this paper we present a top-down attention model designed for an environment in which features are missing completely at random. Following (Hansen et al., 2011), we model top-down attention as a sequential decision-making process driven by a task, modeled as a classification problem, in an environment where random subsets of features are missing but where we have the possibility to gather additional features among the ones that are missing.

Thus, the top-down attention problem is reduced to answering the question: what should be measured next? Attention is based on the top-down saliency of the missing features, given as the estimated difference in classification confusion (entropy) with and without the given feature. The difference in confusion is computed conditioned on the available set of features.
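
As a rough illustration of the idea, a minimal Python sketch of the expected entropy reduction for one candidate feature could look as follows. The function names and the discretised predictive distribution for the missing feature are assumptions for illustration, not the paper's implementation.

    import numpy as np

    def entropy(p):
        # Shannon entropy (in nats) of a class-posterior vector.
        p = np.clip(p, 1e-12, 1.0)
        return -np.sum(p * np.log(p))

    def saliency(posterior_fn, x, obs_idx, j, values_j, probs_j):
        # Expected drop in classification entropy from measuring missing feature j.
        # posterior_fn(x, idx) -> p(class | features of x listed in idx);
        # values_j / probs_j: a discretised predictive distribution p(x_j | x_obs).
        h_current = entropy(posterior_fn(x, obs_idx))
        h_expected = 0.0
        for v, pv in zip(values_j, probs_j):
            x_filled = np.array(x, dtype=float)
            x_filled[j] = v
            h_expected += pv * entropy(posterior_fn(x_filled, obs_idx + [j]))
        return h_current - h_expected  # larger = more informative to measure next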

In this work, we make our attention model more realistic by also allowing the initial training phase to take place with incomplete data. Thus, we expand the model to include a missing data technique in the learning process. The top-down attention mechanism is implemented in a Gaussian Discrete mixture model setting where marginals and conditionals are relatively easy to compute.
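The mixture-model claim can be made concrete: marginalising a Gaussian component over missing dimensions only requires dropping the corresponding entries of its mean and covariance. Below is a minimal Python sketch assuming each mixture component carries a class label; the parameter layout and the component-per-class labelling are illustrative assumptions, not the paper's code.

    import numpy as np
    from scipy.stats import multivariate_normal

    def class_posterior(weights, means, covs, comp_labels, x, obs_idx):
        # p(class | observed features) from a class-labelled Gaussian mixture.
        obs = np.asarray(obs_idx)
        x_obs = np.asarray(x, dtype=float)[obs]
        classes = np.unique(comp_labels)
        scores = np.zeros(len(classes))
        for w, mu, cov, lab in zip(weights, means, covs, comp_labels):
            # Marginal of a Gaussian over a subset of dimensions: take the
            # sub-vector of the mean and the sub-matrix of the covariance.
            lik = multivariate_normal.pdf(x_obs,
                                          mean=np.asarray(mu)[obs],
                                          cov=np.asarray(cov)[np.ix_(obs, obs)])
            scores[classes == lab] += w * lik
        return scores / scores.sum()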

To illustrate the viability of the expanded model, we train the mixture model on two different datasets: a synthetic dataset and the well-known Yeast dataset from the UCI Machine Learning Repository. We evaluate the new algorithm in environments characterized by different amounts of incompleteness and compare its performance with a system that selects the next feature to measure at random.
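
Schematically, this comparison amounts to running the same sequential acquisition loop with two different selection rules. The helper below is a hypothetical illustration of that protocol, not the paper's evaluation code.

    import random

    def acquire(x_true, obs_idx, budget, pick):
        # Sequentially "measure" features until the budget is exhausted.
        # pick(x, obs_idx, missing) returns the index to measure next:
        # plug in a saliency-based rule, or the random baseline below.
        obs_idx = list(obs_idx)
        for _ in range(budget):
            missing = [j for j in range(len(x_true)) if j not in obs_idx]
            if not missing:
                break
            obs_idx.append(pick(x_true, obs_idx, missing))
        return obs_idx

    # Random baseline: ignore the data and pick a missing feature uniformly.
    random_pick = lambda x, obs, missing: random.choice(missing)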

The proposed top-down mechanism clearly outperforms random choice of the next feature.

Language: English
Publisher: IEEE
Year: 2011
Pages: 1-6
Proceedings: 2011 IEEE International Workshop on Machine Learning for Signal Processing
Series: Machine Learning for Signal Processing
ISBN: 1457716216, 1457716224, 9781457716218, 9781457716225, 1457716232 and 9781457716232
ISSN: 2161-0363 and 1551-2541
Types: Conference paper
DOI: 10.1109/MLSP.2011.6064577
ORCIDs: Larsen, Jan and Hansen, Lars Kai
