One paper accepted to ECCV 2018!

Deep Component Analysis via Alternating Direction Neural Networks

Calvin Murdock, Ming-Fang Chang, Simon Lucey

European Conference on Computer Vision (ECCV), forthcoming.

New paper posted to arXiv!

Deep Component Analysis via Alternating Direction Neural Networks

Calvin Murdock, Ming-Fang Chang, Simon Lucey

European Conference on Computer Vision (ECCV), forthcoming.

Deep Component Analysis via Alternating Direction Neural Networks

Calvin Murdock, Ming-Fang Chang, Simon Lucey

Abstract

Despite a lack of theoretical understanding, deep neural networks have achieved unparalleled performance in a wide range of applications. On the other hand, shallow representation learning with component analysis is associated with rich intuition and theory, but smaller capacity often limits its usefulness. To bridge this gap, we introduce Deep Component Analysis (DeepCA), an expressive multilayer model formulation that enforces hierarchical structure through constraints on latent variables in each layer. For inference, we propose a differentiable optimization algorithm implemented using recurrent Alternating Direction Neural Networks (ADNNs) that enable parameter learning using standard backpropagation. By interpreting feed-forward networks as single-iteration approximations of inference in our model, we provide both a novel theoretical perspective for understanding them and a practical technique for constraining predictions with prior knowledge. Experimentally, we demonstrate performance improvements on a variety of tasks, including single-image depth prediction with sparse output constraints.
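
As a rough illustration of how feed-forward networks arise as single-iteration approximations of constrained inference, here is a minimal NumPy sketch. It uses plain projected gradient descent with a non-negativity constraint instead of the ADMM updates that the paper's ADNNs actually unroll, and the toy model, step size, and names (deepca_inference, relu) are illustrative assumptions rather than the authors' implementation. With zero refinement iterations, only the feed-forward ReLU initialization runs.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    # Euclidean projection onto the non-negativity constraint set.
    return np.maximum(z, 0.0)

def deepca_inference(x, weights, num_iters=50, step=0.05):
    """Toy projected-gradient sketch (an assumption, not the paper's ADMM/ADNN).

    Hypothetical model: x ~ W1 z1, z1 ~ W2 z2, ..., with each z_l >= 0.
    Minimizes 0.5 * sum_l ||z_{l-1} - W_l z_l||^2 over the codes, where z_0 = x.
    """
    # Feed-forward initialization: z_l = relu(W_l^T z_{l-1}).
    codes = [x]
    for W in weights:
        codes.append(relu(W.T @ codes[-1]))

    for _ in range(num_iters):
        for l in range(1, len(codes)):
            W = weights[l - 1]
            # Gradient of the reconstruction term for layer l...
            grad = W.T @ (W @ codes[l] - codes[l - 1])
            if l < len(codes) - 1:
                # ...plus the term coupling z_l to the layer above.
                grad += codes[l] - weights[l] @ codes[l + 1]
            codes[l] = relu(codes[l] - step * grad)
    return codes[1:]

# With num_iters=0, inference reduces to an ordinary feed-forward ReLU network.
x = rng.standard_normal(16)
weights = [rng.standard_normal((16, 8)), rng.standard_normal((8, 4))]
print([z.shape for z in deepca_inference(x, weights, num_iters=0)])
```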


Approximate Grassmannian Intersections: Subspace-Valued Subspace Learning

Calvin Murdock, Fernando De la Torre

Abstract

Subspace learning is one of the most foundational tasks in computer vision with applications ranging from dimensionality reduction to data denoising. As geometric objects, subspaces have also been successfully used for efficiently representing certain types of invariant data. However, methods for subspace learning from subspace-valued data have been notably absent due to incompatibilities with standard problem formulations. To fill this void, we introduce Approximate Grassmannian Intersections (AGI), a novel geometric interpretation of subspace learning posed as finding the approximate intersection of constraint sets on a Grassmann manifold. Our approach can naturally be applied to input subspaces of varying dimension while reducing to standard subspace learning in the case of vector-valued data. Despite the nonconvexity of our problem, its globally-optimal solution can be found using a singular value decomposition. Furthermore, we also propose an efficient, general optimization approach that can incorporate additional constraints to encourage properties such as robustness. Alongside standard subspace applications, AGI also enables the novel task of transfer learning via subspace completion. We evaluate our approach on a variety of applications, demonstrating improved invariance and generalization over vector-valued alternatives.
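
The claim that a globally-optimal solution is available from a singular value decomposition suggests the following toy sketch: stack orthonormal bases of the input subspaces and keep the top-k left singular vectors. This is a hedged reading of the idea rather than the paper's exact objective, and the helper names (orthonormal_basis, approximate_intersection) and the fit measure reported at the end are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def orthonormal_basis(A):
    # Orthonormal basis for the column span of A (thin QR).
    Q, _ = np.linalg.qr(A)
    return Q

def approximate_intersection(bases, k):
    # Toy sketch: a k-dimensional subspace close to every input subspace,
    # taken as the top-k left singular vectors of the stacked bases.
    stacked = np.hstack(bases)                      # ambient_dim x (sum of input dims)
    U, _, _ = np.linalg.svd(stacked, full_matrices=False)
    return U[:, :k]                                 # orthonormal basis of the result

# Example: input subspaces of R^10 with varying dimensions.
bases = [orthonormal_basis(rng.standard_normal((10, d))) for d in (3, 3, 2, 4, 3)]
B = approximate_intersection(bases, k=3)
# Mean squared cosine of the principal angles between each input subspace and B.
for Ui in bases:
    print(np.sum((Ui.T @ B) ** 2) / Ui.shape[1])
```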


Presenting “Approximate Grassmannian Intersections” at ICCV 2017!

Approximate Grassmannian Intersections: Subspace-Valued Subspace Learning

Calvin Murdock, Fernando De la Torre

International Conference on Computer Vision (ICCV), 2017.

Additive Component Analysis

Calvin Murdock, Fernando De la Torre

Abstract

Principal component analysis (PCA) is one of the most versatile tools for unsupervised learning with applications ranging from dimensionality reduction to exploratory data analysis and visualization. While much effort has been devoted to encouraging meaningful representations through regularization (e.g. non-negativity or sparsity), underlying linearity assumptions can limit their effectiveness. To address this issue, we propose Additive Component Analysis (ACA), a novel nonlinear extension of PCA. Inspired by multivariate nonparametric regression with additive models, ACA fits a smooth manifold to data by learning an explicit mapping from a low-dimensional latent space to the input space, which trivially enables applications like denoising. Furthermore, ACA can be used as a drop-in replacement in many algorithms that use linear component analysis methods as a subroutine via the local tangent space of the learned manifold. Unlike many other nonlinear dimensionality reduction techniques, ACA can be efficiently applied to large datasets since it does not require computing pairwise similarities or storing training data during testing. Multiple ACA layers can also be composed and learned jointly with essentially the same procedure for improved representational power, demonstrating the encouraging potential of nonparametric deep learning. We evaluate ACA on a variety of datasets, showing improved robustness, reconstruction performance, and interpretability.
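
As a rough sketch of the additive-model idea, the example below decodes each input dimension as a sum of univariate functions of the latent coordinates, here crude polynomial features, and fits codes and decoder coefficients by alternating gradient steps. The basis choice, optimizer, and names (fit_aca_sketch, basis) are assumptions for illustration; the paper's actual fitting procedure differs.

```python
import numpy as np

rng = np.random.default_rng(0)

def basis(Z, degree=3):
    # Univariate polynomial features of each latent coordinate: a crude
    # stand-in for the smooth basis functions an additive model would use.
    # Z: (n, k) -> (n, k * degree), ordered in blocks of k columns per power.
    return np.hstack([Z ** (p + 1) for p in range(degree)])

def fit_aca_sketch(X, k=2, degree=3, num_iters=200, lr=1e-2):
    # Hypothetical additive autoencoder: X is reconstructed as basis(Z) @ C,
    # so each output dimension is a sum of univariate functions of Z's columns.
    n, d = X.shape
    Z = 0.1 * rng.standard_normal((n, k))
    C = 0.1 * rng.standard_normal((k * degree, d))
    for _ in range(num_iters):
        Phi = basis(Z, degree)
        R = Phi @ C - X                                  # reconstruction residual
        C -= lr * Phi.T @ R / n                          # step on decoder coefficients
        dPhi = np.hstack([(p + 1) * Z ** p for p in range(degree)])
        G = (R @ C.T) * dPhi                             # chain rule through the basis
        Z -= lr * sum(G[:, p * k:(p + 1) * k] for p in range(degree)) / n
    return Z, C

X = rng.standard_normal((200, 5))
Z, C = fit_aca_sketch(X)
print(Z.shape, C.shape)
```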
