# Structure Learning in Sparse-Data Environments with Discrete Random Walks

We study the problem of constructing a semantic data model from low-dimensional sparse data using a random-walk approach. The goal is to recover a high-dimensional vector-space representation from sparse observations. We consider a collection of datasets in which the model is fitted by stochastic optimization and the data are assumed to admit a sparse solution. Learning proceeds via a greedy optimization step followed by a sequential search that alternates between a small local optimizer and a global optimizer. The resulting solution is consistent with the low-level representation of the data, and the learned model is efficient and robust to noise. We show that this approach is equivalent to minimizing a small subset of the entries of a deep network, provided the global optimizer returns results consistent with the low-level representation of the data. Experiments on both synthetic and real data show that the proposed approach is effective for learning from sparse datasets under arbitrary data and noise conditions.
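The abstract leaves the procedure at a high level. As one minimal illustrative sketch of what structure estimation with a discrete random walk can look like, the snippet below walks over a feature co-occurrence graph built from sparse binary data and accumulates visit counts as a rough affinity structure. The function name, the co-occurrence weighting, and every parameter here are assumptions for illustration, not the paper's method.

```python
import numpy as np

def random_walk_structure(X, n_steps=1000, seed=0):
    """Estimate a feature-affinity matrix from sparse binary data X
    via a discrete random walk over feature co-occurrence.

    X: (n_samples, n_features) array with 0/1 entries.
    Returns a (n_features, n_features) matrix of transition visit counts.
    """
    rng = np.random.default_rng(seed)
    n_features = X.shape[1]
    # Co-occurrence counts define the walk's transition weights.
    C = (X.T @ X).astype(float)
    np.fill_diagonal(C, 0.0)
    # Small smoothing keeps rows valid even for isolated features,
    # then rows are normalized to transition probabilities.
    P = C + 1e-9
    P = P / P.sum(axis=1, keepdims=True)
    visits = np.zeros((n_features, n_features))
    state = rng.integers(n_features)
    for _ in range(n_steps):
        nxt = rng.choice(n_features, p=P[state])
        visits[state, nxt] += 1
        state = nxt
    return visits
```

Features that co-occur in many samples end up connected by high-probability transitions, so the walk spends most of its steps inside clusters of related features; thresholding the visit matrix gives a crude structure estimate.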

An Efficient Framework for Fuzzy Classifiers

Discovery Points for Robust RGB-D Object Recognition

Stacked Generative Adversarial Networks for Multi-Resolution 3D Point Cloud Regression

# Fast Learning of Multi-Task Networks for Predictive Modeling

In this paper we propose a general method, named Context-aware Temporal Learning (CTL), for extracting long-term dependencies across subnetworks of multi-task networks (MTNs). To understand why the method is useful for this task, we examine the impact of two factors: (1) the structure of the MTN and its effect on model performance; and (2) the number of training blocks. The results indicate that in this setting we can achieve state-of-the-art performance while using only two large MTNs.
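The abstract does not specify the MTN architecture. As a minimal sketch of what "subnetworks of a multi-task network" can mean, the snippet below uses a shared trunk with one task-specific head per task; all names, layer sizes, and the two-task setup are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Shared trunk: one hidden layer reused by every task.
W_shared = rng.normal(size=(8, 16)) * 0.1

# Task-specific heads ("subnetworks"): one output layer per task.
W_heads = {
    "task_a": rng.normal(size=(16, 1)) * 0.1,
    "task_b": rng.normal(size=(16, 3)) * 0.1,
}

def forward(x, task):
    """Run input x through the shared trunk, then the head for `task`."""
    h = np.maximum(x @ W_shared, 0.0)   # ReLU trunk features
    return h @ W_heads[task]

x = rng.normal(size=(4, 8))             # batch of 4 inputs
print(forward(x, "task_a").shape)       # (4, 1)
print(forward(x, "task_b").shape)       # (4, 3)
```

In this layout, dependencies shared across tasks are carried by the trunk, while each head specializes, which is one common reading of extracting structure "across subnetworks".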