Feature-Augmented Visuomotor Learning for Accurate Identification of Manipulating Objects – This paper describes a simple yet effective technique for detecting object-specific behaviors from deep networks of object-sensitive photometric sensors. An attention mechanism guides object detection by leveraging the photometric information carried by object features: a deep convolutional neural network (CNN) maps photometric patterns in the input to the target object features, and the trained network then provides a visual interpretation of those features. We show that the proposed method outperforms state-of-the-art tracking approaches and achieves higher accuracy than state-of-the-art object detection approaches.
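The abstract describes the architecture only at a high level. The following is a minimal illustrative sketch, not the paper's implementation: it assumes a PyTorch-style model (the class name `PhotometricAttention` and all layer sizes are hypothetical) in which one CNN branch extracts object features and a second branch maps the photometric input to a spatial attention mask that re-weights those features before a classification head.

```python
# Hypothetical sketch of a CNN attention mechanism over photometric input;
# not the paper's implementation, only an illustration of the described idea.
import torch
import torch.nn as nn

class PhotometricAttention(nn.Module):
    def __init__(self, in_channels: int = 3, feat_channels: int = 64, num_classes: int = 10):
        super().__init__()
        # CNN backbone extracting object features from the photometric input.
        self.backbone = nn.Sequential(
            nn.Conv2d(in_channels, feat_channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(feat_channels, feat_channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        # Attention branch: maps photometric patterns to a spatial mask in [0, 1].
        self.attention = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(16, 1, kernel_size=1),
            nn.Sigmoid(),
        )
        # Simple per-class scoring head over the attended features.
        self.head = nn.Linear(feat_channels, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = self.backbone(x)            # (B, C, H, W) object features
        attn = self.attention(x)            # (B, 1, H, W) photometric attention mask
        attended = feats * attn             # re-weight features by attention
        pooled = attended.mean(dim=(2, 3))  # global average pooling
        return self.head(pooled)            # per-class object scores

if __name__ == "__main__":
    model = PhotometricAttention()
    scores = model(torch.randn(2, 3, 64, 64))
    print(scores.shape)  # torch.Size([2, 10])
```

The separate attention branch is one plausible reading of "an attention mechanism implemented by a deep CNN"; the original does not say whether attention is computed from raw input or from intermediate features.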
Boosting and Deblurring with a Convolutional Neural Network
Predicting protein-ligand binding sites by deep learning with single-label sparse predictor learning
Convex Dictionary Learning using Marginalized Tensors and Tensor Completion
Multi-view Learning for Stochastic and Convex Optimization on Distributed Data – This paper investigates the use of stochastic gradient descent for stochastic optimization problems. In this setting, we solve a new stochastic optimization problem derived from the stochastic gradient descent problem: a multi-dimensional optimization problem with a dimensionality-reducing regret. We propose a stochastic gradient descent variant with a deterministic update rule that remains well-defined in non-differentiable settings. We prove that, under generalization assumptions, the regret of this problem admits fixed upper and lower bounds. By minimizing both the expected regret and the regret in non-differentiable settings, we obtain a general criterion for convergence guarantees within a stochastic optimization framework. Experiments on both synthetic and real-world datasets demonstrate the effectiveness of our approach.
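The abstract does not specify the algorithm or the regret definition. Below is a minimal sketch, assuming plain stochastic gradient descent on a synthetic least-squares stream with cumulative regret measured against the best fixed parameter in hindsight; the variable names, the O(1/sqrt(t)) step size, and the objective are illustrative assumptions, not the paper's method.

```python
# Hypothetical sketch: plain SGD on a synthetic least-squares objective,
# tracking cumulative regret against the best fixed parameter in hindsight.
import numpy as np

rng = np.random.default_rng(0)
d, T = 5, 2000
w_true = rng.normal(size=d)

# Stream of stochastic examples: y = <w_true, x> + noise.
X = rng.normal(size=(T, d))
y = X @ w_true + 0.1 * rng.normal(size=T)

def loss(w, x, target):
    return 0.5 * (x @ w - target) ** 2

def grad(w, x, target):
    return (x @ w - target) * x

# Best fixed comparator in hindsight (ordinary least squares).
w_star, *_ = np.linalg.lstsq(X, y, rcond=None)

w = np.zeros(d)
regret = 0.0
for t in range(T):
    eta = 1.0 / np.sqrt(t + 1)          # standard O(1/sqrt(t)) step size
    regret += loss(w, X[t], y[t]) - loss(w_star, X[t], y[t])  # evaluate before the update
    w -= eta * grad(w, X[t], y[t])      # SGD update

print(f"average regret after {T} rounds: {regret / T:.4f}")
```

For a smooth convex stream such as this one, average regret shrinks toward zero as T grows; the abstract's non-differentiable setting would replace the gradient with a subgradient, which this sketch does not cover.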