Predictive Policy Improvement with Stochastic Gradient Descent – This paper describes a technique for performing inference from a large set of probabilistic constraints on the future. The goal is to learn predictors from these large-scale constraints. We address the problem of explaining how predictive models arrive at their predictions, and show that a general framework called predictive policy improvement (PPI) generalizes a policy improvement method widely used in reinforcement learning.
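The abstract leaves the optimization procedure unspecified; as a rough illustration only, here is a minimal sketch of policy improvement driven by stochastic gradient ascent, assuming a softmax policy with a REINFORCE-style gradient on a toy bandit problem (the arm rewards, learning rate, and all names below are hypothetical, not taken from the paper):

```python
# Minimal sketch: policy improvement via stochastic gradient ascent
# on a toy 3-armed bandit. Hypothetical setup; the paper's actual
# objective and update rule are not specified in the abstract.
import numpy as np

rng = np.random.default_rng(0)
true_means = np.array([0.1, 0.5, 0.9])  # hypothetical expected rewards
theta = np.zeros(3)                     # softmax policy parameters
lr = 0.1                                # step size

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

for step in range(2000):
    probs = softmax(theta)
    a = rng.choice(3, p=probs)           # sample an action from the policy
    r = rng.normal(true_means[a], 0.1)   # observe a stochastic reward
    grad_log = -probs                    # gradient of log pi(a | theta)
    grad_log[a] += 1.0
    theta += lr * r * grad_log           # stochastic gradient ascent step

print("learned policy:", softmax(theta).round(3))  # mass on the best arm
```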
A Random Fourier Feature Based on Binarized Quadrature
A Hierarchical Segmentation Model for 3D Action Camera Footage
DeepFace: Learning to See People in Real Time
Tight Inference for Non-Negative Matrix Factorization – As an alternative to classic sparse vector factorization (SVF), we propose a two-vector (2V) representation of the data, which is well suited to nonnegative matrices. In contrast to typical sparse learning models that try to preserve identity or features, we show that our 2V representation can handle matrices of large dimensionality by using a new variant of the convex relaxation of the log-likelihood. Our results show a substantial improvement over state-of-the-art approaches to dimensionality reduction on sparse data, and our method is based on the principle that a linear approximation of the log-likelihood is equivalent to a convex relaxation.
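The "2V representation" and its convex relaxation are not defined in the abstract, so the sketch below shows only the standard baseline it contrasts with: nonnegative matrix factorization fit by the Lee and Seung multiplicative updates under the KL (Poisson log-likelihood) objective. The matrix shapes and latent rank are hypothetical:

```python
# Standard NMF baseline via multiplicative updates (Lee & Seung);
# shown as context only, since the paper's 2V method is not specified.
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((50, 40))            # hypothetical nonnegative data matrix
k = 5                               # hypothetical latent rank
W = rng.random((50, k)) + 1e-3      # nonnegative factors, kept positive
H = rng.random((k, 40)) + 1e-3

for it in range(200):
    WH = W @ H + 1e-12
    # Update H: each step monotonically decreases KL(X || WH)
    H *= (W.T @ (X / WH)) / W.sum(axis=0)[:, None]
    WH = W @ H + 1e-12
    # Update W with the symmetric rule
    W *= ((X / WH) @ H.T) / H.sum(axis=1)[None, :]

print("mean abs reconstruction error:", float(np.abs(X - W @ H).mean()))
```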