The Role of Recurrence and Other Constraints in Bayesian Deep Learning Models of Knowledge Maps – We present a general method for learning feature representations from the knowledge base of an underlying Bayesian network. The method has two steps. First, a new feature distribution over the data is generated and used to estimate the posterior distribution of the Bayesian network; because each new feature is itself a feature vector, its prior can be computed directly from the associated feature distribution over the data. Second, the resulting posterior distribution is itself represented as a Bayesian network. We study the learning capacity of such a model and, on a machine learning dataset, train a deep network with a recurrent neural network (RNN) component to estimate the posterior distribution. Experiments show that the system outperforms previous state-of-the-art Bayesian networks by a large margin, and that the neural network-based representations are considerably more interpretable than conventional Bayesian networks.
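A minimal sketch of the posterior-estimation step described above, under the assumption of a Gaussian approximation to the posterior: a small recurrent network reads a sequence of feature vectors and emits the mean and log-variance of the estimated posterior. The layer sizes, the class name RNNPosteriorEstimator, and the Gaussian parameterization are illustrative assumptions, not details taken from the abstract.

```python
import torch
import torch.nn as nn

# Hypothetical sketch: an RNN reads a sequence of feature vectors and emits the
# mean and log-variance of a Gaussian approximation to a posterior distribution.
# All dimensions and names are illustrative assumptions, not from the paper.
class RNNPosteriorEstimator(nn.Module):
    def __init__(self, feature_dim=16, hidden_dim=32, latent_dim=8):
        super().__init__()
        self.rnn = nn.GRU(feature_dim, hidden_dim, batch_first=True)
        self.to_mean = nn.Linear(hidden_dim, latent_dim)
        self.to_logvar = nn.Linear(hidden_dim, latent_dim)

    def forward(self, features):
        # features: (batch, seq_len, feature_dim)
        _, h_n = self.rnn(features)   # final hidden state summarizes the sequence
        h = h_n.squeeze(0)            # (batch, hidden_dim)
        return self.to_mean(h), self.to_logvar(h)

# Toy usage on random data.
model = RNNPosteriorEstimator()
x = torch.randn(4, 10, 16)            # 4 sequences of 10 feature vectors
mean, logvar = model(x)
print(mean.shape, logvar.shape)       # torch.Size([4, 8]) torch.Size([4, 8])
```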
The Dempster-Shafer theory of variance and its application in machine learning
Binary Projections for Nonlinear Support Vector Machines
Determining the optimal scoring path using evolutionary process predictions
Deep Learning with Nonconvex Priors and Nonconvex Loss Functions – We present a novel method by which neural networks solve complex optimization problems that combine a linear optimization objective with a regularizer, handling both jointly. Under certain assumptions on the objective function, this yields a closed-form solution. The method is demonstrated to yield results competitive with state-of-the-art approaches on multiple tasks.
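The abstract does not specify the objective or regularizer, so the following is only an illustrative sketch of the kind of closed form it alludes to: a least-squares objective with an L2 regularizer (ridge regression), whose minimizer w = (X^T X + lam*I)^{-1} X^T y is available in closed form. The function name ridge_closed_form and the toy data are assumptions for illustration, not the paper's method.

```python
import numpy as np

# Illustrative only: one familiar case where an objective plus a regularizer
# admits a closed-form solution. Minimizes ||Xw - y||^2 + lam * ||w||^2.
def ridge_closed_form(X, y, lam=1.0):
    d = X.shape[1]
    # Closed-form minimizer: w = (X^T X + lam * I)^{-1} X^T y
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

# Toy usage on synthetic data.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
w_true = rng.normal(size=5)
y = X @ w_true + 0.1 * rng.normal(size=100)
w_hat = ridge_closed_form(X, y, lam=0.5)
print(np.round(w_hat, 3))
```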