Convex Optimization Algorithms for Learning Hidden Markov Models

The problem of fitting a given model in high-dimensional space is of primary importance. In this paper we propose a novel general-purpose learning algorithm for optimizing the joint probability density function of a model. The joint density is a central object in high-dimensional probabilistic modelling; we treat it as a distribution over the model's configurations and optimize it directly. Based on this formulation we present a new dimension-reducing learning algorithm, called Joint Probabilistic Regret Optimization. At each iteration the algorithm draws new labels from a high-dimensional discrete-valued probability distribution and fits a joint density that captures the resulting posterior information. Our method achieves state-of-the-art performance on data sets from three domains: real-world, biomedical, and synthetic data.
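The abstract gives no pseudocode, so the following is only a loose sketch of the iteration it describes: sample discrete labels from the current labelling distribution, refit a simple joint density over (feature, label) pairs, and update the labelling distribution from the posterior. All names, the per-label Gaussian density, and the update rule are illustrative assumptions, not the paper's method.

```python
import numpy as np

# Hypothetical sketch of the iteration described above. The per-label
# Gaussian joint model is a stand-in; the abstract does not specify one.
rng = np.random.default_rng(0)

n_points, n_features, n_labels = 200, 5, 3
X = rng.normal(size=(n_points, n_features))      # observed data
label_probs = np.full(n_labels, 1.0 / n_labels)  # P(y), initially uniform

for iteration in range(10):
    # Step 1: generate new labels from the discrete-valued distribution.
    y = rng.choice(n_labels, size=n_points, p=label_probs)

    # Step 2: fit a joint density p(x, y) = p(x | y) p(y); here one
    # spherical Gaussian mean per label.
    means = np.stack([X[y == k].mean(axis=0) if np.any(y == k)
                      else np.zeros(n_features) for k in range(n_labels)])

    # Step 3: update the labelling distribution from posterior
    # responsibilities under equal spherical covariances.
    sq_dists = ((X[:, None, :] - means[None, :, :]) ** 2).sum(axis=2)
    log_post = -0.5 * sq_dists + np.log(label_probs)
    post = np.exp(log_post - log_post.max(axis=1, keepdims=True))
    post /= post.sum(axis=1, keepdims=True)
    label_probs = post.mean(axis=0)              # new P(y)

print("final label distribution:", np.round(label_probs, 3))
```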
Multi-Resolution Video Super-resolution with Multilayer Biomedical Volumesets
Learning from Continuous Feedback: Learning to Order for Stochastic Constraint Optimization
Deep Convolutional Auto-Encoder: Learning Unsophisticated Image Generators from Noisy Labels
A Neural Network-based Approach to Key Fob Selection

In this paper, we develop a recurrent non-volatile memory encoding (R-RAM) architecture for a hierarchical neural network (HNN) to encode information. The architecture is based on an unsupervised memory encoding scheme in which a recurrent non-volatile memory both stores and decodes the contents of the model. We test the architecture on a dataset of 40 people; in three cases it is used to encode real-time data, whose state is represented by a neural network, and to encode the final output. We show that the architecture can encode many distinct aspects of key-fob-like sequences. Beyond real-time data, the architecture could also incorporate natural language processing as a future extension of its retrieval abilities. It achieves significant improvements over state-of-the-art recurrent memory encoding architectures, at a comparatively low computational cost.
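The abstract gives no implementation details; as a minimal sketch of the general idea of recurrently encoding key-fob-like button sequences into a fixed-size memory state, a plain Elman-style RNN encoder/decoder might look like the following. This is not the paper's R-RAM architecture; the 4-button alphabet, dimensions, and untrained weights are all illustrative assumptions.

```python
import numpy as np

# Hypothetical sketch: roll a key-fob button sequence into a fixed-size
# recurrent state, then unroll it back. Weights are random; a real
# system would train them (training is omitted for brevity).
rng = np.random.default_rng(1)
n_buttons, hidden = 4, 16                 # e.g. lock/unlock/trunk/panic

W_in = rng.normal(0, 0.3, (hidden, n_buttons))
W_rec = rng.normal(0, 0.3, (hidden, hidden))
W_out = rng.normal(0, 0.3, (n_buttons, hidden))

def encode(sequence):
    """Fold a button sequence into the recurrent 'memory' state."""
    h = np.zeros(hidden)
    for button in sequence:
        x = np.eye(n_buttons)[button]     # one-hot button press
        h = np.tanh(W_in @ x + W_rec @ h)
    return h

def decode(h, length):
    """Unroll the state into a predicted button sequence."""
    out = []
    for _ in range(length):
        out.append(int(np.argmax(W_out @ h)))
        h = np.tanh(W_rec @ h)            # free-running decoder step
    return out

seq = [0, 2, 2, 1]                        # lock, trunk, trunk, unlock
state = encode(seq)
print("encoded state norm:", round(float(np.linalg.norm(state)), 3))
print("decoded (untrained):", decode(state, len(seq)))
```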