Deep Multimodal Convolutional Neural Networks for Object Search – We present a new neural-network-based framework for object segmentation with deep learning that combines convolutional and recurrent neural networks. The framework is weakly supervised: it learns object segmentation from a small amount of labeled data and trains a deep residual network under the same limited supervision. We demonstrate the effectiveness of deep learning for object detection and segmentation at scales ranging from thousands to tens of thousands of pixels. We show that the model can successfully segment objects lying on a low-dimensional manifold and performs well on object detection.
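The abstract only sketches the architecture, so the following is a minimal sketch, assuming a PyTorch implementation: a small convolutional encoder, a GRU-based recurrent refinement over the flattened feature map, and a per-pixel mask head. The class name ConvRecurrentSegmenter, the layer sizes, and the choice of a GRU for the recurrent component are illustrative assumptions, not the authors' actual model.

```python
import torch
import torch.nn as nn


class ConvRecurrentSegmenter(nn.Module):
    """Hypothetical sketch: convolutional encoder + recurrent refinement + mask head."""

    def __init__(self, in_channels=3, hidden=64):
        super().__init__()
        # Convolutional encoder: extracts a downsampled feature map.
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, hidden, 3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, hidden, 3, stride=2, padding=1),
            nn.ReLU(inplace=True),
        )
        # Recurrent module: treats spatial positions as a sequence so that
        # information can propagate across the feature map.
        self.rnn = nn.GRU(hidden, hidden, batch_first=True)
        # Decoder: upsamples back to input resolution and predicts mask logits.
        self.decoder = nn.Sequential(
            nn.Upsample(scale_factor=4, mode="bilinear", align_corners=False),
            nn.Conv2d(hidden, 1, 1),
        )

    def forward(self, x):
        feats = self.encoder(x)                      # (B, C, H', W')
        b, c, h, w = feats.shape
        seq = feats.permute(0, 2, 3, 1).reshape(b, h * w, c)
        seq, _ = self.rnn(seq)                       # recurrent refinement
        feats = seq.reshape(b, h, w, c).permute(0, 3, 1, 2)
        return self.decoder(feats)                   # (B, 1, H, W) mask logits


if __name__ == "__main__":
    model = ConvRecurrentSegmenter()
    mask_logits = model(torch.randn(2, 3, 64, 64))
    print(mask_logits.shape)  # torch.Size([2, 1, 64, 64])
```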
We propose a framework to automatically recognize the identity of a person from a set of short clips and to perform face detection. The framework encodes a semantic similarity score between the clips and outputs a binary label from which identities are inferred. We use a convolutional neural network to learn semantic similarity and recognition in a supervised manner, where the learned label information is used to predict the person’s identity. The person is assumed to share the gender indicated by the label, and labels are assigned automatically according to gender. We further propose two novel techniques to generate labels from images that capture the person’s pose and gender information. These techniques yield a more accurate identification of the person as well as a more informative prediction of the person’s identity. Extensive experiments demonstrate the effectiveness of the proposed method.
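As above, a minimal sketch of how such a model could be wired up, assuming a PyTorch implementation: a shared CNN embedding with identity and gender heads, and a cosine-similarity score thresholded into the binary same/different decision. The class name FaceIdentityNet, the number of identities, the embedding size, and the 0.5 threshold are illustrative assumptions rather than details given in the abstract.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class FaceIdentityNet(nn.Module):
    """Hypothetical sketch: shared CNN embedding with identity and gender heads."""

    def __init__(self, num_identities=100, embed_dim=128):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, embed_dim),
        )
        self.identity_head = nn.Linear(embed_dim, num_identities)
        self.gender_head = nn.Linear(embed_dim, 2)

    def embed(self, x):
        # L2-normalized embedding used for the semantic similarity score.
        return F.normalize(self.backbone(x), dim=-1)

    def forward(self, x):
        z = self.embed(x)
        return self.identity_head(z), self.gender_head(z)

    def similarity(self, a, b):
        # Cosine similarity between two face crops; thresholding it gives the
        # binary same/different label described in the abstract.
        return F.cosine_similarity(self.embed(a), self.embed(b), dim=-1)


if __name__ == "__main__":
    net = FaceIdentityNet()
    clip_a = torch.randn(4, 3, 112, 112)
    clip_b = torch.randn(4, 3, 112, 112)
    same_person = net.similarity(clip_a, clip_b) > 0.5  # assumed threshold
    identity_logits, gender_logits = net(clip_a)
    print(same_person.shape, identity_logits.shape, gender_logits.shape)
```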
Embedding Information Layer with Inferred Logarithmic Structure on Graphs
Bayesian Networks and Hybrid Bayesian Models
Deep Multimodal Convolutional Neural Networks for Object Search
Fast Convergence Rate of Matrix Multiplicative Updates via Random Convexity
Deep learning for the classification of emotionally charged events