Stochastic Gradient Boosting – This paper is the first to show that a novel stochastic gradient rescaling algorithm can be derived directly from gradient-based stochastic gradient boosting. Our approach is fast and efficient, and we demonstrate its effectiveness on simulated data.
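The abstract does not spell out the boosting procedure it builds on. As a general, illustrative sketch of stochastic gradient boosting in Friedman's sense (each round fits a weak learner to the negative gradient of the loss on a random subsample of the training rows), a minimal squared-error version with regression stumps might look like the following; all function names and parameters here are illustrative, not taken from the paper:

```python
import numpy as np

def fit_stump(X, r):
    """Fit a one-split regression stump to residuals r; return a predict function."""
    n, d = X.shape
    best_err, best = np.inf, None
    for j in range(d):
        for t in np.unique(X[:, j])[:-1]:        # skip max value so the right side is non-empty
            left = X[:, j] <= t
            lv, rv = r[left].mean(), r[~left].mean()
            err = ((r[left] - lv) ** 2).sum() + ((r[~left] - rv) ** 2).sum()
            if err < best_err:
                best_err, best = err, (j, t, lv, rv)
    j, t, lv, rv = best
    return lambda Z: np.where(Z[:, j] <= t, lv, rv)

def stochastic_gradient_boost(X, y, n_rounds=50, lr=0.1, subsample=0.5, seed=0):
    """Stochastic gradient boosting for squared-error loss: each round fits a
    stump to the residuals (the negative gradient of 0.5*(y - f)^2) computed
    on a random subsample of rows, then adds it with a small learning rate."""
    rng = np.random.default_rng(seed)
    f0 = y.mean()                                # constant initial model
    pred = np.full(len(y), f0)
    stumps = []
    for _ in range(n_rounds):
        idx = rng.choice(len(y), size=max(2, int(subsample * len(y))), replace=False)
        resid = y - pred                         # negative gradient for squared error
        stump = fit_stump(X[idx], resid[idx])    # weak learner on the subsample only
        pred += lr * stump(X)
        stumps.append(stump)
    return lambda Z: f0 + lr * sum(s(Z) for s in stumps)
```

The row subsampling (`subsample < 1`) is what makes the boosting "stochastic": it injects randomness that typically reduces overfitting relative to fitting every round on the full training set.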
We present a learning method whose goal is to learn the most discriminative set of preferences, as given by humans (e.g., by human experts). By incorporating techniques such as feature learning into the learning process, we establish a new benchmark for this methodology, with the best-performing algorithm on the ILSVRC 2017 benchmark. The learning-paralyzed evaluation data set is used to demonstrate the effectiveness of the approach using only a small number of preferences. Our main focus is the performance of this algorithm on five benchmark datasets, several of which belong to the same domains.
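The abstract does not specify how preferences given by humans are turned into a model. As one standard, purely illustrative way to learn from a small number of pairwise human preferences, a Bradley-Terry style logistic model scores items by a learned weight vector over their features; everything below (names, parameters) is a hypothetical sketch, not the paper's method:

```python
import numpy as np

def fit_preference_weights(feats, pairs, lr=0.1, epochs=500):
    """Bradley-Terry style preference learning: given item feature vectors
    and human judgments (i preferred over j), learn weights w such that
    P(i > j) = sigmoid(w . (x_i - x_j)), by gradient ascent on the
    log-likelihood of the observed preference pairs."""
    w = np.zeros(feats.shape[1])
    for _ in range(epochs):
        for i, j in pairs:                       # each pair: item i was preferred to item j
            d = feats[i] - feats[j]
            p = 1.0 / (1.0 + np.exp(-w @ d))     # model's P(i preferred over j)
            w += lr * (1.0 - p) * d              # gradient of log sigmoid(w . d)
    return w
```

Because each preference pair contributes one logistic term, a model like this can be fit from only a handful of human judgments, which matches the abstract's emphasis on using a small number of preferences.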
A Novel Fuzzy Logic Algorithm for the Decision-Logic Task
Stochastic Gradient Boosting
Deep Learning, A Measure of Deep Inference, and a Quantitative Algorithm
Diversity of Preferences and Discrimination Strategies in Competitive Constraint Reduction