Folkscanomy Miscellaneous
Dec 30, 2015
by European Conference on Machine Learning (12th : 2001 : Freiburg, Germany); Raedt, Luc de, 1964-; Flach, Peter, 1956-
Machine Learning: ECML 2001: 12th European Conference on Machine Learning, Freiburg, Germany, September 5–7, 2001, Proceedings
Author: Luc De Raedt, Peter Flach
Published by: Springer Berlin Heidelberg
ISBN: 978-3-540-42536-6
DOI: 10.1007/3-540-44795-4
Table of Contents: An Axiomatic Approach to Feature Term Generalization; Lazy Induction of Descriptions for Relational Case-Based Learning; Estimating the Predictive Accuracy of a Classifier; Improving the Robustness and Encoding Complexity of...
Topics: Machine learning
Folkscanomy Miscellaneous
Dec 30, 2015
by European Conference on Machine Learning (11th : 2000 : Barcelona, Spain); López de Mántaras, Ramon, 1952-; Plaza, Enric
Machine Learning: ECML 2000: 11th European Conference on Machine Learning, Barcelona, Catalonia, Spain, May 31 – June 2, 2000, Proceedings
Author: Ramon López de Mántaras, Enric Plaza
Published by: Springer Berlin Heidelberg
ISBN: 978-3-540-67602-7
DOI: 10.1007/3-540-45164-1
Table of Contents: Beyond Occam’s Razor: Process-Oriented Evaluation; The Representation Race — Preprocessing for Handling Time Phenomena; Short-Term Profiling for a Case-Based Reasoning Recommendation System; K-SVCR. A...
Topics: Machine learning
x, 580 pages : 28 cm
Topics: Machine learning -- Congresses, Machine learning
Folkscanomy Miscellaneous
Dec 30, 2015
by Paliouras, Georgios, 1970-; Karkaletsis, Vangelis, 1966-; Spyropoulos, Constantine D., 1951-
Machine Learning and Its Applications: Advanced Lectures
Author: Georgios Paliouras, Vangelis Karkaletsis, Constantine D. Spyropoulos
Published by: Springer Berlin Heidelberg
ISBN: 978-3-540-42490-1
DOI: 10.1007/3-540-44673-7
Table of Contents: Comparing Machine Learning and Knowledge Discovery in Databases: An Application to Knowledge Discovery in Texts; Learning Patterns in Noisy Data: The AQ Approach; Unsupervised Learning of Probabilistic Concept Hierarchies; Function Decomposition in Machine...
Topic: Machine learning
May 23, 2022
by Moshkov, Mikhail Ju
xiv, 181 pages : 24 cm
Topic: Machine learning
Jul 21, 2021
by Šuc, Dorian
xi, 119 p. : 24 cm
Topic: Machine learning
Dec 14, 2021
by Camastra, Francesco, 1960-
xvi, 494 p. : 25 cm
Topic: Machine learning
xv, 255 p. : 25 cm
Topic: Machine learning
x, 169 p. : 24 cm
Topic: Machine learning
Sep 30, 2021
by Mueller, John, 1958- author
xii, 410 pages : 24 cm
Topic: Machine learning
Folkscanomy Miscellaneous
Dec 30, 2015
by IWLCS 2000 (2000 : Paris, France); Lanzi, Pier Luca, 1967-; Stolzmann, Wolfgang, 1966-; Wilson, Stewart W., 1937-
Advances in Learning Classifier Systems: Third International Workshop, IWLCS 2000, Paris, France, September 15–16, 2000, Revised Papers
Author: Pier Luca Lanzi, Wolfgang Stolzmann, Stewart W. Wilson
Published by: Springer Berlin Heidelberg
ISBN: 978-3-540-42437-6
DOI: 10.1007/3-540-44640-0
Table of Contents: An Artificial Economy of Post Production Systems; Simple Markov Models of the Genetic Algorithm in Classifier Systems: Accuracy-Based Fitness; Simple Markov Models of the Genetic Algorithm in...
Topic: Machine learning
Folkscanomy Miscellaneous
Dec 29, 2015
by European Conference on Machine Learning (15th : 2004 : Pisa, Italy); Boulicaut, Jean-François
Topic: Machine learning
Folkscanomy Miscellaneous
Dec 30, 2015
by European Conference on Machine Learning (14th : 2003 : Cavtat, Croatia); Lavrač, Nada
Machine Learning: ECML 2003: 14th European Conference on Machine Learning, Cavtat-Dubrovnik, Croatia, September 22–26, 2003, Proceedings
Author: Nada Lavrač, Dragan Gamberger, Hendrik Blockeel, Ljupčo Todorovski
Published by: Springer Berlin Heidelberg
ISBN: 978-3-540-20121-2
DOI: 10.1007/b13633
Table of Contents: From Knowledge-Based to Skill-Based Systems: Sailing as a Machine Learning Challenge; Two-Eyed Algorithms and Problems; Next Generation Data Mining Tools: Power Laws and...
Topic: Machine learning
Aug 5, 2021
by Watt, Jeremy, author
pages cm
Topic: Machine learning
Oct 6, 2014
by Ethem Alpaydin
Topic: Machine learning
Machine learning gives computers the ability to learn without being explicitly programmed. Some key applications include the following: Web page ranking: submitting a query to a search engine returns the most relevant answers, sorted in order of relevance. Facial recognition: matching an input image against known faces, used in security-related applications. Customer classification: grouping customers by some criterion, e.g. identifying customers who need financial products such as insurance. This is done from a base...
Topic: Machine Learning
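The customer-classification application mentioned above can be sketched in a few lines with scikit-learn. The features (age, income), labels, and the new customer below are invented purely for illustration.

```python
# Toy sketch of the customer-classification use case: predict whether a
# customer is likely to need an insurance product. Features and labels
# here are made-up illustration data, not a real customer base.
from sklearn.linear_model import LogisticRegression

# Each row: (age, annual income in $1000s); label 1 = needs insurance.
X = [[25, 30], [47, 85], [52, 110], [23, 28], [60, 95], [31, 40]]
y = [0, 1, 1, 0, 1, 0]

clf = LogisticRegression().fit(X, y)
new_customer = [[50, 100]]
print(clf.predict(new_customer))  # class for the new customer
```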
May 18, 2022
by Drugowitsch, Jan
xiv, 267 p. : 25 cm
Topic: Machine learning
Jun 27, 2020
by European Working Session on Learning (1991 : Porto, Portugal)
xi, 537 pages : 25 cm
Topics: Machine learning -- Congresses, Apprentissage automatique -- Congrès, Machine learning,...
Dec 10, 2019
by European Conference on Machine Learning (12th : 2001 : Freiburg, Germany)
xvii, 618 p. : 24 cm
Topics: Machine learning -- Congresses, Machine learning -- Industrial applications -- Congresses
Oct 8, 2020
by International Conference on Machine Learning (18th : 2001 : Williams College)
xi, 643 pages : 28 cm
Topics: Machine learning -- Congresses, Machine learning, Apprentissage automatique -- Congrès
Dec 28, 2019
by Langley, Pat
xii, 419 pages : 23 cm
Topics: Machine learning, Machine-learning, Maschinelles Lernen, Apprentissage automatique
Jun 30, 2018
by Xiao Zhang; Lingxiao Wang; Quanquan Gu
We study the problem of estimating low-rank matrices from linear measurements (a.k.a., matrix sensing) through nonconvex optimization. We propose an efficient stochastic variance reduced gradient descent algorithm to solve a nonconvex optimization problem of matrix sensing. Our algorithm is applicable to both noisy and noiseless settings. In the case with noisy observations, we prove that our algorithm converges to the unknown low-rank matrix at a linear rate up to the minimax optimal...
Topics: Machine Learning, Statistics
Source: http://arxiv.org/abs/1701.00481
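The recovery problem above can be illustrated with a much simpler non-convex baseline than the paper's variance-reduced algorithm: full-gradient descent with rank-r SVD truncation (singular value projection). All dimensions and the measurement count below are arbitrary choices for the sketch.

```python
# Matrix sensing illustration: recover a low-rank matrix M* from linear
# measurements y_i = <A_i, M*>. This is singular value projection (plain
# full-gradient descent plus rank truncation), NOT the stochastic
# variance-reduced algorithm proposed in the abstract.
import numpy as np

rng = np.random.default_rng(0)
d, r, n = 10, 2, 1000                        # dimension, rank, measurements
M_star = rng.normal(size=(d, r)) @ rng.normal(size=(r, d))
A = rng.normal(size=(n, d, d))               # Gaussian sensing matrices
y = np.einsum('nij,ij->n', A, M_star)        # noiseless observations

def svd_truncate(M, rank):
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank]

M = np.zeros((d, d))
for _ in range(100):
    resid = np.einsum('nij,ij->n', A, M) - y
    M = svd_truncate(M - np.einsum('n,nij->ij', resid, A) / n, r)

rel_err = np.linalg.norm(M - M_star) / np.linalg.norm(M_star)
print(rel_err)  # small relative recovery error
```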
Jun 30, 2018
by Valery A. Kalyagin; Alexander P. Koldanov; Petr A. Koldanov; Panos M. Pardalos
Gaussian graphical model is a graphical representation of the dependence structure for a Gaussian random vector. It is recognized as a powerful tool in different applied fields such as bioinformatics, error-control codes, speech language, information retrieval and others. Gaussian graphical model selection is a statistical problem to identify the Gaussian graphical model from a sample of a given size. Different approaches for Gaussian graphical model selection are suggested in the literature....
Topics: Machine Learning, Statistics
Source: http://arxiv.org/abs/1701.02071
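One standard approach of the kind surveyed above is the graphical lasso: zeros in the estimated precision matrix correspond to missing edges in the Gaussian graphical model. The chain-structured synthetic data below is an invented example.

```python
# Gaussian graphical model selection via the graphical lasso.
# Synthetic chain graph: variable i interacts only with i-1 and i+1.
import numpy as np
from sklearn.covariance import GraphicalLassoCV

rng = np.random.default_rng(1)
p = 5
Theta = np.eye(p) + 0.4 * (np.eye(p, k=1) + np.eye(p, k=-1))  # true precision
X = rng.multivariate_normal(np.zeros(p), np.linalg.inv(Theta), size=2000)

model = GraphicalLassoCV().fit(X)
print(np.round(model.precision_, 2))  # near-zero entries = absent edges
```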
Jun 29, 2018
by Suriya Gunasekar; Arindam Banerjee; Joydeep Ghosh
In this paper, we present a unified analysis of matrix completion under general low-dimensional structural constraints induced by {\em any} norm regularization. We consider two estimators for the general problem of structured matrix completion, and provide unified upper bounds on the sample complexity and the estimation error. Our analysis relies on results from generic chaining, and we establish two intermediate results of independent interest: (a) in characterizing the size or complexity of...
Topics: Machine Learning, Statistics
Source: http://arxiv.org/abs/1603.08708
Jun 29, 2018
by Ricardo Silva
Controlled interventions provide the most direct source of information for learning causal effects. In particular, a dose-response curve can be learned by varying the treatment level and observing the corresponding outcomes. However, interventions can be expensive and time-consuming. Observational data, where the treatment is not controlled by a known mechanism, is sometimes available. Under some strong assumptions, observational data allows for the estimation of dose-response curves....
Topics: Machine Learning, Statistics
Source: http://arxiv.org/abs/1605.01573
Jun 29, 2018
by Philippe C. Besse; Brendan Guillouet; Jean-Michel Loubes; Francois Royer
In this paper we propose a new method to predict the final destination of vehicle trips based on their initial partial trajectories. We first review how we obtained clustering of trajectories that describes user behaviour. Then, we explain how we model main traffic flow patterns by a mixture of 2d Gaussian distributions. This yielded a density based clustering of locations, which produces a data driven grid of similar points within each pattern. We present how this model can be used to predict...
Topics: Machine Learning, Statistics
Source: http://arxiv.org/abs/1605.03027
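The mixture-of-2-D-Gaussians modelling step described above can be sketched with scikit-learn. The two "traffic patterns" below are synthetic blobs, not real trajectory data.

```python
# Fitting a mixture of 2-D Gaussians to synthetic "location" data drawn
# from two invented flow patterns, then reading off the component centres.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
pattern_a = rng.normal(loc=[0.0, 0.0], scale=0.3, size=(300, 2))
pattern_b = rng.normal(loc=[5.0, 5.0], scale=0.3, size=(300, 2))
points = np.vstack([pattern_a, pattern_b])

gmm = GaussianMixture(n_components=2, random_state=0).fit(points)
centres = np.sort(gmm.means_, axis=0)   # recovered pattern centres
print(np.round(centres, 2))
```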
Jun 29, 2018
by Alexander Moreno; Tameem Adel; Edward Meeds; James M. Rehg; Max Welling
Approximate Bayesian Computation (ABC) is a framework for performing likelihood-free posterior inference for simulation models. Stochastic Variational Inference (SVI) is an appealing alternative to the inefficient sampling approaches commonly used in ABC. However, SVI is highly sensitive to the variance of the gradient estimators, and this problem is exacerbated by approximating the likelihood. We draw upon recent advances in variance reduction for SVI and likelihood-free inference using...
Topics: Machine Learning, Statistics
Source: http://arxiv.org/abs/1606.08549
Jun 29, 2018
by Brian Baingana; Georgios B. Giannakis
Contagions such as the spread of popular news stories, or infectious diseases, propagate in cascades over dynamic networks with unobservable topologies. However, "social signals" such as product purchase time, or blog entry timestamps are measurable, and implicitly depend on the underlying topology, making it possible to track it over time. Interestingly, network topologies often "jump" between discrete states that may account for sudden changes in the observed signals. The...
Topics: Machine Learning, Statistics
Source: http://arxiv.org/abs/1606.08882
Jun 29, 2018
by Chengtao Li; Stefanie Jegelka; Suvrit Sra
We study probability measures induced by set functions with constraints. Such measures arise in a variety of real-world settings, where prior knowledge, resource limitations, or other pragmatic considerations impose constraints. We consider the task of rapidly sampling from such constrained measures, and develop fast Markov chain samplers for them. Our first main result is for MCMC sampling from Strongly Rayleigh (SR) measures, for which we present sharp polynomial bounds on the mixing time. As...
Topics: Machine Learning, Statistics
Source: http://arxiv.org/abs/1608.01008
Jun 28, 2018
by Emile Contal; Cédric Malherbe; Nicolas Vayatis
In this paper, we consider the problem of stochastic optimization under a bandit feedback model. We generalize the GP-UCB algorithm [Srinivas et al., 2012] to arbitrary kernels and search spaces. To do so, we use a notion of localized chaining to control the supremum of a Gaussian process, and provide a novel optimization scheme based on the computation of covering numbers. The theoretical bounds we obtain on the cumulative regret are more generic and present the same convergence rates as the...
Topics: Statistics, Machine Learning
Source: http://arxiv.org/abs/1510.05576
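A minimal GP-UCB loop in the spirit of the setting above, using the classical algorithm with a fixed RBF kernel on a grid (not the chaining-based generalisation the paper develops). The objective function, kernel length-scale, and exploration weight are all invented for the sketch.

```python
# Classical GP-UCB on a 1-D grid: repeatedly query the point maximising
# the upper confidence bound mu + beta * sigma under a GP posterior.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

f = lambda x: 1.0 - (x - 0.3) ** 2            # unknown function to maximise
grid = np.linspace(0, 1, 101)[:, None]        # finite search space
X_obs, y_obs = [[0.0]], [f(0.0)]

gp = GaussianProcessRegressor(kernel=RBF(0.1), alpha=1e-6, optimizer=None)
beta = 2.0                                    # exploration weight
for _ in range(15):
    gp.fit(np.array(X_obs), np.array(y_obs))
    mu, sigma = gp.predict(grid, return_std=True)
    x_next = float(grid[np.argmax(mu + beta * sigma)][0])
    X_obs.append([x_next]); y_obs.append(f(x_next))

best_x = X_obs[int(np.argmax(y_obs))][0]
print(best_x)  # should be near the true maximiser 0.3
```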
Jun 28, 2018
by Javier González; Michael Osborne; Neil D. Lawrence
We present GLASSES: Global optimisation with Look-Ahead through Stochastic Simulation and Expected-loss Search. The majority of global optimisation approaches in use are myopic, in only considering the impact of the next function value; the non-myopic approaches that do exist are able to consider only a handful of future evaluations. Our novel algorithm, GLASSES, permits the consideration of dozens of evaluations into the future. This is done by approximating the ideal look-ahead loss function,...
Topics: Statistics, Machine Learning
Source: http://arxiv.org/abs/1510.06299
Jun 28, 2018
by Yizhe Zhang; Ricardo Henao; Lawrence Carin; Jianling Zhong; Alexander J. Hartemink
When learning a hidden Markov model (HMM), sequential observations can often be complemented by real-valued summary response variables generated from the path of hidden states. Such settings arise in numerous domains, including many applications in biology, like motif discovery and genome annotation. In this paper, we present a flexible framework for jointly modeling both latent sequence features and the functional mapping that relates the summary response variables to the hidden state...
Topics: Statistics, Machine Learning
Source: http://arxiv.org/abs/1512.05219
Jun 28, 2018
by Hideaki Kim; Hiroshi Sawada
The histogram method is a powerful non-parametric approach for estimating the probability density function of a continuous variable. But the construction of a histogram, compared to the parametric approaches, demands a large number of observations to capture the underlying density function. Thus it is not suitable for analyzing a sparse data set, a collection of units with a small size of data. In this paper, by employing the probabilistic topic model, we develop a novel Bayesian approach to...
Topics: Statistics, Machine Learning
Source: http://arxiv.org/abs/1512.07960
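The classical histogram estimator that the abstract builds on is one line of NumPy; with `density=True` the bar heights are normalised so that the total area under the histogram is one. The sample below is synthetic.

```python
# Ordinary histogram density estimate of a continuous variable.
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(size=10_000)
density, edges = np.histogram(data, bins=50, density=True)

# Bar areas (height * bin width) integrate to one.
total_area = float((density * np.diff(edges)).sum())
print(total_area)  # ~1.0
```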
Jun 30, 2018
by Akash Srivastava; Charles Sutton
Topic models are one of the most popular methods for learning representations of text, but a major challenge is that any change to the topic model requires mathematically deriving a new inference algorithm. A promising approach to address this problem is autoencoding variational Bayes (AEVB), but it has proven difficult to apply to topic models in practice. We present what is to our knowledge the first effective AEVB based inference method for latent Dirichlet allocation (LDA), which we call...
Topics: Machine Learning, Statistics
Source: http://arxiv.org/abs/1703.01488
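For contrast with the AEVB approach above, here is the classical batch variational-Bayes LDA fit that scikit-learn provides, on an invented two-topic toy corpus.

```python
# Variational LDA on a toy corpus with two obvious topics
# (sports vs. finance vocabulary). Corpus is made up for illustration.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

docs = [
    "goal match team score league referee",
    "team win match goal referee league",
    "stock market price shares trading investor",
    "market shares price investor trading stock",
] * 3  # repeat to give the tiny corpus a little more weight

X = CountVectorizer().fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, max_iter=50,
                                random_state=0).fit(X)
doc_topics = lda.transform(X)
print(doc_topics.round(2))  # per-document topic mixtures
```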
Jun 30, 2018
by Zi Wang; Stefanie Jegelka
Entropy Search (ES) and Predictive Entropy Search (PES) are popular and empirically successful Bayesian Optimization techniques. Both rely on a compelling information-theoretic motivation, and maximize the information gained about the $\arg\max$ of the unknown function. Yet, both are plagued by expensive computation, e.g., for estimating entropy. We propose a new criterion, Max-value Entropy Search (MES), that instead uses the information about the maximum value. We observe that MES maintains...
Topics: Machine Learning, Statistics
Source: http://arxiv.org/abs/1703.01968
Jun 30, 2018
by Kirthevasan Kandasamy; Gautam Dasarathy; Jeff Schneider; Barnabas Poczos
Bandit methods for black-box optimisation, such as Bayesian optimisation, are used in a variety of applications including hyper-parameter tuning and experiment design. Recently, \emph{multi-fidelity} methods have garnered considerable attention since function evaluations have become increasingly expensive in such applications. Multi-fidelity methods use cheap approximations to the function of interest to speed up the overall optimisation process. However, most multi-fidelity methods assume only...
Topics: Machine Learning, Statistics
Source: http://arxiv.org/abs/1703.06240
Jun 30, 2018
by Badong Chen; Lei Xing; Haiquan Zhao; Bin Xu; Jose C. Principe
The maximum correntropy criterion (MCC) has recently been successfully applied in robust regression, classification and adaptive filtering, where the correntropy is maximized instead of minimizing the well-known mean square error (MSE) to improve the robustness with respect to outliers (or impulsive noises). Considerable efforts have been devoted to develop various robust adaptive algorithms under MCC, but so far little insight has been gained as to how the optimal solution will be affected by...
Topics: Machine Learning, Statistics
Source: http://arxiv.org/abs/1703.08065
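The robustness the abstract refers to can be illustrated with a toy regression fitted under a correntropy-style objective by iteratively reweighted least squares: each point's Gaussian-kernel weight exp(-e²/(2σ²)) discounts outliers, unlike plain MSE. The data, kernel width, and iteration count below are illustrative choices, not the paper's algorithm.

```python
# Toy robust linear regression: maximise a Gaussian-kernel (correntropy-
# style) fit criterion via iteratively reweighted least squares.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 50)
y = 2.0 * x + rng.normal(scale=0.05, size=50)
y[::10] += 5.0                                  # gross outliers

X = np.column_stack([x, np.ones_like(x)])
w = np.zeros(2)
sigma = 0.5                                     # kernel width (illustrative)
for _ in range(30):
    e = y - X @ w
    weights = np.exp(-e ** 2 / (2 * sigma ** 2))  # outliers get ~0 weight
    w = np.linalg.solve(X.T @ (weights[:, None] * X),
                        X.T @ (weights * y))

print(w)  # (slope, intercept), close to (2, 0) despite the outliers
```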
Jun 30, 2018
by Shahar Mendelson
In this note we answer a question of G. Lecué, by showing that column normalization of a random matrix with iid entries need not lead to good sparse recovery properties, even if the generating random variable has a reasonable moment growth. Specifically, for every $2 \leq p \leq c_1\log d$ we construct a random vector $X \in R^d$ with iid, mean-zero, variance $1$ coordinates, that satisfies $\sup_{t \in S^{d-1}} \|\langle X, t\rangle\|_{L_q} \leq c_2\sqrt{q}$ for every $2\leq q \leq p$. We show that if $m...
Topics: Machine Learning, Statistics
Source: http://arxiv.org/abs/1702.06278
Jun 28, 2018
by Xu Wang; Gilad Lerman
Kernel methods obtain superb performance in terms of accuracy for various machine learning tasks since they can effectively extract nonlinear relations. However, their time complexity can be rather large especially for clustering tasks. In this paper we define a general class of kernels that can be easily approximated by randomization. These kernels appear in various applications, in particular, traditional spectral clustering, landmark-based spectral clustering and landmark-based subspace...
Topics: Statistics, Machine Learning
Source: http://arxiv.org/abs/1510.08406
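The randomised-approximation idea above is most familiar as random Fourier features for the RBF kernel; scikit-learn's `RBFSampler` implements it. This is a generic illustration on synthetic data, not the specific kernel class defined in the paper.

```python
# Random Fourier features: the inner product of the random feature maps
# approximates the exact RBF kernel exp(-gamma * ||x - y||^2).
import numpy as np
from sklearn.kernel_approximation import RBFSampler

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
gamma = 0.5

Z = RBFSampler(gamma=gamma, n_components=2000,
               random_state=0).fit_transform(X)
K_approx = Z @ Z.T

i, j = 3, 17
exact = np.exp(-gamma * np.sum((X[i] - X[j]) ** 2))
err = abs(K_approx[i, j] - exact)
print(err)  # small Monte Carlo approximation error
```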
Jun 28, 2018
by Thang D. Bui; José Miguel Hernández-Lobato; Yingzhen Li; Daniel Hernández-Lobato; Richard E. Turner
Deep Gaussian processes (DGPs) are multi-layer hierarchical generalisations of Gaussian processes (GPs) and are formally equivalent to neural networks with multiple, infinitely wide hidden layers. DGPs are probabilistic and non-parametric and as such are arguably more flexible, have a greater capacity to generalise, and provide better calibrated uncertainty estimates than alternative deep models. The focus of this paper is scalable approximate Bayesian learning of these networks. The paper...
Topics: Statistics, Machine Learning
Source: http://arxiv.org/abs/1511.03405
Jun 28, 2018
by Dekang Zhu; Dan P. Guralnik; Xuezhi Wang; Xiang Li; Bill Moran
We derive a statistical model for estimation of a dendrogram from single linkage hierarchical clustering (SLHC) that takes account of uncertainty through noise or corruption in the measurements of separation of data. Our focus is on just the estimation of the hierarchy of partitions afforded by the dendrogram, rather than the heights in the latter. The concept of estimating this "dendrogram structure" is introduced, and an approximate maximum likelihood estimator (MLE) for the dendrogram...
Topics: Statistics, Machine Learning
Source: http://arxiv.org/abs/1511.07944
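Single linkage hierarchical clustering, the SLHC procedure named above, is available directly in SciPy: `linkage` returns the dendrogram (the structure the paper's estimator targets) and `fcluster` cuts it into flat clusters. The two well-separated blobs below are synthetic.

```python
# Single linkage hierarchical clustering on two synthetic blobs.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

rng = np.random.default_rng(0)
blob_a = rng.normal(loc=0.0, scale=0.1, size=(10, 2))
blob_b = rng.normal(loc=3.0, scale=0.1, size=(10, 2))
points = np.vstack([blob_a, blob_b])

Z = linkage(points, method='single')          # the dendrogram structure
labels = fcluster(Z, t=2, criterion='maxclust')
print(labels)  # two clean clusters
```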
Jun 28, 2018
by Simone Romano; Nguyen Xuan Vinh; James Bailey; Karin Verspoor
Adjusted for chance measures are widely used to compare partitions/clusterings of the same data set. In particular, the Adjusted Rand Index (ARI) based on pair-counting, and the Adjusted Mutual Information (AMI) based on Shannon information theory are very popular in the clustering community. Nonetheless it is an open problem as to what are the best application scenarios for each measure and guidelines in the literature for their usage are sparse, with the result that users often resort to...
Topics: Statistics, Machine Learning
Source: http://arxiv.org/abs/1512.01286
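Both adjusted-for-chance measures compared in the abstract are available in scikit-learn; here they are computed on a small invented pair of partitions, where one point is mislabeled relative to the ground truth.

```python
# Adjusted Rand Index and Adjusted Mutual Information for two partitions.
from sklearn.metrics import adjusted_mutual_info_score, adjusted_rand_score

truth = [0, 0, 0, 1, 1, 1, 2, 2, 2]
pred = [0, 0, 1, 1, 1, 1, 2, 2, 2]    # one point misassigned

ari = adjusted_rand_score(truth, pred)
ami = adjusted_mutual_info_score(truth, pred)
print(ari, ami)  # both below the perfect score of 1.0
```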
Jun 29, 2018
by Hiroaki Sasaki; Gang Niu; Masashi Sugiyama
Non-Gaussian component analysis (NGCA) is aimed at identifying a linear subspace such that the projected data follows a non-Gaussian distribution. In this paper, we propose a novel NGCA algorithm based on log-density gradient estimation. Unlike existing methods, the proposed NGCA algorithm identifies the linear subspace by using the eigenvalue decomposition without any iterative procedures, and thus is computationally reasonable. Furthermore, through theoretical analysis, we prove that the...
Topics: Machine Learning, Statistics
Source: http://arxiv.org/abs/1601.07665
Jun 30, 2018
by Brian McWilliams; Gabriel Krummenacher; Mario Lucic; Joachim M. Buhmann
Subsampling methods have been recently proposed to speed up least squares estimation in large scale settings. However, these algorithms are typically not robust to outliers or corruptions in the observed covariates. The concept of influence that was developed for regression diagnostics can be used to detect such corrupted observations as shown in this paper. This property of influence -- for which we also develop a randomized approximation -- motivates our proposed subsampling algorithm for...
Topics: Machine Learning, Statistics
Source: http://arxiv.org/abs/1406.3175
Jun 30, 2018
by Diego Vidaurre; Iead Rezek; Samuel L. Harrison; Stephen S. Smith; Mark Woolrich
Despite the fact that they do not consider the temporal nature of data, classic dimensionality reduction techniques, such as PCA, are widely applied to time series data. In this paper, we introduce a factor decomposition specific for time series that builds upon the Bayesian multivariate autoregressive model and hence evades the assumption that data points are mutually independent. The key is to find a low-rank estimation of the autoregressive matrices. As in the probabilistic version of other...
Topics: Machine Learning, Statistics
Source: http://arxiv.org/abs/1406.3711
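A deliberately naive version of the low-rank autoregressive idea above: fit a VAR(1) model by ordinary least squares, then truncate the coefficient matrix to rank r by SVD. The paper's Bayesian factor decomposition is more refined; all dimensions and noise levels here are invented.

```python
# Low-rank VAR(1) estimation: OLS fit followed by SVD truncation.
import numpy as np

rng = np.random.default_rng(0)
d, r, T = 8, 2, 4000
B = rng.normal(size=(d, r)) @ rng.normal(size=(r, d))        # rank-r matrix
A_true = 0.9 * B / np.abs(np.linalg.eigvals(B)).max()        # stable VAR

X = np.zeros((T, d))
for t in range(1, T):
    X[t] = X[t - 1] @ A_true.T + rng.normal(scale=0.1, size=d)

A_ols = np.linalg.lstsq(X[:-1], X[1:], rcond=None)[0].T      # full-rank OLS
U, s, Vt = np.linalg.svd(A_ols)
A_lowrank = (U[:, :r] * s[:r]) @ Vt[:r]                      # rank-r estimate

rel_err = np.linalg.norm(A_lowrank - A_true) / np.linalg.norm(A_true)
print(rel_err)
```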
Jun 30, 2018
by Vu Dinh; Lam Si Tung Ho; Nguyen Viet Cuong; Duy Nguyen; Binh T. Nguyen
We prove new fast learning rates for the one-vs-all multiclass plug-in classifiers trained either from exponentially strongly mixing data or from data generated by a converging drifting distribution. These are two typical scenarios where training data are not iid. The learning rates are obtained under a multiclass version of Tsybakov's margin assumption, a type of low-noise assumption, and do not depend on the number of classes. Our results are general and include a previous result for...
Topics: Machine Learning, Statistics
Source: http://arxiv.org/abs/1408.2714
Jun 30, 2018
by Kacper Chwialkowski; Dino Sejdinovic; Arthur Gretton
A wild bootstrap method for nonparametric hypothesis tests based on kernel distribution embeddings is proposed. This bootstrap method is used to construct provably consistent tests that apply to random processes, for which the naive permutation-based bootstrap fails. It applies to a large group of kernel tests based on V-statistics, which are degenerate under the null hypothesis, and non-degenerate elsewhere. To illustrate this approach, we construct a two-sample test, an instantaneous...
Topics: Machine Learning, Statistics
Source: http://arxiv.org/abs/1408.5404
Jun 30, 2018
by Zhe Liu
$L_1$ regularized logistic regression has now become a workhorse of data mining and bioinformatics: it is widely used for many classification problems, particularly ones with many features. However, $L_1$ regularization typically selects too many features, so that false positives are unavoidable. In this paper, we demonstrate and analyze an aggregation method for sparse logistic regression in high dimensions. This approach linearly combines the estimators from a suitable set of...
Topics: Machine Learning, Statistics
Source: http://arxiv.org/abs/1410.6959
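The workhorse the abstract starts from, L1-regularised logistic regression, zeroes out most coefficients but may keep spurious features. The sparse synthetic design below (3 relevant features out of 50) is an invented illustration, not the paper's aggregation method.

```python
# L1-regularised logistic regression as a feature selector.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, p = 200, 50
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:3] = [3.0, -3.0, 2.0]                   # only 3 relevant features
y = (X @ beta + rng.normal(size=n) > 0).astype(int)

clf = LogisticRegression(penalty='l1', solver='liblinear', C=0.5).fit(X, y)
selected = np.flatnonzero(clf.coef_)
print(selected)  # contains features 0, 1, 2, possibly plus extras
```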
Jun 30, 2018
by Guy Baruch
Motivated by large-scale Collaborative-Filtering applications, we present a Non-Commuting Latent Factor (NCLF) tensor-completion approach for modeling three-way arrays, which is diagonal like the standard PARAFAC, but wherein different terms distinguish different kinds of three-way relations of co-clusters, as determined by permutations of latent factors. The first key component of the algebraic representation is the usage of two non-commutative real trilinear operations as the building blocks...
Topics: Machine Learning, Statistics
Source: http://arxiv.org/abs/1410.7383
Jun 29, 2018
by Seth Flaxman; Dino Sejdinovic; John P. Cunningham; Sarah Filippi
Kernel methods are one of the mainstays of machine learning, but the problem of kernel learning remains challenging, with only a few heuristics and very little theory. This is of particular importance in methods based on estimation of kernel mean embeddings of probability measures. For characteristic kernels, which include most commonly used ones, the kernel mean embedding uniquely determines its probability measure, so it can be used to design a powerful statistical testing framework, which...
Topics: Machine Learning, Statistics
Source: http://arxiv.org/abs/1603.02160
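Kernel mean embeddings in their most direct statistical use: the (biased) squared maximum mean discrepancy between two samples under an RBF kernel, computed from pairwise kernel evaluations. The bandwidth and the synthetic samples below are illustrative.

```python
# Biased squared MMD between two samples under an RBF kernel.
import numpy as np

def mmd2_rbf(X, Y, gamma=1.0):
    """Biased squared MMD: mean k(X,X) + mean k(Y,Y) - 2 mean k(X,Y)."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    return k(X, X).mean() + k(Y, Y).mean() - 2.0 * k(X, Y).mean()

rng = np.random.default_rng(0)
same = mmd2_rbf(rng.normal(size=(300, 2)), rng.normal(size=(300, 2)))
diff = mmd2_rbf(rng.normal(size=(300, 2)),
                rng.normal(loc=2.0, size=(300, 2)))
print(same, diff)  # near zero vs. clearly positive
```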