In multimedia applications, the text and image components in a web document form a pairwise constraint that potentially indicates the same semantic concept. This paper studies cross-modal learning via the pairwise constraint, and aims to find the common structure hidden in different modalities. We propose a compound regularization framework to deal with the pairwise constraint, which can be used as a general platform for developing cross-modal algorithms. For unsupervised learning, we propose a cross-modal subspace clustering method to learn a common structure for different modalities. Comprehensive experimental results on twenty-five datasets demonstrate the validity and advantage of our approach.

Multi-view learning combines data from multiple heterogeneous sources and employs their complementary information to build more accurate models. Multi-instance learning represents examples as labeled bags containing sets of instances. Data from different multi-instance views cannot simply be concatenated into a single set of features, because the views differ in cardinality and feature space. This paper proposes an ensemble approach that combines view learners and pursues consensus among the weighted class predictions to take advantage of the complementary information from multiple views. Importantly, the ensemble must deal with the different feature spaces coming from each of the views, while bags may be only partially represented in some views. The experimental study evaluates and compares the performance of the proposal with 20 traditional, ensemble-based, and multi-view algorithms on a set of 15 multi-instance datasets. Experimental results indicate that the ensemble methods outperform single classifiers, with the multi-view multi-instance approaches achieving the best results. Results are validated through multiple non-parametric statistical analyses.
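The weighted-consensus step described above can be pictured as weighted soft voting over per-view class predictions, where a bag missing from a view simply contributes no vote from that view. The sketch below is a minimal illustration under our own assumptions, not the paper's algorithm; the function name `weighted_consensus`, the dict-based per-view predictions, and the accuracy-style view weights are all hypothetical.

```python
import numpy as np

def weighted_consensus(view_preds, view_weights, n_samples, n_classes):
    """Combine per-view class-probability predictions by weighted voting.

    view_preds   : list of dicts {sample_index: class-probability vector};
                   samples missing from a view are simply absent from its dict.
    view_weights : one non-negative weight per view (e.g., validation accuracy).
    Returns an (n_samples, n_classes) array of normalized consensus scores.
    """
    scores = np.zeros((n_samples, n_classes))
    totals = np.zeros(n_samples)  # accumulated weight per sample, for normalization
    for preds, w in zip(view_preds, view_weights):
        for i, p in preds.items():
            scores[i] += w * np.asarray(p)
            totals[i] += w
    totals[totals == 0] = 1.0  # guard against samples missing from every view
    return scores / totals[:, None]

# Toy usage: two views, three samples, two classes; sample 2 is missing from view 2,
# so its consensus comes from view 1 alone.
view_preds = [
    {0: [0.9, 0.1], 1: [0.2, 0.8], 2: [0.6, 0.4]},
    {0: [0.7, 0.3], 1: [0.4, 0.6]},
]
consensus = weighted_consensus(view_preds, [0.6, 0.4], n_samples=3, n_classes=2)
labels = consensus.argmax(axis=1)
```

Per-sample normalization by the accumulated weight keeps the scores comparable between fully and partially represented bags, which is one simple way to handle the partial-representation issue the abstract mentions.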
Although the support vector machine (SVM) has become a powerful tool for pattern classification and regression, a major disadvantage is that it fails to fully exploit the underlying correlation between pairs of data points. Inspired by the modified pairwise constraints trick, in this paper we propose a novel classifier, termed support vector machine with hypergraph-based pairwise constraints, which improves the performance of the classical SVM by introducing a new regularization term with hypergraph-based pairwise constraints (HPC). The new classifier is expected not only to learn the structural information of each point itself, but also to acquire prior distribution knowledge about each constrained pair by combining the discrimination metric and hypergraph learning. Three major contributions of this paper can be summarized as follows: (1) acquiring the high-order relationships between different samples by hypergraph learning; (2) presenting a more reasonable discriminative regularization term by combining the discrimination metric and hypergraph learning; (3) improving the performance of the existing SVM classifier by introducing the HPC regularization term. Finally, we demonstrate the effectiveness of our method in our experiments.

In many real-world situations, data with multiple representations or views are frequently encountered, and most proposed algorithms for such learning situations require that all the multi-view data be paired. Yet this requirement is difficult to satisfy in some settings, and the multi-view data could be totally unpaired. Side information, such as must-links (ML) and cannot-links (CL), has been widely used in single-view classification tasks; however, so far such information has never been applied in multi-view classification tasks. In this paper, we propose a learning framework that designs multi-view classifiers using only the weak side information of cross-view must-links (CvML) and cross-view cannot-links (CvCL). The CvML and the CvCL generalize the traditional single-view must-link (SvML) and single-view cannot-link (SvCL) and, to the best of our knowledge, are here explicitly introduced and applied to multi-view classification for the first time.
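One way to picture how cross-view must-links and cannot-links could enter a training objective is as a penalty on projections of the two views into a shared space: CvML pairs are pulled together, CvCL pairs are pushed at least a margin apart. The sketch below is an illustrative assumption, not the framework proposed above; the squared-distance and hinge-style terms, and all names (`cvml_cvcl_penalty`, `Wx`, `Wy`, `margin`), are ours.

```python
import numpy as np

def cvml_cvcl_penalty(X, Y, Wx, Wy, must_links, cannot_links, margin=1.0):
    """Penalty over cross-view link constraints in a shared projected space.

    X : (n_x, d_x) view-1 samples;  Y : (n_y, d_y) view-2 samples.
    Wx, Wy : projection matrices mapping each view into a common space.
    must_links / cannot_links : lists of (i, j) index pairs across the views.
    """
    Px, Py = X @ Wx, Y @ Wy          # project both views into the common space
    penalty = 0.0
    for i, j in must_links:          # CvML: linked pairs should be close
        penalty += np.sum((Px[i] - Py[j]) ** 2)
    for i, j in cannot_links:        # CvCL: linked pairs should be >= margin apart
        dist = np.linalg.norm(Px[i] - Py[j])
        penalty += max(0.0, margin - dist) ** 2
    return penalty

# Toy usage with identity projections (illustrative numbers only)
X = np.array([[1.0, 0.0], [0.0, 1.0]])
Y = np.array([[1.0, 0.0], [0.0, 1.0]])
I2 = np.eye(2)
pull = cvml_cvcl_penalty(X, Y, I2, I2, must_links=[(0, 0)], cannot_links=[])
push = cvml_cvcl_penalty(X, Y, I2, I2, must_links=[], cannot_links=[(0, 0)])
```

Because the penalty only needs index pairs across views, it requires no sample-wise pairing of the full datasets, which is consistent with the unpaired setting the abstract describes.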