
Editorial

Journal of Classification 27:277-278 (2010). DOI: 10.1007/s00357-010-9065-5

Blockmodeling aims at a simultaneous clustering of the nodes and edges of a graph, and is one of the leading techniques in social network analysis. The lead paper of this issue, by Brandes and Lerner, breaks new ground here by exploiting the concept of structural similarity to accommodate lack of perfect fit between model and data. Structural similarities are efficiently computable by eigenvalue decompositions, can recover latent node partitions from specific random graphs, and result in a generalization of current spectral graph partitioning methods.

The second paper, by Ma, Cardinal-Stakenas, Park, Trosset, and Priebe, also addresses dyadic data of a particular kind, multiple dissimilarity matrices, in the context of a supervised classification task. They show the advantages of first embedding each dissimilarity matrix in low-dimensional space by multidimensional scaling and then combining the embeddings to build a classifier, instead of first building multiple classifiers on the individual dissimilarity matrices and then combining the separate classifiers.

Agreement between two labeled partitions is often measured by Cohen's kappa statistic. Warrens gives a formal proof of the puzzling phenomenon that for a fixed amount of concordantly classified
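To make the embedding step concrete: the following is a minimal sketch of classical (Torgerson) multidimensional scaling, which embeds a dissimilarity matrix via an eigendecomposition of the double-centered squared distances. It illustrates the general technique only; it is not the specific procedure of Ma et al., and the function name and NumPy-based formulation are this sketch's own choices.

```python
import numpy as np

def classical_mds(D, k=2):
    """Embed an n x n dissimilarity matrix D into k dimensions
    via classical (Torgerson) multidimensional scaling."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n   # double-centering matrix
    B = -0.5 * J @ (D ** 2) @ J           # Gram matrix of a centered configuration
    w, V = np.linalg.eigh(B)              # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:k]         # keep the top-k eigenpairs
    w_top = np.clip(w[idx], 0.0, None)    # guard against small negative eigenvalues
    return V[:, idx] * np.sqrt(w_top)     # coordinates: eigenvectors scaled by sqrt(eigenvalue)
```

For Euclidean dissimilarities the embedding reproduces the original distances exactly; the resulting low-dimensional coordinates can then be fed to any standard classifier, which is the combination strategy the paper advocates.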
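For reference, Cohen's kappa corrects observed agreement between two labelings for the agreement expected by chance from their marginal label frequencies. This is a minimal pure-Python sketch of the standard statistic, not code from the Warrens paper:

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Chance-corrected agreement between two equal-length label sequences."""
    n = len(labels_a)
    # Observed agreement: fraction of items labeled identically.
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected agreement under independent labelings with the same marginals.
    count_a = Counter(labels_a)
    count_b = Counter(labels_b)
    p_e = sum(count_a[c] * count_b.get(c, 0) for c in count_a) / n**2
    return (p_o - p_e) / (1 - p_e)
```

Kappa equals 1 for perfect agreement and 0 when agreement is no better than chance.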



Publisher
Springer Journals
Copyright
Copyright © 2010 by Springer Science+Business Media, LLC
Subject
Statistics; Marketing; Psychometrics; Signal, Image and Speech Processing; Bioinformatics; Pattern Recognition; Statistical Theory and Methods
ISSN
0176-4268
eISSN
1432-1343
DOI
10.1007/s00357-010-9065-5


Journal

Journal of Classification (Springer Journals)

Published: Nov 10, 2010

There are no references for this article.