Multiblock data fusion in statistics and machine learning

Chichester, West Sussex, UK: Wiley. Edited by Tormod Næs & Kristian H. Liland (2022)

Abstract

Combining information from two or more blocks of data is gaining attention and importance in several areas of science and industry. Typical examples can be found in chemistry, spectroscopy, metabolomics, genomics, systems biology and sensory science. Many methods and procedures have been proposed and used in practice, and the area goes under different names: data integration, data fusion, multiblock analysis, multiset analysis and a few more. This book is an attempt to give an up-to-date treatment of the most used and important methods within an important branch of the area: methods based on so-called components or latent variables. These methods have already attracted enormous attention in, for instance, chemometrics, bioinformatics, machine learning and sensometrics, and have proved important for both prediction and interpretation.

The book is primarily a description of methodologies, but most of the methods are illustrated by examples from the above-mentioned areas. It is written so that both users of the methods and method developers will, we hope, find sections of interest. At the end of the book there is a description of a software package developed particularly for the book; this package is freely available in R and covers many of the methods discussed. To distinguish the different types of methods from each other, the book is divided into five parts. Part I contains the introduction and preliminary concepts. Part II is the core of the book, containing the main unsupervised and supervised methods. Part III deals with more complex structures and, finally, Part IV discusses alternative unsupervised and supervised methods. The book ends with Part V, discussing the available software.

Our recommendation for reading the book is as follows. A minimum read would involve Chapters 1, 2, 3, 5 and 7. Chapters 4, 6 and 8 are more specialized, and Chapters 9 and 10 contain methods we think are more advanced or less obvious to use.

We feel privileged to have so many friendly colleagues who were willing to spend their time helping us improve the book by reading separate chapters. We would like to express our thanks to: Rasmus Bro, Margriet Hendriks, Ulf Indahl, Henk Kiers, Ingrid Måge, Federico Marini, Åsmund Rinnan, Rosaria Romano, Lars Erik Solberg, Marieke Timmerman, Oliver Tomic, Johan Westerhuis and Barry Wise. Of course, the correctness of the final text is fully our responsibility!
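The component-based fusion idea the abstract describes can be illustrated with a minimal sketch of one of the simplest unsupervised variants, often called SUM-PCA or consensus PCA on concatenated blocks. This is an illustration only, written in Python/NumPy rather than taken from the book or its accompanying R package; the function name and scaling choice are the author's of this sketch, not the book's.

```python
import numpy as np

def multiblock_pca(blocks, n_components=2):
    """Minimal SUM-PCA sketch: blocks is a list of (n_samples, p_k)
    arrays measured on the same samples (rows)."""
    scaled = []
    for X in blocks:
        Xc = X - X.mean(axis=0)          # centre each block column-wise
        Xc = Xc / np.linalg.norm(Xc)     # block-scale by Frobenius norm
        scaled.append(Xc)                #   so no block dominates
    Z = np.hstack(scaled)                # concatenate along variables
    # one SVD of the concatenated matrix yields common latent variables
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    scores = U[:, :n_components] * s[:n_components]   # common scores
    loadings = Vt[:n_components].T                    # stacked loadings
    return scores, loadings

# Two toy blocks measured on the same six samples
rng = np.random.default_rng(0)
X1 = rng.normal(size=(6, 4))
X2 = rng.normal(size=(6, 3))
T, P = multiblock_pca([X1, X2])
print(T.shape, P.shape)  # (6, 2) (7, 2)
```

The block-wise Frobenius scaling is one common convention for preventing a large or high-variance block from dominating the common components; other weightings exist, and the book's supervised and more advanced methods go well beyond this concatenation-plus-SVD scheme.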

Similar books and articles

The statistical analysis of experimental data. John Mandel - 1964 - New York: Dover Publications.
Statistical Machine Learning and the Logic of Scientific Discovery. Antonino Freno - 2009 - Iris. European Journal of Philosophy and Public Debate 1 (2): 375-388.
A Falsificationist Account of Artificial Neural Networks. Oliver Buchholz & Eric Raidl - forthcoming - The British Journal for the Philosophy of Science.
Statistical explanation & statistical relevance. Wesley C. Salmon - 1971 - [Pittsburgh]: University of Pittsburgh Press. Edited by Richard C. Jeffrey & James G. Greeno.

