RMIT University

Difficulties in benchmarking ecological null models: an assessment of current methods

posted on 2024-11-02, 11:58 authored by Chai Molina, Lewi StoneLewi Stone
Identifying species interactions and detecting when ecological communities are structured by them is an important problem in ecology and biogeography. Ecologists have developed specialized statistical hypothesis tests to detect patterns indicative of community‐wide processes in their field data. In this respect, null model approaches have proved particularly popular. The freedom allowed in choosing the null model and statistic to construct a hypothesis test leads to a proliferation of possible hypothesis tests from which ecologists can choose to detect these processes. Here, we point out some serious shortcomings of a popular approach to choosing the best hypothesis test for the ecological problem at hand, namely benchmarking different hypothesis tests by assessing their performance on artificially constructed datasets. Terminological errors concerning the use of Type‐I and Type‐II errors that underlie these approaches are discussed. We argue that the key benchmarking methods proposed in the literature are not a sound guide for selecting null hypothesis tests, and further, that there is no simple way to benchmark null hypothesis tests. Surprisingly, the basic problems identified here do not appear to have been addressed previously, and these methods are still being used to develop and test new null models and summary statistics, from quantifying community structure (e.g., nestedness and modularity) to analyzing ecological networks.
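For readers unfamiliar with the null model approach discussed in the abstract, a minimal sketch of such a hypothesis test is given below. It pairs one common summary statistic (the C-score of Stone & Roberts, which counts "checkerboard units" between species pairs) with one common null model (randomizing each species' occurrences across sites while preserving row sums). The function names and the toy matrix are illustrative only; they are not taken from the paper, and real analyses use far richer families of statistics and randomization schemes — which is precisely the proliferation the paper addresses.

```python
import random

def c_score(matrix):
    """Mean number of checkerboard units over all species pairs in a
    presence/absence matrix (rows = species, columns = sites)."""
    n = len(matrix)
    total, pairs = 0, 0
    for i in range(n):
        for j in range(i + 1, n):
            shared = sum(a and b for a, b in zip(matrix[i], matrix[j]))
            ri, rj = sum(matrix[i]), sum(matrix[j])
            total += (ri - shared) * (rj - shared)
            pairs += 1
    return total / pairs

def null_model_p_value(matrix, stat=c_score, n_iter=999, seed=0):
    """One-sided permutation test: shuffle each species' occurrences
    across sites (fixed row sums, equiprobable columns) and ask how
    often the randomized statistic is at least as large as observed."""
    rng = random.Random(seed)
    observed = stat(matrix)
    exceed = 0
    for _ in range(n_iter):
        shuffled = [row[:] for row in matrix]
        for row in shuffled:
            rng.shuffle(row)  # preserves each species' total occurrences
        if stat(shuffled) >= observed:
            exceed += 1
    # Add-one correction so the p-value is never exactly zero.
    return (exceed + 1) / (n_iter + 1)

# Illustrative data: two species forming a perfect checkerboard.
occurrences = [[1, 0, 1, 0],
               [0, 1, 0, 1]]
p = null_model_p_value(occurrences, n_iter=199, seed=1)
```

Note that every choice here — the statistic, the randomization scheme, the one-sided rejection rule — could be swapped for an alternative, yielding a different test; the paper's argument is that benchmarking on artificial datasets is not a sound way to pick among them.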

History

Related Materials

  1. DOI: 10.1002/ecy.2945
  2. ISSN: 0012-9658

Journal

Ecology

Volume

101

Number

e02945

Issue

3

Start page

1

End page

9

Total pages

9

Publisher

John Wiley & Sons

Place published

United States

Language

English

Copyright

© 2019 The Authors

Former Identifier

2006096697

Esploro creation date

2020-06-22
