Appendix B — Near Equality Tests

This manual is still under development and may be subject to change.

This warning will be removed once the manual is finalized.

This document provides unit tests for the LogoClim NetLogo model. The tests perform near-equality comparisons to verify that WorldClim data processed in NetLogo remain within acceptable tolerance of the original values.

B.1 Problem

LogoClim integrates WorldClim data into NetLogo models, requiring the conversion of raster data between formats.

WorldClim provides raster data in GeoTIFF format, a high-precision standard for geospatial data with extensive metadata capabilities. However, NetLogo’s GIS extension requires Esri ASCII format, a simpler text-based format with lower precision and limited metadata.

This conversion process inevitably introduces information loss or alteration. These unit tests ensure that discrepancies remain within acceptable tolerance levels, maintaining the reliability of downstream simulations and analyses.

B.2 Methods

B.2.1 Source of Data

Data used in this report come from the WorldClim database; country boundaries are drawn from the GADM database.

B.2.2 Data Munging

The data munging followed the data science workflow outlined by Wickham et al. (2023), as illustrated in Figure B.1. All processing was carried out using the Quarto publishing system (Allaire et al., n.d.), the NetLogo environment, the R programming language (R Core Team, n.d.), and several R packages.

For data manipulation and workflow management, priority was given to packages from the tidyverse, rOpenSci, and rspatial ecosystems, as well as other packages adhering to the tidy tools manifesto (Wickham, 2023).

Figure B.1: Data science workflow created by Wickham, Çetinkaya-Rundel, and Grolemund.

Source: Reproduced from Wickham et al. (2023).

B.2.3 Data Extraction and Transformation

Data extraction was performed using the worldclim_download() function from the orbis R package (Vartanian, 2026c). This function scrapes climate data from the WorldClim website and downloads the relevant GeoTIFF files for the specified variables and time periods.

Following the extraction, the transformation from GeoTIFF to Esri ASCII format is carried out using the worldclim_to_ascii() function from the orbis R package.
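
A condensed sketch of these two steps is shown below. The full calls, with all arguments, appear in Sections B.5.2 and B.5.3; the series, resolution, variable, and year used here are placeholders, and country_shape stands for a country polygon such as the one obtained in Section B.4.2.

Code
library(orbis)

# Sketch only: placeholder arguments.
tif_files <- worldclim_download(
  series = "hcd",     # Historical Climate Data
  resolution = "10m", # 10 minutes (~340 km2 at the Equator)
  variable = "tavg",  # Average temperature
  year = "1970-2000",
  dir = tempdir()
)

asc_files <- worldclim_to_ascii(tif_files, shape = country_shape)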

B.2.4 NetLogo Integration

Integration with NetLogo (Wilensky, 1999) is facilitated by the logolink R package (Vartanian, 2026b). This package enables the execution of BehaviorSpace experiments directly from R.

Output is extracted in the Table and Lists formats, containing the value, latitude, and longitude of each patch, along with global variables that describe the model’s configuration and settings.
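
In outline, the integration reduces to two calls; the full versions, including the model constants, appear in Sections B.5.4 and B.6.4. The metrics listed here are a subset, shown for illustration only.

Code
library(logolink)

# Sketch only: `model_path` points to the LogoClim model file (see Section B.3.4).
setup_file <- create_experiment(
  name = "WorldClim: Historical Climate Data",
  setup = "setup false",
  metrics = c("month", "year", "[value] of patches")
)

results <- run_experiment(
  model_path,
  setup_file = setup_file,
  output = c("table", "lists")
)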

No Java dependencies are required: NetLogo bundles its own Java Runtime Environment (JRE), so it operates independently of the system’s Java installation.

B.2.5 Continuous Integration

These tests use the latest release of NetLogo and are automated with GitHub Actions workflows provided by the LogoActions project (Vartanian, 2026a). Each commit to the code repository triggers a test run, ensuring that changes to the codebase are validated against the defined tolerance levels.

B.2.6 Near Equality Tests

The data validation is performed using error-tolerance tests built on expectation functions from the testthat R package (Wickham, 2011). These tests compare the values of the WorldClim data loaded in LogoClim against the original WorldClim dataset.

Each test selects a random variable for a random month or year using the worldclim_random() function from the orbis R package. No seed is set for the random number generator, so results vary between runs.

Elevation, bioclimatic variables, and the models FIO-ESM-2-0, GFDL-ESM4, and HadGEM3-GC31-LL are excluded due to their data limitations.

Two main aspects are considered when performing near-equality tests:

B.2.6.1 Minimum Number of Cells

To ensure valid statistical analysis, a minimum number of cells is required. The resolution for each country was chosen based on its area, aiming for approximately 1,000 cells (see the sketch at the end of this subsection). The calculation considers the available resolutions and their approximate cell areas at the Equator:

  • 10 minutes (~340 km² per cell)
  • 5 minutes (~85 km² per cell)
  • 2.5 minutes (~21 km² per cell)
  • 30 seconds (~1 km² per cell)

Very small countries, such as Dominica, were excluded from the analysis because of their limited area and data availability.
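
The selection rule can be sketched as follows. The logic mirrors the case_when() calls used in Sections B.5.1 and B.6.1; the helper name pick_resolution() is illustrative only.

Code
# Sketch only: picks the coarsest resolution that still yields ~1,000 cells.
pick_resolution <- function(shape_area) {
  dplyr::case_when(
    shape_area / 340 >= 1000 ~ "10m",
    shape_area / 85 >= 1000 ~ "5m",
    shape_area / 21 >= 1000 ~ "2.5m",
    TRUE ~ "30s"
  )
}

pick_resolution(1951230) # e.g., Mexico (~1,951,230 km2) -> "10m"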

B.2.6.2 Tolerance Level

The base R all.equal() function and testthat’s expect_equal() are used to perform the near-equality tests; both rely on relative tolerance. Comparisons are conducted between the original GeoTIFF file and the patch values extracted directly from the LogoClim model.

Relative tolerance is proportional to the magnitude of the quantity being compared: larger values are allowed larger absolute errors.

Given \(x\) and \(y\), relative tolerance can be expressed as:

\[ |x - y| \leq \text{tolerance} \times \max(|x|, |y|) \]

or

\[ \text{tolerance} \geq \frac{|x - y|}{\max(|x|, |y|)} \]

where:

  • \(x\) and \(y\) are the values being compared
  • \(\text{tolerance}\) is the relative tolerance level
  • \(\max(|x|, |y|)\) is the larger of the absolute values of \(x\) and \(y\)

For this analysis, the following tolerance level is used:

Code
tolerance <- 0.01

\[ \frac{|x - y|}{\max(|x|, |y|)} \leq 0.01 \]
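
As a quick illustration with made-up values, comparing a LogoClim value of 20.05 with an original WorldClim value of 20.00 gives a relative difference of about 0.0025, well within the 0.01 tolerance:

Code
x <- 20.00 # Original WorldClim value (illustrative)
y <- 20.05 # Corresponding value extracted from LogoClim (illustrative)

abs(x - y) / max(abs(x), abs(y))
#> [1] 0.002493766

all.equal(x, y, tolerance = 0.01)
#> [1] TRUE

testthat::expect_equal(y, x, tolerance = 0.01) # Passes silently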

B.2.7 Code Style

The tidyverse’s tidy tools manifesto (Wickham, 2023), code style guide (Wickham, n.d.-a), and design principles (Wickham, n.d.-b) were followed to ensure consistency and enhance readability.

B.2.8 Reproducibility

The pipeline is fully reproducible and can be rerun at any time. To ensure consistent results, the renv package (Ushey & Wickham, 2025) was used to manage and restore the R environment. See the README file in the code repository for instructions on running it.
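
A minimal sketch of restoring the environment, assuming renv is not yet installed and the commands are run from the project root (where the renv.lock file lives):

Code
install.packages("renv") # Skip if renv is already installed
renv::restore()          # Reinstalls the package versions recorded in renv.lock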

B.3 Set the Environment

B.3.1 Load Packages

Code
library(brandr)
library(checkmate)
library(cli)
library(dplyr)
#> 
#> Attaching package: 'dplyr'
#> The following objects are masked from 'package:stats':
#> 
#>     filter, lag
#> The following objects are masked from 'package:base':
#> 
#>     intersect, setdiff, setequal, union
library(fs)
library(geodata)
#> Loading required package: terra
#> terra 1.8.86
#> 
#> Attaching package: 'terra'
#> The following object is masked from 'package:knitr':
#> 
#>     spin
#> The following objects are masked from 'package:magrittr':
#> 
#>     extract, inset
library(ggplot2)
library(here)
library(knitr)
library(leaflet)
library(logolink)
library(magrittr)
library(moments)
library(orbis) # github.com/danielvartan/orbis
library(patchwork)
#> 
#> Attaching package: 'patchwork'
#> The following object is masked from 'package:terra':
#> 
#>     area
library(purrr)
#> 
#> Attaching package: 'purrr'
#> The following object is masked from 'package:magrittr':
#> 
#>     set_names
library(sf)
#> Linking to GEOS 3.14.1, GDAL 3.12.0, PROJ 9.7.0; sf_use_s2() is TRUE
library(stringr)
library(terra)
library(testthat)
#> 
#> Attaching package: 'testthat'
#> The following objects are masked from 'package:terra':
#> 
#>     compare, describe
#> The following objects are masked from 'package:magrittr':
#> 
#>     equals, is_less_than, not
library(tidyr)
#> 
#> Attaching package: 'tidyr'
#> The following object is masked from 'package:terra':
#> 
#>     extract
#> The following object is masked from 'package:magrittr':
#> 
#>     extract
library(tidyterra)
#> 
#> Attaching package: 'tidyterra'
#> The following object is masked from 'package:stats':
#> 
#>     filter

B.3.2 Load Custom Functions

The source code for the functions below can be found in the R directory of the code repository.

Code
here("R", "compare_plots.R") |> source()
here("R", "compare_statistics.R") |> source()

B.3.3 Set Data Directory

The here R package (Müller, 2025) is used to construct file paths relative to the project root directory, ensuring portability across different systems.

Code
data_dir <- here("data")
Code
if (!dir_exists(data_dir)) {
  dir_create(data_dir, recurse = TRUE)
}

B.3.4 Set Initial Variables

Setting the JAVA_TOOL_OPTIONS environment variable is optional, but recommended to avoid unnecessary messages from the Java Advanced Imaging (JAI) library.

Code
Sys.setenv(JAVA_TOOL_OPTIONS = "-Dcom.sun.media.jai.disableMediaLib=true")
Code
model_path <- here("nlogox", "logoclim.nlogox")

B.4 Select Random Country

B.4.1 Select Country

The list of countries is based on the ISO 3166-1 alpha-3 standard and drawn using the ISOcodes R package (Hornik & Buchta, 2025).

Code
country <- country_names("alpha 3") |> sample(1)
Code
country
#> Mexico 
#>  "MEX"

B.4.2 Download Country Shape

The geodata R package (Hijmans et al., 2024), part of the rspatial ecosystem, was used to download country shapes from the GADM database (Hijmans, n.d.).

Code
country_shape <-
  country |>
  gadm(
    level = 0,
    path = path(data_dir)
  )

B.4.3 Calculate Shape Area

Code
shape_area <-
  country_shape |>
  expanse(unit = "km") |>
  magrittr::extract(1)

This while loop ensures that very small countries are not selected: their territories do not reach the minimum threshold of approximately 1,000 cells, even at the 2.5-minute resolution (~21 km² per cell).

Code
while (!(shape_area / 21 >= 1000)) {
  country <- country_names("alpha 3") |> sample(1)

  country_shape <-
    country |>
    gadm(
      level = 0,
      path = path(data_dir)
    )

  shape_area <-
    country_shape |>
    expanse(unit = "km") |>
    magrittr::extract(1)
}
Code
shape_area
#> [1] 1951230.548
Code
country
#> Mexico 
#>  "MEX"

B.4.4 Visualize Country Shape

The rotate() function from the terra package (Hijmans, 2026) is used to adjust shapes that cross the International Date Line (e.g., Russian territory).

This adjustment is applied solely for visualizing countries on leaflet. The Esri ASCII transformation function already accounts for such cases.

Code
idl_countries <- c(
  "FJI",
  "KIR",
  "NZL",
  "RUS",
  "USA"
)
Code
if (country %in% idl_countries) {
  country_shape_leaflet <-
    country_shape |>
    rotate()
} else {
  country_shape_leaflet <- country_shape
}
Code
leaflet() |>
  addTiles() |>
  fitBounds(
    lng1 = country_shape_leaflet |>
      st_bbox() |>
      magrittr::extract("xmin") |>
      unname(),
    lat1 = country_shape_leaflet |>
      st_bbox() |>
      magrittr::extract("ymin") |>
      unname(),
    lng2 = country_shape_leaflet |>
      st_bbox() |>
      magrittr::extract("xmax") |>
      unname(),
    lat2 = country_shape_leaflet |>
      st_bbox() |>
      magrittr::extract("ymax") |>
      unname()
  ) |>
  addPolygons(
    data = country_shape,
    fillColor = "transparent",
    color = "blue",
    weight = 2,
    opacity = 1
  )

B.5 Test Historical Climate Data

This section performs near-equality tests using WorldClim’s Historical Climate Data series.

B.5.1 Select Random WorldClim Dataset

Code
setup <- worldclim_random("hcd")
Code
while (setup$variable %in% c("bioc", "elev")) {
  setup <- worldclim_random("hcd")
}
Code
setup <-
  setup |>
  inset2(
    "resolution",
    case_when(
      shape_area / 340 >= 1000 ~
        c("10 Minutes (~340 km2 at the Equator)" = "10m"),
      shape_area / 85 >= 1000 ~
        c("5 Minutes (~85 km2 at the Equator)" = "5m"),
      shape_area / 21 >= 1000 ~
        c("2.5 Minutes (~21 km2 at the Equator)" = "2.5m"),
      TRUE ~
        c("30 Seconds (~1 km2  at the Equator)" = "30s")
    )
  )
Code
setup
#> $series
#> Historical Climate Data 
#>                   "hcd" 
#> 
#> $resolution
#> 10 Minutes (~340 km2 at the Equator) 
#>                                "10m" 
#> 
#> $variable
#> Average Temperature (°C) 
#>                   "tavg" 
#> 
#> $year
#> 1970-2000 
#>      1980 
#> 
#> $month
#> April 
#>     4

B.5.2 Download Dataset

Code
tif_files <- worldclim_download(
  series = setup$series,
  resolution = setup$resolution,
  variable = setup$variable,
  model = setup$model,
  ssp = setup$ssp,
  year = names(setup$year),
  dir = tempdir()
)
#> ℹ Scraping WorldClim Website
#> ✔ Scraping WorldClim Website [470ms]
#> 
#> ℹ Calculating File Sizes
#> ℹ Total download size (compressed): 35.6M.
#> ℹ Calculating File Sizes
#> ✔ Calculating File Sizes [737ms]
#> 
#> ℹ Creating LICENSE and README Files
#> ✔ Creating LICENSE and README Files [102ms]
#> 
#> ℹ Downloading Files
#> ℹ Downloading 1 file to '/tmp/Rtmp8SWov7/historical-climate-data'
#> ℹ Downloading Files
#> ✔ Downloading Files [19.8s]
#> 
#> ℹ Unzipping Files
#> ✔ Unzipping Files [21ms]

B.5.3 Transform Data to Esri ASCII Format

Code
tif_file <-
  tif_files |>
  str_subset(
    paste0(
      "(?<=_)",
      setup$variable |> unname(),
      "_",
      str_pad(setup$month, width = 2, pad = "0")
    )
  )

The dx parameter specifies the magnitude and direction of the data rotation. Negative values rotate the data to the left, while positive values rotate it to the right. This adjustment is applied only for countries crossing the International Date Line.

Code
asc_file <-
  tif_file |>
  worldclim_to_ascii(
    shape = country_shape,
    dx = if_else(country == "USA", 30, -45)
  )

B.5.4 Run Data in LogoClim

Code
setup_file <- create_experiment(
  name = paste0("WorldClim", ": ", names(setup$series)),
  setup = 'setup false',
  metrics = c(
    'month',
    'year',
    'world-width',
    'world-height',
    'cell-size',
    '[first latitude] of patches',
    '[first longitude] of patches',
    '[value] of patches'
  ),
  constants = list(
    "data-series" = names(setup$series),
    "data-resolution" = names(setup$resolution),
    "climate-variable" = names(setup$variable),
    "start-month" = names(setup$month),
    "start-year" = setup$year,
    "data-path" = tempdir()
  )
)
Code
results <-
  model_path |>
  run_experiment(
    setup_file = setup_file,
    output = c("table", "lists")
  )
Code
results |> glimpse()
#> List of 3
#>  $ metadata:List of 6
#>   ..$ timestamp       : POSIXct[1:1], format: "2026-01-12 09:32:44"
#>   ..$ netlogo_version : chr "7.0.3"
#>   ..$ output_version  : chr "2.0"
#>   ..$ model_file      : chr "logoclim.nlogox"
#>   ..$ experiment_name : chr "WorldClim: Historical Climate Data"
#>   ..$ world_dimensions: Named int [1:4] -135 135 -117 117
#>   .. ..- attr(*, "names")= chr [1:4] "min-pxcor" "max-pxcor" "min-pycor" "max-pycor"
#>  $ table   : tibble [1 × 16] (S3: tbl_df/tbl/data.frame)
#>   ..$ run_number                : num 1
#>   ..$ data_series               : chr "Historical Climate Data"
#>   ..$ data_resolution           : chr "10 Minutes (~340 km2 at the Equator)"
#>   ..$ climate_variable          : chr "Average Temperature (°C)"
#>   ..$ start_month               : chr "April"
#>   ..$ start_year                : num 1980
#>   ..$ data_path                 : chr "/tmp/Rtmp8SWov7"
#>   ..$ step                      : num 1
#>   ..$ month                     : chr "April"
#>   ..$ year                      : chr "1970-2000"
#>   ..$ world_width               : num 191
#>   ..$ world_height              : num 110
#>   ..$ cell_size                 : num 0.167
#>   ..$ first_latitude_of_patches : chr "[27.166666666692002 23.333333333351 19.166666666676 15.500000000001998 16.166666666669997 19.500000000009997 31"| __truncated__
#>   ..$ first_longitude_of_patches: chr "[-92.49999999994799 -109.999999999983 -116.49999999999599 -89.66666666660899 -107.833333333312 -100.66666666663"| __truncated__
#>   ..$ value_of_patches          : chr "[false 18.3132495880127 false false false 20.6547508239746 19.2145004272461 false false false false false false"| __truncated__
#>  $ lists   : tibble [21,010 × 12] (S3: tbl_df/tbl/data.frame)
#>   ..$ run_number                : num [1:21010] 1 1 1 1 1 1 1 1 1 1 ...
#>   ..$ data_series               : chr [1:21010] "Historical Climate Data" "Historical Climate Data" "Historical Climate Data" "Historical Climate Data" ...
#>   ..$ data_resolution           : chr [1:21010] "10 Minutes (~340 km2 at the Equator)" "10 Minutes (~340 km2 at the Equator)" "10 Minutes (~340 km2 at the Equator)" "10 Minutes (~340 km2 at the Equator)" ...
#>   ..$ climate_variable          : chr [1:21010] "Average Temperature (°C)" "Average Temperature (°C)" "Average Temperature (°C)" "Average Temperature (°C)" ...
#>   ..$ start_month               : chr [1:21010] "April" "April" "April" "April" ...
#>   ..$ start_year                : num [1:21010] 1980 1980 1980 1980 1980 1980 1980 1980 1980 1980 ...
#>   ..$ data_path                 : chr [1:21010] "/tmp/Rtmp8SWov7" "/tmp/Rtmp8SWov7" "/tmp/Rtmp8SWov7" "/tmp/Rtmp8SWov7" ...
#>   ..$ step                      : num [1:21010] 1 1 1 1 1 1 1 1 1 1 ...
#>   ..$ index                     : num [1:21010] 0 1 2 3 4 5 6 7 8 9 ...
#>   ..$ first_latitude_of_patches : num [1:21010] 27.2 23.3 19.2 15.5 16.2 ...
#>   ..$ first_longitude_of_patches: num [1:21010] -92.5 -110 -116.5 -89.7 -107.8 ...
#>   ..$ value_of_patches          : chr [1:21010] "false" "18.3132495880127" "false" "false" ...

B.5.5 Compare Plots

In some cases, LogoClim plots may appear striped. This happens because no data interpolation is applied to compensate for geodesic distortions; it is a visual artifact only, and the underlying data are unaffected.

Code
compare_plots(
  tif_file = tif_file,
  shape = country_shape,
  results = results,
  setup = setup,
  dx = if_else(country == "USA", 30, -45),
  viridis = FALSE
)

B.5.6 Compare Statistics

Code
statistics <- compare_statistics(
  tif_file = tif_file,
  shape = country_shape,
  results = results,
  tolerance = tolerance
)
Code
statistics

B.5.7 Test Near-Equality

Code
for (i in seq(4, nrow(statistics))) {
  # Announce the statistic being tested as a progress step.
  statistics |>
    pull(statistic) |>
    magrittr::extract(i) |>
    cli_progress_step()

  # Test near-equality between the LogoClim and WorldClim values
  # using the relative tolerance defined above.
  statistics |>
    pull(logoclim) |>
    magrittr::extract(i) |>
    expect_equal(
      statistics |>
        pull(worldclim) |>
        magrittr::extract(i),
      tolerance = tolerance
    )
}
#> ℹ mean
#> ✔ mean [343ms]
#> 
#> ℹ var
#> ✔ var [53ms]
#> 
#> ℹ sd
#> ✔ sd [44ms]
#> 
#> ℹ min
#> ✔ min [44ms]
#> 
#> ℹ q_1
#> ✔ q_1 [67ms]
#> 
#> ℹ median
#> ✔ median [46ms]
#> 
#> ℹ q_3
#> ✔ q_3 [46ms]
#> 
#> ℹ max
#> ✔ max [45ms]
#> 
#> ℹ iqr
#> ✔ iqr [45ms]
#> 
#> ℹ range
#> ✔ range [44ms]
#> 
#> ℹ skewness
#> ✔ skewness [65ms]
#> 
#> ℹ kurtosis

Code
cli_progress_done()
#> ✔ kurtosis [48ms]
#> 

B.6 Test Historical Monthly Weather Data

This section performs near-equality tests using WorldClim’s Historical Monthly Weather Data series.

B.6.1 Select Random WorldClim Dataset

Code
setup <- worldclim_random("hmwd")

WorldClim’s Historical Monthly Weather Data series does not include the 30 seconds (~1 km² at the Equator) resolution.

Code
setup <-
  setup |>
  inset2(
    "resolution",
    case_when(
      shape_area / 340 >= 1000 ~
        c("10 Minutes (~340 km2 at the Equator)" = "10m"),
      shape_area / 85 >= 1000 ~
        c("5 Minutes (~85 km2 at the Equator)" = "5m"),
      TRUE ~
        c("2.5 Minutes (~21 km2 at the Equator)" = "2.5m")
    )
  )
Code
setup
#> $series
#> Historical Monthly Weather Data 
#>                          "hmwd" 
#> 
#> $resolution
#> 10 Minutes (~340 km2 at the Equator) 
#>                                "10m" 
#> 
#> $variable
#> Total Precipitation (mm) 
#>                   "prec" 
#> 
#> $year
#> 2020-2024 
#>      2023 
#> 
#> $month
#> April 
#>     4

B.6.2 Download Dataset

Code
tif_files <- worldclim_download(
  series = setup$series,
  resolution = setup$resolution,
  variable = setup$variable,
  model = setup$model,
  ssp = setup$ssp,
  year = names(setup$year),
  dir = tempdir()
)
#> ℹ Scraping WorldClim Website
#> ✔ Scraping WorldClim Website [433ms]
#> 
#> ℹ Calculating File Sizes
#> ℹ Total download size (compressed): 104M.
#> ℹ Calculating File Sizes
#> ✔ Calculating File Sizes [701ms]
#> 
#> ℹ Creating LICENSE and README Files
#> ✔ Creating LICENSE and README Files [12ms]
#> 
#> ℹ Downloading Files
#> ℹ Downloading 1 file to '/tmp/Rtmp8SWov7/historical-monthly-weather-data'
#> ℹ Downloading Files
#> ✔ Downloading Files [1m 5.1s]
#> 
#> ℹ Unzipping Files
#> ✔ Unzipping Files [15ms]

B.6.3 Transform Data to Esri ASCII Format

Code
tif_file <-
  tif_files |>
  str_subset(
    paste0(
      setup$year,
      "-",
      str_pad(setup$month, width = 2, pad = "0")
    )
  )

The dx parameter specifies the magnitude and direction of the data rotation. Negative values rotate the data to the left, while positive values rotate it to the right. This adjustment is applied only for countries crossing the International Date Line.

Code
asc_file <-
  tif_file |>
  worldclim_to_ascii(
    shape = country_shape,
    dx = if_else(country == "USA", 30, -45)
  )

B.6.4 Run Data in LogoClim

Code
setup_file <- create_experiment(
  name = paste0("WorldClim", ": ", names(setup$series)),
  setup = 'setup false',
  metrics = c(
    'month',
    'year',
    'world-width',
    'world-height',
    'cell-size',
    '[first latitude] of patches',
    '[first longitude] of patches',
    '[value] of patches'
  ),
  constants = list(
    "data-series" = names(setup$series),
    "data-resolution" = names(setup$resolution),
    "climate-variable" = names(setup$variable),
    "start-month" = names(setup$month),
    "start-year" = setup$year,
    "data-path" = tempdir()
  )
)
Code
results <-
  model_path |>
  run_experiment(
    setup_file = setup_file,
    output = c("table", "lists")
  )
Code
results |> glimpse()
#> List of 3
#>  $ metadata:List of 6
#>   ..$ timestamp       : POSIXct[1:1], format: "2026-01-12 09:34:23"
#>   ..$ netlogo_version : chr "7.0.3"
#>   ..$ output_version  : chr "2.0"
#>   ..$ model_file      : chr "logoclim.nlogox"
#>   ..$ experiment_name : chr "WorldClim: Historical Monthly Weather Data"
#>   ..$ world_dimensions: Named int [1:4] -135 135 -117 117
#>   .. ..- attr(*, "names")= chr [1:4] "min-pxcor" "max-pxcor" "min-pycor" "max-pycor"
#>  $ table   : tibble [1 × 16] (S3: tbl_df/tbl/data.frame)
#>   ..$ run_number                : num 1
#>   ..$ data_series               : chr "Historical Monthly Weather Data"
#>   ..$ data_resolution           : chr "10 Minutes (~340 km2 at the Equator)"
#>   ..$ climate_variable          : chr "Total Precipitation (mm)"
#>   ..$ start_month               : chr "April"
#>   ..$ start_year                : num 2023
#>   ..$ data_path                 : chr "/tmp/Rtmp8SWov7"
#>   ..$ step                      : num 1
#>   ..$ month                     : chr "April"
#>   ..$ year                      : num 2023
#>   ..$ world_width               : num 191
#>   ..$ world_height              : num 110
#>   ..$ cell_size                 : num 0.167
#>   ..$ first_latitude_of_patches : chr "[30.000000000031 27.500000000026 30.333333333365 32.166666666702 22.000000000015 29.666666666697 24.66666666668"| __truncated__
#>   ..$ first_longitude_of_patches: chr "[-100.16666666663 -100.49999999996399 -102.99999999996899 -100.833333333298 -112.333333333321 -88.666666666607 "| __truncated__
#>   ..$ value_of_patches          : chr "[false 63.6124992370605 false false false false false false false false false false false false 4.5124998092651"| __truncated__
#>  $ lists   : tibble [21,010 × 12] (S3: tbl_df/tbl/data.frame)
#>   ..$ run_number                : num [1:21010] 1 1 1 1 1 1 1 1 1 1 ...
#>   ..$ data_series               : chr [1:21010] "Historical Monthly Weather Data" "Historical Monthly Weather Data" "Historical Monthly Weather Data" "Historical Monthly Weather Data" ...
#>   ..$ data_resolution           : chr [1:21010] "10 Minutes (~340 km2 at the Equator)" "10 Minutes (~340 km2 at the Equator)" "10 Minutes (~340 km2 at the Equator)" "10 Minutes (~340 km2 at the Equator)" ...
#>   ..$ climate_variable          : chr [1:21010] "Total Precipitation (mm)" "Total Precipitation (mm)" "Total Precipitation (mm)" "Total Precipitation (mm)" ...
#>   ..$ start_month               : chr [1:21010] "April" "April" "April" "April" ...
#>   ..$ start_year                : num [1:21010] 2023 2023 2023 2023 2023 ...
#>   ..$ data_path                 : chr [1:21010] "/tmp/Rtmp8SWov7" "/tmp/Rtmp8SWov7" "/tmp/Rtmp8SWov7" "/tmp/Rtmp8SWov7" ...
#>   ..$ step                      : num [1:21010] 1 1 1 1 1 1 1 1 1 1 ...
#>   ..$ index                     : num [1:21010] 0 1 2 3 4 5 6 7 8 9 ...
#>   ..$ first_latitude_of_patches : num [1:21010] 30 27.5 30.3 32.2 22 ...
#>   ..$ first_longitude_of_patches: num [1:21010] -100 -100 -103 -101 -112 ...
#>   ..$ value_of_patches          : chr [1:21010] "false" "63.6124992370605" "false" "false" ...

B.6.5 Compare Plots

In some cases, LogoClim plots may appear striped. This happens because no data interpolation is applied to compensate for geodesic distortions; it is a visual artifact only, and the underlying data are unaffected.

Code
compare_plots(
  tif_file = tif_file,
  shape = country_shape,
  results = results,
  setup = setup,
  dx = if_else(country == "USA", 30, -45),
  viridis = FALSE
)

B.6.6 Compare Statistics

Code
statistics <- compare_statistics(
  tif_file = tif_file,
  shape = country_shape,
  results = results,
  tolerance = tolerance
)
Code
statistics

B.6.7 Test Near-Equality

Code
for (i in seq(4, nrow(statistics))) {
  # Announce the statistic being tested as a progress step.
  statistics |>
    pull(statistic) |>
    magrittr::extract(i) |>
    cli_progress_step()

  # Test near-equality between the LogoClim and WorldClim values
  # using the relative tolerance defined above.
  statistics |>
    pull(logoclim) |>
    magrittr::extract(i) |>
    expect_equal(
      statistics |>
        pull(worldclim) |>
        magrittr::extract(i),
      tolerance = tolerance
    )
}
#> ℹ mean
#> ✔ mean [16ms]
#> 
#> ℹ var
#> ✔ var [20ms]
#> 
#> ℹ sd
#> ✔ sd [28ms]
#> 
#> ℹ min
#> ✔ min [20ms]
#> 
#> ℹ q_1
#> ✔ q_1 [19ms]
#> 
#> ℹ median
#> ✔ median [20ms]
#> 
#> ℹ q_3
#> ✔ q_3 [19ms]
#> 
#> ℹ max
#> ✔ max [19ms]
#> 
#> ℹ iqr
#> ✔ iqr [19ms]
#> 
#> ℹ range
#> ✔ range [19ms]
#> 
#> ℹ skewness
#> ✔ skewness [19ms]
#> 
#> ℹ kurtosis

Code
cli_progress_done()
#> ✔ kurtosis [19ms]
#>