
Plot fair-average diagnostics using base R

Usage

plot_fair_average(
  x,
  diagnostics = NULL,
  facet = NULL,
  metric = c("AdjustedAverage", "StandardizedAdjustedAverage", "FairM", "FairZ"),
  plot_type = c("difference", "scatter"),
  top_n = 40,
  draw = TRUE,
  preset = c("standard", "publication", "compact"),
  ...
)

Arguments

x

Output from fit_mfrm() or fair_average_table().

diagnostics

Optional output from diagnose_mfrm(), used when x is an mfrm_fit object.

facet

Optional facet name for level-wise lollipop plots.

metric

Adjusted-score metric. Accepts legacy names ("FairM", "FairZ") and package-native names ("AdjustedAverage", "StandardizedAdjustedAverage").

plot_type

"difference" or "scatter".

top_n

Maximum number of levels shown in the "difference" plot.

draw

If TRUE, draw the plot with base graphics; if FALSE, return the plotting data without drawing it.

preset

Visual preset ("standard", "publication", or "compact").

...

Additional arguments passed to fair_average_table() when x is an mfrm_fit object.

Value

A plotting-data object of class mfrm_plot_data. With draw = FALSE, the payload includes title, subtitle, legend, reference_lines, and the stacked fair-average data.
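
A minimal sketch of inspecting that payload with draw = FALSE; it assumes the mfrm_plot_data object supports list-style access and reuses a fitted model like the one created in the Examples below.

p <- plot_fair_average(fit, metric = "AdjustedAverage", draw = FALSE)
str(p, max.level = 1)  # lists title, subtitle, legend, reference_lines, and the stacked data
p$title                # plot title (assumes list-style access to the payload)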

Details

Fair-average plots compare observed scoring tendency against model-based fair metrics.

FairM (the package-native AdjustedAverage) is the model-predicted mean score for each element, adjusted for the ability distribution of the persons that element actually encountered. It answers: "What average score would this rater/criterion produce if all raters/criteria saw the same mix of persons?"

FairZ (StandardizedAdjustedAverage) standardises FairM to a z-score across elements within each facet, making it easier to compare relative severity across facets with different raw-score scales.
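
The standardisation can be illustrated with a short base-R sketch. This is not the package's internal code; the data frame and its columns (element, facet, fair_average) are hypothetical stand-ins for a fair-average table.

# Illustrative within-facet z-standardisation of fair averages
# (hypothetical columns; not the package's internal implementation).
fa <- data.frame(
  element      = c("R1", "R2", "R3", "C1", "C2"),
  facet        = c("Rater", "Rater", "Rater", "Criterion", "Criterion"),
  fair_average = c(2.8, 3.1, 2.5, 2.9, 3.3)
)
fa$fair_z <- ave(fa$fair_average, fa$facet,
                 FUN = function(x) (x - mean(x)) / sd(x))
fa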

Use FairM when the raw-score metric is meaningful (e.g., reporting average ratings on the original 1–4 scale). Use FairZ when comparing standardised severity ranks across facets.

Plot types

"difference" (default)

Lollipop chart showing the gap between the observed and fair-average score for each element. X-axis: the gap (observed minus fair metric). Y-axis: element labels. Points are colored teal (lenient, gap >= 0) or orange (severe, gap < 0), and elements are ordered by absolute gap.
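
The ranking behind this view amounts to the following sketch; observed_average and fair_average are hypothetical column names standing in for whatever the fair-average table provides.

# Sketch of the gap ranking used by the "difference" view
# (hypothetical columns; one row per element).
fa <- data.frame(
  element          = c("R1", "R2", "R3"),
  observed_average = c(3.0, 2.6, 3.2),
  fair_average     = c(2.8, 2.9, 3.1)
)
fa$gap <- fa$observed_average - fa$fair_average   # > 0 lenient (teal), < 0 severe (orange)
fa <- fa[order(abs(fa$gap), decreasing = TRUE), ]
head(fa, 40)                                      # mirrors the default top_n = 40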

"scatter"

Scatter plot of the fair metric (x-axis) against the observed average (y-axis) with an identity line. Points are colored by facet. Useful for checking overall alignment between observed and model-adjusted scores.
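
The scatter view boils down to this base-graphics idea, reusing the hypothetical fa data frame from the sketch above.

plot(fa$fair_average, fa$observed_average,
     xlab = "Fair average", ylab = "Observed average")
abline(a = 0, b = 1, lty = 2)  # identity line: points near it indicate close agreement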

Interpreting output

Difference plot: ranked element-level gaps (Observed - Fair), useful for triage of potentially lenient/severe levels.

Scatter plot: global agreement pattern relative to the identity line.

Larger absolute gaps suggest stronger divergence between observed and model-adjusted scoring.

Typical workflow

  1. Start with plot_type = "difference" to find the largest discrepancies.

  2. Use plot_type = "scatter" to check overall alignment pattern.

  3. Follow up with facet-level diagnostics for flagged levels (a combined sketch follows below).
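
A hedged end-to-end sketch of that workflow, reusing the fit object from the Examples below; it assumes diagnose_mfrm() accepts the fitted object directly and that "Rater" is a facet in the data.

# 1. Ranked discrepancies (lollipop view)
plot_fair_average(fit, metric = "AdjustedAverage", plot_type = "difference")
# 2. Overall alignment against the identity line
plot_fair_average(fit, metric = "AdjustedAverage", plot_type = "scatter")
# 3. Level-wise view for a flagged facet, with diagnostics attached
dx <- diagnose_mfrm(fit)
plot_fair_average(fit, diagnostics = dx, facet = "Rater")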

Further guidance

For a plot-selection guide and a longer walkthrough, see mfrmr_visual_diagnostics and vignette("mfrmr-visual-diagnostics", package = "mfrmr").

Examples

library(mfrmr)
toy <- load_mfrmr_data("example_core")
fit <- fit_mfrm(toy, "Person", c("Rater", "Criterion"), "Score", method = "JML", maxit = 25)
p <- plot_fair_average(fit, metric = "AdjustedAverage", draw = FALSE)
if (interactive()) {
  plot_fair_average(fit, metric = "AdjustedAverage", plot_type = "difference")
}