Quick reference for end-to-end mfrmr analysis and for checking which
output objects support `summary()` and `plot()`.
## Typical workflow
1. Fit a model with `fit_mfrm()`. For final reporting, prefer `method = "MML"` unless you explicitly want a fast exploratory JML pass.
2. (Optional) Use `run_mfrm_facets()` or `mfrmRFacets()` for a legacy-compatible one-shot workflow wrapper.
3. Build diagnostics with `diagnose_mfrm()`.
4. (Optional) Estimate interaction bias with `estimate_bias()`.
5. Generate reporting bundles: `apa_table()`, `build_fixed_reports()`, `build_visual_summaries()`.
6. (Optional) Audit report completeness with `reference_case_audit()`. Use `facets_parity_report()` only when you explicitly need the compatibility layer.
7. (Optional) Benchmark packaged reference cases with `reference_case_benchmark()` when you want an internal package-native benchmark/audit run.
8. (Optional) For design planning or future scoring, move to the simulation/prediction layer: `build_mfrm_sim_spec()` / `extract_mfrm_sim_spec()` -> `evaluate_mfrm_design()` / `predict_mfrm_population()` -> `predict_mfrm_units()` / `sample_mfrm_plausible_values()`. Fixed-calibration unit scoring currently requires an MML fit, and prediction export requires actual prediction objects in addition to `include = "predictions"`.
9. Use `summary()` for compact text checks and `plot()` (or dedicated plot helpers) for base-R visual diagnostics.
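The core of the workflow above can be sketched in a few lines. This is a hedged sketch, not a verbatim recipe: the `fit_mfrm()` arguments follow the call shown in the Examples on this page, while the single-argument forms of `estimate_bias()` and `apa_table()` are assumptions that may differ from the actual signatures.

```r
# Hedged workflow sketch; `ratings` is a long-format data frame.
ratings <- load_mfrmr_data("example_core")
fit  <- fit_mfrm(ratings, person = "Person",
                 facets = c("Rater", "Criterion"),
                 score = "Score", method = "MML")   # prefer MML for reporting
diag <- diagnose_mfrm(fit)
bias <- estimate_bias(fit)   # assumed to accept the fit directly
tbl  <- apa_table(fit)       # assumed to accept the fit directly
summary(fit); summary(diag)  # compact text checks
```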
## Four practical routes
- Quick first pass: `fit_mfrm()` -> `diagnose_mfrm()` -> `plot_qc_dashboard()`.
- Linking and coverage review: `subset_connectivity_report()` -> `plot(..., type = "design_matrix")` -> `plot_wright_unified()`.
- Manuscript prep: `reporting_checklist()` -> `build_apa_outputs()` -> `apa_table()`.
- Design planning and forecasting: `build_mfrm_sim_spec()` or `extract_mfrm_sim_spec()` -> `evaluate_mfrm_design()` -> `predict_mfrm_population()` -> `predict_mfrm_units()` or `sample_mfrm_plausible_values()` from an MML fit -> `export_mfrm_bundle(population_prediction = ..., unit_prediction = ..., plausible_values = ..., include = "predictions", ...)`.
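The design-planning route can be sketched as below. The `build_mfrm_sim_spec()` and `predict_mfrm_population()` calls mirror the Examples on this page; the commented unit-scoring and export calls are assumptions about the interface (an MML fit is required for unit scoring, per the workflow notes).

```r
# Hedged sketch of the design-planning and forecasting route.
spec <- build_mfrm_sim_spec(n_person = 30, n_rater = 4, n_criterion = 4,
                            raters_per_person = 2, assignment = "rotating")
pred_pop <- predict_mfrm_population(sim_spec = spec, reps = 2, seed = 1)

# Fixed-calibration unit scoring needs an MML fit (interface assumed):
# pred_unit <- predict_mfrm_units(fit_mml)
# pv        <- sample_mfrm_plausible_values(fit_mml)
# export_mfrm_bundle(population_prediction = pred_pop,
#                    unit_prediction = pred_unit,
#                    plausible_values = pv,
#                    include = "predictions")
```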
## Objects with default summary() and plot() routes
- `mfrm_fit`: `summary(fit)` and `plot(fit, ...)`.
- `mfrm_diagnostics`: `summary(diag)`; plotting via dedicated helpers such as `plot_unexpected()`, `plot_displacement()`, `plot_qc_dashboard()`.
- `mfrm_bias`: `summary(bias)` and `plot_bias_interaction()`.
- `mfrm_data_description`: `summary(ds)` and `plot(ds, ...)`.
- `mfrm_anchor_audit`: `summary(aud)` and `plot(aud, ...)`.
- `mfrm_facets_run`: `summary(run)` and `plot(run, type = c("fit", "qc"), ...)`.
- `apa_table`: `summary(tbl)` and `plot(tbl, ...)`.
- `mfrm_apa_outputs`: `summary(apa)` for compact diagnostics of report text.
- `mfrm_threshold_profiles`: `summary(profiles)` for preset threshold grids.
- `mfrm_population_prediction`: `summary(pred)` for design-level forecast tables.
- `mfrm_unit_prediction`: `summary(pred)` for fixed-calibration unit-level posterior summaries.
- `mfrm_plausible_values`: `summary(pv)` for draw-level uncertainty summaries.
- `mfrm_bundle` families: `summary()` and class-aware `plot(bundle, ...)`. Key bundle classes now also use class-aware `summary(bundle)`: `mfrm_unexpected`, `mfrm_fair_average`, `mfrm_displacement`, `mfrm_interrater`, `mfrm_facets_chisq`, `mfrm_bias_interaction`, `mfrm_rating_scale`, `mfrm_category_structure`, `mfrm_category_curves`, `mfrm_measurable`, `mfrm_unexpected_after_bias`, `mfrm_output_bundle`, `mfrm_residual_pca`, `mfrm_specifications`, `mfrm_data_quality`, `mfrm_iteration_report`, `mfrm_subset_connectivity`, `mfrm_facet_statistics`, `mfrm_parity_report`, `mfrm_reference_audit`, `mfrm_reference_benchmark`.
## plot.mfrm_bundle() coverage
Default dispatch now covers:

- `mfrm_unexpected`, `mfrm_fair_average`, `mfrm_displacement`
- `mfrm_interrater`, `mfrm_facets_chisq`, `mfrm_bias_interaction`
- `mfrm_bias_count`, `mfrm_fixed_reports`, `mfrm_visual_summaries`
- `mfrm_category_structure`, `mfrm_category_curves`, `mfrm_rating_scale`
- `mfrm_measurable`, `mfrm_unexpected_after_bias`, `mfrm_output_bundle`
- `mfrm_residual_pca`, `mfrm_specifications`, `mfrm_data_quality`
- `mfrm_iteration_report`, `mfrm_subset_connectivity`, `mfrm_facet_statistics`
- `mfrm_parity_report`, `mfrm_reference_audit`, `mfrm_reference_benchmark`
For unknown bundle classes, use dedicated plotting helpers or custom base-R plots from component tables.
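For example, a custom base-R chart can be built directly from a bundle's component table. This is a hedged sketch: the `$table` component and the `StdResidual` column are hypothetical names for illustration; check the actual bundle layout (e.g. via `str()`) before adapting it.

```r
# Hypothetical component names: assumes the unexpected-response bundle
# keeps its rows in `t4$table` with a `StdResidual` column.
t4 <- unexpected_response_table(fit, diagnostics = diag, top_n = 10)
hist(t4$table$StdResidual, breaks = 20,
     main = "Unexpected responses: standardized residuals",
     xlab = "Std. residual")
```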
## Examples
```r
toy <- load_mfrmr_data("example_core")
keep_people <- unique(toy$Person)[1:15]
toy <- toy[toy$Person %in% keep_people, , drop = FALSE]
fit <- fit_mfrm(
  toy,
  person = "Person",
  facets = c("Rater", "Criterion"),
  score = "Score",
  method = "JML",
  maxit = 15
)
#> Warning: Optimizer did not fully converge (code = 1). Consider increasing maxit (current: 15) or relaxing reltol (current: 1e-06).
class(summary(fit))
#> [1] "summary.mfrm_fit"
diag <- diagnose_mfrm(fit, residual_pca = "none")
class(summary(diag))
#> [1] "summary.mfrm_diagnostics"
t4 <- unexpected_response_table(fit, diagnostics = diag, top_n = 10)
class(summary(t4))
#> [1] "summary.mfrm_bundle"
p <- plot(t4, draw = FALSE)
sc <- subset_connectivity_report(fit, diagnostics = diag)
p_design <- plot(sc, type = "design_matrix", draw = FALSE, preset = "publication")
class(p_design)
#> [1] "mfrm_plot_data" "list"
chk <- reporting_checklist(fit, diagnostics = diag)
head(chk$checklist[, c("Section", "Item", "DraftReady", "NextAction")])
#>          Section                   Item DraftReady
#> 1 Method Section    Model specification       TRUE
#> 2 Method Section       Data description       TRUE
#> 3 Method Section        Precision basis       TRUE
#> 4 Method Section            Convergence      FALSE
#> 5 Method Section Connectivity assessed       TRUE
#> 6     Global Fit Standardized residuals       TRUE
#>   NextAction
#> 1 Available; adapt this evidence into the manuscript draft after methodological review.
#> 2 Available; adapt this evidence into the manuscript draft after methodological review.
#> 3 Report the precision tier explicitly and keep the exploratory/hybrid caution in the APA narrative.
#> 4 Resolve convergence before reporting model results.
#> 5 Document the connectivity result before making common-scale or linking claims.
#> 6 Use standardized residuals as screening diagnostics, not as standalone proof of model adequacy.
sim_spec <- build_mfrm_sim_spec(
  n_person = 30,
  n_rater = 4,
  n_criterion = 4,
  raters_per_person = 2,
  assignment = "rotating"
)
pred_pop <- predict_mfrm_population(sim_spec = sim_spec, reps = 2, maxit = 10, seed = 1)
#> Warning: Optimizer did not fully converge (code = 1). Consider increasing maxit (current: 10) or relaxing reltol (current: 1e-06).
#> Warning: Optimizer did not fully converge (code = 1). Consider increasing maxit (current: 10) or relaxing reltol (current: 1e-06).
summary(pred_pop)$forecast[, c("Facet", "MeanSeparation", "McseSeparation")]
#> # A tibble: 3 × 3
#>   Facet     MeanSeparation McseSeparation
#>   <chr>              <dbl>          <dbl>
#> 1 Criterion          1.87           0.076
#> 2 Person             2.03           0.027
#> 3 Rater              0.728          0.728