Quick guide to choosing the right report or table helper in mfrmr.
Use this page when you know the reporting question but have not yet decided
which bundle, table, or reporting helper to call.
Start with the question
- "How should I document the model setup and run settings?" Use `specifications_report()`.
- "Was data filtered, dropped, or mapped in unexpected ways?" Use `data_quality_report()` and `describe_mfrm_data()`.
- "Did estimation converge cleanly, and how formal is the precision layer?" Use `estimation_iteration_report()` and `precision_audit_report()`.
- "Which facets are measurable, variable, or weakly separated?" Use `facet_statistics_report()`, `measurable_summary_table()`, and `facets_chisq_table()`.
- "Are score categories functioning in a usable sequence?" Use `rating_scale_table()`, `category_structure_report()`, and `category_curves_report()`.
- "Is the design linked well enough across subsets, forms, or waves?" Use `subset_connectivity_report()` and `plot_anchor_drift()`.
- "What should go into the manuscript text and tables?" Use `reporting_checklist()` and `build_apa_outputs()`.
Recommended report route
1. Start with `specifications_report()` and `data_quality_report()` to document the run and confirm usable data.
2. Continue with `estimation_iteration_report()` and `precision_audit_report()` to judge convergence and inferential strength.
3. Use `facet_statistics_report()` and `subset_connectivity_report()` to describe spread, linkage, and measurability.
4. Add `rating_scale_table()`, `category_structure_report()`, and `category_curves_report()` to document scale functioning.
5. Finish with `reporting_checklist()` and `build_apa_outputs()` for manuscript-oriented output.
Which output answers which question
- `specifications_report()`: documents model type, estimation method, anchors, and core run settings. Best for method sections and audit trails.
- `data_quality_report()`: summarizes retained and dropped rows, missingness, and unknown elements. Best for data cleaning narratives.
- `estimation_iteration_report()`: shows replayed convergence trajectories. Best for diagnosing slow or unstable estimation.
- `precision_audit_report()`: summarizes whether SE, CI, and reliability indices are model-based, hybrid, or exploratory. Best for deciding how strongly to phrase inferential claims.
- `facet_statistics_report()`: bundles facet summaries, precision summaries, and variability tests. Best for facet-level reporting.
- `subset_connectivity_report()`: summarizes disconnected subsets and coverage bottlenecks. Best for linking and anchor strategy review.
- `rating_scale_table()`: gives category counts, average measures, and threshold diagnostics. Best for first-pass category evaluation.
- `category_structure_report()`: adds transition points and compact category warnings. Best for category-order interpretation.
- `category_curves_report()`: returns category-probability curve coordinates and summaries. Best for downstream graphics and report drafts.
- `reporting_checklist()`: turns analysis status into an action list with priorities and next steps. Best for closing reporting gaps.
- `build_apa_outputs()`: creates manuscript-draft text, notes, captions, and section maps from a shared reporting contract.
Practical interpretation rules
- Use bundle summaries first, then drill down into component tables.
- Treat `precision_audit_report()` as the gatekeeper for formal inference.
- Treat category and bias outputs as complementary layers rather than substitutes for overall fit review.
- Use `reporting_checklist()` before `build_apa_outputs()` when a report still needs missing diagnostics or clearer caveats.
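The gatekeeper rule can be expressed as a small guard in an analysis script. This is a sketch only: the `profile` component and its `SupportsFormalInference` column are inferred from the printed summary in the Examples section below, not from a documented accessor, so check the helper's reference page before relying on these names.

```r
# Sketch: consult the precision audit before phrasing inferential claims.
# `prec$profile$SupportsFormalInference` is assumed from the printed
# summary shown in the Examples section.
prec <- precision_audit_report(fit, diagnostics = diag)
if (!isTRUE(prec$profile$SupportsFormalInference)) {
  message("Exploratory precision path: treat SE, CI, and reliability as screening values.")
}
```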
Typical workflow
- Run documentation: `fit_mfrm()` -> `specifications_report()` -> `data_quality_report()`.
- Precision and facet review: `diagnose_mfrm()` -> `precision_audit_report()` -> `facet_statistics_report()`.
- Scale review: `rating_scale_table()` -> `category_structure_report()` -> `category_curves_report()`.
- Manuscript handoff: `reporting_checklist()` -> `build_apa_outputs()`.
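Assuming the example data shipped with the package, these routes can be chained in a single script. Only the calls demonstrated in the Examples section are confirmed signatures; the remaining calls are sketched with the minimal `fit`/`diagnostics` arguments and may accept different arguments in practice.

```r
library(mfrmr)

toy  <- load_mfrmr_data("example_core")
fit  <- fit_mfrm(toy, person = "Person",
                 facets = c("Rater", "Criterion"), score = "Score")
diag <- diagnose_mfrm(fit)

spec <- specifications_report(fit)                         # run documentation
prec <- precision_audit_report(fit, diagnostics = diag)    # precision review
fst  <- facet_statistics_report(fit, diagnostics = diag)   # facet review (signature assumed)
rst  <- rating_scale_table(fit)                            # scale review (signature assumed)
chk  <- reporting_checklist(fit, diagnostics = diag)       # manuscript handoff
apa  <- build_apa_outputs(fit, diagnostics = diag)
```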
Companion guides
For visual follow-up, see mfrmr_visual_diagnostics.
For one-shot analysis routes, see mfrmr_workflow_methods.
For manuscript assembly, see mfrmr_reporting_and_apa.
For linking and DFF review, see mfrmr_linking_and_dff.
For legacy-compatible wrappers and exports, see mfrmr_compatibility_layer.
Examples
toy <- load_mfrmr_data("example_core")
toy_small <- toy[toy$Person %in% unique(toy$Person)[1:12], , drop = FALSE]
fit <- fit_mfrm(
toy_small,
person = "Person",
facets = c("Rater", "Criterion"),
score = "Score",
method = "JML",
maxit = 10
)
#> Warning: Optimizer did not fully converge (code = 1). Consider increasing maxit (current: 10) or relaxing reltol (current: 1e-06).
diag <- diagnose_mfrm(fit, residual_pca = "none")
spec <- specifications_report(fit)
summary(spec)
#> mfrmr Specifications Summary
#> Class: mfrm_specifications
#> Components (6): header, data_spec, facet_labels, output_spec, convergence_control, anchor_summary
#>
#> Specification header
#> Engine Title DataFile OutputFile Model Method
#> mfrmr 0.1.0 RSM JMLE
#>
#> Specification rows: data_spec
#> Setting Value
#> Facets 2
#> Persons 12
#> Categories 4
#> RatingMin 1
#> RatingMax 4
#> NonCenteredFacet Person
#> PositiveFacets
#> DummyFacets
#> StepFacet
#> WeightColumn
#>
#> Notes
#> - Model specification summary for method and run documentation.
prec <- precision_audit_report(fit, diagnostics = diag)
summary(prec)
#> mfrmr Precision Audit Summary
#> Class: mfrm_precision_audit
#> Components (4): profile, checks, approximation_notes, settings
#>
#> Precision overview
#> Method PrecisionTier SupportsFormalInference Checks ReviewOrWarn NoteRows
#> JML exploratory FALSE 7 2 4
#>
#> Audit checks: checks
#> Check Status
#> Precision tier review
#> Optimizer convergence review
#> ModelSE availability pass
#> Fit-adjusted SE ordering pass
#> Reliability ordering pass
#> Facet precision coverage pass
#> SE source labels pass
#> Detail
#> This run uses the package's exploratory precision path; prefer MML for formal SE, CI, and reliability reporting.
#> The optimizer did not report convergence; keep SE, CI, and reliability in review mode.
#> Finite ModelSE values were available for 100.0% of rows.
#> Fit-adjusted SE values were not smaller than their paired ModelSE values.
#> Conservative reliability values were not larger than the model-based values.
#> Each facet had sample/population summaries for both model and fit-adjusted SE modes.
#> JML SE labels consistently identify observation-table information.
#>
#> Settings
#> Setting Value
#> model RSM
#> method JMLE
#> precision_tier exploratory
#>
#> Notes
#> - Exploratory precision path detected; use this run for screening and calibration triage, not as the package's primary inferential summary.
checklist <- reporting_checklist(fit, diagnostics = diag)
names(checklist)
#> [1] "checklist" "summary" "section_summary" "references"
#> [5] "settings"
apa <- build_apa_outputs(fit, diagnostics = diag)
names(apa$section_map)
#> [1] "SectionId" "Parent" "Heading" "Available"
#> [5] "SentenceCount" "Text"