Quick guide to choosing the right report or table helper in mfrmr. Use this page when you know the reporting question but have not yet decided which bundle, table, or reporting helper to call.

Start with the question

  1. Start with specifications_report() and data_quality_report() to document the run and confirm usable data.

  2. Continue with estimation_iteration_report() and precision_audit_report() to judge convergence and inferential strength.

  3. Use facet_statistics_report() and subset_connectivity_report() to describe spread, linkage, and measurability.

  4. Add rating_scale_table(), category_structure_report(), and category_curves_report() to document scale functioning.

  5. Finish with reporting_checklist() and build_apa_outputs() for manuscript-oriented output.
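The five steps above can be sketched end to end in one script. This is a hedged sketch, not a definitive recipe: `fit_mfrm()`, `diagnose_mfrm()`, and the `diagnostics` argument for `precision_audit_report()`, `facet_statistics_report()`, `reporting_checklist()`, and `build_apa_outputs()` appear in the Examples section below; passing the fitted object alone to the remaining helpers is an assumption based on the same fit-first convention.

```r
# Sketch of the recommended call order, assuming the toy_small data
# frame from the Examples section and a fit-first convention for
# every helper.
fit  <- fit_mfrm(toy_small, person = "Person",
                 facets = c("Rater", "Criterion"), score = "Score")
diag <- diagnose_mfrm(fit)

spec  <- specifications_report(fit)                       # 1. run settings
dq    <- data_quality_report(fit)                         # 1. usable data
iter  <- estimation_iteration_report(fit)                 # 2. convergence
prec  <- precision_audit_report(fit, diagnostics = diag)  # 2. inference tier
fac   <- facet_statistics_report(fit, diagnostics = diag) # 3. spread, linkage
conn  <- subset_connectivity_report(fit)                  # 3. measurability
rst   <- rating_scale_table(fit)                          # 4. scale functioning
check <- reporting_checklist(fit, diagnostics = diag)     # 5. open gaps
apa   <- build_apa_outputs(fit, diagnostics = diag)       # 5. manuscript draft
```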

Which output answers which question

specifications_report()

Documents model type, estimation method, anchors, and core run settings. Best for method sections and audit trails.

data_quality_report()

Summarizes retained and dropped rows, missingness, and unknown elements. Best for data cleaning narratives.

estimation_iteration_report()

Shows replayed convergence trajectories. Best for diagnosing slow or unstable estimation.

precision_audit_report()

Summarizes whether SE, CI, and reliability indices are model-based, hybrid, or exploratory. Best for deciding how strongly to phrase inferential claims.

facet_statistics_report()

Bundles facet summaries, precision summaries, and variability tests. Best for facet-level reporting.

subset_connectivity_report()

Summarizes disconnected subsets and coverage bottlenecks. Best for linking and anchor strategy review.

rating_scale_table()

Gives category counts, average measures, and threshold diagnostics. Best for first-pass category evaluation.

category_structure_report()

Adds transition points and compact category warnings. Best for category-order interpretation.

category_curves_report()

Returns category-probability curve coordinates and summaries. Best for downstream graphics and report drafts.
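The three category helpers above form one layer, moving from first-pass diagnostics to plot-ready coordinates. A minimal sketch, assuming each helper accepts the fitted object the way `specifications_report()` does in the Examples section:

```r
# Category layer, from screening table to plotting coordinates.
rst    <- rating_scale_table(fit)         # counts, average measures, thresholds
struct <- category_structure_report(fit)  # transition points, compact-category warnings
curves <- category_curves_report(fit)     # curve coordinates for downstream graphics
summary(rst)
```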

reporting_checklist()

Turns analysis status into an action list with priorities and next steps. Best for closing reporting gaps.

build_apa_outputs()

Creates manuscript-draft text, notes, captions, and section maps from a shared reporting contract. Best for manuscript-oriented drafting.

Practical interpretation rules

  • Use bundle summaries first, then drill down into component tables.

  • Treat precision_audit_report() as the gatekeeper for formal inference.

  • Treat category and bias outputs as complementary layers rather than substitutes for overall fit review.

  • Use reporting_checklist() before build_apa_outputs() when a report still needs missing diagnostics or clearer caveats.
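The gatekeeper and checklist-before-draft rules can be applied programmatically. This sketch assumes the audit's `profile` component is a data frame with a `PrecisionTier` column, as the printed summary in the Examples section suggests:

```r
# Gate the phrasing of inferential claims on the precision tier,
# then close reporting gaps before drafting.
prec <- precision_audit_report(fit, diagnostics = diag)
if (identical(prec$profile$PrecisionTier, "exploratory")) {
  message("Exploratory tier: report SE/CI as screening values, not formal inference.")
}
check <- reporting_checklist(fit, diagnostics = diag)  # resolve missing diagnostics
apa   <- build_apa_outputs(fit, diagnostics = diag)    # then build the draft
```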

Examples

toy <- load_mfrmr_data("example_core")
toy_small <- toy[toy$Person %in% unique(toy$Person)[1:12], , drop = FALSE]
fit <- fit_mfrm(
  toy_small,
  person = "Person",
  facets = c("Rater", "Criterion"),
  score = "Score",
  method = "JML",
  maxit = 10
)
#> Warning: Optimizer did not fully converge (code = 1). Consider increasing maxit (current: 10) or relaxing reltol (current: 1e-06).
diag <- diagnose_mfrm(fit, residual_pca = "none")

spec <- specifications_report(fit)
summary(spec)
#> mfrmr Specifications Summary 
#>   Class: mfrm_specifications
#>   Components (6): header, data_spec, facet_labels, output_spec, convergence_control, anchor_summary
#> 
#> Specification header
#>       Engine Title DataFile OutputFile Model Method
#>  mfrmr 0.1.0                             RSM   JMLE
#> 
#> Specification rows: data_spec
#>           Setting  Value
#>            Facets      2
#>           Persons     12
#>        Categories      4
#>         RatingMin      1
#>         RatingMax      4
#>  NonCenteredFacet Person
#>    PositiveFacets       
#>       DummyFacets       
#>         StepFacet       
#>      WeightColumn       
#> 
#> Notes
#>  - Model specification summary for method and run documentation.

prec <- precision_audit_report(fit, diagnostics = diag)
summary(prec)
#> mfrmr Precision Audit Summary 
#>   Class: mfrm_precision_audit
#>   Components (4): profile, checks, approximation_notes, settings
#> 
#> Precision overview
#>  Method PrecisionTier SupportsFormalInference Checks ReviewOrWarn NoteRows
#>     JML   exploratory                   FALSE      7            2        4
#> 
#> Audit checks: checks
#>                     Check Status
#>            Precision tier review
#>     Optimizer convergence review
#>      ModelSE availability   pass
#>  Fit-adjusted SE ordering   pass
#>      Reliability ordering   pass
#>  Facet precision coverage   pass
#>          SE source labels   pass
#>                                                                                                            Detail
#>  This run uses the package's exploratory precision path; prefer MML for formal SE, CI, and reliability reporting.
#>                            The optimizer did not report convergence; keep SE, CI, and reliability in review mode.
#>                                                          Finite ModelSE values were available for 100.0% of rows.
#>                                         Fit-adjusted SE values were not smaller than their paired ModelSE values.
#>                                      Conservative reliability values were not larger than the model-based values.
#>                              Each facet had sample/population summaries for both model and fit-adjusted SE modes.
#>                                                JML SE labels consistently identify observation-table information.
#> 
#> Settings
#>         Setting       Value
#>           model         RSM
#>          method        JMLE
#>  precision_tier exploratory
#> 
#> Notes
#>  - Exploratory precision path detected; use this run for screening and calibration triage, not as the package's primary inferential summary.

checklist <- reporting_checklist(fit, diagnostics = diag)
names(checklist)
#> [1] "checklist"       "summary"         "section_summary" "references"     
#> [5] "settings"       

apa <- build_apa_outputs(fit, diagnostics = diag)
names(apa$section_map)
#> [1] "SectionId"     "Parent"        "Heading"       "Available"    
#> [5] "SentenceCount" "Text"