Build an unexpected-after-adjustment screening report
Source: R/api-tables.R
Usage
unexpected_after_bias_table(
fit,
bias_results,
diagnostics = NULL,
abs_z_min = 2,
prob_max = 0.3,
top_n = 100,
rule = c("either", "both")
)

Arguments
- fit
Output from fit_mfrm().
- bias_results
Output from estimate_bias().
- diagnostics
Optional output from diagnose_mfrm() for baseline comparison.
- abs_z_min
Absolute standardized-residual cutoff.
- prob_max
Maximum observed-category probability cutoff.
- top_n
Maximum number of rows to return.
- rule
Flagging rule: "either" or "both".
Value
A named list with:
- table: unexpected responses after bias adjustment
- summary: one-row summary (includes baseline-vs-after counts)
- thresholds: applied thresholds
- facets: analyzed bias facet pair
Details
This helper recomputes expected values and residuals after interaction
adjustments from estimate_bias() have been introduced.
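The recomputation can be pictured with a minimal rating-scale sketch. This is not mfrmr's internal code; it assumes a Rasch-family linear predictor (ability minus rater severity minus criterion difficulty) that the interaction term shifts before probabilities and the expected score are rebuilt:

```r
# Hedged sketch: expected raw score for one observation, with an
# optional bias (interaction) adjustment added to the measure.
expected_score <- function(theta, rater, criterion, steps, bias = 0) {
  eta <- theta - rater - criterion + bias   # adjusted linear predictor
  p <- c(1, exp(cumsum(eta - steps)))       # rating-scale category kernel
  p <- p / sum(p)                           # category probabilities 0..K
  sum((seq_along(p) - 1) * p)               # expected raw score
}

expected_score(1.0, 0.2, -0.1, steps = c(-1, 0, 1))              # ~2.159
expected_score(1.0, 0.2, -0.1, steps = c(-1, 0, 1), bias = 0.5)  # larger
```

With the shifted expected value in hand, residuals and standardized residuals are recomputed and re-screened against the same cutoffs.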
The returned object has class mfrm_unexpected_after_bias and supports
summary() and plot() methods; plot() accepts type = "scatter",
"severity", or "comparison".
Interpreting output
- summary: before/after unexpected counts and reduction metrics.
- table: residual unexpected responses after bias adjustment.
- thresholds: screening settings used in this comparison.
Large reductions indicate that the bias terms explain part of the earlier unexpectedness; rows that remain flagged point to residual model-data mismatch.
Typical workflow
1. Run unexpected_response_table() as a baseline.
2. Estimate bias via estimate_bias().
3. Run unexpected_after_bias_table(...) and compare reductions.
Further guidance
For a plot-selection guide and a longer walkthrough, see
mfrmr_visual_diagnostics and
vignette("mfrmr-visual-diagnostics", package = "mfrmr").
Output columns
The table data.frame has the same structure as
unexpected_response_table() output, with an additional
BiasAdjustment column showing the bias correction applied to each
observation's expected value.
The summary data.frame contains:
- TotalObservations
Total observations analyzed.
- BaselineUnexpectedN
Unexpected count before bias adjustment.
- AfterBiasUnexpectedN
Unexpected count after adjustment.
- ReducedBy, ReducedPercent
Reduction in unexpected count.
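The reduction metrics follow directly from the two counts. A minimal sketch (plain arithmetic, using the figures from the example below, where 88 baseline flags drop to 20):

```r
# Hedged sketch: ReducedBy and ReducedPercent from the before/after counts.
baseline_n <- 88   # BaselineUnexpectedN
after_n    <- 20   # AfterBiasUnexpectedN
reduced_by      <- baseline_n - after_n                      # 68
reduced_percent <- round(100 * reduced_by / baseline_n, 3)   # 77.273
```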
Examples
toy <- load_mfrmr_data("example_bias")
fit <- fit_mfrm(toy, "Person", c("Rater", "Criterion"), "Score", method = "JML", maxit = 25)
diag <- diagnose_mfrm(fit, residual_pca = "none")
bias <- estimate_bias(fit, diag, facet_a = "Rater", facet_b = "Criterion", max_iter = 2)
t10 <- unexpected_after_bias_table(fit, bias, diagnostics = diag, top_n = 20)
summary(t10)
#> mfrmr Unexpected-after-Bias Summary
#> Class: mfrm_unexpected_after_bias
#> Components (4): table, summary, thresholds, facets
#>
#> After-bias threshold summary
#> TotalObservations UnexpectedN UnexpectedPercent LowProbabilityN LargeResidualN
#> 384 20 5.208 20 9
#> Rule AbsZThreshold ProbThreshold BaselineUnexpectedN AfterBiasUnexpectedN
#> either 2 0.3 88 20
#> ReducedBy ReducedPercent
#> 68 77.273
#>
#> After-bias flagged rows: table
#> Row Person Rater Criterion Weight Score Observed Expected Residual
#> 343 P043 R01 Accuracy 1 1 1 3.191 -2.191
#> 136 P017 R04 Accuracy 1 3 3 1.451 1.549
#> 279 P035 R02 Accuracy 1 2 2 3.501 -1.501
#> 254 P032 R03 Language 1 4 4 2.216 1.784
#> 31 P004 R02 Accuracy 1 3 3 1.496 1.504
#> 131 P017 R02 Organization 1 1 1 2.717 -1.717
#> 110 P014 R03 Language 1 2 2 3.397 -1.397
#> 215 P027 R01 Accuracy 1 2 2 3.363 -1.363
#> 135 P017 R02 Accuracy 1 4 4 2.471 1.529
#> 269 P034 R02 Language 1 1 1 2.500 -1.500
#> StdResidual ObsProb MostLikely MostLikelyProb CategoryGap Surprise
#> -3.148 0.008 3 0.505 2 2.072
#> 2.630 0.046 1 0.598 2 1.335
#> -2.491 0.052 4 0.556 2 1.284
#> 2.343 0.037 2 0.486 2 1.427
#> 2.467 0.056 1 0.563 2 1.251
#> -2.262 0.048 3 0.487 2 1.320
#> -2.179 0.077 4 0.480 2 1.114
#> -2.091 0.086 4 0.456 2 1.065
#> 1.990 0.077 2 0.419 2 1.114
#> -1.954 0.087 3 0.422 2 1.061
#> Direction FlagLowProbability FlagLargeResidual Severity
#> Lower than expected TRUE TRUE 6.220
#> Higher than expected TRUE TRUE 4.964
#> Lower than expected TRUE TRUE 4.775
#> Higher than expected TRUE TRUE 4.771
#> Higher than expected TRUE TRUE 4.718
#> Lower than expected TRUE TRUE 4.581
#> Lower than expected TRUE TRUE 4.292
#> Lower than expected TRUE TRUE 4.155
#> Higher than expected TRUE FALSE 4.105
#> Lower than expected TRUE FALSE 4.014
#> BiasAdjustment
#> 0.776
#> -1.103
#> 0.246
#> 0.155
#> 0.246
#> -0.023
#> 0.155
#> 0.776
#> 0.246
#> -0.209
#>
#> Settings
#> Setting Value
#> abs_z_min 2
#> prob_max 0.3
#> rule either
#>
#> Notes
#> - Unexpected-response summary after interaction adjustment.
#> - Bias interaction: Rater x Criterion (2 iterations, pairwise).
p_t10 <- plot(t10, draw = FALSE)
class(p_t10)
#> [1] "mfrm_plot_data" "list"