This is an interface function to estimate factor scores based on task-level (i.e., passage-level in the ORF assessment context) accuracy and speed data. It implements the likelihood-based approaches (MLE, MAP, or EAP) described in Qiao et al. (2023) or the fully Bayesian method described in Kara et al. (2020).

Usage

scoring(
  calib.data = NA,
  data = NA,
  person.id = "",
  task.id = "",
  sub.task.id = "",
  occasion = "",
  group = "",
  max.counts = "",
  obs.counts = "",
  time = "",
  cens = "",
  cases = NULL,
  est = "map",
  se = "analytic",
  failsafe = 0,
  bootstrap = 100,
  external = NULL,
  type = "general",
  censoring = FALSE,
  testlet = FALSE
)

Arguments

calib.data

A fitted model object returned by the fit.model function in the calibration phase.

data

A data frame. It is required, together with the person.id, task.id, max.counts, obs.counts, and time arguments.

person.id

Quoted variable name in data that indicates the unique individual identifier.

task.id

Quoted variable name in data that represents the unique task identifier. In the ORF assessment context, it is the passage identifier.

sub.task.id

Quoted variable name in data that indicates the unique sub-task identifier. In the ORF assessment context, it is the sentence identifier. It is required when testlet is TRUE.

occasion

Quoted variable name in data that indicates the unique occasion.

group

Quoted variable name in data that indicates the unique group.

max.counts

Quoted variable name in data that represents the number of attempts in the task. In the ORF assessment context, it is the number of words in the passage.

obs.counts

Quoted variable name in data that represents the number of words read correctly for each case.

time

Quoted variable name in data that represents the time, in seconds, for each case.

cens

Quoted variable name in data that contains the censoring indicator: 1 if a given task or sub-task was censored, 0 if it was fully observed. This column is required when the censoring argument is TRUE.

cases

A vector of individual IDs for which scoring is desired. If not specified, scores are estimated for all cases in the data.

est

Quoted string, indicating the choice of estimator. It has to be one of "mle", "map", "eap", or "bayes". Default is "map".

se

Quoted string, indicating the choice of standard errors. It has to be one of "analytic" or "bootstrap". Default is "analytic". See the bootstrap example in the Examples section.

failsafe

Numeric, indicating the number of retries for failed bootstrap iterations, which can be set between 0 and 50. Default is 0.

bootstrap

Numeric, indicating the number of bootstrap iterations. Default is 100.

external

An optional vector of task IDs, as strings. If NULL (default), the wcpm scores are derived from the tasks the individuals were assigned to. If not NULL, wcpm scores are derived from the tasks provided in the vector rather than the tasks the individuals were assigned; see the external-passage example in the Examples section.

type

Quoted string, indicating the type of output. If "general" (default), wcpm scores are not reported. If "orf", wcpm scores are reported.

censoring

Logical. If TRUE, task- or sub-task-level censoring is applied; see the censoring example in the Examples section. Default is FALSE.

testlet

Logical. If TRUE, the model is run at the sub-task level; otherwise, at the task level. This argument is required when censoring is TRUE. Default is FALSE.

Value

A scoring list, a bootstrap dataset, or a censoring list, depending on the estimation options.

References

Qiao, X., Potgieter, N., & Kamata, A. (2023). Likelihood estimation of model-based oral reading fluency. Manuscript submitted for publication.

Kara, Y., Kamata, A., Potgieter, C., & Nese, J. F. (2020). Estimating model-based oral reading fluency: A Bayesian approach with a binomial-lognormal joint latent model. Educational and Psychological Measurement, 1–25.

See also

fit.model for model parameter estimation.

Examples

# Example: MAP scoring with analytic standard errors. `MCEM_run` is the
# calibration output from a prior fit.model() run, and `passage2` is a
# passage-level accuracy and speed data set.
WCPM_all <- scoring(calib.data = MCEM_run,
                    data = passage2,
                    person.id = "id.student",
                    occasion = "occasion",
                    group = "grade",
                    task.id = "id.passage",
                    max.counts = "numwords.pass",
                    obs.counts = "wrc",
                    time = "sec",
                    est = "map",
                    se = "analytic",
                    type = "general")
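
# A hedged sketch, not taken from the package documentation: bootstrap
# standard errors with retries for failed iterations, reusing the same
# `MCEM_run` and `passage2` objects as the example above.
WCPM_boot <- scoring(calib.data = MCEM_run,
                     data = passage2,
                     person.id = "id.student",
                     task.id = "id.passage",
                     max.counts = "numwords.pass",
                     obs.counts = "wrc",
                     time = "sec",
                     est = "map",
                     se = "bootstrap",
                     bootstrap = 200,
                     failsafe = 10)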
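
# A hedged sketch: wcpm scores (type = "orf") derived from a fixed set of
# external passages for a subset of students. The student and passage IDs
# below are illustrative placeholders, not values from the package data.
WCPM_ext <- scoring(calib.data = MCEM_run,
                    data = passage2,
                    person.id = "id.student",
                    task.id = "id.passage",
                    max.counts = "numwords.pass",
                    obs.counts = "wrc",
                    time = "sec",
                    cases = c("2033", "2043"),
                    external = c("32004", "32010"),
                    type = "orf")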
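
# A hedged sketch: sentence-level (testlet) scoring with censoring. This
# assumes the data carry a sentence identifier and a 0/1 censoring
# indicator; the column names `id.sentence` and `cens` are illustrative.
WCPM_cens <- scoring(calib.data = MCEM_run,
                     data = passage2,
                     person.id = "id.student",
                     task.id = "id.passage",
                     sub.task.id = "id.sentence",
                     max.counts = "numwords.pass",
                     obs.counts = "wrc",
                     time = "sec",
                     cens = "cens",
                     censoring = TRUE,
                     testlet = TRUE)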