5b. The probability to pass the item

Psychometric item response models rest on the notion of the probability of passing an item. The present note investigates this notion from an epistemological perspective, in contrast to the usual psychometric (i.e., statistical) perspective. Taking an epistemological perspective consists in asking (i) how one can obtain knowledge based on the notion of probability, and (ii) whether this knowledge is falsifiable. The note concludes that an item response model offers no falsifiable knowledge about the quantitative differences the model invokes. If item responses were (ordinal) measurements, probabilistic item response models would be unnecessary. And once one is aware that these models are not measurement models, their usefulness remains ambiguous and should be spelled out (Vautier, Veldhuis, Lacot, & Matton, 2012; Ziegler & Vautier, 2014). I would appreciate criticisms of my analysis.

1. The truth value of the statement “the probability that Paul passes this item now is 0.8”

How can one know whether this statement is true or false? Let us consider any value p in ]0, 1[; the statement “the probability that Paul passes this item at this moment is p” is unfalsifiable because it entails “Paul will fail or pass”, which is tautological. Consequently, the answer to the question is: I don’t know (cf. Vautier, 2012).

If one assigns the value 0 or 1 to p, then at least one of the two statements is false: if Paul passes the item while p = 0, the statement is false; the same holds if Paul fails the item while p = 1.

Not so simple? Let us try again and consider the statement “the probability that Paul passes this item at this moment is 10⁻²⁰”. A minuscule probability, which indicates that the possibility of passing is exceedingly small and that, as a friend of mine whose psychometric scholarship is vast would say, one may reasonably believe that Paul will fail the item. In other words, it would be surprising if Paul passed the item. And one would not bet that Paul will pass the item if one trusts those who claim such a probability. But there are a lot of players in the gambling market, even though games of chance are designed in such a way that the probability of winning is minuscule. The point is not scientific but philosophical: given that one does not know what governs Paul’s success, everyone is free to make his own divination.

Let us move back to the premise. How can one know the probability’s value? By deduction, but starting from what premises? Help, Mister Valéry! By the use of relevant empirical knowledge? Only frequencies can be known, which deserves further elaboration.

2. Heads or tails, and the Binomial distribution: it works… more or less

“The probability of tails when I flip the coin is 0.1”. Reading this statement literally, one has no available empirical knowledge, since the coin has not yet been tossed. In other terms, one can only state one’s ignorance; the surplus meaning of the probabilistic statement bears not on the coin’s future behaviour but on the intention of the person who makes the probability statement (for example, “I would not bet on tails, so follow my example”).

Taking an empiricist point of view, one can invoke a reference class, that is, a set of distinct draws of the same coin, which are postulated to obey the same probability. Suppose the coin has been tossed n = 40 times and that one knows k, the number of tails. Can these data be used to measure the probability, or to test its value?

Let us begin with the statistical test. If the number k of tails is high, one will be tempted to question the probability’s value (p = 0.1), since tails is by hypothesis very unlikely. But one should be aware that this feeling of doubt or surprise is logically invalid. The probability p = 0.1 precludes nothing: one may observe 0 tails, 1 tail, 2 tails, and so on up to 40 tails in the series of 40 draws. The Binomial distribution B(40, 0.1) indicates that the probability of observing 40 tails, for example, is 10⁻⁴⁰, a very small but non-null number.
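To make the point concrete, here is a minimal numerical check (my illustration, not part of the original argument), assuming Python 3.8+ for math.comb: under B(40, 0.1), no count of tails is assigned a null probability.

```python
# Check that the hypothesis p = 0.1 precludes no outcome in 0..40 tails.
from math import comb

n, p = 40, 0.1

def binom_pmf(k: int, n: int, p: float) -> float:
    """Probability of exactly k tails in n independent tosses."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# P(40 tails) is tiny but not zero: 0.1**40 = 1e-40.
print(binom_pmf(40, n, p))
# Every count of tails from 0 to 40 has strictly positive probability.
print(all(binom_pmf(k, n, p) > 0 for k in range(n + 1)))  # True
```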

Here is the pragmatics of null hypothesis testing in statistics: one chooses a type 1 error value (i.e., the risk of rejecting the hypothesis while it is true) of 0.05 for example, which corresponds to a threshold of at least eight tails. If the series exhibits at least eight tails, the hypothesis p = 0.1 will be rejected. Warning: in this approach, rejection is not falsification, because the hypothesis p = 0.1 is unfalsifiable: its consequences on the series of 40 draws preclude no logically possible event.
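For readers who want to see where such a threshold comes from, here is a minimal sketch (my illustration; the variable names are mine) that searches for the smallest rejection threshold whose tail probability under B(40, 0.1) does not exceed 0.05:

```python
# Find the smallest c such that P(X >= c) <= 0.05 under X ~ B(40, 0.1).
from math import comb

n, p, alpha = 40, 0.1, 0.05
pmf = [comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)]

for c in range(n + 1):
    tail = sum(pmf[c:])        # P(X >= c)
    if tail <= alpha:
        print(c, tail)         # 8, ~0.042
        break
```

The printed threshold is 8, with an actual type 1 error of about 0.042; the conventional 0.05 is only an upper bound on the risk actually incurred.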

To sum up: the statement “the probability of tails when I toss the coin is 0.1” is not statistically testable, because the reference class of this probability contains zero elements before the draw, and only one element once the coin has been tossed. The number 0.1 can be used as an index (and not a measurement) of the trust one can place in the divination “tails”, but, like promises in politics, the divination commits only those who believe in it.

Using a reference class of size n, one can statistically test the probability’s value by counting the relevant events in the series of n draws, once a rejection policy has been adopted. But the test is logically invalid, since any number of tails, from 0 to n, is compatible with the hypothesis. Thus, if the divination “tails” does not succeed often enough, one has a conventional means to reject the expertise of those who stated the probability. This has little practical consequence when one considers respondents’ item responses, because the experts in probability are not there; psychometricians are smart enough not to risk themselves in practical settings (see Rasch’s remark): they let the users of scoring models… use the scores.

Now let us consider the former approach, where one wants to measure the probability p by using the Binomial distribution B(40, p) and k. One refers to the probability C(n, k)·p^k·(1 − p)^(n−k) of k hits in a series of n draws (see any article on the Binomial distribution for details on the formula), and one seeks the value of p that maximizes this probability. But there is a problem: if one manages to observe a new count, say k′, one may find another value, and hence another measurement of p, by maximizing C(n, k′)·p^k′·(1 − p)^(n−k′). Furthermore, one can only bracket p by accumulating observations, instead of determining its exact value from a unique k. The question arises whether the bracketing interval can be more accurate than ]0, 1[.
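The maximization just described can be sketched as follows (a grid search of mine, for illustration only; the closed-form maximizer is k/n, so each new k yields a new estimate):

```python
# For each observed k, grid-search the p maximizing C(n,k)·p^k·(1-p)^(n-k).
from math import comb

n = 40

def likelihood(p: float, k: int) -> float:
    """Binomial probability of k hits out of n draws under p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

def ml_estimate(k: int, grid_size: int = 10_000) -> float:
    """Grid search for the maximizing p (the closed form is k/n)."""
    grid = [i / grid_size for i in range(grid_size + 1)]
    return max(grid, key=lambda p: likelihood(p, k))

print(ml_estimate(4))   # 0.1  -- k = 4 tails out of 40
print(ml_estimate(6))   # 0.15 -- another k, another "measurement"
print(ml_estimate(0), ml_estimate(40))  # 0.0 and 1.0, the extreme cases
```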

Let us reason by the limiting case. If the approximation interval could be restricted, some intervals bounded on the left by 0 or on the right by 1 would be excluded; but the value of p that maximizes C(n, k)·p^k·(1 − p)^(n−k) when k = 0 is p = 0, and the value that maximizes it when k = 40 is p = 1. As any probability lying in ]0, 1[ is compatible with k = 0 as well as with k = n, one cannot exclude that the estimation yields the values 0 or 1. And as the possibility of a counter-example precludes these extreme values, it follows that the maximized probability varies in ]0, 1[ depending on k. Consequently, one does not know how to measure the probability p, for strictly logical reasons (see also Section 3 of note #4). Could anyone object to this argument?
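For completeness, here is the standard derivation behind the limiting-case argument, written in LaTeX (the notation is mine):

```latex
% Log-likelihood of p given k hits out of n draws:
% \ln L(p) = \ln\binom{n}{k} + k \ln p + (n - k)\ln(1 - p).
\frac{d}{dp}\,\ln L(p) = \frac{k}{p} - \frac{n-k}{1-p} = 0
\quad\Longrightarrow\quad \hat{p} = \frac{k}{n}.
% Hence k = 0 gives \hat{p} = 0 and k = n gives \hat{p} = 1,
% the extreme values discussed above.
```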

In any case, estimating a probability by maximum likelihood requires the availability of a large reference class, which stumbles over a second, serious objection.

3. Passing or failing the item, and the Binomial distribution: it does not work

Let us go back to the field of scientific psychology: what reference class is available when one speaks of the probability that Paul passes the item now? It is the empty set as long as Paul has not treated the item, and a singleton once he has treated it. If, by a thought experiment, one asks Paul to treat the item 40 times in order to use the number of hits to obtain a (totally inaccurate) measurement of his probability of passing this item at any moment, one realizes that there is a theoretical, and fatal, impossibility: how can one assume that the probability that Paul passes the item at the first trial is the same as the probability that he passes the same item at the nth trial? Paul is learning, i.e., he changes with experience. This argument seems to me sufficient to conclude that, in the psychology of item responses, one cannot use the notion of a reference class as an empirical basis for a probabilistic theorization of anyone’s response to any item.

4. The psychometric attempt

The preceding discussion may seem not very relevant to psychometricians, because they are essentially interested in the problem of how to assign a number to an m-tuple of responses. The notion of probability plays no theoretical role but an instrumental, or computational, role. But this role takes on a meta-theoretical function: since one is scoring, comparability appears to be warranted.

The main idea of an item response model consists in replacing the measurement function, which models the item response with probabilities 0 or 1, by a psychometric function, which models the probability of the item response under the assumption that this probability varies in ]0, 1[. To give an idea of what is at stake, I will refer to the Rasch model and build on Mark H. Moulton’s Rasch Estimation Demonstration Spreadsheet. As my purpose is not technical but epistemological, I will not draw the reader into the maze of the computations but rather into the analysis of the scientific meaning of the results. The objective is to understand that probabilization enables one to find numerical values by successive trials, in such a way that these values necessarily provide, or converge toward, the best possible solution. The question is: how is the best solution defined? The best solution is the one that minimizes a so-called residual quantity. The beauty of the art of statistical estimation rests on the existence of mathematical properties like the uniqueness of the solution that minimizes the residual. This uniqueness is a criterion for the social acceptability of the best scoring, since one wants to score.

The starting point of the estimation process is a list of m-tuples, as illustrated in Moulton’s spreadsheet. It can be seen that the 10-tuples are not simply ordered, which means that no measurement function exists (cf. note #3). Nevertheless, one wants to assign a numerical value to the items’ thresholds and to the 10-tuples, while allowing for random ordinal defects. The random factor is modeled by the formula

Pr(Xsi = 1) = exp(βs − δi)/[1 + exp(βs − δi)],

where Xsi is a binary random variable whose reference class is the virtual population of trials of the person s treating the item i, βs is the value to be assigned to s, and δi is the threshold of i (in the set of the real numbers instead of [0, max]). This formula is equivalent to the logit form

ln[Pr(Xsi = 1)/(1 − Pr(Xsi = 1))] = βs − δi,

that defines the difference between the value of s and the item’s threshold.
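A minimal sketch of these two formulas, assuming nothing beyond the definitions above (the function names are mine):

```python
from math import exp, log

def rasch_probability(beta: float, delta: float) -> float:
    """Pr(Xsi = 1) = exp(βs − δi) / [1 + exp(βs − δi)]."""
    return exp(beta - delta) / (1 + exp(beta - delta))

def logit(p: float) -> float:
    """ln[p / (1 − p)], which recovers βs − δi from the probability."""
    return log(p / (1 - p))

p = rasch_probability(beta=1.0, delta=0.5)
print(p)         # ~0.622
print(logit(p))  # 0.5, i.e., βs − δi
```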

This theoretical framework is nearly the same as the measurement framework, except that the domain of the quantity to be estimated/measured is the set of the real numbers. However, in contrast to the scientific approach, which recognizes that the quantity may not be measurable by the m-tuples if the measurement function is false, the psychometric approach assumes measurability by supplementing the theoretical framework with a probabilistic framework, the added value of which is computational, while recognizing that this framework is false (see for example Embretson and Reise’s remark; this is not fatal, since it suffices that probabilities be useful for the scoring purpose).

Let us follow Moulton. The 10-tuples allow one to compute the proportion of hits for each of the nine respondents and the proportion of incorrect responses for each of the 10 items, and these proportions are transformed into logits (the items’ logits being centered). The key to the estimation process is the definition of the residual quantity for each cell of the contingency table (9 × 10). The respondents’ logits and the items’ centered logits are used as estimates of the βs and the δi, which allows one to estimate each respondent’s probability of passing each item (the probabilities are the mathematical expectations of the Xsi). The residual is the difference between the observed value (1 for success) and the expected value thus computed.
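To fix ideas, here is a minimal sketch of this first step, assuming a small illustrative binary response matrix of my own (not Moulton’s 9 × 10 data) and Python’s standard library:

```python
from math import exp, log

# Illustrative binary responses: rows = respondents, columns = items.
data = [
    [1, 1, 1, 0, 0],
    [1, 1, 0, 1, 0],
    [0, 1, 1, 0, 0],
    [1, 0, 1, 1, 1],
]
n_persons, n_items = len(data), len(data[0])

def logit(p: float) -> float:
    """ln[p / (1 - p)]."""
    return log(p / (1 - p))

# Respondents' logits from their proportions of hits.
betas = [logit(sum(row) / n_items) for row in data]

# Items' logits from their proportions of incorrect responses, centered.
raw = [logit(1 - sum(row[i] for row in data) / n_persons)
       for i in range(n_items)]
deltas = [d - sum(raw) / n_items for d in raw]

# Residuals: observed response minus expected probability, per cell.
residuals = [[data[s][i]
              - exp(betas[s] - deltas[i]) / (1 + exp(betas[s] - deltas[i]))
              for i in range(n_items)]
             for s in range(n_persons)]
```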

Remark. The expectation of each Xsi is defined only if the variable’s values are treated as additive numbers. But 0 and 1 are just practical codes for “incorrect” and “correct”. The variable Xsi is not a numerical random variable; at best, it is a qualitative random variable if one admits the existence of the probability distribution that is associated with the observed response. The variable Xsi has no mathematical expectation.

Then one adjusts the estimated logits in such a way that the residuals decrease, and so on until the gains become negligible. Indeed, this is great art, and the algorithmicians of statistical optimization can only be admired. Finally, one obtains nine values that order the respondents on a numerical scale, together with the 10 items’ thresholds.
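For completeness, here is a self-contained sketch of the iterative adjustment, in the spirit of joint maximum likelihood estimation; it is one simple variant among several, with illustrative data and variable names of my own:

```python
from math import exp, log

# Illustrative binary responses: rows = respondents, columns = items.
data = [
    [1, 1, 1, 0, 0],
    [1, 1, 0, 1, 0],
    [0, 1, 1, 0, 0],
    [1, 0, 1, 1, 1],
]
n_persons, n_items = len(data), len(data[0])

def prob(beta: float, delta: float) -> float:
    """Rasch probability of a correct response."""
    return exp(beta - delta) / (1 + exp(beta - delta))

# Start from the logits of the observed proportions of hits.
betas = [log((sum(r) / n_items) / (1 - sum(r) / n_items)) for r in data]
deltas = [0.0] * n_items

for _ in range(100):
    max_step = 0.0
    # Newton-Raphson updates: summed residuals over summed variances p(1-p).
    for s in range(n_persons):
        ps = [prob(betas[s], deltas[i]) for i in range(n_items)]
        step = (sum(data[s][i] - ps[i] for i in range(n_items))
                / sum(p * (1 - p) for p in ps))
        betas[s] += step
        max_step = max(max_step, abs(step))
    for i in range(n_items):
        pi = [prob(betas[s], deltas[i]) for s in range(n_persons)]
        step = (-sum(data[s][i] - pi[s] for s in range(n_persons))
                / sum(p * (1 - p) for p in pi))
        deltas[i] += step
        max_step = max(max_step, abs(step))
    # Keep the item thresholds centered, as in the text.
    mean_delta = sum(deltas) / n_items
    deltas = [d - mean_delta for d in deltas]
    if max_step < 1e-6:  # stop when the gains become negligible
        break

print(betas)   # the respondents' estimated values
print(deltas)  # the items' centered thresholds
```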

But the beauty of the picture should not mask the initial fact: the probability that any one of the nine respondents passes any one of the 10 items is a value in ]0, 1[. Even if, for pragmatic reasons, one accepts the unfalsifiability of the statements that can be derived from it, this probability is not statistically testable, because its reference class is a singleton if the item response is known, and an empty set if it is not.

The psychometric estimation gives the best possible values to the observed 10-tuples, that is, the best hierarchy, along with the socio-technical innovation that consists in making the experts of the optimization algorithm the only persons who know exactly how to define what is best. The Rasch model, and more generally the psychometric item response models, are sophisticated methods for the numerical aggregation of partially ordered data, not measurement methods. Their development is possible precisely because the invoked quantities are not measurable.

