QUOTE (Kohlrabi @ Jan 3 2013, 01:13)
QUOTE (Nessuno @ Jan 2 2013, 21:12)
QUOTE (pdq @ Jan 2 2013, 20:28)
However, a failed ABX test does not prove that the sample was transparent to the tester, only that (s)he was unable to show, statistically speaking, that it was not.
Right, by definition transparency cannot be proved in absolute terms, only statistically. But if a tester fails in a statistically relevant number of runs, then, well, there is only a low probability that tomorrow he could just as well succeed in a statistically relevant number of runs, all conditions being equal. May we not, then, draw the conclusion that this sample is transparent to him/her?
Stricto sensu, a successful test doesn't prove anything either: a tester can, in theory, guess the correct answer every time without even listening. Of course statistics, as well as real-life experience, lead us to consider this a very unlikely thing to happen, so we accept a successful test as proof. This is also the reason why a single run cannot prove anything either way.
Now, if a tester fails in a statistically relevant number of runs, etc. etc. (as per above), what could it possibly mean, according to statistics and real-life experience? Will you still accept the hypothesis that the two samples sound different to him?
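To put rough numbers on "very unlikely", here is a minimal sketch (my own illustration, not from anyone's post) of the binomial arithmetic behind an ABX score; the function name and the 16-trial figures are just an example:

CODE
# Under the null hypothesis "the tester is only guessing", each ABX trial
# is a fair coin flip, so the chance of getting k or more correct out of
# n trials is the upper tail of a Binomial(n, 0.5) distribution.
from math import comb

def p_value(correct: int, trials: int) -> float:
    """Probability of scoring `correct` or better out of `trials` by pure guessing."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# e.g. 14/16 correct: p is about 0.002, so "pure luck" is a very unlikely explanation;
# 9/16 correct: p is about 0.40, which is entirely consistent with guessing.
print(p_value(14, 16))  # ~0.0021
print(p_value(9, 16))   # ~0.40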
Edit: yep, I think the whole sense of what has been said is clear, so I'll stop nitpicking too...
