
What we measure is what we hear

QUOTE (Kohlrabi @ Jan 3 2013, 01:13)
QUOTE (Nessuno @ Jan 2 2013, 21:12)
QUOTE (pdq @ Jan 2 2013, 20:28)
However, a failed ABX test does not prove that the sample was transparent to the tester, only that (s)he was unable to show, statistically speaking, that it was not.

Right, by definition transparency cannot be proved in absolute terms, only statistically. But if a tester fails in a statistically relevant number of runs, then, well, let's say there is a low probability that tomorrow he could just as well succeed in a statistically relevant number of runs, all conditions being equal. May we not, then, draw the conclusion that this sample is transparent to him/her?
To put what pdq said in other words: the reason a failed ABX test doesn't prove anything is that proving transparency is not the intention of the ABX test in the first place. You conduct the test to reject the null hypothesis that the two samples sound the same, and that can only be achieved by a successful test.
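
To make that framing concrete, here is a minimal sketch of the usual ABX arithmetic (plain Python; the 16-trial run and the 0.05 threshold are illustrative assumptions, not numbers from this thread). It computes the one-sided binomial p-value: the probability of scoring at least that well by guessing alone.

CODE
from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    """One-sided p-value: chance of getting at least `correct`
    answers out of `trials` by pure guessing (p = 0.5 per trial)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# Reject the null hypothesis ("the samples sound the same")
# when the p-value falls below the chosen threshold, e.g. 0.05.
print(abx_p_value(12, 16))  # ~0.038 -> significant
print(abx_p_value(10, 16))  # ~0.227 -> not significant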

Strictly speaking, a successful test doesn't prove anything either: a tester can, in theory, guess the correct answer every time without even listening. Of course, statistics as well as real-life experience lead us to consider this a very unlikely thing to happen, so we accept a successful test as proof. This is also the reason why a single run cannot prove anything either way.
Now, if a tester fails in a statistically relevant number of runs, etc. (as per above), what could it possibly mean, according to statistics and real-life experience? Would you still accept the hypothesis that the two samples sound different to him?
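
As a rough illustration of both points (a sketch under assumed numbers, not data from this thread): guessing an entire short run correctly is already very unlikely, and a listener who can genuinely tell the samples apart some fraction of the time may still fail a single short run fairly often, yet the probability of failing many independent runs in a row shrinks quickly. That is why repeated failures are informative.

CODE
from math import comb

def p_pass(p: float, trials: int = 16, criterion: int = 12) -> float:
    """Chance that a listener with per-trial success rate p
    scores at least `criterion` correct out of `trials`."""
    return sum(comb(trials, k) * p**k * (1 - p)**(trials - k)
               for k in range(criterion, trials + 1))

# Guessing every answer of a single 16-trial run: (1/2)**16
print(0.5 ** 16)        # ~1.5e-05, why we accept a clear pass as proof

# A listener who hears the difference 75% of the time:
q = 1 - p_pass(0.75)    # probability of failing one 12/16 run (~0.37)
print(q, q ** 5)        # failing five runs in a row: ~0.007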

Edit: yep, I think the whole sense of what has been said is clear, so I'll stop nitpicking too... ;)
