taking the statistical competence (and/or the intellectual honesty) of authors for granted can be misleading....
Ten ways to cheat on statistical tests when writing up results
Throw all your data into a computer and report as significant any relation where P<0.05
If baseline differences between the groups favour the intervention group, remember not to adjust for them
Do not test your data to see if they are normally distributed. If you do, you might get stuck with non-parametric tests, which aren't as much fun
Ignore all withdrawals (drop outs) and non-responders, so the analysis only concerns subjects who fully complied with treatment
Always assume that you can plot one set of data against another and calculate an "r value" (Pearson correlation coefficient), and assume that a "significant" r value proves causation
If outliers (points which lie a long way from the others on your graph) are messing up your calculations, just rub them out. But if outliers are helping your case, even if they seem to be spurious results, leave them in
If the confidence intervals of your result overlap zero difference between the groups, leave them out of your report. Better still, mention them briefly in the text but don't draw them in on the graph—and ignore them when drawing your conclusions
If the difference between two groups becomes significant four and a half months into a six month trial, stop the trial and start writing up. Alternatively, if at six months the results are "nearly significant," extend the trial for another three weeks
If your results prove uninteresting, ask the computer to go back and see if any particular subgroups behaved differently. You might find that your intervention worked after all in Chinese women aged 52-61
If analysing your data the way you plan to does not give the result you wanted, run the figures through a selection of other tests
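The first and last tricks above (fishing for any P<0.05, and rerunning the figures through other tests or subgroups) are both forms of the multiple-comparisons problem. A minimal pure-Python sketch, with made-up simulation settings (20 outcomes per study, 30 subjects per arm), shows why a study that tests many null relationships almost always finds something "significant":

```python
import math
import random

def two_sample_p(x, y):
    """Two-sided p-value for a difference in means using a normal
    approximation (reasonable for samples of about 30 or more)."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    vx = sum((v - mx) ** 2 for v in x) / (nx - 1)
    vy = sum((v - my) ** 2 for v in y) / (ny - 1)
    z = (mx - my) / math.sqrt(vx / nx + vy / ny)
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

random.seed(1)
studies, outcomes, false_positive_studies = 2000, 20, 0
for _ in range(studies):
    # 20 outcomes per simulated study, all pure noise: no real effect exists
    if any(two_sample_p([random.gauss(0, 1) for _ in range(30)],
                        [random.gauss(0, 1) for _ in range(30)]) < 0.05
           for _ in range(outcomes)):
        false_positive_studies += 1

frac = false_positive_studies / studies
# theory: 1 - 0.95**20, i.e. roughly 64% of null studies "find" something
print(f"Null studies reporting at least one P<0.05 result: {frac:.0%}")
```

Roughly two thirds of these entirely null studies report at least one "significant" finding, which is why honest analyses pre-specify their primary outcome and correct for multiple testing.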
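The outlier trick in the list can be made concrete. A minimal pure-Python sketch with invented illustrative numbers: ten points with essentially no relationship, plus one spurious point far from the cloud, are enough to manufacture a "strong" Pearson correlation:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

# Ten points with essentially no relationship between x and y...
xs = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
ys = [5, 3, 6, 2, 7, 4, 6, 3, 5, 4]
r_without = pearson_r(xs, ys)             # close to zero

# ...plus one spurious outlier far from the cloud
r_with = pearson_r(xs + [30], ys + [30])  # suddenly a "strong" correlation

print(f"r without outlier: {r_without:+.2f}, with outlier: {r_with:+.2f}")
```

This is why keeping helpful outliers and "rubbing out" unhelpful ones changes conclusions so dramatically, and why a scatter plot, not just an r value, should always be inspected before claiming any association, let alone causation.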
maria - Number of posts: 5
Registration date: 16/10/2009