Many clinician researchers are attempting to "repurpose" old treatments for COVID-19. How should we evaluate purported positive findings from a small, but rigorous, clinical trial?
Published research from respectable journals, reported by renowned press outlets, can be misleading and of questionable importance. It does, however, keep funding flowing to the researchers and readership to the news media.
Drug trials in Alzheimer's disease have seen many failures, and various companies have had mixed results. Bayesian approaches can bring clarity to the inference and to the primary question: "Does this treatment work?"
Implicit models in the back of our minds can creep into explicit models, creating biased predictions with societal implications.
If we fail to acknowledge that biases and assumptions influence our assessment of 'objective facts,' we delude ourselves. Our perception of reality, and how we judge evidence, is colored by beliefs that arise from our particular experiences.
The probability that the null hypothesis is true is 0.50. How should we interpret that and then write it down mathematically?
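One way to write it down: a prior of P(H0) = 0.50 puts the null and the alternative on equal footing, so posterior beliefs are driven entirely by the Bayes factor. A minimal sketch of that arithmetic, where the Bayes factor values are hypothetical and chosen purely for illustration:

```python
def posterior_prob_null(bf01: float, prior_null: float = 0.5) -> float:
    """Posterior P(H0 | data) given a Bayes factor BF01 = P(data|H0)/P(data|H1)
    and a prior probability for the null (0.5 by default, i.e. prior odds of 1)."""
    prior_odds = prior_null / (1.0 - prior_null)
    posterior_odds = bf01 * prior_odds
    return posterior_odds / (1.0 + posterior_odds)

# With prior odds of 1, the posterior simplifies to BF01 / (1 + BF01):
print(posterior_prob_null(bf01=1.0))   # equal evidence leaves P(H0) at 0.5
print(posterior_prob_null(bf01=0.25))  # data favoring H1 by 4:1 pulls P(H0) to 0.2
```

The point of starting at 0.50 is transparency: any movement away from it is attributable to the data, not to a prior tilted toward one hypothesis.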
You may have heard, “Always do subgroup analyses, but never believe them.” Don't believe this.
The over-reliance on p-values can lead to misinterpretation of data and a $150 million bet on a subgroup with scant evidence.
How do we know whether an observed effect is real or spurious?
Some people say, "A p-value of 0.05 is not very much evidence against the null hypothesis." Well then, how much evidence is it?
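One widely cited answer is the Sellke–Bayarri–Berger lower bound: for p < 1/e, the Bayes factor in favor of the null is at least -e · p · ln(p). A minimal sketch of the calculation:

```python
import math

def min_bayes_factor(p: float) -> float:
    """Sellke-Bayarri-Berger lower bound on the Bayes factor in favor of the
    null hypothesis, -e * p * ln(p), valid for 0 < p < 1/e."""
    if not 0.0 < p < 1.0 / math.e:
        raise ValueError("bound applies only for 0 < p < 1/e")
    return -math.e * p * math.log(p)

bf = min_bayes_factor(0.05)
print(round(bf, 3))  # ~0.407
```

A bound of roughly 0.41 means p = 0.05 can shift the odds against the null by at most about 2.5 to 1, far weaker than the "1 in 20" intuition the threshold invites.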