Wednesday, November 30, 2022

COVID Deaths

From The Reactionary:

The latest data shows that 58% of COVID-19 deaths in August 2022 were among people who were vaccinated or boosted. Based on past figures and current trends, we can reasonably expect the vaccinated/boosted share of COVID-19 deaths to keep rising. (In September 2021, the vaccinated accounted for 23% of COVID-19 deaths; in January/February 2022, the figure was 42%.)

This is what happens when you rush ineffective and dangerous vaccines.

The FDA’s promises of efficacy – 91% for the Pfizer vaccine and 93% for the Moderna vaccine – were always based on hope, not data. So too were the promises of safety. At the time of the official approvals, neither Pfizer nor Moderna had submitted any long-term effectiveness data. Their trials were polluted by the unblinding of participants, and their safety studies remain “ongoing.”

Now, we’re seeing efficacy numbers plummet within months of vaccination. The pandemic is of the vaccinated. The boosters? They’re to the benefit of the medical establishment and the pharmaceutical companies, as they mask the true problems with the two-shot vaccines. (Read more.)
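(An aside on the arithmetic: a headline figure like “91% efficacy” is a relative risk reduction computed from case counts in a trial’s two arms. Below is a minimal sketch in Python of that computation; the counts are hypothetical, chosen only to illustrate the formula, and are not the actual Pfizer or Moderna trial data.)

    # Trial efficacy as relative risk reduction:
    # efficacy = 1 - (attack rate, vaccinated arm) / (attack rate, placebo arm).
    # All counts below are hypothetical illustrations, not real trial data.

    def vaccine_efficacy(cases_vax, n_vax, cases_placebo, n_placebo):
        """1 minus the relative risk of the vaccinated arm vs. the placebo arm."""
        return 1 - (cases_vax / n_vax) / (cases_placebo / n_placebo)

    # Hypothetical arms of 20,000 participants each:
    print(f"{vaccine_efficacy(9, 20_000, 100, 20_000):.0%}")  # prints 91%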


Evidence vs. plausibility. From The Brownstone Institute:

There is perhaps no bigger plausibility sham today than “evidence-based medicine” (EBM). This term was coined by Gordon Guyatt in 1990, after his first attempt, “Scientific Medicine,” failed to gain acceptance the previous year. As a university epidemiologist in 1991, I was insulted by the hubris and ignorance in the use of this term, EBM, as if medical evidence were somehow “unscientific” until proclaimed a new discipline with new rules for evidence. I was not alone in criticism of EBM (Sackett et al., 1996), though much of that negative response seems to have been based on loss of narrative control rather than on objective review of what medical research had actually accomplished without “EBM.”

Western medical knowledge has accreted for thousands of years. In the Hebrew Bible (Exodus 21:19), “When two parties quarrel and one strikes the other … the victim shall be made thoroughly healed” [my translation], which implies that people with some form of medical knowledge existed and that their treatments had some degree of efficacy. Hippocrates, in the fifth to fourth century BCE, suggested that disease development might not be random but related to environmental exposures or to certain behaviors. In that era, there were plenty of practices that today we would consider counterexamples to good medical practice. Nevertheless, it was a start toward thinking about rational evidence for medical knowledge.

James Lind (1716-1794) advocated eating citrus to protect against scurvy. The treatment was known to the ancients and had been recommended earlier by the English military surgeon John Woodall (1570-1643), but Woodall was ignored. Lind gets the credit because in 1747 he carried out a small but successful nonrandomized controlled trial of oranges and lemons vs. other substances among 12 scurvy patients.

During the 1800s, Edward Jenner’s use of cowpox as a smallpox vaccine was elaborated by culturing in other animals and put into general use in outbreaks, so that by the time of the 1905 Supreme Court case Jacobson v. Massachusetts, the Court could assert that medical authorities agreed smallpox vaccination was a commonly accepted procedure. Medical journals also began regular publication in the 1800s; the Lancet, for example, began publishing in 1824. Accumulating medical knowledge started to be shared and debated more generally and widely.

Fast-forward to the 1900s. In 1914-15, Joseph Goldberger (1915) carried out a nonrandomized dietary intervention trial that concluded that pellagra was caused by a lack of dietary protein. In the 1920s, vaccines for diphtheria, pertussis, tuberculosis and tetanus were developed. Insulin was extracted. Vitamins were identified and put to use, including vitamin D for the prevention of rickets. In the 1930s, antibiotics began to be created and used effectively. In the 1940s, acetaminophen was developed, as were chemotherapies, and conjugated estrogen began to be used to treat menopausal hot flashes. Effective new medications, vaccines and medical devices grew exponentially in number in the 1950s and 1960s. All without EBM.

In 1996, responding to criticisms of EBM, David Sackett et al. (1996) attempted to explain its overall principles. Sackett asserted that EBM rests on the principle that “Good doctors use both individual clinical expertise and the best available external evidence.” This is an anodyne plausibility claim, but both components are basically wrong, or at least misleading. By phrasing the definition in terms of what individual doctors should do, Sackett implied that individual practitioners should rely on their own clinical observations and experience. However, the evidential representativeness of one individual’s clinical experience is likely to be weak. Like other forms of evidence, clinical evidence needs to be systematically collected, reviewed, and analyzed to form a synthesis of clinical reasoning, which would then provide the clinical component of scientific medical evidence.

A bigger failure of evidential reasoning is Sackett’s statement that one should use “the best available external evidence” rather than all valid external evidence. Judgments about what constitutes “best” evidence are highly subjective and do not necessarily yield overall results that are quantitatively the most accurate and precise (Hartling et al., 2013; Bae, 2016). In formulating his now-canonical “aspects” of evidential causal reasoning, Sir Austin Bradford Hill (1965) did not include an aspect for what would constitute “best” evidence, nor did he suggest that studies should be measured or categorized for “quality of study,” nor even that some types of study designs might be intrinsically better than others. In the Reference Manual on Scientific Evidence, Margaret Berger (2011) states explicitly, “… many of the most well-respected and prestigious scientific bodies (such as the International Agency for Research on Cancer (IARC), the Institute of Medicine, the National Research Council, and the National Institute for Environmental Health Sciences) consider all the relevant available scientific evidence, taken as a whole, to determine which conclusion or hypothesis regarding a causal claim is best supported by the body of evidence.” This is exactly Hill’s approach; his aspects of causal reasoning have been very widely used for more than 50 years to reason from observation to causation, both in science and in law. Premising EBM on subjectively cherry-picked “best” evidence makes it a plausible method, but not a scientific one.

Over time, the EBM approach of selectively considering “best” evidence seems to have been “dumbed down”: first by placing randomized controlled trials (RCTs) at the top of a pyramid of all study designs as the supposed “gold standard” design, and later by asserting that RCTs are the only type of study that can be trusted to obtain unbiased estimates of effects. All other forms of empirical evidence are deemed “potentially biased” and therefore unreliable. This is a plausibility conceit, as I will show below.

But it is so plausible that it is routinely taught in modern medical education, so that most doctors only consider RCT evidence and dismiss all other forms of empirical evidence. It is so plausible that this author had an on-air verbal battle over it with a medically uneducated television commentator who provided no evidence other than plausibility (Whelan, 2020): Isn’t it “just obvious” that if you randomize subjects, any differences must be caused by the treatment, and no other types of studies can be trusted? Obvious, yes; true, no.
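(To make the “obvious, yes; true, no” point concrete: randomization balances covariates only on average, not in any particular allocation. Below is a minimal simulation sketch in Python, with purely illustrative numbers and no real trial data, showing how a single small randomized trial can still leave its arms imbalanced on a prognostic factor such as age.)

    # Randomization guarantees covariate balance in expectation, not in any
    # one finite allocation. All numbers here are illustrative assumptions.
    import random

    random.seed(7)
    n = 50                                           # a small trial
    ages = [random.gauss(60, 12) for _ in range(n)]  # hypothetical prognostic covariate

    idx = list(range(n))
    random.shuffle(idx)                              # one random 1:1 allocation
    treated, control = idx[:n // 2], idx[n // 2:]

    def mean_age(group):
        return sum(ages[i] for i in group) / len(group)

    print(f"treated arm mean age: {mean_age(treated):.1f}")
    print(f"control arm mean age: {mean_age(control):.1f}")
    # Under many seeds the arms differ by several years -- chance imbalance
    # that can mimic or mask a treatment effect in a trial this small.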

Who benefits from a sole, obsessive focus on RCT evidence? RCTs are very expensive to conduct if they are to be epidemiologically valid and statistically adequate. They can cost millions or tens of millions of dollars, which limits their appeal largely to companies promoting medical products likely to bring in profits substantially larger than those costs. Historically, pharma control and manipulation of RCT evidence in the regulatory process provided an enormous boost to the ability to push products through regulatory approval and into the marketplace, and the motivation to do this continues today. (Read more.)

