I'm a fan of Neil deGrasse Tyson. I don't think he's lying about this.
I think you might be speaking politics. I'm thinking science.
I can recommend an alternative methodology if you want to check for yourself.
Academic journals have something called an impact factor: it's basically a way of ranking journals in a particular discipline by the average number of citations a paper from that journal receives.
For the past few years, the top journals have been:
Journal of Climate,
Bulletin of the American Meteorological Society,
Global Biogeochemical Cycles,
Atmospheric Chemistry and Physics (open access, no paywall!), and
Climate Dynamics.
Pick an issue from one of these journals, or one month's worth from several, and just sample. To be scientifically rigorous you should of course sample randomly (to eliminate bias) from the whole population, which includes far more than the top five. Read the abstracts: how many support the mainstream view, and how many do not? Count an abstract as a success if it supports human-induced climate change, and as a failure if it is contrarian or gives no indication either way. Ignore methodology-development papers that draw no conclusion in either direction.

That leaves you with the number of abstracts you examined (minus the excluded method papers), the number of successes, and hence an observed proportion of successes. You can then test the 97% figure for yourself with an online binomial probability calculator. All you need is three numbers: the number of trials (articles sampled), the number of successes (articles supporting human-induced climate change), and the expected proportion to test against (the 97% figure estimated by other studies of the literature).
As an example, suppose you sample 24 articles and 20 of them support human-induced climate change. That is fewer than the 24 × 0.97 ≈ 23.3 expected. But there is a spread of outcomes we should expect even when 97% is the true proportion (1 failure and 23 successes, 2 failures and 22 successes, and so on), so the calculator works out the probability of seeing a result at least as extreme as ours given the expected proportion. If that probability falls below our significance level, we have evidence that the expected proportion is wrong. The standard level is 5%, or 0.05.
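If you'd rather not trust an online calculator, the same exact binomial calculation fits in a few lines of Python. This is just a sketch using the standard library (the function names are my own, not from any particular calculator):

```python
from math import comb

def binom_pmf(k, n, p):
    """Probability of exactly k successes in n trials with success probability p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

def binom_cdf(k, n, p):
    """Probability of k or fewer successes in n trials."""
    return sum(binom_pmf(i, n, p) for i in range(k + 1))

# The worked example: 20 successes in 24 trials, expected proportion 97%.
# One-sided test: probability of seeing 20 or fewer successes if 97% is true.
p_value = binom_cdf(20, 24, 0.97)
print(round(p_value, 4))  # → 0.0053
```

Any online binomial calculator should give the same tail probability from the same three numbers.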
So, at the standard 5% level of significance, do we have evidence that the expected proportion is wrong?
Sign and binomial test
Run the numbers and the answer is actually yes. The one-sided probability of 20 or fewer successes out of 24, when the true proportion is 0.97, is about 0.005, well below our 5% level of significance. The cumulative probability does not climb back above 0.05 until we have 22 or more successes out of 24.
You can test this for yourself, Twila. It's very simple, and it's what we scientists do! Just three numbers in a calculator is all you need, and you'll very likely learn something along the way!
