The Human Truth Foundation

What is Science and the Scientific Method?

By Vexen Crabtree 2014


#epistemology #knowledge #science #truth

The "Scientific Method" is a set of steps taken to ensure that conclusions are reached sensibly, experiments designed carefully, data is interpreted in accordance with the results of tests, and that procedures can be verified independently. The system is designed to reduce as much Human error and bias as possible1. Ideas and theories must be subject to criticism, and counter-evidence must be taken into account in order to produce new and more accurate theories. Everything should be questioned. Most people cannot "do" science and do not have the skills to analyse data in an adequate manner2. The Scientific Method is hard and demanding, with high standards of ethical conduct expected - Daniel C. Dennett wrote that "good intentions and inspiration are simply not enough" (2007)3. The effects of science can impact on all human development, changing entire societies4. Science has been responsible for a staggering increase in human knowledge, human technology and human capabilities over the last few centuries.5

One of my favourite definitions of science is that of E. O. Wilson:

Science, to put its warrant as concisely as possible, is the organized, systematic enterprise that gathers knowledge about the world and condenses the knowledge into testable laws and principles. The diagnostic features of science that distinguish it from pseudoscience are first, repeatability: The same phenomenon is sought again, preferably by independent investigation, and the interpretation given to it is confirmed or discarded by means of novel analysis and experimentation. Second, economy: Scientists attempt to abstract the information into the form that is both simplest and aesthetically most pleasing - the combination called elegance - while yielding the largest amount of information with the least amount of effort. Third, mensuration: If something can be properly measured, using universally accepted scales, generalizations about it are rendered unambiguous. Fourth, heuristics: The best science stimulates further discovery. Fifth and finally, consilience: The explanations of different phenomena most likely to survive are those that can be connected and proved consistent with one another.

"Consilience: The Unity of Knowledge" by E. O. Wilson (1998)6


1. The Scientific Method

1.1. New Theories and New Facts (Only a Theory?)

#science

The building-block of science is the theory. New data results in new theories, and theories inspire experiments which are designed to test them... resulting in new data, which may then require new theories. This cyclic process propels science forwards. Any new theory must displace an old one, and each new theory therefore needs abundant evidence in its favour. No-one will abandon the standing theory without good reason.

New theories are first of all necessary when we encounter new facts which cannot be "explained" by existing theories.

"Ideas and Opinions" by Albert Einstein (1950)7

Richard Gross opens his prominent book "Psychology: The Science of Mind and Behaviour" (1996) with some chapters on science, and offers the following as two major steps in the scientific process:

  1. Theory Construction, "an attempt to explain observed phenomena".
  2. Hypothesis Testing, involving "making specific predictions about behaviour under certain specified conditions".

The best thing about theories is that when new evidence comes to light, new theories arise to replace or modify the old ones. Russell (1935) explains how science starts with initial observations and continually builds until major theories are brought to general acceptance through long periods of practical trial and error.

Science starts, not from large assumptions, but from particular facts discovered by observation or experiment. From a number of such facts a general rule is arrived at, of which, if it is true, the facts in question are instances. This rule is not positively asserted, but is accepted, to begin with, as a working hypothesis. If it is correct, certain hitherto unobserved phenomena will take place in certain circumstances. If it is found that they do take place, that so far confirms the hypothesis; if they do not, the hypothesis must be discarded and a new one must be invented. However many facts are found to fit the hypothesis, that does not make it certain, although in the end it may come to be thought of in a high degree probable; in that case, it is called a theory rather than a hypothesis.

"Religion and Science" by Bertrand Russell (1935)8

You might notice that the theory is king: data without a supporting theory is all but useless. It can even be dangerous: if data leads a researcher to claim some radical new element of cause and effect, there has to be a valid underlying theoretical framework in addition to the data9. The lack of good theory can lead people far 'down the garden path', i.e., to false conclusions, and to undue confidence in the data and in their own interpretation of it.

Only a Theory? A common criticism of the theories of evolution and of the big bang is that "they are only theories". However, such critics misunderstand what the word "theory" means in science. A scientific theory that explains the facts well is accepted, whereas one that doesn't is rejected. That something "is only a theory" does not affect whether it is accurate or not. Example theories include the theory of gravity, and the theory that the Earth orbits the Sun. Clearly, the evidence is the important aspect of any theory!

1.2. Falsification: All Theories Must be Testable

#science

Theories must be disprovable. A theory must make it clear exactly what criteria would falsify it, and therefore, the theory must be testable10. Richard Dawkins defines all of science in terms of its testability: science is, he says, "defined as the set of practices which submit themselves to the ordeal of being tested"11.

The academic Karl Popper is often cited as the source of this requirement, and it has become one of the most well-known 'rules' of scientific methodology. Popper proclaimed the principle in Logik der Forschung, published in Vienna in 1934, and translated it into English as The Logic of Scientific Discovery, published in London in 1959. Professor Victor Stenger points out that Rudolf Carnap explored the same idea in "Testability and Meaning" in Philosophy of Science (1936)12, so it appears that Popper is given undue credit by academics as the sole originator of the idea. However, the science historian Patricia Fara states that Popper first voiced his falsification criterion as long ago as 1919, after observing a lecture by Einstein13. Whatever the history, it is now a very well-established principle.

Falsification [is] the demarcation criterion proposed [...] as a means for distinguishing legitimate scientific models from nonscientific conjectures. [...] While failure to pass a required test is sufficient to falsify a model, the passing of the test is not sufficient to verify the model. This is because we have no way of knowing a priori that other, competing models might be found someday that lead to the same empirical consequences as the one tested.

"God, the Failed Hypothesis: How Science Shows That God Does Not Exist"
Prof. Victor J. Stenger (2007)14

Imagine a game of hangman, where a person must guess what word is being revealed but can only see some of the word's letters. With the evidence available, the person can guess a word - this is his theory. The criterion by which he can be affirmed or proven wrong is the revealing of new evidence. If a letter is revealed that does not fit his theory then the theory must instantly be discarded. So it is in science (where the world is almost infinitely complex): theories are much easier to deny than to ultimately confirm. To say that a theory is true you must wait until the very end of the game, until every letter is revealed. The only problem is that, as new facts are continually discovered, it is hard to be sure that any future evidence won't suddenly falsify the theory; this is why some hold that all scientific models will always remain theories. To abandon this concept is to try to stop the flow of new discoveries!
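As a toy illustration of this asymmetry - a sketch of my own, with a made-up word list and function, not something drawn from the sources above - the following checks candidate "theories" against the letters revealed so far: a single contradiction falsifies a guess outright, while merely fitting the evidence never singles one guess out as true.

    # Hangman as falsification: the evidence is a partly revealed word,
    # and each guessed word is a "theory" about what the full word is.

    def is_falsified(guess: str, revealed: dict, length: int) -> bool:
        """A guess is falsified if it contradicts any piece of evidence."""
        if len(guess) != length:
            return True
        return any(guess[pos] != letter for pos, letter in revealed.items())

    # Evidence so far: a 7-letter word with 's' at position 0 and 'e' at position 3.
    evidence = {0: "s", 3: "e"}

    for guess in ["science", "silence", "seventy", "scheme"]:
        verdict = "falsified" if is_falsified(guess, evidence, 7) else "consistent (not proven)"
        print(f"{guess:8s} -> {verdict}")

Three of the four guesses survive the same evidence, which is exactly Stenger's point above: failing a test refutes a model, but passing one cannot verify it, because rival models may fit the same results.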

1.3. Peer Review15

#science

Peer-review is an important part of the scientific method16. It is naive to believe that scientists act without passion, subconscious bias or social influences when they conduct studies - and this shortcoming is readily admitted by scientists themselves17. So scientific papers are sent to a number of recognized experts in appropriate fields for review. The publishing journal waits for the results and feedback from those experts, and decides whether it wants to publish the paper or not. Some get published straight away. Others might be sent back to their authors with the experts' scientific concerns set out, and the journal will wait for an edited version to be resubmitted before (probably) sending it off for peer review again. Some studies are found to be "fatally flawed" and so never get published.

Likewise, some papers are withdrawn from the original publication even years after the journal was printed, such as when a study is later found to be flawed, completely erroneous, or fraudulent. Sometimes undisclosed funding can cause an article to be withdrawn, such as when a scientist is secretly paid by an industry body to produce favourable "science" to support the industry in question. Such tactics have been employed by the oil, tobacco, drinks and alternative-therapy industries, and by lobby groups for other industries responding to criticism from governments and scientists by attempting to "buy" scientific credence for their activities. Retraction from a previous publication is a serious indication that something was wrong with the study.

Peer-review will look at the methods used in the research, the strength of the statistical analysis, whether proper care is taken with the wording of the conclusion, whether the summary does indeed reflect what the data show, etc. The idea is not to publish misleading or faulty papers. This quality-control boosts both the quality of the publication itself (hence making the journal more trusted) and aids science in general: to publish in a scientific journal, you have to display the correct care and attention to detail, and avoid the many pitfalls of bad science and poor methodology.

The main strength of this approach is that the scientific methods involved, and the conclusions, are scrutinized by experts who do not have a vested interest in the quality of the study. It is well-known that those who conduct studies often believe their own stated conclusions and are biased towards seeing their own work in a positive light18, and peer-review is the process that allows critical evaluation of the work from others' points of view. As in all human endeavours, a second set of eyes will often reveal problems that the original author would never spot.

1.4. Reproducibility and Independent Verification of Results

#science

Reproducibility and independent verification are integral parts of the scientific method19,20. Whether it is research in physics, chemistry or psychology, the results of any experiment must be reproduced independently in another location. This checks that the results were not the product of unintentional but consistent human bias in the original experiments. There have been plenty of cases where a scientist declares results and describes his experiment in a scientific journal, but other researchers fail to reproduce them in their own experiments. If results cannot be duplicated then the data is not accepted as valid. This is why newspaper reports on single experiments should be heeded with care: any experimenter can claim results, but if others around the world cannot verify the procedure then the chances are the experiment was flawed. Results should only be acclaimed once they have been verified, and this is why public announcements are sometimes not made for some time, especially with highly technical or long-term experiments. Always check who did the original experiments, and who verified the methods.

Science requires that a phenomenon be reliably produced in different laboratories for it to be accepted as genuine. Whoever claims to have discovered a phenomenon must describe in sufficient detail how it was produced so that other investigators, following similar steps, can reproduce it themselves. This requirement of replicability applies to all fields of science. [...]

Although the history of science contains numerous examples of an investigator's expectations clouding his or her vision and judgement, the most serious of these abuses are overcome by the discipline's insistence on replicability and the public presentation of results. Findings that rest on a shaky foundation tend not to survive in the intellectual marketplace. [...] The biggest difference between the world of science and everyday life in protecting against erroneous beliefs is that scientists utilize a set of formal procedures to guard against [...] sources of bias and error.

"How We Know What Isn't So: The Fallibility of Human Reason in Everyday Life" by Thomas Gilovich (1991)21

1.5. Occam's Razor: Simplicity & Fewer Assumptions are Better

#atheism #epistemology #philosophy #science #theism

The aim of science is, on the one hand, a comprehension, as complete as possible, of the connection between the sense experiences in their totality, and, on the other hand, the accomplishment of this aim by the use of a minimum of primary concepts and relations.

Albert Einstein (1936)22

A hypothesis rests on assumptions which must then be backed up by evidence if the idea is to gain ground. Clearly, the fewer such assumptions there are, the better. In general this has led to a principle in science that the theory with the fewest assumptions and the fewest complicated side-effects is probably a better theory than the others. This is commonly called 'Occam's Razor':

Occam is best known for a maxim which is not to be found in his works, but has acquired the name of 'Occam's razor'. This maxim says: 'Entities are not to be multiplied without necessity.' Although he did not say this, he said something which has much the same effect, namely: 'It is vain to do with more what can be done with fewer'. That is to say, if everything in some science can be interpreted without assuming this or that hypothetical entity, there is no ground for assuming it. I have myself found this a most fruitful principle in logical analysis.

"History of Western Philosophy" by Bertrand Russell (1946)23

In philosophical arguments, it is frequently used to mean that if a particular belief or idea requires a massive amount of special explanation, leads to other odd conclusions, and entails outstanding complexity, then that belief is probably wrong.

For example, in the theological debate between atheists and theists, both attempt to account for the existence of the universe using similar ideas. Atheists believe that the universe is self-contained and had no preceding 'cause'. Theists believe that the universe was created by God, and that God is self-contained and has no 'cause'. Both theories contain a similar uncaused element, but the theistic theory contains an additional assumption: that the uncaused cause is a god. By employing Occam's razor, many would guess that the simpler, atheistic theory is more likely to be correct because it contains fewer unanswered questions (assumptions) than the theistic one.
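In statistics, the same parsimony principle is routinely made concrete through model selection, where every extra parameter (i.e. assumption) must pay for itself with a substantially better fit. The sketch below is only an illustration of that general idea on made-up data - it is not drawn from Russell or Einstein - and scores polynomial fits with a rough Bayesian Information Criterion.

    # Parsimony in practice: compare polynomial models of increasing complexity
    # fitted to noisy but truly linear data, penalizing every extra parameter.

    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(0.0, 1.0, 50)
    y = 2.0 * x + 1.0 + rng.normal(scale=0.1, size=x.size)  # linear "truth" plus noise

    def bic(degree: int) -> float:
        """Fit a polynomial of this degree; lower score = better fit per assumption."""
        coeffs = np.polyfit(x, y, degree)
        residuals = y - np.polyval(coeffs, x)
        n, k = x.size, degree + 1                  # k = number of fitted parameters
        return n * np.log(np.mean(residuals ** 2)) + k * np.log(n)

    for degree in (1, 3, 7):
        print(f"degree {degree}: BIC = {bic(degree):.1f}")
    # The simple degree-1 model normally scores best: higher degrees fit the
    # noise slightly better, but not enough to pay for their extra parameters.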

1.6. Randomized Double-Blinded Trials24

#placebo_effect #science

When it comes to testing theories that involve humans, all kinds of psychological factors come into play. These can change the results of experiments, and all good designs will try to minimize indirect effects, so that the primary theory alone is being tested. One such mitigation is blinding - not letting the subjects know which group they are in (for example, whether they are in the control group, whose role is simply to carry on as normal even while believing they may be undergoing the real procedure). To achieve this, subjects should be randomly placed into the different groups in the experiment, so that the researcher's subconscious biases do not result in the control group being filled with people according to subtle judgements about their personality, etc. "Double-blinded" trials are those in which neither the researchers nor the subjects know who is receiving which treatment, and therefore it is impossible for psychological effects to bias the selection process as the experiment proceeds.

A placebo is most likely to work if the doctor genuinely believes it to be a cure and communicates that conviction to the patient. Quack doctors who have a talent for invoking the placebo effect sometimes attract huge followings. [...] The randomized placebo-controlled, double-blind trial is perhaps the most important advance in medical research. Neither patient nor the doctor knows whether a treatment is real or a sugar pill. By eliminating even unconscious bias, it makes it possible to determine what works and what doesn't.

"Superstition: Belief in the Age of Science" by Robert L. Park (2008)25

Randomisation is not a new idea. It was first proposed in the seventeenth century by John Baptista van Helmont, a Belgian radical who challenged the academics of his day to test their treatments like blood-letting and purging. [...] Does randomisation matter? As with blinding, people have studied the effect of randomisation in huge reviews of large numbers of trials, and found that the ones with dodgy methods of randomisation overestimate treatment effects by 41 per cent. [...] A review of blinding in all kinds of trials of medical drugs [in particular,] found that trials with inadequate blinding exaggerated the benefits of the treatments being studied by 17 per cent.

"Bad Science" by Ben Goldacre (2008)26

Ben Goldacre studied the evidence surrounding acupuncture and found that properly controlled studies show no advantage of acupuncture above the placebo effect, whereas poorly controlled studies, where psychological factors are not properly accounted for, end up showing that acupuncture is effective26. Poor experimental design can have real effects - and in an age when most news outlets don't have scientifically qualified staff to examine trials before they are reported, large numbers of people can be easily duped into trusting something that ought to be debunked.
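As a concrete sketch of the allocation step described above (my own illustration; the subject labels and function are invented, not taken from Goldacre or Park), randomization can be as simple as shuffling the subject list and splitting it into coded arms, with the key that maps codes to "treatment" or "placebo" withheld from both subjects and researchers until the outcome data have been collected.

    # Minimal random allocation for a two-arm, double-blinded trial.
    # Subjects and researchers see only the coded labels 'A' and 'B'; the
    # unblinding key is held by a third party until the analysis stage.

    import random

    def randomize(subject_ids, seed: int = 2024) -> dict:
        """Shuffle the subjects and allocate half to each coded arm."""
        rng = random.Random(seed)
        shuffled = list(subject_ids)
        rng.shuffle(shuffled)
        half = len(shuffled) // 2
        return {sid: ("A" if i < half else "B") for i, sid in enumerate(shuffled)}

    subjects = [f"subject-{n:02d}" for n in range(1, 21)]
    allocation = randomize(subjects)                      # what the trial staff use
    unblinding_key = {"A": "treatment", "B": "placebo"}   # sealed until the end
    print(allocation["subject-01"])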

1.7. Be Open to Criticism and Make Way for New Evidence

#knowledge #science

The nature of the scientific method lends itself to continual improvement in knowledge, based on a continual stream of new data. Scientists frequently admit when their theories have been superseded or corrected by their peers11,27. The procedures of peer review and independent verification both ensure that mistakes in theory, application or analysis of data are spotted when other scientists examine and repeat the experiment. "The essence of science is that it is self-correcting" says the eminent scientist Carl Sagan (1995)28. Yet it is still rare that new evidence completely destroys a theory. Philosopher Bertrand Russell states that "theories, if they are important, can generally be revived in a new form"29. Hence, theories undergo continual improvement.

Still perhaps it may appear better, nay to be our duty where the safety of the truth is concerned, to upset if need be even our own theories, specially as we are lovers of wisdom: for since both are dear to us, we are bound to prefer the truth.

"Ethics" by Aristotle (350BCE)30

Sometimes, individual theories do have to be abandoned. Rarely, entire scientific paradigms are questioned, such as when Newtonian physics gave way to Einstein's relativity. New evidence can cause entire theoretical frameworks to be undermined, resulting in a scientific revolution. "Science involves an endless succession of long, peaceful periods [... and then periods of] scientific revolution" (Kuhn 1962). To overthrow a theory requires strong evidence and a full cycle of the scientific method. To overthrow an established theory, which is often supported by many other theories and mountains of testing and evidence, requires extraordinary evidence.

"Scientific Theories Must Make Way for New Evidence" by Vexen Crabtree (2017)

2. A History of Science

#history #science

2.1. Ionia, 6th century BCE

2,500 years ago, there was a glorious awakening in Ionia: on Samos and the other nearby Greek colonies that grew up among the islands and inlets of the busy eastern Aegean Sea. Suddenly there were people who believed that everything was made of atoms; that human beings and other animals had sprung from simpler forms; that diseases were not caused by demons or the gods; that the Earth was only a planet going around the Sun. And that the stars were very far away. [...]

In the 6th century B.C., in Ionia, a new concept developed, one of the great ideas of the human species. The universe is knowable, the ancient Ionians argued, because it exhibits an internal order: there are regularities in Nature that permit its secrets to be uncovered. [...] This ordered and admirable character of the universe was called Cosmos. [...]

Between 600 and 400 B.C., this great revolution in human thought began. [...] The leading figures in this revolution were men with Greek names, largely unfamiliar to us today, the truest pioneers in the development of our civilization and our humanity.

"Cosmos" by Carl Sagan (1995)31

The city of Alexandria was the greatest in the ancient world. Its famous Library of Alexandria was constructed in the third century BCE by the Greek kings, the Ptolemies. It became a scientific research centre and the publishing capital of the world. Scholars in the Ionian tradition forged ahead in many arenas of knowledge. "Eratosthenes accurately calculated the size of the Earth [...], Hipparchus anticipated that the stars come into being, slowly move during the course of centuries, and eventually perish, it was he who first catalogued the positions and magnitudes of the stars to detect such changes. Euclid produced a textbook on geometry from which humans learned for twenty-three centuries"32. Such astounding wisdom, backed up by studious thinking and experimentation, could have launched the world into the modern era. But it didn't.

Rising superstition, the taking of slaves and the growth of monotheistic religion led to the demise of scientific enterprise. The culture changed. The last great scientist of Alexandria, Hypatia, was born in 370CE, at a time when the "growing Christian Church was consolidating its power and attempting to eradicate pagan influence and culture". Cyril, the Archbishop of Alexandria, regarded Hypatia as a symbol of the learning and science that he considered pagan. "In the year 415, on her way to work she was set upon by a fanatical mob of Cyril's parishioners. They dragged her from her chariot, tore off her clothes, and, armed with abalone shells, flayed her flesh from her bones. Her remains were burned, her works obliterated, her name forgotten. Cyril was made a saint"32.

The last remains of the Alexandrian Library were destroyed not long after Hypatia's death; nearly all of its books and documents were lost. The Western Dark Ages had begun, and much of that knowledge and science was forgotten in the West for over a thousand years.

2.2. The Rise of Science From the 17th Century

During the Middle Ages, the West had again begun to contribute to the science and learning of the world; in the interim, the Arabic lands to the East had thankfully translated Greek works and carried the torch of knowledge. Philosopher-scientists emerged from the West and the East, and debated the finer points of epistemology. As the centuries went on, thought became freer and material life improved, and the seventeenth century saw the dawning of a new age of human thought: modern scientific methods were back on the menu after nearly 2,000 years of hiatus.

Almost everything that distinguishes the modern world from earlier centuries is attributable to science, which achieved its most spectacular triumphs in the seventeenth century. The Italian Renaissance, though not medieval, is not modern; it is more akin to the best age of Greece. [...] The modern world, so far as mental outlook is concerned, begins in the seventeenth century. No Italian of the Renaissance would have been unintelligible to Plato or Aristotle; Luther would have horrified Thomas Aquinas, but would not have been difficult for him to understand. With the seventeenth century it is different: Plato and Aristotle, Aquinas and Occam, could not have made head nor tail of Newton. [...]

Four great men - Copernicus [1473-1543], Kepler, Galileo, and Newton - are pre-eminent in the creation of science. Of these, Copernicus belongs to the sixteenth century, but in his own time he had little influence.

"History of Western Philosophy" by Bertrand Russell (1946)33

3. The Battles Between Science and Religion

As a scientist, I am hostile to fundamentalist religion because it [...] teaches us not to change our minds

"The God Delusion" by Prof. Richard Dawkins (2006)34

4. Open Access to Research

Secrecy can impede the progress of science, and openness is a hallmark of good science

Prof. A. Scott
In Skeptical Inquirer (2007)35

Open Access speeds up the worldwide application of scientific research and allows theories and results to be tested, checked and analysed by scientists across the world, leading to more reliable science, data and technology for everyone. As much science is funded by governments, the general populace should have free access to its results.

4.1. Publishing Charges

#australia #belgium #canada #denmark #finland #germany #hungary #netherlands #portugal #sweden #UK #USA

A mass of valuable research comes from university researchers who are funded by national governments, costing hundreds of millions of dollars each year to support. Their results are published in peer-reviewed journals that have to be paid for; the publications are then bought by the universities where the research is done. This is highly inefficient, and the public end up paying twice in order to read the results: once through the taxes that support the research, and again when the publications are purchased.

Prof. Michael Geist holds the Canada Research Chair in Internet and E-commerce Law at the University of Ottawa, and says “The model certainly proved lucrative for large publishers [but] the emergence of the internet dramatically changes the equation. Researchers are increasingly choosing to publish in freely available, open access journals posted on the internet, rather than in conventional, subscription-based publications”37.

Sweden leads the world in open access to research archives. A Swedish project called the "Directory of Open Access Journals" links to scientific open access journals; it now lists more than 2,500 worldwide, including over 127,000 articles37.

Aided by the Open Journal System, a Canadian open source software project based at Simon Fraser University in British Columbia, more than 800 journals, many in the developing world, currently use the freely available OJS to bring their publications to the internet.

For those researchers committed to traditional publication, open access principles mandate that they self-archive their work by depositing an electronic copy in freely available institutional repositories shortly after publication. This approach grants the public full access to the work, while retaining the current peer-reviewed conventional publication model.

While today this self-archiving approach is typically optional, a growing number of funding agencies are moving toward a mandatory requirement. These include the National Institutes of Health in the US, the Wellcome Trust in the United Kingdom, and the Australian Research Council. Moreover, some countries are considering legislatively mandating open access.

Prof. Michael Geist (2007)37

4.2. An EU-wide Open Access Principle

Last month five leading European research institutions launched a petition that called on the European Commission to establish a new policy that would require all government-funded research to be made available to the public shortly after publication. That requirement - called an open access principle - would leverage widespread internet connectivity with low-cost electronic publication to create a freely available virtual scientific library available to the entire globe.

Despite scant media attention, word of the petition spread quickly throughout the scientific and research communities.

Within weeks, it garnered more than 20,000 signatures, including several Nobel Prize winners and 750 education, research, and cultural organisations from around the world.

In response, the European Commission committed more than $100m (£51m) towards facilitating greater open access through support for open access journals and for the building of the infrastructure needed to house institutional repositories that can store the millions of academic articles written each year.

Prof. Michael Geist (2007)37

It seems right that such a repository of publicly-funded research should be made available for free to the public that paid for it.

5. Overcoming Thinking Errors

The point of The Scientific Method is to overcome the many sources of human error that arise from flawed cognitive processes and our imperfect perceptions of reality.

Current edition: 2014 Mar 12
Last Modified: 2017 Mar 14
Originally published 2006 May 19
http://www.humantruth.info/science.html


References:

The Guardian. UK newspaper. Respectable and generally well-researched UK broadsheet newspaper.

Skeptical Inquirer. Magazine. Published by Committee for Skeptical Inquiry, NY, USA. Pro-science magazine published bimonthly.

Aristotle. (384-322BCE)
(350BCE) Ethics. E-book. Amazon Kindle digital edition. Originally published 340BCE (approx).

BBC News. The British Broadcasting Corporation (BBC) is a UK mainstream public-service mass media broadcaster, known to be reasonably accurate and responsible with its journalism.

Bloom, Clive
(2001) Literature, Politics and Intellectual Crises in Britain Today. Published by Palgrave.

Coolican, Hugh
(2004) Research Methods and Statistics in Psychology. 4th edition. Published by Hodder Headline, London, UK.

Dawkins, Prof. Richard
(2004) A Devil's Chaplain. Paperback book. Originally published 2003 by Weidenfeld & Nicolson. Current version published by Phoenix of Orion Books Ltd, London UK.
(2006) The God Delusion. Hardback book. Published by Bantam Press, Transworld Publishers, Uxbridge Road, London, UK.

Einstein, Albert. (1879-1955)
(1954) Ideas and Opinions. Paperback book. Published in 1954 by Crown Publishers, New York, USA and in 1982 by Three Rivers Press. A collection of Einstein's writings and texts.

Fara, Patricia
(2009) Science: A Four Thousand Year History. Hardback book. Published by Oxford University Press. Fara has a PhD in History of Science from London University.

Gilovich, Thomas
(1991) How We Know What Isn't So: The Fallibility of Human Reason in Everyday Life. Paperback book. 1993 edition. Published by The Free Press, NY, USA.

Goldacre, Ben. MD.
(2008) Bad Science. Published by Fourth Estate, an imprint of HarperCollins Publishers, London, UK.

Gross, Richard
(1996) Psychology: The Science of Mind and Behaviour. Paperback book. 3rd edition. Published by Hodder & Stoughton, London UK.

Kuhn, T.S.
"The Structure of Scientific Revolutions" (1962). University of Chicago Press, Chicago, USA. Via Gross (1996).

Park, Robert L.
(2008) Superstition: Belief in the Age of Science. E-book. Amazon Kindle digital edition. Published by Princeton University Press, New Jersey, USA.

Popper, K.R.
(1959) The Logic of Scientific Discovery. Published by Hutchinson, London, UK.

Russell, Bertrand. (1872-1970)
(1935) Religion and Science. Paperback book. 1997 edition. Published by Oxford University Press, Oxford, UK. Introduction by Michael Ruse.
(1946) History of Western Philosophy. Paperback book. 2000 edition. Published by Routledge, London, UK.

Sagan, Carl
(1995) Cosmos. Paperback book. Originally published 1981 by McDonald & Co. Current version published by Abacus.

Stenger, Prof. Victor J.
(2007) God, the Failed Hypothesis: How Science Shows That God Does Not Exist. Published by Prometheus Books, NY, USA. Stenger is a physicist and a skeptical philosopher whose research is strictly rational and evidence-based.

Wilson, E. O.
(1998) Consilience: The Unity of Knowledge. Hardback book. Published by Little, Brown and Company, London, UK. Professor Wilson is a groundbreaking sociobiologist.

Footnotes

  1. Goldacre (2008) chapter 13 "Why Clever People Believe Stupid Things" digital location 3570. Added to this page on 2017 Mar 14.
  2. Added to this page on 2014 Sep 10. Ben Goldacre's Bad Evidence, BBC Radio 4 programme aired on 2013 Jan 01 at 2000hrs.
  3. Dennett, Daniel C. from an essay that will be published 'later this year' in "Philosophers Without God", preview in Skeptical Inquirer (2007 Mar/Apr) p44.
  4. Bloom (2001) p165. He says 'Whilst the intellectual may work in a specific field, it is in the nature of his or her work to have implications for the whole field of knowledge'.
  5. Skeptical Inquirer (2009 May/Jun) article "Playing by the Rules" p42-44 by Harriet Hall, MD. Added to this page on 2017 Feb 06.
  6. Wilson (1998) p57. Added to this page on 2010 Jul 11.
  7. Einstein (1950) Scientific American Vol. 182, No. (1950 Apr 04). In Einstein (1954) p342.
  8. Russell (1935) p13-14.
  9. Coolican (2004) p333. Coolican adds that 'Researchers mostly have a background of theoretical argument and previous research findings that leads them to a reasonable argument for the effect they are expecting'. Added to this page on 2014 Mar 10.
  10. Coolican (2004) p15. Added to this page on 2014 Mar 10.
  11. Dawkins (2004) p210.
  12. Stenger (2007) p26; he cites Philosophy of Science B 3 (1936): 19-21; B 4 (1937): 1-40.
  13. Fara (2009) p301.
  14. Stenger (2007) p26.
  15. Added to this page on 2014 Mar 11.
  16. Skeptical Inquirer (2009 May/Jun) article "Playing by the Rules" p42-44 by Harriet Hall, MD. See items 1, 4 and 5 in Dr Hall's list of the necessary steps to ensure good science. Added to this page on 2017 Feb 06.
  17. Coolican (2004) p16. Author cites Mitroff, I.I. (1974) Studying the lunar rock scientist. Saturday Review World, 2 November, 64-65. Added to this page on 2015 Nov 20.
  18. Coolican (2004) p16. Added to this page on 2015 Nov 20.
  19. Coolican (2004) p17,333. Added to this page on 2014 Mar 09.
  20. Skeptical Inquirer (2009 May/Jun) article "Playing by the Rules" p42-44 by Harriet Hall, MD. See item 6 in Dr Hall's list of the necessary steps to ensure good science. Added to this page on 2017 Feb 06.
  21. Gilovich (1991) p168,56-57.
  22. Einstein (1936). From the Journal of the Franklin Institute, Vol. 221, No. 3, March 1936. Via Einstein (1954) p293.
  23. Russell (1946) p462-463.
  24. Added to this page on 2015 Sep 06.
  25. Park (2008) digital location 906-908. Added to this page on 2016 Dec 29.
  26. Goldacre (2008) chapter 4 "Homeopathy" digital location 725-730,748,773. Added to this page on 2015 Sep 06.
  27. Skeptical Inquirer (2009 May/Jun) article "Playing by the Rules" p42-44 by Harriet Hall, MD.
  28. Sagan (1995) p16.
  29. Russell (1946) p69.
  30. Aristotle (350BCE) digital location 412-413.
  31. Sagan (1995) p194-195.
  32. Sagan (1995) p364,366.
  33. Russell (1946) p512.
  34. Dawkins (2006) p284.
  35. Skeptical Inquirer (2007 May/Jun) p42. Alan Scott is professor of physics at the University of Wisconsin-Stout in Menomonie, Wisconsin, 54751. He received his PhD in 1995 from Kent State University in experimental nuclear physics. His quote referenced the American Physical Society Council (1999) What is Science Policy Statement.
  36. The Guardian (2005 May 17) article "Britain at forefront of move to freely available research".
  37. BBC News (2007 Feb 28) article "Push for open access to research" by Michael Geist, Prof. Date last accessed 2017 Mar 14.

©2017. All rights reserved.