An Alternate — My Proposed Explanation

14 Sep

In the last post, I took a look at Hawking’s explanation for the current physical universe – analyzing the theoretical development using the standards of Place. Now, I’m going to take a turn and suggest an alternate theoretical development. I’ll put forth a theory to explain phenomena related to physical space, objects and the relative motion of objects. The context defined here is meant to represent a mental model of the physical universe.

——–Begin Massfluid-Time-Space Theory

Start with a blank context. Open a new empty mental space and call it the Context of Physics [CoP]. Now, conceive of an ideal fluid that is homogeneous and without structure. Call this ideal fluid massfluid. Determine that massfluid is present in the Context of Physics so that it completely fills the context.

Introduce time into the Context of Physics such that time fills part, not all, of the context. Time enables sequence; it also defines from -> to in a single direction.

Now conceive of physical space. Introduce physical space into the Context of Physics, displacing massfluid like a big bubble in the middle, such that space is present in time but, wherever space is present, massfluid is not present.

These set-up tasks result in the following axiom being true in the Context of Physics.

Physical Stuff Axiom
Massfluid, time, and physical space are present such that the presence of space determines: a) the absence of massfluid and b) the presence of time.

Introduce the meaning to cut. Now determine that massfluid has the ability to cut space such that massfluid effects separation but not destruction [in keeping with my theory of geometry called Space-Cut Theory]. Create point-like sources for massfluid to flow into space. Also create point-like sinks for massfluid to exit space.

These set-up tasks lead to the following axiom.

Ability to Cut Axiom
Massfluid enters space at point-shaped sources and exits through point-shaped sinks and massfluid has the ability to make geometric cuts in space as it flows from source to sink.

Observe that the very successful equations that describe the behavior of electricity and magnetism, Maxwell’s equations, completely support the notion of sources and sinks (divergence and curl). The sources and sinks are perceived as “particles.”
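As an illustrative aside (not part of the theory itself), the divergence picture this paragraph appeals to can be checked numerically: a radial 1/r² field modeling a point source has zero divergence everywhere away from the source, so all of the "outflow" is attributable to the source point, as in Gauss's law for a point charge. The grid and field below are my own illustrative choices.

```python
import numpy as np

# Radial 1/r^2 field of a point source at the origin: F = r_hat / r^2.
# Away from the origin its divergence is zero; the outflow originates
# entirely at the source point itself.
h = 0.05
ax = np.arange(1.0, 2.0 + h, h)          # a cube well away from the origin
x, y, z = np.meshgrid(ax, ax, ax, indexing="ij")
r = np.sqrt(x**2 + y**2 + z**2)
Fx, Fy, Fz = x / r**3, y / r**3, z / r**3

# Numerical divergence via central differences.
div = (np.gradient(Fx, h, axis=0)
       + np.gradient(Fy, h, axis=1)
       + np.gradient(Fz, h, axis=2))

interior = div[1:-1, 1:-1, 1:-1]          # drop one-sided boundary estimates
print(abs(interior).max())                # close to 0 (truncation error only)
```

Repeating the same computation on a grid that contains the origin would show the divergence spiking there, which is the sense in which the source shows up as a "particle."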

Now, determine that the meanings of to push and to pull are present in the Context of Physics.

As massfluid enters space or moves through space (cutting it), it pushes on space and space pushes back – equal and opposite reactions. I assert that this interaction, space pushing on massfluid and massfluid pushing back, is the fundamental phenomenon involved in what we call gravity. [Proving this goes beyond the scope of this development.]

Ability to Push Axiom
At any boundary between space and massfluid, space pushes on massfluid and massfluid pushes on space such that the pushing forces are equal in magnitude and opposite in direction.

The Ability to Push Axiom draws on Sir Isaac Newton’s insight that for every action, there is an equal and opposite reaction. Experience with this law in classical mechanics shows how essential this interaction is.

As massfluid enters space or moves through space (cutting it), it exhibits a kind of internal tension. Massfluid pulls on massfluid. The network of massfluid lines in space provides the means for perturbations, or waves, to propagate. Observations of the propagation of waves in “the vacuum” reveal a uniform tension (associated with the constant speed of light).

Ability to Pull Axiom
Massfluid pulls on massfluid creating a tension that pulls sources and sinks together; the direction is orthogonal to the pushing force and the magnitude of the pulling force is inversely proportional to the magnitude of the pushing force against space.

Consider that in the nucleus of an atom, there is very limited space, so that the tension in lines and planes of massfluid is very strong while the pushing force of space is relatively weak. In the distances between planets and stars, there is a lot of space, so the pushing force is stronger and the tension or pulling force is relatively weaker.

Following are some rules that govern the Context of Physics. Whether they can be deduced from axioms or need to be axioms themselves is a matter that is open for investigation.

The first rule is due to consistent observations regarding the conservation of energy. Note that the units of energy (kg·m²/s²) are built from mass, distance, and time. These are measurements, respectively, of massfluid, space, and time.

P-Rule 1: The total quantity of massfluid, time, and space remains fixed.

This rule implies that the means to destroy massfluid, space, and time is not present in the Context of Physics. It also implies that the rate of massfluid flowing into space is equal to the rate of massfluid flowing out of space.
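The dimensional claim that P-Rule 1 rests on can be checked mechanically. A minimal sketch (the tuple encoding of units is my own illustrative device, not anything from the post) tracks SI base-unit exponents and confirms that the joule decomposes into exactly mass, distance, and time:

```python
# Represent a unit as exponents of (kg, m, s) -- an illustrative encoding.
KG = (1, 0, 0)   # mass
M  = (0, 1, 0)   # distance
S  = (0, 0, 1)   # time

def mul(u, v):
    """Multiply two units: add their exponents."""
    return tuple(a + b for a, b in zip(u, v))

def power(u, n):
    """Raise a unit to an integer power: scale its exponents."""
    return tuple(a * n for a in u)

# Energy: 1 joule = 1 kg * m^2 / s^2.
joule = mul(KG, mul(power(M, 2), power(S, -2)))
print(joule)  # (1, 2, -2): mass^1, distance^2, time^-2
```

Nothing beyond mass, distance, and time appears in the decomposition, which is the point the paragraph above relies on.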

P-Rule 2: A source-point is an origin for massfluid to flow in multiple directions and at different characteristic distances:
•short-range
•medium-range
•long-range

This rule is established due to beta-decay, observations/knowledge regarding neutrinos, and the base of knowledge regarding electro-magnetic waves. The short-range massfluid lines are in the neutron. And perhaps these lines relate to observations of something akin to string vibrations that led to superstring theory. Medium-range flow goes to sinks associated with electrons in the same atom. Long-range flow goes to sinks associated with black holes at the center of stars and galaxies. The long-range massfluid lines provide a network of lines that E-M waves propagate on.

This network of long-range massfluid lines accounts for the constant speed of light. It is known from conventional physics that “the speed of a wave along a stretched ideal string depends only on the tension and linear density of the string and not on the frequency of the wave.” The network of massfluid lines provides a 3-dimensional network of these ideal lines. The pushing force provides the mass density of the lines and the pulling force provides the tension. These are constant over distances at the macro level; thus the speed of light is constant.
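The quoted string-wave fact is the standard relation v = √(T/μ), where T is the tension and μ the linear mass density; no frequency term appears. A quick check with made-up numbers (illustrative values, not measurements):

```python
import math

def wave_speed(tension_n, linear_density_kg_per_m):
    """Speed of a transverse wave on an ideal string: v = sqrt(T / mu)."""
    return math.sqrt(tension_n / linear_density_kg_per_m)

# Illustrative numbers: T = 100 N, mu = 0.0625 kg/m.
v = wave_speed(100.0, 0.0625)
print(v)  # 40.0 m/s -- the same for every frequency, since none appears
```

Whatever one makes of the massfluid picture, the underlying string relation itself is uncontroversial textbook physics.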

P-Rule 3: Massfluid lines only intersect at sources or sinks.

This rule is established due to the success of Michael Faraday’s research. In his visualization of field lines, this was one of his rules. Also, we know from geometry that unique lines require space between them.

——–End Massfluid-Time-Space Theory

There’s more to do, and more to explain, but this outlines the start of a possibly helpful theory.

Now it’s your turn to poke holes and tell me the problems with this theory.


Analyzing the Cosmology Explanation Given by Stephen Hawking on Curiosity

28 Aug

In my paper, “Working Together on Knowledge,” I advance some standards for using language in the service of science and the type of understanding, clarity, and consistency that we want from scientific explanations. In this post, I’d like to use my proposed approach, or framework – called Place – to look at a specific example. The example I’ll analyze is Stephen Hawking’s explanation for the origins/beginning of the physical world that was made via the Discovery Channel’s TV show, Curiosity.

————————-

A theory context should be empty at the beginning so that we can know clearly and explicitly what is in the theory and what is not. Thus, start by setting up a blank theoretical context in your mind. The goal for this context is for it to be the place where we develop the target explanation put forward by Hawking. Let’s call this context SH Cosmo Ex for Stephen Hawking Cosmology Explanation.

In order to communicate about theory (or anything really) we need nouns and verbs. [The simplest unit of communication is a declarative clause, which consists of a noun and a verb.] Thus, distinguish between noun meanings [stuff] and verb meanings [forces]. The primitives of a theory are noun meanings, called concepts, and verb meanings, which are either capabilities (triggered) or rules (always in effect).

Next, introduce the concept of nothing into the SH Cosmo Ex context; we’ll call it Nothing. [This may seem strange, but it is also done by the set-theoretic development of numbers, which starts with the null set; so it’s not without precedent.] And introduce a capability: to-separate-nothing.

Now, it could be that Stephen Hawking [Hawking] would take exception to the idea that his explanation needs a capability.

Consider that Hawking indicates through the use of a hole and the stuff removed from the hole that he envisions Nothing being separated into Energy and Negative Energy.

A principle in Place is that in any construction site (a theoretical context is a type of construction site) things stay the way they are unless a force is active to effect the change. (Force ≡ that which is common to verb meanings.) Scholars don’t generally accept magic as a part of science; if there’s a change, there is a reason for it. Therefore, if Nothing changes, if Nothing is separated into energy and negative energy, then some force acted to cause the separation. A capability must be employed: to-separate-nothing.

Hawking wants energy to be a raw material in his explanation. However, with (conventional) dimensional analysis we know that energy has units of mass, distance, and time. Thus, the basic raw materials involved in the SH Explanation are matter, space, and time. The presence of energy requires the presence of mass, space, and time.

Returning to the development of Hawking’s Explanation in SH Cosmo Ex, to-separate-nothing acts on Nothing to separate Nothing into amounts of positive mass, space, and time, plus amounts of negative mass, space, and time. Before to-separate-nothing spontaneously acts, there is no time or space. After to-separate-nothing is active, time begins, space begins, and matter begins (which is measured in units of mass). Anti-time begins, anti-space begins, and anti-matter begins.

It is known that if matter and anti-matter collide in equal quantities, then both are annihilated. But to my knowledge, most program-viewers and I have not been exposed to the idea of negative space-time. Given Hawking’s analogy of dirt separated into a hole and a hill, the positive something known as space-time would require a corresponding negative something, which we’ll call anti-space-anti-time, in order to balance the equation (established by Hawking) which equals Nothing.

Next, rules are introduced into SH Cosmo Ex that provide the 4 basic physical forces: gravity, electro-magnetism, the strong nuclear force, and the weak nuclear force. The analytical lens of Place reveals that whenever we have order rather than randomness, then a force known as a rule is active. The physical behavior associated with the basic physical forces is consistent and non-random. Thus, these forces are cast as rules in SH Cosmo Ex.

In the SH Explanation, the raw materials: space, time, matter, anti-space, anti-time, and anti-matter are initially present in a single clump that starts to expand. I think that Hawking sees this expansion as a result of the to-separate-nothing force, but it’s hard to say for sure. He asserts that causality requires time, and since there is no time before the raw materials of the physical universe blink into existence, there is no cause for the initial separation of Nothing. Would SH agree to a self-triggering force/capability acting on Nothing? A force, to-separate-nothing, is absent from the explanation he offered; however, it is necessary for a well-formed theory.

Let’s grant the required set up SH Cosmo Ex according to SH’s Explanation, starting with Nothing and the to-separate-nothing force. And we’ll further grant that everything starts out in a clump such that space (and anti-space) has practically no extension in any direction. Why does the clump expand?

Gravity acts in a radial direction pointed towards the center of mass, so gravity acts to keep matter clumped together. If no space is separating matter (as is the case if space has no appreciable distance in any direction initially), then individual charged particles are not present (a particle inherently requires a spatial boundary), so electro-magnetism would not have a role in the Big Expansion. The strong and weak nuclear forces are short-range forces which are generally not seen as playing a role in the macroscopic movement of matter. And initially, there are not any nuclear particles, right? …just one singular clump of stuff. So why would there be any expansion of the clump?

The SH Explanation, as I see it, does not account for the Big Expansion which happens in his history of time.

Here’s another issue. What keeps anti-matter, anti-space and anti-time separate from matter, space and time? What keeps quantities of them from recombining into nothing? If some quantity of negative energy and positive energy combine, then we would experience this as (observable) energy being destroyed. And yet the conservation (non-destruction) of energy is a widely-observed property of the universe. The conservation of energy means that Hawking’s negative energy and positive energy don’t combine; they remain separate. This consistent non-random behavior must be effected by some force. However, the SH Explanation does not address this. Without a force maintaining separation, the SH Cosmo Ex permits the destruction of energy which would mean that the SH Explanation is at odds with the conservation of energy.

———————-

I hope this analysis shows how Place can be helpful as a rubric for evaluating theories.

On the other hand, the SH Explanation makes me wonder if scholars are prepared to accept acts of magic – spontaneous appearances and disappearances – as part of a scientific explanation. Is Hawking’s explanation acceptable to the academic community?

Working Together on Knowledge

28 Jul

Given that language is the means to accomplish many different ends, such as artistic expression, historical accounts, marketing tools, and propaganda, how is written language used to understand ourselves and the world around us?

Language is used to pursue and acquire knowledge through the development of theories.

A theory is essentially an explanation about how the world works with respect to some subject area. For example, we have theories to explain: why the sun, moon and planets move the way they do; why the human eye perceives colors; what is the nature of light, etc. Theories are the substance of society’s knowledge base.

Knowledge can be seen as a large system that many different groups are working on. Not surprisingly, everyone has his or her own idea of the best way to proceed. People who work on knowledge acquisition have experienced regular controversy over assumptions and language standards.

Consider that working together as a group requires some attention to process. In order for a group to be productive, everyone in the group must accept certain standards. For example, consider whether the people working on the US space shuttle would have accomplished anything without agreement on: the language to communicate in, standard work schedules, project planning according to professional standards, rules for inspection, and a standard measurement system (metric or English) among other things.

Typically, theories that supply explanations for society’s knowledge have been advanced in an ad hoc manner. Subjective aspects of the scholar and his academic institution play a large role in how the work is regarded, rather than objective analytical criteria.

What are the standards for theoretical work in the service of science?

There are established standards for logic and methods of proof; however, efforts to establish a standard system for formal theories have not been especially successful.

I propose some standards with respect to definition, construction, communication, and theories to support the objective that a scientific explanation – any scientific explanation – be created so that it is clear and consistent. This body of work, called Place of Understanding [or just Place], is a kind of constitution to establish order and regulate controversy with respect to theoretical science. It also offers a protocol for establishing a formal theory.

Please review it. It is published in chapters here (although the Results and Conclusion are still in process):
Working Together on Knowledge
Introduction to “Working Together on Knowledge”
Overview Presentation of Place of Understanding

See if you think it is successful. Consider ways that it could be improved. If we can achieve consensus for a final version of these standards, we will secure an opportunity for significant gains in productivity.

Evidence Regarding Subjectivity in Approval Process

3 Jul

I’m going to deviate from my previously mentioned plan for today’s post.

I came across the article below in a weekly e-newsletter from AAAS / Science:
Editor’s Choice: Highlights of the Recent Literature, April 30, 2010

It asserts a finding that there is a 10% drop in publications and funding when a “superstar” researcher is subtracted abruptly from a team due to death.

“The authors’ analyses of these consequences favor a causal explanation in which the critical factor in these downward trends was being deprived of the intellectual input from these superstars, as opposed to a loss of collective experimental expertise or of privileged channels of communication to funding panels and journal editors.”

I urge you to consider, however, if this is not evidence that the selection of research for peer-reviewed publication is subjective, and driven by politics and personality over the substance of the ideas. Following is the article from the weekly e-newsletter.

By Gilbert Chin

It is no longer uncommon to see multi-authored original research papers, and in many instances, these studies represent the fruits of collaborations between multiple laboratories, especially in the biomedical sciences. How important are the lead researchers in these social and scientific networks? Answering this question empirically appears at first glance to be intractable, but Azoulay et al. have compiled a data set that enables them to take advantage of natural events—when still-active superstar researchers are subtracted from collaborations via death. Of the roughly 230,000 U.S. medical school faculty members, 10,000 were classified as elite according to seven objective professional criteria; during the last two decades of the 20th century, 112 of these scientists died suddenly. The effect on the productivity of the surviving faculty-level collaborators in these superstar-coauthor dyads was unambiguous and persistent: They suffered decrements of almost 10% in publications and funding. The authors’ analyses of these consequences favor a causal explanation in which the critical factor in these downward trends was being deprived of the intellectual input from these superstars, as opposed to a loss of collective experimental expertise or of privileged channels of communication to funding panels and journal editors.

Q. J. Econ. 125, 549 (2010).

_________________________________________________

An alternate possibility is that the publishers and funders were willing to publish or fund boring or minor research because of the star, but when the star was gone, the dynamic changed. This has not been ruled out.

Why don’t I buy the conclusion of the analyses of Azoulay et al.? A team of researchers makes most of the big decisions at the beginning of undertaking their project: the subject of the investigation, the method of investigation, key assumptions, etc. Following this, the research team and their investigation have a lot of inertia (resistance to change). The superstar on the team leaving abruptly might have changed aspects of presentation, or might have recommended some adjustments to the actions of the team, but I find it hard to believe that an abrupt exit due to the death of the superstar researcher would change the main thrust or substance of the investigation. The substance of the investigation includes the inquiry into trying to answer some question about how or why the world works the way it does. This inquiry is what we expect to be the primary interest of the funding panel or journal editor. Do the other participants become any less professional? The dependence of research publication and funding on superstar researchers points to human factors independent of the research itself. Does the politics of research – as it is currently done – help us achieve the next level of discovery and knowledge?

In the end, the pursuit of knowledge is an activity involving people working together. And this means that there is no escaping the human aspects of conflict, collaboration, prestige, and intuitive judgments.

Working as an Independent Scholar on Theoretical Foundations

4 Feb

In the blog post before last (the first one), I raised some concerns about how the current system works for proposing and evaluating new theories. In the last blog post, I shared some of my experience about the difficulties of receiving a fair hearing/review.

So why not just go with the flow and follow the usual path for being a grad student, post-grad, lecturer, or professor in an esteemed academic institution?

First of all, people who see new openings for leaving the beaten path are not the people who are willing to spend a lot of time to get perfect test scores. Rather, personal curiosity drives the process of discovery and education. One observer of this tension in my life noted that a person cannot serve two masters — the master that demands activities for getting good grades versus the master known as curiosity, or the quest to know.

Secondly, is it practical to work on changing the foundation of a building by working within that building? In this context, the building is the construct of ideas that the seasoned yeomen of the academic business all subscribe to. Working outside the building is really the only way that such a major change can be attempted. We want to preserve the good parts of the structure and put it on a foundation that provides better support and allows for a greater scope of development.

Then, consider if a graduate advisor would let a grad student take on some big issue like the theoretical underpinnings of mathematics. Not likely. An advisor wants to know that the grad student can be successful with the proposed topic. Then, post-doc and lecturer opportunities are related to the scholar’s grad work. So, the usual path (leading to peer reviewed and/or published theoretical work) pretty much assures that new theoretical work is incremental and built on existing approved foundations.

That’s enough.  Just a little bit of attention to address questions that may have been raised. Let’s move on.

The main thing I want to get to is that in working together on knowledge, we need to have some standards that govern theoretical development. I plan to take this up in my next blog post.

Some Personal Experience

31 Jan

As an independent scholar, I am not the kind of scholar who can get her work peer reviewed. 

From my previous post: “People that work for the journal or the book publisher provide an initial filter to see if a work should be accepted for peer review. They evaluate if the person is a reasonable candidate to be regarded well by others in the field and they consider the abstract to see if it seems reasonable and offers something new or interesting.” I don’t fit the criteria since I do not have a doctorate from an esteemed school with an esteemed advisor.

 That’s just the way it is.

Maybe it is relevant to share some examples of attempts to get my work reviewed by people in the academic community; although, without knowing the quality of my work, it’s hard to know how justified my brush-offs were.

I developed a paper that captured some defects I have observed in Set Theory — 2 problems with formal set theory and 2 problems with informal set theory. There’s at least one well-known problem with Set Theory associated with a certain type of paradox — so it’s not like Set Theory has an unblemished reputation. However, in the absence of another theory in the first half of the 20th century to fill its role as the theoretical foundation of Mathematics, scholars have moved forward with the idea that everything is okay with Set Theory as long as practitioners abandon formalism and avoid the special cases leading to the paradoxes. (1)

If you are up for the challenge, you can judge for yourself if I support my claim in this paper: “Why Set Theory is Not an Acceptable Theoretical Foundation for Mathematics.”
[http://www.placeofunderstanding.us/pdf_files/WhySetTheory.pdf]
I am interested to know if my arguments can be refuted.

I submitted this paper to “The Journal of the American Mathematical Society” (under 2000 Mathematics Subject Classifications 03A05 and 03E99) in September 2002. I was prepared that it would not be published, but I was hoping for some review and comments on my work. I only received a short email saying “I am sorry to say that we will not be able to publish your paper.” No comments, no feedback.

In another case, I found a post-doc in Mathematics at MIT who was willing to read my paper on Set Theory and look at my paper proposing a new foundation for Mathematics. He obtained copies of my references. He seemed to give it a serious go. But he did not deliver any observations on problems with my reasoning.

He consulted with the professor that he took his one course on set theory with.

The professor indicated that formal Set Theory and the Predicate Calculus of formal Mathematical Logic are 2 distinct theoretical systems. This pertains to the first problem with formal set theory.

Per the textbooks I read, formal Set Theory is a formal theory defined within the theory of formal theories, the Predicate Calculus (of Mathematical Logic). Formal Set Theory is formally used to define numbers. So there is a problem with circular definition. Numbers are used to define the Predicate Calculus (supposedly because counting numbers are part of natural language), and Predicate Calculus is the theoretical setting in which formal Set Theory is defined, and formal Set Theory is the theory used to formally define numbers (via successive sets). Natural numbers define the Predicate Calculus, the Predicate Calculus defines Set Theory, and Set Theory defines natural numbers.

Doesn’t it seem on the face of it that the attempt to understand the theoretical foundations of mathematics should not start with using numbers as primitive concepts?

I don’t agree with the position put forward by the professor that formal Set Theory and the Predicate Calculus are independent. Consider this. If Set Theory is defined outside of the Predicate Calculus, is it a formal theory? What makes a theory a formal theory according to the tenets of academic mathematics? I’m interested to know what other scholars think.

The post-doc in a sense agreed with one of the problems of informal set theory, because it is involved with the known paradoxes. He seems to think that the axioms of Zermelo-Fraenkel Set Theory correct the problem, but ZF Set Theory is a case of formal set theory, not informal set theory. He felt I was wasting people’s time by including this known problem in my paper. I include this issue because it is not some weirdness that can be fixed by avoidance; it is a serious flaw in which “the whole” and “a part, not all, of the whole” are NOT distinctly different; they can be the same.

In the end, the post-doc confessed that he believed the ideas and work of the respected giants of the field must be more correct than the ideas of a lone independent female scholar. I asked, “What about the new numbers defined in my paper on the new foundation for mathematics?” He said, “What are they useful for?”

I have to tell you that I was shocked to get this reaction from a mathematician. Yes, the “imaginary” number also got this reaction; I just thought that a modern mathematician would understand from the example of i (the square root of -1) that numbers are objects worthy of analysis and most likely a practical purpose will show up over time. We agreed to disagree and went our separate ways.

So one of the changes that I would love to see in academic science — working together on knowledge — is a way for independent scholars and scholars from lesser schools to get their work reviewed. Consider that if the current system of academic review had existed in 1905, Albert Einstein would NOT have had his 3 seminal articles published — he was an independent scholar working as a patent clerk.

I think it is relevant to this ongoing monologue to share some of my experience so that you have more understanding about personal context related to the subject. Also, it allows you to get to know me a little bit.

Footnote

1. The Mathematical Experience. Philip J. Davis & Reuben Hersh. Houghton Mifflin, Boston; copyright 1981 by Birkhäuser.
Pages 331-337
“The theory of sets was developed by Cantor as a new and fundamental branch of mathematics in its own right. …
“Set theory at first seemed to be almost the same as logic. The set-theory relation of inclusion, A is a subset of B, can always be rewritten as the logical relation of implication, ‘If A, then B.’ So it seemed possible that set-theory-logic could serve as the foundation for all of mathematics. ‘Logic,’ as understood in this context, refers to the fundamental laws of reason, the bedrock of the universe. …
“Since all mathematics can be reduced to set theory, all one need consider is the foundation of set theory. However, it was Russell himself who discovered that the seemingly transparent notion of set contained unexpected traps.
“…
“The Russell paradox and the other antinomies showed that intuitive logic, far from being more secure than classical mathematics, was actually much riskier, for it could lead to contradictions in a way that never happens in arithmetic or geometry.
“This was the ‘crisis in foundations,’ the central issue in the famous controversies of the first quarter of this century. Three principal remedies were proposed.
“…
“… In 1930, Gödel’s incompleteness theorems showed that the Hilbert program was unattainable — that any consistent formal system strong enough to contain elementary arithmetic would be unable to prove its own consistency. The search for secure foundations has never recovered from this defeat.”
Page 344
“In recent years, a reaction against formalism has been growing. In recent mathematical research, there is a turn toward the concrete and the applicable.”
Page 347-348
“Thus, Lakatos applied his epistemological analysis, not to formalized mathematics, but to informal mathematics, mathematics in process of growth and discovery, which is of course mathematics as it is known to mathematicians and students of mathematics. Formalized mathematics, to which most philosophizing has been devoted in recent years, is in fact hardly to be found anywhere on earth or in heaven outside the texts and journals of symbolic logic.”

Competition of Theories, Where We Are Now

24 Jan

We have many reasons to feel good about the progress that has been made with respect to understanding our physical world. Gaining understanding about gravity, electro-magnetism, and statistical mechanics has enabled technologies for space travel, computers, and refrigerators. Continued progress, however, is being limited by the screening process for new theories.

Any explanation of how the world works involves a theory. A theory makes certain fundamental assumptions (axioms), has special key terms, and uses language in a careful way in order to foster clarity and preserve consistency (i.e., it is not the case that a statement and its negation are both true). We like and want theories that let us know more about the world around us. We especially like theories that lead to more power, more health, more free time, and other benefits.

What many people may be unaware of is this: the world of science has plenty of competing theories, but, unlike professional sports, there are no formal rules that govern the competition.

In professional American football, does the enforcement of the rules (via referees) matter for which team wins? Yes. So how does it work in professional science? How does one theory win over another?

First, a person who advances a theory must be a grad student, post-doc, or professor at a college or university. For example, the movie Lorenzo's Oil tells the story of parents who make a medical breakthrough that halts the progressive deterioration of their dying son. The process and politics of academic medicine reject their breakthrough, and its benefit is denied to other families who rely on and trust the medical establishment.

Being a grad student, post-doc, or professor is not a guarantee that a person's new theory will be considered, however. The person's academic background, his or her thesis advisor, or the school where he or she works are all potential disqualifiers. In order to be allowed onto the field of competition at all, the creator of a new theory must come from the best schools and/or work with other top professors in the field. Theoretical work is qualified not on its merits, but on the status of its creator.

We wouldn't want to have to consider all possible theories (any theory proposed by anyone), but consider how this situation puts so much power in the hands of a very small group of people, say professors from the top 10-15 universities. (The size of the group depends on the field of study and the discipline that the theory is related to.) This small group of people is usually older and apt to hold on to the mental pathways that have served them well all their lives, which can inhibit discovery. It also cultivates an environment where the politics of human relationships matters in a way that is contrary to the level of objectivity we expect in scientific discourse.

A theory is advanced in some written form, say a journal article or a book. People who work for the journal or the book publisher provide an initial filter to decide whether a work should be accepted for peer review. They evaluate whether the author is likely to be regarded well by others in the field, and they read the abstract to see if it seems reasonable and offers something new or interesting. This is another part of the process that fails to meet the desired level of objectivity, since it relies on the subjective opinions of a few people.

Next, the trusted professionals who perform the peer review look over the article or book and make sure that it meets basic standards: Are the claims supported? Was the research done properly? Does it fit with what is known? In their professional opinion, is the paper acceptable? There are some basic standards in this process, but no rules governing why one theory should win over another.

After a paper or book is published, then the competition begins in earnest.

For scientists who are familiar with paradigms and theories, and with the fact that both are subject to change, there is at least one principle governing the competition: if a proposed theory is simpler, explains what is known, and provides new results (new understanding), then it should win. (The power of incumbency is very strong, as it probably should be.)

The comparable situation in American football would be putting two teams on a field and declaring the winner to be the one with the most points gained in one hour via touchdowns, field goals, safeties, and extra points, with no rules regarding acceptable ways to advance the ball or change possession, and no rules against roughing the kicker or pass interference. Yet even this situation has more clarity than the competition between theories, because each method of gaining points (a touchdown, field goal, safety, or extra-point kick) is much better defined than "what is known" or the property of being "simpler."

You may ask: what kind of rules could be applied to theories?

I have some ideas, but that’s a post for another day.

Hopefully, you can see that the current way we work together on scientific knowledge has a lot of room for improvement.