Saturday, November 8, 2008

Changing the Future

With the "Origin of Species" and subsequent refinements of evolutionary theory, directed evolution of any kind has been abandoned as a possibility by almost all of the scientific establishment. This contrasts with our high regard and even idolisation of those that write great books, invent new gadgets, and that have vision that becomes reality (eg. man landing on the moon by the end of the decade). By principle 4, nothing complex is created without extensive precedent. The only thing separating the process of evolution and the creative process is context. Both processes have elements in the transition where it is unclear what role pure chance plays, and how much is borrowed (or even plagiarised) from existing elements. Also, it is unclear how much importance to place to a "vision" of the desired endpoint of evolution/creative design. One cannot objectively determine the link between what the minds eye sees and the resulting creation. Alternatively, science has not come up with a mechanism for the same vision thing guiding evolution, though a combination of subconscious individual choices and the activity of some currently poorly understood genes has enough potential for a forward-looking mechanism of evolution. Analogies between natural processes and human design are crucial in progress towards success for any design.



8) Creativity and Vision - In marconomic terms, this entails visualising an "evolutionary" (see principle 4) pathway from something that exists now to something desired or required in the future, for which a reasonable analogue exists and of which personal actions can be a part. Creativity cannot come from "intelligence" without knowledge of precedents and available analogues - nor is creativity simply plagiarism/copying with random changes.

Wednesday, November 5, 2008

More Boundaries of Objectivity

If one accepts that objective reasoning and argument can only start if there is agreement on starting axioms, then one must accept that most ad hoc arguments are subjective, and have motives other than getting to the objective truth of the matter.
When something bad happens, our natural human instinct is to find out why. Whether it is a death, an injury, a loss of money or a change in the weather, we often look for something or someone to blame, especially something other than ourselves or the things we hold close. We are instinctively aware of moral hazard and of skewed profit motives, and we instinctively believe in making and enforcing laws that aim to prevent cheating the system for selfish motives. When something bad happens, we tend to put it in the context of our interpretation of the law, rather than the risk/reward calculations of game theory. Well-designed law can in fact be viewed as a special case in game theory where crime, on balance, doesn't pay, and where the objectivity of the law is tested in court. With international law especially, the unenforceability of many nominal laws, a lack of jurisdictional clarity with the potentially enforceable ones, and overlapping dysfunctional multilateral entities each with their own agenda mean that the assumption of well-designed law should really be thrown out.


7) Blame - Marconomics views the discussion and analysis of blame as too subjective to be considered scientific, with some exceptions. To objectively analyse blame requires agreement on who makes the rules, what evidence to take into account, and the criteria by which judgement should be made (as to who is to blame). The exceptions are where there is strong separation of the powers of making, policing and judging laws. Then there can be something resembling scientific objectivity. Thus, for the most part, arguments about ill deeds ought to revolve around what strategies to take to move to a system with strong separation of powers to fairly deter, prosecute and convict evil-doers.

Monday, November 3, 2008

Ok, ok, it's serious, but it is still *like* a game

The whole point of abstracting a set of objects into a single object is both to simplify the model of the dynamics and to reflect dynamics that the actions of individual units cannot explain. This is most apparent in strategic situations with humans. Many species of animals, and especially humans, rally behind a leader. Humans have a theory of mind, and human leaders will try to predict how other leaders are going to react so as to get the best outcome either for themselves or their group - much like in any strategic board game. Often the strategies involve sacrificing some of your own, complete misrepresentations, and outright slaughter. Game theory is still the best representation of the motivations and dynamics - the point is not to trivialise the situation, but to objectively analyse it. The leaders themselves may or may not be consciously thinking about the game-like nature of the strategies, but that neither increases nor decreases the risks of the situation. The risk of the situation is a function of the dynamics and the boundaries - the rules of the game, if you will.

Game theory has plenty of analogues in nature as well - whether it is a group of vultures vying for the same carcass or a territorial battle between packs of dogs, game theory brings useful models to the table.

In politics, the game of "brinkmanship" is quite common for tyrants under pressure from sanctions and various threats. The Israel-Palestine stalemate can be seen as a Nash Equilibrium, as could the Cold War.

6) Game theory - If one is looking at a strategic context, and there are few enough units to contend with (or if the units can be grouped thus), Marconomics views classical game theory as the correct modelling approach. "Dilemmas" are a state of play where there is uncertainty in a decision due to a lack of information about, and/or dependence on, decisions made by other units: in which case any decision carries irreducible risks and/or rewards either way. In a "Tragedy" scenario, a conflict exists between the optimal strategies for the individual unit and what would be optimal for the whole group if it were considered a unit in itself.
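To make the "Tragedy" case concrete, here is a minimal sketch in Python (the payoff numbers and the cooperate/defect labels are my own invented illustration, not part of the principle): each unit's best response is to defect whatever the other unit does, yet the group treated as a single unit does best under mutual cooperation.

```python
# Toy "tragedy" game with invented payoffs: individually optimal play
# (defect) conflicts with what is optimal for the group as a whole.

# payoffs[(row_move, col_move)] = (row_payoff, col_payoff)
payoffs = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}
moves = ["cooperate", "defect"]

def best_response(opponent_move, player):
    """Best move for the given player (0 = row, 1 = column) against a fixed opponent move."""
    if player == 0:
        return max(moves, key=lambda m: payoffs[(m, opponent_move)][0])
    return max(moves, key=lambda m: payoffs[(opponent_move, m)][1])

# Whatever the other unit does, each unit's best response is to defect...
assert best_response("cooperate", 0) == best_response("defect", 0) == "defect"
assert best_response("cooperate", 1) == best_response("defect", 1) == "defect"

# ...but the group total is highest under mutual cooperation: the "tragedy".
group_totals = {pair: sum(p) for pair, p in payoffs.items()}
print(max(group_totals, key=group_totals.get))  # ('cooperate', 'cooperate')
```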

Friday, October 31, 2008

Top down design - Bottom up testing

With design and evolution processes being defined as essentially the same thing, just in different contexts, this opens up analysis of evolutionary processes more in the way we would analyse, say, software design techniques. Conversely, processes that are primarily seen as design or creativity (say new consumer products, movies, tools, software etc.) can be analysed more in line with how evolution is analysed. An important concept is that complex units (animal species, mobile phones, complex software) can be conceptually broken down into independently acting sub-units (individual animals, the camera module of a phone, subroutines or software objects) to get a perspective on how the sub-unit affects the design/evolution success of the unit, or even of the class of units.

There can be contexts in which, because there is interaction between different levels, one cannot come to reliable conclusions. There are situations in which an individual animal's success doesn't confer success on the species. If cameras are banned from certain situations, then the whole mobile phone is devalued in that context.

When looking at countries, the ideal way to test whether a law is going to work for the benefit of the world is to have two countries with all laws and circumstances identical except for the law change, see which country works better, and then assume that this will transpose to the law being better for the whole world. The dynamics of having prohibition countries next to non-prohibition countries will elicit all sorts of interfacial behaviour which can swamp the effect you are trying to test - e.g., it is easy for someone in prohibitionist Gujarat to point to the alcohol-fuelled chaos of the coastal enclaves that are part of Goa, Daman and Diu as evidence that prohibition is a good thing, but those enclaves are the way they are because of Gujaratis evading prohibition. This is the kind of example I am talking about where there is interaction between different levels of abstraction. Things like Mendelian traits in genetics have very little of this kind of interaction between the levels: a trait that is good for an individual is most commonly good for the whole species to have.

5) Complex units can be defined as a set of multiple complex subunits for Marconomic analysis. In this there is no one "level" more special than another. It is a way of modelling what is really happening that will be useful in bounded conditions where there is little interaction between the levels. Different aspects of analysis will be better dealt with at different levels. For instance, The Panda's Thumb talks at length about the "unit of evolution" being the gene, the individual, or groups of individuals etc. Dawkins famously points to the selfish gene as the primary unit in the book of that name, while others rate the individual as the unit, and others again favour group selection theories. Marconomics states that all of these are valid models that will give the right dynamics (and answers) in different contexts.

Tuesday, October 28, 2008

What philosophy should we really have learnt from Charles Darwin?

I distinctly remember that in grade four I was taken aside by a teacher specialising in gifted children, who had me study Charles Darwin's foray to the Galapagos Islands, retrace his discoveries, and see what conclusions one would come up with from first principles knowing what he knew. The conclusion I came up with, based on the circumstances, was that all the species on those islands had adapted from species common on the mainland that had reached the islands. This contradicted previously widely held views on nature, that life on Earth had appeared in its final forms everywhere on Earth at the same time. So far, so uncontroversial. However, my take on the idea is that, having seen several specific examples that neither required nor had evidence for the appearance of species without precedent, Darwin made a logical leap to *make that an axiom*: i.e. complex species have a precedent of similar complex species, without exception. At a different level this is the same with individuals (individuals are similar to their parents), and at a lower level again with genes (genes are copied fairly faithfully from cell to cell down the germ line).
A secondary part to this was that random changes (mutations) in the copy are enough to account for any *truly new* features. All other complex features that might appear truly new must be assumed to be imported from an unknown source (itself with similar complex precedents). Evolutionary biology is as far as Charles Darwin took the axiom.

There is a popular anecdote amongst evangelicals that a creation (say a human) betrays the fact that it is created by its complexity, just as a human artefact (say a watch) betrays the fact that it was created by a human (a watchmaker). I like to turn it around and demonstrate that the complexity of a watch actually betrays that it evolved, just like living complex things have. Truly new features of a watch are randomly generated by the watchmaker or other interested designers, tried through iteration after iteration, and tested in the "environment" (the marketplace); otherwise a feature that is not truly new (e.g. a mobile phone) is added from something with extensive precedent in itself. The design process of a designer must iterate random changes to the design and test them via thought experiment, rejecting obvious failures, so that the result gives the appearance of *not* having evolved through random changes. The design process itself is also subject to evolution, as a watchmaker teaches the next trainee watchmaker how to "design" new watches.
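As a toy illustration of the iterate-and-reject process described above (loosely in the spirit of Dawkins' "weasel" program, and entirely my own construction rather than anything from the watch anecdote), a "designer" that keeps only those random single-character changes which don't move the design further from the envisioned endpoint converges on that endpoint, while every intermediate step remains a small variation on a precedent:

```python
# Hypothetical sketch: a designer iterating random changes and rejecting
# obvious failures against an envisioned endpoint ("vision").
import random

random.seed(1)

TARGET = "waterproof digital watch"          # the designer's envisioned endpoint
ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def fitness(design):
    """How many characters already match the envisioned endpoint."""
    return sum(a == b for a, b in zip(design, TARGET))

design = "mechanical pocket watch!"          # the complex precedent (same length)
while design != TARGET:
    pos = random.randrange(len(design))
    candidate = design[:pos] + random.choice(ALPHABET) + design[pos + 1:]
    if fitness(candidate) >= fitness(design):  # reject obvious failures, keep the rest
        design = candidate

print(design)  # the precedent has been iterated into the "vision"
```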



4) Complex design and creation - Complex artefacts of any description or purpose always have precedents that are complex. A lineage must always be assumed to exist back to a less complex precedent, with either a trial-and-error pathway for any additional complexity, or an added component that has extensive precedent in itself. Whether the pathway to the antecedent is directed or undirected is immaterial to this principle. "Creation" of a complex artefact without precedent does not exist under Marconomic principles. "The design process" is also a complex "artefact" under this concept and thus itself necessarily has precedents and a lineage that can be traced back.

Monday, October 27, 2008

How is my idea different from that of my peers?

In "scientific" environmental forums such as Realclimate, the subject matter is scientific, but the debates are entirely political for all intensive purposes. On the main line you have the "alarmists" at one extreme, and the "denialists" on the other. A rough indicator of where you might be on the line is by answering the question "How certain are you that Global Warming is caused by humans?". This might not seem like "the" crucial point in the argument, but that question is perhaps the only one that science can definitively speak about, and it is the main point that proponents of not taking action are getting traction on (by casting doubt on the objectivity or accuracy of that science). Why is it not plausible to be 100% convinced that global warming is caused by humans and be as convinced that we should not spend a single cent on reducing greenhouse gases - or alternatively, to believe that there is grave doubts that there is any dangerous warming due to greenhouse gases, but to still believe that we should vigorously pursue alternative energies, cap and trade restrictions AND new restrictive regulations. The key for one-on-one discussions on the issue is to know where the line is, even using it as a reference to more clearly define how your idea is different from that of your peers which are near a point on the line. It may not be strictly any more "scientific" than ad hoc discussions, but it brings a NEW and OBJECTIVE philosophy to the table.





3) Ideas as a volume in idea space - Rather than as points on a line, Marconomics views ideas more accurately as multidimensional volumes in idea space. Whether it be a concept of God as an idea, or views on the environment, or a description of a behavioural disorder, ideas need the freedom of movement in several dimensions to better grasp reality.
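A rough sketch of the difference (the dimensions, ranges and numbers below are purely my own illustration): a position on the line is a single number, whereas a volume in idea space keeps several dimensions separate, each occupied as a range, so that combinations like "certain the warming is human-caused, but opposed to urgent emission cuts" remain representable instead of being squeezed onto one axis.

```python
# Toy encoding only: dimensions and numbers are invented for illustration.

# A point on the line collapses an idea to one number:
position_on_line = 0.8  # e.g. "80% of the way towards the alarmist end"

# A volume in idea space keeps dimensions separate, each held as a (low, high) range:
idea = {
    "certainty warming is human-caused": (0.9, 1.0),
    "support for urgent emission cuts":  (0.0, 0.3),
    "support for alternative energy":    (0.7, 1.0),
}

def overlaps(idea_a, idea_b):
    """Two ideas are compatible where their ranges intersect on every shared dimension."""
    shared = idea_a.keys() & idea_b.keys()
    return all(max(idea_a[d][0], idea_b[d][0]) <= min(idea_a[d][1], idea_b[d][1])
               for d in shared)

peer = {"certainty warming is human-caused": (0.8, 1.0),
        "support for urgent emission cuts":  (0.8, 1.0)}
print(overlaps(idea, peer))  # False: same certainty, very different policy volume
```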

Wednesday, October 15, 2008

Political/pseudo-scientific debate

Debates tend to place people with different, often opposing, views in turn making reasoned-sounding arguments to convince each other and/or the audience of their point of view. Since there is a large spectrum of viewpoints, all good debaters take advantage of correlated viewpoints. In debates, these are far more important and effective than raw facts and logic. For any topic (say global warming), one starts with an indisputable fact (greenhouse gases can cause global warming) and attaches it in a logical-sounding sequence to the desired conclusion (we need to act to reduce greenhouse gases urgently).

Visually, this process is modelled by placing everybody on a metaphorical line, people even being forced by the logic of the debate to take a position on the line. The peer group of people near you on the line tend to have highly correlated views and can be targeted by the logic of the debater specifically, to push you in one direction or another. Richard Dawkins does this very effectively in The God Delusion by asking the question "What probability do you give of there being an omnipotent God which created the universe?" Thus the target audience (those who are unsure in any way) become captive to the argument.

Correlation of the viewpoints of experts is often confused with logical corollary. For example, most climate experts who believe in the certainty of global warming being anthropogenic also believe that targets of carbon emission cuts of 20% to 50% within a couple of generations are necessary to save the planet. Many in the global warming debate actually pass off the argument ("if global warming is caused by humans, large emission cuts are necessary to save the planet") as scientific logic.

To generalise, just because in a scientific peer group people who believe A also believe B does not mean if A is true then B must be true.

I, for one, don't think that open debates advance either the scientific process or philosophy in general. Therefore, for objectively reasoned arguments, logic based on correlated views must be avoided:

2) Perception is NOT reality. "Political" or policy arguments in which large numbers of individuals have a perceived interest are correctly placed on a continuum LINE. This is because people's views tend to be highly correlated (with those of their peers on the same part of the line). This is the only practical way to come to conclusions when there are a large number of individuals in the argument. Marconomics states that it is both possible and likely that no conceptual point on the line has the right combination of facts or logic. Correlation in viewpoint is not the same as a logical statement. For instance, if there are two logical statements A and B - even if everybody within a peer group believes both A and B, this in no way proves (and it should not even be assumed) the "if A then B" logical statement in a scientific context (even if those in the peer group are scientists).
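A toy simulation of that last point (entirely my own construction, with invented probabilities): within a peer group, belief in B can be almost perfectly predictable from belief in A, while the truth of "if A then B" about the world remains a completely separate question.

```python
# Hedged illustration: correlation of beliefs within a peer group vs. the
# logical implication "if A then B" about the world.
import random

random.seed(0)

peer_group = []
for _ in range(1000):
    believes_a = random.random() < 0.9
    # Belief in B tracks belief in A through peer correlation, not through logic.
    believes_b = believes_a if random.random() < 0.95 else not believes_a
    peer_group.append((believes_a, believes_b))

both = sum(1 for a, b in peer_group if a and b)
accept_a = sum(1 for a, b in peer_group if a)
print(f"P(believes B | believes A) within the group ~ {both / accept_a:.2f}")  # ~0.95

# The implication itself is a separate question about the world: a world in which
# A is true and B is false is perfectly consistent with the belief statistics above.
A_true, B_true = True, False
print("'if A then B' holds in this stipulated world:", (not A_true) or B_true)  # False
```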

Monday, October 13, 2008

Axiomatic Reasoning

Being a mathematically minded person from a young age, I have had a great respect for formal mathematical systems such as Euclidean geometry, and the name of this blog book is in total respect and regard for Principia Mathematica, the great work on the foundations of mathematics. Similarly, I wanted to partly formalise my ideas on objective reasoning in this blog book. In reasoning, just as in mathematics, for a proof to have meaningful completeness one must start with agreed-upon axioms and demonstrate that they lead directly to the conclusion; or, alternatively, presume the opposite to be true and come up with a contradiction. Note that one can still be "wrong" if the axioms don't quite reflect reality.
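As a minimal schematic illustration of the two routes (a toy example of my own, not anything from Principia Mathematica):

```latex
\begin{align*}
&\text{Axiom 1: } \forall x\,\bigl(P(x) \rightarrow Q(x)\bigr) \qquad \text{Axiom 2: } P(a)\\
&\text{Direct proof: } Q(a) \text{ follows from Axioms 1 and 2 by modus ponens.}\\
&\text{By contradiction: assume } \neg Q(a); \text{ with Axiom 1 this forces } \neg P(a),\\
&\qquad \text{contradicting Axiom 2, so } Q(a) \text{ must hold.}
\end{align*}
```

If Axiom 2 doesn't quite reflect reality, both routes remain valid as proofs, and yet the conclusion can still be "wrong" - which is exactly the caveat above.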

I often accept others' axioms *for the benefit of the argument*. I don't want differences in conclusions to be a direct result of differences in starting points. The starting points can be argued about at a separate time. Logical Positivism denies that any *assumptions without proof* are required at all, and holds that watertight *proofs* are still possible regardless. Without consistency of logic, proofs are not watertight, and science either implicitly or explicitly requires axioms, like it or not. Thus, anyone who rejects metaphysics and theology and thinks axioms are not required, I would label a logical positivist whose implicit axioms are the rejection of metaphysics and theology as false. Having implicit assumptions, but explicitly denying that they are assumptions, I believe to be a form of cheating when it comes to philosophy. This is why I have put axiomatic reasoning as my first principle:

1) Marconomics rejects Logical Positivism as a basis for philosophical argument, and regards an appeal to the concepts that form its basis as cheating. Logical Positivism defines reality in terms of observations, yet ignores its own problems of self-consistency.

Monday, October 6, 2008

Marconomics Definition

Marconomics is an inter-disciplinary objective study of anything and everything that is of interest to the human race, as observed by Marco Parigi.


It is necessarily broad and is founded on the primacy of axiomatic reasoning, discussion and argument.

See: Principles

Sunday, September 21, 2008

Technology is not a ladder, it is a bush

From principle 4 - Complex artefacts of any description or purpose always have precedents that are complex. A lineage must always be assumed to exist back to a less complex precedent, with either a trial-and-error pathway for any additional complexity, or an added component that has extensive precedent in itself.

Now this is part of the main argument against "ex nihilo" creation of the Earth's species - the Earth shows historical evidence of a progression of species, and no evidence of species occurring without similar species preceding them. The genetic record of species also concurs.

The historical human record also shows that for every highly technological product humans have "created", there have been remarkably similar products preceding it. Just as in paleontology, a lot of the "in-between" record is inaccessible (e.g. the extensive beta-testing, simulations etc. done by private companies), such that products can seem more different from their predecessors than simple trial and error and/or combination with other technologies would indicate.

Humans can no more dictate what artefacts will survive and reproduce in the artificial artefact environment than we can genetically engineer animals to live better in their environment. Thus technology has no preferred direction, just like evolution. We often naively believe that technology only goes up, but technologies rather become survivors in the artificial human environment, and we can't really know what the next big thing will be. We just know that successful stuff gets duplicated ad nauseam for a while, and the rest just dies out quietly, even if it had potential.

Saturday, September 20, 2008

Self-Adjusting Systems

Most discussions regarding democracy, liberal economies, etc. revolve around the freedoms and benefits to the individual compared with those under dictatorships and strictly controlled economies. Discussions on Darwinian evolution dwell a lot on the selfishness of the individual progressing the genetic quality of the species. Marconomics is more interested in the self-adjusting nature of these systems.

Very successful non-democracies have existed with little problem for the individual citizen - there are even advantages, as dictators can make unpopular decisions that benefit the country. However, succession is almost always a problem, and bad dictatorships don't have an automatic way of being reviewed and fixed.

Command and control economies, similarly, are capable of achieving specific goals immediately and without fuss, but every goal (e.g. lower fuel prices) has unpredictable knock-on effects down the line, and in general these economies are unstable. Liberal economies are self-adjusting in this regard, but one has to put aside the capability of dictating that certain goals be met.

Similarly, genetic engineering may well produce genes and species that achieve specific goals better than anything in nature. However, these genes and species will fail spectacularly in nature, because the self-adjusting systems of evolution are put aside: the survival of the genetically engineered is guaranteed for the purposes of the design, while survival under the stress of competition etc. is the only thing that matters in nature.

Thursday, August 28, 2008

Legal Definitions vs quantitative definition

I will discuss two examples: the definition of privatisation and the definition of democracy.
Privatisation vs nationalisation of utilities etc., which I had discussed here, is a case in point: the legal definition gives a yes/no answer as to whether the legal owner of the utility is the government or a private entity. When analysing the effects of a utility being private or not, that legal definition is a big hindrance, because something can be privatised but have publicly controlled prices and be heavily subsidised by the government (or conversely be public, but so reliant on private consultants and subcontractors, and so busy competing with other utilities, that it is essentially in private hands). There is nothing marconomically relevant in who owns something, but the score on the continuum is very relevant. What needs to happen is for a public/private score to be placed (even very roughly) on utilities etc. before jumping to conclusions as to whether policy should change to move the score along one way or the other.
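A back-of-the-envelope sketch of what such a score could look like (the criteria and weights below are entirely my own invention, purely to make the continuum concrete):

```python
# Hypothetical public/private continuum score for a utility (invented weights).

def private_score(utility):
    """0.0 = fully public in practice, 1.0 = fully private in practice."""
    criteria = {
        "legally_private":       0.2,   # who holds the title deeds
        "prices_set_by_market":  0.3,   # vs. publicly controlled prices
        "unsubsidised":          0.3,   # vs. heavily subsidised by government
        "operations_contracted": 0.2,   # reliance on private consultants/subcontractors
    }
    return sum(weight * utility.get(name, 0.0) for name, weight in criteria.items())

# A "privatised" utility with controlled prices and heavy subsidies scores much
# closer to the public end of the continuum than its legal status suggests.
print(private_score({"legally_private": 1.0, "prices_set_by_market": 0.1,
                     "unsubsidised": 0.2, "operations_contracted": 0.5}))  # 0.39
```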

Democracy has a similar problem. There are clearly cases where there can be too much democracy (proportional democracy often doesn't let new or unpopular ideas take their turn) or too little democracy (leaders can keep themselves in power indefinitely). A definition of democracy that indicates that the Soviet Union was democratic and apartheid South Africa was not may be of some use in the moral sense, but is of no practical marconomic use. The democracy score I would give would (even roughly) judge how responsive government policy is to popular opinion. I would give Italy a (too) high marconomic score for democracy, as popular opinion tends to stop almost every attempt to update taxation policy etc. Freedom House has a democracy score that, with a little tweaking, could give a quantitative answer to what an ideal amount of democracy is. Even decades ago, I would have given a very low democratic score to Zimbabwe, as there were few checks and balances to ensure polls accurately reflected the people's wishes.

Saturday, August 23, 2008

Nash Equilibria in geopolitics

From principle 6, most geopolitical issues should be analysed via game theory, since there are few enough units to contend with and most issues can be modelled via various strategic games. Whereas most game theory proponents have strict guidelines to test whether a Nash Equilibrium is possible, Marconomics starts with geopolitical situations which have been at a stalemate for an extended period of time, and surmises that this can only be due to a Nash Equilibrium. If any player were using sub-optimal strategies, they would have "lost" by now. Equally, if the situation were out of equilibrium, the stalemate would break until a new equilibrium was found down the track.

As an example, I have modelled the Israel-Palestine Conflict to give an idea as to why it is so intractable in the long run.

The Cold War is another clear-cut case of a Nash Equilibrium. I would argue that the stalemate broke down only because finances eventually prevented Russia from following its optimal strategies. The New World Order (NWO) since the fall of the Berlin Wall and the breakup of the USSR has been demonstrably not in equilibrium. Because "the rules" (which in the case of the Cold War meant the structure of the UN) have remained unchanged, a new equilibrium (which invokes the threat of Mutually Assured Destruction as the only ultimate check on misbehaviour by one or another of the veto-wielding members) is entirely possible. Thus, as an example, if Russia threatens Poland with a nuclear strike, the main deterrent is the possibility of a disproportionate response by the US, etc.
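To show what "surmising a Nash Equilibrium from a stalemate" amounts to, here is a minimal sketch with invented payoffs (not an actual model of the Cold War or of any current conflict): a strategy profile is an equilibrium exactly when neither side can improve its own payoff by a unilateral change, which is why a costly stalemate can persist indefinitely.

```python
# Hedged sketch of the "stalemate as Nash Equilibrium" reasoning.
# The payoff numbers below are invented purely for illustration.

strategies = ["hold position", "concede"]

# payoffs[(side1_strategy, side2_strategy)] = (side1_payoff, side2_payoff)
payoffs = {
    ("hold position", "hold position"): (2, 2),   # costly stalemate
    ("hold position", "concede"):       (5, 0),
    ("concede",       "hold position"): (0, 5),
    ("concede",       "concede"):       (3, 3),
}

def is_nash_equilibrium(s1, s2):
    """True if neither side can improve its own payoff by deviating alone."""
    p1, p2 = payoffs[(s1, s2)]
    no_gain_1 = all(payoffs[(alt, s2)][0] <= p1 for alt in strategies)
    no_gain_2 = all(payoffs[(s1, alt)][1] <= p2 for alt in strategies)
    return no_gain_1 and no_gain_2

print(is_nash_equilibrium("hold position", "hold position"))  # True: the stalemate persists
print(is_nash_equilibrium("concede", "concede"))              # False: either side gains by holding
```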

Monday, July 28, 2008

Privileged Knowledge

A general principle of philosophers regarding "knowledge" is that experimentation is the only means to knowledge. In an ideal open scientific world, the results of experimentation, after scrutiny by experts in the field, get disseminated as knowledge in texts etc. However, what if you want to know the financial structure of criminal gangs? The book Freakonomics touches on such subjects, where unusual circumstances allow information that is normally out of bounds to scientific scrutiny to be studied - often revealing useful knowledge. As a marconomic principle, "privileged knowledge" is the range of knowledge that can be *known* by someone and is based on at least rudimentary experiment and observation, but which, due to issues of dissemination, will never be freely available. Everyone, with few exceptions, has access to knowledge of this kind. Frustration may be felt because, if dissemination is attempted, it will either not be believed due to lack of proof, or the proof of it will result in legal issues, financial loss, social outcasting, or at least the fear of some combination of these. This concept of privileged knowledge is not the same as the more general concept of "privacy". Although every conceivable piece of privileged knowledge could have a future in which it becomes publicly accessible, information science excludes the possibility of any human knowing every relevant piece of information about another human. Thus an imbalance of information is impossible to avoid - there will always be privileged information available to be exploited, regardless of how complete the erosion of privacy is.

As a random example, place yourself in the boots of someone performing terminations in the 1960s. Because they were illegal, terminations on the black market were very pricey, but available. Such a person would have had detailed knowledge of the demographic of those having terminations in their own country, as well as in neighbouring countries where it was legal. I would suspect that the great majority of those having terminations where it was illegal would have been high income earners having them for convenience reasons; those in desperate social or economic situations could not afford them. The unborn-baby-killers could know for themselves whether this would make the country's demographic direction unsustainable, but policymakers would have absolutely no access to this knowledge, nor to how much they could do about it. Policy-makers and voters alike often run with blind sentiment, because the information that is out there has a natural resistance to dissemination.

Abortion
As an aside, the trick for policy-makers who would like both to make terminations rarer and to move along a moral path towards a stance where the law can equate them with murder is multi-faceted. One key is to encourage voluntary registration of pregnancies, linked to earlier family payments (and/or tax benefits): as far as I know, this has not been tried in any country yet. Another key is to note that extensive early family-planning education and universal health benefits are policy aspects of the countries where terminations are rarer. Knowledge about the cause and effect of policy changes ought to be a science in these kinds of issues, but it depends a great deal on privileged knowledge. Governments have improved immunisation rates by linking family benefits to immunisation - there is no reason to believe that similar incentives would not break the democratic resistance to pregnancy registrations. Pregnancy registrations are an absolute pre-requisite for considering the unborn anywhere near the way we consider babies.

Tuesday, March 11, 2008

Exogenesis

Marconomics takes the view that exogenesis *must* be the default assumption of evolutionary science. The analogy of an island is useful to me as to why this should be so. We do not consider the bacteria, insects, plants etc. on that island to have evolved separately from the primordial ooze. Our base assumption is that living creatures from elsewhere somehow landed on the island and, through further evolution, populated vacant niches as best they could. Further encroachments from other areas may have upset the balance, but allowed for more competition, genetic diversity, and more effective filling of all the niches. Geogenesis requires a lot to happen in a relatively short time frame in order for life to arise, and the question doesn't just rest on whether this is possible or not. It is possible that land-dwelling higher creatures on an island evolved from bacteria residing on that island, but colonisation from elsewhere happens in such a short time that the possibility never gets a chance: the spread of already existing creatures is so much easier and quicker, and it matches the observed timing.

The more obviously appropriate environment for life to have evolved is in a cluster of stars and planets in the early stages of the Galaxy - say a region of space with hundreds of stars of different sizes and an average distance between stars of approximately 100 Astronomical Units. Amongst and orbiting these would be tens of thousands of planets, with frequent collisions sending material from one planet into space to land on another planet within 100,000 years. This is a perfect environment for life both to start in a variety of different guises, and for those guises to compete and to spread to other planets in the cluster. Types of life that could survive inter-planetary migration of this kind would have distinct advantages, and the only kind of life you would end up with amongst this cluster would be of a type compatible with inter-planetary migration and adaptation to all sorts of environments within that cluster. That life would tend not to lose the ability to spread itself, no matter how much evolution happened in the meantime, much as life on Earth never loses the ability to spread itself to proximal environments. For the solar system, the obvious proximal source of DNA is the star systems which exploded, forming the dust and gases from which the solar system was formed. The difficulties of inter-system spread of DNA are dwarfed by the difficulties of it arising quickly on any one particular planet.

Thursday, February 21, 2008

Evolution - The scientific method

I believe strongly in the development of evolutionary science, and I have new hypotheses that ought to be tested that don't contradict any experiments done, nor do they contradict the overall modern evolutionary synthesis. However, these hypotheses, if proven, may open up some arguments thought long ago settled.

The role of random mutations is well researched. These are well observed and are thought to be the basic unit of truly new, potentially beneficial adaptations.

DNA repair is a process whereby damage to the DNA is fixed. This process is imperfect, however, and failure to repair damage is a variable source of mutations. The process of DNA repair is blind to the function of the gene it is repairing. Thus, the resulting mutations tend to be in random places, and the build-up of mutations on sections of DNA that don't change the function of the gene happens at a well-defined rate. Thus, a process of feedback of information about whether the gene is functioning properly is the only way to account for genetic drift NOT occurring on important functional elements of the gene. Natural selection acting on whole organisms carrying that gene is the only such feedback commonly agreed on by evolutionary biologists. However, my view is that with a hierarchy of genes (i.e. genes that control a bank of genes, which each control a set of basic genes that relate to a phenotype), there needs to be a hierarchy of feedback to ensure that the lower-level genes are functioning. That is, there needs to be a form of selection within an organism such that each basic gene can be selected or rejected based on its function, independently of all other genes. The corollary is that lower-level genes that are suppressed in some way by some higher-level genetic action can have their function ensured by the same process of feedback.
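A toy simulation of the contrast described above (my own illustration; the numbers, and the choice to model feedback simply as rejection of copies with broken functional sites, are assumptions): neutral sites drift at the background mutation rate, while sites subject to any functional feedback stay conserved, whatever level that feedback acts at.

```python
# Toy model: random mutations plus functional feedback conserve some sites
# while the rest of the gene drifts.
import random

random.seed(2)

GENE_LENGTH = 100
FUNCTIONAL_SITES = set(range(0, 30))   # sites where any change breaks function
MUTATION_RATE = 0.001                  # per site, per generation
GENERATIONS = 5000

reference = [0] * GENE_LENGTH          # 0 = ancestral state, 1 = mutated
gene = list(reference)

for _ in range(GENERATIONS):
    candidate = [(1 - site) if random.random() < MUTATION_RATE else site
                 for site in gene]
    # Feedback: a copy with a broken functional site is selected against.
    # (Whether that feedback acts at the level of the organism, the cell line,
    # or the sperm is exactly the open question in this post.)
    if all(candidate[i] == reference[i] for i in FUNCTIONAL_SITES):
        gene = candidate

functional_changed = sum(gene[i] != reference[i] for i in FUNCTIONAL_SITES)
neutral_changed = sum(gene[i] != reference[i] for i in range(30, GENE_LENGTH))
print(functional_changed, neutral_changed)  # 0 vs. a drifted count on neutral sites
```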

There are a number of mechanisms that could be at play to ensure the functional integrity of genes that are not natural selection between whole organisms. One of the proposed mechanisms, which is quite likely to be involved in some way, is the selective properties of sperm. In this mechanism, sperm act as selective proxies for the organism, but specifically for genes lower down in the genetic hierarchy. Thus, every lower-level gene affects the selective properties of the sperm, and the sperm that gets to reproduce has fully functioning lower-level genetics.

In this sense, this hypothesis is not concerned with the genetic variability due to the shuffling of phenotypes that follows the laws of Mendelian Inheritance, but only with the functionality of the individual allele itself. This hypothesis takes it as a given that the spread of Mendelian traits is the primary source of variability in phenotypes that are subject to selection in a standard Darwinian way. This hypothesis is concerned with:
1) How "improved" versions of alleles arise.
2) How stress triggers greater mutations.
3) How latent phenotypes not visible in a species can become common again.
4) How many "truly new" genes are involved in speciation, and how many genes are latent ones that are re-activated (or de-activated), or inserted via horizontal gene transfer, or are just a previously unobserved combination of Mendelian and non-Mendelian traits.


This hypothesis takes the view that the Weismann barrier and the Central dogma of molecular biology do have some exceptions. The Weismann barrier is proposed to exist to cancel out the "noise" of individual successes and failures of mutations, so that only the exceptions, the important feedback that is genetically significant, can adjust the gene accordingly.