thinking

thinking := [life, games, movies, philosophy, math, coding, pizza, &c.]

Tuesday, March 24, 2009

experimental integrity and the search for causality

The phrase "the scientific method" implies that there is some universal, automated process that investigators blindly follow in order to do science. In truth, a great deal of improvisation and creativity is required to do good science. Great leaps forward, such as general relativity or the complex-analytic (as in complex numbers) proof of the prime number theorem, often rely on bold, inspired insights into the nature of an unsolved problem.

However, there are a few common principles that unite the rational attitudes of modern research. I want to highlight a few that I feel are somewhat neglected. They are:
  • experimental candor,
  • easily reproducible experiments, and
  • induced correlation.

Experimental candor

Here's a nice way to get great results: suppose you think that drug A will help people lose weight. Conduct a thousand studies on small groups of test subjects. Suppose one of those studies shows good results: publish that one, and throw away the rest.

This may sound a bit unrealistic, but something like this can happen much more easily in computer science. There is a growing class of algorithms that are both probabilistic and approximate - very similar to experimental drugs in medicine: if they do pretty well most of the time, that's good enough. Yet with an algorithm, it's incredibly easy to run a million trials of your code and publish only the best subset. Even if the quality of your results is completely random, it's just a matter of time before some small subset of the test results looks good.
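
To make the point concrete, here's a tiny simulation - a minimal sketch in Python, with made-up numbers and hypothetical names - of what that kind of cherry-picking looks like. The "quality" of every trial is pure noise, yet the best handful of trials still looks like a strong positive result.

    # Sketch: run many trials of a "result quality" metric that is pure noise,
    # then report only the best few. The cherry-picked subset looks impressive
    # even though nothing real is being measured.

    import random

    random.seed(0)

    num_trials = 1_000_000
    trials = [random.gauss(0.0, 1.0) for _ in range(num_trials)]  # pure noise

    best = sorted(trials, reverse=True)[:10]      # the "published" subset
    overall_mean = sum(trials) / num_trials

    print(f"mean over all trials:  {overall_mean:+.3f}")           # hovers near zero
    print(f"mean of the 10 best:   {sum(best) / len(best):+.3f}")  # far above zero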

Hence the need for experimental candor. It's important to reveal all the relevant experiments performed, including the negative or inconclusive ones. The web is the perfect platform for this kind of data disclosure - you can pre-publish your intended experiments and hypotheses before you actually run the experiments. This way, good results look better, and other researchers won't waste time on previously failed experiments. Of course, it's always possible that an experiment failed for unaccounted-for parameters (including human error), which is why experimental reproducibility is also crucial to good research.

Easily reproducible experiments

This scientific tenet is widely agreed upon but poorly executed. In practice, I know of very few experiments that can be easily reproduced at the research level. In some cases, one may wish to build upon the work of another, such as by augmenting a biochemical procedure with a new step. Articles involving experimental lab work do indeed contain careful procedural explanations meant just for this purpose, which is great. But in many cases even this is not enough for other researchers - in my days as a grad student, I would see fellow grad students emailing or calling other investigators (often ones considered serious competitors) to ask for critical clarifications of procedure.

We can do better than that.

I'm going to pick on computer scientists for a moment, because they're the worst offenders. An algorithmic experiment has the most potential to be easily reproducible. Ironically, many papers leave out the very parameters needed to reproduce their experiments. To reproduce a certain graph of time complexity versus input size on a certain real-world dataset, for example, a reader will often have to code up the algorithm from vague pseudocode and hand-wavy explanations, guess at parameter values, and separately track down the dataset. I've even seen results that relied on code which was nowhere available, as either pseudocode or an executable - the reference given was a personal communication with another researcher (who won't answer my emails).

There is no excuse for this. Any good algorithmic experiment can be made reproducible at the click of a button. The experimenters have already written the code - it is simply a matter of adding a link to that code on a website. It would be friendly to add a little documentation, or better yet, to follow a standard pattern of operation for the field, in much the same way that some software installation procedures have become standardized.
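
Here's a rough sketch of what I mean by click-of-a-button reproducibility: a single script that fixes the random seed, lists every parameter in one place, and regenerates the raw timings behind a time-versus-input-size graph. The algorithm and parameter names below are placeholders I made up, not anything taken from a real paper.

    # Sketch of a one-command reproducible experiment: the seed is fixed, every
    # parameter lives in one dictionary, and the measurements behind a
    # "time vs. input size" graph are printed for anyone to re-run and compare.

    import random
    import time

    PARAMS = {
        "seed": 42,
        "input_sizes": [1_000, 10_000, 100_000],
        "num_repeats": 3,
    }

    def algorithm_under_test(data):
        # stand-in for whatever algorithm the paper is actually about
        return sorted(data)

    def run_experiment(params):
        random.seed(params["seed"])
        results = []
        for n in params["input_sizes"]:
            data = [random.random() for _ in range(n)]
            timings = []
            for _ in range(params["num_repeats"]):
                start = time.perf_counter()
                algorithm_under_test(data)
                timings.append(time.perf_counter() - start)
            results.append((n, min(timings)))
        return results

    if __name__ == "__main__":
        print("parameters:", PARAMS)
        for n, seconds in run_experiment(PARAMS):
            print(f"n = {n:>7}  time = {seconds:.6f} s")

Even this much - a fixed seed, the parameters written down in one place, and a deterministic entry point - would let a reader regenerate the numbers without emailing anyone.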

Induced correlation

This point is a call for the conscious recognition of an idea that's been implicitly used for some time.

Certain experiments have the goal of looking for something like a causal relationship. If a drug company is testing a weight-loss drug, they want to know that their drug causes the weight loss, as opposed to it causing something else, or something else causing the weight loss.

Unfortunately, there's no foolproof way to experimentally test causality. This is a well-known problem. It's also interesting to note that, philosophically, causality itself is subjective in nature, although that is a matter for another post.

Here's the trouble. Let's hypothesize that chemical X causes weight gain. As an experiment, we get a large group of people together: we randomly select some folks as the control - they won't change their diets - and randomly select some others to change their diet so that they no longer consume chemical X. We see the desired results: the control group gains a little weight on average, while the experimental group (no chemical X) actually loses some.

Does that mean anyone can prevent weight gain by avoiding chemical X? Absolutely not. Here is one possible explanation: suppose that the vast majority of foods contain chemicals X and Y either together or not at all. So when the experimental group avoided X, they were also avoiding Y without knowing it. Now you unleash your study on the world, and everyone starts avoiding X. But there are some foods with chemical Y in them, without X. It could happen that those foods become more popular, or that certain people subconsciously crave Y. In either case, we have people consuming Y, not X, and gaining weight.
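
If you like, here's a toy simulation of that story, with completely made-up numbers: in this little model, weight change depends only on chemical Y, but during the trial X and Y always travel together, so "avoid X" appears to work - right up until people start eating the Y-only foods.

    # Sketch of the X/Y confound: weight change here depends only on chemical Y,
    # but in the trial X and Y always appear together, so avoiding X looks
    # effective even though X is doing nothing.

    import random

    random.seed(1)

    def weight_change(eats_y):
        # hypothetical model: only Y matters
        return random.gauss(1.0 if eats_y else -0.5, 0.5)

    # During the trial: the control keeps eating foods with X and Y; the
    # experimental group avoids X, and since X and Y travel together in this
    # food supply, they unknowingly avoid Y too.
    control      = [weight_change(eats_y=True)  for _ in range(1000)]
    experimental = [weight_change(eats_y=False) for _ in range(1000)]

    # After the study: some people avoid X but switch to foods containing only Y.
    avoids_x_eats_y = [weight_change(eats_y=True) for _ in range(1000)]

    def mean(xs):
        return sum(xs) / len(xs)

    print(f"control (X and Y):        {mean(control):+.2f}")          # gains weight
    print(f"trial group (no X, no Y): {mean(experimental):+.2f}")     # loses weight
    print(f"avoids X, still eats Y:   {mean(avoids_x_eats_y):+.2f}")  # gains anyway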

Is there anything we can do to experimentally show something stronger than mere correlation? A little bit, yes - we can show induced correlation. This is a correlation between parameters observed specifically by turning the cause on or off in each trial, while purposefully leaving all other known parameters the same. Let's use the term natural correlation for experiments where the cause was present or absent without any control by the experimenters. Induced correlation gives more evidence of causality than natural correlation, since there is more evidence that we can control the effect by controlling the cause.
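
Here's one more toy sketch of the distinction, under the assumption that a hidden factor drives both the apparent cause and the effect. Watching the cause as it naturally occurs shows a perfect correlation; flipping the cause on and off ourselves, while the hidden factor stays whatever it was, shows none.

    # Toy contrast between natural and induced correlation. A hidden factor
    # determines both the "cause" and the "effect"; the cause itself does nothing.

    import random

    random.seed(2)

    def one_observation(cause_override=None):
        hidden = random.random() < 0.5                 # unobserved common factor
        cause = hidden if cause_override is None else cause_override
        effect = hidden                                # effect tracks the hidden factor
        return cause, effect

    def agreement_rate(samples):
        # fraction of samples where cause and effect match (a crude stand-in
        # for correlation between two true/false parameters)
        return sum(c == e for c, e in samples) / len(samples)

    natural = [one_observation() for _ in range(10_000)]
    induced = [one_observation(cause_override=random.random() < 0.5)
               for _ in range(10_000)]

    print(f"natural correlation: {agreement_rate(natural):.2f}")  # ~1.00
    print(f"induced correlation: {agreement_rate(induced):.2f}")  # ~0.50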

I think this general idea has been understood already, but I'm not sure that it has been explicitly recognized. My goal throughout this post has been to encourage the codification and emulation of a few good core principles of scientific investigation. There are definitely more key principles, although I've been reminded many times that at least these three could use a little more awareness and observation.

Friday, March 06, 2009

thoughts on junk DNA

It's interesting to think of DNA as the source code for life. A lot of ideas fall into place nicely with this analogy.

You need some sort of compiler or interpreter; this role is given to RNA. You need a basic set of atomic instructions, and something like labels to certain parts of the code base - pointers into memory. Codons are the instruction set, with start codons helping to act as labels. A central processing unit executes the commands - ribosomes turn the codon sequences into proteins, and the proteins interact to achieve various goals. Chemistry itself is the ultimate processor, but it takes more focused form in the complex interaction of the enzymes produced by the DNA. Some of the proteins act as inhibitors, decreasing the activity of enzymes; others are activators, doing the opposite. These constructed molecules are capable of effecting or halting the production of still other amino acid complexes. The end result is a logically sophisticated dance worthy of the millennia of evolution which produced it.
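
To push the analogy a little further, here's a toy "interpreter" sketch using only a handful of codons from the standard genetic code (the rest are left out): scan for a start codon, read three letters at a time, and halt at a stop codon. Real translation goes through RNA and ribosomes, so this is only the source-code caricature.

    # Toy sketch of "codons as an instruction set": find the start codon, then
    # read three letters at a time and look each codon up in a (very partial)
    # codon table until a stop codon appears.

    CODON_TABLE = {               # tiny subset of the standard genetic code
        "ATG": "Met",             # also serves as the start codon / "label"
        "TTT": "Phe",
        "AAA": "Lys",
        "GGC": "Gly",
        "GCA": "Ala",
        "TAA": "STOP", "TAG": "STOP", "TGA": "STOP",
    }

    def translate(dna):
        start = dna.find("ATG")               # jump to the "label"
        if start == -1:
            return []
        protein = []
        for i in range(start, len(dna) - 2, 3):
            amino = CODON_TABLE.get(dna[i:i + 3], "?")   # '?' = not in this toy table
            if amino == "STOP":
                break
            protein.append(amino)
        return protein

    print(translate("CCATGTTTGGCGCAAAATAAGG"))   # ['Met', 'Phe', 'Gly', 'Ala', 'Lys']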

As I write code on my own, in an experimental fashion, I sometimes don't worry about the readability of the code. It is in this scenario that the evolution of source code best matches that of DNA. There is a small cost to having extra/old code, yes, but it is far outweighed by the raw functionality created.

Looking at some source which has grown up just a little bit, mostly unsupervised, offers a few suggestions about bits of information that may, at first glance, appear non-functional (aka junk DNA):
  • Old functions that are never or only rarely called

    As code evolves, some functions become less useful or are replaced by newer ones. It would make sense that some codon sequences would likewise become obsolete while their encoding remains in the DNA.

  • Literal strings and other initialization data

    There might be a bit of initialization data in DNA - information not obviously functional, yet still used. For example, some DNA may be active only for a very short time when an embryo is first developing, or be triggered temporarily at certain key developmental stages. An even more interesting hypothesis is the possibility that some instincts, or primal knowledge, are somehow encoded in DNA, in a manner somewhat different from traditional protein transcription.

  • Debug code

    Debug code is useful for figuring out what part of a process has failed. Although there may not be a conscious debugger to check the output, we could still hypothesize that a little extra information about each step in a procedure could give enough information to locate and react to a failure or attack in the system. In this case, the usually non-functional code would be rarely and temporarily activated as a defensive mechanism.