Friday, February 22, 2008

Robust Design and Biology

Friday, the day for lunch with theories.

Today we had another speaker from MIT. Unlike the one last week, Gerald Jay Sussman gave a very good presentation.

Gerald is an expert in computer science. His talk explained how people make artificial systems robust, and how that differs from biological systems.

"Robust" is a hot term in systems biology. Gerald started from its definition in computer science - when you are building a system, "robustness" means you won't get any unpredictable result; thus you have a robust system.

To make a system robust, programmers usually discard a lot of information that could be valuable. Sometimes this leads to a totally wrong result. Take floating-point arithmetic as an example.

Ask a floating-point processor to average 0.998 and 0.996.

If the processor can only keep 3 significant digits, it works like this:

0.998 -> 998 x 10^(-3) ==> 3 digits

0.996 -> 996 x 10^(-3) ==> 3 digits

0.998 + 0.996 = 1.994 = 1994 x 10^(-3) ==> 4 digits,

rounded to 1.99 = 199 x 10^(-2) ==> 3 digits

1.99 / 2 = 0.995 ==> 3 digits

Now you get the average 0.995, which is not even between 0.996 and 0.998.
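
To see this concretely, here is a minimal Python sketch of the same calculation (my own illustration, not something shown in the talk), using Python's decimal module to force every intermediate result down to 3 significant digits:

    from decimal import Decimal, getcontext

    getcontext().prec = 3  # keep only 3 significant digits, as in the example

    a = Decimal("0.998")
    b = Decimal("0.996")

    s = a + b     # true sum is 1.994, but it is rounded to 1.99
    avg = s / 2   # 1.99 / 2 = 0.995

    print(avg)    # 0.995 -- below both inputs, so not between them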


This seemed to be a classic joke in computer science, because a small group of people in the audience always responds to this kind of joke immediately, after just the first few words.

What surprised me most is that Gerald really knows biology. Amazing. He thinks artificial systems still achieve their robustness, but in a very different way from biological systems: by removing anything they cannot handle or explain. Thus the user at the end feels fine, without knowing anything inside the box.

I feel it is like the old modeling question: you need several assumptions to eliminate the factors that could bring you trouble, but by doing that you risk moving one step further away from the real thing.

--

Robust design and biology
22 February 2008

Gerald Jay Sussman
Computer Science and Artificial Intelligence Laboratory
MIT

Abstract
It is hard to build robust systems: systems that have acceptable behavior over a larger class of situations than was anticipated by their designers. The most robust systems are evolvable: they can be easily adapted to new situations with only minor modification. How can we design systems that are flexible in this way?

Observations of biological systems tell us a great deal about how to make robust and evolvable systems. Techniques originally developed in support of symbolic Artificial Intelligence can be viewed as ways of enhancing robustness and evolvability in programs and other engineered systems. By contrast, the common practice of computer science actively discourages the construction of robust systems.

Robust designs are built on an additive infrastructure: there are exposed interfaces for attaching new functionality without serious disruption of preexisting mechanisms. Indeed, the ability to harmlessly duplicate a mechanism and then modify the copy to supply useful new functionality is one of the principal ploys appearing in natural evolution. What are the preconditions that support such augmentation? Can we engineers arrange our systems to be extensible in this way? Are there exploitable analogies between the techniques that we have created to make extensible artifacts and the mechanisms that we find in biological systems?

I will try to address these and related issues.
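
The "additive infrastructure" idea above can be made concrete with a small sketch. The following Python example is my own hypothetical illustration, not code from the talk: handlers are attached through an exposed registration interface, and a new behavior is built by duplicating an existing handler and modifying the copy, leaving the original mechanism untouched:

    # A registry that exposes an interface for attaching new functionality
    # without editing preexisting code.
    HANDLERS = {}

    def register(name):
        def wrap(fn):
            HANDLERS[name] = fn
            return fn
        return wrap

    @register("greet")
    def greet(who):
        return "hello, " + who

    # "Duplicate a mechanism and modify the copy": the new handler reuses
    # the old one; nothing already registered is disturbed.
    @register("shout")
    def shout(who):
        return greet(who).upper() + "!"

    print(HANDLERS["greet"]("world"))  # hello, world
    print(HANDLERS["shout"]("world"))  # HELLO, WORLD!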
