The notion that DNA and proteins form some sort of computational network goes back at least to the 1970s.
In 1982, Richard Feynman proposed the idea of a 'quantum computer', a computer that exploits the effects of quantum mechanics. In 1994, Leonard Adleman demonstrated computation in a test tube based on DNA splicing mechanisms. Currently, many uses of the term "biological mathematics" refer to Adleman's splicing techniques. More recently, "intramolecular computation" has been added to this list, i.e. computation occurring within single molecules (e.g. the Amino Acid Code and the Histone Code), as well as any other techniques that biological systems may use that qualify as mathematical in nature.
Computation implies some form of mathematics. Adleman's experiment solved a small instance of the directed Hamiltonian path problem (a close relative of the traveling salesman problem), at least for a limited set of data. DNA and protein networks respond to complex logical environments, making decisions based on the absence or presence of different conditions, molecules, or organisms in the cellular environment. At the level of the brain, extremely complex mathematical processing must be occurring. Artificial intelligence techniques provide many potential models: decision theory, statistical pattern recognition, and image processing techniques, to name just a few.
The tetrahedral geometry of the carbon atom is at the center, or kernel, of biological mathematics (smart molecules). Carbon atoms readily form chains among themselves (as well as with other classes of atoms) by sharing electrons. The interactions of these neighboring covalent bonds result in switching elements similar in potential function to their man-made digital counterparts. The result is that as few as two or three atoms acting in concert have a wealth of mathematical and logical processing capabilities. (Computational Structures in Non-Coding DNA and the Histone Code)
The Theory of Predication in Aristotle's Categories
"There is a theory called the theory of categories which in a more or less developed form, with minor or major modifications, made its appearance first in a large number of Aristotelian writings and then, under the influence of these writings, came to be a standard part of traditional logic, a place it maintained with more or less success into the early part of this century, when it met the same fate as certain other parts of traditional logic.
There are lots of questions one may ask about this theory. Presumably not the most interesting question, but certainly one for which one would want to have an answer if one took an interest in the theory at all, is the following: What are categories? It turns out that this is a rather large and difficult question. And hence I want to restrict myself to the narrower and more modest question, What are categories in Aristotle?, hoping that a clarification of this question ultimately will help to clarify the more general questions. But even this narrower question turns out to be so complicated and controversial that I will be content if I can shed some light on the simple questions: What does the word "category" mean in Aristotle? What does Aristotle have in mind when he talks of "categories"?
Presumably it is generally agreed that Aristotle's doctrine of categories involves the assumption that there is some scheme of classification such that all there is, all entities, can be divided into a limited number of ultimate classes. But there is no agreement as to the basis and nature of this classification, nor is there an agreement as to how the categories themselves are related to these classes of entities. There is a general tendency among commentators to talk as if the categories just were these classes, but there is also the view that, though for each category there is a corresponding ultimate class of entities, the categories themselves are not to be identified with these classes. And there are various ways in which it could be true that the categories only correspond to, but are not identical with, these classes of entities. It might, e.g., be the case that the categories are not classes of entities but rather classes of expressions of a certain kind, expressions which we—following tradition—may call "categorematic." On this interpretation these categorematic expressions signify the various entities we classify under such headings as "substance," "quality," or "quantity." And in this case we have to ask whether the entities are classified according to a classification of the categorematic expressions by which they are signified, or whether, the other way round, the expressions are classified according to the classification of the entities they signify. Or it might be thought that the categories are classes of only some categorematic expressions, namely, those which can occur as predicate-expressions. Or it might be the case that the categories themselves are not classes at all, neither of entities nor of expressions, but rather headings or labels or predicates which collect, or apply to, either entities or expressions, i.e., the category itself, strictly speaking would be a term like "substance" or "substance word." 
Or it might be the case that categories are neither classes nor terms but concepts. All these views have had their ardent supporters." pp. 1-2
From: Michael Frede, "Categories in Aristotle", in Studies in Aristotle, edited by Dominic O'Meara. Washington: Catholic University Press, 1981, pp. 1-25.
Reprinted in: Michael Frede, Essays in Ancient Philosophy. Minneapolis: University of Minnesota Press, pp. 29-48.
In mathematics, a set can be thought of as any collection of distinct objects considered as a whole. Although this appears to be a simple idea, sets are one of the most fundamental concepts in modern mathematics. The study of the structure of possible sets, set theory, is rich and ongoing. Having only been invented at the end of the 19th century, set theory is now a ubiquitous part of mathematics education, being introduced from primary school in many countries. Set theory can be viewed as the foundation upon which nearly all of mathematics can be derived.
In its simplest meaning in mathematics and logic, an operation is an action or procedure which produces a new value from one or more input values. There are two common types of operations: unary and binary. Unary operations involve only one value, such as negation and trigonometric functions. Binary operations, on the other hand, take two values, and include addition, subtraction, multiplication, division, and exponentiation.
Operations can involve mathematical objects other than numbers. The logical values true and false can be combined using logic operations, such as and, or, and not. Vectors can be added and subtracted. Rotations can be combined using the function composition operation, performing the first rotation and then the second. Operations on sets include the binary operations union and intersection and the unary operation of complementation. Operations on functions include composition and convolution.
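The kinds of operations just listed can be sketched concretely. The following Python snippet (an illustration added here, not part of the original text) applies logic operations to truth values, binary and unary operations to sets, and the composition operation to functions; the universe set U is an assumed example.

```python
# Logic operations on truth values.
assert (True and False) == False
assert (True or False) == True
assert (not True) == False

# Binary operations on sets: union and intersection.
a = {1, 2, 3}
b = {2, 3, 4}
print(a | b)  # union -> {1, 2, 3, 4}
print(a & b)  # intersection -> {2, 3}

# Unary complementation, relative to an assumed universe U.
U = {1, 2, 3, 4, 5}
print(U - a)  # complement of a in U -> {4, 5}

# Composition of functions (two rotations could be combined the same way).
def compose(f, g):
    """Return the function x -> f(g(x))."""
    return lambda x: f(g(x))

double = lambda x: 2 * x
inc = lambda x: x + 1
print(compose(double, inc)(3))  # double(inc(3)) = 8
```

Note that composition, like the rotations described above, applies the inner function first and the outer function second.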
Operations may not be defined for every possible value. For example, in the real numbers one cannot divide by zero or take square roots of negative numbers. The values for which an operation is defined form a set called its domain. The set which contains the values produced is called the codomain, but the set of actual values attained by the operation is its range. For example, in the real numbers, the squaring operation only produces nonnegative numbers; the codomain is the set of real numbers but the range is the nonnegative numbers.
Operations can involve dissimilar objects. A vector can be multiplied by a scalar to form another vector. And the inner product operation on two vectors produces a scalar. An operation may or may not have certain properties, for example it may be associative, commutative, anticommutative, idempotent, and so on.
The values combined are called operands, arguments, or inputs, and the value produced is called the value, result, or output. Operations can have fewer or more than two inputs.
An operation is like an operator, but the point of view is different. For instance, one often speaks of "the operation of addition" or "addition operation" when focusing on the operands and result, but one says "addition operator" (rarely "operator of addition") when focusing on the process, or from the more abstract viewpoint, the function +: S×S → S.
In mathematics, the concept of a relation or relationship is a generalization of 2-place relations, such as the relation of equality, denoted by the sign "=" in a statement like "5 + 7 = 12," or the relation of order, denoted by the sign "<" in a statement like "5 < 12". Relations that involve two places or roles are called binary relations by some and dyadic relations by others, the latter being historically prior but also useful when necessary to avoid confusion with binary (base 2) numerals.
The next step up is to consider relations that can involve more than two places or roles, but still a finite number of them. These are called finite place or finitary relations. A finitary relation that involves k places is variously called a k-ary, a k-adic, or a k-dimensional relation. The number k is then called the arity, the adicity, or the dimension of the relation, respectively.
Numerical analysis is the study of algorithms for the problems of continuous mathematics (as distinguished from discrete mathematics).
One of the earliest mathematical writings is the Babylonian tablet YBC 7289, which gives a sexagesimal numerical approximation of √2, the length of the diagonal in a unit square. Being able to compute the sides of a triangle (and hence, being able to compute square roots) is extremely important, for instance, in carpentry and construction. In a square wall section that is two meters by two meters, a diagonal beam has to be 2√2 ≈ 2.83 meters long.
Numerical analysis continues this long tradition of practical mathematical calculations. Much like the Babylonian approximation to √2, modern numerical analysis does not seek exact answers, because exact answers are impossible to obtain in practice. Instead, much of numerical analysis is concerned with obtaining approximate solutions while maintaining reasonable bounds on errors.
Numerical analysis naturally finds applications in all fields of engineering and the physical sciences, but in the 21st century, the life sciences and even the arts have adopted elements of scientific computations. Ordinary differential equations appear in the movement of heavenly bodies (planets, stars and galaxies); optimization occurs in portfolio management; numerical linear algebra is essential to quantitative psychology; stochastic differential equations and Markov chains are essential in simulating living cells for medicine and biology.
Before the advent of modern computers, numerical methods often depended on hand interpolation in large printed tables. Since the mid-20th century these tables have fallen into disuse, because computers can calculate the required functions directly. The interpolation algorithms may nevertheless be used as part of the software for solving differential equations and the like.
Monte Carlo method
Monte Carlo methods are a class of computational algorithms that rely on repeated random sampling to compute their results. Monte Carlo methods are often used when simulating physical and mathematical systems. Because of their reliance on repeated computation and random or pseudo-random numbers, Monte Carlo methods are most suited to calculation by a computer. Monte Carlo methods tend to be used when it is infeasible or impossible to compute an exact result with a deterministic algorithm.
Monte Carlo simulation methods are especially useful in studying systems with a large number of coupled degrees of freedom, such as fluids, disordered materials, strongly coupled solids, and cellular structures (see cellular Potts model). More broadly, Monte Carlo methods are useful for modeling phenomena with significant uncertainty in inputs, such as the calculation of risk in business. These methods are also widely used in mathematics: a classic use is for the evaluation of definite integrals, particularly multidimensional integrals with complicated boundary conditions.
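A classic instance of the above is Monte Carlo integration. The sketch below (an assumed illustration, not from the original text) estimates π by drawing random points in the unit square and counting the fraction that fall inside the quarter circle of radius 1, which approaches π/4 as the number of samples grows.

```python
import random

def estimate_pi(n_samples, seed=0):
    """Estimate pi by repeated random sampling (Monte Carlo)."""
    rng = random.Random(seed)  # seeded for reproducibility
    inside = 0
    for _ in range(n_samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:  # point lies inside the quarter circle
            inside += 1
    return 4.0 * inside / n_samples

print(estimate_pi(100_000))  # close to pi, within sampling error
```

The error shrinks only as 1/√n, which is why such methods shine for high-dimensional integrals where deterministic grids become infeasible.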
The simplest example of an orthonormal basis is the standard basis {e_i} for Euclidean space R^n. The vector e_i is the vector with all 0s except for a 1 in the i-th coordinate. For example, in R^3, e_1 = (1, 0, 0). A rotation (or flip) through the origin will send an orthonormal set to another orthonormal set. In fact, given any orthonormal basis, there is a rotation, or rotation combined with a flip, which will send the orthonormal basis to the standard basis. These are precisely the transformations which preserve the inner product, and are called orthogonal transformations.
Usually when one needs a basis to do calculations, it is convenient to use an orthonormal basis. For example, the formula for a vector space projection is much simpler with an orthonormal basis. The savings in effort make it worthwhile to find an orthonormal basis before doing such a calculation. Gram-Schmidt orthonormalization is a popular way to find an orthonormal basis.
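Gram-Schmidt orthonormalization can be sketched in a few lines of plain Python (an assumed illustration, not from the original text): each vector has its projections onto the previously accepted basis vectors subtracted away, and the remainder is normalized.

```python
import math

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def gram_schmidt(vectors):
    """Return an orthonormal basis for the span of linearly independent vectors."""
    basis = []
    for v in vectors:
        w = list(v)
        for b in basis:  # subtract the projection onto each earlier basis vector
            c = dot(w, b)
            w = [wi - c * bi for wi, bi in zip(w, b)]
        norm = math.sqrt(dot(w, w))  # normalize the remainder
        basis.append([wi / norm for wi in w])
    return basis

basis = gram_schmidt([[3.0, 1.0], [2.0, 2.0]])
print(basis)  # two unit vectors with zero inner product
```

In production code a numerically stabler variant (modified Gram-Schmidt or a QR factorization) is usually preferred, but the idea is the same.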
The above images demonstrate a propensity for "unstructured" proteins to form perpendicular cloud structures.
A wave function or wavefunction is a mathematical tool used in quantum mechanics to describe any physical system. It is a function from the space of possible states of the system into the complex numbers. The laws of quantum mechanics (i.e. the Schrödinger equation) describe how the wave function evolves over time. The values of the wave function are probability amplitudes — complex numbers — the squares of the absolute values of which give the probability distribution that the system will be in any of the possible states.
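The amplitude-to-probability rule can be checked numerically. The sketch below (an assumed two-state example, not from the original text) takes a normalized pair of complex amplitudes and recovers the probability distribution from their squared absolute values.

```python
# Amplitudes for a two-state system; this particular state is normalized.
amplitudes = [complex(1, 1) / 2, complex(1, -1) / 2]

# |amplitude|^2 gives the probability of each outcome.
probabilities = [abs(a) ** 2 for a in amplitudes]
print(probabilities)       # each close to 0.5
print(sum(probabilities))  # close to 1.0, as required of a wave function
```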
Unsupervised learning is a method of machine learning where a model is fit to observations. It is distinguished from supervised learning by the fact that there is no a priori output. In unsupervised learning, a data set of input objects is gathered. Unsupervised learning then typically treats input objects as a set of random variables. A joint density model is then built for the data set.
Unsupervised learning can be used in conjunction with Bayesian inference to produce conditional probabilities (i.e. supervised learning) for any of the random variables given the others. A holy grail of unsupervised learning is the creation of a factorial code of the data, i.e., a code with statistically independent components. Later supervised learning usually works much better when the raw input data is first translated into a factorial code.
An important step in any clustering is to select a distance measure, which will determine how the similarity of two elements is calculated. This will influence the shape of the clusters, as some elements may be close to one another according to one distance and further away according to another. For example, in a 2-dimensional space, the distance between the point (x=1, y=0) and the origin (x=0, y=0) is always 1 according to the usual norms, but the distance between the point (x=1, y=1) and the origin can be 2, √2 ≈ 1.41, or 1 if you take respectively the 1-norm, 2-norm or infinity-norm distance.
Common distance functions:
- The Euclidean distance (also called distance as the crow flies or 2-norm distance). A review of cluster analysis in health psychology research found that the most common distance measure in published studies in that research area is the Euclidean distance or the squared Euclidean distance.
- The Manhattan distance (also called taxicab norm or 1-norm)
- The maximum norm
- The Mahalanobis distance corrects data for different scales and correlations in the variables
- The angle between two vectors can be used as a distance measure when clustering high dimensional data. See Inner product space.
- The Hamming distance (sometimes edit distance) measures the minimum number of substitutions required to change one member into another.
- Some notions of semantic relatedness are distance functions. These include distances based on databases such as WordNet and search engines, and distances learned from machine-learned semantic analysis of a corpus.
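Several of the distance functions above can be written directly; the sketch below (an illustration added here, not part of the original text) evaluates them on the earlier example of the point (1, 1) and the origin.

```python
import math

def manhattan(p, q):
    """1-norm (taxicab) distance."""
    return sum(abs(a - b) for a, b in zip(p, q))

def euclidean(p, q):
    """2-norm (as the crow flies) distance."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def chebyshev(p, q):
    """Maximum norm distance."""
    return max(abs(a - b) for a, b in zip(p, q))

def hamming(s, t):
    """Substitutions needed to turn one equal-length sequence into another."""
    return sum(a != b for a, b in zip(s, t))

p, origin = (1, 1), (0, 0)
print(manhattan(p, origin))  # 2
print(euclidean(p, origin))  # 1.414...
print(chebyshev(p, origin))  # 1
print(hamming("karolin", "kathrin"))  # 3
```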
Cardinal and Ordinal numbers
One view is that the core of mathematics is based upon two simple questions based on practical needs.
- How many?
- How much?
This is the cardinal number viewpoint.
Another view is that mathematics may have an even earlier basis based on ordinals used to establish pecking orders and rank. Such basic questions are:
- Who eats first, second, etc.?
- What comes first, etc.?
This is the ordinal number viewpoint.
One to One Correspondence
The notion of one-to-one correspondence is fundamental to counting. When we count out a set of cards, we say, 1, 2, 3, ... , 52, and as we say each number we lay down a card. Each number corresponds to a card. Technically, we can say that we have put the cards in the deck and the numbers from 1 to 52 in a one-to-one correspondence with each other.
In abstract algebra, a homomorphism is a structure-preserving map between two algebraic structures (such as groups, rings, or vector spaces); it need not be one-to-one. The word homomorphism comes from the Greek language: homos meaning "same" and morphe meaning "shape". Note the similar root word "homoios," meaning "similar," which is found in another mathematical concept, namely homeomorphisms.
In abstract algebra, an isomorphism (Greek: ison "equal", and morphe "shape") is a bijective (one-to-one and onto) map f such that both f and its inverse f⁻¹ are homomorphisms, i.e., structure-preserving mappings.
Algebra is a branch of mathematics concerning the study of structure, relation and quantity. The name is derived from the treatise written by the Persian mathematician, astronomer, astrologer and geographer Muhammad ibn Mūsā al-Khwārizmī titled (in Arabic: الكتاب الجبر والمقابلة) Al-Kitab al-Jabr wa-l-Muqabala (meaning "The Compendious Book on Calculation by Completion and Balancing"), which provided symbolic operations for the systematic solution of linear and quadratic equations.
Together with geometry, analysis, combinatorics, and number theory, algebra is one of the main branches of mathematics. Elementary algebra is often part of the curriculum in secondary education and provides an introduction to the basic ideas of algebra, including effects of adding and multiplying numbers, the concept of variables, definition of polynomials, along with factorization and determining their roots.
Algebra is much broader than elementary algebra and can be generalized. In addition to working directly with numbers, algebra covers working with symbols, variables, and set elements. Addition and multiplication are viewed as general operations, and their precise definitions lead to structures such as groups, rings and fields.
Sets: Rather than just considering the different types of numbers, abstract algebra deals with the more general concept of sets: collections of objects (called elements) selected by a property specific to the set. All collections of the familiar types of numbers are sets. Other examples of sets include the set of all two-by-two matrices, the set of all second-degree polynomials (ax² + bx + c), the set of all two-dimensional vectors in the plane, and the various finite groups such as the cyclic groups, which are the groups of integers modulo n. Set theory is a branch of logic and not technically a branch of algebra.
Binary operations: The notion of addition (+) is abstracted to give a binary operation, * say. The notion of binary operation is meaningless without the set on which the operation is defined. For two elements a and b in a set S, a*b gives another element in the set (this condition is called closure). Addition (+), subtraction (−), multiplication (×), and division (÷) can be binary operations when defined on different sets, as are addition and multiplication of matrices, vectors, and polynomials.
Identity elements: The numbers zero and one are abstracted to give the notion of an identity element for an operation. Zero is the identity element for addition and one is the identity element for multiplication. For a general binary operator * the identity element e must satisfy a * e = a and e * a = a. This holds for addition as a + 0 = a and 0 + a = a and multiplication a × 1 = a and 1 × a = a. However, if we take the positive natural numbers and addition, there is no identity element.
Inverse elements: The negative numbers give rise to the concept of inverse elements. For addition, the inverse of a is −a, and for multiplication the inverse is 1/a. A general inverse element a⁻¹ must satisfy the property that a * a⁻¹ = e and a⁻¹ * a = e.
Associativity: Addition of integers has a property called associativity. That is, the grouping of the numbers to be added does not affect the sum. For example: (2 + 3) + 4 = 2 + (3 + 4). In general, this becomes (a * b) * c = a * (b * c). This property is shared by most binary operations, but not by subtraction, division, or octonion multiplication.
Commutativity: Addition of integers also has a property called commutativity. That is, the order of the numbers to be added does not affect the sum. For example: 2 + 3 = 3 + 2. In general, this becomes a * b = b * a. Only some binary operations have this property. It holds for the integers with addition and multiplication, but it does not hold for matrix multiplication or quaternion multiplication.
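The abstractions above can be verified mechanically for a concrete structure. The sketch below (an illustration added here, not part of the original text) checks closure, associativity, commutativity, the identity element 0, and inverse elements for the cyclic group of integers modulo n under addition.

```python
def check_group_axioms(n):
    """Brute-force check of the abelian group axioms for Z_n under addition."""
    elems = range(n)
    op = lambda a, b: (a + b) % n
    for a in elems:
        assert op(a, 0) == a and op(0, a) == a          # identity element
        assert any(op(a, b) == 0 for b in elems)        # inverse element
        for b in elems:
            assert op(a, b) in elems                    # closure
            assert op(a, b) == op(b, a)                 # commutativity
            for c in elems:
                assert op(op(a, b), c) == op(a, op(b, c))  # associativity
    return True

print(check_group_axioms(6))  # True
```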
The sidechain dihedral angles of proteins are denoted as χ1-χ5, depending on the distance up the sidechain. The χ1 dihedral angle is defined by atoms N-Cα-Cβ-Cγ, the χ2 dihedral angle is defined by atoms Cα-Cβ-Cγ-Cδ, and so on.
The sidechain dihedral angles tend to cluster near 180°, 60°, and -60°, which are called the trans, gauche+, and gauche- conformations. The choice of sidechain dihedral angles is affected by the neighbouring backbone and sidechain dihedrals; for example, the gauche+ conformation is rarely followed by the gauche+ conformation (and vice versa) because of the increased likelihood of atomic collisions.
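Assigning an observed dihedral angle to one of the three clusters above is a simple nearest-neighbour calculation on the circle. The sketch below is an assumed illustration (not from the original text); the cluster centers are the canonical 180°, 60°, and -60° values.

```python
def classify_rotamer(chi):
    """Assign a sidechain dihedral angle (degrees) to its nearest cluster."""
    def angular_distance(a, b):
        # Distance between two angles, measured around the circle.
        d = abs(a - b) % 360.0
        return min(d, 360.0 - d)
    clusters = {"trans": 180.0, "gauche+": 60.0, "gauche-": -60.0}
    return min(clusters, key=lambda name: angular_distance(chi, clusters[name]))

print(classify_rotamer(175.0))   # trans
print(classify_rotamer(-55.0))   # gauche-
print(classify_rotamer(65.0))    # gauche+
```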
Twenty Elementary Algebras
The above image is a Moncznik (Perry Moncznik) multiplication table (in the abstract algebra sense) where the elements being "multiplied" are the different possible conformational states of a single amino acid. The colors are an attempt to highlight the symmetries. The elements could also be thought of as transformations of some given initial state. The twenty amino acids can each be represented by such a table and thus form twenty elemental algebras, which, it could be argued, may in some sense form the basis from which a large portion of mathematics arises.
Molecular Spin Groups
A group is a set together with an operation on the members of that set such that the operation applied to any two members of the set yields another member of that set (closure); the operation must also be associative, the set must contain an identity element, and every member must have an inverse. The integers with the operation of addition form a group, for example.
Consider a carbon chain with N carbons and an angle of 109.5 degrees between successive carbon atoms, and thus N−1 covalent bonds.
The initial state of the molecule is simply its state at time zero.
Assign a spin rate to each covalent bond, e.g. in radians per unit time: R1, R2, ..., RN−1. The elements of the set are the possible transformations of the system, which will be ratio-preserving multiples of R1...RN−1.
If bonds 1 and 2 have rate values of 2 and 3, then a transformation in the set will preserve this ratio (as well as the ratios of all other respective covalent bonds). So, for example, 20 degrees and 30 degrees satisfy the required conditions. So do 30 and 45, as well as 1 and 1.5, etc. Thus we have an infinite set.
Note also that the transformations are additive: e.g. 20 + 2 = 22 and 30 + 3 = 33 still satisfy the required ratio. Thus any two members of the set added together yield another member of the set, and the requirements of a group are satisfied.
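The closure argument above can be checked numerically. The sketch below (an assumed illustration using the text's two-bond example with base rates 2 and 3) tests whether a candidate transformation scales all bond rates by one common factor, and confirms that the sum of two members is again a member.

```python
base_rates = [2.0, 3.0]  # rates R1, R2 from the example in the text

def is_member(t, rates=base_rates):
    """True if t scales every bond rate by the same factor (ratio-preserving)."""
    scales = [ti / ri for ti, ri in zip(t, rates)]
    return all(abs(s - scales[0]) < 1e-9 for s in scales)

t1 = [20.0, 30.0]  # 10x the base rates
t2 = [30.0, 45.0]  # 15x the base rates
t3 = [a + b for a, b in zip(t1, t2)]  # sum of two members

print(is_member(t1), is_member(t2))  # True True
print(t3, is_member(t3))             # [50.0, 75.0] True (closure under addition)
```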
Variables, Expressions, and Equations
A variable is a symbol that represents a number. Usually we use letters such as n, t, or x for variables. For example, we might say that s stands for the side-length of a square. We now treat s as if it were a number we could use. The perimeter of the square is given by 4 × s. The area of the square is given by s × s. When working with variables, it can be helpful to use a letter that will remind you of what the variable stands for: let n be the number of people in a movie theater; let t be the time it takes to travel somewhere; let d be the distance from my house to the park.
An expression is a mathematical statement that may use numbers, variables, or both.
The following are examples of expressions:
3 + 7
2 × y + 5
2 + 6 × (4 - 2)
z + 3 × (8 - z)
An equation is a statement that two numbers or expressions are equal. Equations are useful for relating variables and numbers. Many word problems can easily be written down as equations with a little practice. Many simple rules exist for simplifying equations.
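Treating a variable as a number we can compute with, as in the square example above, is exactly what happens in code. The following Python snippet (an illustration added here, not from the original text) uses s for the side length and also evaluates one of the expressions from the list above.

```python
def perimeter(s):
    """Perimeter of a square with side length s."""
    return 4 * s

def area(s):
    """Area of a square with side length s."""
    return s * s

s = 5
print(perimeter(s))  # 20
print(area(s))       # 25

# Evaluating the expression 2 + 6 * (4 - 2) from the list above:
print(2 + 6 * (4 - 2))  # 14
```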
[Figure: a bijective function.]
For example, consider the function succ, defined from the set of integers to itself, that to each integer x associates the integer succ(x) = x + 1. For another example, consider the function sumdif that to each pair (x, y) of real numbers associates the pair sumdif(x, y) = (x + y, x − y).
A bijective function is also called a permutation. This usage is more common when X = Y. It should be noted that "one-to-one function" means one-to-one correspondence (i.e., bijection) to some authors, but injection to others. The set of all bijections from X to Y is sometimes denoted X ↔ Y.
Bijective functions play a fundamental role in many areas of mathematics, for instance in the definition of isomorphism (and related concepts such as homeomorphism and diffeomorphism), permutation group, projective map, and many others.
Composition and Inverses
The composition g o f of two bijections f: X → Y and g: Y → Z is a bijection. The inverse of g o f is (g o f)⁻¹ = f⁻¹ o g⁻¹.
[Figure: a bijection composed of an injection and a surjection.]
On the other hand, if the composition g o f of two functions is bijective, we can only say that f is injective and g is surjective.
A relation f from X to Y is a bijective function if and only if there exists another relation g from Y to X such that g o f is the identity function on X, and f o g is the identity function on Y. Consequently, the sets have the same cardinality.
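The inverse criterion above can be demonstrated with the examples succ and sumdif. The sketch below (an illustration added here, not from the original text) checks that composing each function with its inverse gives the identity, over a sample range of inputs.

```python
def succ(x):
    return x + 1

def pred(y):  # inverse of succ
    return y - 1

def compose(g, f):
    return lambda x: g(f(x))

# pred o succ and succ o pred are both the identity on the sampled integers.
identity_check = all(compose(pred, succ)(x) == x and
                     compose(succ, pred)(x) == x
                     for x in range(-100, 101))
print(identity_check)  # True

# sumdif is also a bijection on pairs of reals, with an explicit inverse.
def sumdif(x, y):
    return (x + y, x - y)

def sumdif_inv(s, d):
    return ((s + d) / 2, (s - d) / 2)

print(sumdif_inv(*sumdif(3.0, 4.0)))  # recovers (3.0, 4.0)
```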
In mathematics, a structure on a set, or more generally a type, consists of additional mathematical objects that in some manner attach to the set, making it easier to visualize or work with, or endowing the collection with meaning or significance.
A partial list of possible structures: measures, algebraic structures (groups, fields, etc.), topologies, metric structures (geometries), orders, equivalence relations, and differential structures.
Sometimes, a set is endowed with more than one structure simultaneously; this enables mathematicians to study it more richly. For example, an order induces a topology. As another example, if a set both has a topology and is a group, and the two structures are related in a certain way, the set becomes a topological group.
Mappings between sets which preserve structures (so that structures in the domain are mapped to equivalent structures in the codomain) are of special interest in many fields of mathematics. Examples are homomorphisms, which preserve algebraic structures; homeomorphisms, which preserve topological structures; and diffeomorphisms, which preserve differential structures.
Discrete mathematics, also called finite mathematics or decision mathematics, is the study of mathematical structures that are fundamentally discrete in the sense of not supporting or requiring the notion of continuity. Objects studied in finite mathematics are largely countable sets such as integers, finite graphs, and formal languages.
Discrete mathematics has become popular in recent decades because of its applications to computer science. Concepts and notations from discrete mathematics are useful to study or describe objects or problems in computer algorithms and programming languages. In some mathematics curricula, finite mathematics courses cover discrete mathematical concepts for business, while discrete mathematics courses emphasize concepts for computer science majors.
Discrete mathematics includes the following topics:
- Logic - a study of reasoning
- Set theory - a study of collections of elements
- Number theory
- Combinatorics - a study of counting and arrangement
- Algorithmics - a study of methods of calculation
- Information theory
- Digital geometry
- Computability and complexity theories - dealing with theoretical and practical limitations of algorithms
- Partially ordered sets
- Counting and relations
The term geometric primitive in computer graphics and CAD systems is used in various senses, with common meaning of atomic geometric objects the system can handle (draw, store). Sometimes the subroutines that draw the corresponding objects are called "geometric primitives" as well. The most "primitive" primitives are point and straight line segment, which were all that early vector graphics systems had.
Modern 2D computer graphics systems may operate with primitives which are lines (segments of straight lines, circles and more complicated curves), as well as shapes (boxes, arbitrary polygons, circles).
A common set of two-dimensional primitives includes lines, points, and polygons, although some people prefer to consider triangles primitives, because every polygon can be constructed from triangles. All other graphic elements are built up from these primitives. In three dimensions, triangles or polygons positioned in three-dimensional space can be used as primitives to model more complex 3D forms. In some cases, curves (such as Bézier curves, circles, etc.) may be considered primitives; in other cases, curves are complex forms created from many straight, primitive shapes.
Commonly used geometric primitives include:
- lines and line segments
- circles and ellipses
- triangles and other polygons
- spline curves
Note that in 3D applications, basic geometric shapes and forms are considered to be primitives rather than the above list. Such shapes and forms include boxes, spheres, cylinders, cones, and tori.
These are considered to be primitives in 3D modelling because they are the building blocks for many other shapes and forms. A 3D package may also include a list of extended primitives which are more complex shapes that come with the package. For example, a teapot is listed as a primitive in 3D Studio Max.
The specific three-dimensional arrangement of atoms in molecules is referred to as molecular geometry. Molecular geometry is associated with the specific orientation of atoms as a result of bonding and non-bonding electrons about the central atom. A careful analysis of electron pairs will usually result in correct molecular geometry determinations. In addition, the simple writing of Lewis diagrams, which show the electron arrangements, can also provide important clues for the determination of molecular geometry. Molecules with no lone electron pairs: molecular geometry has its basis in the electron pair geometry of a molecule. If the molecule has all electron pairs bonded to atoms, then the molecular geometry is identical with the electron pair geometry. This is a common occurrence.
An example of trigonal bipyramid molecular geometry that results from five electron pair geometry is PCl5. The phosphorus atom has 5 valence electrons and thus needs 3 more electrons to complete its octet. However, this is an example where five chlorine atoms are present and the octet is expanded.
The Lewis diagram is as follows:
Cl = 7 e- x 5 = 35 e-
P = 5 e- = 5 e-
Total = 40 e-
The chlorine atoms are as far apart as possible, at bond angles of nearly 90° and 120°. This is trigonal bipyramid geometry.
Trigonal bipyramid geometry is characterized by 5 electron pairs.
An example of octahedral molecular geometry that results from six electron pair geometry is SF6. The sulfur atom has 6 valence electrons. However, this is an example where six fluorine atoms are present and the octet is expanded.
The Lewis diagram is as follows:
F = 7 e- x 6 = 42 e-
S = 6 e- = 6 e-
Total = 48 e-
The fluorine atoms are as far apart as possible, at nearly 90° bond angles in all directions. This is octahedral geometry.
Octahedral geometry is characterized by 6 electron pairs.
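The electron tallies and geometry assignments above can be sketched in code. This is an illustrative sketch, not from the source: the table of valence electrons and the pair-count-to-geometry mapping cover only the cases discussed here, and the function names are my own.

```python
# Sketch: tally valence electrons as in the Lewis diagrams above, and map
# the number of bonded electron pairs (no lone pairs) to a geometry name.
VALENCE = {"H": 1, "C": 4, "N": 5, "O": 6, "F": 7, "P": 5, "S": 6, "Cl": 7}

# Geometry for molecules whose electron pairs are all bonding pairs.
GEOMETRY = {
    2: "linear",
    3: "trigonal planar",
    4: "tetrahedral",
    5: "trigonal bipyramid",
    6: "octahedral",
}

def electron_total(center, ligand, n):
    """Total valence electrons, e.g. PCl5: 5 + 7*5 = 40."""
    return VALENCE[center] + VALENCE[ligand] * n

def geometry(n_bonded_pairs):
    return GEOMETRY[n_bonded_pairs]

print(electron_total("P", "Cl", 5), geometry(5))  # 40 trigonal bipyramid
print(electron_total("S", "F", 6), geometry(6))   # 48 octahedral
```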
In chemistry a trigonal bipyramid formation is a molecular geometry with one atom at the center and 5 more atoms at the corners of a triangular dipyramid. This is one of the few cases where bond angles surrounding an atom are not identical (see also pentagonal dipyramid), which is simply because there is no geometrical arrangement which can result in five equally sized bond angles in three dimensions.
Isomers with a trigonal bipyramidal geometry are able to interconvert through a process known as Berry pseudorotation. Pseudorotation is similar in concept to the movement of a conformational diastereomer, though no full revolutions are completed. In the process of pseudorotation, two equatorial ligands (both of which have a shorter bond length than the third) "shift" toward the molecule's axis, while the axial ligands simultaneously "shift" toward the equator, creating a constant cyclical movement. Pseudorotation is particularly notable in simple molecules such as PF5.
Constructive solid geometry (CSG) is a technique used in solid modeling. CSG is often, but not always, a procedural modeling technique used in 3D computer graphics and CAD. Constructive solid geometry allows a modeler to create a complex surface or object by using Boolean operators to combine objects. Often CSG presents a model or surface that appears visually complex, but is actually little more than cleverly combined or decombined objects. (In some cases, constructive solid geometry is performed on polygonal meshes, and may or may not be procedural and/or parametric.)
The simplest solid objects used for the representation are called primitives. Typically they are the objects of simple shape: cuboids, cylinders, prisms, pyramids, spheres, cones. The set of allowable primitives is limited by each software package. Some software packages allow CSG on curved objects while other packages do not.
A primitive can typically be described by a procedure which accepts some number of parameters; for example, a sphere may be described by the coordinates of its center point, along with a radius value. These primitives can be combined into compound objects using operations like these:
- Boolean union: the merger of two objects into one.
- Boolean difference: the subtraction of one object from another.
- Boolean intersection: the portion common to both objects.
Combining these elementary operations, it is possible to build up objects with high complexity starting from simple ones.
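One common way to realize "primitives as parameterized procedures" combined by Boolean operators is with signed distance functions (negative inside the solid, positive outside). The following is a minimal sketch under that assumption; the function names are illustrative, not from any particular CSG package.

```python
# Sketch of CSG primitives and Boolean operators using signed distance
# functions: a point is inside a solid when its distance value is negative.
import math

def sphere(cx, cy, cz, r):
    """A sphere primitive described by its center point and a radius value."""
    return lambda x, y, z: math.sqrt((x-cx)**2 + (y-cy)**2 + (z-cz)**2) - r

def union(a, b):         # the merger of two objects into one
    return lambda x, y, z: min(a(x, y, z), b(x, y, z))

def intersection(a, b):  # the portion common to both objects
    return lambda x, y, z: max(a(x, y, z), b(x, y, z))

def difference(a, b):    # the subtraction of one object from another
    return lambda x, y, z: max(a(x, y, z), -b(x, y, z))

# Two overlapping unit spheres; carve the second out of the first:
a = sphere(0, 0, 0, 1.0)
b = sphere(1, 0, 0, 1.0)
solid = difference(a, b)
print(solid(-0.5, 0, 0) < 0)  # True: inside a, outside b
print(solid(0.5, 0, 0) < 0)   # False: this region was carved away by b
```

Nesting these combinators is exactly the "high complexity from simple parts" idea: the result of any operation is itself a distance function that can feed further operations.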
The Mother Centriole Plays an Instructive Role in Defining Cell Geometry
"The centriole is unique among cellular structures in its complexity, chirality, stability, and templated replication, and these features make it an ideal hub around which to organize and propagate particular aspects of cellular geometry."
The Telomere Counting Mechanism
The search for the molecular counting mechanism ended when Calvin Harley and Carol Greider discovered that the telomeres of cultured normal human fibroblasts become shorter each time the cells divide. When telomeres reach a specific short length, they signal the cell to stop dividing. Therefore, cellular aging, as marked by telomere shortening, is not based on the passage of time. Instead, telomere loss measures rounds of DNA replication. For this reason, Hayflick has coined the term "replicometer" for this mechanism.
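The "replicometer" idea, that telomere length counts rounds of replication rather than elapsed time, can be sketched as a toy counter. The numbers below are illustrative placeholders, not measured values from the Harley and Greider work.

```python
# Toy "replicometer": telomere length measures rounds of DNA replication,
# not the passage of time. All numbers here are illustrative.
def divisions_until_senescence(telomere_bp, loss_per_division_bp, limit_bp):
    """Count divisions until the telomere would drop below the signal length."""
    divisions = 0
    while telomere_bp - loss_per_division_bp >= limit_bp:
        telomere_bp -= loss_per_division_bp
        divisions += 1
    return divisions

# e.g. start at 10,000 bp, lose 100 bp per division, stop at 5,000 bp:
print(divisions_until_senescence(10_000, 100, 5_000))  # 50
```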
The Telomere Code
Experimental design and data analysis
In the design of experiments and data analysis, control variables are those variables that are not changed throughout the trials in an experiment because the experimenter is not interested in the effect of that variable being changed for that particular experiment. In other words, control variables are extraneous factors, possibly affecting the experiment, that are kept constant so as to minimize their effects on the outcome. An example of a control variable in an experiment might be keeping the pressure constant in an experiment designed to test the effects of temperature on bacterial growth.
In control theory, control variables are variables that are input to the control system. Reaction rate is the dependent variable and everything else that can change the reaction rate must be controlled (kept constant) so that you only measure the effects of concentration. Variables that need to be controlled in this case include temperature, catalyst, surface area of solids, and pressures of gases. If not controlled, they complicate the experiment and hence, the result.
In programming, a control variable is a program variable that is used to regulate the flow of control of the program. For example, a loop control variable is used to regulate the number of times the body of a program loop is executed; it is incremented (or decremented when counting down) each time the loop body is executed.
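The programming sense of a control variable can be shown directly; a minimal sketch:

```python
# A loop control variable regulating how many times the loop body executes:
count = 0              # the control variable
while count < 3:
    print("body executed")
    count += 1         # incremented each time the body is executed

# Counting down works the same way, decrementing instead:
remaining = 3
while remaining > 0:
    remaining -= 1
```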
The Microtubule Code
Dynein and kinesin motor proteins transport cellular cargoes toward opposite ends of microtubule tracks. In neurons, microtubules are abundantly decorated with microtubule-associated proteins (MAPs) such as tau. Motor proteins thus encounter MAPs frequently along their path. Dynein tends to reverse direction, whereas kinesin tends to detach at patches of bound tau. The differential modulation of dynein and kinesin motility suggests that MAPs can spatially regulate the balance of microtubule-dependent axonal transport. Does a "microtubule code" regulate activity of MAPs?
The microtubule lattice features a series of helical winding patterns which repeat on longitudinal protofilaments at 3, 5, 8, 13, 21 and higher numbers of subunit dimers (tubulins). These particular winding patterns (whose repeat intervals match the Fibonacci series) define attachment sites of the microtubule-associated proteins (MAPs), and are found in simulations of self-localized phonon excitations in microtubules (Samsonovich, 1992). These suggest topological global states in microtubules which may be resistant to local decoherence. Penrose has suggested the Fibonacci patterns on microtubules may be optimal for error correction.
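The winding-pattern repeats quoted above (3, 5, 8, 13, 21) are indeed consecutive Fibonacci numbers, which a short check confirms:

```python
# Generate the first few Fibonacci numbers and confirm that the quoted
# microtubule winding repeats 3, 5, 8, 13, 21 appear among them.
def fibonacci(n):
    seq, a, b = [], 1, 1
    while len(seq) < n:
        seq.append(a)
        a, b = b, a + b
    return seq

print(fibonacci(8))  # [1, 1, 2, 3, 5, 8, 13, 21]
```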
Cylindrical cellular automata
Source: Comm. Math. Phys., Volume 118, Number 4 (1988), 569-590.
This paper is concerned with the analysis of one-dimensional cellular automata with periodic boundary conditions. Such an automaton may be viewed as a lattice of sites on a cylinder of specified size n evolving according to a local interaction rule…
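A cylindrical automaton of this kind is easy to sketch: periodic boundary conditions simply make the first and last sites of the lattice neighbours. The local rule below (rule 90, the XOR of the two neighbours) is an arbitrary example, not the rule studied in the cited paper.

```python
# One-dimensional cellular automaton on a cylinder of size n: periodic
# boundary conditions make cell 0 and cell n-1 neighbours.
def step(cells):
    n = len(cells)
    # Rule 90 as an example local interaction rule: XOR of the neighbours.
    return [cells[(i - 1) % n] ^ cells[(i + 1) % n] for i in range(n)]

row = [0] * 11
row[5] = 1                 # a single live site
for _ in range(4):
    row = step(row)
print(row)                 # the familiar Pascal's-triangle-mod-2 pattern
```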
Every hexagonal number is a triangular number, since the nth hexagonal number h_n = n(2n - 1) equals the (2n - 1)th triangular number T_{2n-1} = (2n - 1)(2n)/2.
In 1830, Legendre (1979) proved that every number larger than 1791 is a sum of four hexagonal numbers, and Duke and Schulze-Pillot (1990) improved this to three hexagonal numbers for every sufficiently large integer.
There are exactly 13 positive integers that cannot be represented using four hexagonal numbers, namely 5, 10, 11, 20, 25, 26, 38, 39, 54, 65, 70, 114, and 130 (Sloane's A007527; Guy 1994a).
Similarly, there are only two positive integers that cannot be represented using five hexagonal numbers, namely 11 and 26.
Every positive integer can be represented using six hexagonal numbers.
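The four-hexagonal-number claim above can be verified by brute force; a small sketch, assuming "using four hexagonal numbers" means sums of at most four:

```python
# Brute-force check: which integers up to 150 cannot be written as a sum of
# at most four hexagonal numbers h_n = n(2n - 1)?
from itertools import combinations_with_replacement

LIMIT = 150
hexagonals = []
n = 1
while n * (2 * n - 1) <= LIMIT:
    hexagonals.append(n * (2 * n - 1))
    n += 1

representable = set()
for k in range(1, 5):  # sums of 1 to 4 hexagonal numbers
    for combo in combinations_with_replacement(hexagonals, k):
        representable.add(sum(combo))

exceptions = [m for m in range(1, LIMIT + 1) if m not in representable]
print(exceptions)  # the 13 numbers listed above (the largest, 130, is < 150)
```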
Duke, W. and Schulze-Pillot, R. "Representations of Integers by Positive Ternary Quadratic Forms and Equidistribution of Lattice Points on Ellipsoids." Invent. Math. 99, 49-57, 1990.
Guy, R. K. "Every Number Is Expressible as the Sum of How Many Polygonal Numbers?." Amer. Math. Monthly 101, 169-172, 1994a.
Guy, R. K. "Sums of Squares." §C20 in Unsolved Problems in Number Theory, 2nd ed. New York: Springer-Verlag, pp. 136-138, 1994b.
Legendre, A.-M. Théorie des nombres, 4th ed., 2 vols. Paris: A. Blanchard, 1979.
Cellular Automata and the Game of Life in the Hexagonal Grid
Only one true hexagonal Game of Life has been found. The rule 3/2 supports a glider and also stabilizes. (The rules 3,5/2 and 3,5,6/2 also behave similarly.) The rule (3/2,4,5) barely fails to qualify, as random patterns never stabilize.
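A hexagonal-grid automaton differs from the square-grid Game of Life chiefly in its six-cell neighbourhood. The sketch below uses axial coordinates for the hexagonal grid; reading "3/2" as survival on 3 live neighbours and birth on 2 is an assumption made for illustration, as rule notations vary.

```python
# Sketch of a cellular automaton on a hexagonal grid (axial coordinates),
# where every cell has exactly six neighbours. Live cells are stored as a
# set of (q, r) pairs. The survival/birth sets are configurable.
HEX_DIRECTIONS = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, -1), (-1, 1)]

def live_neighbours(live, q, r):
    return sum((q + dq, r + dr) in live for dq, dr in HEX_DIRECTIONS)

def step(live, survive=frozenset({3}), born=frozenset({2})):
    candidates = set(live)
    for (q, r) in live:
        for dq, dr in HEX_DIRECTIONS:
            candidates.add((q + dq, r + dr))
    nxt = set()
    for cell in candidates:
        n = live_neighbours(live, *cell)
        if (cell in live and n in survive) or (cell not in live and n in born):
            nxt.add(cell)
    return nxt

# A cell at the origin surrounded by all six of its neighbours:
ring = set(HEX_DIRECTIONS)
print(live_neighbours(ring, 0, 0))  # 6
```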
Polyglutamylation and polyglycylation are two posttranslational polymodifications that were initially discovered on tubulin.
- minimal communication cost
- capability of embedding topological data structures such as
Stoichiometry (sometimes called reaction stoichiometry to distinguish it from composition stoichiometry) is the calculation of quantitative (measurable) relationships between the reactants and products in chemical reactions.
The Krebs Cycle
The Krebs cycle converts pyruvate to CO2, reducing energy (NADH and FADH2), and phosphorylated energy (GTP).
2 pyruvate + 2 GDP + 2 H3PO4 + 4 H2O + 2 FAD + 8 NAD+ ----> 6 CO2 + 2 GTP + 2 FADH2 + 8 NADH
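Stoichiometric bookkeeping on the overall reaction above can be sketched directly from its coefficients; the function name is illustrative:

```python
# Stoichiometric coefficients of the overall reaction above (per 2 pyruvate).
COEFFICIENTS = {
    "pyruvate": 2, "GDP": 2, "FAD": 2, "NAD+": 8,
    "CO2": 6, "GTP": 2, "FADH2": 2, "NADH": 8,
}

def product_moles(product, pyruvate_moles):
    """Moles of a product formed from the given moles of pyruvate."""
    return pyruvate_moles * COEFFICIENTS[product] / COEFFICIENTS["pyruvate"]

print(product_moles("CO2", 1.0))   # 3.0 moles of CO2 per mole of pyruvate
print(product_moles("NADH", 1.0))  # 4.0 moles of NADH per mole of pyruvate
```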
Cellular Automata, Transition Algebras and The Genetic State Vector
The cellular machinery can be seen as the interaction of these three distinct mathematical objects acting in a loop.
The concept of "hypercellular automata", or multilayered cellular automata, has recently been proposed by [Bandini, 1995] and [Bandini et al., 1996] as a particular case of multilayered automata network. A hierarchical structure is defined through a hypergraph, i.e. a graph composed of vertices and arcs, where each vertex is in turn a hypergraph. The multilayered automata network is directly obtained from this structure by introducing status attributes and transition functions. Two-level multilayered cellular automata have been developed and employed to model biological systems: the first level constitutes a two-dimensional cellular space (diffusion space), while at the second level a totally connected graph corresponds to each cell (first-level vertex) to generate an intrinsically parallel and local reaction space.
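The two-level scheme, an outer diffusion space whose every cell contains an inner reaction state, can be sketched minimally. The rules below are illustrative placeholders, not the transition functions of Bandini et al., and a 1D ring stands in for the two-dimensional diffusion space.

```python
# Minimal sketch of a two-level ("hypercellular") automaton: a ring of cells
# (diffusion space), each carrying an inner state vector (a, b) updated by a
# local reaction rule. All rules here are illustrative placeholders.
def reaction(state):
    """Second level: update a cell's inner state vector locally."""
    a, b = state
    return (a + b, b)  # placeholder reaction rule

def diffusion(cells):
    """First level: each cell averages component a with its two neighbours."""
    n = len(cells)
    out = []
    for i in range(n):
        left, (a, b), right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        out.append(((left[0] + a + right[0]) / 3, b))
    return out

cells = [(0.0, 0.0)] * 5
cells[2] = (9.0, 1.0)
# One loop iteration: reaction inside every cell, then diffusion between cells.
cells = diffusion([reaction(c) for c in cells])
print(cells)
```

The averaging rule conserves the total of component a, so material introduced by the reaction level spreads through the diffusion level without being created or destroyed there.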
Figure 1. The hypercellular automaton