
Discovery of Mathematical Techniques and Theory

From Biomatics.org


Biomatics appears to be a potentially rich source of mathematical concepts. If the interaction of two atoms is some sort of beginning, then, given the wealth of mathematical patterns present, the developmental steps of known mathematics can in theory be described; in other words, one can try to decipher the techniques used by a handful of smart atoms in developing all of mathematics.

The Fundamental Theorem of Biomatics

     The interactions of two tetrahedral carbon atoms form a group.
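As stated, the theorem leaves the underlying set and operation informal. One natural reading, offered purely as an illustrative assumption and not as the page's own construction, is the rotation group of a regular tetrahedron: the Python sketch below models the twelve rotations as the even permutations of the four vertices (the alternating group A4) and checks the group axioms directly.

 # Illustrative sketch (an assumption, not the page's construction):
 # model the 12 rotational symmetries of a regular tetrahedron as the
 # even permutations of its four vertices and verify the group axioms.
 from itertools import permutations
 
 def parity(p):
     # number of inversions mod 2: 0 = even permutation, 1 = odd
     return sum(p[i] > p[j] for i in range(len(p)) for j in range(i + 1, len(p))) % 2
 
 def compose(p, q):
     # apply q first, then p
     return tuple(p[q[i]] for i in range(len(q)))
 
 A4 = [p for p in permutations(range(4)) if parity(p) == 0]
 e = (0, 1, 2, 3)
 assert all(compose(p, q) in A4 for p in A4 for q in A4)              # closure
 assert e in A4                                                       # identity
 assert all(any(compose(p, q) == e for q in A4) for p in A4)          # inverses
 assert all(compose(compose(p, q), r) == compose(p, compose(q, r))
            for p in A4 for q in A4 for r in A4)                      # associativity
 print(len(A4), "rotations form a group")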

 


Other theorems:

Gödel's incompleteness theorem

Heisenberg Uncertainty Principle (axiom?)


Binomial Theorem, Expansion, Coefficients, Distribution

This section demonstrates how a rather simple physical process can embody important mathematical properties, in this case the binomial theorem and its associated coefficients and distribution.

 

In elementary algebra, the binomial theorem describes the algebraic expansion of powers of a binomial. According to the theorem, it is possible to expand the power (x + y)^n into a sum involving terms of the form a x^b y^c, where the coefficient a of each term is a positive integer, and the sum of the exponents of x and y in each term is n. For example,

(x + y)^4 = x^4 + 4x^3y + 6x^2y^2 + 4xy^3 + y^4.

The coefficients appearing in the binomial expansion are known as binomial coefficients. They are the same as the entries of Pascal's triangle, and can be determined by a simple formula involving factorials. These numbers also arise in combinatorics, where the coefficient of x^(n-k) y^k is equal to the number of different combinations of k elements that can be chosen from an n-element set.
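The two descriptions above, the additive rule of Pascal's triangle and the factorial formula, can be checked against each other in a few lines of Python (a small sketch, not part of the original page):

 # Sketch: build each row of Pascal's triangle by the multiplicative
 # recurrence C(n, k+1) = C(n, k) * (n - k) / (k + 1) and check it
 # against the factorial formula C(n, k) = n! / (k! * (n - k)!).
 from math import factorial
 
 def pascal_row(n):
     row = [1]
     for k in range(n):
         row.append(row[-1] * (n - k) // (k + 1))
     return row
 
 for n in range(8):
     row = pascal_row(n)
     assert row == [factorial(n) // (factorial(k) * factorial(n - k))
                    for k in range(n + 1)]
     print(row)

Row 4 reproduces the coefficients 1, 4, 6, 4, 1 of the expansion shown above.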

 
The binomial coefficients appear as the entries of Pascal's triangle.

 

 

In probability theory and statistics, the binomial distribution is the discrete probability distribution of the number of successes in a sequence of n independent yes/no experiments, each of which yields success with probability p. Such a success/failure experiment is also called a Bernoulli experiment or Bernoulli trial. In fact, when n = 1, the binomial distribution is a Bernoulli distribution. The binomial distribution is the basis for the popular binomial test of statistical significance. As an example, flip a coin three times and count the number of heads. The distribution of this random number is a binomial distribution with n = 3 and p = 1/2.
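The coin-flip example is small enough to verify by exhaustive enumeration; the sketch below compares brute-force counting against the formula C(n, k) p^k (1 - p)^(n-k):

 # Sketch: distribution of heads in n = 3 fair coin flips, computed two
 # ways: exhaustive enumeration and the binomial probability formula.
 from itertools import product
 from math import comb
 
 n, p = 3, 0.5
 counts = [0] * (n + 1)
 for outcome in product("HT", repeat=n):      # all 2^n equally likely sequences
     counts[outcome.count("H")] += 1
 
 for k in range(n + 1):
     enumerated = counts[k] / 2 ** n
     formula = comb(n, k) * p ** k * (1 - p) ** (n - k)
     print(f"P({k} heads) = {enumerated:.3f}   formula: {formula:.3f}")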

 

 

The Bean Machine

The bean machine, also known as the quincunx or Galton box, is a device invented by Sir Francis Galton to demonstrate the central limit theorem and the normal distribution.

The machine consists of a vertical board with interleaved rows of pins. Balls are dropped from the top, and bounce left and right as they hit the pins. Eventually, they are collected into one-ball-wide bins at the bottom. The height of ball columns in the bins approximates a bell curve.

Overlaying Pascal's triangle onto the pins shows the number of different paths that can be taken to get to each pin.
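A quick simulation of the device shows the bell shape emerging (a sketch; the ball and row counts are arbitrary choices):

 # Sketch: simulate a Galton board. Each ball bounces left or right at
 # each of `rows` pins, so its final bin is a Binomial(rows, 1/2) draw.
 import random
 
 def galton(balls=10000, rows=10):
     bins = [0] * (rows + 1)
     for _ in range(balls):
         bins[sum(random.random() < 0.5 for _ in range(rows))] += 1
     return bins
 
 for k, count in enumerate(galton()):
     print(f"bin {k:2d}: " + "#" * (count // 50))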

A large-scale working model of this device can be seen at the Museum of Science, Boston in the Mathematica exhibit.

The bean machine, as drawn by Sir Francis Galton

 

A working replica of the machine (following a slightly modified design).

 

Prime Hypercubes

Considering the hypercube structure that arises from the joining of tetrahedral atoms, the question arises as to the potential significance of placing the prime numbers at the corners of this arrangement.

 



In number theory, the prime number theorem (PNT) describes the asymptotic distribution of the prime numbers, giving a rough description of how they are distributed.

 

Roughly speaking, the prime number theorem states that if you randomly select a number near some large number N, the chance of it being prime is about 1 / ln(N), where ln(N) denotes the natural logarithm of N. For example, near N = 10,000, about one in nine numbers is prime, whereas near N = 1,000,000,000, only one in every 21 numbers is prime.
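These densities are easy to spot-check numerically; the sketch below counts primes in a window around each N by trial division (the 2,000-wide window is an arbitrary choice):

 # Sketch: compare the observed density of primes near N with the
 # prime number theorem's estimate 1/ln(N).
 from math import log
 
 def is_prime(n):
     if n < 2:
         return False
     if n % 2 == 0:
         return n == 2
     d = 3
     while d * d <= n:
         if n % d == 0:
             return False
         d += 2
     return True
 
 for N in (10_000, 1_000_000_000):
     window = range(N - 1000, N + 1000)
     density = sum(map(is_prime, window)) / len(window)
     print(f"near {N:,}: 1 in {1 / density:.1f} is prime; PNT predicts 1 in {log(N):.1f}")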

 

Prime Spiral

 

image: PrimeSpiralGrid

The prime spiral, also known as Ulam's spiral, is a plot in which the positive integers are arranged in a spiral (left figure), with primes indicated in some way along the spiral. In the right plot above, primes are indicated in red and composites are indicated in yellow. 

 

Weisstein, Eric W. "Prime Spiral." From MathWorld--A Wolfram Web Resource. http://mathworld.wolfram.com/PrimeSpiral.html 
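A small text-mode version of the spiral can be generated directly (a sketch; '*' marks primes, '.' composites):

 # Sketch: a small text-mode Ulam spiral. The walk starts at the centre
 # and moves right, up, left, down in runs of length 1, 1, 2, 2, 3, 3, ...
 def is_prime(n):
     if n < 2:
         return False
     d = 2
     while d * d <= n:
         if n % d == 0:
             return False
         d += 1
     return True
 
 def ulam(size):
     grid = [["."] * size for _ in range(size)]
     x = y = size // 2
     n, run, d = 1, 1, 0
     dirs = [(1, 0), (0, -1), (-1, 0), (0, 1)]    # right, up, left, down
     while True:
         for _ in range(2):                       # each run length occurs twice
             dx, dy = dirs[d % 4]
             for _ in range(run):
                 if not (0 <= x < size and 0 <= y < size):
                     return grid
                 grid[y][x] = "*" if is_prime(n) else "."
                 n += 1
                 x, y = x + dx, y + dy
             d += 1
         run += 1
 
 for row in ulam(15):
     print(" ".join(row))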

 

Optimization Theory



 

A branch of mathematics which encompasses many diverse areas of minimization and optimization. Optimization theory is the more modern term for operations research. Optimization theory includes the calculus of variations, control theory, convex optimization theory, decision theory, game theory, linear programming, Markov chains, network analysis,  queuing systems, etc. 

 

In mathematics, the simplest case of optimization refers to the study of problems in which one seeks to minimize or maximize a real function by systematically choosing the values of real or integer variables from within an allowed set. This case (a scalar real-valued objective function) is actually a small subset of the field, which comprises a large area of applied mathematics and, more generally, studies the means of obtaining "best available" values of some objective function over a defined domain, with elaborations on the types of functions and on the conditions and nature of the objects in the problem domain.
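As a minimal concrete instance of this simplest case (an illustrative choice, not a method the page prescribes), here is golden-section search minimizing a real function of one real variable over an allowed interval:

 # Sketch: golden-section search, minimizing a unimodal real function
 # over an allowed interval [a, b].
 from math import sqrt
 
 def golden_section(f, a, b, tol=1e-8):
     inv_phi = (sqrt(5) - 1) / 2            # 1/phi, about 0.618
     c, d = b - inv_phi * (b - a), a + inv_phi * (b - a)
     while b - a > tol:
         if f(c) < f(d):
             b, d = d, c                    # minimum lies in [a, d]
             c = b - inv_phi * (b - a)
         else:
             a, c = c, d                    # minimum lies in [c, b]
             d = a + inv_phi * (b - a)
     return (a + b) / 2
 
 # Example: minimize (x - 3)^2 + 1 over [0, 10]; the minimum is at x = 3.
 print(golden_section(lambda x: (x - 3) ** 2 + 1, 0.0, 10.0))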

SEE ALSO: Calculus of Variations, Control Theory, Convex Optimization Theory, Decision Theory, Differential Evolution, Evolution Strategies, Game Theory, Genetic Algorithm, Linear Programming, Markov Chain, Nelder-Mead Method, Nonlinear Programming, Operations Research, Optimization, Queue, Stochastic Optimization  

In computing, optimization is the process of modifying a system to make some aspect of it work more efficiently or use fewer resources. For instance, a computer program may be optimized so that it executes more rapidly, operates with less memory storage or other resources, or draws less power. The system may be a single computer program, a collection of computers or even an entire network such as the Internet. See also algorithmic efficiency for further discussion of factors relating to improving the efficiency of an algorithm.


Finite State Machine Minimization


Input Description: A deterministic finite automaton M.

 

 

Problem: The smallest deterministic finite automaton M' such that M' behaves identically to M.

 

 

Excerpt from The Algorithm Design Manual: Problems associated with constructing and minimizing finite state machines arise repeatedly in software and hardware design applications. Finite state machines are best thought of as pattern recognizers, and minimum-size machines correspond to recognizers that require less time and space. Complicated control systems and compilers are often built using finite state machines to encode the current state and associated actions, as well as the set of possible transitions to other states. Minimizing the size of this machine minimizes its cost.

 

Finite state machines are best thought of as edge-labeled directed graphs, where each vertex represents one of n states and each edge a transition from one state to another on receipt of the alphabet symbol that labels the edge. The automaton above analyzes a given sequence of coin tosses, with the dark states signifying that an even number of heads has been observed.
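The coin-toss automaton gives a small worked example. The sketch below applies the classical partition-refinement (Moore) algorithm; the duplicate state and all state names are illustrative assumptions added so there is something to minimize:

 # Sketch: DFA minimization by partition refinement (Moore's algorithm).
 # The DFA tracks the parity of heads in a coin-toss sequence, padded
 # with a redundant state "even2" (names are illustrative assumptions).
 def minimize(states, alphabet, delta, accepting):
     # start from the accepting / non-accepting split, then refine
     partition = [b for b in (set(accepting), set(states) - set(accepting)) if b]
     changed = True
     while changed:
         changed = False
         refined = []
         for block in partition:
             groups = {}
             for s in block:
                 # which block does each symbol send s to?
                 key = tuple(next(i for i, b in enumerate(partition)
                                  if delta[s, a] in b) for a in alphabet)
                 groups.setdefault(key, set()).add(s)
             refined.extend(groups.values())
             changed = changed or len(groups) > 1
         partition = refined
     return partition
 
 states = ["even", "odd", "even2"]            # "even2" duplicates "even"
 alphabet = ["H", "T"]
 delta = {("even", "H"): "odd",   ("even", "T"): "even2",
          ("odd", "H"): "even",   ("odd", "T"): "odd",
          ("even2", "H"): "odd",  ("even2", "T"): "even"}
 accepting = ["even", "even2"]                # an even number of heads seen
 
 print(minimize(states, alphabet, delta, accepting))
 # the two interchangeable even states merge: [{'even', 'even2'}, {'odd'}]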


 

Network Optimization Issues 

Shortest Paths

In graph theory, the shortest path problem is the problem of finding a path between two vertices (or nodes) such that the sum of the weights of its constituent edges is minimized. An example is finding the quickest way to get from one location to another on a road map; in this case, the vertices represent locations and the edges represent segments of road and are weighted by the time needed to travel that segment.
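Dijkstra's algorithm is the standard method when all edge weights are non-negative; a compact sketch on a made-up road graph:

 # Sketch: Dijkstra's algorithm for single-source shortest paths on a
 # small weighted graph (edge weights = travel times; names made up).
 import heapq
 
 def dijkstra(graph, source):
     dist = {source: 0}
     heap = [(0, source)]
     while heap:
         d, u = heapq.heappop(heap)
         if d > dist.get(u, float("inf")):
             continue                        # stale queue entry
         for v, w in graph.get(u, []):
             if d + w < dist.get(v, float("inf")):
                 dist[v] = d + w
                 heapq.heappush(heap, (d + w, v))
     return dist
 
 roads = {"home": [("junction", 5), ("mall", 12)],
          "junction": [("mall", 4), ("office", 11)],
          "mall": [("office", 3)]}
 print(dijkstra(roads, "home"))   # shortest time home -> office is 12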


Maximum Flow 

The maximum flow problem is to find a feasible flow through a single-source, single-sink flow network that is maximum.[1] Sometimes it is defined as finding the value of such a flow. The maximum flow problem can be seen as a special case of more complex network flow problems, such as the circulation problem. The maximum s-t (source-to-sink) flow in a network is equal to the minimum s-t cut in the network, as stated in the max-flow min-cut theorem.
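A sketch of the Edmonds-Karp variant of Ford-Fulkerson (breadth-first augmenting paths) on a made-up four-node network; note that the answer equals the capacity of the minimum cut into the sink:

 # Sketch: Edmonds-Karp maximum flow on a tiny network; capacities and
 # node numbering are made up for illustration.
 from collections import deque
 
 def max_flow(cap, s, t):
     n = len(cap)
     flow = 0
     while True:
         parent = [-1] * n
         parent[s] = s
         q = deque([s])
         while q and parent[t] == -1:       # BFS for an augmenting path
             u = q.popleft()
             for v in range(n):
                 if parent[v] == -1 and cap[u][v] > 0:
                     parent[v] = u
                     q.append(v)
         if parent[t] == -1:
             return flow                    # no augmenting path left
         v, bottleneck = t, float("inf")
         while v != s:                      # find the path's bottleneck
             bottleneck = min(bottleneck, cap[parent[v]][v])
             v = parent[v]
         v = t
         while v != s:                      # push flow, update residuals
             u = parent[v]
             cap[u][v] -= bottleneck
             cap[v][u] += bottleneck
             v = u
         flow += bottleneck
 
 # nodes: 0 = source, 1 and 2 = intermediate, 3 = sink
 capacity = [[0, 3, 2, 0],
             [0, 0, 1, 2],
             [0, 0, 0, 2],
             [0, 0, 0, 0]]
 print(max_flow(capacity, 0, 3))   # 4, matching the min cut into the sink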

Transportation Problem 

In the transportation problem there are a set of nodes called sources, and a set of nodes called destinations. All arcs go from a source to a destination. There is a per-unit cost on each arc. Each source has a supply of material, and each destination has a demand. We assume that the total supply equals the total demand (possibly adding a fake source or destination as needed).
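The northwest-corner rule is a classical way to construct an initial feasible shipment for a balanced instance; a sketch (the supplies and demands are made up, and the rule ignores costs, so the result is feasible but not necessarily optimal):

 # Sketch: the northwest-corner rule builds an initial feasible solution
 # for a balanced transportation problem (quantities are made up).
 def northwest_corner(supply, demand):
     supply, demand = supply[:], demand[:]
     ship = [[0] * len(demand) for _ in supply]
     i = j = 0
     while i < len(supply) and j < len(demand):
         qty = min(supply[i], demand[j])
         ship[i][j] = qty
         supply[i] -= qty
         demand[j] -= qty
         if supply[i] == 0:
             i += 1                 # source exhausted, move down
         else:
             j += 1                 # destination satisfied, move right
     return ship
 
 supply = [30, 40, 30]              # units available at each source
 demand = [20, 50, 30]              # units required at each destination
 for row in northwest_corner(supply, demand):
     print(row)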


Optimal network topologies for local search with congestion

The problem of searchability in decentralized complex networks is of great importance in computer science, economics and sociology. We present a formalism that is able to cope simultaneously with the problem of search and the congestion effects that arise when parallel searches are performed, and obtain expressions for the average search cost--written in terms of the search algorithm and the topological properties of the network--both in the presence and absence of congestion. This formalism is used to obtain optimal network structures for a system using a local search algorithm. It is found that only two classes of networks can be optimal: star-like configurations, when the number of parallel searches is small, and homogeneous-isotropic configurations, when the number of parallel searches is large.

http://arxiv.org/abs/cond-mat/0206410


Some recent developments in the search for optimal network topologies

We report on some recent developments in the search for optimal network topologies. First we review some basic concepts of spectral graph theory, including adjacency and Laplacian matrices, paying special attention to the topological implications of having large spectral gaps. We also introduce related concepts such as "expanders", Ramanujan, and Cage graphs. Afterwards, we discuss two different dynamical features of networks, synchronizability and flow of random walkers, and show that both are optimized if the corresponding Laplacian matrix has a large spectral gap. From this, we show, by developing a numerical optimization algorithm, that maximum synchronizability and fast random walk spreading are obtained for a particular type of extremely homogeneous regular networks, with long loops and poor modular structure, that we call entangled networks. These turn out to be related to Ramanujan and Cage graphs. We argue also that these graphs are very good finite-size approximations to Bethe lattices, and provide optimal or almost optimal solutions to many other problems, for instance searchability in the presence of congestion or performance of neural networks. We then study how these results are modified when considering dynamical processes controlled by a normalized (weighted and directed) dynamics; much more heterogeneous graphs are optimal in this case. Finally, a critical discussion of the limitations and possible extensions of this work is presented.

http://biblioteca.universia.net/ficha.do?id=21806051
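The spectral quantities the abstract refers to are straightforward to compute for small graphs; a numpy sketch (the complete graph K4 is an arbitrary example):

 # Sketch: compute the Laplacian spectral gap of a small graph.
 # L = D - A; the gap is the smallest non-zero Laplacian eigenvalue
 # (a large gap correlates with good expansion and synchronizability).
 import numpy as np
 
 edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]  # K4, as an example
 n = 4
 A = np.zeros((n, n))
 for u, v in edges:
     A[u, v] = A[v, u] = 1
 L = np.diag(A.sum(axis=1)) - A
 eigenvalues = np.sort(np.linalg.eigvalsh(L))
 print("Laplacian spectrum:", eigenvalues)   # K4: [0, 4, 4, 4]
 print("spectral gap:", eigenvalues[1])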
 

Microtubular Systems

The microtubule system apparently occupies a high level in the information processing hierarchy of neurons. It seems likely that evolution has solved many of the above optimization issues, so in the search for optimal solutions, computer scientists may be able to learn lessons in network design from this powerful biological model.
 



Cylindrical Hexagonal Grid Hypercube Processor Networks

image: Microtubules.jpg


 

The Twenty Elemental Algebraic Structures


Representative electron density for amino acid side chains arranged in order of increasing size. 
From an experimental electron density map calculated at 1.5 Angstrom resolution. 





  

image: Lysine2.jpg

The above table (by Perry Moncznik) demonstrates the potential algebraic properties of an amino acid side chain. The coloring shows the fractal quadtree nature of the multiplication table. The lower left and upper right quadrants are colored to show the repeating patterns.

Fourier Transforms

Fourier series gives a picture of the frequency content of a signal. For example, a sharp transition in data generally corresponds to high-frequency sine waves (since only high-frequency sine waves have the fast-changing edge required), and so by cutting out the low frequencies you can pick out the edges. This is particularly useful in image processing.

At the other end of the scale, if your data has noise at a particular frequency (e.g. 50 or 60 Hz mains hum), you can pick apart the data into its constituent parts, remove the noise frequency, then stitch the rest back together to get a signal without the noise. This kind of notch filtering is very common in audio.
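A sketch of the notch-filtering idea using numpy's FFT (the sample rate, frequencies and amplitudes are made up for illustration):

 # Sketch: remove 50 Hz "mains hum" from a signal by zeroing the
 # offending FFT bins and inverting the transform.
 import numpy as np
 
 fs = 1000                                   # sample rate, Hz
 t = np.arange(0, 1, 1 / fs)                 # one second of samples
 clean = np.sin(2 * np.pi * 5 * t)           # the 5 Hz signal we want
 noisy = clean + 0.8 * np.sin(2 * np.pi * 50 * t)   # plus 50 Hz hum
 
 spectrum = np.fft.rfft(noisy)
 freqs = np.fft.rfftfreq(len(noisy), 1 / fs)
 spectrum[np.abs(freqs - 50) < 1] = 0        # notch out 50 Hz
 filtered = np.fft.irfft(spectrum, n=len(noisy))
 
 print("residual error:", np.max(np.abs(filtered - clean)))  # close to 0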

Molecular Fourier Transformation

Biomatic Kernel Methods

 

In computer science, kernel methods (KMs) are a class of algorithms for pattern analysis, whose best known element is the support vector machine (SVM). The general task of pattern analysis is to find and study general types of relations (for example clusters, rankings, principal components, correlations, classifications) in general types of data (such as sequences, text documents, sets of points, vectors, images, etc.).

KMs approach the problem by mapping the data into a high dimensional feature space, where each coordinate corresponds to one feature of the data items, transforming the data into a set of points in a Euclidean space. In that space, a variety of methods can be used to find relations in the data. Since the mapping can be quite general (not necessarily linear, for example), the relations found in this way are accordingly very general. This approach is called the kernel trick.

KMs owe their name to the use of kernel functions, which enable them to operate in the feature space without ever computing the coordinates of the data in that space, but rather by simply computing the inner products between the images of all pairs of data in the feature space. This operation is often computationally cheaper than the explicit computation of the coordinates. Kernel functions have been introduced for sequence data, graphs, text, images, as well as vectors.
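This "inner products without coordinates" property is easy to verify for a simple kernel. The sketch below uses the degree-2 polynomial kernel k(x, y) = (x . y)^2, whose explicit feature map for 2-D inputs is phi(x) = (x1^2, sqrt(2) x1 x2, x2^2):

 # Sketch: verify the kernel trick for the polynomial kernel
 # k(x, y) = (x . y)^2, comparing kernel evaluations against explicit
 # inner products in the mapped feature space.
 import numpy as np
 
 def phi(x):
     return np.array([x[0] ** 2, np.sqrt(2) * x[0] * x[1], x[1] ** 2])
 
 def kernel(x, y):
     return np.dot(x, y) ** 2
 
 rng = np.random.default_rng(0)
 X = rng.normal(size=(5, 2))                 # five random 2-D data points
 for x in X:
     for y in X:
         assert np.isclose(kernel(x, y), np.dot(phi(x), phi(y)))
 print("kernel values match explicit feature-space inner products")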

Algorithms capable of operating with kernels include Support vector machine (SVM), Gaussian processes, Fisher's linear discriminant analysis (LDA), principal components analysis (PCA), canonical correlation analysis, ridge regression, spectral clustering, linear adaptive filters and many others.

Because of the particular culture of the research community that has been developing this approach since the mid-1990s, most kernel algorithms are based on convex optimization or eigenproblems, are computationally efficient and statistically well-founded. Typically, their statistical properties are analyzed using statistical learning theory (for example, using Rademacher complexity).

External Links

GoDNA: THE GEOMETRY OF DNA

 Biology's next microscope, mathematics' next physics

http://plus.maths.org/content/biologys-next-microscope-mathematics-new-physics

 
