Diversity for Texts Builds in Language L(MT): Indexes Based in Theory of Information

José Luis Usó-Doménech1, Josué-Antonio Nescolarde-Selva1, Miguel Lloret-Climent1, Lucía González-Franco2

1Department of Applied Mathematics, University of Alicante, Alicante, Spain

2Biodiversity Research Institute CIBIO, University of Alicante, Alicante, Spain

Abstract

If one has a distribution of words (SLUNs or CLUNs) in a text written in the language L(MT), and one of the mathematical expressions of distribution that exist in the mathematical literature is adjusted to it, some parameter of the chosen expression can be considered as a measure of diversity. But because the adjustment is not always perfect, as a usual measure it is preferable to select an index that does not postulate a regularity of distribution expressible by a simple formula. The problem can then be approached statistically, without special interest in the organization of the text. Any monotonic function can serve as an index if it has a minimum value when all the elements belong to the same class, that is to say, when all the individuals belong to one and the same symbol, and a maximum value when each element belongs to a different class, that is to say, when each individual is of a different symbol. It should also meet certain conditions, such as: being not very sensitive to the extension of the text, and being invariant under a certain number of selection operations in the text. These operations can theoretically be random. The expressions that offer the most advantages are those coming from the Shannon-Weaver theory of information. Based on them, the authors develop a theoretical study of indexes of diversity to be applied to texts built in the modelling language L(MT), although nothing impedes their application to texts written in natural languages.

Cite this article: Usó-Doménech, J. L., Nescolarde-Selva, J. A., Lloret-Climent, M., & González-Franco, L. (2014). Diversity for Texts Builds in Language L(MT): Indexes Based in Theory of Information. American Journal of Systems and Software, 2(5), 113-120.

1. Introduction: The Textual Grammar

The ecological model as a whole, even though its methodology is not separated from that of other models, possesses certain characteristics that specify its functions in its own original form. On the one hand, such models must be separated from models based on physical, mechanical causes. The existence of feedback expresses the conditions of adaptation and regulation, a structural response to equally structural signals, constructed on documentary bases typical of the theories of biology, ecology and socio-economics. On the other hand, a special effort has to be made to overcome the facility and ambiguity of intuition. All models have in common that they encode experience and always involve signs, signals, syntax, semantics and an ability to decode and derive meaning from what is encoded (Gash, 2014).

That is to say, such a model meets the conditions of a language. The authors consider the idea of language, following Chomsky (1965, 1969), as:

1. The elements are discrete and arbitrary. The only elements which are going to be relevant for the grammatical description are discrete ones.

2. Combinations of elements are linear and denumerable.

3. Not all combinations of elements constitute a sentence. We describe a sentence (word-string) as an occurring mathematical sequence that obeys the regularities for sentencehood required by the "model grammar". Not all finite sequences of elements occur as sentences. The fact that not all combinations occur makes it possible to define larger elements as restrictions on the combinations of smaller elements.

4. In language there is redundancy with respect to the sequence of ultimate elements, and this redundancy is carried by a system of intermediate elements.

5. This language has semantic meaning: the meaning of entities and the meaning of grammatical relations among them.

Obviously, if what we intend is to build a mathematical model of a system, we will use the formal language of mathematics. Using mathematical expressions, a level of objectivity can be reached, or at least approached as closely as possible. The models that we propose are those based on System Dynamics (Forrester, 1961) with the modifications made by the authors (Usó-Domènech et al., 1997); it should thus be clear that we do not expect to create a generic theory of models, but a specific form, as we consider these models one of the most generalized and possibly most powerful among the wide range of alternatives offered to the modeler. For this special type of models the authors have built a language which they have called L(MT), whose syntax is, on a broad scale, the following (Nescolarde-Selva and Usó-Doménech, 2013; Sastre-Vazquez et al., 2000; Usó-Domènech et al., 1997, 2000a, b, 2001, 2002, 2006a, b, 2014; Villacampa & Usó-Domènech, 1999; Villacampa et al., 1999a, b):

We define the associative field of a measurable attribute w as the set constituted by all possible symbols of that measurable attribute. This set is denumerable. In practice, it is a requisite to work with a subset whose cardinal is an integer number. The associative field of a measurable attribute w will be called First Order Vocabulary (FOV), or vocabulary of order one. Its elements will be called t-symbols and carry two indices, i and j, where i represents an index of the symbol and j denotes the order of transformation. The measurable attributes are a particular case of the t-symbols. The set X formed by the FOVs generated by the set of measurable attributes will be called Primary Lexicon (PL), or alphabet of the n-order monoads.

The primitive monoad, or alphabet A, is formed by a set W of characters used to express measurable attributes, a set D of differential functions with respect to time, and a set of n-order monoads. The set W is formed by the input and state variables.

The textual alphabet is built jointly from the alphabet A and the set R of real numbers (model parameters).

The Simple Lexical Units (SLUN) are constituted by the elements of the set A-D.

The Operating Lexical Units or operator-LUNs (op-LUNs) are the mathematical signs + and -.

The Ordenating Lexical Units or Ordenating-LUN (or-LUN) are the signs =, <, >.

The Special Lexical Unit (SpLUN) is the sign d/dt, which belongs to the alphabet A and defines the beginning of a phrase (state equation). The differential vocabulary or d-vocabulary of a measurable attribute w is the set formed by all partial derivatives of any order of w with respect to any other measurable attribute and to time t.

The primary differential vocabulary is the set formed by all partial derivatives of order 1 of w with respect to any other measurable attribute and to time t.

Secondary and higher order differential vocabularies may also be defined. For ease of calculation in practical complex system modeling, we define a subset of the primary differential vocabulary, called dimensional primary differential vocabulary, consisting of all partial first-order derivatives of the measurable attribute w with respect to the three spatial dimensions X, Y, Z and time t.

To implement the models of System Dynamics (Forrester, 1961), a subset of cardinal 1, whose only element is the partial derivative of the p-symbol with respect to time, will be used.

Let there be a set of measurable attributes. The differential lexicon, d-L, is the set formed by the d-vocabularies generated by the measurable attributes.

The elements of d-L will be called d-symbols. The characters (, ), {, }, [, ] are simple signs, since they lack meaning; they are the equivalent of the signs ?, !, ;, (, ) in natural languages.

The Separating Lexical Units (s-LUNs) are the signs * and /.

The Composed Lexical Units (CLUNs), or syllables, are the strings of SLUNs separated by s-LUNs; that is, a CLUN is constituted by a SLUN, or a chain of them, and is separated from other units by an op-LUN or an or-LUN.

A word is a SLUN or a CLUN. The signs + and - preceding other symbols act as word separations.

The opsep vocabulary is the one formed by the operating and separating LUNs.

A simple sentence is a flow variable (Forrester, 1961). It is built by a CLUN or a combination of CLUNs.

The vocabulary of order n is the one formed by simple sentences.

The set of all vocabularies of any order is called the t-Lexicon, t-L; it is formed by the FOV and the simple-sentence vocabularies.

The set will be a subset of t-L.

Two vocabularies are said to be related linguistically in an n-order relationship if and only if certain conditions on their orders hold; the whole of all such linguistic relationships will be considered. Let there be vocabularies of orders n, m, ..., l, respectively. We say that they are related linguistically if and only if a vocabulary exists satisfying the corresponding relationship with each of them.

A complex sentence is each ordinary differential equation (ODE) or state equation, built by a linear combination of simple sentences. A text T = (L, A) is the concatenation of complex sentences, determined by the argument A of the text, or semantic links between these complex sentences.

The lexicon L of a text is the union of the t-Lexicon and the differential lexicon. We can say that the text is written in a formal language, which we call L(MT).

Mathematical modeling of complex structural systems is the process of producing texts of mathematical relations with the rules defined by the syntax of L(MT), with a homomorphism with respect to a conceptual semiotic system and/or ontological reality.
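To make this lexical structure concrete, the following minimal sketch (in Python) splits a simple sentence into LUNs and groups its CLUNs. The function names `scan` and `clun_split` and the sample equation are our own illustrative assumptions, not part of the L(MT) definition:

```python
import re

# Hedged sketch: a toy lexical scanner in the spirit of L(MT).
OP_LUN = {"+", "-"}        # operating LUNs
S_LUN = {"*", "/"}         # separating LUNs
OR_LUN = {"=", "<", ">"}   # ordenating LUNs

def scan(sentence: str):
    """Split a sentence into (kind, token) lexical units."""
    tokens = re.findall(r"d/dt|[A-Za-z_]\w*|\d+\.?\d*|[+\-*/=<>]", sentence)
    units = []
    for t in tokens:
        if t == "d/dt":
            units.append(("SpLUN", t))   # marks the start of a state equation
        elif t in OP_LUN:
            units.append(("op-LUN", t))
        elif t in S_LUN:
            units.append(("s-LUN", t))
        elif t in OR_LUN:
            units.append(("or-LUN", t))
        else:
            units.append(("SLUN", t))    # variables, t-symbols, parameters
    return units

def clun_split(units):
    """CLUNs: maximal runs of SLUNs joined by s-LUNs (* or /)."""
    cluns, current = [], []
    for kind, t in units:
        if kind in ("SLUN", "s-LUN"):
            current.append(t)
        else:
            if current:
                cluns.append("".join(current))
                current = []
    if current:
        cluns.append("".join(current))
    return cluns

units = scan("d/dt x1 = k1*x1 - k2*x1*x2")
print(clun_split(units))   # ['x1', 'k1*x1', 'k2*x1*x2']
```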

The past few years have seen rapid development of novel high-throughput technologies that create large-scale data. These data are commonly represented as networks with nodes. A fundamental challenge for bioinformatics is how to interpret this wealth of data to elucidate interaction patterns and biological characteristics. One significant purpose of this interpretation is to predict unknown functions. Although many approaches have been proposed in recent years, the challenge remains of how to reasonably and precisely measure functional similarities to improve prediction effectiveness (Yan et al., 2014; Marashi and Tefagh, 2014; Rubin et al., 2014; Zhu et al., 2010).

2. Diversity and Information in a Text-Model

Consider the lexicon L and a sign system S representing a set of texts on the lexicon L. By definition, the sign system S consists of all texts generated by the argument A. A textual space T = <A, S> is thereby defined. For the signs of the lexicon L, a number function ε is defined, interpreted as the complexity of the generation of the sign in the argument A. With each text there is associated a complexity of generation E(T), equal to the sum of the complexities of the signs appearing in the text:

E(T) = \sum_{s \in L} m_T(s) \, \varepsilon(s)   (1)

where m_T(s) is the number of appearances of the sign s in the text T, or frequency of s. Obviously, \sum_{s \in L} m_T(s) = N, the length of the text (number of equal or different signs).
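As a minimal numeric sketch of (1), with a toy sign sequence and assumed complexity values of our own choosing:

```python
# Sketch of formula (1): E(T) = sum_s m_T(s) * eps(s).
# The complexities eps are illustrative values, not taken from the paper.
from collections import Counter

text = ["x1", "k1", "x1", "x2", "k1", "x1"]   # a toy sequence of signs
eps = {"x1": 2.0, "x2": 3.0, "k1": 1.0}       # assumed generation complexities

m = Counter(text)                              # m_T(s): frequency of each sign
E = sum(m[s] * eps[s] for s in m)              # E(T)
N = sum(m.values())                            # text length: sum_s m_T(s)
print(E, N)                                    # 11.0 6
```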

2.1. Thermodynamics of the Text

Using a thermodynamic analogy, E(T) will be the energetic cost, or energy of generation, of the text T. Mandelbrot (1954, 1961) proposed for Zipf's law (1949) the following:

f(r) = P \, (r + \rho)^{-B}   (2)

where r is the rank of a sign, B and ρ are two parameters depending on the text T, and P is determined by the normalization condition

\sum_{r=1}^{V} P \, (r + \rho)^{-B} = 1   (3)

with V the number of different signs of the text T. Formula (2) can be written in probabilistic form as

p_r = P \, (r + \rho)^{-B}   (4)

The parameter B is the inverse of the temperature of information θ of the text T, B = 1/θ. The entropy H of the text will be determined by Shannon's formula, H = -\sum_r p_r \log_2 p_r. H grows continuously from 0 to its maximum when θ goes from 0 to ∞, and H is determined for a given θ. Mandelbrot's criterion consists of making the variation of the free energy zero, that is to say, the energy excess with respect to the energy per symbol in Shannon's formula. A will be the usable energy of Helmholtz, that is to say, the energy available for dissipation; the remainder is the enthalpy or heat content of information. Therefore the length N can be assimilated to a text volume, and P, θ and H to state variables. The existence of a hypothetical text volume supposes the existence of a "recipient" in which the components of this text exert a hypothetical pressure of information P. The entropy H measures how much information the observer lacks about the structure of a system that is disordered for him (a numerical sketch of formulas (2)-(4) closes this subsection). From (2),

(5)

1. The entropy H is a growing magnitude that goes from 0 to ∞. In the latter case the information I will be 0.

2. If the text grows indefinitely, the signs of very high rank add very little to H or to E, which is logical. The information I will be 0. An infinite text is equal to an infinite volume, formed by infinite signs with an infinitely rigid structure, without any movement (appearance) of the signs. Then we are at the absolute zero of information. The absolute zero of information will correspond to the maximum of information.

3. If θ tends to 0, then H tends to 0, and the information will tend to zero. The system will tend to be more and more structured.

4. If N = 0, then H = 0 and therefore the information will be I = 0. The structure is zero. An empty volume corresponds to the empty text.

5. To take information means to make the structure more complex and stronger, and logically to bring the temperature of information of the system nearer to the absolute zero of information. Contrarily, to give information means to make the structure weaker and to bring the temperature of information nearer to infinity.

6. A system is informatively colder with regard to another when its temperature of information is nearer to the absolute zero of information than that of the other system, which will be considered informatively hotter.

7. If an informatively cold system contacts another informatively hotter one, the first one will cool down further, at the same time that the second system warms, increasing its temperature of information.

8. A system takes information from another when it makes its structure more complex and therefore diminishes its temperature of information.
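The following sketch (our own illustration, with assumed values of V, ρ and B) evaluates the rank probabilities (4) and shows that the entropy H of the resulting distribution grows as the temperature of information θ = 1/B grows:

```python
import math

def zipf_mandelbrot(V, B, rho):
    """Probabilities p_r = P*(r+rho)^(-B), r = 1..V, with P fixed by (3)."""
    weights = [(r + rho) ** (-B) for r in range(1, V + 1)]
    P = 1.0 / sum(weights)          # normalization constant of (3)
    return [P * w for w in weights]

def entropy(p):
    """Shannon entropy in bits."""
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

V, rho = 1000, 2.7                  # assumed illustrative values
for B in (2.0, 1.5, 1.2):           # theta = 1/B grows as B decreases
    H = entropy(zipf_mandelbrot(V, B, rho))
    print(f"B={B}  theta={1/B:.2f}  H={H:.3f} bits")
# H increases as the temperature of information theta increases
```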

2.2. The Hypothesis of the "Soup State"

Before the modeler prepares to create the text, there exists a system constituted by the language that must be used in the generation of this text, in our case the L(MT) language. This system is constituted by all primitive symbols (n-order monoads, SLUNs) and their corresponding grammars. In this initial state we can thus consider the existence of a singularity formed by an infinite number of elements, in a minimum energy state, of such a form that they belong to the structure of the language, but without any relationship to each other that would constitute a text. In this state there will be an infinite temperature of information, a volume of zero and infinite entropy. We speak of this singularity in the sense defined by current cosmology (Davies, 1983), since it represents absolute uncognoscibility; Hawking's principle of absolute ignorance applies to it, and it therefore lacks all information, that is to say, I = 0. This singularity is in a state of maximum disorder or thermodynamic equilibrium. If it is compared with a perfect gas, the most probable state of a certain quantity of gas locked in a recipient is uniform density: the positions of the individual molecules are random, that is to say, any configuration among the high number of possible ones is equally probable. This is the situation of our system before beginning to use the language, when one only has this singularity, which we have defined as the "soup state", constituted by all the symbols of the language and generated by their corresponding grammars. This is the limit of our measure of information, the situation of maximum statistical entropy.

2.3. The Non-Selective Device

One of the essential features of any text is its degree of organization, which results notably in a certain specific abundance distribution, a certain spectrum of relative frequencies from the most abundant symbol to the rarest symbol. However, the relative frequency of a symbol is nothing other than its probability of apparition in a determined text, when the apparition is produced at random by a technique or non-selective device. This probability is unknown. The theory of said non-selective device is as follows.

Suppose Nature, or the Ontological System, is a discrete memoryless source generating data. This source emits a sequence of symbols belonging to a finite and fixed alphabet (Abramson, 1980) whose elements form a data structure. These symbols are chosen with a fixed law of probability and we will admit that they are statistically independent. The probabilities p_1, p_2, ..., p_n with which the symbols s_1, s_2, ..., s_n are presented are fixed. The quantity of information generated by the occurrence of s_i is:

I(s_i) = -\log_2 p_i   (6)

This is also called the surprise value of the symbol. The formula for the calculation of the mean quantity of information associated with the source is:

I(S) = -\sum_{i=1}^{n} p_i \log_2 p_i   (7)

That is to say, the surprise values of each one of the possibilities of the source are taken and weighted according to their occurrence probability. The sum of them all will be the quantity of information generated by the source. As p_i approaches 1, the quantity of information associated with the occurrence of the symbol s_i tends to 0; in the limit case in which the probability of the symbol is 1, the occurrence of s_i does not generate any information. That is to say, information is not generated by the occurrence of symbols for which alternative possibilities do not exist.
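A short sketch of formulas (6) and (7), with an assumed source distribution of our own choosing:

```python
import math

# Formulas (6) and (7): surprise value of a symbol and mean information
# of a discrete memoryless source. The probabilities are an assumed example.
p = {"s1": 0.5, "s2": 0.25, "s3": 0.125, "s4": 0.125}

for s, pi in p.items():
    print(s, "surprise:", -math.log2(pi), "bits")     # formula (6)

H = -sum(pi * math.log2(pi) for pi in p.values())     # formula (7)
print("mean information:", H, "bits")                 # 1.75 bits
```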

We will designate by r a receiver of information about the source s. The sign s will be formed by the elements of the alphabet A-D, that is to say, the union of the set W of characters used to express measurable attributes and the set of n-order monoads, as they are before the interaction in the "soup state" (Figure 1).

Figure 1. Source-reception process

In this way, I_s(r) is the information received in r of, or about, s; the subindex indicates that part of I(r) which is received information about s. The information transmitted from s to r is the total quantity of information available in r, I(r), minus a quantity R, or noise, and it will be expressed as:

I_s(r) = I(r) - R   (8)

In the same way

I_r(s) = I(s) - \varepsilon   (9)

where ε is the equivocity, the information generated in s which is not transmitted to r. The information generated in s is divided into two parts:

1. The part I_r(s) that is transmitted to r.

2. The part ε that is not transmitted, or equivocity. At the same time, the information that there is in r can be divided in a similar way into two parts:

a. The part I_s(r) that represents the received information about s.

b. The remaining part, whose source is not s, or noise R. An increase of R hides a part of the sign s, and in this way I_r(s) will decrease by means of the increase of the equivocity ε.

If the noise increases the quantity of information that gets lost, it diminishes the quantity of transmitted information; but if it does not affect I(s), then I(s) continues being the same.

In the classical Theory of Information (Abramson, 1980), an equivalence is established between R and ε, resulting from having chosen the sets of possibilities of the source and of the receiver in such a way that R = ε. If we imagine changes in the set of possibilities that defines s without the corresponding changes in the set of possibilities that defines r, and vice versa, there need not exist a necessary equivalence between them. In case there was a maximum dependence between what happens in s and what happens in r, then R = ε = 0 and the quantity of transmitted information would be the highest, with I_s(r) = I(r) = I(s). Let p(r_j | s_i) be the conditional probability of r_j given s_i. One will be able to calculate the contribution of s_i to the noise R by means of

R(s_i) = -\sum_{j} p(r_j | s_i) \, \log_2 p(r_j | s_i)   (10)

The equivocity ε will be calculated in a similar way

\varepsilon(r_j) = -\sum_{i} p(s_i | r_j) \, \log_2 p(s_i | r_j)   (11)

The flow of information depends on underlying causal processes. But it is necessary to distinguish between the causal relationships and the informational relationships existing between s and r. If the real data were always the same, that is to say, if there was an experimental constancy (which does not happen in reality), there would be a strict (strong) causal dependence between s and r.

It happens that s is the cause of r depending on the real data. The sign r does not say exactly what happened in the source s, while the data do say it. From an informational point of view, the data carry more information about what happened in s. Although, for a certain structure of data, each symbol possesses a concrete significance or adjustment, a temporal change of this structure can determine a change in the symbol that represents it: a change in its significance, or adjustment of the symbol to the fact, and in its signifier, or decoding on the part of the observer (Villacampa et al., 1999a).
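The following sketch computes the noise R, the equivocity ε and the transmitted information from an assumed joint distribution p(s_i, r_j) of our own choosing, and checks that (8) and (9) give the same value:

```python
import math

log2 = lambda x: math.log2(x) if x > 0 else 0.0

# Assumed toy joint distribution p(s_i, r_j); the numbers are our own example.
joint = {("s1", "r1"): 0.4, ("s1", "r2"): 0.1,
         ("s2", "r1"): 0.1, ("s2", "r2"): 0.4}

ps, pr = {}, {}
for (s, r), p in joint.items():
    ps[s] = ps.get(s, 0) + p            # marginal of the source
    pr[r] = pr.get(r, 0) + p            # marginal of the receiver

H_s = -sum(p * log2(p) for p in ps.values())   # I(s), information in the source
H_r = -sum(p * log2(p) for p in pr.values())   # I(r), information in the receiver

# Noise R, summing (10) over the source; equivocity eps, summing (11) over the receiver
R = -sum(p * log2(p / ps[s]) for (s, r), p in joint.items())
eps = -sum(p * log2(p / pr[r]) for (s, r), p in joint.items())

print("transmitted:", H_r - R)     # formula (8): I(r) - R
print("transmitted:", H_s - eps)   # formula (9): I(s) - eps, same value
```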

3. Text and Indetermination

What one can know is the relative frequency of every symbol in a determined model-text, a frequency that differs more or less from its probability of occurrence or apparition in the text. However, according to the law of large numbers, one knows that the observed relative frequency tends toward the probability of apparition, that is to say, toward the limiting relative frequency, when the total abundance or size of the sample increases. The symbol to which an individual apparition belongs is uncertain, and the degree of uncertainty or indetermination of the result is a function of the relative frequencies of the symbols in the text, therefore of its diversity. Consider a lexicon L'. Suppose a text formed by S complex sentences (state equations); on the right-hand side (flow functions) of each one of them, the simple sentences are formed by SLUNs or CLUNs, generated through the different generative grammars starting from variables or different primitive symbols. Each primitive monoad generates an associative field whose number of elements (SLUNs) is given as a function of m, the number of first-order monoads (determined by the modeler), and n, the order of the monoad (of order equal or superior to 2) (Villacampa & Usó-Domènech, 1999). The number of possible different SLUNs per complex sentence in a text T written in L' will be

(12)

The number of possible different SLUNs for the text T written in L' will then be:

(13)

Let T be a text written in L', with N the total number of symbols and n the number of different symbols, such that n ≤ N. If only one symbol of L' exists in T, there is no indetermination, because the result is certain. If two symbols exist, one very abundant in its apparition in all texts and the other very rare, one has big odds of getting the first, and the indetermination is weak. The indetermination will be maximal if there are two symbols of even relative abundance of apparition, because in this case there are no greater odds of one appearing than the other. If instead of two there are n symbols having equal relative abundances, and therefore identical apparition probabilities, the indetermination will be even bigger. It is therefore important to be able to encode the degree of indetermination. One will first consider the case of a text T comprising n equally abundant symbols. The structure of T conforms to the diagram in which each of the n symbols appears with the same relative frequency 1/n.

The indetermination of the result of an apparition must be an increasing function of n: as has been said before, the bigger n is, the more difficult it is to predict the result of an apparition. Besides, the function must annul itself for n = 1. We will write therefore: Indetermination = f(n), with f(1) = 0. To determine the shape of the function f(n), it is necessary to impose a supplementary condition. If two apparitions occur in the text, the result of one apparition not influencing the other, the double apparition has n·n possible and equally likely compositions. The corresponding indetermination is therefore f(n·n). One will impose that it be the sum of the indeterminations of each of the apparitions, which comes back to imposing on the function f the condition f(n·n) = f(n) + f(n). One demonstrates that the only function that satisfies these various conditions at the same time is the logarithmic function. One will therefore define the indetermination by the logarithm of the number of possible and equally likely cases for an apparition: Indetermination = log n. The indetermination will be expressed in bits by the logarithm in base 2 of the possible and equally likely cases: log_2 n. In a text T such as the one considered, all symbols play an equivalent role, and one can say that each of the n present symbols introduces an element, or part of indetermination, equal to the total indetermination divided by n: (log_2 n)/n. This expression can be rewritten as -(1/n) log_2 (1/n), a shape that has the advantage of involving only the relative frequency 1/n of the symbol. The total indetermination in bits, H, will be the sum of the n elements of indetermination introduced by every symbol:

H = \sum_{i=1}^{n} \left( -\frac{1}{n} \log_2 \frac{1}{n} \right) = \log_2 n   (14)

Actually, the words of our text T do not correspond to the simplistic diagram considered previously. In a text comprising n different symbols, each one has a relative abundance f_i different from the others, and the total indetermination becomes

H = -\sum_{i=1}^{n} f_i \log_2 f_i   (15)

which is Shannon's formula.

The maximal value corresponds to the theoretical case, impossible in our theory, where all n symbols of the text would have the same relative abundance 1/n. One recovers H = log_2 n. The minimal value would be obtained in the case of a text T with N individuals where n - 1 symbols are represented by only one individual each and one symbol by all the other individuals. The probability of apparition of the abundant symbol would be very close to 1 and the probabilities of apparition of the others very close to zero. The indetermination would be a sum of terms all close to zero. Finally, to a text of given composition with n symbols corresponds an indetermination whose value, expressed in bits, is comprised between 0 and log_2 n. Here, N is equivalent to the text volume, and the indetermination is between 0 and log_2 n.
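A minimal sketch of the bounds just described, using Shannon's formula (15); the abundance vectors are our own examples:

```python
import math

def indetermination(freqs):
    """Shannon's formula (15): H = -sum f_i log2 f_i, in bits."""
    return -sum(f * math.log2(f) for f in freqs if f > 0)

n = 4
print(indetermination([1 / n] * n))               # equal abundances: log2 n = 2 bits
print(indetermination([0.97, 0.01, 0.01, 0.01]))  # one dominant symbol: near 0
# For n symbols, the indetermination always lies between 0 and log2 n.
```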

We consider a text T comprising N individuals distributed among n different symbols with abundances N_1, N_2, ..., N_n, like a message composed of different signals, which brings certain information on the composition and the structure of the source of reality from which it has been extracted. There is a deep analogy between the two notions of information and indetermination. It is easily understood if one considers information as the difference between the indetermination on the composition of the message before and after it is known. If the message has K possible and equally likely compositions, the a priori indetermination is equal to log_2 K. The a posteriori indetermination is null since, the message being known, there is no uncertainty about its composition. The information is I = log_2 K.

In a text of known composition, the assignment of an individual to some symbol is considered as an elementary signal, and the problem is to evaluate the number of possible and equally likely compositions of the message. If each individual were identifiable and one noted the order in which the individuals appeared, the text would be assimilated to a message composed of N different signs placed one after another in a determined order. The number of possible and equally likely arrangements that one can get by arranging N objects one after another is given by N!. Therefore, since the individuals of one and the same symbol are interchangeable, the number of possible and equally likely compositions K will be:

K = \frac{N!}{N_1! \, N_2! \cdots N_n!}   (16)

The total information in bits will be equal to:

I = \log_2 K = \log_2 \frac{N!}{N_1! \, N_2! \cdots N_n!}   (17)
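Formulas (16) and (17) can be checked numerically; the abundances below are our own toy example:

```python
import math

def compositions(counts):
    """Formula (16): K = N! / (N_1! ... N_n!)."""
    N = sum(counts)
    K = math.factorial(N)
    for Ni in counts:
        K //= math.factorial(Ni)   # exact integer division at every step
    return K

counts = [4, 3, 2, 1]              # abundances N_i of each symbol, N = 10
K = compositions(counts)
I = math.log2(K)                   # formula (17): total information in bits
print(K, I)                        # 12600 compositions, ~13.62 bits
```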

In the text T, fixed by its argument (Villacampa et al., 1999a), the number of state equations (complex sentences) is determined. Therefore, the differential symbol can be considered as not movable. However, the SLUNs form part of the flow equation, or sentence, where the possibility of exchanges can exist.

If we call N_i the frequencies of each one of the SLUNs or symbols, then the number of possible and equally probable compositions per complex sentence can be calculated using equation (16):

K_{cs} = \frac{N_{cs}!}{N_1! \, N_2! \cdots N_n!}   (18)

and (17)

I_{cs} = \log_2 K_{cs}   (19)

When the abundances N_i are all big, and to the extent that they are, one can replace the factorials by the approximate values given by Stirling's formula; and when the abundances are all very big, by \log_2 N! \approx N \log_2 N. One has then:

I \approx N \log_2 N - \sum_{i=1}^{n} N_i \log_2 N_i   (20)

And as N = \sum_{i=1}^{n} N_i, one can write

I \approx \sum_{i=1}^{n} N_i \log_2 N - \sum_{i=1}^{n} N_i \log_2 N_i = -\sum_{i=1}^{n} N_i \log_2 \frac{N_i}{N}   (21)

One does not change the value of I in any way by multiplying each of the terms under the summation sign by N/N and then factoring N out. One thus gets the expression

I \approx -N \sum_{i=1}^{n} \frac{N_i}{N} \log_2 \frac{N_i}{N}   (22)

In this shape, one notes that the total information tends, when all the abundances are very big, toward a value that depends only on the relative frequencies N_i/N and on the total abundance N. Since one is interested in the structure of the text, it is therefore logical to consider not the total information, but the average information per individual:

\bar{I} = \frac{I}{N} = \frac{1}{N} \log_2 \frac{N!}{N_1! \, N_2! \cdots N_n!}   (23)

This average information is expressed in bits and, provided all the N_i are sufficiently big, tends toward a value that is none other than the formula of Shannon:

H = -\sum_{i=1}^{n} \frac{N_i}{N} \log_2 \frac{N_i}{N}   (24)

By the approximation (22), the right-hand side of equation (23) is similar to the per-individual form

\frac{I}{N} \approx -\sum_{i=1}^{n} \frac{N_i}{N} \log_2 \frac{N_i}{N}   (25)

and as this is the index of diversity of Shannon (24), formula (25) can be written, in relative frequencies f_i = N_i/N:

\bar{I} \approx H = -\sum_{i=1}^{n} f_i \log_2 f_i   (26)

while from (23), without any approximation, the following index of diversity appears:

\bar{I} = \frac{1}{N} \log_2 \frac{N!}{N_1! \, N_2! \cdots N_n!}   (27)

an index of diversity that is equivalent to the one obtained by Margalef (1958) for the species of an animal population in an ecosystem.

The average information per individual, given by formula (27), tends toward the information given by formula (26) when all the symbol abundances are sufficiently big. In practice, the two formulas give appreciably the same value when the samples are of large size, but the values are much more different when the total abundances of the samples are weaker.
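This convergence can be illustrated numerically. In the sketch below (abundance vectors of our own choosing), `shannon` implements index (26) and `brillouin` index (27); the value of (27) stays below that of (26) and approaches it as the abundances grow:

```python
import math

def shannon(counts):
    """Index (26): H = -sum (N_i/N) log2 (N_i/N)."""
    N = sum(counts)
    return -sum(Ni / N * math.log2(Ni / N) for Ni in counts if Ni)

def brillouin(counts):
    """Index (27): (1/N) log2( N! / prod N_i! ), as in Margalef (1958)."""
    N = sum(counts)
    logK = math.lgamma(N + 1) - sum(math.lgamma(Ni + 1) for Ni in counts)
    return logK / math.log(2) / N     # convert nats to bits, then per individual

for counts in ([4, 3, 2, 1], [40, 30, 20, 10], [400, 300, 200, 100]):
    print(shannon(counts), brillouin(counts))
# The (27) value is always lower than the (26) value and converges
# to it as the abundances N_i grow, as discussed in the conclusions.
```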

4. Conclusions

The index of diversity (27) is again an evaluation of the diversity, since its value tends toward that of (26) when N increases, but it is a biased evaluation, because the values of (27) are always lower than those given by formula (26). One can convince oneself of this by the following reasoning. To pass from (27) to (26), it is sufficient to replace the factorials by their approximate values deduced from Stirling's formula. However, this always leads to approximations by default, which are all the less good for smaller numbers. In formula (27), some of the numbers that appear in the denominator are frequencies of rare symbols and are in general not very elevated. Stirling's formula leads in this case to an evaluation by default of the denominator and by excess of the logarithm. In other terms, formula (26) gives a value in bits superior to the one given by formula (27), and the gaps are all the bigger the smaller the size of the text. The bias comes from the fact that, in the calculation of the information formulated by (27), one knows at the start not only the relative frequencies of the different symbols in the text, but also the total abundance N. When one calculates the information signal by signal, the first signal will have, whatever the chosen order, a probability of apparition equal to its relative frequency in the text, and will provide a quantity of information equal to the one calculated by formula (26). But for the following signals, the probabilities will change because of the knowledge of the previous signals. In particular, the last signal will provide null information, because it will be perfectly determined. The average information per signal decreases regularly from the first to the last, and the general average will be weaker than the value given by formula (26).

References

[1] Abramson, N. 1980. Information Theory and Coding. McGraw-Hill Book Company, Inc.

[2] Chomsky, N. 1965. Aspects of the Theory of Syntax. MIT Press, Cambridge, Massachusetts.

[3] Chomsky, N. 1969. Syntactic Structures. Mouton.

[4] Davies, P. 1983. God and the New Physics. J.M. Dent & Sons Ltd, London.

[5] Forrester, J.W. 1961. Industrial Dynamics. MIT Press, Cambridge, MA.

[6] Gash, H. 2014. Fixed or probable ideas. Foundations of Science. In press.

[7] Mandelbrot, B. 1954. Structure formelle des textes et communication. Deux études. Word, 10, 1-27.

[8] Mandelbrot, B. 1961. Word frequencies and Markovian models of discourse. In: Structure of Language and its Mathematical Aspects, Proceedings of Symposia in Applied Mathematics, 12. American Mathematical Society, Providence, Rhode Island, 190-219.

[9] Marashi, S.-A. and Tefagh, M. 2014. A mathematical approach to emergent properties of metabolic networks: Partial coupling relations, hyperarcs and flux ratios. Journal of Theoretical Biology, 355, 185-193.

[10] Margalef, R. 1958. Information theory in ecology. General Systems, 3, 36-71.

[11] Nescolarde-Selva, J.A. and Usó-Doménech, J.L. 2013. An introduction to Alysidal Algebra (V). Kybernetes, 42(8), 1248-1264.

[12] Rubin, K.J., Lawler, K., Sollich, P. and Ng, T. 2014. Memory effects in biochemical networks as the natural counterpart of extrinsic noise. Journal of Theoretical Biology, 357, 245-267.

[13] Sastre-Vazquez, P., Usó-Domènech, J.L. and Mateu, J. 2000. Adaptation of linguistic laws to ecological models. Kybernetes, 29(9/10), 1306-1323.

[14] Usó-Domènech, J.L., Mateu, J. and Lopez, J.A. 1997. Mathematical and statistical formulation of an ecological model with applications. Ecological Modelling, 101, 27-40.

[15] Usó-Domènech, J.L., Mateu, J. and Lopez, J.A. 2000a. MEDEA: software development for prediction of Mediterranean forest degraded areas. Advances in Engineering Software, 31, 185-196.

[16] Usó-Domènech, J.L., Villacampa, Y., Mateu, J. and Sastre-Vazquez, P. 2000b. Uncertainty and complementary principles in flow equations of ecological models. Cybernetics and Systems: An International Journal, 31(2), 137-160.

[17] Usó-Domènech, J.L., Sastre-Vazquez, P. and Mateu, J. 2001. Syntax and first entropic approximation of L(MT): A language for ecological modelling. Kybernetes, 30(9/10), 1304-1318.

[18] Usó-Domènech, J.L. and Sastre-Vazquez, P. 2002. Semantics of L(MT): A language for ecological modelling. Kybernetes, 31(3/4), 561-576.

[19] Usó-Domènech, J.L., Vives Maciá, F. and Mateu, J. 2006a. Regular grammars of L(MT): a language for ecological systems modelling (I) - part I. Kybernetes, 35(6), 837-850.

[20] Usó-Domènech, J.L., Vives Maciá, F. and Mateu, J. 2006b. Regular grammars of L(MT): a language for ecological systems modelling (II) - part II. Kybernetes, 35(9/10), 1137-1150.

[21] Usó-Domènech, J.L., Nescolarde-Selva, J.A. and Lloret-Climent, M. 2014. Behaviours, processes and probabilistic environmental functions in h-open systems. American Journal of Systems and Software. Accepted.

[22] Villacampa, Y. and Usó-Domènech, J.L. 1999. Mathematical models of complex structural systems. A linguistic vision. International Journal of General Systems, 28(1), 37-52.

[23] Villacampa, Y., Usó-Domènech, J.L., Mateu, J., Vives, F. and Sastre, P. 1999a. Generative and recognoscitive grammars in ecological models. Ecological Modelling, 117, 315-332.

[24] Villacampa-Esteve, Y., Usó-Domènech, J.L., Castro-Lopez-M, A. and Sastre-Vazquez, P. 1999b. A text theory of ecological models. Cybernetics and Systems: An International Journal, 30(7), 587-607.

[25] Yan, W., Sun, M., Hu, G., Zhou, J., Zhang, W., Chen, J., Chen, B. and Shen, B. 2014. Amino acid contact energy networks impact protein structure and evolution. Journal of Theoretical Biology, 355, 91-104.

[26] Zhu, W., Hou, J. and Phoebe Chen, Y.-P. 2010. Semantic and layered protein function prediction from PPI networks. Journal of Theoretical Biology, 267, 129-136.

[27] Zipf, G.K. 1949. Human Behavior and the Principle of Least Effort. Addison-Wesley, Cambridge, Mass.