Diversity for Texts Built in Language L(MT) II: Indexes Based on Abundances

José Luis Usó-Doménech1, Josué-Antonio Nescolarde-Selva1,2, Miguel Lloret-Climent1, Meng Fan2

American Journal of Systems and Software, Vol. 4, No. 2, 2016, pp. 32-39.

1Department of Applied Mathematics, University of Alicante, Alicante, Spain

2School of Mathematics and Statistics, Northeast Normal University, Changchun, China

Abstract

We saw previously that the diversity index IT and that of Shannon make it possible to characterize globally, with a single number, one fundamental aspect of the structure of a text. A more precise knowledge of this structure, however, requires the specific abundance distributions and, to represent them, a suitable mathematical model. Among the numerous models that could be proposed, the only ones of real practical interest are the simplest. We limit ourselves to the study of three of them applied to the language L(MT): the log-linear, the log-normal and MacArthur's models, widely used for the calculation of the diversity of species in ecosystems and used, we believe for the first time, for the calculation of the diversity of a text written in a given language, in our case L(MT). We show the advantages and disadvantages of each of these types of model, the methods that allow them to be fitted to text data and, briefly, the tests that make it possible to decide whether this fit is acceptable.


1. Introduction

The models that we propose for complex systems (ecological and social systems) are those based on System Dynamics [1], with the modifications made by the authors [2]. We do not intend to create a generic theory of models but a specific form of them, since we consider them one of the most widespread and possibly most powerful of the wide range of alternatives offered to the modeler. For this special type of model the authors have built a language which they have called L(MT) ([3-12]), whose syntax is, broadly speaking, the following:

1) The primitive monoad or alphabet A is formed by a set W of characters used to express measurable attributes, a set D of differential functions with respect to time, and a set of n-order monoads. The set W is formed by the input and state variables.

2) The textual alphabet is built jointly from the alphabet A and the set R of real numbers (model parameters).

3) The Simple Lexical Units (SLUN) are constituted by the elements of the set A-D.

4) The Operating Lexical Units or operator-LUN (op-LUN) are the mathematical signs +, -.

5) The Ordering Lexical Units or ordering-LUN (or-LUN) are the signs =, <, >.

6) The Special Lexical Unit (SpLUN) is the sign d/dt, which belongs to the alphabet A and defines the beginning of a phrase (state equation). The differential vocabulary or d-vocabulary of a measurable attribute w is the set formed by all partial derivatives of any order of w with respect to any other measurable attribute and to the time t.

7) The primary differential vocabulary is the set formed by all first-order partial derivatives of w with respect to any other measurable attribute and to the time t.

8) Secondary and higher-order differential vocabularies may also be defined. For ease of calculation in practical complex system modeling, we define a subset called the dimensional primary differential vocabulary, consisting of all first-order partial derivatives of the measurable attribute w with respect to the three spatial dimensions X, Y, Z and the time t.

To implement the models of System Dynamics [1], a subset of cardinal 1, whose only element is the partial derivative of the p-symbol with respect to time, will be used.

9) Let a set of measurable attributes be given. The differential Lexicon, d-L, is the set formed by the d-vocabularies generated by the measurable attributes.

10) The elements of d-L will be called d-symbols. The characters (, ), {, }, [, ] are simply signs, since they lack meaning; they are the equivalent of the signs ?, !, ;, (, ) in natural languages.

11) The Separating Lexical Units (s-LUN) are the signs * and /.

12) The Composed Lexical Units (CLUN) are strings of SLUNs separated by an s-LUN.

13) The syllables or composed lexical units (CLUN) are constituted by a SLUN, or a chain of them, separated by an op-LUN or an or-LUN.

14) A word is a SLUN or a CLUN. The symbols [·] preceding the symbols + or - are word separators.

15) The opsep vocabulary VS is the one formed by the operating and separating LUNs.

16) A simple sentence is a flow variable [1]. It is built from a CLUN or a combination of CLUNs.
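To make this syntax concrete, the following minimal sketch classifies the tokens of a candidate L(MT) sentence into the lexical-unit categories listed above. The token patterns, the attribute names w1, w2 and the example state equation are illustrative assumptions of ours, not part of the formal definition of L(MT).

```python
import re

# Token categories from the syntax sketch above.  The attribute names
# (w1, w2) and the example sentence are illustrative assumptions only.
TOKEN_SPEC = [
    ("SPLUN", r"d\w+/dt"),          # special LUN: beginning of a state equation
    ("NUMBER", r"\d+(?:\.\d+)?"),   # model parameters (elements of R)
    ("SLUN", r"[A-Za-z_]\w*"),      # simple lexical units (measurable attributes)
    ("OPLUN", r"[+\-]"),            # operating LUNs
    ("SSEP", r"[*/]"),              # separating LUNs (s-LUN)
    ("ORLUN", r"[=<>]"),            # ordering LUNs
    ("SIGN", r"[()\[\]{}]"),        # meaningless signs (punctuation)
    ("SKIP", r"\s+"),
]
MASTER = re.compile("|".join(f"(?P<{name}>{pattern})" for name, pattern in TOKEN_SPEC))

def tokenize(sentence: str):
    """Split an L(MT) string into (category, lexeme) pairs."""
    return [(m.lastgroup, m.group())
            for m in MASTER.finditer(sentence) if m.lastgroup != "SKIP"]

# A hypothetical state equation: dw1/dt = 0.3*w1 - 0.05*w1*w2
print(tokenize("dw1/dt = 0.3*w1 - 0.05*w1*w2"))
```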

2. Distributions of Abundances

Suppose that the frequencies of the symbols in a text T built in L(MT) are known, with QT the number of different symbols and N the total number of symbols in T. The distribution of abundances is the distribution of frequencies obtained by ordering the symbols by decreasing absolute or relative frequency. The graphic representation of such a distribution of abundances plots the rank on the abscissa and the corresponding frequency on the ordinate. A particular case of this distribution is Zipf's law [13]; the authors have treated the application of this law to the language L(MT) in previous works. In the theoretical case where all symbols had the same frequency (maximal diversity and equitability equal to 1), all the representative points would lie on a horizontal line. In fact, the polygon joining the points approaches more or less a reversed-J curve, whose concavity is more pronounced the weaker the equitability. This is because there are fewer symbols whose frequencies are above the average than symbols whose frequencies are below it; in other words, the very abundant symbols are less numerous than the rare ones. To mitigate the drawbacks of this asymmetry, a logarithmic scale is often used for the ordinates and sometimes also for the abscissas. One can therefore have diagrams of frequency against rank, of log frequency against rank, and of log frequency against log rank.
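As a computational illustration (a sketch only; the token stream below is invented), the abundance distribution of a text can be obtained by counting the symbols, ranking them by decreasing frequency and tabulating the quantities needed for the three kinds of diagram mentioned above:

```python
import math
from collections import Counter

def abundance_distribution(symbols):
    """Rank symbols by decreasing frequency and return (rank, n_r, p_r, log n_r)."""
    counts = Counter(symbols)            # absolute frequencies
    N = sum(counts.values())             # total number of symbols in T
    Q_T = len(counts)                    # number of different symbols
    ranked = sorted(counts.values(), reverse=True)
    table = [(r, n, n / N, math.log10(n)) for r, n in enumerate(ranked, start=1)]
    return N, Q_T, table

# Hypothetical token stream of an L(MT) text.
text = ["w1", "w1", "w2", "k1", "w1", "w2", "dw1/dt", "k2", "w1", "w3"]
N, Q_T, table = abundance_distribution(text)
print(N, Q_T)
for rank, n_r, p_r, log_n_r in table:
    print(rank, n_r, round(p_r, 2), round(log_n_r, 3))
```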

3. Log-Linear Distribution

The simplest model is the one in which the logarithms of the frequencies are aligned on a straight line of slope a:

(1)

and, putting

(2)

the model can therefore be written

(3)

Written in this way, the previous relation is equivalent to:

(4)

The frequencies therefore form a geometric progression with this common ratio, and their sum over the QT ranks is equal to N.

Multiplying both members of this last equality by the common ratio, one obtains:

(5)

whence

(6)

and

(7)

This number can be called the constant of the text. It is the antilogarithm of the slope of the line. The weaker the diversity of the text, the stronger the slope in absolute value; since the frequencies have been arranged in decreasing order, the slope of the line is always negative, which amounts to saying that the constant is always lower than unity. A geometric progression is entirely determined by the value of this parameter. Indeed, the relative frequency distribution does not change if all the frequencies are multiplied or divided by any number, in particular by the first of them, and the model then reduces to its ratio alone. Fitting the model to a distribution of abundances therefore comes down to calculating the slope of the regression line of the logarithms of the frequencies on the ranks. If the points are sufficiently well aligned, the line representing the fitted model can be drawn directly and its slope gives the value of the constant of the text. This line necessarily passes through the point having for ordinate the average of the frequency logarithms and for abscissa the average of the ranks, equal to (QT + 1)/2. If the points are not well aligned, or if greater precision is desired, the equation of the regression line of the logarithms of the frequencies on the ranks is calculated; this line passes through the point whose abscissa is the average rank and whose ordinate is the average of the frequency logarithms. Models of this class have been used in Ecology by Utida [14] and Motomura [15].
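The fitting procedure just described can be sketched as an ordinary least-squares regression of the logarithms of the frequencies on the ranks, the antilogarithm of the slope being taken as the constant of the text. The illustrative frequencies below are invented; the regression itself follows the standard formulas.

```python
import math

def fit_log_linear(frequencies):
    """
    Fit the log-linear (geometric series / Motomura) model: regress log10(n_r)
    on the rank r and return (slope, intercept, k), where k = 10**slope is the
    constant of the text (k < 1 because the frequencies decrease with rank).
    """
    ranks = list(range(1, len(frequencies) + 1))
    logs = [math.log10(n) for n in frequencies]
    r_mean = sum(ranks) / len(ranks)            # equals (Q_T + 1) / 2
    y_mean = sum(logs) / len(logs)
    slope = sum((r - r_mean) * (y - y_mean) for r, y in zip(ranks, logs)) \
            / sum((r - r_mean) ** 2 for r in ranks)
    intercept = y_mean - slope * r_mean
    return slope, intercept, 10 ** slope

# Frequencies sorted in decreasing order (illustrative values).
n_r = [120, 60, 30, 15, 8, 4, 2, 1]
slope, intercept, k = fit_log_linear(n_r)
print(f"slope = {slope:.3f}, constant of the text k = {k:.3f}")
```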

4. Log-Normal Distribution

A log-normal model is a model in which the logarithms of the frequencies are distributed normally around their average. This normal distribution is represented by an equation of the form:

(8)

and, taking the mean as origin, that is, putting

(9)

It is convenient to use logarithms to base 2 rather than common logarithms (Preston [16], [17]). The interval corresponding to one unit of R is then the octave, that is, the interval over which the frequency doubles in value. When one passes from an octave to the next higher octave, R increases by one unit. The area comprised between the axis of abscissas, the curve and two given ordinates represents the number of symbols whose frequencies lie between the corresponding limits. The normal curve represented by equation (9) is the curve of symbols. It is defined mathematically for all values of R, from minus infinity to plus infinity, and its total area represents the total number of symbols. However, the extremities of the two branches of the curve, asymptotic to the axis of R, can have no linguistic significance, since the useful part of the curve is limited to the number of octaves actually covered by the distribution of abundances observed in the text. From a theoretical point of view, we will admit that the boundaries of this useful interval are symmetrical with respect to the origin. Moreover, we will admit that these boundaries are placed so that the area comprised between the curve and the axis of R outside the useful interval is equal to one symbol. This amounts to saying that the useful area is equal to QT, the number of symbols actually present in the text, and that on each side of the useful interval an area equal to 0.5 is disregarded. It follows that the boundary is defined by the integral:

(10)

With the appropriate change of variable, formula (10) becomes

(11)

Therefore, the theoretical boundaries of the useful interval for the curve of symbols are equal to the standard deviation multiplied by a factor x, x being read from tables of the function relative to the reduced normal curve:

(12)

Let us consider the series of values of R inside the useful interval, defined according to the frequencies of the symbols. The useful area can be divided into as many partial areas, each equal to unity, that is, each corresponding to one symbol, limited by parallels to the axis of ordinates framing these values. The two extreme parallels are evidently those of the boundaries. If these partial areas are cumulated, one obtains the series of the first whole numbers 1, 2, ..., QT. The integral curve of Gauss, which represents the growth of the cumulated area as a function of R, is none other than the curve of variation of the cumulated number of symbols as a function of rank; it therefore passes through the points with those coordinates. This curve has a characteristic sigmoid shape. It is symmetrical with respect to the average of the ranks and with respect to the average of the frequency logarithms. A log-normal abundance distribution therefore gives, on a diagram of rank against log frequency, an integral curve of Gauss. These curves are transformed into probit lines when the cumulated areas, expressed as percentages, are replaced by their probits. However, to calculate these percentages, it is necessary to take into account the total area comprised between the normal curve and the axis of abscissas from minus infinity to plus infinity; beyond each boundary of the useful interval the area is equal to 0.5. It is appropriate to take the corresponding cumulated fractions as the percentages for the upper and lower boundaries of the useful interval.

The probit line passes through the point having for ordinate P(0.5) = 5 and for abscissa the average of the frequency logarithms. Its slope is fixed by the standard deviation. Its equation is of the form:

(13)

This equation makes it possible to calculate the theoretical frequencies of a log-normal distribution from its three parameters.
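As a numerical sketch of the probit construction (our own implementation choices: base-2 logarithms, a simple continuity correction for the cumulated proportions, and an ordinary least-squares fit), the probit line can be estimated as follows; for log-normal data its slope estimates the reciprocal of the standard deviation.

```python
import math
from statistics import NormalDist

def probit_points(frequencies):
    """
    For the QT observed frequencies, return (log2 n, probit) pairs: the
    cumulated proportion of symbols (from the rarest upwards) is transformed
    with the inverse normal CDF and shifted by 5, the classical probit
    convention.  The continuity correction (j - 0.5)/QT is our own choice.
    """
    Q_T = len(frequencies)
    points = []
    for j, n in enumerate(sorted(frequencies), start=1):
        p = (j - 0.5) / Q_T
        points.append((math.log2(n), 5.0 + NormalDist().inv_cdf(p)))
    return points

def fit_probit_line(points):
    """Least-squares line probit = a + b*log2(n); b estimates 1/sigma."""
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    x_m, y_m = sum(xs) / len(xs), sum(ys) / len(ys)
    b = sum((x - x_m) * (y - y_m) for x, y in points) / sum((x - x_m) ** 2 for x in xs)
    return y_m - b * x_m, b

n_r = [300, 150, 130, 70, 60, 40, 35, 20, 18, 9, 8, 5, 4, 2, 2, 1]  # illustrative
a, b = fit_probit_line(probit_points(n_r))
print("sigma ≈", round(1 / b, 3), " m ≈", round((5 - a) / b, 3))
```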

On the curve of symbols, to each octave there corresponds a certain number of symbols whose frequencies fall within it. There is a median or modal octave whose middle frequency has for logarithm the average of the logarithms of the frequencies; this octave has for limits m - 0.5 and m + 0.5. The number of symbols whose frequency falls in this octave is the ordinate at the top of the curve of symbols. The number of individual symbols corresponding to every octave is obtained by multiplying the number of different symbols in that octave by its middle frequency. For the octave of rank R, multiplying these two numbers gives a new distribution and a new curve, the so-called curve of individual symbols, because its equation makes it possible to calculate the number of individual symbols corresponding to a given interval, that is, whose frequencies lie between the corresponding limits. This curve is also a normal curve. Its equation is the following:

(14)

In this form the properties of the curve of individual symbols do not appear clearly, and it is convenient to transform its equation. One first makes a substitution that allows the two exponential terms to be grouped into a single one:

(15)
(16)

One then transforms the numerator of the expression between brackets so as to group all the terms in R into a single square. It is sufficient to write:

Therefore

(17)

and, putting

(18)

finally:

(19)

One recognizes the equation of a normal curve whose ordinate at the top, standard deviation and mean are given by these expressions. The top of the curve of individual symbols is shifted with respect to that of the curve of symbols. The top of the curve of individual symbols therefore occupies, with respect to the extremity of the useful part of the curve of symbols, a position that depends on the standard deviation. Preston [17] called canonical those distributions for which the extremity of the useful part of the curve of symbols and the top of the curve of individual symbols have the same abscissa. One then has an equality from which it follows that to a given value of QT there corresponds a canonical distribution whose parameters are all determined. The inverse of the square of the standard deviation is called Preston's constant of the text, m'. When the graphic determination of m and of the standard deviation is difficult or not sufficiently precise, it is always possible to resort to the equation of the probit line, taking as equation that of the corresponding regression line.

The method of Preston has been used in Ecology.
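Preston's construction can be illustrated numerically by grouping the observed frequencies into octaves and estimating the parameters of the curve of symbols. The moment-based fit below is an assumption of ours for illustration, not necessarily the estimation procedure used in the works cited.

```python
import math
from collections import Counter

def preston_octaves(frequencies):
    """Group frequencies into octaves R = floor(log2(n)) and count symbols per octave."""
    return Counter(int(math.floor(math.log2(n))) for n in frequencies)

def fit_symbols_curve(octave_counts):
    """
    Crude moment fit of the curve of symbols: mean m and standard deviation
    sigma of R weighted by the number of symbols in each octave, plus the
    modal ordinate y0 estimated as the largest octave count.
    """
    total = sum(octave_counts.values())
    m = sum(R * c for R, c in octave_counts.items()) / total
    var = sum(c * (R - m) ** 2 for R, c in octave_counts.items()) / total
    sigma = math.sqrt(var)
    y0 = max(octave_counts.values())
    return m, sigma, y0

# Illustrative frequencies of the QT different symbols of a text.
n_r = [300, 150, 130, 70, 60, 40, 35, 20, 18, 9, 8, 5, 4, 2, 2, 1]
octaves = preston_octaves(n_r)
m, sigma, y0 = fit_symbols_curve(octaves)
print(dict(sorted(octaves.items())), round(m, 2), round(sigma, 2), y0)
```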

5. MacArthur Distribution

In the MacArthur distribution ([18, 19, 20]), the frequency of the symbol of a given rank, counted from the most abundant, is given by the formula:

(20)

The most abundant symbol has frequency:

(21)

and the rarest symbol:

(22)

The sum of the relative frequencies is equal to 1.

Diagrams of log frequency against rank and against log rank show that MacArthur distributions are represented by slightly concave, non-symmetrical S-shaped curves.

Among the distributions studied, that of MacArthur depends on only two parameters, N and the number of symbols QT, whereas the log-linear and log-normal ones depend in addition on a third parameter, the constant of the text. N, referred to the volume of the text, is nothing other than the density of symbols, and QT is the specific richness of the text. As for the constant of the text of the log-linear distribution, whose logarithm is the slope of the line, and the constant of the text of the log-normal distribution, which is bound to the standard deviation of the curve of symbols and of the curve of individual symbols, both depend closely on the diversity of the text. The log-linear and log-normal distributions are capable of much better fits than MacArthur's. The comparison can be made directly between the observed and the calculated frequencies. Hairston [21] and King [22] use the ratio between the variance of the theoretical values and the variance of the observed values. The concordance is perfect if this ratio is equal to 1, and it is less good the more the ratio differs from 1. The variance of the values observed in the text is:

(23)

The variance of the theoretical values is:

(24)

This method is independent of the size of the text. The value of the ratio must be considered as a simple coded indication of the degree of concordance or discordance between the distribution observed in the text and the MacArthur model corresponding to the same number of symbols. It must not be assigned the statistical significance of the F of Snedecor, which is calculated in the same way for the comparison of two sample variances, because Snedecor's method supposes that the values from which the variances are calculated are normally distributed, which is not the case for MacArthur distributions. It is very sensitive to the size of the text and supposes that the order of the symbols in the text is identical to the real order of abundances in the language; otherwise the calculated value would be underestimated. Finally, it takes no account of the sign of the deviations nor of the distribution of positive and negative deviations, the positive deviations (observed frequencies higher than those predicted by the model) most often concerning the abundant symbols and the negative deviations (observed frequencies lower than those predicted by the model) the rare symbols. It is important to know whether the distribution of the signs of the deviations can or cannot be attributed to chance. David's test [23] answers this question. When the size of the text is relatively large, it often happens that the values found for the chi-square statistic, given the number of degrees of freedom, correspond to probabilities far too weak for the sampling deviations between the observed frequencies and those predicted by the MacArthur distribution to be attributed to chance. In this case, even though all the conditions of application of the chi-square test are not rigorously fulfilled, one is entitled to conclude that the distribution of abundances in the language does not conform to a MacArthur model.
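The comparison of Hairston [21] and King [22] can be sketched as follows. The broken-stick formula used for the theoretical MacArthur frequencies is the standard one from the ecological literature, which we assume corresponds to equation (20); the normalization of the variances is our own choice.

```python
def macarthur_frequencies(Q_T, N):
    """
    Expected absolute frequencies under the MacArthur (broken-stick) model,
    using the standard form N_r = (N / Q_T) * sum_{i=r..Q_T} 1/i, which we
    assume corresponds to equation (20).
    """
    return [N / Q_T * sum(1.0 / i for i in range(r, Q_T + 1))
            for r in range(1, Q_T + 1)]

def variance(values):
    # Sample variance; the exact normalization of (23)-(24) is not reproduced here.
    mean = sum(values) / len(values)
    return sum((v - mean) ** 2 for v in values) / (len(values) - 1)

def variance_ratio(observed):
    """Ratio of theoretical to observed variance; a perfect concordance gives 1."""
    theoretical = macarthur_frequencies(len(observed), sum(observed))
    return variance(theoretical) / variance(observed)

observed = [90, 45, 30, 18, 10, 5, 2]   # observed frequencies, decreasing order (illustrative)
print([round(v, 1) for v in macarthur_frequencies(len(observed), sum(observed))])
print("variance ratio =", round(variance_ratio(observed), 3))
```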

The MacArthur distribution can be used above all to represent distributions of text abundances comprising few individuals and few symbols. The only problem is that of the comparison between the diversity of the text and that of the distribution. This comparison is made through the intermediary of Shannon's diversity index. The ratio between the index calculated on the text and the index calculated on the MacArthur model for the same number of symbols is an equitability. The difference with the equitability defined previously resides in the choice of the fixed element of comparison, the maximal theoretical diversity being replaced by the diversity of the MacArthur model, considered as the limit towards which the diversity of the language tends, the theoretical maximum being inaccessible. As Shannon's diversity indexes involve logarithms in their calculation, their comparison can correctly be made by a simple quotient.

We have proposed a new definition of equitability: the ratio of the number of symbols observed to the number of symbols that, distributed according to a MacArthur model, would give the same diversity index:

(25)
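A sketch of this equitability (under our assumptions: Shannon index in base 2, MacArthur frequencies from the standard broken-stick formula, and a simple search for the matching number of symbols) is the following; the reciprocal ratio is the form more commonly quoted in the ecological literature.

```python
import math

def shannon_index(frequencies):
    """Shannon diversity H (base 2) computed from absolute frequencies."""
    N = sum(frequencies)
    return -sum((n / N) * math.log2(n / N) for n in frequencies if n > 0)

def macarthur_diversity(Q):
    """Shannon index (base 2) of a broken-stick distribution with Q symbols."""
    p = [sum(1.0 / i for i in range(r, Q + 1)) / Q for r in range(1, Q + 1)]
    return -sum(x * math.log2(x) for x in p)

def equitability(observed, max_Q=10_000):
    """
    Find Q', the number of symbols whose MacArthur distribution reaches the
    observed diversity, and return the ratio described in the text together
    with Q'.  (The reciprocal Q'/QT is the form usually quoted in ecology.)
    """
    H_obs = shannon_index(observed)
    Q_T = len(observed)
    Q_prime = next((Q for Q in range(1, max_Q) if macarthur_diversity(Q) >= H_obs), max_Q)
    return Q_T / Q_prime, Q_prime

observed = [90, 45, 30, 18, 10, 5, 2]   # illustrative frequencies
ratio, Q_prime = equitability(observed)
print("Q' =", Q_prime, " equitability ratio =", round(ratio, 3))
```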

6. Law of Number-Frequency

Let L be the length of a text, that is, the number of signs it contains.

Let p(r) be the probability of occurrence of a sign w of rank r; then the probability that the sign w of rank r appears exactly i times in the text is:

(26)

We define the random variable Ni(L), the number of distinct signs that occur exactly i times in a text of length L, as:

(27)

where ni(r, L) = 1 if the sign of rank r appears exactly i times and ni(r, L) = 0 if it does not. These variables ni(r, L) are not independent, but the expectation of the sum is the sum of the expectations, so:

(28)

Considering that

(29)

and, setting p(r) = x, it follows that

(30)

For large values of L, the above sum differs little from the sum restricted to a range such as the ranks greater than 10, and the following integral:

(31)

differs little from the integral restricted to (0, p(10)); finally, the restricted sum and the restricted integral differ little from each other, thus:

(32)

For large enough L this gives:

(33)

and, for large i:

(34)

which is the expression of the number-frequency law.
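Under a Zipfian rank probability p(r) proportional to 1/r, the number-frequency law predicts that the expected number of signs occurring exactly i times falls off roughly as the inverse square of i. The Monte Carlo sketch below (entirely our own construction, with arbitrary text length and vocabulary size) checks this behaviour numerically.

```python
import random
from collections import Counter

def simulate_number_frequency(L=200_000, R_max=50_000, seed=0):
    """
    Draw a text of length L with Zipfian sign probabilities p(r) ~ 1/r and
    count N_i(L), the number of distinct signs occurring exactly i times.
    """
    rng = random.Random(seed)
    weights = [1.0 / r for r in range(1, R_max + 1)]
    ranks = rng.choices(range(1, R_max + 1), weights=weights, k=L)
    occurrences = Counter(ranks)          # how many times each rank was drawn
    return Counter(occurrences.values())  # N_i: number of signs drawn exactly i times

N_i = simulate_number_frequency()
for i in (1, 2, 4, 8, 16):
    # If N_i falls off like i**-2, then i*i*N_i[i] should stay roughly constant.
    print(i, N_i[i], i * i * N_i[i])
```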

7. Lexical Units Model

Lexical units can be grouped according to different criteria:

1. According to the type of behavior they describe. The same behavior can be described by different lexical units, which we will call synonyms; we can then form groups of synonymous LUNs.

2. According to the functions contained therein. For example we can group the periodic functions, or those whose power series developments are similar.

3. According to their probabilities of occurrence.

4. According to the number of primitive symbols that compose them.

We are interested in estimating the number of lexical units that belong to a certain group and containing a certain amount of primitive symbols.

Let N lexical units be classified into k non-empty classes. Let Ni be the number of lexical units in the i-th class. These Ni lexical units of the i-th class contain Mi symbols, distributed among them according to a Bose-Einstein distribution, i.e.:

(35)

with the parts summing to Mi.

Let the number of lexical units with exactly s symbols in the i-th class be defined; then:

(36)
(37)

The proportion of lexical units with exactly s symbols is:

(38)

Considering only one class, for example the first, we find that:

(39)

in the limit of a large class, where the difference converges in probability to 0. Similarly, it holds that:

(40)

for each i; furthermore, the weighted average G(s)/M in (38) converges to a constant p(s):

(41)

where H is a proper distribution function. For a variety of possible H, this constant becomes:

(42)

where:

(43)

Hence, under this model, the individual occurrences of G(s)/M can be expected to follow approximately Zipf's law.
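The Bose-Einstein assumption can be illustrated by simulation: we take every composition of the Mi symbols into Ni non-empty lexical units as equally likely (our reading of the distribution above; whether empty units are allowed is not stated in the text) and estimate the proportion of lexical units with exactly s symbols.

```python
import random
from collections import Counter

def random_composition(M, N, rng):
    """
    Uniform random composition of M symbols into N non-empty lexical units:
    under Bose-Einstein statistics every such composition is equally likely.
    """
    cuts = sorted(rng.sample(range(1, M), N - 1))
    bounds = [0] + cuts + [M]
    return [b - a for a, b in zip(bounds[:-1], bounds[1:])]

def size_proportions(M, N, trials=2_000, seed=0):
    """Estimate the proportion of lexical units with exactly s symbols."""
    rng = random.Random(seed)
    counts = Counter()
    for _ in range(trials):
        counts.update(random_composition(M, N, rng))
    total = N * trials
    return {s: counts[s] / total for s in sorted(counts)}

# One class with M symbols spread over N lexical units (illustrative sizes).
proportions = size_proportions(M=5_000, N=1_000)
print({s: round(p, 3) for s, p in list(proportions.items())[:6]})
```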

8. An Application of the Law of Zipf

For simplicity, let us first consider that there are only four signs in the text, that the energy of any of the signs is restricted to one of the values E(x) = 0, ε, 2ε, 3ε, and that the total energy of the text is 3ε.

Since the signs can exchange energy between them, all divisions of the total energy between 4 signs are possible. Consider the possible cases:

I) Three signs have E(x) = 0 and one has E(x) = 3ε. There are four different ways to achieve this division of energy, since any of the 4 signs can be found in the energy state 3ε. That is, the number of distinguishable duplicate divisions is 4.

II) Two signs have E(x) = 0, the third has E(x) = ε and the fourth E(x) = 2ε. In this case there are 12 different ways to achieve this division, i.e. the number of distinguishable duplicate divisions is 12.

III) One sign has E(x) = 0 and the remaining three have E(x) = ε; there are 4 different ways to achieve this scheme, i.e. the number of distinguishable duplicate divisions is 4.

In assessing the number of duplicate divisions, any rearrangement of signs between different energy states is counted as a distinct duplicate. However, rearrangements of signs within the same energy state are not counted as duplicates, because equal signs with the same energy cannot be distinguished from one another. That is, identical signs are treated as if they were distinguishable, except for rearrangements within the same energy state.

The total number of permutations of the 4 signs is 4!. If we consider n signs, the number of different orderings is n!, but rearrangements within the same energy level do not count. For example, in case II) the number of distinct divisions is reduced from 4! to 4!/2!, that is 12, since the 2! rearrangements within the state E(x) = 0 are not distinguishable. For cases I) and III), the divisions are reduced from 4! to 4!/3!, that is 4, since the 3! rearrangements within the state E(x) = 0, or within the state E(x) = ε, are not distinguishable. Since all possible divisions of energy occur with the same probability, the probability that a division of a given type occurs is proportional to the number of distinguishable duplicate divisions of this type, and the probability Pi is then exactly equal to this number divided by the total number of divisions. So the probabilities for the three cases considered are 4/20, 12/20 and 4/20.

Let us now compute the probable number N'(E(x)) of signs in the energy state E(x).

a) In the energy state E(x) = 0, for case I) there are 3 signs in this state and the probability is Pi = 4/20.

b) For case II) there are two signs in this state and Pi = 12/20.

c) For case III) there is one sign and Pi = 4/20. The probable number of signs with zero energy is therefore N'(0) = 3(4/20) + 2(12/20) + 1(4/20) = 2.

For the remaining states: N'(ε) = 24/20, N'(2ε) = 12/20, N'(3ε) = 4/20 and N'(4ε) = 0, which add up to 4 as expected.
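The counting argument of this section can be checked mechanically. The sketch below enumerates all assignments of the energies 0, ε, 2ε, 3ε to the four signs with total energy 3ε, groups them into the three cases, and recomputes the probabilities and the probable occupation numbers N'(E).

```python
from itertools import product
from collections import Counter

LEVELS = (0, 1, 2, 3)   # allowed energies, in units of epsilon
TOTAL = 3               # total energy of the text, 3*epsilon
SIGNS = 4

# Every distinguishable assignment of energy levels to the 4 signs.
assignments = [a for a in product(LEVELS, repeat=SIGNS) if sum(a) == TOTAL]
print("total divisions:", len(assignments))              # 20

# Group the assignments by their unordered division (cases I, II, III).
divisions = Counter(tuple(sorted(a, reverse=True)) for a in assignments)
for division, count in divisions.items():
    print(division, f"P = {count}/{len(assignments)}")

# Probable number of signs N'(E) in each energy state.
for E in LEVELS + (4,):
    N_prime = sum(a.count(E) for a in assignments) / len(assignments)
    print(f"N'({E} eps) = {N_prime}")
```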

9. Conclusions

From all the above we can draw the following conclusions:

1. The empirically observed rank-frequency relationship, or Zipf distribution of values, is an example of a distribution that satisfies this constraint: power-law distributions are stable under such transformations.

2. The Zipf structure exactly fulfills the restrictions of increasing structure and of identifiability of features.

3. The Zipf structure can evolve through basic mechanisms that favor the kind of structure involved in complex systems.

The appearance of Zipf's law is not accidental; it can be understood at a level that considers the interaction between the laws of complex systems, which, unlike the laws of physics, can themselves evolve. In general one can speak of a coevolution of laws (behaviors) and objects. We believe that, to understand complex systems better, it is necessary to take these interrelationships into account.

References

[1] Forrester, J.W. 1961. Industrial Dynamics. MIT Press, Cambridge, MA.

[2] Usó-Domènech, J.L., Mateu, J. and Lopez, J.A. 1997. Mathematical and Statistical formulation of an ecological model with applications. Ecological Modelling. 101, 27-40.

[3] Nescolarde-Selva, J., Usó-Doménech, J.L. and Lloret-Climent, M. 2014. Introduction to coding theory for flow equations of complex systems models. American Journal of Systems and Software. 2(6), 146-150.

[4] Nescolarde-Selva, J., Usó-Doménech, J.L., Lloret-Climent, M. and González-Franco, L. 2015. Chebanov law and Vakar formula in mathematical models of complex systems. Ecological Complexity. 21, 27-33.

[5] Sastre-Vazquez, P., Usó-Domènech, J.L. and Mateu, J. 2000. Adaptation of linguistics laws to ecological models. Kybernetes. 29(9/10), 1306-1323.

[6] Usó-Domènech, J.L., Sastre-Vazquez, P. and Mateu, J. 2001. Syntax and First Entropic Approximation of L(MT): A Language for Ecological Modelling. Kybernetes. 30(9/10), 1304-1318.

[7] Usó-Domènech, J.L. and Sastre-Vazquez, P. 2002. Semantics of L(MT): A Language for Ecological Modelling. Kybernetes. 31(3/4), 561-576.

[8] Usó-Domènech, J.L., Vives Maciá, F. and Mateu, J. 2006a. Regular grammars of L(MT): a language for ecological systems modelling (I) – part I. Kybernetes. 35(6), 837-850.

[9] Usó-Domènech, J.L., Vives Maciá, F. and Mateu, J. 2006b. Regular grammars of L(MT): a language for ecological systems modelling (II) – part II. Kybernetes. 35(9/10), 1137-1150.

[10] Usó-Doménech, J.L., Nescolarde-Selva, J. and Lloret-Climent, M. 2014. Saint Mathew Law and Bonini Paradox in Textual Theory of Complex Models. American Journal of Systems and Software. 2(4), 89-93.

[11] Usó-Doménech, J.L. and Nescolarde-Selva, J. 2014. Dissipation Functions of Flow Equations in Models of Complex Systems. American Journal of Systems and Software. 2(4), 101-107.

[12] Usó-Doménech, J.L., Nescolarde-Selva, J., Lloret-Climent, M. and González-Franco, L. 2014. Diversity for Texts Builds in Language L(MT): Indexes Based in Theory of Information. American Journal of Systems and Software. 2(5), 113-120.

[13] Zipf, G.K. 1949. Human Behavior and the Principle of Least Effort. Cambridge, Mass.

[14] Utida, T. 1943. Relation entre les populations expérimentales de Callosobruchus chinensis Linné (Coléoptères) et son parasite (Hyménoptères). III. Influence de la densité de population de l'hôte sur la prolifération du parasite. Seitaigaku Kenkyuu, 9, 40-54.

[15] Motomura, I. 1947. Further notes on the law of geometrical progression of the population density in animal association. Seiri Seitai. 1, 55-60.

[16] Preston, F.W. 1948. The commonness and rarity of species. Ecology, 29, 254-283.

[17] Preston, F.W. 1962. The canonical distribution of commonness and rarity. Ecology, 43, 185-215 and 410-432.

[18] MacArthur, R.H. 1957. On the relative abundance of bird species. Proc. Nat. Acad. Sci. 43, 293-295.

[19] MacArthur, R.H. 1960. On the relative abundance of species. Amer. Nat., 94, 25-36.

[20] MacArthur, R.H. 1969. Patterns of communities in the tropics. Biol. J. Linn. Soc., 1, 19-30.

[21] Hairston, N.G. 1959. Species abundance and community organization. Ecology. 40, 404-416.

[22] King, S.E. 1964. Relative abundance of species and the MacArthur model. Ecology. 45, 716-727.

[23] David, F.N. 1947. A χ2 smooth test for goodness of fit. Biometrika. 34, 299-304.