From Wikipedia, the free encyclopedia.
In statistics, contingency tables are used to record and analyse the relationship between two or more variables, most usually categorical variables.
Suppose that we have two variables, sex (male or female) and handedness (right- or left-handed). We observe the values of both variables in a random sample of 100 people. Then a contingency table can be used to express the relationship between these two variables.
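Such a table can be assembled in a few lines of code. A minimal sketch follows; note that the cell counts (43, 9, 44, 4) are illustrative stand-ins, not figures taken from the text:

```python
# Illustrative 2x2 contingency table (hypothetical counts):
# rows = sex, columns = handedness.
table = {
    "male":   {"right-handed": 43, "left-handed": 9},
    "female": {"right-handed": 44, "left-handed": 4},
}

# Marginal totals: sum across each row and down each column.
row_totals = {sex: sum(counts.values()) for sex, counts in table.items()}
col_totals = {}
for counts in table.values():
    for hand, n in counts.items():
        col_totals[hand] = col_totals.get(hand, 0) + n

# The grand total is the sum of all cells (here, the sample size of 100).
grand_total = sum(row_totals.values())

print(row_totals)   # {'male': 52, 'female': 48}
print(col_totals)   # {'right-handed': 87, 'left-handed': 13}
print(grand_total)  # 100
```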
The figures in the right-hand column and the bottom row are called marginal totals and the figure in the bottom right-hand corner is the grand total.
The table allows us to see at a glance that the proportion of men who are right-handed is about the same as the proportion of women who are right-handed. However, the two proportions are not identical, and the statistical significance of the difference between them can be tested with a Pearson's chi-square test, a G-test or Fisher's exact test, provided the entries in the table represent a random sample from the population contemplated in the null hypothesis. If the proportions of individuals in the different columns vary between rows (and, therefore, vice versa) we say that the table shows contingency between the two variables. If there is no contingency, we say that the two variables are independent.
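Pearson's chi-square statistic is straightforward to compute by hand: under independence, the expected count in each cell is (row total × column total) / grand total, and the statistic sums (observed − expected)² / expected over all cells. A minimal sketch, using illustrative counts (43, 9, 44, 4 are assumptions, not figures from the text):

```python
def chi_square(observed):
    """Pearson's chi-square statistic for a contingency table.

    observed: list of rows, each a list of cell counts.
    """
    row_totals = [sum(row) for row in observed]
    col_totals = [sum(col) for col in zip(*observed)]
    n = sum(row_totals)
    chi2 = 0.0
    for i, row in enumerate(observed):
        for j, obs in enumerate(row):
            # Expected count under the null hypothesis of independence.
            expected = row_totals[i] * col_totals[j] / n
            chi2 += (obs - expected) ** 2 / expected
    return chi2

# Hypothetical sex-by-handedness counts.
chi2 = chi_square([[43, 9], [44, 4]])
print(round(chi2, 4))  # 1.7774
```

Comparing the statistic with the chi-square distribution (here with 1 degree of freedom) then gives the significance level of the observed contingency.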
The example above is for the simplest kind of contingency table, in which each variable has only two levels; this is called a 2 x 2 contingency table. In principle, any number of rows and columns may be used. There may also be more than two variables, but higher order contingency tables are hard to represent on paper. The relationship between ordinal variables, or between ordinal and categorical variables, may also be represented in contingency tables, though this is less often done since the distributions of ordinal variables can be summarised efficiently by the median.
The degree of association between the two variables can be assessed by a number of coefficients. The simplest is the phi coefficient, defined by

φ = √(χ² / N)
where χ² is derived from the Pearson test, and N is the grand total number of observations. φ varies from 0 (corresponding to no association between the variables) to 1 (complete association). This coefficient can only be used for 2 x 2 tables. Alternatives include the tetrachoric correlation coefficient (also only useful for 2 x 2 tables), the contingency coefficient C and Cramér's V. C suffers from the disadvantage that it does not reach a maximum of 1 with complete association in asymmetrical tables (those where the numbers of rows and columns are not equal). The tetrachoric correlation coefficient is the Pearson product-moment correlation coefficient between hypothetical row and column variables with normal distributions that would reproduce the observed contingency table if they were divided into two categories in the appropriate proportions. It should not be confused with the Pearson product-moment correlation coefficient computed by assigning values 0 and 1 to the cells. In tables with more than two levels for each variable, an analogous quantity is called the polychoric correlation coefficient.
The formulae for the other coefficients are:

C = √(χ² / (N + χ²))

V = √(χ² / (N(k − 1)))

where k is the number of rows or the number of columns, whichever is less.
C can be adjusted so that it reaches a maximum of 1 when there is complete association in a table of any number of rows and columns by dividing it by √((k − 1)/k).
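Given χ², N and the table dimensions, all of these coefficients follow directly from the formulae above. A minimal sketch (the function name is arbitrary, and the example χ² value of 1.7774 is an illustrative input, not a figure from the text):

```python
import math

def association_coefficients(chi2, n, n_rows, n_cols):
    """Compute phi, contingency coefficient C, Cramer's V,
    and the adjusted C from a chi-square statistic."""
    k = min(n_rows, n_cols)              # smaller of row and column counts
    phi = math.sqrt(chi2 / n)            # only valid for 2 x 2 tables
    C = math.sqrt(chi2 / (n + chi2))     # contingency coefficient
    V = math.sqrt(chi2 / (n * (k - 1)))  # Cramer's V
    C_adj = C / math.sqrt((k - 1) / k)   # rescales C so its maximum is 1
    return phi, C, V, C_adj

phi, C, V, C_adj = association_coefficients(chi2=1.7774, n=100, n_rows=2, n_cols=2)
```

Note that for a 2 x 2 table k = 2, so Cramér's V reduces to the phi coefficient, as the formulae above imply.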
The term contingency table was first used by Karl Pearson in "On the Theory of Contingency and its Relation to Association and Normal Correlation" in Drapers' Company Research Memoirs (1904) Biometric Series I.