

REVIEW 

Year: 2013  Volume: 17  Issue: 5  Page: 577-582


Periodontal Research: Basics and beyond - Part III (data presentation, statistical testing, interpretation and writing of a report)
Haritha Avula
Department of Periodontics, Sri Sai College of Dental Surgery, Vikarabad, Andhra Pradesh, India
Date of Submission: 26-Oct-2012
Date of Acceptance: 11-Aug-2013
Date of Web Publication: 4-Oct-2013
Correspondence Address: Haritha Avula, Department of Periodontics, Sri Sai College of Dental Surgery, Vikarabad, Andhra Pradesh, India
Source of Support: None, Conflict of Interest: None
DOI: 10.4103/0972-124X.119293
Abstract   
Statistical analysis is the backbone of research, and however befuddling it is to a clinician, it is crucial for a researcher to understand the various assumptions underlying the statistical methods. This paper aims at simplifying the various statistical methods that are routinely used in periodontal research. Data presentation, the relevance of clinical as against statistical significance, and the writing of a report are also discussed.
Keywords: Data, estimation, inference, P value, report, statistics
How to cite this article: Avula H. Periodontal Research: Basics and beyond - Part III (data presentation, statistical testing, interpretation and writing of a report). J Indian Soc Periodontol 2013;17:577-82
How to cite this URL: Avula H. Periodontal Research: Basics and beyond - Part III (data presentation, statistical testing, interpretation and writing of a report). J Indian Soc Periodontol [serial online] 2013 [cited 2022 Jan 25];17:577-82. Available from: https://www.jisponline.com/text.asp?2013/17/5/577/119293
Introduction   
Research is a portal to unravel the perplexing scientific questions and mysteries that usually leave a clinician baffled. Scientific research begins with formulating the research question, identifying the research design, and testing the hypothesis. Though the statistical testing is done by the statistician with the appropriate methods, the clinician's basic knowledge of statistics is paramount in drawing meaningful inferences. The first two parts of this review series summarized the various methodologies, sampling methods, and ethical issues pertaining to good and ethical research. The present paper aims to cover, in a simplified manner, the statistical aspects routinely used by a dental researcher, with particular emphasis on understanding basic statistical methods and their interpretation. Writing a good report is another pivotal element of research and is also discussed in this paper.
Representation of Data   
The data obtained after the study are usually in the form of filled individual case proformas or questionnaires. All the data are entered manually on paper or directly into a spreadsheet or database on a computer. A computer program like Microsoft Excel^{©} would facilitate easy entry of data. This forms the raw data, which take the form of a master chart or table. The analysis of data requires a number of closely related operations such as the establishment of categories, the application of these categories to raw data through coding, tabulation, and then the drawing of statistical inferences. ^{[1]} The data file can take the form of a spreadsheet with individual people forming the rows, and the variables forming the columns. It is important to introduce the term variable in this context. A variable is something that can be changed, such as a characteristic or value. There are two types of variables - dependent and independent. ^{[2]} The variable(s) we measure as the outcome of interest is the dependent variable (Y variable), or response/effect. The independent or X variable causes an effect that is seen on the dependent variable. In the hypothesis ''periodontitis is higher in subjects with poor oral hygiene,'' poor oral hygiene (the cause) is the independent variable and periodontitis (the effect) is the dependent variable.
Data can be classified as ^{[1],[3]} (i) Quantitative data which measure either how much or how many of something, i.e., a set of observations where any single observation is a number that represents an amount or a count and (ii) Qualitative data which provide the quality of observations, i.e., it describes something. It can be further divided into
 Nominal/Categorical data: Variables with no inherent order or ranking sequence, with no category being better or worse than another; for example, gender, eye color, city of origin, etc. Dichotomous/binary data are a type of categorical data where only two possibilities exist, such as male/female, present/absent, or disease/no disease.
 Ordinal/rank data: Variables with an ordered series; an extension of categorical data containing different categories of data with one better/worse than another. We could have categories for prognosis such as good, fair, poor, and hopeless, or stages of periodontitis as mild, moderate, or severe. Another example of ordinal data is the Likert scale used in questionnaire studies, with categories like ''greatly dislike, moderately dislike, indifferent, moderately like, greatly like.''
 Interval scale: Interval variables do not have a true zero but have an arbitrary zero. E.g., blood pressure is given as 80-100 mmHg, where 80 is the arbitrary zero.
 Ratio scale: Has a definite ''0'' as the starting point, a true zero point. For example, a person weighing 80 kg is twice as heavy as a person weighing 40 kg because of the absolute zero point.
Quantitative data can also be discrete/discontinuous or continuous. Discrete/discontinuous data take the form of integers and have no intermediate points, e.g., number of pregnancies (1, 2, …), cigarettes lit per day, etc. Continuous data can take fractional values of whole numbers, like height, weight, pocket depths, etc.
Quantitative data deal with numbers measured with real precision, e.g., height, weight, age, blood pressure, pocket depths, alveolar bone level, etc.
Paired versus unpaired data
Unpaired (independent or unmatched) data are obtained from two groups that are unrelated to each other; measurements are taken on two separate groups of individuals, e.g., males vs. females, different age groups, and parallel designs. Paired or matched data arise when measurements are taken on the same individuals or on matched groups, as in split-mouth, before-and-after, or cross-over designs.
Statistical analysis of data is a fundamental step to make inferences and draw conclusions about the research. The data that are obtained at the end of a study are called the raw data. The data are then transferred to a statistical package such as SPSS or SAS for statistical analysis. In research, usually both descriptive and inferential statistics are used to analyze the results and draw conclusions.
Descriptive and inferential statistics   
Descriptive statistics ^{[1]} include the numbers, tables, charts, and graphs used to describe, organize, summarize, and present raw data. They are routinely used in reports which contain a significant amount of qualitative or quantitative data. They include:
 Measures of central tendency: (location) of data, i.e., where data tend to fall, as measured by the mean, median, and mode. The mean refers to the arithmetic average, i.e., the sum of the scores divided by the total number of scores. The median, by definition, is the middle value in a distribution, such that one-half of the units in the distribution have a value smaller than or equal to the median and one-half have a value greater than or equal to it. The mode is the score that occurs most frequently. In the data set 1, 2, 3, 4, 4, 5, 6, the mean is 3.57 (25 ÷ 7), the median is 4 (the center value when the numbers are arranged in order), and the mode is 4 (the most frequently occurring number).
 Measures of dispersion: (variability) of data, i.e., how spread out data are, as measured by the variance and its square root, the standard deviation.
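The central-tendency example above can be reproduced with Python's standard library; this is a sketch, and the variable names are ours:

```python
from statistics import mean, median, mode

scores = [1, 2, 3, 4, 4, 5, 6]  # data set from the example above

avg = mean(scores)       # sum of scores / number of scores = 25 / 7
mid = median(scores)     # middle value of the ordered list
frequent = mode(scores)  # most frequently occurring value

print(round(avg, 2), mid, frequent)  # 3.57 4 4
```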
Variability, in a statistical sense, is a quantitative measure of how close together or spread out the distribution of scores is. The two measures which quantify variability are the standard deviation (SD) and the standard error (SE). While the SD refers to individual observations in a study sample, the SE quantifies the variability of observed sample means if the study were repeated many times. ^{[4]}
Probability distributions in a sample can be thought of as being bell-shaped [Figure 1]. That is, most of the measurements in the population tend to fall around some center point (the mean, μ), and as the distance from μ increases, the relative frequency of measurements decreases. Variables (and statistics) whose probability distributions can be characterized this way are said to be normally distributed. Normal/Gaussian distributions are symmetric distributions that are classified by their mean (μ) and their standard deviation (σ). The SD (σ) quantifies the variability of individual measurements in a study sample and characterizes the distribution of the sample data points around the sample mean. Approximately half (50%) of the measurements fall above (and thus, half fall below) the mean. Approximately 68% of the measurements fall within one standard deviation of the mean (in the range (μ − σ, μ + σ)). Approximately 95% of the measurements fall within two standard deviations of the mean (in the range (μ − 2σ, μ + 2σ)). Virtually all of the measurements lie within three standard deviations of the mean.
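The 68%/95% proportions quoted above can be verified numerically; a minimal sketch using Python's standard library (`statistics.NormalDist`, available from Python 3.8):

```python
from statistics import NormalDist

# Standard normal distribution; after standardizing, these proportions
# hold for any normal distribution with mean mu and SD sigma.
std_normal = NormalDist(mu=0, sigma=1)

within_1sd = std_normal.cdf(1) - std_normal.cdf(-1)  # about 68%
within_2sd = std_normal.cdf(2) - std_normal.cdf(-2)  # about 95%
within_3sd = std_normal.cdf(3) - std_normal.cdf(-3)  # about 99.7%, "virtually all"
```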
A small SD indicates little variability in the sample data while a large SD indicates a lot of variability in the sample data. Mean ± SD is expressed only for continuous data. SD is a part of the descriptive statistics, while SE is a part of the inferential statistics. SE is directly affected by the sample size. Hence, for experiments conducted with a smaller sample size, it is preferable to provide Mean ± SE. ^{[1]}
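A short sketch of the SD/SE distinction, using hypothetical probing-depth values invented purely for illustration (not data from any study):

```python
from math import sqrt
from statistics import mean, stdev

# Hypothetical probing depths (mm), invented for illustration only
depths = [3.0, 4.0, 5.0, 4.0, 3.0, 6.0, 4.0, 5.0, 4.0, 2.0]

n = len(depths)
sd = stdev(depths)   # spread of the individual observations
se = sd / sqrt(n)    # spread of the sample mean over repeated studies

# Doubling the sample size (here crudely, by repeating the data) leaves
# the SD roughly unchanged but shrinks the SE, showing that the SE is
# directly affected by the sample size.
se_doubled = stdev(depths * 2) / sqrt(2 * n)
```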
Inferential statistics are used to draw conclusions and make predictions based on the analysis of numeric data. Inferential statistics are frequently used to answer causeandeffect questions. They are also used to investigate differences between and among groups.
Similar to diagnostic testing by clinicians, researchers conduct statistical tests on the observed data to make inferences about some underlying truth. ^{[4]} Statistical tests provide a measure of the likelihood of misclassification (false-positive or false-negative results) or uncertainty regarding the results obtained from the study sample. Two approaches to quantifying this uncertainty are
 Hypothesis testing, which is associated with P values, and
 Estimation, which is associated with confidence intervals (CIs).
Hypothesis testing
For a study to make an inference about the relationship between the proposed intervention/treatment and the outcome, hypothesis testing is a fundamental requisite. The null and the alternative (research) hypothesis were discussed in detail in the first part of this review series. Statistical testing is ultimately aimed at rejecting or not rejecting the null hypothesis rather than proving or disproving the alternative hypothesis. ^{[4]} Statistical inference is usually achieved by means of hypothesis testing using the P value (probability value).
The P value refers to the probability of detecting a statistically significant difference that is not the result of the treatment but the result of chance. In other words, the P level determines the probability of obtaining an erroneously significant result; in statistical language, this error is called a Type I error. The most commonly used criteria are probabilities of 0.05 (5%, 1 in 20), 0.01 (1%, 1 in 100), and 0.001 (0.1%, 1 in 1000). Very often, although not necessarily, this level is chosen to be 0.05, and if the P value is less than 0.05, the result is said to be statistically significant and it is concluded that there is enough evidence to reject the null hypothesis. It is incorrect to say that a smaller P value is more significant than a larger P value; however, many investigators do not follow this interpretation and erroneously refer to results as ''very'' or ''extremely'' significant when P values are small (P < 0.001). ^{[4]} P values are always greater than 0; computer-generated values such as P = 0.000 should be written as P < 0.001. Non-significant P values should be written as an exact number, i.e., P = 0.3 and not as P = NS (non-significant). ^{[1]}
Rejecting the null hypothesis when it is true (concluding that there is evidence to show that the population means differ when, in fact, they are equal) leads to what is termed a Type I error [Figure 2]. Type I error or false positive is referred to by the Greek letter alpha (α). An example of a type I error would be to say that there is a (significant) difference between the groups when actually there is no (significant) difference. A Type II error or false negative is referred to by the Greek letter beta (β) and is made when the null hypothesis is not rejected when it is false, i.e., when it is concluded that there is insufficient evidence to show that the population means differ when, in fact, these means are not equal. Or to simplify, an example of a type II error would be to say that there is no (significant) difference between the groups when actually there is a (significant) difference.
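The meaning of the 5% level can be illustrated by simulation: when the null hypothesis is true, a two-sided test at α = 0.05 should commit a Type I error in roughly 1 in 20 repeated experiments. A sketch with simulated (not clinical) data:

```python
import random
from math import sqrt

# Simulation sketch: both groups are drawn from the same distribution,
# so the null hypothesis is true and every rejection is a Type I error.
random.seed(42)  # fixed seed so the run is reproducible

n, trials = 30, 2000
false_positives = 0
for _ in range(trials):
    group_a = [random.gauss(0, 1) for _ in range(n)]
    group_b = [random.gauss(0, 1) for _ in range(n)]
    # z statistic for the difference in means, with sigma = 1 known
    z = (sum(group_a) / n - sum(group_b) / n) / sqrt(1 / n + 1 / n)
    if abs(z) > 1.96:  # 1.96 is the two-sided 5% critical value
        false_positives += 1

type_i_rate = false_positives / trials  # close to the chosen alpha of 0.05
```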
Estimation
While the P value is based on a single value or point estimate derived from the data, a second form of statistical inference, interval estimation, is a widely used tool to describe a population based on sample data. While the P value gives the researcher a dichotomous ''significance'' and ''non-significance,'' estimation gives an idea of ''how much'' one intervention works better than the other. In estimation, the sample study provides an estimate of the effect of the intervention in the population, and consideration of sampling error yields an interval, known as a confidence interval, which is reasonably certain to contain the (unknown) true population effect. ^{[5]} The confidence interval (CI) is used to estimate the upper and lower limits of the variability in the sample data. The idea is to obtain an interval, based on sample statistics, that we can be confident contains the population parameter of interest. The CI not only indicates significance, it also quantifies the variability or uncertainty of the result used to make the statistical inference. The 95 percent CI includes the true population value 95 times out of 100; the true population value will lie outside this interval 5 times out of 100.
A 95 percent CI for a ratio measure (such as an odds ratio or relative risk) that includes 1 is not considered to be statistically significant, and the evidence does not support an association between exposure and disease. If the 95 percent CI does not include 1, the result is considered to be statistically significant and provides evidence of a significant association between exposure and disease. (For a difference measure, such as a difference in means, the corresponding null value is 0.)
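A minimal sketch of computing a 95% CI for a mean, using invented attachment-level values; note that for a sample this small a t critical value would strictly be more appropriate than the normal value 1.96:

```python
from math import sqrt
from statistics import mean, stdev

# Hypothetical clinical attachment-level gains (mm), invented for
# illustration; they are not data from any actual study.
gains = [0.5, 1.0, 0.8, 1.2, 0.6, 0.9, 1.1, 0.7, 1.0, 0.8]

n = len(gains)
m = mean(gains)
se = stdev(gains) / sqrt(n)  # standard error of the mean

# Approximate 95% CI using the normal critical value 1.96
ci_lower, ci_upper = m - 1.96 * se, m + 1.96 * se
```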
Odds ratio and relative risk
Though the terminologies may sound synonymous, the odds ratio and the relative risk of an event are two distinct statistical concepts. Let us take the example demonstrated in the 2 × 2 table in [Figure 3]. We have two groups of patients, one comprising smokers and the other non-smokers. The risk ratio (RR) is the ratio of the probabilities of the two events, i.e., the incidence of periodontitis in the exposed group divided by the incidence in the non-exposed group. ^{[3]}
RR = [a/(a + b)] / [c/(c + d)]
The odds ratio (OR) is the odds in the exposed group divided by the odds in the non-exposed group. It is a ratio of the odds, not of the percentages. In statistics, the odds of an event occurring is the probability of the event occurring divided by the probability of the event not occurring.
OR = (a/b) / (c/d) = ad/bc
An odds ratio of 1 implies that the event is equally likely in both groups. An odds ratio greater than 1 implies that the event is more likely in the first group. An odds ratio less than 1 implies that the event is less likely in the first group. In medical research, the odds ratio is commonly used for casecontrol studies, as odds, but not probabilities, are usually estimated whereas relative risk is used in randomized controlled trials and cohort studies. ^{[6]}
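The two formulas can be sketched for a hypothetical 2 × 2 table; the counts below are invented for illustration and are not the values shown in [Figure 3]:

```python
# Hypothetical 2 x 2 table (invented counts):
#
#                  periodontitis   no periodontitis
#  smokers         a = 30          b = 70
#  non-smokers     c = 10          d = 90
a, b, c, d = 30, 70, 10, 90

# Risk ratio: risk in the exposed divided by risk in the non-exposed
rr = (a / (a + b)) / (c / (c + d))   # 0.30 / 0.10 = 3.0

# Odds ratio: odds in the exposed divided by odds in the non-exposed
odds_ratio = (a / b) / (c / d)       # equivalently (a * d) / (b * c)
```

Both ratios exceed 1, so the event is more likely among the smokers; the OR (3.86) overstates the RR (3.0) because the outcome is not rare here.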
Inference of statistical tests
Statistical tests are used to draw inferences about a population from a sample. These are classified into parametric and nonparametric tests [Figure 4].
Parametric tests
A parametric test concerns population parameters and requires assumptions about these parameters. Parametric data have an underlying normal (Gaussian) distribution, which allows more conclusions to be drawn as the shape can be mathematically described. The t-test, correlation, regression, and analysis of variance (ANOVA) are a few frequently used parametric tests. (The Chi-square test, though often grouped with them, is strictly a non-parametric test, as it makes no assumption of normality.)
Non-parametric tests
These are used when the researcher knows nothing about the parameters of the variable of interest in the population (hence the name non-parametric). These tests rely more on differences in medians than on the estimation of parameters (such as the mean or the standard deviation), and hence are also referred to as distribution-free methods. Non-parametric methods are often employed when measurements are available only on a nominal (categorical) or ordinal (rank) scale. A formal statistical test (the Kolmogorov-Smirnov test) can be used to test whether the distribution of the data differs significantly from a Gaussian distribution.
Intention to treat analyses (ITT): "Intention to treat" is a strategy for the analysis of randomized controlled trials that compares patients in the groups to which they were originally randomly assigned. It generally aims at including all patients, regardless of whether they actually satisfied the entry criteria, the treatment actually received, and subsequent withdrawal or deviation from the protocol. ^{[7]} The approach generally adopted is, wherever possible, to include all withdrawals in the statistical analysis, and analyze the results for these patients as if they were still in the treatment groups to which they were originally assigned. ^{[8]} Such an analysis would avoid the biases that would be otherwise introduced by the alternatives of either omitting the results or analyzing the results according to the treatments that these patients actually received. Intention to treat analysis is most suitable for pragmatic trials of effectiveness rather than for explanatory investigations of efficacy. ^{[7]}
Intermediate or interim analyses: Sometimes the investigators in a clinical trial may wish to perform significance tests at intermediate stages to evaluate treatment effects; if one treatment is found to be superior, the trial can be stopped early and all the patients can go on to receive the most effective treatment. ^{[9]} It is essential, however, to adjust the intermediate level of significance (the nominal significance level) in such a way that the final (overall) significance level is not disturbed. The overall significance level is usually set at 0.05 or 0.01.
Clinical vs. Statistical significance
''Although it is tempting to equate statistical significance with clinical importance, critical readers should avoid this temptation. To be clinically important requires a substantial change in an outcome that matters." ^{[10]} While statistical significance answers the question, "Is this a real effect?", clinical significance answers the question, "Is this an important effect?". ^{[11]} One cannot be inferred from the other.
The statement that one therapy is statistically significantly better than another fails to answer the question of whether the treatment makes an important difference to a desired outcome. ^{[12]} The results of a study may show statistical significance but be clinically insignificant. A trial may show a statistically significant gain of 0.5 mm in clinical attachment level between two modes of therapy, but whether this statistical significance translates into a clinical benefit remains debatable. Hence, clinical significance is more a matter of the clinician's judgment and the magnitude of the effect being studied. It is therefore crucial that, prior to conducting the study, the researcher select a difference between therapies, for example a 2 mm probing depth reduction, whose detection would be clinically meaningful. ^{[12]}
Number needed to treat
The outcome of research would be more meaningful when the clinical relevance of the statistical significance can be established. In this regard, determination of the number of sites that would need to be treated in a test group to provide a beneficial result or prevent an adverse event at one additional site beyond the control group would provide additional useful information. ^{[13]} An excellent editorial by Greenstein and Nunn ^{[13]} throws light on the use of number needed to treat (NNT) calculations to facilitate the practical worthiness of periodontal research findings. If P _{C} denotes the proportion of sites in the control arm demonstrating progression and P _{T} the proportion of sites in the treatment arm demonstrating progression, the NNT is calculated as the inverse of the difference in disease-progression rates (the risk difference) between the control group and the treatment group.
NNT = 1/(P _{C} − P _{T})
For example, consider the study by Caton et al., ^{[14]} who compared the use of subantimicrobial-dose doxycycline (SDD) in chronic periodontitis as an adjunct to scaling and root planing (SRP) with placebo plus SRP, as discussed by Greenstein and Nunn. ^{[13]} Study end points included progression of periodontitis (defined as ≥ 2 mm loss of clinical attachment) over a 9-month treatment period. Among sites with an initial probing depth of at least 7 mm, the reported risk of attachment loss ≥ 2 mm was 0.3% (P _{T}) for the SDD plus SRP group and 3.6% (P _{C}) for the placebo plus SRP group. The NNT is calculated as follows: the risk difference is 3.3% (3.6 − 0.3); dividing 100 by 3.3 (equivalently, 1 by 0.033) gives approximately 30 sites. Therefore, on average, 30 sites would need to be treated with the combination of SDD and SRP to avoid periodontitis progression at one additional site relative to treatment with SRP plus placebo.
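The same arithmetic as a short sketch, using the proportions quoted from the example above:

```python
# NNT arithmetic from the example above (Caton et al., as discussed by
# Greenstein and Nunn)
p_c = 0.036  # risk of progression in the placebo + SRP (control) group
p_t = 0.003  # risk of progression in the SDD + SRP (treatment) group

risk_difference = p_c - p_t      # 0.033, i.e., 3.3%
nnt = 1 / risk_difference        # about 30.3, i.e., roughly 30 sites
```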
Writing of a report   
Once the study has reached completion, the investigator should write a report of what was attempted and what was achieved. The report should be structured and should have the following contents:
Title
The title should reflect the content and emphasis of the project described in the report. It should be as short as possible and include the essential keywords.
Abstract
A primary objective of an abstract is to communicate to the reader the essence of the paper. The abstract should concisely describe the topic, the scope, the principal findings, and the conclusions. A structured abstract consists of specific sections such as Objective; Method; Findings; Conclusions. Unstructured abstracts contain similar information without clearcut sections.
Introduction
This section should contain a clear statement of the objective of the research and an explanation of the methodology adopted in accomplishing it. It should describe clearly but briefly the background information on the problem, what has been done before (with proper literature citations), and the objectives of the current project. The scope of the study, along with its various limitations, should also be included in this part.
Materials and Methods
This section describes in detail the experimental procedures, computational methods, or theoretical analysis used in the study.
Results
Relevant data, observations, and findings of the study are summarized.
Discussion
The crux of the report is the analysis and interpretation of the results. The discussion section would discuss in detail the following aspects. What do the results mean? How do they relate to the objectives of the project? To what extent have they resolved the problem? How do they differ from other reports and what could be the possible reasons?
Conclusion
This is the final summing up of the most significant results/findings, where the researcher should again state the results of the research clearly and precisely at the end of the document. Directions for future work can also be included in this section.
Bibliography or references
Reference lists contain a complete list of all the sources (books, journal articles, websites, etc.) that are cited directly in a document. This appears at the end of the document under the heading ''references''. Each reference entry has four parts: the name of the author, the year of publication, the title, and further publication information.
On the other hand, bibliographies contain all sources that you have used, whether they are directly cited or not.
Tables and figures
Representation of data in the form of tables and graphs makes it very easy for the reader to interpret the results of a study in a comprehensive manner. Figures accompanied by appropriate legends would enhance the understanding of the reader toward various clinical and radiographic conditions, techniques, and other solutions.
Plagiarism
To use someone else's exact words without quotation marks and appropriate credit, or to use the unique ideas of someone else without acknowledgment, is known as plagiarism. ^{[15]} In publishing, plagiarism is illegal; in other circumstances, it is, at the least, unethical.
Paraphrasing
When a written passage is paraphrased, you rewrite it to state the essential ideas in your own words. ^{[15]} The paraphrased material must be properly referenced because the ideas are taken from someone else whether or not the words are identical.
Various kinds of guidelines are available, depending on the type of research, which ''specify a minimum set of items required for a clear and transparent account of what was done and what was found in a research study, reflecting, in particular, issues that might introduce bias into the research.'' Examples include CONSORT ^{[16]} (Consolidated Standards of Reporting Trials) for randomized clinical trials, STARD ^{[17]} (Standards for Reporting of Diagnostic Accuracy) for reporting studies of diagnostic tests, PRISMA ^{[18]} (Preferred Reporting Items for Systematic Reviews and Meta-Analyses), formerly QUOROM (Quality of Reporting of Meta-analyses), MOOSE ^{[19]} (Meta-analysis Of Observational Studies in Epidemiology), QUADAS ^{[20]} (Quality Assessment of studies of Diagnostic Accuracy included in Systematic reviews), and STROBE ^{[21]} (STrengthening the Reporting of Observational studies in Epidemiology).
Conclusion   
Good research is a systematic process of collecting, collating, and analyzing information to better our understanding of the phenomenon under study. A sincere attempt was made in this three-part review series to simplify for the reader the essential parts of periodontal research, i.e., design, implementation, and data analysis, in their appropriate sections.
References   
1. Krithikadatta J, Valarmathi S. Research methodology in dentistry: Part II - The relevance of statistics in research. J Conserv Dent 2012;15:206-13.
2. Abt E. Understanding statistics 2. Evid Based Dent 2010;11:93-4.
3. Abt E. Understanding statistics 4. Evid Based Dent 2011;12:25-7.
4. Greenberg BL, Kantor ML. The clinician's guide to the literature: Interpreting results. J Am Dent Assoc 2009;140:48-54.
5. Petrie A, Bulman JS, Osborn JF. Further statistics in dentistry Part 1: Research designs 1. Br Dent J 2002;193:377-80.
6. Deeks J. When can odds ratios mislead? Odds ratios should be used only in case-control studies and logistic regression analyses. Br Med J 1998;317:1155-6.
7. Roland M, Torgerson DJ. Understanding controlled trials. What are pragmatic trials? BMJ 1998;316:285.
8. Petrie A, Bulman JS, Osborn JF. Further statistics in dentistry Part 3: Clinical trials 1. Br Dent J 2002;193:495-8.
9. Petrie A, Bulman JS, Osborn JF. Further statistics in dentistry Part 4: Clinical trials 2. Br Dent J 2002;193:557-61.
10. Available from: http://www.acponline.org/clinical_information/journals_publications/ecp/julaug01/primer.htm. [Last accessed on 2012 Jul 22].
11. Younger J, McCue R, Mackey S. Pain outcomes: A brief review of instruments and techniques. Curr Pain Headache Rep 2009;13:39-43.
12. Greenstein G, Lamster I. Efficacy of periodontal therapy: Statistical versus clinical significance. J Periodontol 2000;71:657-62.
13. Greenstein G, Nunn ME. A method to enhance determining the clinical relevance of periodontal research data: Number needed to treat (NNT). J Periodontol 2004;75:620-4.
14. Caton JG, Ciancio SG, Blieden TM, Bradshaw M, Crout RJ, Hefti AF, et al. Treatment with subantimicrobial dose doxycycline improves the efficacy of scaling and root planing in patients with adult periodontitis. J Periodontol 2000;71:521-32.
15. Alred GJ, Brusaw CT, Oliu WE. Handbook of technical writing. 9th ed. New York: St. Martin's Press; 2009.
16. Schulz KF, Altman DG, Moher D; CONSORT Group. CONSORT 2010 statement: Updated guidelines for reporting parallel group randomised trials. Br Med J 2010;340:c332.
17. Bossuyt PM, Reitsma JB, Bruns DE, Gatsonis PP, Glasziou PP, Irwig LM, et al. Standards for Reporting of Diagnostic Accuracy. The STARD statement for reporting of diagnostic accuracy: Explanation and elaboration. Clin Chem 2003;49:7-18.
18. Moher D, Liberati A, Tetzlaff J, Altman DG; PRISMA Group. Preferred reporting items for systematic reviews and meta-analyses: The PRISMA statement. PLoS Med 2009;6:e1000097.
19. Stroup DF, Berlin JA, Morton SC, Olkin I, Williamson GD, Rennie D, et al. Meta-analysis of observational studies in epidemiology: A proposal for reporting. JAMA 2000;283:2008-12.
20. Whiting P, Rutjes AW, Reitsma JB, Bossuyt PM, Kleijnen J. The development of QUADAS: A tool for the quality assessment of studies of diagnostic accuracy included in systematic reviews. BMC Med Res Methodol 2003;3:25.
21. Von Elm E, Altman DG, Egger M, Pocock SJ, Gøtzsche PC, Vandenbroucke JP; STROBE Initiative. The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) statement: Guidelines for reporting observational studies. J Clin Epidemiol 2008;61:344-9.
[Figure 1], [Figure 2], [Figure 3], [Figure 4]
