Absurdly Trivial Errors in Scatchard Plot Analysis
The Scatchard plot is an extremely strange and awkward procedure of biochemical data analysis. Nevertheless (I don't understand why) it is enormously popular -- now on the decline, but 5-10 years ago its use was mentioned in approximately 5000 publications every year. The number of theoretical treatments and various guides on this procedure is also enormous. I have the impression that every sufficiently big biochemical boss considers it his first duty to express his own artistic views on how Scatchard plots should be analysed.
Yet, certainly, Scatchard plot analysis requires some basic mathematical accuracy. Lacking it, all those absent-minded guides failed to identify several trivial pitfalls in this procedure, any one of which completely deprives the results of the analysis of any meaning. The number of scientific publications abusing Scatchard plot analysis in this way is terrible -- it amounts to THOUSANDS of published articles.
Though, of course, the importance of these errors varies. As an example of a really serious case, consider the article by Hempstead B.L. et al. (see the text appended below), which appeared among the top ten most cited works of that year -- and which was completely based on extremely suspicious Scatchard plots. To be fair, quite frequently Scatchard plots are used as just a meaningless nice-looking "vignette"; so the actual number of works losing their sense thanks to these pitfalls is significantly smaller, yet it is still terrible.
And one funny ramification: successful fabrication of trustworthy Scatchard plots also requires some basic mathematical accuracy, so it turns out to be a rather difficult job for many unscrupulous researchers. I have found several types of 'impossible' peculiarities in the appearance of published Scatchard plots that cannot be interpreted otherwise than as the result of absent-minded data fabrication. For more information on this theme, take a look at my notes Nature's editors conceal fraud and Statistics of Scientific Fraud.
So, this article presents a dull description of all these pitfalls. They are not strictly limited to the Scatchard plot procedure; using computer programs (including my own AFFINOGEN) for analysing binding data does not guard against some of these errors, so awareness of the pitfalls remains crucial.
Background and non-specific signals in Scatchard plot
The fragment below describes the most important pitfall in Scatchard plot analysis; so, apparently, I may pretend to be its discoverer.
There are two types of non-specific signals appearing in binding data.
The background signal is a constant addendum to all datapoints. In other words, it is the signal in the "no ligand" vial, appearing owing to radioactive dirt in the measuring device, environmental background radioactivity, or simply a wrongly set zero of the device. Obviously, it is equivalent to the presence of very high-affinity binding sites which are completely saturated throughout the binding experiment.
As far as I know, the existence of the background signal is denied by modern biochemistry (this is not a joke).
The nonspecific signal is due to the binding of ligand to a large number of non-specific (i.e. very low-affinity) binding sites. It is a signal growing linearly with the amount of added ligand.
Quite naturally, both background and non-specific signals may be experimentally measured and subtracted from the summary signal. But the question is how dangerous it is if they are determined with some error and, therefore, some signal of either type remains in the data analyzed following the Scatchard procedure.
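The two signal types and their subtraction can be put in a minimal Python sketch. All numbers here (Bmax, Kd, the background and non-specific magnitudes) are hypothetical, chosen only to illustrate the arithmetic: an error in the measured background or non-specific slope leaves exactly that residual in the "corrected" data.

```python
# A sketch (hypothetical numbers) of the two non-specific signal types
# described above, and their subtraction from the summary signal.
# The specific part follows a simple one-site law B = Bmax*F/(Kd+F);
# "bg" is the constant background addendum, "ns" the slope of the
# non-specific signal growing linearly with free ligand F.

def observed_signal(F, Bmax=100.0, Kd=1.0, bg=2.0, ns=0.05):
    """Summary signal = specific binding + constant background
    + linearly growing non-specific binding."""
    specific = Bmax * F / (Kd + F)
    return specific + bg + ns * F

def corrected_signal(F, measured_bg, measured_ns, **kw):
    """Subtract the experimentally measured background and non-specific
    signal; any error in either measurement leaves a residual signal."""
    return observed_signal(F, **kw) - measured_bg - measured_ns * F

F = 0.5
# perfect correction recovers the specific signal exactly
print(corrected_signal(F, measured_bg=2.0, measured_ns=0.05))
# an under-estimated background leaves a constant residual addendum
print(corrected_signal(F, measured_bg=1.0, measured_ns=0.05))
```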
First, it may definitely be said that the non-specific signal hardly influences the calculated "high-affinity" constant. Unless it is very big, there is little point in its accurate determination.
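This harmlessness is easy to check numerically. In the sketch below (hypothetical one-site data, Bmax = 100, Kd = 1), a straight line is fitted to the Scatchard plot with and without a residual non-specific slope left in the data; the apparent Kd barely moves:

```python
# Sketch showing that a residual non-specific slope left in the data
# barely shifts the apparent Kd obtained from a straight-line fit of
# the Scatchard plot. All parameter values are hypothetical.
import numpy as np

Bmax, Kd = 100.0, 1.0
F = np.linspace(0.1, 10, 50)              # free ligand concentrations

apparent_Kd = {}
for resid_ns in (0.0, 0.1):               # residual non-specific slope
    B = Bmax * F / (Kd + F) + resid_ns * F
    slope = np.polyfit(B, B / F, 1)[0]    # Scatchard: B/F versus B
    apparent_Kd[resid_ns] = -1.0 / slope  # slope of the line is -1/Kd

print(apparent_Kd)    # both values stay close to the true Kd = 1
```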
In contrast, the danger from the background signal is catastrophic. To illustrate it, the figure below draws the same binding curve with no additional background signal (solid line), with a background signal equal to 1% of Bmax (dotted line), and equal to 5% of Bmax (dashed line), in semilogarithmic and Scatchard coordinates. As was said above, a background signal is equivalent to the presence of very high-affinity binding, and the Scatchard plot shows it correspondingly. Obviously, if some background signal is present in the data, the bound concentration never approaches zero and, therefore, the left section of the Scatchard curve has a vertical asymptote intersecting the X-axis at the concentration equal to the magnitude of the background signal. If the background is big, its presence may be detected by eye; this sort of published picture is quite widespread (e.g. Hisabori T. et al., 1992). Yet, if the background signal is small and no vertical asymptote is detectable on the plot, it becomes impossible to distinguish a background signal from true high-affinity binding by eye.
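The vertical asymptote is easy to reproduce numerically. In this sketch (hypothetical one-site parameters, background equal to 5% of Bmax), the bound signal B never drops below the background as free ligand goes to zero, so B/F grows without bound and the Scatchard curve hugs a vertical line at B equal to the background:

```python
# Numerical sketch of the vertical asymptote a background signal
# produces on a Scatchard plot. Hypothetical one-site parameters:
# Bmax = 100, Kd = 1, background = 5% of Bmax.
Bmax, Kd, bg = 100.0, 1.0, 5.0

def bound(F):
    """Bound signal: specific one-site binding plus constant background."""
    return Bmax * F / (Kd + F) + bg

# As free ligand F -> 0, B approaches bg (never zero), while B/F grows
# without bound: the curve approaches a vertical line at B = bg.
for F in (1e-1, 1e-3, 1e-5):
    B = bound(F)
    print(F, B, B / F)
```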
I think it is a fatal pitfall; owing to it, graphical analysis of a Scatchard plot may be considered reliable only if the plot is clearly linear. Otherwise, as you can see on the left panel of the figure above, even a 1% addendum results in a pretty-looking, artifactually "heterogeneous" Scatchard curve. In this regard, an enormous number of research papers reporting a tiny high-affinity fraction may always be suspected of being erroneous; these suspicions are aggravated if the magnitude of the Scatchard plot's curvature is comparable with the magnitude of the datapoints' scattering, or if the whole work is devoted to sudden appearances and disappearances of this high-affinity fraction (e.g. Hempstead B.L. et al., 1991).
Naturally, the damage from a background signal is not limited to the case when the Scatchard plot is linear. If the binder is really heterogeneous, a background signal may also severely distort the estimate of the affinity of the high-affinity fraction derived from the Scatchard plot.
To some extent this pitfall may be countered using computer programs for affinity analysis. These may be either affinity spectrum approaches (e.g. Tobler & Engle, 1983; Yuryev D.K., 1991) or multisite model programs (e.g. Munson P.J. & Rodbard, 1980); in the second case, however, the program should be run at least in 3-site mode. If some background signal is present, either of these programs will give a subpopulation in the calculated affinity distribution with "too high" affinity -- with Kd smaller than the smallest free ligand concentration used in the experiment. Actually, running LIGAND in two-site mode may also give a "too-high" high-affinity constant, and this sort of error is also quite widespread (e.g. Chadwick C.C. et al., 1992). In this way the background signal at least may be identified. Then it may be subtracted from the analyzed data and, if necessary, the correct value of the "high-affinity" constant may be estimated.
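The "too-high affinity" signature can be illustrated with a toy fit (this is my own illustrative sketch, not the algorithm of any of the programs cited above; all numbers are hypothetical). Data are generated from a single site plus an unsubtracted background and then fitted with a two-site model by a grid search over the two Kd values, solving for the amplitudes by linear least squares; the best fit assigns one site a Kd far below the smallest free-ligand concentration used:

```python
# Sketch: data from ONE site (Kd = 1) plus an unsubtracted background
# of 2, fitted with a two-site model B = B1*F/(K1+F) + B2*F/(K2+F).
# Grid search over (K1, K2); amplitudes via linear least squares.
import numpy as np

F = np.logspace(-2, 1, 25)             # free ligand concns, min = 0.01
data = 100.0 * F / (1.0 + F) + 2.0     # one real site + background of 2

grid = np.logspace(-6, 2, 41)          # candidate Kd values
best_sse, best_K1, best_K2 = np.inf, None, None
for K1 in grid:
    for K2 in grid:
        if K1 >= K2:                   # keep K1 the high-affinity site
            continue
        A = np.column_stack([F / (K1 + F), F / (K2 + F)])
        amps = np.linalg.lstsq(A, data, rcond=None)[0]
        sse = np.sum((data - A @ amps) ** 2)
        if sse < best_sse:
            best_sse, best_K1, best_K2 = sse, K1, K2

# The spurious site's Kd lands below min(F) -- the background's fingerprint.
print(best_K1, best_K2)
```

The background masquerades as a site that is completely saturated over the whole measured range, which is exactly why its fitted Kd falls below every free concentration actually used.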
Munson P.J. & Rodbard D. (1980) - Anal. Biochem., 107:220-39.
Hempstead B.L. et al. (1991) - Nature, 350:678.
Chadwick C.C. et al. (1992) - J. Biol. Chem., 267:3473-81.
Hisabori T. et al. (1992) - J. Biol. Chem., 267:4551-56.
Yuryev D.K. (1991) - J. Immunol. Meth., 139:297.
Tobler H.J. & Engle G. (1983) - Naunyn-Schmied. Arch. Pharmacol., 322:183-192.
See also another article, on inhibition curve analysis: Refining Cheng-Prusoff equation >>
Take a look also at:
AFFINOGEN(i) - an online Java program for reconstructing affinity spectra from inhibition curves