The 1984 issue of Astronomy and Astrophysics Abstracts lists 9361 papers written by 10,863 authors covering 2,200 major topics in astronomy. Under the simple heading of 'Stars', for example, you would find forty-five sub-topics ranging from 'Star Catalogs' to 'Stellar Winds' and spanning some 1189 individual papers. The activity implied by all of these papers is quite enormous. Each paper represents months or years of work based, in part, on the findings of still earlier research programs that have probably appeared in the literature over the years. A single paper may have thirty or more references to data or ideas presented by other researchers.
What is ultimately the result of all this activity? As astronomers, we hope that it will eventually culminate in a thorough, detailed understanding of all the major collections of matter in the universe: their properties at a single instant in time, as well as their evolution from birth to death. These 'major collections' include interstellar molecules, dust grains, asteroids, planets, solar systems, stars, nebulae, star clusters, galaxies, clusters of galaxies, and the universe itself. This represents a range in length scale of 10^36 and a mass range of 10^52! Not only would we like to know, by recourse to basic physical law, the answers to general questions like "How do stars with 5.5 times the mass of the sun evolve?", but we would also like to answer questions about specific objects in our universe, such as "What mechanisms are producing the jets in SS 433 and 3C 273?"
As a chronicle of the progress in our knowledge over the centuries, let's consider the subject of stellar structure and evolution. It is safe to say that for the last century, more articles have been written on this subject and related issues than on any other. That means that more time and effort have gone into understanding stellar structure and evolution than into any other field in astronomy. It is what you might call a mature discipline, one whose basic theoretical and observational ingredients are reasonably well understood at the present time. That we can know so much about objects that are so far away is a testament to the power of the scientific method and human technological inventiveness.
For thousands of years, stars were simply lights burning silently in the depths of the heavens; any discussions about what they were revealed more about the human imagination than about nature. It wasn't until Joseph Fraunhofer invented the spectroscope and began to examine the light from the sun, the planets, and several bright stars that the first step was taken towards answering the age-old question, "What is a star?". Just as for the sun, each bright star that was examined with the spectroscope revealed a rainbow of colors crossed by a pattern of dark lines. It was quickly discovered that the lines could be matched by a number of commonly known elements available in the laboratory. By 1864, Father Angelo Secchi at the Vatican Observatory had begun a program of systematically classifying the spectra of 4000 stars. Sir William Huggins and William Miller carefully studied the light from Sirius, Aldebaran and Beta Pegasi, identifying the elements hydrogen, sodium and magnesium from the dozens of spectral lines detected.
The first issues of The Astrophysical Journal, published in 1895, covered new developments in spectroscopy, both the theoretical principles on which it was based, and improvements leading to the design of even more powerful and sensitive spectroscopes. By this time, thanks to the pioneering efforts of Gustav Kirchhoff and Robert Bunsen, the solar spectrum had been resolved into twenty thousand spectral lines corresponding to thirty-nine elements. The state of the art of understanding the sun was candidly summarized by E.J. Wilczynski from the University of Berlin: "Almost every student of solar physics has his own theory, and usually he himself is the only one that believes in it." Although much of the work in solar physics had involved a careful study of sunspots and the spectroscopic features of the sun's surface, the internal structure of the sun was also becoming a lively topic of discussion. By the later decades of the 19th century, older ideas about the solar interior, involving a 'liquid body with clouds suspended over its surface', were being quickly replaced by the more modern view of a fully incandescent, gaseous ball which rotated at an increasing rate as you moved from its poles to its equator. It might amuse you to know that among the early speculations about the solar interior, Sir John Herschel announced that the surface of the sun was covered by living, luminescent organisms thousands of miles long!
The origin of the sun's tremendous energy supply also made its appearance in the arena of acceptable inquiry. In 1854, Hermann von Helmholtz proposed that the gravitational energy lost by the sun during a slow contraction would show up as a comparable quantity of heat energy, which could provide the 'missing energy source' for the sun. To produce the measured solar power of 400 trillion trillion watts, the sun's radius would only need to decrease by about thirty meters each year. Not much of a change when you consider that the radius of the sun is 700 million meters. But this means that about 30 million years ago, the sun was twice its present size, and that in another 30 million years it will be a burned-out red cinder, incapable of supporting life on earth. Between 1878 and 1883, Helmholtz's idea remained popular and was even refined to obtain an age of 4.3 million years. Fortunately for us, this cartoon sketch does not represent the real world.
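Helmholtz's contraction rate is easy to check with a back-of-the-envelope calculation. The sketch below is a rough estimate only (it ignores virial-theorem factors of order two): it sets the solar luminosity equal to the rate at which gravitational energy, roughly GM²/R, is released as the radius shrinks.

```python
# Rough Kelvin-Helmholtz estimate: how fast must the sun shrink to
# supply its luminosity from gravitational energy (~ G*M^2/R)?
# Setting L = (G*M^2/R^2) * dR/dt and solving for dR/dt.
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M = 1.989e30    # solar mass, kg
R = 6.96e8      # solar radius, m
L = 3.85e26     # solar luminosity, W

dR_dt = L * R**2 / (G * M**2)   # shrinkage rate in m/s
per_year = dR_dt * 3.156e7      # multiply by seconds per year

print(f"required shrinkage: roughly {per_year:.0f} meters per year")
```

The answer comes out at a few tens of meters per year, consistent with the 'about thirty meters' quoted above once the neglected factors are allowed for.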
In 1906, Karl Schwarzschild published a fundamental paper in astronomy, describing the appearance of an incandescent, stable ball of gas in considerable detail, using basic principles of physics. Not only did he show that the sun's limb should be darkened to the precise degree observed, but he went on to prove that the distribution of matter within the sun could be determined once you could specify the exact dependence of the gas pressure on its temperature and density. He also discovered that, under certain conditions, energy would be transported from the center of the star outwards, either by the convective boiling motion of matter, or by the streaming of radiation from the core to the surface. Sir Arthur Eddington continued this work by including the effects of radiation pressure, showing that mechanically stable stars are only possible for certain combinations of mass and luminosity. An amazing discovery indeed! Even in the creation of stars, nature followed a set of very specific rules, favoring certain stellar properties over others.
Between 1913 and 1917, Henry Norris Russell and Ejnar Hertzsprung claimed from their study of star sizes that blue stars were the hottest as well as the largest, while red dwarf stars were the smallest. They proposed that a star began its life as a hot blue star and, by contraction, wound up as a dull red dwarf. Eddington, mentioned above, further discovered that the core temperatures of all the 'main sequence' dwarf stars that Hertzsprung and Russell had been studying were actually very similar, about 20 to 30 million degrees, and that this temperature didn't depend on the star's mass or size. Instead of stars evolving from blue to red as they cooled, Eddington proposed that gas clouds would contract until their central temperatures reached about 20 million degrees, at which time they would stop contracting and become stable stars. This explanation re-ignited interest in two older questions: "What was the process that stopped the contraction of the star at this temperature?", and "Where did the energy come from, if not from Helmholtz's mechanism of gravitational contraction?" The answers could not emerge from the physical principles understood at that time, but had to wait for the 20th century discovery of nuclear disintegration and fusion.
The British astronomer R. d'E. Atkinson was the first to suggest, in 1931, that the capture of a proton by an atom could liberate enough energy to light the sun. Eight years later, Hans Bethe and C. von Weizsäcker presented the same idea but marshalled better evidence for it in their study of the thermonuclear fusion process known as the carbon-nitrogen-oxygen, or 'CNO cycle'. The CNO cycle was soon found to work well for stars like our own sun, where internal temperatures had been estimated to be about 20 million degrees. Yet the majority of the stars in the sky were less luminous than the sun. Red dwarf stars like Kruger 60A, whose core temperature was only 16 million degrees, were a case in point. That slight temperature difference translated into a 100-fold reduction in energy production, and a predicted luminosity for Kruger 60A about 100 times fainter than it was known to be. So, what is it that powers stars cooler than the sun? The answer was provided by Hans Bethe, who showed that a fusion reaction, the proton-proton chain, which converted hydrogen directly into helium without involving the CNO reactions, would work at these low temperatures. More advanced burning cycles have been studied since then, reactions capable not only of supplying even greater quantities of energy to support a star against gravitational collapse, but also of creating, in the cores of very massive stars, all of the known elements in the periodic table.
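The 100-fold drop in CNO output between the sun and Kruger 60A follows from the cycle's extreme temperature sensitivity. Near 20 million degrees, the CNO energy generation rate scales very roughly as the 20th power of the temperature; the exact exponent drifts with temperature, so the value 20 used below is an assumed textbook figure, not a measured one.

```python
# Why a small temperature drop starves the CNO cycle: its energy
# generation rate scales roughly as T**20 near 20 million K
# (assumed exponent; the true value varies with temperature).
T_sun = 20e6       # estimated solar core temperature, K
T_kruger = 16e6    # quoted core temperature of Kruger 60A, K

drop = (T_sun / T_kruger) ** 20   # factor by which CNO output falls
print(f"CNO output falls by a factor of about {drop:.0f}")
```

A 20 percent drop in temperature thus chokes the reaction by roughly the 100-fold factor quoted above.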
The first stellar models that showed, in detail, how a star evolves from the hydrogen fusion phase called the 'main sequence', through the red giant phase did not become available until electronic computers were developed. Prior to the advent of computers, the computations had to be done by hand using desk calculators. This led to trade-offs between using a crude model of the star's interior and taking many steps in time, or using a moderately detailed model of the interior but taking only a handful of time steps.
In 1955, R. Harm and Martin Schwarzschild published 15 'models', some calculated by hand, others using the electronic computer at the Princeton Institute for Advanced Study. The models represented the star's interior in three zones: the core, the outer envelope and the intermediate zone where convection would likely occur. Radiation pressure was ignored, as were differences in chemical composition between the zones, and no internal energy source was treated in detail. It took one full year of laborious work on a desk calculator to construct the hand-calculated models, which were computed for a total of 127 time steps. The models specified the changes in 18 quantities in each of the three zones. In contrast, the computer-generated model was followed for 37 time steps and required less than a day to compute. In a steady progression since then, as ever faster machines were developed, the time required has fallen until present-day computers can calculate complete stellar models in less than one minute!
The computational extension of the models from the hydrogen burning phase to later stages began in earnest in 1961, with the appearance of several papers announcing detailed, independent studies of 5, 10 and 15 solar mass stars by Chushiro Hayashi, Robert Cameron and Emil Polak. They used IBM 650 and 7090 computers, splitting each star into a dozen or more internal shells. Their programs followed the evolution of each star's structure, shell by shell, through the helium burning stage. For the most massive stars, the carbon and neon fusion stages were followed as well. They watched as the stars swelled to enormous dimensions and became red supergiants, their contracting cores switching first to helium burning, then to carbon and neon.
By 1964, the role of neutrinos in producing added pressure in the dense cores of more massive stars had been recognized and incorporated into the models. John Cox and Edwin Salpeter also examined the evolution of stars where electron degeneracy pressure was important. A similar calculation for stars 4 to 8 times the mass of the sun, done by David Arnett in 1969, showed that if the carbon burning cycle was triggered in a degenerate core, the entire star would blow up in a 'Carbon Detonation Supernova'. Whether anything was left behind other than an expanding cloud of gas seemed to depend very critically on the density of the star's core before the detonation, and on just how much pressure the neutrinos escaping from the star's core produced in the overlying matter. Depending on the core's density and mass, what would be left behind after this explosion would be: nothing, a white dwarf, or possibly a neutron star.
Since the 1960's, computer models have become more sophisticated. Periodic revisions have been made in the number of nuclear reactions that are considered, as well as updates in the reaction rates and energy yields, based on more exact theoretical calculations supplemented by experimental results. The detailed role of convective mixing in transporting energy from place to place within the star, and in changing the composition of the star, is also being studied, as are the roles of rotation and mass loss. As the models become more refined, they are used to an ever increasing degree in explaining the observed details of known stars. Some stars show an overabundance of certain elements that cannot be entirely explained by temperature effects alone. This suggests that convective mixing is the culprit, wherein the elements in deeper layers of the star are mixed into the visible surface layers. Then again, for the peculiar A-type stars, convection may be suppressed by the strong magnetic fields that have been measured on the surfaces of these stars, so that atomic diffusion driven by radiation pressure may be a more important factor.
A related area of study concerns the evolution of binary stars. The presence of a nearby star can alter the evolution of both stars, especially if matter is being pulled from one companion and dumped onto the other. The gravitational stresses that result inside a star with a close companion can alter convection patterns and mix hydrogen-rich gas with hydrogen-depleted material in the core, so that one star essentially gets to re-live its youthful, hydrogen burning phase all over again, as though it had just been born.
The final stages in the evolution of stars are also of great theoretical interest. Exactly how do planetary nebulae form? How are neutron stars and black holes produced from supernova explosions? Do all supernovae produce identifiable remnants? Although we are tantalizingly close to answering these questions and can do so in general terms, the details are still a bit vague.
I have spoken about mathematical models for stars, but I have not really described for you what I mean by this terminology. How do you reduce a pinpoint of light in the sky to a collection of equations, and what would these equations look like? The basic equations defining the structure and evolution of a star have been known for nearly a century. They describe what determines whether a star is stable or subject to gravitational collapse. They describe how energy is transported from the core of the star to its surface, and how the density and temperature of the gas vary from the core to the surface. This theoretical model must also describe how much energy is liberated by the various possible fusion reactions occurring in each gram of matter in the core. When we express all these relationships and interdependencies in symbolic form, we get the 'equations of stellar structure', which look like this:
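In standard textbook notation, the four coupled equations can be sketched as follows (this is the conventional form, not a reproduction of any one model's exact equations; r is the distance from the star's center, M_r and L_r the mass and luminosity interior to r, P the pressure, D the density, E the energy generation rate per gram, K the opacity, and a and c the radiation constant and speed of light):

```latex
\frac{dM_r}{dr} = 4\pi r^2 D
\qquad
\frac{dP}{dr} = -\frac{G M_r D}{r^2}
\qquad
\frac{dL_r}{dr} = 4\pi r^2 D E
\qquad
\frac{dT}{dr} = -\frac{3 K D}{16\pi a c\, T^3}\,\frac{L_r}{r^2}
```

The first expresses conservation of mass, the second hydrostatic equilibrium, the third conservation of energy, and the fourth radiative energy transport (a different temperature gradient applies inside convective zones).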
But these equations are not enough, because you also have to specify how the pressure inside a star, which supports it against gravitational collapse, depends on the values of other physical quantities like the star's chemical composition, M, temperature, T, and density, D, which may, in turn, change from place to place inside the star. The amount of energy released in the thermonuclear fires in the star's core, E, also depends on these quantities, as does K, the stellar opacity. The equation linking the pressure to the other variables is called the 'Equation of State' by the astronomical cognoscenti, and its form can change as the star evolves, or as you dissect the star and examine various layers within it. The pressure due to light radiation and high temperature gas is usually expressed as the sum of a gas term and a radiation term. For high gas densities near 10^5 grams/cc, we also have to include electron degeneracy pressure, P_e, caused when electrons are squeezed together into a small volume. The opacity of a star determines how transparent it will be to its own emitted light radiation. Since radiation pressure is in many cases the most important internal support for a star, its accurate specification is crucial. Depending on the kind of interaction involved between matter and the light streaming out from the star's deep interior, the mathematical description of the transparency of the star's matter takes on a variety of different forms. The sum total of these will determine how opaque the star is at a particular point in its interior, and how much radiation pressure will result. To write down all the different forms of the matter-radiation interaction that contribute to a star's opacity would easily fill a book of this size!
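In its simplest textbook form, the combined gas and radiation pressure reads as follows (a sketch only; k is Boltzmann's constant, m_H the mass of a hydrogen atom, and μ the mean molecular weight fixed by the composition):

```latex
P = \frac{k\, D\, T}{\mu\, m_H} + \frac{a}{3}\, T^4
```

At densities near 10^5 grams/cc, a degeneracy term, roughly P_e proportional to D^{5/3} for non-relativistic electrons, must be added to this.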
Although gravity is the ultimate source of the energy that heats a star's interior, it is the nuclear reactions that provide the energy from which the star's internal pressure is ultimately derived. A complicated network of interdependent equations is required to account for the energy released by fusion reactions and for how they change the internal element composition of a star. These equations describe how rapidly one element is converted into another by fusion or radioactive decay, and show how the rate of energy release depends on the local temperature and density of the star. To assemble these equations, one must first write down all the important pathways by which the conversion from one element to another occurs, and the energy released at each step. For example, when the cores of stars more massive than the sun reach temperatures exceeding 100 million degrees, the so-called Triple Alpha reaction becomes important in supplying the thermal pressure needed to prevent further gravitational collapse. In this fusion reaction, two helium nuclei fuse into a single beryllium nucleus; then, after an additional helium nucleus fuses with the beryllium, one obtains a single carbon nucleus as nuclear 'ash'. The reaction also produces a considerable amount of energy.
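Written out as nuclear reactions, the Triple Alpha process looks like this:

```latex
{}^{4}\mathrm{He} + {}^{4}\mathrm{He} \rightleftharpoons {}^{8}\mathrm{Be}
\qquad\qquad
{}^{8}\mathrm{Be} + {}^{4}\mathrm{He} \rightarrow {}^{12}\mathrm{C} + \gamma
```

The net effect is three helium nuclei converted into one carbon nucleus, releasing about 7.3 MeV per carbon nucleus formed (the standard textbook figure). The intermediate beryllium-8 is unstable, which is why the third helium nucleus must arrive before it decays.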
At still higher temperatures, appropriate to pre-supernova conditions where temperatures exceed 5 billion degrees, one encounters reactions that convert carbon into oxygen, oxygen into magnesium and silicon, and finally silicon into iron. All these reactions are very temperature sensitive. For instance, in Triple Alpha fusion, the reactions produce 10 times more energy at 105 million degrees than at 100 million degrees! Where does a star get the high energies and temperatures that allow these reactions to proceed? The answer is from the gravitational collapse of the core of the star under its own weight. Just as a rock gains speed and kinetic energy as it falls to the ground unsupported, the matter inside the core of a star, if unsupported by a counter-balancing pressure, will continue its fall towards the stellar center. In so doing, it gains kinetic energy, which appears as an increase in the temperature of the gas.
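That kind of sensitivity is easy to rationalize. The Triple Alpha rate is commonly quoted as scaling like the 40th power of the temperature near 100 million degrees (an assumed textbook exponent; the true value varies with conditions), so a mere 5 percent rise in temperature multiplies the energy output several-fold:

```python
# Temperature sensitivity of Triple Alpha fusion: the rate scales
# roughly as T**40 near 1e8 K (assumed textbook exponent).
boost = (105e6 / 100e6) ** 40
print(f"energy output rises by a factor of about {boost:.1f}")
```

With this exponent the boost comes out near a factor of 7; slightly steeper exponents give the factor of 10 quoted above.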
The change in the chemical composition of a star as it 'burns' one element and leaves behind another as a nuclear 'ash' can be represented by yet another set of equations. Modern nuclear reaction networks, such as those used to study the last years of a star about to become a supernova, incorporate over 250 nuclear species and isotopes, along with their highly interdependent equations of interconversion. Having considered the interior of the star and what goes into describing its inner workings, what of its outer layers?
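A real 250-species network is far beyond a book page, but its logical skeleton can be shown with a hypothetical two-species toy: hydrogen converting to helium at a fixed rate, integrated with simple Euler time steps. All names and rates here are illustrative, not taken from any actual stellar code.

```python
# Toy two-species reaction "network": the hydrogen mass fraction burns
# to helium at a fixed rate; simple Euler integration in time. Real
# networks couple hundreds of such equations, one per nuclear species.
def burn(x_h=1.0, x_he=0.0, rate=0.1, dt=0.1, steps=100):
    for _ in range(steps):
        dx = rate * x_h * dt   # fraction of hydrogen consumed this step
        x_h -= dx
        x_he += dx             # the nuclear 'ash' appears as helium
    return x_h, x_he

x_h, x_he = burn()
print(f"X_H = {x_h:.3f}, X_He = {x_he:.3f}, total = {x_h + x_he:.3f}")
```

Note that the total mass fraction stays fixed at 1; in a real network the bookkeeping must also track the energy released at every conversion.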
How does a star look to a distant observer? All you see through the eyepiece of the most powerful telescope is the radiation emitted by the surface of the star. The interior is completely hidden from view. Not only that, but the light produced in the star's dense core requires millions of years to reach its surface before it can start its journey to earth. There are models available for predicting the strengths and shapes of the atomic spectral lines emitted by the surface gases, but these models depend on the temperature, density, composition and surface gravity of the star. You can obtain predictions for these quantities at a particular instant in the life of a star using your stellar evolution model. These 'stellar atmosphere' models are very complicated; merely to write down the necessary equations would fill several books this size. The most sophisticated model now in routine use is the one developed by Robert Kurucz at the Center for Astrophysics in Cambridge, and his co-workers. His model contains 1,760,000 spectral lines for elements between hydrogen and nickel, and computes the expected spectrum shape and line intensities for most kinds of stars commonly studied in detail.
In addition to high temperature plasmas of charged atoms, stars are known to contain magnetic fields. A detailed study of the sun reveals a general surface field of about 1 gauss, sunspots where the fields are thousands of times stronger, and a periodic 22-year cycle of magnetic polarity reversal, better known as the Sunspot Cycle. Other phenomena related to stellar magnetic fields include prominences, flares and coronal holes. Magnetic fields have been detected on nearly 100 stars, mostly of the peculiar A-type, which have surface fields 100 to 30,000 times stronger than the sun's. Sunspot cycles have also been observed on a number of nearby stars. Thanks to the rapid influx of data from satellite observations of the sun, and long-term studies from ground-based observatories, the detailed description of the role of magnetic fields in our sun has evolved rapidly from crude 'back of the envelope' calculations to highly sophisticated theoretical models. Presumably, the physics of the magnetic fields on more distant stars can also be described by this same theory, or simple modifications of it.
Solar Dynamo Theory provides a mathematical framework for understanding how sunspots form, how periodic polarity reversals occur, and what they depend on. During a sunspot cycle, the entire magnetic field of the sun changes its shape, beginning with a field that looks like that of a familiar bar magnet, but changing to one that looks more like a donut wrapped around the sun's equatorial zone. One of the basic equations of the theory describes how the stellar magnetic field changes its shape from a polar geometry, B_p, into a toroidal shape, B_u: the basic process of the sun's 22-year field reversal. When solved for a particular stellar case, the equation shows how the stellar magnetic field evolves, and predicts, among other things, the duration of the sunspot cycle and the latitude distribution of the spots on the star's surface. The quantity G is called the 'turbulent eddy diffusivity', while R represents the radius of the region producing the field. The value of G depends on how rapidly magnetic fields can be transported from one place on the sun's surface to another. The faster this occurs, the shorter the sunspot cycle will be. Amazingly, this theory also works well in explaining why the polarity of the earth's magnetic field reverses, on average, every 250,000 years! The same equations are used; only the values for G and R change, to reflect earth's smaller size and the conductivity of its iron core.
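For reference, the mean-field dynamo equation usually written in textbooks has the form below (a sketch; B is the magnetic field, v the large-scale flow, α the 'alpha effect' of cyclonic convection, and the G above, the turbulent eddy diffusivity, is often written η_t in the literature):

```latex
\frac{\partial \mathbf{B}}{\partial t}
= \nabla \times \left( \mathbf{v} \times \mathbf{B} + \alpha \mathbf{B} \right)
+ G\, \nabla^{2} \mathbf{B}
```

The competition between field generation (the first term) and turbulent diffusion (the second) sets a cycle timescale of order R²/G, which is why a larger G means a shorter sunspot cycle.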
Most known stars rotate; some barely at all, while others, such as the so-called 'emission-line B-type stars', spin fast enough to deform themselves into a distinctly oval shape. In particularly extreme cases, not only is the star deformed, but it spins off matter along its equator, where the centrifugal force wins over gravity and launches streamers of hot gas into space. Stellar rotation can produce a whole host of effects, including sunspot cycles, surface deformation and convection. To include the rotation of a star in its mathematical description, we have to re-write all of the equations in terms of a rotating coordinate system. Since the shape is no longer a perfect sphere, the temperature, density and composition, instead of depending only on the distance from the star's center, now also depend on stellar latitude and longitude angles, and are represented by a set of mathematical functions, the spherical harmonics, built from Legendre polynomials. The effect of stellar rotation on the structure and evolution of stars is so complicated to describe mathematically that only with the advent of fast computers have actual, realistic calculations been attempted.
In addition to the slow, million-year long changes that stars experience during the course of their evolution, any amateur astronomer will tell you that some stars, usually the red ones, undergo visible changes in brightness within a few days or weeks. Stars vary in brightness in this way because they are passing through an unstable period towards the end of their lives. This phenomenon does not involve the expansion and contraction of the star's entire body from core to surface, but only the outer layers nearest the stellar surface. When the layers expand, the star's surface cools slightly and the star dims. When the layers collapse, they heat up slightly and the star brightens. Stellar variability can be described mathematically once a particular stellar model has been computed, giving the initial dimensions of the unstable layers, their temperatures and compositions. A set of equations is then used to calculate the amplitude and period of the oscillation.
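The most famous result of such calculations is the period-mean density relation: the pulsation period P scales inversely with the square root of the star's mean density. In textbook form, it reads as follows, where Q is a slowly varying 'pulsation constant', typically a few hundredths of a day depending on the mode of oscillation:

```latex
P \sqrt{\frac{\bar{\rho}}{\bar{\rho}_{\odot}}} \approx Q
```

This is why the puffed-up red giants, with very low mean densities, vary over days or weeks, while compact stars oscillate in minutes or seconds.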
Stellar winds appear to be a common feature of many types of stars throughout their lives, especially for the bloated red supergiants such as Betelgeuse, which loses 1.4 solar masses of material every million years. Since at this rate Betelgeuse will lose its entire remaining mass in about 20 million years, it must be well on its way to some major change in its life, perhaps a supernova explosion. Equations have also been developed to describe this outflow of matter from the surface of a star, including the effects of magnetic fields and rotation.
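The simplest ingredient of any such wind model is mass continuity: in a steady outflow, the same mass-loss rate must pass through every spherical shell of radius r. In textbook form (ρ is the gas density in the wind and v its outflow speed):

```latex
\dot{M} = 4\pi r^{2}\, \rho(r)\, v(r)
```

Magnetized, rotating wind models, such as the Weber-Davis treatment, add angular momentum and magnetic stress terms on top of this basic relation.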
Stellar winds can be detected around other stars by the effect that they have on the star's spectrum. Unusually broadened spectral lines from key elements, or other peculiarities in the profiles of these lines, can indicate the presence of hot, ionized gas being ejected from a star. If the stellar winds are cool and dense enough, dust grains can condense out of the gas like raindrops. Although the surface of a star is usually very hot, exceeding 2,500 K in most cases, at a sufficiently great distance from the star, temperatures within the outflowing matter will be cool enough for carbon or silicon atoms to stick together, forming dust grains. This process of condensation can be described by equations that follow the growth of dust grains, and describe what observers on earth will see as they look at a star with such a dusty envelope surrounding it. For some stars, like the infrared source IRC+10216, carbon dust grains are condensing in the atmosphere in such numbers that the star itself is optically invisible. All that one can observe is the infrared emission from the heated dust grains, which now form a dense cocoon around the star.
All of these equations, when combined in a computer program, and after extensive debugging, can be used to create theoretical models of objects that run through their evolution, lose mass through stellar winds, evolve to become white dwarfs or neutron stars, and otherwise look surprisingly like the stars we see in the night sky. In principle, it would be nice to have a single program that could evolve a star from a collapsing gas cloud to, say, its eventual demise as a white dwarf or supernova; a program that would follow detailed changes in surface magnetic fields and stellar wind output. In practice, however, this is not necessary or even desirable. If you are studying the collapse of a star's dense core prior to the supernova phase, the presence or absence of spots on the star's surface is not likely to make much of a difference, physically or observationally. You might, however, be interested in whether or not the star was rotating, or in how the convection patterns occurring at a particular location within the star are influencing the chemical composition of the core region. Both of these make a measurable difference in the properties of the left-over remnant, and in the chemical composition of the gas ejected into interstellar space.
A single computer program attempting to follow a star as it evolves from birth to supernova, yet giving detailed predictions for surface magnetic fields and spot distributions, would have to follow the minute-to-minute changes in these fields while handling the million-year changes due to its evolution. It would also have to keep up correctly with the microsecond-by-microsecond changes in stellar structure during the supernova detonation itself. Even at a temporal resolution of one minute, there will be 10 trillion of these timesteps during the full life of such a star, posing a daunting computational and bookkeeping problem. The solution? Theoreticians tailor their programs for studying the physics of interest, not the entire evolutionary process. If you want to study the supernova, begin the model with a 'realistic' composition provided to you by a stellar evolution model, and ignore stellar winds and surface magnetic fields. Once you have run your computer models spanning the last milliseconds of a supernova's life, you can patch them into the results from other models by arguing that the starting conditions you began your computations with are comparable to the conditions predicted by the evolution models spanning 100 million years at thousand-year intervals. Like a giant patchwork quilt, astronomers use many interwoven, and interdependent, theories to assemble a complete view of a star's life; a view that no single one of the theories can describe completely.
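The '10 trillion timesteps' figure is straightforward arithmetic, assuming a massive-star lifetime of about 20 million years (an assumed round number; massive stars live for a few tens of millions of years):

```python
# Count one-minute timesteps over an assumed 20-million-year
# lifetime of a massive star.
years = 2.0e7
minutes_per_year = 365.25 * 24 * 60   # about 5.3e5 minutes in a year
steps = years * minutes_per_year
print(f"about {steps:.1e} one-minute timesteps")
```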
One issue that all mathematical prognosticians must face may well thwart any practical attempt to construct stellar models of arbitrarily high predictive power. It is in the very nature of the mathematical approach that it will never lead to a perfect match between observation and theory for all length scales and time intervals. The reason? It's related to why meteorologists will never be able to tell you that, for example, five months from today, at 3:35 PM, there will be a rain shower over the town of Adams, Massachusetts, which will last 1 hour and 45 minutes. To make a prediction that specific, it is very likely that meteorologists would need to measure the state of the earth's atmosphere today, within every cubic inch over the entire globe, throughout its entire 100 kilometer thickness. In addition to the literally astronomical data storage requirements, the computer would not even be allowed to round off any of the intermediate numbers it computes, and it would have to complete the calculation before the target hour passes.
Mathematicians tell us that nearly all the equations we create to represent nature are inherently unstable for use in forecasting. They are not unstable because they are incomplete, though that certainly contributes to faulty predictions; they are unstable because the data we feed them are always incomplete. When you construct a mathematical representation of a physical system, you begin by selecting the quantities for the variables in the model at a particular starting time. You start the stellar evolution calculation with, for example, a surface temperature of 6,000 K, a total stellar mass of 2.000 times the mass of the sun, and a composition approximated by treating hydrogen and helium separately, lumping all the elements heavier than helium together into one number, and assuming that the star has the same composition throughout. The equations then tell you how each of the parameters changes with each time step as you evolve the model into the future, or the past. The only problem is that the values that the variables take on at the end of the computation can be very sensitive to their values when you started the calculation. For the weather problem, it has been jokingly said that to know the weather pattern at one spot on the earth a few years into the future will depend on how vigorously a butterfly was stirring up the atmosphere a thousand miles away last year! For stellar evolution calculations, fortunately, it appears that what you wind up with as a stellar model is not too sensitive to where you start out, provided you only want to know a star's size, luminosity, surface composition and temperature. Our curse, that we can never study the interior of a distant star or photograph its surface and surroundings, becomes our blessing, since from our vantage point on earth, that which we want to know about a star and can measure can be summed up in a short list of numbers.
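This sensitivity to starting values is easy to demonstrate with the logistic map, a standard toy example from chaos theory (not a stellar model): two starting values differing by one part in ten billion track each other for a few dozen steps, then disagree completely.

```python
# Sensitivity to initial conditions: the chaotic logistic map
# x -> 4x(1-x) amplifies a tiny difference in starting value until
# the two computed histories bear no resemblance to each other.
def iterate(x, n):
    for _ in range(n):
        x = 4.0 * x * (1.0 - x)
    return x

for n in (5, 30, 55):
    gap = abs(iterate(0.3, n) - iterate(0.3 + 1e-10, n))
    print(f"after {n:2d} steps the two runs differ by {gap:.2e}")
```

The initial difference roughly doubles at every step, so after a few dozen iterations the 'forecast' has lost all memory of which starting value was correct.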
A few million years difference in age between two stars like our sun, amounts to an observational difference between them that is, largely, not measurable in terms of temperature, luminosity or spectral features.
So where does this put the classical goal of science as a means of predicting and accurately portraying natural phenomena? For astronomy, it says that there are limits to our knowledge about the physical world. Within those limits we can hope to learn a great deal about the stars and the distant galaxies, but none of this knowledge will be certain. This will probably come as a bitter pill for many non-scientists, who may still cling to the wishful dreams of obtaining absolute knowledge, untarnished truths, and some scheme for distinguishing clearly between right and wrong answers. In science, we are accustomed to laws that may be overturned by the very next observation, theories that may be incomplete, and data that may not only be uncertain, but even wrong and misleading. This is not the arena that so many people might imagine science to be. Scientists do search for objective truths, but those truths are not written in capital letters and inscribed in stone. It is not that scientists have to change their methodology so that Truths can be revealed; it is that society has to learn that absolute truths about the physical world probably do not exist. As Jacob Bronowski states so poignantly in his essay 'Knowledge or Certainty':
"...Science is a very human form of knowledge. We are always at the brink of the known, we always feel forward for what is to be hoped. Every judgement in science stands on the edge of error, and is personal. Science is a tribute to what we can know although we are fallible..."