ASTRONOMER.

AUTHOR: ARINY AMOS (ASTRONOMER)

YEAR: 2016

BOOK TITLE: STARS FORMATION AND EVOLUTION EXPERIMENT.

PREFACE.
The author, Ariny Amos, thanks God.

TABLE OF CONTENTS.

PREFACE

ABSTRACT

INTRODUCTION

Astronomical unit of mass

Jupiter mass

Equivalent planetary masses

Astronomical unit of length

Other units of astronomical distances

Introduction to Types of astronomy

Amateur astronomers.

Astronomy definition.

LITERATURE REVIEW

Prehistoric Europe

Mesopotamian astronomy

Greece and Hellenistic world

Egypt

China

Mesoamerica.

Medieval Middle East

Medieval Western Europe

RENAISSANCE PERIOD

SCOPE OF THE STUDY OF PHYSICS AND ASTRONOMY

Uniting physics and astronomy

Completing the solar system

Modern astronomy

Observational astronomy

Cosmology and expansion of the universe

New windows into the cosmos open

Astrophysics

Astrobiology

Astrochemistry

UTILITARIANISM AS AN APPLICATION OF ASTRONOMY

Utilitarianism

Observational astronomy: radio astronomy

Infrared astronomy

Gamma-ray astronomy

Fields not based on the electromagnetic spectrum

Astrometry and celestial mechanics

Theoretical astronomy

Planetary science

Galactic astronomy

Extragalactic astronomy

Cosmology

Interdisciplinary studies

Amateur astronomy

THE SCIENTIFIC BIG BANG THEORY

Timeline

Singularity

Inflation and baryogenesis

Universe cooling

Universe structure formation

Features of the model

Expansion of space,

Horizons

Etymology

Development of the universe

Observational evidence

Hubble's law and expansion of space

Cosmic microwave background radiation

Abundance of primordial elements

Galactic evolution and distribution

Other lines of evidence

Future observations

PROBLEM STATEMENT

Problems and related issues in physics

Baryon asymmetry

Dark matter

Horizon problem

Magnetic monopoles

Flatness problem

Cause

Ultimate fate of the universe

Misconceptions

Speculations

Religious and philosophical interpretations of the Big Bang

Unsolved problems in astronomy

STAR

Observation history of a star

Designation of a star

Unit of measurement of a star

Star formation and evolution

Star formation

Post main sequence

Massive star

Spectral luminosity class

Evolutionary supergiants

Categorization of evolved stars

Surface gravity

Temperature,

Luminosity

Variability,

Chemical abundances

Collapse of a star

Binary stars

Distribution of binary stars

Characteristics of stars

Stellar age estimation

Metallicity and molecules in stars

Mass,

Rotation

Radiation

Luminosity

Magnitude

Stellar classification

Variable star

Structure of a star

Nuclear fusion reaction pathways

Stellar nucleosynthesis

Overview of the proton-proton chain

The carbon-nitrogen-oxygen (CNO) cycle

STAR EVOLUTION OR STELLAR EVOLUTION

Protostellar evolution

Star formation

Chemical composition

Observed classes of young stars

Brown dwarf and sub-stellar objects theory

Subgiant

Subgiant tracks

Stellar evolutionary tracks

Very low mass stars

Massive stars

Properties of massive stars

Sub-giants in the H-R diagram

Variability

Planets in orbit around subgiant stars, including Kappa Andromedae

High mass brown dwarfs versus low mass stars

Sub-Brown Dwarf

Observations and classification of brown dwarfs

Spectral class M

Spectral class L

Spectral class T

Spectral class Y

Spectral and atmospheric properties of Brown Dwarfs

Observational techniques

Recent developments on Brown Dwarfs

Planets around Brown Dwarfs

Habitability

Superlative Brown Dwarfs

MAIN SEQUENCE STAR

Main sequence

History of the main sequence

Formation of main sequence

Star formation, protostar and pre-main-sequence star

Properties of main sequence

Dwarf terminology

Parameter

Sample parameter

ENERGY GENERATION

Stellar nucleosynthesis

Key reactions

Cross-section of a supergiant showing nucleosynthesis and elements formed

Hydrogen fusion

Helium fusion,

Triple alpha process and alpha process

Hydrostatic equilibrium

Mathematical consideration

Derivation from force summation

Derivation from general relativity

Applications of hydrostatic equilibrium

Astrophysics of star formation

Stellar structure

Energy transport, heat transfer of stars, and equations of stellar structure

Rapid evolution

Evolutionary tracks

HERTZSPRUNG-RUSSELL DIAGRAM

Forms of diagrams

Interpretation,

Diagram roles in the development of stellar physics

Mature stars

Electron degeneracy pressure

Helium fusion

Fusion of helium

Alpha fusion chain

Secondary helium fusion processes

Primary processes

Secondary processes

A note on notation

A comment on reaction rates

Fusion of Carbon and Oxygen

Carbon and Oxygen fusion chain

Carbon fusion

Oxygen fusion

Compton scattering

Bremsstrahlung

Photo-ionization

Atomic lines

RADIATION TRANSPORT

Radiation transport in stellar interiors

Convection in stellar interiors

Polytropic stars

Low mass star

Sub giant phase

Sub giant

Red giant branch phase

Horizontal branch

Asymptotic giant branch phase

Asymptotic giant branch

Post AGB

Massive star supergiant

SUPERNOVA

Stellar remnants

White and Black Dwarfs

White Dwarf

Pauli exclusion principle

Connection to quantum state symmetry

Pauli principle in advanced quantum theory

Astrophysics and the Pauli exclusion principle

Black Dwarfs

NEUTRON STAR

Neutron star

Formation of a neutron star

Schematic of stellar evolution

Properties of a neutron star

Mass and temperature

Density and pressure,

Giant nucleus

Magnetic field

Gravity and equation of state

Neutron star structure

Radiation

PULSARS

Pulsars

Non-pulsating neutron stars

Spectra

Rotation

Spin down

Spin up

Anti-glitches

Population and distances

Binary neutron star system

X-ray binaries

Neutron star binary mergers and nucleosynthesis

Planets

History of discovery of neutron star

Subtypes table of neutron stars

Examples of neutron stars

Main sequence stars

Red giant star evolution

Binary star

GRAVITATIONAL COLLAPSE

Big crunch

Star formation summary

Stars formation

Stellar remnants

White Dwarf

Neutron star

Black holes

THEORETICAL MINIMUM RADIUS OF STAR MODEL

Theories for the evolution of binary stars

Mathematical model of stellar evolution

Blue stars

Equation of interaction between the components

Mathematical procedure; star formation

Numerical examples and conclusion

Evolution towards a stationary state

Evolution towards a limit cycle

Measuring stellar and Dark mass fractions in spiral galaxies

Observations

METALLICITY

Stellar metallicity and planets

Definition of metallicity

Calculations

Metallicity distribution function

A THOUGHT EXPERIMENT: ARINY AMOS EXPERIMENT

A thought experiment on the formation of a star from hydrogen, human skin and helium

ABSTRACT

INTRODUCTION

RAW MATERIAL

APPARATUS

RAW MATERIALS DESCRIPTION

Raw material 1. Helium description

Raw material 2. Hydrogen

Description of hydrogen

Raw material 3. Human skin

Human skin description

PROCEDURE

RESULTS

DISCUSSION

CALCULATIONS

SCHRÖDINGER EQUATION

PERTURBATION THEORY

COSMOLOGICAL PERTURBATION THEORY

UNIVERSAL GRAVITATION EQUATIONS

FERMI PROBLEM

DRAKE EQUATION

FERMI PARADOX

CONCLUSIONS

RECOMMENDATIONS

EXTERNAL LINKS

NOTES

REFERENCES

ABSTRACT.
A study of the scientific Big Bang theory in the formation of stars, and a presentation of the role of
astronomers and physicists. The Big Bang theory is the prevailing cosmological model for the universe from
the earliest known periods through its subsequent large-scale evolution. The model describes how the
universe expanded from a very high density and high temperature state, and offers a comprehensive
explanation for a broad range of phenomena, including the abundance of light elements, the cosmic
microwave background (CMB), large-scale structure and Hubble's law. As the known laws of physics are
extrapolated to the highest density regime, the result is a singularity which is typically associated with the
Big Bang. Detailed measurements of the expansion rate of the universe place this moment at
approximately 13.8 billion years ago, which is thus considered the age of the universe. After the initial
expansion, the universe cooled sufficiently to allow the formation of subatomic particles, and later simple
atoms. Giant clouds of these primordial elements later coalesced through gravity in halos of dark matter,
eventually forming the stars and galaxies visible today. The book introduces the astronomer, the different
types of astronomy, celestial objects, planets, and the various categories and formation of stars, together
with an experimental abstract for the formation of stars based on Schrödinger's paradox thought
experiment, the superposition principle, quantum superposition, the Drake equation and the scientific Big
Bang theory. It closes with conclusions, recommendations, a problem statement, a description of the raw
materials and forces in the experiment, a wave-physics description of quantum mechanics, the apparatus
for the experiment, wave superposition, quantum superposition, external links, notes and references.

INTRODUCTION

An astronomer is a scientist in the field of astronomy who concentrates their studies on a specific
question or field outside the scope of Earth. Astronomers study celestial objects (such as stars, galaxies,
planets, moons, asteroids, comets and nebulae) and processes (such as supernova explosions, gamma-ray
bursts, and the cosmic microwave background radiation), the physics, chemistry, and evolution of such
objects and processes, and more generally all phenomena that originate outside the atmosphere of Earth,
whether in observational astronomy, in analyzing the data, or in theoretical astronomy. Examples of topics
or fields astronomers work on include planetary science, solar astronomy, the origin or evolution of stars,
and the formation of galaxies. A related but distinct subject, physical cosmology, is concerned with
studying the Universe as a whole. The astronomical system of units, formally called the IAU (1976) System
of Astronomical Constants, is a system of measurement developed for use in astronomy. It was adopted by
the International Astronomical Union (IAU) in 1976, and has been significantly updated in 1994 and 2009
(see astronomical constant).
The system was developed because of the difficulties in measuring and expressing astronomical data
in International System of Units (SI units). In particular, there is a huge quantity of very precise data
relating to the positions of objects within the solar system which cannot conveniently be expressed or
processed in SI units. Through a number of modifications, the astronomical system of units now explicitly
recognizes the consequences of general relativity, which is a necessary addition to the International
System of Units in order to accurately treat astronomical data.
The astronomical system of units is a tridimensional system, in that it defines units
of length, mass and time. The associated astronomical constants also fix the different frames of
reference that are needed to report observations.[2] The system is a conventional system, in that neither
the unit of length nor the unit of mass are true physical constants, and there are at least three different
measures of time.
The astronomical unit of time is the Day, defined as 86400 seconds. 365.25 days make up one Julian
year.[1] The symbol D is used in astronomy to refer to this unit. A day is a unit of time. In common usage, it
is either an interval equal to 24 hours[1] or daytime, the consecutive period of time during which the Sun is
above the horizon. The period of time during which the Earth completes one rotation with respect to the
Sun is called a solar day.[2][3] Several definitions of this universal human concept are used according to
context, need and convenience. In 1960, the second was redefined in terms of the orbital motion of the
Earth, and was designated the SI base unit of time. The unit of measurement "day", redefined in 1960 as
86 400 SI seconds and symbolized d, is not an SI unit, but is accepted for use with SI. A civil day is
usually 86 400 seconds, plus or minus a possible leap second in Coordinated Universal Time (UTC), and
occasionally plus or minus an hour in those locations that change from or to daylight saving time. The
word day may also refer to a day of the week or to a calendar date, as in answer to the question, "On
which day?" The life patterns of humans and many other species are related to Earth's solar day and the
day-night cycle (see circadian rhythms).
In recent decades the average length of a solar day on Earth has been about 86 400.002
seconds[4] (24.000 000 6 hours) and there are about 365.242 2 solar days in one mean tropical year.
Because celestial orbits are not perfectly circular, and thus objects travel at different speeds at various
positions in their orbit, a solar day is not the same length of time throughout the orbital year. A day,
understood as the span of time it takes for the Earth to make one entire rotation[5] with respect to the
celestial background or a distant star (assumed to be fixed), is called a stellar day. This period of rotation
is about 4 minutes less than 24 hours (23 hours 56 minutes and 4.1 seconds) and there are about
366.242 2 stellar days in one mean tropical year (one stellar day more than the number of solar days).
Mainly due to tidal effects, the Earth's rotational period is not constant, resulting in further minor variations
for both solar days and stellar "days". Other planets and moons have stellar and solar days of different
lengths to Earth's.
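
As a quick numerical check of the figures quoted above, the short sketch below (plain Python, not part of the original text) recovers the length of the stellar day from the counts of solar and stellar days in a tropical year.

    # One extra stellar day fits into the tropical year, so the stellar day is
    # shorter than the mean solar day by the ratio of the two counts quoted above.
    SOLAR_DAY = 86400.0                          # mean solar day, seconds
    stellar_day = SOLAR_DAY * 365.2422 / 366.2422

    h = int(stellar_day // 3600)
    m = int(stellar_day % 3600 // 60)
    s = stellar_day - 3600 * h - 60 * m
    print(f"Stellar day = {stellar_day:.1f} s = {h} h {m} min {s:.1f} s")
    # Prints about 86164.1 s, i.e. 23 h 56 min 4.1 s, matching the text.
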
Besides the day of 24 hours (86 400 seconds), the word day is used for several different spans of time
based on the rotation of the Earth around its axis. An important one is the solar day, defined as the time it
takes for the Sun to return to its culmination point (its highest point in the sky). Because the Earth orbits
the Sun elliptically as the Earth spins on an inclined axis, this period can be up to 7.9 seconds more than
(or less than) 24 hours. On average over the year this day is equivalent to 24 hours (86 400 seconds).
A day, in the sense of daytime that is distinguished from night-time, is commonly defined as the period
during which sunlight directly reaches the ground, assuming that there are no local obstacles. The length
of daytime averages slightly more than half of the 24-hour day. Two effects make daytime on average
longer than nights. The Sun is not a point, but has an apparent size of about 32 minutes of arc.
Additionally, the atmosphere refracts sunlight in such a way that some of it reaches the ground even
when the Sun is below the horizon by about 34 minutes of arc. So the first light reaches the ground when
the centre of the Sun is still below the horizon by about 50 minutes of arc. The difference in time depends
on the angle at which the Sun rises and sets (itself a function of latitude), but can amount to around seven
minutes.
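
The 50 arc-minute figure, and the roughly seven minutes of extra daytime, follow from simple arithmetic; the sketch below reproduces both, assuming for the timing estimate a sunrise perpendicular to the horizon (an equator-like case, which is an assumption not stated in the text).

    # Depression of the Sun's centre at first light: half the apparent diameter
    # plus the horizontal refraction, both quoted above.
    SUN_DIAMETER_ARCMIN = 32.0
    REFRACTION_ARCMIN = 34.0
    depression = SUN_DIAMETER_ARCMIN / 2 + REFRACTION_ARCMIN
    print(f"Sun centre below horizon at first light: {depression:.0f} arc minutes")  # 50

    # Rough timing: the sky turns 15 degrees = 900 arc minutes per hour.
    extra_minutes = 2 * (depression / 900.0) * 60.0   # sunrise plus sunset
    print(f"Extra daytime: about {extra_minutes:.1f} minutes")  # roughly 6.7 minutes
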
Ancient custom has a new day start at either the rising or setting of the Sun on the local horizon (Italian
reckoning, for example, being 24 hours from sunset, oldstyle).[6] The exact moment of, and the interval
between, two sunrises or sunsets depends on the geographical position (longitude as well as latitude),
and the time of year (as indicated by ancient hemispherical sundials).
A more constant day can be defined by the Sun passing through the local meridian, which happens at
local noon (upper culmination) or midnight (lower culmination). The exact moment is dependent on the
geographical longitude, and to a lesser extent on the time of the year. The length of such a day is nearly
constant (24 hours ± 30 seconds). This is the time as indicated by modern sundials.
A further improvement defines a fictitious mean Sun that moves with constant speed along the celestial
equator; the speed is the same as the average speed of the real Sun, but this removes the variation over
a year as the Earth moves along its orbit around the Sun (due to both its velocity and its axial tilt).
The Earth's day has increased in length over time. This phenomenon is due to tides raised by
the Moon which slow Earth's rotation. Because of the way the second is defined, the mean length of a
day is now about 86 400.002 seconds, and is increasing by about 1.7 milliseconds per century (an
average over the last 2 700 years). (See tidal acceleration for details.) The length of a day circa 620
million years ago has been estimated from rhythmites (alternating layers in sandstone) as having been
about 21.9 hours. The length of day for the Earth before the moon was created is still unknown.
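Purely as an illustration, if the quoted mean slowdown of about 1.7 milliseconds per century were extrapolated linearly over geological time (a simplifying assumption, since the tidal rate has varied), the result sits close to the rhythmite estimate; a minimal Python sketch:

    # Linear extrapolation of the 1.7 ms/century lengthening back 620 million years.
    RATE = 1.7e-3                    # seconds of lengthening per century
    CENTURIES = 620e6 / 100.0
    day_then = (86400.0 - RATE * CENTURIES) / 3600.0
    print(f"Estimated day length 620 Myr ago: {day_then:.1f} h")
    # Gives about 21.1 h, in the same ballpark as the ~21.9 h inferred from rhythmites.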

Astronomical unit of mass


Solar mass
The astronomical unit of mass is the solar mass.[1] The symbol M☉ is often used to refer to this unit. The
solar mass (M☉), about 1.98892 × 10^30 kg, is a standard way to express mass in astronomy, used to
describe the masses of other stars and galaxies. It is equal to the mass of the Sun, about 333000 times the
mass of the Earth or 1,048 times the mass of Jupiter.
In practice, the masses of celestial bodies appear in the dynamics of the solar system only through the
products GM, where G is the constant of gravitation. In the past, GM of the Sun could be determined
experimentally with only limited accuracy. Its present accepted value is[3]
GM☉ = 1.327 124 420 99 × 10^20 m^3 s^-2.
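
Since only the product GM enters the dynamics, it can be checked directly against Kepler's third law; the sketch below (plain Python, using the GM value above and the astronomical unit defined later in this chapter) recovers the length of the year.

    # Kepler's third law check: the orbital period implied by GM of the Sun
    # and a semi-major axis of one astronomical unit.
    import math

    GM_SUN = 1.32712442099e20       # m^3 s^-2, value quoted above
    AU = 1.495978707e11             # m, astronomical unit (defined below)

    period_s = 2 * math.pi * math.sqrt(AU**3 / GM_SUN)
    print(f"Orbital period = {period_s / 86400:.2f} days")   # about 365.26 days
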

Jupiter mass
Jupiter mass (MJ or MJup) is the unit of mass equal to the total mass of the planet Jupiter, about 1.898 × 10^27 kg.
Jupiter mass is used to describe masses of the gas giants, such as the outer planets and extrasolar
planets. It is also used in describing brown dwarfs and Neptune-mass planets.

Earth mass
Earth mass (M⊕) is the unit of mass equal to that of the Earth. 1 M⊕ = 5.9742 × 10^24 kg. Earth mass is
often used to describe masses of rocky terrestrial planets. It is also used to describe Neptune-mass
planets. One Earth mass is 0.00315 times a Jupiter mass.

Equivalent planetary masses

One solar mass is equivalent to:

    Solar masses      1
    Jupiter masses    1048
    Earth masses      332950
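
The equivalences in the table translate directly into conversion code; the sketch below is a minimal Python example using the approximate factors quoted in this section (the 0.08-solar-mass example object is illustrative only).

    # Convert a mass given in solar masses into Jupiter masses, Earth masses and kg,
    # using the approximate equivalences from the table above.
    SOLAR_MASS_KG = 1.98892e30
    JUPITER_MASSES_PER_SUN = 1048.0
    EARTH_MASSES_PER_SUN = 332950.0

    def solar_mass_to_other_units(m_sun):
        return (m_sun * JUPITER_MASSES_PER_SUN,
                m_sun * EARTH_MASSES_PER_SUN,
                m_sun * SOLAR_MASS_KG)

    # Example: a 0.08 solar-mass object, roughly the hydrogen-burning limit.
    m_jup, m_earth, kg = solar_mass_to_other_units(0.08)
    print(f"0.08 solar masses = {m_jup:.0f} Jupiter masses "
          f"= {m_earth:.0f} Earth masses = {kg:.2e} kg")
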

Astronomical unit of length

Astronomical unit
The astronomical unit of length is now defined as exactly 149,597,870,700 meters.[4] It is approximately
equal to the mean Earth–Sun distance. It was formerly defined as that length for which the Gaussian
gravitational constant (k) takes the value 0.01720209895 when the units of measurement are the
astronomical units of length, mass and time. The dimensions of k^2 are those of the constant of
gravitation (G), i.e., L^3 M^-1 T^-2. The term unit distance is also used for the length A while, in general
usage, it is usually referred to simply as the astronomical unit, symbol au or ua.
An equivalent formulation of the old definition of the astronomical unit is the radius of an unperturbed
circular Newtonian orbit about the Sun of a particle having infinitesimal mass, moving with a mean motion
of 0.01720209895 radians per day.[5] The speed of light in the IAU system is the defined value c0 = 299792458 m/s of
the SI units. In terms of this speed, the old definition of the astronomical unit of length had the accepted
value:[3] 1 ua = c0 × τA = 1.495 978 707 00 × 10^11 ± 3 m, where τA is the transit time of light across the
astronomical unit. The astronomical unit of length was determined by the condition that the measured
data in the ephemeris match observations, and that in turn decides the transit time τA. An astronomical
constant is a physical constant used in astronomy. Formal sets of constants, along with
recommended values, have been defined by the International Astronomical Union (IAU) several times: in
1964[1] and in 1976 (with an update in 1994). In 2009 the IAU adopted a new current set, and recognizing
that new observations and techniques continuously provide better values for these constants, they
decided[4] to not fix these values, but have the Working Group on Numerical Standards continuously
maintain a set of Current Best Estimates.[5] The set of constants is widely reproduced in publications such
as the Astronomical Almanac of the United States Naval Observatory and HM Nautical Almanac Office.
Besides the IAU list of units and constants, also the International Earth Rotation and Reference Systems
Service defines constants relevant to the orientation and rotation of the Earth, in its technical notes. [6]
The IAU system of constants defines a system of astronomical units for length, mass and time (in fact,
several such systems), and also includes constants such as the speed of light and the constant of
gravitation which allow transformations between astronomical units and SI units. Slightly different values
for the constants are obtained depending on the frame of reference used. Values quoted in barycentric
dynamical time (TDB) or equivalent time scales such as the Teph of the Jet Propulsion
Laboratory ephemerides represent the mean values that would be measured by an observer on the
Earth's surface (strictly, on the surface of the geoid) over a long period of time. The IAU also recommends
values in SI units, which are the values which would be measured (in proper length and proper time) by
an observer at the barycentre of the Solar System.
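
From the two defined constants above, the light transit time across one astronomical unit, τA, follows directly; a minimal Python sketch:

    # Light travel time across one astronomical unit.
    AU_M = 149_597_870_700        # metres, exact by definition
    C0 = 299_792_458              # m/s, exact by definition

    tau_A = AU_M / C0
    print(f"tau_A = {tau_A:.3f} s (about {tau_A / 60:.2f} minutes)")
    # Roughly 499.005 s: sunlight takes a little over 8.3 minutes to reach Earth.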

Other units for astronomical distances

Astronomical range                 Typical units

Distances to satellites            kilometres
Distances to near-Earth objects    lunar distance
Planetary distances                astronomical units, gigametres
Distances to nearby stars          parsecs, light-years
Distances at the galactic scale    kiloparsecs
Distances to nearby galaxies       megaparsecs
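
The units in the table are related by fixed factors; the sketch below converts a distance given in parsecs to light-years and astronomical units using standard approximate values (these factors are not quoted in this book).

    # Approximate conversion factors between common astronomical distance units.
    AU_M = 1.495978707e11
    LIGHT_YEAR_M = 9.4607e15
    PARSEC_M = 3.0857e16

    def parsecs_to(d_pc):
        """Return the same distance in (light-years, astronomical units)."""
        metres = d_pc * PARSEC_M
        return metres / LIGHT_YEAR_M, metres / AU_M

    ly, au = parsecs_to(1.3)      # roughly the distance to Proxima Centauri
    print(f"1.3 pc = {ly:.2f} light-years = {au:.0f} au")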

The distances to distant galaxies are typically not quoted in distance units at all, but rather in terms
of redshift. The reasons for this are that converting redshift to distance requires knowledge of the Hubble
constant which was not accurately measured until the early 21st century, and that at cosmological
distances, the curvature of space-time allows one to come up with multiple definitions for distance. For
example, the distance as defined by the amount of time it takes for a light beam to travel to an observer is
different from the distance as defined by the apparent size of an object.
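
At small redshifts the conversion alluded to here reduces to Hubble's law, d ≈ cz/H0; the sketch below uses an assumed Hubble constant of 70 km/s/Mpc (an illustrative value, not one given in this book) and is only meaningful for z much less than 1.

    # Low-redshift distance estimate from Hubble's law, d ~ c z / H0.
    C_KM_S = 299_792.458          # speed of light in km/s
    H0 = 70.0                     # assumed Hubble constant, km/s per Mpc

    def approx_distance_mpc(z):
        return C_KM_S * z / H0

    for z in (0.01, 0.05):
        print(f"z = {z}: d ~ {approx_distance_mpc(z):.0f} Mpc")
    # z = 0.01 gives about 43 Mpc; z = 0.05 about 214 Mpc.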

Astronomers usually fit into two types:

Observational astronomers make direct observations of planets, stars and galaxies, and analyse the data.

Theoretical astronomers create and investigate models of things that cannot be observed. Because it
takes millions to billions of years for a system of stars or a galaxy to complete a life cycle, astronomers
have to observe snapshots of different systems at unique points in their evolution to determine how they
form, evolve and die. They use these data to create models or simulations to theorize how different
celestial bodies work.

There are further subcategories inside these two main branches of astronomy such as planetary
astronomy, galactic astronomy or cosmology.

The Astronomer by Johannes Vermeer

Amateur astronomers

While there is a relatively low number of professional astronomers, the field is popular among amateurs.
Most cities have amateur astronomy clubs that meet on a regular basis and often host star parties. The
Astronomical Society of the Pacific is the largest general astronomical society in the world, comprising
both professional and amateur astronomers as well as educators from 70 different nations.[4] Like any
hobby, most people who think of themselves as amateur astronomers may devote a few hours a month
to stargazing and reading the latest developments in research. However, amateurs span the range from
so-called "armchair astronomers" to the very ambitious, who own science-grade telescopes and
instruments with which they are able to make their own discoveries and assist professional astronomers
in research. An amateur (French amateur "lover of", from Old French and ultimately from
Latin amatorem nom. amator, "lover") is generally considered a person attached to a particular pursuit,
study, or science in a non-professional or unpaid manner.

Astronomy

Astronomy, a natural science, is the study of celestial objects (such
as stars, galaxies, planets, moons, asteroids, comets and nebulae) and processes (such as
supernova explosions, gamma-ray bursts, and cosmic microwave background radiation), the
physics, chemistry, and evolution of such objects and processes, and more generally all
phenomena that originate outside the atmosphere of Earth. A related but distinct subject, physical
cosmology, is concerned with studying the Universe as a whole.

Astronomy is one of the oldest sciences. The early civilizations in recorded history, such as
the Babylonians, Greeks, Indians, Egyptians, Nubians, Iranians, Chinese, and Maya performed
methodical observations of the night sky. Historically, astronomy has included disciplines as diverse
as astrometry, celestial navigation, observational astronomy and the making of calendars, but
professional astronomy is nowadays often considered to be synonymous with astrophysics.

During the 20th century, the field of professional astronomy split into observational and theoretical
branches. Observational astronomy is focused on acquiring data from observations of astronomical
objects, which is then analyzed using basic principles of physics. Theoretical astronomy is oriented
toward the development of computer or analytical models to describe astronomical objects and
phenomena. The two fields complement each other, with theoretical astronomy seeking to explain the
observational results and observations being used to confirm theoretical results.

Astronomy is one of the few sciences where amateurs can still play an active role, especially in the
discovery and observation of transient phenomena. Amateur astronomers have made and contributed to
many important astronomical discoveries.

LITERATURE REVIEW.

Astronomy (from the Greek astron, "star", and -nomia from nomos, "law" or "culture") means "law of
the stars" (or "culture of the stars" depending on the translation). Astronomy should not be confused with
astrology, the belief system which claims that human affairs are correlated with the positions of celestial
objects. Although the two fields share a common origin, they are now entirely distinct.

19th-century Sydney Observatory, Australia (1873)

Use of the terms "astronomy" and "astrophysics"

Generally, either the term "astronomy" or "astrophysics" may be used to refer to this subject. Based on
strict dictionary definitions, "astronomy" refers to "the study of objects and matter outside the Earth's
atmosphere and of their physical and chemical properties", while "astrophysics" refers to
"the behavior, physical properties, and dynamic processes of celestial objects and phenomena". In some
cases, as in the introduction of the introductory textbook The Physical Universe by Frank Shu,
"astronomy" may be used to describe the qualitative study of the subject, whereas "astrophysics" is used
to describe the physics-oriented version of the subject. However, since most modern astronomical
research deals with subjects related to physics, modern astronomy could actually be called astrophysics.
Few fields, such as astrometry, are purely astronomy rather than also astrophysics. Various departments
in which scientists carry out research on this subject may use "astronomy" and "astrophysics," partly
depending on whether the department is historically affiliated with a physics department, and many
professional astronomers have physics rather than astronomy degrees. One of the leading scientific
journals in the field is the European journal named Astronomy and Astrophysics. The leading American
journals are The Astrophysical Journal and The Astronomical Journal.

Astronomy is the oldest of the natural sciences, dating back to antiquity, with its origins in the religious,
mythological, cosmological, calendrical, and astrological beliefs and practices of prehistory: vestiges of these are
still found in astrology, a discipline long interwoven with public and governmental astronomy, and not completely
disentangled from it until a few centuries ago in the Western World (see astrology and astronomy). In some cultures,
astronomical data was used for astrological prognostication.

Ancient astronomers were able to differentiate between stars and planets, as stars remain relatively fixed over the
centuries while planets will move an appreciable amount during a comparatively short time.

Early cultures identified celestial objects with gods and spirits. They related these objects (and their movements) to
phenomena such as rain, drought, seasons, and tides. It is generally believed that the first astronomers were priests,
and that they understood celestial objects and events to be manifestations of the divine, hence early astronomy's
connection to what is now called astrology. Ancient structures with possibly astronomical alignments (such as
Stonehenge) probably fulfilled astronomical, religious, and social functions.

Calendars of the world have often been set by observations of the Sun and Moon (marking the day, month and year),
and were important to agricultural societies, in which the harvest depended on planting at the correct time of year,
and for which the nearly full moon was the only lighting for night-time travel into city markets.

The common modern calendar is based on the Roman calendar. Although originally a lunar calendar, it broke the
traditional link of the month to the phases of the moon and divided the year into twelve almost-equal months that
mostly alternated between thirty and thirty-one days. Julius Caesar instigated calendar reform in 46 BCE and
introduced what is now called the Julian calendar, based upon the 365 1/4 day year length originally proposed by the
4th century BCE Greek astronomer Callippus.

Prehistoric Europe

Archaeoastronomy

The Nebra sky disk, Germany, c. 1600 BC

Calendrical functions of the Berlin Gold Hat c. 1000 BC

Since 1990 our understanding of prehistoric Europeans has been radically changed by discoveries of ancient
astronomical artifacts throughout Europe. The artifacts demonstrate that Neolithic and Bronze Age Europeans had a
sophisticated knowledge of mathematics and astronomy.

Among the discoveries are:

Bone sticks from locations like Africa and Europe from possibly as long ago as 35,000 BCE are marked in
ways that tracked the moon's phases.
The Warren Field calendar in the Dee River valley of Scotland's Aberdeenshire. First excavated in 2004 but
only in 2013 revealed as a find of huge significance, it is to date the world's oldest known calendar, created
around 8000 BC and predating all other calendars by some 5,000 years. The calendar takes the form of an
early Mesolithic monument containing a series of 12 pits which appear to help the observer track lunar
months by mimicking the phases of the moon. It also aligns to sunrise at the winter solstice, thus
coordinating the solar year with the lunar cycles. The monument had been maintained and periodically
reshaped, perhaps up to hundreds of times, in response to shifting solar/lunar cycles, over the course of
6,000 years, until the calendar fell out of use around 4,000 years ago.
Goseck circle is located in Germany and belongs to the linear pottery culture. First discovered in 1991, its
significance was only clear after results from archaeological digs became available in 2004. The site is one
of hundreds of similar circular enclosures built in a region encompassing Austria, Germany, and the Czech
Republic during a 200-year period starting shortly after 5000 BC.[8]
The Nebra sky disc is a Bronze Age bronze disc that was buried in Germany, not far from the Goseck
circle, around 1600 BC. It measures about 30 cm diameter with a mass of 2.2 kg and displays a blue-green
patina (from oxidization) inlaid with gold symbols. Found by archeological thieves in 1999 and recovered
in Switzerland in 2002, it was soon recognized as a spectacular discovery, among the most important of the
20th century.[9][10] Investigations revealed that the object had been in use around 400 years before burial
(2000 BC), but that its use had been forgotten by the time of burial. The inlaid gold depicted the full moon,
a crescent moon about 4 or 5 days old, and the Pleiades star cluster in a specific arrangement forming the
earliest known depiction of celestial phenomena. Twelve lunar months pass in 354 days, requiring a
calendar to insert a leap month every two or three years in order to keep synchronized with the solar year's
seasons (making it lunisolar). The earliest known descriptions of this coordination were recorded by the
Babylonians in 6th or 7th centuries BC, over one thousand years later. Those descriptions verified ancient
knowledge of the Nebra sky disc's celestial depiction as the precise arrangement needed to judge when to
insert the intercalary month into a lunisolar calendar, making it an astronomical clock for regulating such a
calendar a thousand or more years before any other known method.
The Kokino site, discovered in 2001, sits atop an extinct volcanic cone at an elevation of 1,013 metres
(3,323 ft), occupying about 0.5 hectares overlooking the surrounding countryside in the former Yugoslav
Republic of Macedonia. A Bronze Age astronomical observatory was constructed there around 1900 BC
and continuously served the nearby community that lived there until about 700 BC. The central space was
used to observe the rising of the sun and full moon. Three markings locate sunrise at the summer and
winter solstices and at the two equinoxes. Four more give the minimum and maximum declinations of the
full moon: in summer, and in winter. Two measure the lengths of lunar months. Together, they reconcile
solar and lunar cycles in marking the 235 lunations that occur during 19 solar years, regulating a lunar
calendar. On a platform separate from the central space, at lower elevation, four stone seats (thrones) were
made in north-south alignment, together with a trench marker cut in the eastern wall. This marker allows
the rising sun's light to fall on only the second throne, at midsummer (about July 31). It was used for ritual
ceremony linking the ruler to the local sun god, and also marked the end of the growing season and time for
harvest.
Golden hats of Germany, France and Switzerland dating from 1400-800 BC are associated with the Bronze
Age Urnfield culture. The Golden hats are decorated with a spiral motif of the Sun and the Moon. They
were probably a kind of calendar used to calibrate between the lunar and solar calendars.[13][14] Modern
scholarship has demonstrated that the ornamentation of the gold leaf cones of the Schifferstadt type, to
which the Berlin Gold Hat example belongs, represent systematic sequences in terms of number and types
of ornaments per band. A detailed study of the Berlin example, which is the only fully preserved one,
showed that the symbols probably represent a lunisolar calendar. The object would have permitted the
determination of dates or periods in both lunar and solar calendars.[15]

Ancient times

Mesopotamia

Mesopotamian astronomy

Further information: Babylonian astrology and Babylonian calendar

Babylonian tablet recording Halley's comet in 164 BC.

The origins of Western astronomy can be found in Mesopotamia, the "land between the rivers" Tigris and Euphrates,
where the ancient kingdoms of Sumer, Assyria, and Babylonia were located. A form of writing known as cuneiform
emerged among the Sumerians around 3500–3000 BC. Our knowledge of Sumerian astronomy is indirect, via the
earliest Babylonian star catalogues dating from about 1200 BC. The fact that many star names appear in Sumerian
suggests a continuity reaching into the Early Bronze Age. Astral theology, which gave planetary gods an important
role in Mesopotamian mythology and religion, began with the Sumerians. They also used a sexagesimal (base 60)
place-value number system, which simplified the task of recording very large and very small numbers. The modern
practice of dividing a circle into 360 degrees, of 60 minutes each, began with the Sumerians. For more information,
see the articles on Babylonian numerals and mathematics.

Classical sources frequently use the term Chaldeans for the astronomers of Mesopotamia, who were, in reality,
priest-scribes specializing in astrology and other forms of divination.

The first evidence of recognition that astronomical phenomena are periodic and of the application of mathematics to
their prediction is Babylonian. Tablets dating back to the Old Babylonian period document the application of
mathematics to the variation in the length of daylight over a solar year. Centuries of Babylonian observations of
celestial phenomena are recorded in the series of cuneiform tablets known as the Enūma Anu Enlil. The oldest
significant astronomical text that we possess is Tablet 63 of the Enūma Anu Enlil, the Venus tablet of Ammi-
saduqa, which lists the first and last visible risings of Venus over a period of about 21 years and is the earliest
evidence that the phenomena of a planet were recognized as periodic. The MUL.APIN contains catalogues of stars
and constellations as well as schemes for predicting heliacal risings and the settings of the planets, lengths of
daylight measured by a water clock, gnomon, shadows, and intercalations. The Babylonian GU text arranges stars in
'strings' that lie along declination circles and thus measure right-ascensions or time-intervals, and also employs the
stars of the zenith, which are also separated by given right-ascensional differences.

A significant increase in the quality and frequency of Babylonian observations appeared during the reign of
Nabonassar (747–733 BC). The systematic records of ominous phenomena in Babylonian astronomical diaries that
began at this time allowed for the discovery of a repeating 18-year cycle of lunar eclipses, for example. The Greek
astronomer Ptolemy later used Nabonassar's reign to fix the beginning of an era, since he felt that the earliest usable
observations began at this time.

The last stages in the development of Babylonian astronomy took place during the time of the Seleucid Empire
(323–60 BC). In the 3rd century BC, astronomers began to use "goal-year texts" to predict the motions of the
planets. These texts compiled records of past observations to find repeating occurrences of ominous phenomena for
each planet. About the same time, or shortly afterwards, astronomers created mathematical models that allowed
them to predict these phenomena directly, without consulting past records. A notable Babylonian astronomer from
this time was Seleucus of Seleucia, who was a supporter of the heliocentric model.

Babylonian astronomy was the basis for much of what was done in Greek and Hellenistic astronomy, in classical
Indian astronomy, in Sassanian Iran, in Byzantium, in Syria, in Islamic astronomy, in Central Asia, and in Western
Europe.

India

Historical Jantar Mantar observatory in Jaipur, India.


Indian astronomy
Further information: Jyotisha

Astronomy in the Indian subcontinent dates back to the period of Indus Valley Civilization during 3rd millennium
BCE, when it was used to create calendars. As the Indus Valley civilization did not leave behind written documents,
the oldest extant Indian astronomical text is the Vedanga Jyotisha, dating from the Vedic period. Vedanga Jyotisha
describes rules for tracking the motions of the Sun and the Moon for the purposes of ritual. During the 6th century,
astronomy was influenced by the Greek and Byzantine astronomical traditions.

Aryabhata (476–550), in his magnum opus Aryabhatiya (499), propounded a computational system based on a
planetary model in which the Earth was taken to be spinning on its axis and the periods of the planets were given
with respect to the Sun. He accurately calculated many astronomical constants, such as the periods of the planets,
times of the solar and lunar eclipses, and the instantaneous motion of the Moon. Early followers of Aryabhata's
model included Varahamihira, Brahmagupta, and Bhaskara II.

Astronomy was advanced during the Shunga Empire and many star catalogues were produced during this time. The
Shunga period is known as the "Golden age of astronomy in India". It saw the development of
calculations for the motions and places of various planets, their rising and setting, conjunctions, and the calculation
of eclipses.

Indian astronomers by the 6th century believed that comets were celestial bodies that re-appeared periodically. This
was the view expressed in the 6th century by the astronomers Varahamihira and Bhadrabahu, and the 10th-century
astronomer Bhattotpala listed the names and estimated periods of certain comets, but it is unfortunately not known
how these figures were calculated or how accurate they were.

Bhāskara II (1114–1185) was the head of the astronomical observatory at Ujjain, continuing the mathematical
tradition of Brahmagupta. He wrote the Siddhantasiromani which consists of two parts: Goladhyaya (sphere) and
Grahaganita (mathematics of the planets). He also calculated the time taken for the Earth to orbit the sun to 9
decimal places. The Buddhist University of Nalanda at the time offered formal courses in astronomical studies.

Other important astronomers from India include Madhava of Sangamagrama, Nilakantha Somayaji and Jyeshtadeva,
who were members of the Kerala school of astronomy and mathematics from the 14th century to the 16th century.
Nilakantha Somayaji, in his Aryabhatiyabhasya, a commentary on Aryabhata's Aryabhatiya, developed his own
computational system for a partially heliocentric planetary model, in which Mercury, Venus, Mars, Jupiter and
Saturn orbit the Sun, which in turn orbits the Earth, similar to the Tychonic system later proposed by Tycho Brahe in
the late 16th century. Nilakantha's system, however, was mathematically more efficient than the Tychonic system,
due to correctly taking into account the equation of the centre and latitudinal motion of Mercury and Venus. Most
astronomers of the Kerala school of astronomy and mathematics who followed him accepted his planetary
model.[24][25]

Greece and Hellenistic world

Greek astronomy

The Antikythera Mechanism was an analog computer from 150–100 BC designed to calculate the positions of
astronomical objects.

The Ancient Greeks developed astronomy, which they treated as a branch of mathematics, to a highly sophisticated
level. The first geometrical, three-dimensional models to explain the apparent motion of the planets were developed
in the 4th century BC by Eudoxus of Cnidus and Callippus of Cyzicus. Their models were based on nested
homocentric spheres centered upon the Earth. Their younger contemporary Heraclides Ponticus proposed that the
Earth rotates around its axis.

A different approach to celestial phenomena was taken by natural philosophers such as Plato and Aristotle. They
were less concerned with developing mathematical predictive models than with developing an explanation of the
reasons for the motions of the Cosmos. In his Timaeus, Plato described the universe as a spherical body divided into
circles carrying the planets and governed according to harmonic intervals by a world soul. Aristotle, drawing on the
mathematical model of Eudoxus, proposed that the universe was made of a complex system of concentric spheres,
whose circular motions combined to carry the planets around the earth. This basic cosmological model prevailed, in
various forms, until the 16th century.

In the 3rd century BC Aristarchus of Samos was the first to suggest a heliocentric system, although only
fragmentary descriptions of his idea survive. Eratosthenes, using the angles of shadows created at widely separated
regions, estimated the circumference of the Earth with great accuracy.

Greek geometrical astronomy developed away from the model of concentric spheres to employ more complex
models in which an eccentric circle would carry around a smaller circle, called an epicycle which in turn carried
around a planet. The first such model is attributed to Apollonius of Perga and further developments in it were carried
out in the 2nd century BC by Hipparchus of Nicea. Hipparchus made a number of other contributions, including the
first measurement of precession and the compilation of the first star catalog in which he proposed our modern
system of apparent magnitudes.

The Antikythera mechanism, an ancient Greek astronomical observational device for calculating the movements of
the Sun and the Moon, possibly the planets, dates from about 150–100 BC, and was the first ancestor of an
astronomical computer. It was discovered in an ancient shipwreck off the Greek island of Antikythera, between
Kythera and Crete. The device became famous for its use of a differential gear, previously believed to have been
invented in the 16th century, and the miniaturization and complexity of its parts, comparable to a clock made in the
18th century. The original mechanism is displayed in the Bronze collection of the National Archaeological Museum
of Athens, accompanied by a replica.

Depending on the historian's viewpoint, the acme or corruption of physical Greek astronomy is seen with Ptolemy of
Alexandria, who wrote the classic comprehensive presentation of geocentric astronomy, the Megale Syntaxis (Great
Synthesis), better known by its Arabic title Almagest, which had a lasting effect on astronomy up to the
Renaissance. In his Planetary Hypotheses, Ptolemy ventured into the realm of cosmology, developing a physical
model of his geometric system, in a universe many times smaller than the more realistic conception of Aristarchus of
Samos four centuries earlier.

Egypt

Egyptian astronomy

Chart from Senemut's tomb, 18th dynasty

The precise orientation of the Egyptian pyramids affords a lasting demonstration of the high degree of technical skill
in watching the heavens attained in the 3rd millennium BC. It has been shown that the Pyramids were aligned towards
the pole star, which, because of the precession of the equinoxes, was at that time Thuban, a faint star in the
constellation of Draco.[31] Evaluation of the site of the temple of Amun-Re at Karnak, taking into account the change
over time of the obliquity of the ecliptic, has shown that the Great Temple was aligned on the rising of the
midwinter sun.[32] The length of the corridor down which sunlight would travel would have limited illumination at
other times of the year.

Astronomy played a considerable part in religious matters for fixing the dates of festivals and determining the hours
of the night. The titles of several temple books are preserved recording the movements and phases of the sun, moon
and stars. The rising of Sirius (Egyptian: Sopdet, Greek: Sothis) at the beginning of the inundation was a particularly
important point to fix in the yearly calendar.

Writing in the Roman era, Clement of Alexandria gives some idea of the importance of astronomical observations to
the sacred rites:

And after the Singer advances the Astrologer, with a horologium in his hand, and a palm, the symbols of
astrology. He must know by heart the Hermetic astrological books, which are four in
number. Of these, one is about the arrangement of the fixed stars that are visible; one on the positions of the sun and
moon and five planets; one on the conjunctions and phases of the sun and moon; and one concerns their risings. [33]

The Astrologer's instruments (horologium and palm) are a plumb line and sighting instrument. They
have been identified with two inscribed objects in the Berlin Museum; a short handle from which a plumb line was
hung, and a palm branch with a sight-slit in the broader end. The latter was held close to the eye, the former in the
other hand, perhaps at arm's length. The "Hermetic" books which Clement refers to are the Egyptian theological
texts, which probably have nothing to do with Hellenistic Hermetism.

From the tables of stars on the ceiling of the tombs of Rameses VI and Rameses IX it seems that for fixing the hours
of the night a man seated on the ground faced the Astrologer in such a position that the line of observation of the
pole star passed over the middle of his head. On the different days of the year each hour was determined by a fixed
star culminating or nearly culminating in it, and the position of these stars at the time is given in the tables as in the
centre, on the left eye, on the right shoulder, etc. According to the texts, in founding or rebuilding temples the north
axis was determined by the same apparatus, and we may conclude that it was the usual one for astronomical
observations. In careful hands it might give results of a high degree of accuracy.

China

Printed star map of Su Song (1020–1101) showing the south polar projection.
Chinese astronomy
See also: Book of Silk, Chinese astrology, and Timeline of Chinese astronomy

The astronomy of East Asia began in China. The solar terms were completed in the Warring States period. The knowledge of
Chinese astronomy was introduced into East Asia.

Astronomy in China has a long history. Detailed records of astronomical observations were kept from about the 6th
century BC, until the introduction of Western astronomy and the telescope in the 17th century. Chinese astronomers
were able to precisely predict eclipses.

Much of early Chinese astronomy was for the purpose of timekeeping. The Chinese used a lunisolar calendar, but
because the cycles of the Sun and the Moon are different, astronomers often prepared new calendars and made
observations for that purpose.

Astrological divination was also an important part of astronomy. Astronomers took careful note of "guest stars"
which suddenly appeared among the fixed stars. They were the first to record a supernova, in the Astrological
Annals of the Houhanshu in 185 AD. Also, the supernova that created the Crab Nebula in 1054 is an example of a
"guest star" observed by Chinese astronomers, although it was not recorded by their European contemporaries.
Ancient astronomical records of phenomena like supernovae and comets are sometimes used in modern
astronomical studies.

The world's first star catalogue was made by Gan De, a Chinese astronomer, in the 4th century BC.

Mesoamerica

"El Caracol" observatory temple at Chichen Itza, Mexico.


Maya calendar and Aztec calendar

Maya astronomical codices include detailed tables for calculating phases of the Moon, the recurrence of eclipses,
and the appearance and disappearance of Venus as morning and evening star. The Maya based their calendrics in the
carefully calculated cycles of the Pleiades, the Sun, the Moon, Venus, Jupiter, Saturn, Mars, and also they had a
precise description of the eclipses as depicted in the Dresden Codex, as well as the ecliptic or zodiac, and the Milky
Way was crucial in their Cosmology. A number of important Maya structures are believed to have been oriented
toward the extreme risings and settings of Venus. To the ancient Maya, Venus was the patron of war and many
recorded battles are believed to have been timed to the motions of this planet. Mars is also mentioned in preserved
astronomical codices and early mythology.

Although the Maya calendar was not tied to the Sun, John Teeple has proposed that the Maya calculated the solar
year to somewhat greater accuracy than the Gregorian calendar.[37] Both astronomy and an intricate numerological
scheme for the measurement of time were vitally important components of Maya religion.

Medieval Middle East

Astronomy in medieval Islam


See also: Maragheh observatory, Ulugh Beg Observatory, and Istanbul observatory of Taqi al-Din

Arabic astrolabe from 1208 AD.

The Arabic and the Persian world under Islam had become highly cultured, and many important works of
knowledge from Greek astronomy and Indian astronomy and Persian astronomy were translated into Arabic, used
and stored in libraries throughout the area. An important contribution by Islamic astronomers was their emphasis on
observational astronomy[38] This led to the emergence of the first astronomical observatories in the Muslim world by
the early 9th century.[39][40]Zij star catalogues were produced at these observatories.

In the 10th century, Abd al-Rahman al-Sufi (Azophi) carried out observations on the stars and described their
positions, magnitudes, brightness, and colour and drawings for each constellation in his Book of Fixed Stars. He also
gave the first descriptions and pictures of "A Little Cloud" now known as the Andromeda Galaxy. He mentions it as
lying before the mouth of a Big Fish, an Arabic constellation. This "cloud" was apparently commonly known to the
Isfahan astronomers, very probably before 905 AD. [41] The first recorded mention of the Large Magellanic Cloud
was also given by al-Sufi.[42][43] In 1006, Ali ibn Ridwan observed SN 1006, the brightest supernova in recorded
history, and left a detailed description of the temporary star.

In the late 10th century, a huge observatory was built near Tehran, Iran, by the astronomer Abu-Mahmud
al-Khujandi, who observed a series of meridian transits of the Sun, which allowed him to calculate the tilt of the Earth's
axis relative to the Sun. He noted that measurements by earlier (Indian, then Greek) astronomers had found higher
values for this angle, possible evidence that the axial tilt is not constant but was in fact decreasing. In 11th-century
Persia, Omar Khayyám compiled many tables and performed a reformation of the calendar that was more accurate
than the Julian and came close to the Gregorian.

Other Muslim advances in astronomy included the collection and correction of previous astronomical data, resolving
significant problems in the Ptolemaic model, the development of the universal latitude-independent astrolabe by
Arzachel, the invention of numerous other astronomical instruments, Ja'far Muhammad ibn Mūsā ibn Shākir's belief
that the heavenly bodies and celestial spheres were subject to the same physical laws as Earth,[47] the first elaborate
experiments related to astronomical phenomena, the introduction of exacting empirical observations and
experimental techniques, and the introduction of empirical testing by Ibn al-Shatir, who produced the first model of
lunar motion which matched physical observations.

Natural philosophy (particularly Aristotelian physics) was separated from astronomy by Ibn al-Haytham (Alhazen)
in the 11th century, by Ibn al-Shatir in the 14th century, and Qushji in the 15th century, leading to the development
of an astronomical physics.

Medieval Western Europe

Further information: Science in the Middle Ages

9th century diagram of the positions of the seven planets on 18 March 816.

After the significant contributions of Greek scholars to the development of astronomy, it entered a relatively static
era in Western Europe from the Roman era through the 12th century. This lack of progress has led some
astronomers to assert that nothing happened in Western European astronomy during the Middle Ages. Recent
investigations, however, have revealed a more complex picture of the study and teaching of astronomy in the period
from the 4th to the 16th centuries.

Western Europe entered the Middle Ages with great difficulties that affected the continent's intellectual production.
The advanced astronomical treatises of classical antiquity were written in Greek, and with the decline of knowledge
of that language, only simplified summaries and practical texts were available for study. The most influential writers
to pass on this ancient tradition in Latin were Macrobius, Pliny, Martianus Capella, and Calcidius. In the 6th century
Bishop Gregory of Tours noted that he had learned his astronomy from reading Martianus Capella, and went on to
employ this rudimentary astronomy to describe a method by which monks could determine the time of prayer at
night by watching the stars.

In the 7th century the English monk Bede of Jarrow published an influential text, On the Reckoning of Time,
providing churchmen with the practical astronomical knowledge needed to compute the proper date of Easter using
a procedure called the computus. This text remained an important element of the education of clergy from the 7th
century until well after the rise of the Universities in the 12th century.

The range of surviving ancient Roman writings on astronomy and the teachings of Bede and his followers began to
be studied in earnest during the revival of learning sponsored by the emperor Charlemagne. By the 9th century
rudimentary techniques for calculating the position of the planets were circulating in Western Europe; medieval
scholars recognized their flaws, but texts describing these techniques continued to be copied, reflecting an interest in
the motions of the planets and in their astrological significance.

Building on this astronomical background, in the 10th century European scholars such as Gerbert of Aurillac began
to travel to Spain and Sicily to seek out learning which they had heard existed in the Arabic-speaking world. There
they first encountered various practical astronomical techniques concerning the calendar and timekeeping, most
notably those dealing with the astrolabe. Soon scholars such as Hermann of Reichenau were writing texts in Latin on
the uses and construction of the astrolabe and others, such as Walcher of Malvern, were using the astrolabe to
observe the time of eclipses in order to test the validity of computistical tables.

By the 12th century, scholars were traveling to Spain and Sicily to seek out more advanced astronomical and
astrological texts, which they translated into Latin from Arabic and Greek to further enrich the astronomical
knowledge of Western Europe. The arrival of these new texts coincided with the rise of the universities in medieval
Europe, in which they soon found a home. Reflecting the introduction of astronomy into the universities, John of
Sacrobosco wrote a series of influential introductory astronomy textbooks: the Sphere, a Computus, a text on the
Quadrant, and another on Calculation.

In the 14th century, Nicole Oresme, later bishop of Lisieux, showed that neither the scriptural texts nor the physical
arguments advanced against the movement of the Earth were demonstrative and adduced the argument of simplicity
for the theory that the earth moves, and not the heavens. However, he concluded "everyone maintains, and I think
myself, that the heavens do move and not the earth: For God hath established the world which shall not be moved."
In the 15th century, cardinal Nicholas of Cusa suggested in some of his scientific writings that the Earth revolved
around the Sun, and that each star is itself a distant sun. He was not, however, describing a scientifically verifiable
theory of the universe.

RENAISSANCE PERIOD

Galileo Galilei (1564–1642) crafted his own telescope and discovered that our Moon had craters, that Jupiter had
moons, that the Sun had spots, and that Venus had phases like our Moon.
See also: Astronomia nova and Epitome Astronomiae Copernicanae

The renaissance came to astronomy with the work of Nicolaus Copernicus, who proposed a heliocentric system, in
which the planets revolved around the Sun and not the Earth. His De revolutionibus provided a full mathematical
discussion of his system, using the geometrical techniques that had been traditional in astronomy since before the
time of Ptolemy. His work was later defended, expanded upon and modified by Galileo Galilei and Johannes Kepler.

Galileo was considered the father of observational astronomy. He was among the first to use a telescope to observe
the sky and after constructing a 20x refractor telescope he discovered the four largest moons of Jupiter in 1610. This
was the first observation of satellites orbiting another planet. He also found that our Moon had craters and observed
(and correctly explained) sunspots. Galileo noted that Venus exhibited a full set of phases resembling lunar phases.
Galileo argued that these observations supported the Copernican system and were, to some extent, incompatible with
the favored model of the Earth at the center of the universe. He may have even observed the planet Neptune in 1612
and 1613, over 200 years before it was discovered, but it is unclear if he was aware of what he was looking at.

Uniting physics and astronomy

Plate with figures illustrating articles on astronomy, from the 1728 Cyclopaedia

Although the motions of celestial bodies had been qualitatively explained in physical terms since Aristotle
introduced celestial movers in his Metaphysics and a fifth element in his On the Heavens, Johannes Kepler was the
first to attempt to derive mathematical predictions of celestial motions from assumed physical causes. Combining
his physical insights with the unprecedentedly accurate naked-eye observations made by Tycho Brahe, Kepler
discovered the three laws of planetary motion that now carry his name.

Isaac Newton developed further ties between physics and astronomy through his law of universal gravitation.
Realising that the same force that attracted objects to the surface of the Earth held the moon in orbit around the
Earth, Newton was able to explain in one theoretical framework all known gravitational phenomena. In his
Philosophiae Naturalis Principia Mathematica, he derived Kepler's laws from first principles. Newton's theoretical
developments laid many of the foundations of modern physics.
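
Newton's result can be made concrete with Kepler's third law in its Newtonian form, T^2 = 4 pi^2 a^3 / (G M), for a small body on a nearly circular orbit. The sketch below evaluates it with standard physical constants; the constants and the test orbits are stock values, not figures taken from this text.

```python
import math

# Kepler's third law as it follows from Newtonian gravity for a body of
# negligible mass orbiting a mass M: T^2 = 4 * pi^2 * a^3 / (G * M).
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30       # solar mass, kg

def orbital_period_years(a_metres: float, central_mass_kg: float = M_SUN) -> float:
    """Return the orbital period in years for a circular orbit of radius a."""
    period_seconds = 2.0 * math.pi * math.sqrt(a_metres**3 / (G * central_mass_kg))
    return period_seconds / (365.25 * 24 * 3600)

if __name__ == "__main__":
    au = 1.496e11  # one astronomical unit in metres
    print(f"Earth:   {orbital_period_years(au):.3f} yr")        # ~1.00
    print(f"Jupiter: {orbital_period_years(5.20 * au):.2f} yr")  # ~11.9
```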

Completing the solar system

Outside of England, Newton's theory took some time to become established. Descartes' theory of vortices held sway
in France, and Huygens, Leibniz and Cassini accepted only parts of Newton's system, preferring their own
philosophies. It wasn't until Voltaire published a popular account in 1738 that the tide changed.[72] In 1748, the
French Academy of Sciences offered a reward for solving the perturbations of Jupiter and Saturn, a problem
eventually solved by Euler and Lagrange. Laplace completed the theory of the planets towards the end of the
century.

Edmund Halley succeeded Flamsteed as Astronomer Royal in England and succeeded in predicting the return in
1758 of the comet that bears his name. Sir William Herschel found the first new planet, Uranus, to be observed in
modern times in 1781. The gap between the planets Mars and Jupiter disclosed by the Titius–Bode law was filled by the
discovery of the asteroids Ceres and Pallas in 1801 and 1802, with many more following.

At first, astronomical thought in America was based on Aristotelian philosophy,[73] but interest in the new astronomy
began to appear in Almanacs as early as 1659.[74]

Modern astronomy

Mars surface map by Giovanni Schiaparelli.


Main article: Astronomy
Observational astronomy

In the 19th century it was discovered that, when the light from the Sun is dispersed into a spectrum, a multitude of
spectral lines appear (regions where there is less light or none at all). Experiments with hot gases showed that the same lines
could be observed in the spectra of gases, with specific lines corresponding to unique elements. It was proved that the
chemical elements found in the Sun (chiefly hydrogen and helium) were also found on Earth. During the 20th
century spectroscopy (the study of these lines) advanced, especially with the advent of quantum physics, which
was necessary to understand the observations.

Although in previous centuries noted astronomers were exclusively male, at the turn of the 20th century women
began to play a role in the great discoveries. In this period prior to modern computers, women at the United States
Naval Observatory (USNO), Harvard University, and other astronomy research institutions began to be hired as
human "computers," who performed the tedious calculations while scientists performed research requiring more
background knowledge. A number of discoveries in this period were originally noted by the women "computers"
and reported to their supervisors. For example, at the Harvard Observatory Henrietta Swan Leavitt discovered the
cepheid variable star period-luminosity relation which she further developed into a method of measuring distance
outside of our solar system. Annie Jump Cannon, also at Harvard, organized the stellar spectral types according to
stellar temperature. In 1847, Maria Mitchell discovered a comet using a telescope. According to Lewis D. Eigen,
Cannon alone, "in only 4 years discovered and catalogued more stars than all the men in history put together." [75]
Most of these women received little or no recognition during their lives due to their lower professional standing in
the field of astronomy. Although their discoveries and methods are taught in classrooms around the world, few
students of astronomy can attribute the works to their authors or have any idea that there were active female
astronomers at the end of the 19th century.
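
Leavitt's period-luminosity relation, mentioned above, turns a Cepheid's pulsation period into an estimate of its absolute magnitude, which together with the observed apparent magnitude gives a distance through the distance modulus m - M = 5 log10(d / 10 pc). The sketch below uses one commonly quoted calibration; the coefficients are illustrative assumptions, not values taken from this text.

```python
import math

# A sketch of the Cepheid distance method built on Leavitt's
# period-luminosity relation.  The calibration coefficients below are
# one commonly quoted set and should be treated as illustrative.

def cepheid_absolute_magnitude(period_days: float) -> float:
    """Approximate V-band absolute magnitude from the pulsation period."""
    return -2.43 * (math.log10(period_days) - 1.0) - 4.05

def distance_parsecs(apparent_mag: float, absolute_mag: float) -> float:
    """Invert the distance modulus m - M = 5 log10(d / 10 pc)."""
    return 10.0 ** ((apparent_mag - absolute_mag + 5.0) / 5.0)

if __name__ == "__main__":
    M = cepheid_absolute_magnitude(10.0)      # a 10-day Cepheid
    d = distance_parsecs(apparent_mag=14.0, absolute_mag=M)
    print(f"M ~ {M:.2f}, distance ~ {d / 1000:.1f} kpc")
```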

Cosmology and the expansion of the universe

Comparison of CMB (cosmic microwave background) results from the satellites COBE, WMAP and Planck,
documenting progress from 1989 to 2013.
Main article: Physical cosmology

Most of our current knowledge was gained during the 20th century. With the help of photography, fainter
objects were observed. Our Sun was found to be part of a galaxy made up of more than 10^10 (10 billion) stars.
The existence of other galaxies, one of the matters of the Great Debate, was settled by Edwin Hubble, who identified
the Andromeda nebula as a different galaxy, and identified many others at large distances that are receding, moving away from
our galaxy.

Physical cosmology, a discipline that has a large intersection with astronomy, made huge advances during the 20th
century, with the model of the hot big bang heavily supported by the evidence provided by astronomy and physics,
such as the redshifts of very distant galaxies and radio sources, the cosmic microwave background radiation,
Hubble's law and cosmological abundances of elements.
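
Hubble's law ties these observations together in the simplest possible way: recession velocity v = H0 d, with v approximately c z for small redshift, so a measured redshift gives an approximate distance. A minimal sketch, assuming a round value of H0 and the linear small-redshift approximation:

```python
# A minimal sketch of Hubble's law for nearby galaxies: v = H0 * d with
# v ~ c * z for small redshift.  H0 here is a round illustrative value.
C_KM_S = 299_792.458       # speed of light, km/s
H0 = 70.0                  # Hubble constant, km/s per Mpc (assumed value)

def distance_mpc_from_redshift(z: float) -> float:
    """Linear Hubble-law distance estimate, valid only for z << 1."""
    velocity_km_s = C_KM_S * z
    return velocity_km_s / H0

if __name__ == "__main__":
    for z in (0.003, 0.01, 0.03):
        print(f"z = {z}: ~{distance_mpc_from_redshift(z):.0f} Mpc")
```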

New windows into the Cosmos open

Hubble Space Telescope.

In the 19th century, scientists began discovering forms of light which were invisible to the naked eye: X-Rays,
gamma rays, radio waves, microwaves, ultraviolet radiation, and infrared radiation. This had a major impact on
astronomy, spawning the fields of infrared astronomy, radio astronomy, x-ray astronomy and finally gamma-ray
astronomy. With the advent of spectroscopy it was proven that other stars were similar to our own sun, but with a
range of temperatures, masses and sizes. The existence of our galaxy, the Milky Way, as a separate group of stars
was only proven in the 20th century, along with the existence of "external" galaxies, and soon after, the expansion of
the universe seen in the recession of most galaxies from us.

Astrophysics

Astrophysics is the branch of astronomy that employs the principles of physics and chemistry "to
ascertain the nature of the heavenly bodies, rather than their positions or motions in space." Among the
objects studied are the Sun, other stars, galaxies, extrasolar planets, the interstellar medium and the
cosmic microwave background. Their emissions are examined across all parts of the electromagnetic
spectrum, and the properties examined include luminosity, density, temperature, and chemical
composition. Because astrophysics is a very broad subject, astrophysicists typically apply many
disciplines of physics,
including mechanics, electromagnetism, statistical mechanics, thermodynamics,
quantum mechanics, relativity, nuclear and particle physics, and atomic and
molecular physics.

In practice, modern astronomical research often involves a substantial amount of work in the realms
of theoretical and observational physics. Some areas of study for astrophysicists include their attempts to
determine: the properties of dark matter, dark energy, and black holes; whether or not time travel is
possible, wormholes can form, or the multiverse exists; and the origin and ultimate fate of the universe.
Topics also studied by theoretical astrophysicists include: Solar System formation and evolution;
stellar dynamics and evolution; galaxy formation and evolution; magnetohydrodynamics;
large-scale structure of matter in the universe; origin of cosmic rays; general
relativity and physical cosmology, including string cosmology and astroparticle physics.

Astrophysics can be studied at the bachelor's, master's, and Ph.D. levels in physics or astronomy
departments at many universities.

Astrobiology.

Astrobiology is the study of the origin, evolution, distribution, and future of life in the universe:
extraterrestrial life and life on Earth. Astrobiology addresses the question of whether life exists beyond Earth, and how
humans can detect it if it does (the term exobiology is similar but more specific: it covers the search for life
beyond Earth, and the effects of extraterrestrial environments on living things).

Astrobiology makes use of physics, chemistry, astronomy, biology, molecular biology, ecology,
planetary science, geography, and geology to investigate the possibility of life on other worlds and
help
recognize biospheres that might be different from that on Earth. The origin and early evolution of life is
an inseparable part of the discipline of astrobiology. Astrobiology concerns itself with interpretation of
existing scientific data; given more detailed and reliable data from other parts of the universe, the roots
of astrobiology itself (physics, chemistry and biology) may have their theoretical bases challenged.
Although speculation is entertained to give context, astrobiology concerns itself primarily with
hypotheses that fit firmly into existing scientific theories.

Nucleic acids may not be the only biomolecules in the Universe capable of coding for life processes.

This interdisciplinary field encompasses research on the origin and evolution of planetary systems,
origins of organic compounds in space, rock-water-carbon interactions, abiogenesis on Earth, planetary
habitability, research on biosignatures for life detection, and studies on the potential for life to adapt to
challenges on Earth and in outer space.

The chemistry of life may have begun shortly after the Big Bang, 13.8 billion years ago, during a
habitable epoch when the Universe was only 10 to 17 million years old. According to the panspermia
hypothesis, microscopic life, distributed by meteoroids, asteroids and other small Solar System
bodies, may exist throughout the universe. According to research published in August 2015, very
large galaxies may be more favorable to the creation and development of habitable planets than
smaller galaxies like the Milky Way. Nonetheless, Earth is the only place in the universe known to harbor life. Estimates of
habitable zones around other stars, along with the discovery of hundreds of extrasolar planets and new
insights into the extreme habitats here on Earth, suggest that there may be many more habitable
places in the universe than considered possible until very recently.

Current studies on the planet Mars by the Curiosity and Opportunity rovers are now searching for
evidence of ancient life as well as plains related to ancient rivers or lakes that may have been
habitable. The search for evidence of habitability, taphonomy (related to fossils), and organic molecules
on the planet Mars is now a primary NASA and ESA objective.

Astrochemistry

Astrochemistry is the study of the abundance and reactions of chemical elements and molecules in the
universe, and their interaction with radiation. The discipline is an overlap of astronomy and chemistry.
The word "astrochemistry" may be applied to both the Solar System and the interstellar medium. The
study of the abundance of elements and isotope ratios in Solar System objects, such as meteorites, is
also
called cosmochemistry, while the study of interstellar atoms and molecules and their interaction with
radiation is sometimes called molecular astrophysics. The formation, atomic and chemical composition,
evolution and fate of molecular gas clouds is of special interest, because it is from these clouds that solar
systems form.

One particularly important experimental tool in astrochemistry is spectroscopy, the use of telescopes to
measure the absorption and emission of light from molecules and atoms in various environments. By
comparing astronomical observations with laboratory measurements, astrochemists can infer the
elemental abundances, chemical composition, and temperatures of stars and interstellar clouds. This is
possible because ions, atoms, and molecules have characteristic spectra: that is, the absorption and
emission of certain wavelengths (colors) of light, often not visible to the human eye. However, these
measurements have limitations, with various types of radiation (radio, infrared, visible, ultraviolet, etc.) able
to detect only certain types of species, depending on the chemical properties of the molecules. Interstellar
formaldehyde was the first organic molecule detected in the interstellar medium.
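
In practice, the comparison between astronomical and laboratory spectra amounts to matching observed line wavelengths against reference values within a measurement tolerance. The toy sketch below shows the idea; the reference wavelengths are a tiny illustrative subset, not a real line list.

```python
# A toy line-identification sketch: match observed wavelengths (in nm)
# against a small reference table within a tolerance.  The reference
# values are a tiny illustrative subset, not a complete line list.
REFERENCE_LINES_NM = {
    "H-alpha (hydrogen)": 656.28,
    "H-beta (hydrogen)": 486.13,
    "Na D (sodium)": 589.0,
    "Ca II K (calcium)": 393.37,
}

def identify_lines(observed_nm, tolerance_nm=0.5):
    """Return a list of (observed wavelength, best matching species or None)."""
    matches = []
    for wl in observed_nm:
        best = None
        for name, ref in REFERENCE_LINES_NM.items():
            if abs(wl - ref) <= tolerance_nm:
                best = name
                break
        matches.append((wl, best))
    return matches

if __name__ == "__main__":
    for wl, name in identify_lines([656.3, 486.0, 500.7]):
        print(f"{wl:.1f} nm -> {name or 'unidentified'}")
```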

Perhaps the most powerful technique for detection of individual chemical species is radio astronomy,
which has resulted in the detection of over a hundred interstellar species, including radicals and ions,
and organic (carbon-based) compounds, such as alcohols, acids, aldehydes, and ketones. One of the
most abundant interstellar molecules, and among the easiest to detect with radio waves (due to its strong
electric dipole moment), is CO (carbon monoxide). In fact, CO is such a common interstellar molecule that
it is used to map out molecular regions.[1] The radio observation of perhaps greatest human interest is the
claim of interstellar glycine, the simplest amino acid, but with considerable accompanying
controversy.[3] One of the reasons why this detection was controversial is that although radio (and some
other methods like rotational spectroscopy) are good for the identification of simple species with large
dipole moments, they are less sensitive to more complex molecules, even something relatively small like
amino acids.

Moreover, such methods are completely blind to molecules that have no dipole. For example, by far the
most common molecule in the universe is H2 (hydrogen gas), but it does not have a dipole moment, so it
is invisible to radio telescopes. Moreover, such methods cannot detect species that are not in the gas
phase. Since dense molecular clouds are very cold (10 to 50 K [-263.1 to -223.2 °C; -441.7 to -369.7 °F]), most molecules
in them (other than hydrogen) are frozen, i.e. solid. Instead, hydrogen and these other molecules are
detected using other wavelengths of light. Hydrogen is easily detected in the ultraviolet (UV) and visible
ranges from its absorption and emission of light (the hydrogen line). Moreover, most organic compounds
absorb and emit light in the infrared (IR), so, for example, the detection of methane in the atmosphere of
Mars[4] was achieved using an IR ground-based telescope, NASA's 3-meter Infrared Telescope Facility
atop Mauna Kea, Hawaii. NASA also has an airborne IR telescope called SOFIA and an IR space
telescope called Spitzer. Somewhat related to the recent detection of methane in the atmosphere of
Mars, scientists reported, in June 2012, that measuring the ratio of hydrogen and methane levels on Mars
may help determine the likelihood of life on Mars. According to the scientists, "...low H2/CH4 ratios (less
than approximately 40) indicate that life is likely present and active."[5] Other scientists have recently
reported methods of detecting hydrogen and methane in extraterrestrial atmospheres.
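
The quoted criterion is just a ratio threshold, and screening a measurement against it is a one-line comparison. The sketch below encodes it directly; the threshold of about 40 is the figure reported above, while the sample abundances are invented for illustration.

```python
# Screening an atmospheric measurement against the reported criterion
# that H2/CH4 ratios below roughly 40 suggest active biology.  The
# threshold is the figure cited above; the example inputs are made up.

def biosignature_hint(h2_abundance: float, ch4_abundance: float,
                      threshold: float = 40.0) -> bool:
    """Return True if the H2/CH4 ratio falls below the reported threshold."""
    if ch4_abundance <= 0:
        raise ValueError("methane abundance must be positive")
    return (h2_abundance / ch4_abundance) < threshold

if __name__ == "__main__":
    print(biosignature_hint(h2_abundance=300.0, ch4_abundance=10.0))   # ratio 30 -> True
    print(biosignature_hint(h2_abundance=900.0, ch4_abundance=10.0))   # ratio 90 -> False
```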

Infrared astronomy has also revealed that the interstellar medium contains a suite of complex gas-phase
carbon compounds called polycyclic aromatic hydrocarbons, often abbreviated PAHs or PACs. These
molecules, composed primarily of fused rings of carbon (either neutral or in an ionized state), are said to
be the most common class of carbon compound in the galaxy. They are also the most common class of
carbon molecule in meteorites and in cometary and asteroidal dust (cosmic dust). These compounds, as
well as the amino acids, nucleobases, and many other compounds in meteorites, carry deuterium and
isotopes of carbon, nitrogen, and oxygen that are very rare on earth, attesting to their extraterrestrial
origin. The PAHs are thought to form in hot circumstellar environments (around dying, carbon-rich red
giant stars).

Infrared astronomy has also been used to assess the composition of solid materials in the interstellar
medium, including silicates, kerogen-like carbon-rich solids, and ices. This is because unlike visible light,
which is scattered or absorbed by solid particles, the IR radiation can pass through the microscopic
interstellar particles, but in the process there are absorptions at certain wavelengths that are
characteristic of the composition of the grains.[9] As above with radio astronomy, there are certain
limitations, e.g. N2 is difficult to detect by either IR or radio astronomy.

Such IR observations have determined that in dense clouds (where there are enough particles to
attenuate the destructive UV radiation) thin ice layers coat the microscopic particles, permitting some low-
temperature chemistry to occur. Since hydrogen is by far the most abundant molecule in the universe, the
initial chemistry of these ices is determined by the chemistry of the hydrogen. If the hydrogen is atomic,
then the H atoms react with available O, C and N atoms, producing "reduced" species like H2O, CH4, and
NH3. However, if the hydrogen is molecular and thus not reactive, this permits the heavier atoms to react
or remain bonded together, producing CO, CO2, CN, etc. These mixed-molecular ices are exposed to
ultraviolet radiation and cosmic rays, which results in complex radiation-driven chemistry.[9] Lab
experiments on the photochemistry of simple interstellar ices have produced amino acids.[10] The similarity
between interstellar and cometary ices (as well as comparisons of gas-phase compounds) has been
invoked as an indicator of a connection between interstellar and cometary chemistry. This is somewhat
supported by the results of the analysis of the organics from the comet samples returned by the Stardust
mission, but the minerals also indicated a surprising contribution from high-temperature chemistry in the
solar nebula.

Astrology
Astrology is the study of the movements and relative positions of celestial objects as a means
for divining information about human affairs and terrestrial events. Astrology has been dated to at least
the 2nd millennium BCE, and has its roots in calendrical systems used to predict seasonal shifts and to
interpret celestial cycles as signs of divine communications.[5] Many cultures have attached importance to
astronomical events, and some, such as the Indians, Chinese, and Maya, developed elaborate
systems for predicting terrestrial events from celestial observations. Western astrology, one of the oldest
astrological systems still in use, can trace its roots to 19th–17th century BCE Mesopotamia, from which it
spread to Ancient Greece, Rome, the Arab world and eventually Central and Western Europe.
Contemporary Western astrology is often associated with systems of horoscopes that purport to explain
aspects of a person's personality and predict significant events in their lives based on the positions of
celestial objects; the majority of professional astrologers rely on such systems.

The astrological signs

Aries
Taurus
Gemini
Cancer
Leo
Virgo
Libra
Scorpio
Sagittarius
Capricorn
Aquarius
Pisces

Throughout most of its history astrology was considered a scholarly tradition and was common in
academic circles, often in close relation with astronomy, alchemy, meteorology, and medicine. It was
present in political circles, and is mentioned in various works of literature, from Dante Alighieri and
Geoffrey Chaucer to William Shakespeare, Lope de Vega and Calderón de la Barca.

With the onset of the scientific revolution astrology was called into question; it has been challenged
successfully both on theoretical and experimental grounds, and has been shown to have no scientific
validity or explanatory power. Astrology thus lost its academic and theoretical standing, and common
belief in it has largely declined. Astrology is now recognized to be a pseudoscience.

Astrobiochemistry.

Astrobiochemistry is the study of the origin, evolution, distribution, and future of life in the
universe, extraterrestrial life and life on Earth, using the tools of biochemistry. This interdisciplinary field
encompasses the search for habitable environments in our Solar System and habitable planets outside
our Solar System, the search for evidence of prebiotic chemistry, laboratory and field research into the
origins and early evolution of life on Earth, and studies of the potential for life to adapt to challenges on
Earth and in outer space.

As a branch of Astrobiology, astrobiochemistry also addresses the question of whether life exists beyond
Earth, and how humans can detect it if it does. It concerns itself with the interpretation of existing scientific
data, pending more detailed and reliable data from other parts of the universe.

UTILITARIANISM AS AN APPLICATION OF ASTRONOMY

Utilitarianism.
Utilitarianism is the ethical doctrine that the moral worth of an action is solely determined by its contribution to
overall utility. It is thus a form of consequentialism, meaning that the moral worth of an action is determined by its
outcome: the ends justify the means. Utility, the good to be maximized, has been defined by various thinkers
as happiness or pleasure (versus sadness or pain), though preference utilitarians like Peter Singer define it as the
satisfaction of preferences. It can be described by the phrase "the greatest good for the greatest number", though the
'greatest number' part gives rise to the problematic mere addition paradox. Utilitarianism can thus be characterized
as a quantitative and reductionistic approach to ethics. Utilitarianism can be contrasted with deontological ethics
(which focuses on the action itself rather than its consequences) and virtue ethics (which focuses on character), as
well as with other varieties of consequentialism. Adherents of these opposing views have extensively criticized the
utilitarian view, though utilitarians have been similarly critical of other schools of ethical thought. In general use the
term utilitarian often refers to a somewhat narrow economic or pragmatic viewpoint. However, philosophical
utilitarianism is much broader than this; for example, some approaches to utilitarianism consider non-human
animals in addition to people.

Astronomical objects: An astronomical object or celestial object is a naturally occurring physical entity,
association, or structure that current astronomy has demonstrated to exist in the observable universe.

In astronomy, the terms "object" and "body" are often used interchangeably. However, an astronomical body or
celestial body is a single, tightly bound contiguous entity, while an astronomical or celestial object is a complex,
less cohesively bound structure, that may consist of multiple bodies or even other objects with substructures.

Examples for astronomical objects include planetary systems, star clusters, nebulae and galaxies, while asteroids,
moons, planets, and stars are astronomical bodies. A comet may be identified as both body and object: It is a body
when referring to the frozen nucleus of ice and dust, and an object when describing the entire comet with its diffuse
coma and tail.

Galaxy and larger

The universe can be viewed as having a hierarchical structure. At the largest scales, the fundamental component
of assembly is the galaxy. Galaxies are organized into groups and clusters, often within larger super clusters, that
are strung along great filaments between nearly empty voids, forming a web that spans the observable universe.

Galaxies have a variety of morphologies, with irregular, elliptical and disk-like shapes, depending on their
formation and evolutionary histories, including interaction with other galaxies, which may lead to a merger. Disc
galaxies encompass lenticular and spiral galaxies with features such as spiral arms and a distinct halo. At the core,
most galaxies have a supermassive black hole, which may result in an active galactic nucleus. Galaxies can also
have satellites in the form of dwarf galaxies and globular clusters.

Within a galaxy

The constituents of a galaxy are formed out of gaseous matter that assembles through gravitational self-attraction in
a hierarchical manner. At this level, the resulting fundamental components are the stars, which are typically
assembled in clusters from the various condensing nebulae.[6] The great variety of stellar forms is determined
almost entirely by the mass, composition and evolutionary state of these stars. Stars may be found in multi-star
systems that orbit about each other in a hierarchical organization. A planetary system and various minor objects
such as asteroids, comets and debris can form in a hierarchical process of accretion from the protoplanetary disks
that surround newly formed stars.
The various distinctive types of stars are shown by the Hertzsprung–Russell diagram (H-R diagram), a plot of
absolute stellar luminosity versus surface temperature. Each star follows an evolutionary track across this diagram.
If this track takes the star through a region containing an intrinsic variable type, then its physical properties can
cause it to become a variable star. An example of this is the instability strip, a region of the H-R diagram that
includes Delta Scuti, RR Lyrae and Cepheid variables.[7] Depending on the initial mass of the star and the presence
or absence of a companion, a star may spend the last part of its life as a compact object: either a white dwarf,
neutron star, or black hole.

Categories by location

Lists of astronomical objects

The listing below groups the general categories of bodies and objects by their location or structure.

Solar System bodies: the Sun; the planets Mercury, Venus, Earth, Mars, Jupiter, Saturn, Uranus and Neptune, together with their moons and ring systems; the dwarf planets Pluto, Eris, Ceres, Makemake and Haumea and their moons; the giant planets (gas giants and ice giants); minor planets, including vulcanoids, Apoheles, near-Earth objects (potentially hazardous objects, Arjunas, Atens, Apollos and Amors), Mars-crossers, asteroid-belt families (Alindas, Cybeles, Eos, Floras, Hildas, Hungarias, Hygieas, Koronis, Marias, Nysas, Pallas, Phocaeas, Themis and Vesta) and the Trojans of Earth, Mars, Jupiter, Uranus and Neptune; Centaurs and damocloids; trans-Neptunian objects, including Kuiper belt objects (classical KBOs and resonant objects such as the plutinos (2:3) and twotinos (1:2)), scattered-disc objects, detached objects and sednoids; and other small Solar System bodies such as comets, planetesimals, contact binaries, meteoroids, micrometeoroids, meteors and bolides; together with the heliosphere and the Oort cloud.

Extrasolar simple bodies: exoplanets, including chthonian planets (theoretical), Earth analogs, eccentric Jupiters, hot Jupiters, hot Neptunes, interstellar and rogue planets, ocean planets (theoretical), pulsar planets, super-Earths and Trojan planets (theoretical); brown dwarfs of spectral types M, L, T and Y, and sub-brown dwarfs; and stars, described by stellar classification, stellar population (III, II, I), peculiarity, evolutionary state and variability.

Stars by luminosity and evolution: protostars, young stellar objects, pre-main-sequence stars, main-sequence stars, subdwarfs, subgiants, giants (red and blue), bright giants, supergiants (red and blue), hypergiants and compact stars.

Compact stars: black holes (stellar, intermediate-mass and supermassive; gamma-ray bursts), neutron stars (magnetars and pulsars), preon stars (hypothetical), quark stars (hypothetical), white dwarfs and black dwarfs (theoretical).

Peculiar stars: A-type peculiar and metallic stars, barium stars, blue stragglers, carbon stars, P Cygni stars, S-type stars, shell stars and Wolf–Rayet stars.

Extrinsic variables: rotating variables (Alpha2 Canum Venaticorum and ellipsoidal variables) and eclipsing binaries (Algol, Beta Lyrae and W Ursae Majoris types).

Intrinsic variables: pulsating variables (Cepheids, W Virginis, Delta Scuti, RR Lyrae, Mira, semiregular, irregular, Beta Cephei, Alpha Cygni and RV Tauri stars), eruptive variables (flare stars, T Tauri, FU Orionis, R Coronae Borealis and luminous blue variables) and cataclysmic variables (symbiotic stars, dwarf novae, novae, supernovae of Types Ia, Ib/c and II, hypernovae and gamma-ray bursts).

Stars by spectral type: O (blue), B (blue-white), A (white), F (yellow-white), G (yellow), K (orange), M (red).

Compound objects: planetary systems; star systems, from binaries and triples to higher-order systems; binary stars classified by observation (optical, visual, astrometric, spectroscopic or eclipsing) or by closeness (detached, semidetached or contact), together with X-ray binaries and bursters; and stellar groupings such as star clusters (stellar associations, open clusters, globular clusters and hypercompact clusters), constellations, asterisms and galaxies.

Extended objects: galaxies and their groups, clusters and superclusters; galaxy components such as the bulge, spiral arms, thin disk, thick disk, halo and corona; and galaxies by morphology (spiral, barred spiral, lenticular, elliptical, ring and irregular), by size (brightest cluster galaxies, giant ellipticals and dwarfs) and by type (starburst, dark, active, radio, Seyfert and quasar).

In early times, astronomy only comprised the observation and predictions of the motions of objects
visible to the naked eye. In some locations, early cultures assembled massive artifacts that possibly
had some astronomical purpose. In addition to their ceremonial uses, these observatories could be
employed to determine the seasons, an important factor in knowing when to plant crops, as well as in
understanding the length of the year.

Before tools such as the telescope were invented, early study of the stars was conducted using the
naked eye. As civilizations developed, most notably in Mesopotamia, Greece, India, China, Egypt, and
Central America, astronomical observatories were assembled, and ideas on the nature of the Universe
began to be explored. Most of early astronomy actually consisted of mapping the positions of the stars
and planets, a science now referred to as astrometry. From these observations, early ideas about the
motions of the planets were formed, and the nature of the Sun, Moon and the Earth in the Universe were
explored philosophically. The Earth was believed to be the center of the Universe with the Sun, the Moon
and the stars rotating around it. This is known as the geocentric model of the Universe, or the Ptolemaic
system, named after Ptolemy.

A particularly important early development was the beginning of mathematical and scientific astronomy,
which began among the Babylonians, who laid the foundations for the later astronomical traditions that
developed in many other civilizations. The Babylonians discovered that lunar eclipses recurred in a
repeating cycle known as a saros.

Greek equatorial sundial, Alexandria on the Oxus, present-day Afghanistan, 3rd–2nd century BCE.

Following the Babylonians, significant advances in astronomy were made in ancient Greece and the
Hellenistic world. Greek astronomy is characterized from the start by seeking a rational, physical
explanation for celestial phenomena. In the 3rd century BC, Aristarchus of Samos estimated the size
and distance of the Moon and Sun, and was the first to propose a heliocentric model of the solar
system.[18] In the 2nd century BC, Hipparchus discovered precession, calculated the size and distance of
the Moon and invented the earliest known astronomical devices such as the astrolabe. Hipparchus also
created a comprehensive catalog of 1020 stars, and most of the constellations of the northern
hemisphere derive from Greek astronomy. The Antikythera mechanism (c. 150–80 BC) was an early
analog computer designed to calculate the location of the Sun, Moon, and planets for a given date.
Technological artifacts of similar complexity did not reappear until the 14th century, when mechanical
astronomical clocks appeared in Europe.

A celestial map from the 17th century, by the Dutch cartographer Frederik de Wit.

During the Middle Ages, astronomy was mostly stagnant in medieval Europe, at least until the 13th
century. However, astronomy flourished in the Islamic world and other parts of the world. This led to the
emergence of the first astronomical observatories in the Muslim world by the early 9th century. In 964, the
Andromeda Galaxy, the largest galaxy in the Local Group, was discovered by the Persian astronomer
Azophi and first described in his Book of Fixed Stars. The SN 1006 supernova, the brightest apparent
magnitude stellar event in recorded history, was observed by the Egyptian Arabic astronomer Ali ibn
Ridwan and the Chinese astronomers in 1006. Some of the prominent Islamic (mostly Persian and Arab)
astronomers who made significant contributions to the science include Al-Battani, Thebit, Azophi,
Albumasar, Biruni, Arzachel, Al-Birjandi, and the astronomers of the Maragheh and Samarkand
observatories. Astronomers during that time introduced many Arabic names now used for individual stars.
It is also believed that the ruins at Great Zimbabwe and Timbuktu may have housed an astronomical
observatory. Europeans had previously believed that there had been no astronomical observation in pre-
colonial Middle Ages sub-Saharan Africa but modern discoveries show otherwise.[30][31][32][33]

The Roman Catholic Church gave more financial and social support to the study of astronomy for over
six centuries, from the recovery of ancient learning during the late Middle Ages into the Enlightenment,
than any other, and probably all other, institutions. Among the Church's motives was finding the date
for Easter.
Scientific revolution

During the Renaissance, Nicolaus Copernicus proposed a heliocentric model of the solar system. His
work was defended, expanded upon, and corrected by Galileo Galilei and Johannes Kepler. Galileo used
telescopes to enhance his observations.

Kepler was the first to devise a system that described correctly the details of the motion of the planets
with the Sun at the center. However, Kepler did not succeed in formulating a theory behind the laws he
wrote down. It was left to Newton's invention of celestial dynamics and his law of gravitation to finally
explain the motions of the planets. Newton also developed the reflecting telescope.

Galileo's sketches and observations of the Moon revealed that the surface was mountainous.

The English astronomer John Flamsteed catalogued over 3000 stars. Further discoveries
paralleled the improvements in the size and quality of the telescope. More extensive star
catalogues were produced by Lacaille. The astronomer William Herschel made a detailed catalog
of nebulosity and clusters, and in 1781 discovered the planet Uranus, the first new planet found.
The distance to a star was first announced in 1838 when the parallax of 61 Cygni was measured
by Friedrich Bessel.

During the 18th and 19th centuries, attention to the three-body problem by Euler, Clairaut, and D'Alembert
led to more accurate predictions about the motions of the Moon and planets. This work was further
refined
by Lagrange and Laplace, allowing the masses of the planets and moons to be estimated from
their perturbations.

An astronomical chart from an early scientific manuscript, c. 1000.

Significant advances in astronomy came about with the introduction of new technology, including
the spectroscope and photography. Fraunhofer discovered about 600 bands in the spectrum of the
Sun in 1814–15, which, in 1859, Kirchhoff ascribed to the presence of different elements. Stars were
proven to be similar to the Earth's own Sun, but with a wide range of temperatures, masses, and
sizes.

The existence of the Earth's galaxy, the Milky Way, as a separate group of stars was only proved in the
20th century, along with the existence of "external" galaxies, and soon after, the expansion of the Universe,
seen in the recession of most galaxies from us.[41] Modern astronomy has also discovered many exotic
objects such as quasars, pulsars, blazars, and radio galaxies, and has used these observations to develop physical
theories which describe some of these objects in terms of equally exotic objects such as black holes and
neutron stars. Physical cosmology made huge advances during the 20th century, with the model of the
Big Bang heavily supported by the evidence provided by astronomy and physics, such as the cosmic
microwave background radiation, Hubble's law, and cosmological abundances of elements. Space
telescopes have enabled measurements in parts of the electromagnetic spectrum normally blocked or
blurred by the atmosphere. In February 2016, it was revealed that the LIGO project had detected
evidence of gravitational waves in September 2015.

Observational astronomy
In astronomy, the main source of information about celestial bodies and other objects is visible light or
more generally electromagnetic radiation. Observational astronomy may be divided according to the
observed region of the electromagnetic spectrum. Some parts of the spectrum can be observed from the
Earth's surface, while other parts are only observable from either high altitudes or outside the Earth's
atmosphere. Specific information on these subfields is given below.

Radio astronomy

Radio astronomy studies radiation with wavelengths greater than approximately one millimeter. Radio
astronomy is different from most other forms of observational astronomy in that the observed radio
waves can be treated as waves rather than as discrete photons. Hence, it is relatively easier to measure
both the amplitude and phase of radio waves, whereas this is not as easily done at shorter wavelengths.

Although some radio waves are produced by astronomical objects in the form of thermal emission, most
of the radio emission that is observed from Earth is the result of synchrotron radiation, which is produced
when electrons orbit magnetic fields. Additionally, a number of spectral lines produced by
interstellar gas, notably the hydrogen spectral line at 21 cm, are observable at radio wavelengths.
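
As a concrete example, the 21 cm hydrogen line mentioned above corresponds to a frequency near 1420 MHz through the relation nu = c / lambda, which is why it falls squarely in the radio band. A quick numerical check:

```python
# Frequency of the 21 cm neutral-hydrogen line from nu = c / lambda.
C = 299_792_458.0          # speed of light, m/s
wavelength_m = 0.211       # the 21 cm line, in metres (approximate)
print(f"{C / wavelength_m / 1e6:.0f} MHz")   # roughly 1420 MHz
```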

A wide variety of objects are observable at radio wavelengths, including supernovae, interstellar gas,
pulsars, and active galactic nuclei.

The Very Large Array in New Mexico, an example of a radio telescope

Infrared astronomy

Infrared astronomy is founded on the detection and analysis of infrared radiation (wavelengths longer
than red light). The infrared spectrum is useful for studying objects that are too cold to radiate visible
light, such as planets, circumstellar disks or nebulae whose light is blocked by dust. Longer infrared
wavelengths can penetrate clouds of dust that block visible light, allowing the observation of young stars
in molecular clouds and the cores of galaxies. Observations from the Wide-field Infrared Survey Explorer
(WISE) have been particularly effective at unveiling numerous Galactic protostars and their host star
clusters.[45][46] With the exception of wavelengths close to visible light, infrared radiation is heavily
absorbed by the atmosphere, or masked, as the atmosphere itself produces significant infrared emission.
Consequently, infrared observatories have to be located in high, dry places or in space. Some molecules
radiate strongly in the infrared. This allows the study of the chemistry of space; more specifically it can
detect water in comets.

ALMA Observatory is one of the highest observatory sites on Earth. Atacama, Chile

Optical astronomy

Historically, optical astronomy, also called visible light astronomy, is the oldest form of astronomy.
Optical images of observations were originally drawn by hand. In the late 19th century and most of the
20th century, images were made using photographic equipment. Modern images are made using digital
detectors, particularly detectors using charge-coupled devices (CCDs), and recorded on modern media.
Although visible light itself extends from approximately 4000 Å to 7000 Å (400 nm to 700 nm), the same
equipment can be used to observe some near-ultraviolet and near-infrared radiation.

The Subaru Telescope (left) and Keck Observatory (center) on Mauna Kea, both examples of an
observatory that operates at near-infrared and visible wavelengths. The NASA Infrared
Telescope Facility (right) is an example of a telescope that operates only at near-infrared
wavelengths.

Ultraviolet astronomy

Ultraviolet astronomy refers to observations at ultraviolet wavelengths between approximately 100 Å and
3200 Å (10 to 320 nm). Light at these wavelengths is absorbed by the Earth's atmosphere, so
observations at these wavelengths must be performed from the upper atmosphere or from space.
Ultraviolet astronomy is best suited to the study of thermal radiation and spectral emission lines from hot
blue stars (OB stars) that are very bright in this wave band. This includes the blue stars in other galaxies,
which have been the targets of several ultraviolet surveys. Other objects commonly observed in
ultraviolet light include planetary nebulae, supernova remnants, and active galactic nuclei. However, as
ultraviolet light is easily absorbed by interstellar dust, an appropriate adjustment of ultraviolet
measurements is necessary.
X-ray astronomy

X-ray astronomy is the study of astronomical objects at X-ray wavelengths. Typically, X-ray radiation is
produced by synchrotron emission (the result of electrons orbiting magnetic field lines), thermal emission
from thin gases above 10^7 (10 million) kelvins, and thermal emission from thick gases above 10^7
kelvins. Since X-rays are absorbed by the Earth's atmosphere, all X-ray observations must be performed
from high-altitude balloons, rockets, or spacecraft. Notable X-ray sources include X-ray binaries, pulsars,
supernova remnants, elliptical galaxies, clusters of galaxies, and active galactic nuclei.

X-Ray jet made from a supermassive black hole found by NASA's Chandra X-ray Observatory,
made visible by light from the early Universe.

Gamma-ray astronomy

Gamma ray astronomy is the study of astronomical objects at the shortest wavelengths of the
electromagnetic spectrum. Gamma rays may be observed directly by satellites such as the Compton
Gamma Ray Observatory or by specialized telescopes called atmospheric Cherenkov telescopes. The
Cherenkov telescopes do not actually detect the gamma rays directly but instead detect the flashes of
visible light produced when gamma rays are absorbed by the Earth's atmosphere.

Most gamma-ray emitting sources are actually gamma-ray bursts, objects which only produce gamma
radiation for a few milliseconds to thousands of seconds before fading away. Only 10% of gamma-ray
sources are non-transient sources. These steady gamma-ray emitters include pulsars, neutron stars, and
black hole candidates such as active galactic nuclei.

Fields not based on the electromagnetic spectrum

In addition to electromagnetic radiation, a few other events originating from great distances may be
observed from the Earth.

In neutrino astronomy, astronomers use heavily shielded underground facilities such as SAGE, GALLEX,
and Kamioka II/III for the detection of neutrinos. The vast majority of the neutrinos streaming through the
Earth originate from the Sun, but 24 neutrinos were also detected from supernova 1987A. Cosmic rays,
which consist of very high energy particles that can decay or be absorbed when they enter the Earth's
atmosphere, result in a cascade of particles which can be detected by current
observatories.[51] Additionally, some
future neutrino detectors may also be sensitive to the particles produced when cosmic rays hit the
Earth's atmosphere.

Gravitational-wave astronomy is an emerging field of astronomy which aims to use gravitational-wave
detectors to collect observational data about compact objects. A few observatories have been
constructed, such as the Laser Interferometer Gravitational-Wave Observatory (LIGO). LIGO made its first
detection on 14 September 2015, observing gravitational waves from a binary black hole. A second
gravitational wave was detected on 26 December 2015, and additional observations should continue,
but gravitational waves are extremely difficult to detect.

Combining observations made using electromagnetic radiation, neutrinos or gravitational waves with
those made using a different means, which yields complementary information, is known as multi-
messenger astronomy.

Astrometry and celestial mechanics

One of the oldest fields in astronomy, and in all of science, is the measurement of the positions of
celestial objects. Historically, accurate knowledge of the positions of the Sun, Moon, planets and stars
has been essential in celestial navigation (the use of celestial objects to guide navigation) and in the
making of calendars. Careful measurement of the positions of the planets has led to a solid
understanding of gravitational perturbations, and an ability to determine past and future positions of the
planets with great accuracy, a field known as celestial mechanics. More recently, the tracking of near-
Earth objects allows for predictions of close encounters, and potential collisions, with the Earth.

Star cluster Pismis 24 with a nebula

The measurement of stellar parallax of nearby stars provides a fundamental baseline in the cosmic
distance ladder that is used to measure the scale of the Universe. Parallax measurements of nearby
stars provide an absolute baseline for the properties of more distant stars, as their properties can be
compared. Measurements of radial velocity and proper motion plot the movement of these systems
through the Milky Way galaxy. Astrometric results are the basis used to calculate the distribution of dark
matter in the galaxy.
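
The parallax baseline rests on a simple reciprocal relation: a star showing an annual parallax of p arcseconds lies at a distance of 1/p parsecs. A small sketch, with an illustrative parallax value chosen purely as an example:

```python
# Distance from trigonometric parallax: d [parsec] = 1 / p [arcsec].
PARSEC_IN_LIGHT_YEARS = 3.2616

def parallax_distance_pc(parallax_arcsec: float) -> float:
    """Distance in parsecs from an annual parallax angle in arcseconds."""
    if parallax_arcsec <= 0:
        raise ValueError("parallax must be positive")
    return 1.0 / parallax_arcsec

if __name__ == "__main__":
    d_pc = parallax_distance_pc(0.1)    # illustrative 0.1-arcsecond parallax
    print(f"{d_pc:.1f} pc ~ {d_pc * PARSEC_IN_LIGHT_YEARS:.1f} light-years")
```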

During the 1990s, the measurement of the stellar wobble of nearby stars was used to detect large
extrasolar planets orbiting nearby stars.

Theoretical astronomy

Nucleosynthesis

Stellar nucleosynthesis
Big Bang nucleosynthesis
Supernova nucleosynthesis
Cosmic ray spallation

Astrophysics

Nuclear fission and fusion

The r- and s-processes

Theoretical astronomers use several tools including analytical models (for example, polytropes to
approximate the behaviors of a star) and computational numerical simulations. Each has some
advantages. Analytical models of a process are generally better for giving insight into the heart of what
is going on. Numerical models reveal the existence of phenomena and effects otherwise unobserved.

Theorists in astronomy endeavor to create theoretical models and from the results predict
observational consequences of those models. The observation of a phenomenon predicted by a model
allows astronomers to select between several alternate or conflicting models.

Theorists also try to generate or modify models to take into account new data. In the case of an
inconsistency, the general tendency is to try to make minimal modifications to the model so that it
produces results that fit the data. In some cases, a large amount of inconsistent data over time may
lead to total abandonment of a model.

Topics studied by theoretical astronomers include: stellar dynamics and evolution; galaxy formation;
large-scale structure of matter in the Universe; origin of cosmic rays; general relativity and physical
cosmology, including string cosmology and astroparticle physics. Astrophysical relativity serves as a tool to
gauge the properties of large-scale structures in which gravitation plays a significant role in the
physical phenomena investigated, and as the basis for black hole (astro)physics and the study of
gravitational waves.

Some widely accepted and studied theories and models in astronomy, now included in the
Lambda-CDM model, are the Big Bang, cosmic inflation, dark matter, and fundamental theories of
physics.

A few examples of this process:

Physical process | Experimental tool | Theoretical model | Explains/predicts
Gravitation | Radio telescopes | Self-gravitating system | Emergence of a star system
Nuclear fusion | Spectroscopy | Stellar evolution | How the stars shine and how metals formed
The Big Bang | Hubble Space Telescope, COBE | Expanding universe | Age of the Universe
Quantum fluctuations | | Cosmic inflation | Flatness problem
Gravitational collapse | X-ray astronomy | General relativity | Black holes at the center of the Andromeda galaxy
CNO cycle in stars | | | The dominant source of energy for massive stars

Dark matter and dark energy are the current leading topics in astronomy, as their discovery and the
controversy surrounding them originated during the study of galaxies.

Specific subfields

Solar astronomy

At a distance of about eight light-minutes, the most frequently studied star is the Sun, a typical
main-sequence dwarf star of stellar class G2 V, about 4.6 billion years (Gyr) old. The Sun is not
considered a variable star, but it does undergo periodic changes in activity known as the sunspot
cycle. This is an 11-year fluctuation in sunspot numbers. Sunspots are regions of lower-than-
average temperatures that are associated with intense magnetic activity.

An ultraviolet image of the Sun's active photosphere as viewed by the TRACE space telescope. NASA photo.
The Sun has steadily increased in luminosity over the course of its life, increasing by 40% since it
first became a main-sequence star. The Sun has also undergone periodic changes in luminosity that can
have a significant impact on the Earth. The Maunder minimum, for example, is believed to have caused
the Little Ice Age phenomenon during the Middle Ages.

Solar observatory Lomnický štít (Slovakia), built in 1962.

The visible outer surface of the Sun is called the photosphere. Above this layer is a thin region known as
the chromosphere. This is surrounded by a transition region of rapidly increasing temperatures, and
finally by the super-heated corona.

At the center of the Sun is the core region, a volume of sufficient temperature and pressure for nuclear
fusion to occur. Above the core is the radiation zone, where the plasma conveys the energy flux by
means of radiation. Above that are the outer layers that form a convection zone where the gas material
transports energy primarily through physical displacement of the gas. It is believed that this convection
zone creates the magnetic activity that generates sunspots.

A solar wind of plasma particles constantly streams outward from the Sun until, at the outermost limit
of the Solar System, it reaches the heliopause. This solar wind interacts with the magnetosphere of
the Earth to create the Van Allen radiation belts about the Earth, as well as the aurora where the lines
of the Earth's magnetic field descend into the atmosphere.

Planetary science

Planetary science is the study of the assemblage of planets, moons, dwarf planets, comets, asteroids,
and other bodies orbiting the Sun, as well as extrasolar planets. The Solar System has been relatively
well-studied, initially through telescopes and then later by spacecraft. This has provided a good overall
understanding of the formation and evolution of this planetary system, although many new discoveries
are still being made.

The black spot at the top is a dust devil climbing a crater wall on Mars. This moving,
swirling column of Martian atmosphere (comparable to a terrestrial tornado) created the
long, dark streak. NASA image.

The Solar System is subdivided into the inner planets, the asteroid belt, and the outer
planets. The inner terrestrial planets consist of Mercury, Venus, Earth, and Mars. The outer
gas giant planets are Jupiter, Saturn, Uranus, and Neptune. Beyond Neptune lies the Kuiper
Belt, and finally the Oort Cloud, which may extend as far as a light-year.

The planets were formed in the protoplanetary disk that surrounded the early Sun. Through a process
that included gravitational attraction, collision, and accretion, the disk formed clumps of matter that, with
time, became protoplanets. The radiation pressure of the solar wind then expelled most of the unaccreted
matter, and only those planets with sufficient mass retained their gaseous atmosphere. The planets
continued to sweep up, or eject, the remaining matter during a period of intense bombardment, evidenced
by the many impact craters on the Moon. During this period, some of the protoplanets may have collided,
which is the leading hypothesis for how the Moon was formed.

Once a planet reaches sufficient mass, the materials of different densities segregate within, during
planetary differentiation. This process can form a stony or metallic core, surrounded by a mantle and
an outer surface. The core may include solid and liquid regions, and some planetary cores generate
their own magnetic field, which can protect their atmospheres from solar wind stripping.

A planet or moon's interior heat is produced from the collisions that created the body, radioactive
materials (e.g. uranium, thorium, and aluminium-26), or tidal heating. Some planets and moons accumulate enough
heat to drive geologic processes such as volcanism and tectonics. Those that accumulate or retain an
atmosphere can also undergo surface erosion from wind or water. Smaller bodies, without tidal heating,
cool more quickly; and their geological activity ceases with the exception of impact cratering.

Stellar astronomy

The study of stars and stellar evolution is fundamental to our understanding of the Universe. The
astrophysics of stars has been determined through observation and theoretical understanding; and from
computer simulations of the interior. Star formation occurs in dense regions of dust and gas, known as
giant molecular clouds. When destabilized, cloud fragments can collapse under the influence of gravity, to form a
protostar. A sufficiently dense, and hot, core region will trigger nuclear fusion, thus creating a main-
sequence star.

Almost all elements heavier than hydrogen and helium were created inside the cores of stars.

The Ant planetary nebula. Gas ejected from the dying central star shows symmetrical patterns,
unlike the chaotic patterns of ordinary explosions.

The characteristics of the resulting star depend primarily upon its starting mass. The more massive the
star, the greater its luminosity, and the more rapidly it expends the hydrogen fuel in its core. Over time,
this hydrogen fuel is completely converted into helium, and the star begins to evolve. The fusion of helium
requires a higher core temperature, so that the star both expands in size, and increases in core density.
The resulting red giant enjoys a brief life span, before the helium fuel is in turn consumed. Very massive
stars can also undergo a series of progressively shorter evolutionary phases, as they fuse increasingly heavier
elements.

The final fate of the star depends on its mass, with stars of mass greater than about eight times the
Sun becoming core-collapse supernovae, while smaller stars eject their outer layers to form a planetary
nebula, leaving behind a white dwarf. The remnant of a supernova is a dense neutron star, or, if the stellar mass was at
least three times that of the Sun, a black hole. Close binary stars can follow more complex evolutionary
paths, such as mass transfer onto a white dwarf companion that can potentially cause a supernova.
Planetary nebulae and supernovae are necessary for the distribution of metals to the interstellar
medium; without them, all new stars (and their planetary systems) would be formed from hydrogen and
helium alone.
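The mass thresholds quoted above can be summarized in a toy classifier. The cut-off values below are simply the approximate figures given in the text (about eight solar masses for a core-collapse supernova, and a remnant of roughly three solar masses for a black hole); they are illustrative, not precise astrophysical limits:

    # Toy classification of a star's end state, using only the approximate
    # thresholds quoted in the text above (illustrative, not exact).
    def stellar_fate(initial_mass_msun, remnant_mass_msun):
        if initial_mass_msun < 8:
            return "planetary nebula + white dwarf"
        if remnant_mass_msun >= 3:
            return "core-collapse supernova -> black hole"
        return "core-collapse supernova -> neutron star"

    print(stellar_fate(1.0, 0.6))    # Sun-like star
    print(stellar_fate(20.0, 1.4))   # massive star with a modest remnant
    print(stellar_fate(40.0, 5.0))   # very massive star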

Galactic astronomy

Our solar system orbits within the Milky Way, a barred spiral galaxy that is a prominent member of the
Local Group of galaxies. It is a rotating mass of gas, dust, stars and other objects, held together by
mutual gravitational attraction. As the Earth is located within the dusty outer arms, there are large
portions of the Milky Way that are obscured from view.

Observed structure of the Milky Way's spiral arms

In the center of the Milky Way is the core, a bar-shaped bulge with what is believed to be a supermassive
black hole at the center. This is surrounded by four primary arms that spiral from the core. This is a region
of active star formation that contains many younger, population I stars. The disk is surrounded by a
spheroid halo of older, population II stars, as well as relatively dense concentrations of stars known as
globular clusters.

Between the stars lies the interstellar medium, a region of sparse matter. In the densest regions,
molecular clouds of molecular hydrogen and other elements create star-forming regions. These begin as
a compact pre-stellar core or dark nebulae, which concentrate and collapse (in volumes determined by
the Jeans length) to form compact protostars.
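For orientation, the Jeans length mentioned above can be estimated from a cloud's temperature and density using the standard textbook expression λ_J = sqrt(15 k T / (4 π G μ m_H ρ)). The temperature and density in the sketch below are assumed values typical of a cold molecular cloud core, not figures taken from the text:

    # Rough Jeans length for an assumed cold, dense molecular cloud core.
    import math

    k_B = 1.380649e-23     # Boltzmann constant, J/K
    G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
    m_H = 1.6735e-27       # mass of a hydrogen atom, kg
    mu = 2.0               # mean molecular weight for molecular hydrogen (assumed)
    T = 10.0               # temperature in kelvin (assumed)
    n = 1e10               # number density in m^-3 (assumed)
    rho = mu * m_H * n     # mass density, kg/m^3

    jeans_length_m = math.sqrt(15 * k_B * T / (4 * math.pi * G * mu * m_H * rho))
    print(jeans_length_m / 9.4607e15, "light-years")   # roughly half a light-year here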

As the more massive stars appear, they transform the cloud into an H II region (ionized atomic
hydrogen) of glowing gas and plasma. The stellar wind and supernova explosions from these stars
eventually cause the cloud to disperse, often leaving behind one or more young open clusters of
stars. These clusters gradually disperse, and the stars join the population of the Milky Way.

Kinematic studies of matter in the Milky Way and other galaxies have demonstrated that there is more
mass than can be accounted for by visible matter. A dark matter halo appears to dominate the mass,
although the nature of this dark matter remains undetermined.

Extragalactic astronomy

The study of objects outside our galaxy is a branch of astronomy concerned with the formation and
evolution of galaxies; their morphology (description) and classification; and the observation of active
galaxies, and at a larger scale, the groups and clusters of galaxies. Finally, the latter is important for the
understanding of the large-scale structure of the cosmos.

Most galaxies are organized into distinct shapes that allow for classification schemes. They are
commonly divided into spiral, elliptical, and irregular galaxies.

As the name suggests, an elliptical galaxy has the cross-sectional shape of an ellipse. The stars move
along random orbits with no preferred direction. These galaxies contain little or no interstellar dust; few
star-forming regions; and generally older stars. Elliptical galaxies are more commonly found at the core
of galactic clusters, and may have been formed through mergers of large galaxies.

This image shows several blue, loop-shaped objects that are multiple images of the same galaxy,
duplicated by
the gravitational lens effect of the cluster of yellow galaxies near the middle of the photograph. The lens
is produced by the cluster's gravitational field that bends light to magnify and distort the image of a more
distant object.

A spiral galaxy is organized into a flat, rotating disk, usually with a prominent bulge or bar at the
center, and trailing bright arms that spiral outward. The arms are dusty regions of star formation
where massive young stars produce a blue tint. Spiral galaxies are typically surrounded by a halo of
older stars. Both the Milky Way and our nearest major galactic neighbor, the Andromeda Galaxy, are spiral
galaxies.

Irregular galaxies are chaotic in appearance, and are neither spiral nor elliptical. About a quarter of all
galaxies are irregular, and the peculiar shapes of such galaxies may be the result of gravitational
interaction.

An active galaxy is a formation that emits a significant amount of its energy from a source other than its
stars, dust and gas. It is powered by a compact region at the core, thought to be a super-massive black
hole that is emitting radiation from in-falling material.

A radio galaxy is an active galaxy that is very luminous in the radio portion of the spectrum, and is
emitting immense plumes or lobes of gas. Active galaxies that emit shorter-wavelength, high-energy
radiation include Seyfert galaxies, quasars, and blazars. Quasars are believed to be the most
consistently luminous objects in the known universe.

The large-scale structure of the cosmos is represented by groups and clusters of galaxies. This
structure is organized into a hierarchy of groupings, with the largest being the superclusters. The
collective matter is formed into filaments and walls, leaving large voids between.

Cosmology

Cosmology (from the Greek κόσμος (kosmos), "world, universe", and λόγος (logos), "word, study", or
literally "logic") could be considered the study of the Universe as a whole.

Observations of the large-scale structure of the Universe, a branch known as physical cosmology,
have provided a deep understanding of the formation and evolution of the cosmos. Fundamental to
modern cosmology is the well-accepted theory of the big bang, wherein our Universe began at a
single point in time, and thereafter expanded over the course of 13.8 billion years to its present
state. The concept of the big bang can be traced back to the discovery of the microwave
background radiation in 1965.

Hubble Extreme Deep Field

In the course of this expansion, the Universe underwent several evolutionary stages. In the very early
moments, it is theorized that the Universe experienced a very rapid cosmic inflation, which homogenized
the starting conditions. Thereafter, nucleosynthesis produced the elemental abundance of the early
Universe.

When the first neutral atoms formed from a sea of primordial ions, space became transparent to
radiation, releasing the energy viewed today as the microwave background radiation. The
expanding Universe then underwent a Dark Age due to the lack of stellar energy sources.

A hierarchical structure of matter began to form from minute variations in the mass density of space.
Matter accumulated in the densest regions, forming clouds of gas and the earliest stars, the Population III
stars. These massive stars triggered the reionization process and are believed to have created many of
the heavy elements in the early Universe, which, through nuclear decay, create lighter elements, allowing
the cycle of nucleosynthesis to continue longer.

Gravitational aggregations clustered into filaments, leaving voids in the gaps. Gradually, organizations
of gas and dust merged to form the first primitive galaxies. Over time, these pulled in more matter, and
were often organized into groups and clusters of galaxies, then into larger-scale superclusters.

Fundamental to the structure of the Universe is the existence of dark matter and dark energy. These are
now thought to be its dominant components, forming 96% of the mass of the Universe. For this reason,
much effort is expended in trying to understand the physics of these components.

Interdisciplinary studies

Astronomy and astrophysics have developed significant interdisciplinary links with other major
scientific fields.

Archaeoastronomy is the study of ancient or traditional astronomies in their cultural context, utilizing archaeological
and anthropological evidence. Astrobiology is the study of the advent and
evolution of biological systems in the Universe, with particular emphasis on the possibility of non-
terrestrial life. Astrostatistics is the application of statistics to astrophysics and the analysis of vast
amounts of observational astrophysical data.

The study of chemicals found in space, including their formation, interaction and destruction, is
called astrochemistry. These substances are usually found in molecular clouds, although they may also
appear in low temperature stars, brown dwarfs and planets. Cosmochemistry is the study of the
chemicals found within the Solar System, including the origins of the elements and variations in the
isotope ratios. Both of these fields represent an overlap of the disciplines of astronomy and chemistry. As
"forensic astronomy", finally, methods from astronomy have been used to solve problems of law and
history.

Amateur astronomy

Astronomy is one of the sciences to which amateurs can contribute the most. Collectively, amateur
astronomers observe a variety of celestial objects and phenomena sometimes with equipment that they
build themselves. Common targets of amateur astronomers include the Moon, planets, stars, comets,
meteor showers, and a variety of deep-sky objects such as star clusters, galaxies, and nebulae.
Astronomy clubs are located throughout the world and many have programs to help their members set up
and complete observational programs including those to observe all the objects in the Messier (110
objects) or Herschel 400 catalogues of points of interest in the night sky. One branch of amateur
astronomy,
amateur astrophotography, involves the taking of photos of the night sky. Many amateurs like to
specialize in the observation of particular objects, types of objects, or types of events which interest
them.

Amateur astronomers can build their own equipment, and can hold star parties and gatherings,
such as Stellafane.

Most amateurs work at visible wavelengths, but a small minority experiment with wavelengths outside
the visible spectrum. This includes the use of infrared filters on conventional telescopes, and also the
use of radio telescopes. The pioneer of amateur radio astronomy was Karl Jansky, who started
observing the sky at radio wavelengths in the 1930s. A number of amateur astronomers use either
homemade telescopes or radio telescopes that were originally built for astronomy research but
which are now available to amateurs (e.g. the One-Mile Telescope).

Amateur astronomers continue to make scientific contributions to the field of astronomy and it is one of
the few scientific disciplines where amateurs can still make significant contributions. Amateurs can make
occultation measurements that are used to refine the orbits of minor planets. They can also discover
comets, and perform regular observations of variable stars. Improvements in digital technology have
allowed amateurs to make impressive advances in the field of astrophotography.

THE BIG BANG THEORY.
The Big Bang theory is the prevailing cosmological model for the universe[1] from the earliest known periods through its
subsequent large-scale evolution. The model describes how the universe expanded from a very high density and high
temperature state, and offers a comprehensive explanation for a broad range of phenomena, including the abundance
of light elements, the cosmic microwave background (CMB), large scale structure and Hubble's law.[7] If the known laws of
physics are extrapolated to the highest density regime, the result is a singularity which is typically associated with the Big
Bang. Detailed measurements of the expansion rate of the universe place this moment at approximately 13.8 billion years
ago, which is thus considered the age of the universe. After the initial expansion, the universe cooled sufficiently to allow the
formation of subatomic particles, and later simple atoms. Giant clouds of these primordial elements later coalesced
through gravity in halos of dark matter, eventually forming the stars and galaxies visible today.
Since Georges Lemaître first noted in 1927 that an expanding universe could be traced back in time to an originating single
point, scientists have built on his idea of cosmic expansion. While the scientific community was once divided between
supporters of two different expanding universe theories, the Big Bang and the Steady State theory, empirical
evidence provides strong support for the former.[9] In 1929, from analysis of galactic redshifts, Edwin Hubble concluded that
galaxies are drifting apart; this is important observational evidence consistent with the hypothesis of an expanding universe.
In 1964, the cosmic microwave background radiation was discovered, which was crucial evidence in favor of the Big Bang
model,[10] since that theory predicted the existence of background radiation throughout the universe before it was discovered.
More recently, measurements of the redshifts of supernovae indicate that the expansion of the universe is accelerating, an
observation attributed to dark energy's existence.[11] The known physical laws of nature can be used to calculate the
characteristics of the universe in detail back in time to an initial state of extreme density and temperature.
American astronomer Edwin Hubble observed that the distances to faraway galaxies were strongly correlated with
their redshifts. This was interpreted to mean that all distant galaxies and clusters are receding away from our vantage point
with an apparent velocity proportional to their distance: that is, the farther they are, the faster they move away from us,
regardless of direction.[13] Assuming the Copernican principle (that the Earth is not the center of the universe), the only
remaining interpretation is that all observable regions of the universe are receding from all others. Since we know that the
distance between galaxies increases today, it must mean that in the past galaxies were closer together. The continuous
expansion of the universe implies that the universe was denser and hotter in the past.
Large particle accelerators can replicate the conditions that prevailed after the early moments of the universe, resulting in
confirmation and refinement of the details of the Big Bang model. However, these accelerators can only probe so far
into high energy regimes. Consequently, the state of the universe in the earliest instants of the Big Bang expansion is still
poorly understood and an area of open investigation and speculation.
The first subatomic particles to be formed included protons, neutrons, and electrons. Though simple atomic nuclei
formed within the first three minutes after the Big Bang, thousands of years passed before the first electrically neutral atoms
formed. The majority of atoms produced by the Big Bang were hydrogen, along with helium and traces of lithium. Giant
clouds of these primordial elements later coalesced through gravity to form stars and galaxies, and the heavier elements
were synthesized either within stars or during supernovae.
The Big Bang theory offers a comprehensive explanation for a broad range of observed phenomena, including the
abundance of light elements, the CMB, large scale structure, and Hubble's Law.[7] The framework for the Big Bang model
relies on Albert Einstein's theory of general relativity and on simplifying assumptions such as homogeneity and isotropy of
space. The governing equations were formulated by Alexander Friedmann, and similar solutions were worked on by Willem
de Sitter. Since then, astrophysicists have incorporated observational and theoretical additions into the Big Bang model, and
its parametrization as the Lambda-CDM model serves as the framework for current investigations of theoretical cosmology.
The Lambda-CDM model is the current "standard model" of Big Bang cosmology; the consensus is that it is the simplest model
that can account for the various measurements and observations relevant to cosmology.

Timeline

Singularity
Extrapolation of the expansion of the universe backwards in time using general relativity yields an infinite density and
temperature at a finite time in the past.[14] This singularity indicates that general relativity is not an adequate description of
the laws of physics in this regime. It is debated how closely models based on general relativity alone can be used to
extrapolate toward the singularity; certainly no closer than the end of the Planck epoch.
This primordial singularity is itself sometimes called "the Big Bang", [15] but the term can also refer to a more generic early hot,
dense phase[16][notes 1] of the universe. In either case, "the Big Bang" as an event is also colloquially referred to as the "birth" of
our universe since it represents the point in history where the universe can be verified to have entered into a regime where
the laws of physics as we understand them (specifically general relativity and the standard model of particle physics) work.
Based on measurements of the expansion using Type Ia supernovae and measurements of temperature fluctuations in
the cosmic microwave background, the time that has passed since that event, otherwise known as the "age of the
universe", is 13.799 ± 0.021 billion years.[17] The agreement of independent measurements of this age supports the ΛCDM
model that describes in detail the characteristics of the universe.

Inflation and baryogenesis


The earliest phases of the Big Bang are subject to much speculation. In the most common models the universe was
filled homogeneously and isotropically with a very high energy density and huge temperatures and pressures and was very
rapidly expanding and cooling. Approximately 10⁻³⁷ seconds into the expansion, a phase transition caused a cosmic inflation,
during which the universe grew exponentially and density fluctuations that occurred because of the uncertainty
principle were amplified into the seeds that would later form the large-scale structure of the universe.[18] After inflation
stopped, reheating occurred until the universe obtained the temperatures required for the production of a quark-gluon
plasma as well as all other elementary particles.[19] Temperatures were so high that the random motions of particles were
at relativistic speeds, and particle-antiparticle pairs of all kinds were being continuously created and destroyed in
collisions.[5] At some point, an unknown reaction called baryogenesis violated the conservation of baryon number, leading to
a very small excess of quarks and leptons over antiquarks and antileptons, of the order of one part in 30 million. This
resulted in the predominance of matter over antimatter in the present universe.[20]
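The phrase "grew exponentially" can be made concrete with a one-line estimate. The number of e-folds used below (about 60) is a commonly assumed figure in inflationary models, not a value given in this text:

    # Illustrative linear expansion factor for an assumed ~60 e-folds of inflation.
    import math
    N_efolds = 60               # assumed value, not from the text
    print(math.exp(N_efolds))   # ~1.1e26, i.e. growth by roughly 26 orders of magnitude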

Cooling

Panoramic view of the entire near-infrared sky reveals the distribution of galaxies beyond the Milky Way. Galaxies are
color-coded by redshift.

Source: IPAC/Caltech, by Thomas Jarrett - "Large Scale Structure in the Local Universe: The 2MASS Galaxy Catalog",

Jarrett, T.H. 2004, PASA, 21, 396.

The universe continued to decrease in density and fall in temperature, hence the typical energy of each particle was
decreasing. Symmetry breaking phase transitions put the fundamental forces of physics and the parameters of elementary
particles into their present form. After about 10⁻¹¹ seconds, the picture becomes less speculative, since particle energies drop
to values that can be attained in particle accelerators. At about 10⁻⁶ seconds, quarks and gluons combined to
form baryons such as protons and neutrons. The small excess of quarks over antiquarks led to a small excess of baryons
over antibaryons. The temperature was now no longer high enough to create new proton-antiproton pairs (similarly for
neutron-antineutron pairs), so a mass annihilation immediately followed, leaving just one in 10¹⁰ of the original protons and
neutrons, and none of their antiparticles. A similar process happened at about 1 second for electrons and positrons. After
these annihilations, the remaining protons, neutrons and electrons were no longer moving relativistically and the energy
density of the universe was dominated by photons (with a minor contribution from neutrinos).
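The freeze-out of pair creation described above can be estimated with the rough criterion that typical thermal photon energies fall below the particle rest energy, k_B T ≈ m c². This is an order-of-magnitude argument added here for illustration, not a calculation from the text:

    # Rough threshold temperatures below which thermal photons can no longer
    # create particle-antiparticle pairs (order of magnitude only).
    k_B = 1.380649e-23          # J/K
    MeV = 1.602176634e-13       # joules per MeV

    for name, rest_energy_mev in [("proton", 938.3), ("electron", 0.511)]:
        T = rest_energy_mev * MeV / k_B
        print(name, "%.1e K" % T)
    # proton: ~1e13 K, electron: ~6e9 K, consistent with electron-positron
    # annihilation happening later (around 1 second) than baryon annihilation.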
A few minutes into the expansion, when the temperature was about a billion (one thousand million) kelvin and the density
was about that of air, neutrons combined with protons to form the universe's deuterium and helium nuclei in a process
called Big Bang nucleosynthesis. Most protons remained uncombined as hydrogen nuclei.
As the universe cooled, the rest mass energy density of matter came to gravitationally dominate that of the photon radiation.
After about 379,000 years, the electrons and nuclei combined into atoms (mostly hydrogen); hence the radiation decoupled
from matter and continued through space largely unimpeded. This relic radiation is known as the cosmic microwave
background radiation.[23] The chemistry of life may have begun shortly after the Big Bang, 13.8 billion years ago, during a
habitable epoch when the universe was only 10 to 17 million years old.

Structure formation

Abell 2744 galaxy cluster - Hubble Frontier Fields view.

Over a long period of time, the slightly denser regions of the nearly uniformly distributed matter gravitationally attracted
nearby matter and thus grew even denser, forming gas clouds, stars, galaxies, and the other astronomical structures
observable today. The details of this process depend on the amount and type of matter in the universe. The four possible
types of matter are known as cold dark matter, warm dark matter, hot dark matter, and baryonic matter. The best
measurements available, from Wilkinson Microwave Anisotropy Probe (WMAP), show that the data is well-fit by a Lambda-
CDM model in which dark matter is assumed to be cold (warm dark matter is ruled out by early reionization), and is
estimated to make up about 23% of the matter/energy of the universe, while baryonic matter makes up about 4.6%.[29] In an
"extended model" which includes hot dark matter in the form of neutrinos, then if the "physical baryon density" Ωbh² is
estimated at about 0.023 (this is different from the "baryon density" Ωb expressed as a fraction of the total matter/energy
density, which as noted above is about 0.046), and the corresponding cold dark matter density Ωch² is about 0.11, the
corresponding neutrino density Ωvh² is estimated to be less than 0.0062.
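The relation between the "physical" densities quoted above and the plain density fractions is Ω = (Ωh²)/h², where h is the dimensionless Hubble parameter. The sketch below assumes h ≈ 0.70 (an assumed value, not stated in this passage) and checks that the quoted numbers are mutually consistent:

    # Convert physical densities (Omega * h^2) into fractions of the critical
    # density, assuming a dimensionless Hubble parameter h ~ 0.70.
    h = 0.70                 # assumed value for illustration
    omega_b_h2 = 0.023       # physical baryon density quoted above
    omega_c_h2 = 0.11        # physical cold dark matter density quoted above

    omega_b = omega_b_h2 / h**2
    omega_c = omega_c_h2 / h**2
    print(omega_b, omega_c)  # ~0.047 and ~0.22, close to the 4.6% and 23% in the text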

Cosmic acceleration

Independent lines of evidence from Type Ia supernovae and the CMB imply that the universe today is dominated by a
mysterious form of energy known as dark energy, which apparently permeates all of space. The observations suggest 73%
of the total energy density of today's universe is in this form. When the universe was very young, it was likely infused with
dark energy, but with less space and everything closer together, gravity predominated, and it was slowly braking the
expansion. But eventually, after many billions of years of expansion, the growing abundance of dark energy caused
the expansion of the universe to slowly begin to accelerate.
Dark energy in its simplest formulation takes the form of the cosmological constant term in Einstein's field equations of
general relativity, but its composition and mechanism are unknown and, more generally, the details of its equation of state
and relationship with the Standard Model of particle physics continue to be investigated both through observation and
theoretically.[11]
All of this cosmic evolution after the inflationary epoch can be rigorously described and modeled by the ΛCDM model of
cosmology, which uses the independent frameworks of quantum mechanics and Einstein's General Relativity. There is no
well-supported model describing the action prior to 10⁻¹⁵ seconds or so. Apparently a new unified theory of quantum
gravitation is needed to break this barrier. Understanding this earliest of eras in the history of the universe is currently one of
the greatest unsolved problems in physics.

Features of the model

The Big Bang theory depends on two major assumptions: the universality of physical laws and the cosmological principle.
The cosmological principle states that on large scales the universe is homogeneous and isotropic.
These ideas were initially taken as postulates, but today there are efforts to test each of them. For example, the first
assumption has been tested by observations showing that the largest possible deviation of the fine-structure constant over much
of the age of the universe is of order 10⁻⁵. Also, general relativity has passed stringent tests on the scale of the Solar System
and binary stars.
If the large-scale universe appears isotropic as viewed from Earth, the cosmological principle can be derived from the
simpler Copernican principle, which states that there is no preferred (or special) observer or vantage point. To this end, the
cosmological principle has been confirmed to a level of 10⁻⁵ via observations of the CMB. The universe has been measured
to be homogeneous on the largest scales at the 10% level.

Expansion of space

General relativity describes spacetime by a metric, which determines the distances that separate nearby points. The points,
which can be galaxies, stars, or other objects, are themselves specified using a coordinate chart or "grid" that is laid down
over all spacetime. The cosmological principle implies that the metric should be homogeneous and isotropic on large scales,
which uniquely singles out the Friedmann-Lemaître-Robertson-Walker metric (FLRW metric). This metric contains a scale
factor, which describes how the size of the universe changes with time. This enables a convenient choice of a coordinate
system to be made, called comoving coordinates. In this coordinate system, the grid expands along with the universe, and
objects that are moving only because of the expansion of the universe, remain at fixed points on the grid. While
their coordinate distance (comoving distance) remains constant, the physical distance between two such co-moving points
expands proportionally with the scale factor of the universe.
The Big Bang is not an explosion of matter moving outward to fill an empty universe. Instead, space itself expands with time
everywhere and increases the physical distance between two comoving points. In other words, the Big Bang is not an
explosion in space, but rather an expansion of space.[5] Because the FLRW metric assumes a uniform distribution of mass
and energy, it applies to our universe only on large scales; local concentrations of matter such as our galaxy are
gravitationally bound and as such do not experience the large-scale expansion of space.
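For reference, the homogeneous and isotropic metric singled out by the cosmological principle can be written in a standard form (the explicit notation below is supplied for orientation and is not taken from the text):

    \[ ds^2 = -c^2\,dt^2 + a(t)^2\left[\frac{dr^2}{1-kr^2} + r^2\left(d\theta^2 + \sin^2\theta\,d\phi^2\right)\right] \]

Here a(t) is the scale factor described above and k = -1, 0, or +1 encodes negative, zero, or positive spatial curvature. Comoving points keep fixed coordinates (r, θ, φ), while the physical distance between them grows in proportion to a(t).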

The Friedmann equations are a set of equations in physical cosmology that govern the expansion of
space in homogeneous and isotropic models of the universe within the context of general relativity.
They were first derived by Alexander Friedmann in 1922[1] from Einstein's field equations of gravitation
for the Friedmann-Lemaître-Robertson-Walker metric and a perfect fluid with a given mass density ρ and
pressure p.
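In this notation the Friedmann equations referred to above take the standard form (quoted here for orientation; the sign and unit conventions are an editorial choice, not taken from the text):

    \[ \left(\frac{\dot a}{a}\right)^2 = \frac{8\pi G}{3}\rho - \frac{kc^2}{a^2} + \frac{\Lambda c^2}{3},
       \qquad
       \frac{\ddot a}{a} = -\frac{4\pi G}{3}\left(\rho + \frac{3p}{c^2}\right) + \frac{\Lambda c^2}{3} \]

where ȧ/a is the Hubble parameter H, ρ and p are the fluid's mass density and pressure, k is the curvature constant, and Λ is the cosmological constant.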
Horizons

An important feature of the Big Bang spacetime is the presence of particle horizons. Since the universe has a finite age, and
light travels at a finite speed, there may be events in the past whose light has not had time to reach us. This places a limit or
a past horizon on the most distant objects that can be observed. Conversely, because space is expanding, and more distant
objects are receding ever more quickly, light emitted by us today may never "catch up" to very distant objects. This defines
a future horizon, which limits the events in the future that we will be able to influence. The presence of either type of horizon
depends on the details of the FLRW model that describes our universe.
Our understanding of the universe back to very early times suggests that there is a past horizon, though in practice our view
is also limited by the opacity of the universe at early times. So our view cannot extend further backward in time, though the
horizon recedes in space. If the expansion of the universe continues to accelerate, there is a future horizon as well.
History of the Big Bang theory

Etymology

English astronomer Fred Hoyle is credited with coining the term "Big Bang" during a 1949 BBC radio broadcast. It is
popularly reported that Hoyle, who favored an alternative "steady state" cosmological model, intended this to be pejorative,
but Hoyle explicitly denied this and said it was just a striking image meant to highlight the difference between the two
models.

Development of the Big Bang theory.
Hubble eXtreme Deep Field (XDF)

XDF size compared to the size of the Moon - several thousand galaxies, each consisting of billions of stars, are in this small view.

XDF (2012) view - each light speck is a galaxy - some of these are as old as 13.2 billion years[38] - the universe is estimated to

contain 200 billion galaxies.

XDF image shows fully mature galaxies in the foreground plane - nearly mature galaxies from 5 to 9 billion years ago -

protogalaxies, blazing with young stars, beyond 9 billion years.

The Big Bang theory developed from observations of the structure of the universe and from theoretical considerations. In
1912 Vesto Slipher measured the first Doppler shift of a "spiral nebula" (spiral nebula is the obsolete term for spiral galaxies),
and soon discovered that almost all such nebulae were receding from Earth. He did not grasp the cosmological implications
of this fact, and indeed at the time it was highly controversial whether or not these nebulae were "island universes" outside
our Milky Way. Ten years later, Alexander Friedmann, a Russian cosmologist and mathematician, derived the Friedmann
equations from Albert Einstein's equations of general relativity, showing that the universe might be expanding in contrast to
the static universe model advocated by Einstein at that time.[41] In 1924 Edwin Hubble's measurement of the great distance to
the nearest spiral nebulae showed that these systems were indeed other galaxies. Independently deriving Friedmann's
equations in 1927, Georges Lemaître, a Belgian physicist and Roman Catholic priest, proposed that the inferred recession
of the nebulae was due to the expansion of the universe.
In 1931 Lemaître went further and suggested that the evident expansion of the universe, if projected back in time, meant
that the further in the past the smaller the universe was, until at some finite time in the past all the mass of the universe was
concentrated into a single point, a "primeval atom" where and when the fabric of time and space came into existence.

Starting in 1924, Hubble painstakingly developed a series of distance indicators, the forerunner of the cosmic distance
ladder, using the 100-inch (2.5 m) Hooker telescope at Mount Wilson Observatory. This allowed him to estimate distances to
galaxies whose redshifts had already been measured, mostly by Slipher. In 1929 Hubble discovered a correlation between
distance and recession velocity now known as Hubble's law. Lemaître had already shown that this was expected, given
the cosmological principle.[11]
In the 1920s and 1930s almost every major cosmologist preferred an eternal steady state universe, and several complained
that the beginning of time implied by the Big Bang imported religious concepts into physics; this objection was later repeated
by supporters of the steady state theory. This perception was enhanced by the fact that the originator of the Big Bang
theory, Monsignor Georges Lemaître, was a Roman Catholic priest. Arthur Eddington agreed with Aristotle that the universe
did not have a beginning in time, viz., that matter is eternal. A beginning in time was "repugnant" to him. Lemaître, however,
thought that
If the world has begun with a single quantum, the notions of space and time would altogether fail to have any meaning at the
beginning; they would only begin to have a sensible meaning when the original quantum had been divided into a sufficient
number of quanta. If this suggestion is correct, the beginning of the world happened a little before the beginning of space
and time.
During the 1930s other ideas were proposed as non-standard cosmologies to explain Hubble's observations, including
the Milne model, the oscillatory universe (originally suggested by Friedmann, but advocated by Albert Einstein and Richard
Tolman) and Fritz Zwicky's tired light hypothesis.
After World War II, two distinct possibilities emerged. One was Fred Hoyle's steady state model, whereby new matter would
be created as the universe seemed to expand. In this model the universe is roughly the same at any point in time. The other
was Lemaître's Big Bang theory, advocated and developed by George Gamow, who introduced big bang
nucleosynthesis (BBN) and whose associates, Ralph Alpher and Robert Herman, predicted the CMB. Ironically, it was
Hoyle who coined the phrase that came to be applied to Lemaître's theory, referring to it as "this big bang idea" during
a BBC Radio broadcast in March 1949. For a while, support was split between these two theories. Eventually, the
observational evidence, most notably from radio source counts, began to favor Big Bang over Steady State. The discovery
and confirmation of the CMB in 1964 secured the Big Bang as the best theory of the origin and evolution of the
universe.[57] Much of the current work in cosmology includes understanding how galaxies form in the context of the Big Bang,
understanding the physics of the universe at earlier and earlier times, and reconciling observations with the basic theory.
In 1968 and 1970 Roger Penrose, Stephen Hawking, and George F. R. Ellis published papers where they showed
that mathematical singularities were an inevitable initial condition of general relativistic models of the Big Bang. Then, from
the 1970s to the 1990s, cosmologists worked on characterizing the features of the Big Bang universe and resolving
outstanding problems. In 1981, Alan Guth made a breakthrough in theoretical work on resolving certain outstanding
theoretical problems in the Big Bang theory with the introduction of an epoch of rapid expansion in the early universe he
called "inflation". Meanwhile, during these decades, two questions in observational cosmology that generated much
discussion and disagreement were over the precise values of the Hubble Constant[61] and the matter-density of the universe
(before the discovery of dark energy, thought to be the key predictor for the eventual fate of the universe).
In the mid-1990s, observations of certain globular clusters appeared to indicate that they were about 15 billion years old,
which conflicted with most then-current estimates of the age of the universe (and indeed with the age measured today). This
issue was later resolved when new computer simulations, which included the effects of mass loss due to stellar winds,
indicated a much younger age for globular clusters. While there still remain some questions as to how accurately the ages
of the clusters are measured, globular clusters are of interest to cosmology as some of the oldest objects in the universe.
Significant progress in Big Bang cosmology has been made since the late 1990s as a result of advances
in telescope technology as well as the analysis of data from satellites such as COBE, the Hubble Space
Telescope and WMAP. Cosmologists now have fairly precise and accurate measurements of many of the parameters of the
Big Bang model, and have made the unexpected discovery that the expansion of the universe appears to be accelerating.

Observational evidence

Artist's depiction of the WMAP satellite gathering data to help scientists understand the Big Bang.
"[The] big bang picture is too firmly grounded in data from every area to be proved invalid in its general features."
- Lawrence Krauss

The earliest and most direct observational evidence of the validity of the theory are the expansion of the universe according
to Hubble's law (as indicated by the redshifts of galaxies), discovery and measurement of the cosmic microwave
background and the relative abundances of light elements produced by Big Bang nucleosynthesis. More recent evidence
includes observations of galaxy formation and evolution, and the distribution of large-scale cosmic structures. These are
sometimes called the "four pillars" of the Big Bang theory.
Precise modern models of the Big Bang appeal to various exotic physical phenomena that have not been observed in
terrestrial laboratory experiments or incorporated into the Standard Model of particle physics. Of these features, dark
matter is currently subjected to the most active laboratory investigations. Remaining issues include the cuspy halo
problem and the dwarf galaxy problem of cold dark matter. Dark energy is also an area of intense interest for scientists, but
it is not clear whether direct detection of dark energy will be possible. [70] Inflation and baryogenesis remain more speculative
features of current Big Bang models. Viable, quantitative explanations for such phenomena are still being sought. These are
currently unsolved problems in physics.

Hubble's law and the expansion of space



Observations of distant galaxies and quasars show that these objects are redshifted: the light emitted from them has been
shifted to longer wavelengths. This can be seen by taking a frequency spectrum of an object and matching
the spectroscopic pattern of emission lines or absorption lines corresponding to atoms of the chemical elements interacting
with the light. These redshifts are uniformly isotropic, distributed evenly among the observed objects in all directions. If the
redshift is interpreted as a Doppler shift, the recessional velocity of the object can be calculated. For some galaxies, it is
possible to estimate distances via the cosmic distance ladder. When the recessional velocities are plotted against these
distances, a linear relationship known as Hubble's law is observed:
v = H0D,
where

v is the recessional velocity of the galaxy or other distant object,


D is the comoving distance to the object, and
H0 is Hubble's constant, measured to be 70.4 (+1.3, -1.4) km/s/Mpc by the WMAP probe.[29]
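As a quick numerical illustration of v = H0D, the sketch below uses the H0 value quoted above; the 100 Mpc example distance is arbitrary, and the inverse of H0 is shown only as a characteristic expansion timescale, not as a precise age:

    # Hubble's law v = H0 * D, with H0 ~ 70.4 km/s/Mpc as quoted in the text.
    Mpc = 3.0857e22                      # metres per megaparsec
    year = 3.156e7                       # seconds per year
    H0_kms_per_Mpc = 70.4

    D_Mpc = 100.0                        # arbitrary example distance
    v_km_s = H0_kms_per_Mpc * D_Mpc      # ~7,040 km/s recession velocity

    H0_SI = H0_kms_per_Mpc * 1e3 / Mpc   # in s^-1
    hubble_time_Gyr = 1.0 / H0_SI / year / 1e9
    print(v_km_s, hubble_time_Gyr)       # ~7040 km/s and a timescale of ~13.9 Gyr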

Hubble's law has two possible explanations. Either we are at the center of an explosion of galaxies (which is untenable
given the Copernican principle) or the universe is uniformly expanding everywhere. This universal expansion was
predicted from general relativity by Alexander Friedmann in 1922[41] and Georges Lemaître in 1927,[42] well before
Hubble made his 1929 analysis and observations, and it remains the cornerstone of the Big Bang theory as developed
by Friedmann, Lemaître, Robertson, and Walker.
The theory requires the relation v = HD to hold at all times, where D is the comoving distance, v is the recessional
velocity, and v, H, and D vary as the universe expands (hence we write H0 to denote the present-day Hubble
"constant"). For distances much smaller than the size of the observable universe, the Hubble redshift can be thought of
as the Doppler shift corresponding to the recession velocity v. However, the redshift is not a true Doppler shift, but
rather the result of the expansion of the universe between the time the light was emitted and the time that it was
detected.
That space is undergoing metric expansion is shown by direct observational evidence of the Cosmological
principle and the Copernican principle, which together with Hubble's law have no other explanation. Astronomical
redshifts are extremely isotropic and homogeneous,[13] supporting the Cosmological principle that the universe looks the
same in all directions, along with much other evidence. If the redshifts were the result of an explosion from a center
distant from us, they would not be so similar in different directions.
Measurements of the effects of the cosmic microwave background radiation on the dynamics of distant astrophysical
systems in 2000 proved the Copernican principle, that, on a cosmological scale, the Earth is not in a central
position.[72] Radiation from the Big Bang was demonstrably warmer at earlier times throughout the universe. Uniform
cooling of the CMB over billions of years is explainable only if the universe is experiencing a metric expansion, and
excludes the possibility that we are near the unique center of an explosion.

Cosmic microwave background radiation

9-year WMAP image of the cosmic microwave background radiation (2012).[73][74] The radiation is isotropic to roughly one

part in 100,000.[75]

In 1964 Arno Penzias and Robert Wilson serendipitously discovered the cosmic background radiation, an
omnidirectional signal in the microwave band.[57] Their discovery provided substantial confirmation of the big-bang
predictions by Alpher, Herman and Gamow around 1950. Through the 1970s the radiation was found to be
approximately consistent with a black body spectrum in all directions; this spectrum has been redshifted by the
expansion of the universe, and today corresponds to approximately 2.725 K. This tipped the balance of evidence in
favor of the Big Bang model, and Penzias and Wilson were awarded a Nobel Prize in 1978.
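Because the blackbody spectrum is simply stretched by the expansion, the CMB temperature scales with redshift as T(z) = T0(1 + z). The recombination redshift of about 1100 used below is a commonly assumed value rather than a number from this passage:

    # CMB temperature as a function of redshift: T(z) = T0 * (1 + z).
    T0 = 2.725               # present-day CMB temperature in kelvin (from the text)
    z_rec = 1100             # assumed redshift of last scattering
    print(T0 * (1 + z_rec))  # ~3000 K at the epoch of last scattering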

The cosmic microwave background spectrum measured by the FIRAS instrument on the COBE satellite is the most-precisely

measured black body spectrum in nature.[76] The data points and error bars on this graph are obscured by the theoretical curve.

Source: Quantum Doughnut - Own work

The surface of last scattering, corresponding to emission of the CMB, occurs shortly after recombination, the epoch when
neutral hydrogen becomes stable. Prior to this, the universe
comprised a hot dense photon-baryon plasma sea where photons were quickly scattered from free charged particles.
Peaking at around 372 ± 14 kyr,[28] the mean free path for a photon becomes long enough to reach the present day and
the universe becomes transparent.
In 1989, NASA launched the Cosmic Background Explorer satellite (COBE), which made two major advances: in 1990,
high-precision spectrum measurements showed that the CMB frequency spectrum is an almost perfect blackbody with
no deviations at a level of 1 part in 10⁴, and measured a residual temperature of 2.726 K (more recent measurements
have revised this figure down slightly to 2.7255 K); then in 1992, further COBE measurements discovered tiny
fluctuations (anisotropies) in the CMB temperature across the sky, at a level of about one part in 10⁵. John C.
Mather and George Smoot were awarded the 2006 Nobel Prize in Physics for their leadership in these results.
During the following decade, CMB anisotropies were further investigated by a large number of ground-based and
balloon experiments. In 2000-2001 several experiments, most notably BOOMERanG, found the shape of the
universe to be spatially almost flat by measuring the typical angular size (the size on the sky) of the anisotropies.
In early 2003, the first results of the Wilkinson Microwave Anisotropy Probe (WMAP) were released, yielding what were
at the time the most accurate values for some of the cosmological parameters. The results disproved several
specific cosmic inflation models, but are consistent with the inflation theory in general. The Planck space probe was
launched in May 2009. Other ground and balloon based cosmic microwave background experiments are ongoing.

Abundance of primordial elements

Using the Big Bang model it is possible to calculate the concentration of helium-4, helium-3, deuterium, and lithium-7 in
the universe as ratios to the amount of ordinary hydrogen. [22] The relative abundances depend on a single parameter,
the ratio of photons to baryons. This value can be calculated independently from the detailed structure
of CMB fluctuations. The ratios predicted (by mass, not by number) are about 0.25 for ⁴He/H, about 10⁻³ for ²H/H,
about 10⁻⁴ for ³He/H, and about 10⁻⁹ for ⁷Li/H.
The measured abundances all agree at least roughly with those predicted from a single value of the baryon-to-photon
ratio. The agreement is excellent for deuterium, close but formally discrepant for ⁴He, and off by a factor of two
for ⁷Li; in the latter two cases there are substantial systematic uncertainties. Nonetheless, the general consistency with
abundances predicted by Big Bang nucleosynthesis is strong evidence for the Big Bang, as the theory is the only
known explanation for the relative abundances of light elements, and it is virtually impossible to "tune" the Big Bang to
produce much more or less than 20-30% helium. Indeed, there is no obvious reason outside of the Big Bang that, for
example, the young universe (i.e., before star formation, as determined by studying matter supposedly free of stellar
nucleosynthesis products) should have more helium than deuterium or more deuterium than 3He, and in constant
ratios, too.
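The quoted helium mass fraction of about 0.25 can be rationalized with a standard back-of-the-envelope argument: if essentially all neutrons end up bound in helium-4, then Y_p = 2(n/p)/(1 + n/p). The neutron-to-proton ratio of about 1/7 used below is a commonly assumed value at the time of nucleosynthesis, not a figure from the text:

    # Back-of-the-envelope primordial helium mass fraction, assuming all
    # neutrons are locked into helium-4 and n/p ~ 1/7 (assumed value).
    n_over_p = 1.0 / 7.0
    Y_p = 2 * n_over_p / (1 + n_over_p)
    print(Y_p)   # 0.25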

Galactic evolution and distribution

Detailed observations of the morphology and distribution of galaxies and quasars are in agreement with the current
state of the Big Bang theory. A combination of observations and theory suggest that the first quasars and galaxies
formed about a billion years after the Big Bang, and since then, larger structures have been forming, such as galaxy
clusters and superclusters.
Populations of stars have been aging and evolving, so that distant galaxies (which are observed as they were in the
early universe) appear very different from nearby galaxies (observed in a more recent state). Moreover, galaxies that
formed relatively recently appear markedly different from galaxies formed at similar distances but shortly after the Big
Bang. These observations are strong arguments against the steady-state model. Observations of star formation, galaxy
and quasar distributions and larger structures, agree well with Big Bang simulations of the formation of structure in the
universe, and are helping to complete details of the theory.

Primordial gas clouds

Focal plane of BICEP2 telescope under a microscope - used to search for polarization in the CMB.

In 2011, astronomers found what they believe to be pristine clouds of primordial gas by analyzing absorption lines in
the spectra of distant quasars. Before this discovery, all other astronomical objects had been observed to contain
heavy elements that are formed in stars. These two clouds of gas contain no elements heavier than hydrogen and
deuterium. Since the clouds of gas have no heavy elements, they likely formed in the first few minutes after the Big
Bang, during Big Bang nucleosynthesis.

Other lines of evidence

The age of the universe as estimated from the Hubble expansion and the CMB is now in good agreement with other
estimates using the ages of the oldest stars, both as measured by applying the theory of stellar evolution to globular
clusters and through radiometric dating of individual Population II stars.
The prediction that the CMB temperature was higher in the past has been experimentally supported by observations of
very low temperature absorption lines in gas clouds at high redshift.[91] This prediction also implies that the amplitude of
the Sunyaev-Zel'dovich effect in clusters of galaxies does not depend directly on redshift. Observations have found this
to be roughly true, but this effect depends on cluster properties that do change with cosmic time, making precise
measurements difficult.

Future observations

Future gravitational-wave observatories might be able to detect primordial gravitational waves, relics of the early
universe from less than a second after the Big Bang.

PROBLEM STATEMENT.

Problems and related issues in physics

As with any theory, a number of mysteries and problems have arisen as a result of the development of the Big Bang
theory. Some of these mysteries and problems have been resolved while others are still outstanding. Proposed
solutions to some of the problems in the Big Bang model have revealed new mysteries of their own. For example,
the horizon problem, the magnetic monopole problem, and the flatness problem are most commonly resolved
with inflationary theory, but the details of the inflationary universe are still left unresolved and many, including some
founders of the theory, say it has been disproven. What follows is a list of the mysterious aspects of the Big Bang
theory still under intense investigation by cosmologists and astrophysicists.

Baryon asymmetry
It is not yet understood why the universe has more matter than antimatter. It is generally assumed that when the
universe was young and very hot it was in statistical equilibrium and contained equal numbers of baryons and
antibaryons. However, observations suggest that the universe, including its most distant parts, is made almost entirely
of matter. A process called baryogenesis was hypothesized to account for the asymmetry. For baryogenesis to occur,
the Sakharov conditions must be satisfied. These require that baryon number is not conserved, that C-
symmetry and CP-symmetry are violated and that the universe depart from thermodynamic equilibrium. All these
conditions occur in the Standard Model, but the effects are not strong enough to explain the present baryon
asymmetry.

Dark energy
Measurements of the redshift-magnitude relation for type Ia supernovae indicate that the expansion of the universe
has been accelerating since the universe was about half its present age. To explain this acceleration, general relativity
requires that much of the energy in the universe consists of a component with large negative pressure, dubbed "dark
energy".
Dark energy, though speculative, solves numerous problems. Measurements of the cosmic microwave
background indicate that the universe is very nearly spatially flat, and therefore according to general relativity the
universe must have almost exactly the critical density of mass/energy. But the mass density of the universe can be
measured from its gravitational clustering, and is found to have only about 30% of the critical density.[11] Since theory
suggests that dark energy does not cluster in the usual way, it is the best explanation for the "missing" energy density.
Dark energy also helps to explain two geometrical measures of the overall curvature of the universe, one using the
frequency of gravitational lenses, and the other using the characteristic pattern of the large-scale structure as a cosmic
ruler.
Negative pressure is believed to be a property of vacuum energy, but the exact nature and existence of dark energy
remains one of the great mysteries of the Big Bang. Results from the WMAP team in 2008 are in accordance with a
universe that consists of 73% dark energy, 23% dark matter, 4.6% regular matter and less than 1%
neutrinos.[29] According to theory, the energy density in matter decreases with the expansion of the universe, but the
dark energy density remains constant (or nearly so) as the universe expands. Therefore, matter made up a larger
fraction of the total energy of the universe in the past than it does today, but its fractional contribution will fall in the far
future as dark energy becomes even more dominant.
The dark energy component of the universe has been explained by theorists using a variety of competing theories
including Einstein's cosmological constant but also extending to more exotic forms of quintessence or other modified
gravity schemes. A cosmological constant problem, sometimes called the "most embarrassing problem in physics",
results from the apparent discrepancy between the measured energy density of dark energy, and the one naively
predicted from Planck units.
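The statement that matter dilutes while the dark energy density stays (nearly) constant can be made concrete with a toy calculation: matter density scales as a⁻³ in scale factor a, so the matter fraction falls as the universe expands. The present-day fractions used below are the approximate values quoted above; the chosen scale factors are arbitrary illustration points:

    # Toy evolution of matter and dark-energy fractions with scale factor a,
    # treating dark energy as constant and matter as diluting like a^-3.
    omega_m0 = 0.27      # matter (dark + baryonic) today, roughly as quoted above
    omega_L0 = 0.73      # dark energy today, as quoted above

    for a in (0.1, 0.5, 1.0, 2.0):        # a = 1 corresponds to today
        m = omega_m0 * a**-3
        total = m + omega_L0
        print(a, round(m / total, 2), round(omega_L0 / total, 2))
    # matter dominates at small a; dark energy dominates as a grows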

Dark matter

Chart showing the proportions of the different components of the universe; about 95% is dark matter and dark energy.

During the 1970s and the 1980s, various observations showed that there is not sufficient visible matter in the universe
to account for the apparent strength of gravitational forces within and between galaxies. This led to the idea that up to
90% of the matter in the universe is dark matter that does not emit light or interact with normal baryonic matter. In
addition, the assumption that the universe is mostly normal matter led to predictions that were strongly inconsistent with
observations. In particular, the universe today is far more lumpy and contains far less deuterium than can be accounted
for without dark matter. While dark matter has always been controversial, it is inferred by various observations: the
anisotropies in the CMB, galaxy cluster velocity dispersions, large-scale structure distributions, gravitational
lensing studies, and X-ray measurements of galaxy clusters.
Indirect evidence for dark matter comes from its gravitational influence on other matter, as no dark matter particles
have been observed in laboratories. Many particle physics candidates for dark matter have been proposed, and several
projects to detect them directly are underway.
Additionally, there are outstanding problems associated with the currently favored cold dark matter model which include
the dwarf galaxy problem and the cuspy halo problem. Alternative theories have been proposed that do not require a
large amount of undetected matter, but instead modify the laws of gravity established by Newton and Einstein; yet no
alternative theory has been as successful as the cold dark matter proposal in explaining all extant observations.

Horizon problem

The horizon problem results from the premise that information cannot travel faster than light. In a universe of finite age
this sets a limit, the particle horizon, on the separation of any two regions of space that are in causal contact. The
observed isotropy of the CMB is problematic in this regard: if the universe had been dominated by radiation or matter at
all times up to the epoch of last scattering, the particle horizon at that time would correspond to about 2 degrees on the
sky. There would then be no mechanism to cause wider regions to have the same temperature.
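A back-of-the-envelope estimate of that angle (a sketch assuming matter domination all the way back to last scattering, so that the comoving horizon grows as the square root of the scale factor) reproduces the figure of roughly 2 degrees.

import math

# Rough estimate of the angle subtended today by the particle horizon at last
# scattering, assuming matter domination throughout (comoving horizon ~ sqrt(a)).
z_rec = 1100                                # approximate redshift of last scattering
theta_rad = math.sqrt(1.0 / (1 + z_rec))    # eta(t_rec) / eta(t_0) in this approximation
print(f"horizon angle ~ {math.degrees(theta_rad):.1f} degrees")  # ~1.7 degrees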
A resolution to this apparent inconsistency is offered by inflationary theory in which a homogeneous and isotropic
scalar energy field dominates the universe at some very early period (before baryogenesis). During inflation, the
universe undergoes exponential expansion, and the particle horizon expands much more rapidly than previously
assumed, so that regions presently on opposite sides of the observable universe are well inside each other's particle
horizon. The observed isotropy of the CMB then follows from the fact that this larger region was in causal contact
before the beginning of inflation.
Heisenberg's uncertainty principle predicts that during the inflationary phase there would be quantum thermal
fluctuations, which would be magnified to cosmic scale. These fluctuations serve as the seeds of all current structure in
the universe. Inflation predicts that the primordial fluctuations are nearly scale invariant and Gaussian, which has been
accurately confirmed by measurements of the CMB.
If inflation occurred, exponential expansion would push large regions of space well beyond our observable horizon.
A related issue to the classic horizon problem arises because in most standard cosmological inflation models, inflation
ceases well before electroweak symmetry breaking occurs, so inflation should not be able to prevent large-scale
discontinuities in the electroweak vacuum since distant parts of the observable universe were causally separate when
the electroweak epoch ended.

Magnetic monopoles

The magnetic monopole objection was raised in the late 1970s. Grand unified theories predicted topological defects in
space that would manifest as magnetic monopoles. These objects would be produced efficiently in the hot early
universe, resulting in a density much higher than is consistent with observations, given that no monopoles have been
found. This problem is also resolved by cosmic inflation, which removes all point defects from the observable universe,
in the same way that it drives the geometry to flatness.

Flatness problem

The overall geometry of the universe is determined by whether the Omega cosmological parameter is less than, equal to,
or greater than 1. Shown from top to bottom are a closed universe with positive curvature, a hyperbolic universe with
negative curvature, and a flat universe with zero curvature.

The flatness problem (also known as the oldness problem) is an observational problem associated with a
Friedmann-Lemaître-Robertson-Walker (FLRW) metric.[109] The universe may have positive, negative, or zero
spatial curvature depending on its total energy density. Curvature is negative if its density is less than the critical
density; positive if greater; and zero at the critical density, in which case space is said to be flat.
The problem is that any small departure from the critical density grows with time, and yet the universe today remains
very close to flat. Given that a natural timescale for departure from flatness might be the Planck time,
10^-43 seconds,[5] the fact that the universe has reached neither a heat death nor a Big Crunch after billions of years
requires an explanation. For instance, even at the relatively late age of a few minutes (the time of nucleosynthesis), the
density of the universe must have been within one part in 10^14 of its critical value, or it would not exist as it does
today.[112]
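To make the fine-tuning concrete, here is a rough sketch (assuming the radiation-era scaling |Omega - 1| proportional to t) that extrapolates the bound quoted above at the time of nucleosynthesis back to the Planck time; the numbers are illustrative only.

# Illustrative sketch of the flatness problem: during radiation domination the
# deviation from critical density grows roughly in proportion to time,
# |Omega - 1| ~ t.  Scaling the nucleosynthesis bound quoted above back to the
# Planck time shows how finely tuned the initial density must have been.

t_nucleosynthesis = 180.0      # seconds, "a few minutes"
t_planck = 1e-43               # seconds
deviation_at_bbn = 1e-14       # |Omega - 1| allowed at nucleosynthesis (from the text)

deviation_at_planck = deviation_at_bbn * (t_planck / t_nucleosynthesis)
print(f"|Omega - 1| at the Planck time must have been < {deviation_at_planck:.1e}")
# ~6e-60: an extraordinarily fine-tuned initial condition without inflation.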

Cause

Problem of why there is anything at all


Gottfried Wilhelm Leibniz wrote: "Why is there something rather than nothing? The sufficient reason [...] is found in a
substance which [...] is a necessary being bearing the reason for its existence within itself." Philosopher of physics Dean
Rickles[114] has argued that numbers and mathematics (or their underlying laws) may necessarily exist.[115][116] Physics
may conclude that time did not exist before the Big Bang but 'started' with it, so there might be no 'beginning', 'before',
or potentially 'cause', and the universe might instead have always existed. Some also argue that nothing cannot exist, or
that non-existence might never have been an option. Quantum fluctuations, or other laws of physics that may have existed
at the start of the Big Bang, could then create the conditions for matter to occur.

Ultimate fate of the universe

Before observations of dark energy, cosmologists considered two scenarios for the future of the universe. If the
mass density of the universe were greater than the critical density, then the universe would reach a maximum size and
then begin to collapse. It would become denser and hotter again, ending with a state similar to that in which it started:
a Big Crunch.
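The critical density invoked here follows from the Friedmann equation as rho_c = 3*H0^2 / (8*pi*G); a minimal sketch, assuming a representative Hubble constant of about 70 km/s/Mpc, gives its present-day value.

import math

# Critical density rho_c = 3 * H0^2 / (8 * pi * G), the dividing line between
# a universe that recollapses (Big Crunch) and one that expands forever.
# H0 = 70 km/s/Mpc is an assumed, representative value.
G = 6.674e-11                          # gravitational constant, m^3 kg^-1 s^-2
Mpc_in_m = 3.086e22                    # metres per megaparsec
H0 = 70e3 / Mpc_in_m                   # Hubble constant in s^-1

rho_critical = 3 * H0**2 / (8 * math.pi * G)
print(f"critical density ~ {rho_critical:.1e} kg/m^3")   # ~9e-27 kg/m^3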
Alternatively, if the density in the universe were equal to or below the critical density, the expansion would slow down
but never stop. Star formation would cease with the consumption of interstellar gas in each galaxy; stars would burn
out, leaving white dwarfs, neutron stars, and black holes. Very gradually, collisions between these would result in mass
accumulating into larger and larger black holes. The average temperature of the universe would asymptotically
approach absolute zero, a Big Freeze.[123] Moreover, if the proton were unstable, then baryonic matter would disappear,
leaving only radiation and black holes. Eventually, black holes would evaporate by emitting Hawking radiation.
The entropy of the universe would increase to the point where no organized form of energy could be extracted from it, a
scenario known as heat death.
Modern observations of accelerating expansion imply that more and more of the currently visible universe will pass
beyond our event horizon and out of contact with us. The eventual result is not known. The ΛCDM model of the
universe contains dark energy in the form of a cosmological constant. This theory suggests that only gravitationally
bound systems, such as galaxies, will remain together, and they too will be subject to heat death as the universe
expands and cools. Other explanations of dark energy, called phantom energy theories, suggest that ultimately galaxy
clusters, stars, planets, atoms, nuclei, and matter itself will be torn apart by the ever-increasing expansion in a so-
called Big Rip.

Misconceptions

The following is a partial list of the popular misconceptions about the Big Bang model:
The Big Bang as the origin of the universe: One of the common misconceptions about the Big Bang model is the belief
that it describes the origin of the universe. However, the Big Bang model does not comment on how the universe came
into being. The current conception of the Big Bang model assumes the existence of energy, time, and space, and does not
comment on their origin or on the cause of the dense and high-temperature initial state of the universe.
The Big Bang was "small": It is misleading to visualize the Big Bang by comparing its size to everyday objects. When
the size of the universe at the Big Bang is described, it refers to the size of the observable universe, and not the entire
universe.
Hubble's law violates the special theory of relativity: Hubble's law predicts that galaxies beyond the Hubble distance
recede faster than the speed of light. However, special relativity constrains only motion through space; Hubble's law
describes a velocity that results from the expansion of space, rather than motion through space.
Doppler redshift vs cosmological redshift: Astronomers often refer to the cosmological redshift as a normal Doppler
shift,[127] which is a misconception. Although similar, the cosmological redshift is not identical to the Doppler redshift.
The Doppler redshift is based on special relativity, which does not consider the expansion of space; the cosmological
redshift is based on general relativity, in which the expansion of space is considered. Although the two may appear
identical for nearby galaxies, confusion can arise if the behavior of distant galaxies is interpreted through the Doppler
redshift.
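As a small numerical illustration of the last two misconceptions (a sketch only, not a cosmological calculation), the naive v = cz reading of Hubble's law can be compared with the special-relativistic Doppler velocity for the same redshift; the two agree for nearby galaxies but diverge badly at high redshift, which is why distant galaxies must be treated with the expansion of space in general relativity.

# Compare the naive v = c*z velocity with the special-relativistic Doppler
# velocity for the same redshift.  For small z they agree; for large z the
# naive formula exceeds c, illustrating why cosmological redshifts should not
# be read as ordinary Doppler shifts.
c = 299_792.458  # speed of light, km/s

def doppler_velocity(z):
    """Special-relativistic radial Doppler velocity for redshift z."""
    factor = (1 + z) ** 2
    return c * (factor - 1) / (factor + 1)

for z in (0.01, 0.1, 1.0, 3.0):
    print(f"z = {z}: naive cz = {c * z:9.0f} km/s, "
          f"relativistic Doppler = {doppler_velocity(z):9.0f} km/s")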

Speculations

While the Big Bang model is well established in cosmology, it is likely to be refined. The Big Bang theory, built upon the
equations of classical general relativity, indicates a singularity at the origin of cosmic time; this infinite energy density is
regarded as impossible in physics. Still, it is known that the equations are not applicable before the time when the
universe cooled down to the Planck temperature, and this conclusion depends on various assumptions, of which some
could never be experimentally verified. (Also see Planck epoch.)
One proposed refinement to avoid this would-be singularity is to develop a correct treatment of quantum gravity.
It is not known what could have preceded the hot dense state of the early universe or how and why it originated, though
speculation abounds in the field of cosmogony.
Some proposals, each of which entails untested hypotheses, are:

Models including the HartleHawking no-boundary condition, in which the whole of space-time is finite; the Big
Bang does represent the limit of time but without any singularity.
Big Bang lattice model, which states that the universe at the moment of the Big Bang consists of an infinite lattice
of fermions, which is smeared over the fundamental domain so it has rotational, translational and gauge
symmetry. The symmetry is the largest symmetry possible and hence the lowest entropy of any state.
Brane cosmology models, in which inflation is due to the movement of branes in string theory; the pre-Big Bang
model; the ekpyrotic model, in which the Big Bang is the result of a collision between branes; and the cyclic
model, a variant of the ekpyrotic model in which collisions occur periodically. In the latter model the Big Bang was
preceded by a Big Crunch and the universe cycles from one process to the other.
Eternal inflation, in which universal inflation ends locally here and there in a random fashion, each end-point
leading to a bubble universe, expanding from its own big bang.
Proposals in the last two categories see the Big Bang as an event in either a much larger and older universe or in
a multiverse.

Religious and philosophical interpretations

As a description of the origin of the universe, the Big Bang has significant bearing on religion and philosophy. As a
result, it has become one of the liveliest areas in the discourse between science and religion. Some believe the Big
Bang implies a creator, and some see its mention in their holy books, while others argue that Big Bang cosmology
makes the notion of a creator superfluous.

Unsolved problems in astronomy
In astrochemistry and astronomy, Ariny Amos wrote this book to answer questions that remain open. Although the
scientific discipline of astronomy has made tremendous strides in understanding the nature of the
Universe and its contents, there remain some important unanswered questions. Answers to these may
require the construction of new ground- and space-based instruments, and possibly new developments in
theoretical and experimental physics.

What is the origin of the stellar mass spectrum? That is, why do astronomers observe the same distribution of stellar masses (the initial mass function) apparently regardless of the initial conditions?[100] A deeper understanding of the formation of stars and planets is needed.
Is there other life in the Universe? Especially, is there other intelligent life? If so, what is the explanation for the Fermi paradox? The existence of life elsewhere has important scientific and philosophical implications.[101][102] Is the Solar System normal or atypical?
What caused the Universe to form? Is the premise of the fine-tuned universe hypothesis correct? If so, could this be the result of cosmological natural selection? What caused the cosmic inflation that produced our homogeneous universe? Why is there a baryon asymmetry?
What is the nature of dark matter and dark energy? These dominate the evolution and fate of the cosmos, yet their true nature remains unknown.[103] What will be the ultimate fate of the universe?
How did the first galaxies form?[105] How did supermassive black holes form?[106]
What is creating the ultra-high-energy cosmic rays?
Why is the abundance of lithium in the cosmos four times lower than predicted by the standard Big Bang model?
What really happens beyond the event horizon?

Astronomy 120 Discussion Questions: The Expanding Universe

In the early 1900s a debate raged about the white nebulae. Were these objects part of the Milky Way, or were they
separate "island universes"? Heber Doust Curtis argued that the white nebulae were island universes (galaxies).
Contrast these nebulae with Milky Way objects. How did these differences suggest they were outside the Milky Way?

Explain how the main sequence fitting technique can be used to measure the distance to clusters of stars.
Why can the distance to a cluster of stars be determined, while the distance to a single star generally cannot
be accurately measured?

How does the distance ladder work, and why is it a necessary tool to measure distances from the nearby to
the distant universe? Incorporate the inverse square law into your answer.

Cepheid variable stars are stars that vary cyclically in brightness, from bright to dim to bright again, on a scale
of several months. (Incorporate the following concepts into your answers below: hydrostatic equilibrium, the
inverse square law, the blackbody temperature relation, and inertia.) Explain how these stars may be used as
standard candles to measure their distances and, by implication, the distances to their home galaxies.
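One hedged sketch of the calculation the question points to (the period-luminosity coefficients below are approximate and purely illustrative; real calibrations are made observationally, and the apparent magnitude and period used here are hypothetical):

import math

# Sketch of using a Cepheid as a standard candle.  The period-luminosity
# relation below (M_V ~ -2.43*(log10(P) - 1) - 4.05, P in days) is an
# approximate, illustrative calibration, not a definitive one.

def cepheid_absolute_magnitude(period_days):
    return -2.43 * (math.log10(period_days) - 1.0) - 4.05

def distance_parsecs(apparent_mag, absolute_mag):
    """Invert the distance modulus m - M = 5*log10(d / 10 pc)."""
    return 10 ** ((apparent_mag - absolute_mag + 5) / 5)

m_observed = 25.0   # hypothetical apparent magnitude of a Cepheid in another galaxy
period = 30.0       # hypothetical pulsation period in days
M = cepheid_absolute_magnitude(period)
print(f"absolute magnitude ~ {M:.2f}")
print(f"distance ~ {distance_parsecs(m_observed, M) / 1e6:.1f} Mpc")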

Explain how the theory of an expanding universe accounts for the linear relationship between redshift velocity V
(velocity away) and distance d, known as Hubble's law: V = H d.

Describe how you would measure the Hubble constant H0. What measurements would be necessary? What
are some of the difficulties with these measurements?
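A minimal sketch of the fit involved (the distance and velocity pairs below are hypothetical; real measurements carry the calibration and peculiar-velocity difficulties the question alludes to):

# Least-squares estimate of H0 from hypothetical (distance, velocity) pairs,
# fitting v = H0 * d through the origin.  Real data are complicated by
# distance-calibration errors and the peculiar velocities of galaxies.

distances_mpc = [20, 50, 90, 140, 200]            # hypothetical distances (Mpc)
velocities_kms = [1500, 3400, 6400, 9600, 14200]  # hypothetical recession velocities (km/s)

numerator = sum(d * v for d, v in zip(distances_mpc, velocities_kms))
denominator = sum(d * d for d in distances_mpc)
H0 = numerator / denominator
print(f"H0 ~ {H0:.1f} km/s per Mpc")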

In class, we compared Population I stars (like the Sun) and Population II stars, which are found in globular
clusters. (a) What observation can be used to tell the difference between a Pop I and a Pop II star
(assume both stars are on the main sequence and have the same intrinsic brightness)? (b) Explain the
reasoning behind why these stars would appear different. Include stellar evolutionary concepts in your
answer.

The Big Bang and Steady State cosmological models both attempt to explain astronomical observations.
Given each of the following observations, indicate whether the Big Bang and Steady State theories are
consistent or inconsistent with the observation, and explain why. (a) The redshift-distance relation (Hubble's
law): the observed spectral lines of galaxies are red-shifted, and the greater the distance to a galaxy, the greater
the observed redshift; the distance is proportional to the redshift. (b) Ages of the oldest stars: when the ages of
stars are measured using techniques such as radioactive decay of elements and Hertzsprung-Russell diagrams of
clusters, the oldest stars in the universe are measured to be 15 billion years old, even though many stars could
theoretically burn for trillions of years. (c) Distant vs. nearby universe: objects such as quasars and energetic
galaxies are common at high redshifts (great distances) but are rare in the nearby universe.

STAR.

A star is a luminous sphere of plasma held together by its own gravity. The nearest star to Earth is the Sun. Many
other stars are visible to the naked eye from Earth during the night, appearing as a multitude of fixed luminous
points in the sky due to their immense distance from Earth. Historically, the most prominent stars were grouped
into constellations and asterisms, the brightest of which gained proper names. Astronomers have assembled star
catalogues that identify the known stars and provide standardized stellar designations. However, most of the stars
in the Universe, including all stars outside our galaxy, the Milky Way, are invisible to the naked eye from Earth.
Indeed, most are invisible from Earth even through the most powerful telescopes.

A star is nothing more than a ball of gas, mostly hydrogen and helium, contracting under the force of gravity
and releasing gravitational and fusion energy into space. At the beginning of its life, a star releases only
gravitational potential energy. Later, when its core shrinks to a sufficiently high density, the star predominantly
radiates energy released through the conversion of hydrogen and helium into heavier elements. During this
nuclear phase the shrinkage of the star's core slows dramatically. At the end of its luminous life, once all of its
nuclear energy is exhausted, the star begins again to shrink rapidly until the star reaches its most compact form:
a degenerate dwarf, a neutron star, or a black hole.
What separates a star from a brown dwarf or a gaseous planet is that the star passes through a phase of
hydrogen fusion. Before this phase, a gaseous planet, a brown dwarf, and a protostar (a star before nuclear
fusion takes place) look the same, emitting energy released through gravitational collapse. The gaseous planet,
however, stays on this track its whole life, shrinking until the pressure of its cold core balances the gravitational
forces. The brown dwarf, despite undergoing a period of deuterium fusion, never approaches a star in brightness, and
never changes its composition significantly from the composition of a Jupiter; once it depletes its deuterium, it
behaves as a Jupiter.
The determining factor for which type of object a hydrogen sphere becomes is the mass of the sphere.
The dividing line is about 0.075 times the mass of the Sun.[1] Above this line, steady hydrogen fusion is
possible; below this line, hydrogen fusion does not occur, and the sphere becomes either a brown dwarf or a
Jupiter (it is a brown dwarf if the mass is greater than 0.012 solar masses).
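A minimal sketch of these dividing lines as a simple classifier (the thresholds are the 0.075 and 0.012 solar-mass values quoted above):

def classify_hydrogen_sphere(mass_solar):
    """Classify a hydrogen sphere by mass, using the dividing lines quoted above:
    ~0.075 solar masses for sustained hydrogen fusion (a star) and
    ~0.012 solar masses for deuterium fusion (a brown dwarf)."""
    if mass_solar >= 0.075:
        return "star (steady hydrogen fusion)"
    if mass_solar >= 0.012:
        return "brown dwarf (deuterium fusion only)"
    return "planet (a 'Jupiter', no fusion)"

for m in (0.001, 0.02, 0.075, 1.0):
    print(f"{m} solar masses -> {classify_hydrogen_sphere(m)}")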
The parameter with the greatest influence on stellar evolution is a star's mass. Large stars, those with
masses 100 times that of the Sun, evolve rapidly, glowing bright and blue over most of their lives, with life
spans as short as two million years. Small stars, those with a mass around one-tenth the mass of the Sun, burn a
dim red for several tens of billions of years. A more massive star burns its nuclear fuel faster than a smaller star
because the more massive star has a higher internal density and pressure, which forces the nuclear reactions to
occur at a much higher rate than in the smaller star. The higher fusion rate more than compensates for the
larger mass.
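A rough way to quantify this trade-off is the common approximation that luminosity scales as about the 3.5 power of mass, so that the main-sequence lifetime scales as roughly M to the -2.5 power. The sketch below normalises to a ten-billion-year solar lifetime; the simple power law is least accurate at the extremes of the mass range.

# Rough main-sequence lifetime estimate, assuming the approximate
# mass-luminosity relation L ~ M**3.5 so that lifetime ~ M / L ~ M**-2.5,
# normalised to ~10 billion years for a 1 solar-mass star.

def main_sequence_lifetime_gyr(mass_solar):
    return 10.0 * mass_solar ** -2.5

for m in (0.5, 1.0, 2.0, 10.0):
    print(f"{m:5.1f} solar masses -> ~{main_sequence_lifetime_gyr(m):.3g} Gyr")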

A star's maximum mass is set by the tendency of large stars to drive strong winds from their surface. As
the size of a star increases, the luminosity of the star increases. The radiation from a star exerts pressure on the
atmosphere of the star, and if the pressure is great enough, a substantial wind is created that drives off a large
fraction of the star's mass. Estimates place the maximum mass of a star at around 350 times the mass of the Sun.
There are very few parameters other than mass that determine how a star changes during its existence.
One is the star's composition. A star by mass is three parts hydrogen for every part helium. A tiny fraction of a
star is composed of other elements, such as carbon, nitrogen, and oxygen, all collectively termed metals in
the astronomer's jargon. Metals influence the rate at which hydrogen is converted into helium, they affect the
escape of energy from the center of the star, and they control the stellar wind coming off the surface of the
largest stars. The final and least important parameter that determines the life of a star is its angular momentum.
The appearance of a star as it ages can change dramatically, because, while the core always shrinks as a
star ages, its outer layers can either shrink or expand, depending on the amount of energy released by the core.
When the rate at which energy is generated by the core increases, the outer layers of the star puff up until the energy
can leak through these layers at a rate equal to the rate at which it is created. The temperature at the surface generally
drops, but because of the star's larger surface area, the
total power radiated by the star's surface remains equal to the power generated as thermal energy at the
star's core. The converse occurs if the energy generation decreases, with the surface of the star becoming
smaller and bluer. As a star ages and burns through its different elements, the power generated within the
star varies, so the size of the star varies.
A star begins the fusion stage of its life by converting hydrogen into helium. A star in this state is called
a main sequence star, because on a chart of surface temperature versus luminosity, a hydrogen-burning star
falls on a line called the main sequence. Low luminosity stars on the main sequence, which are low-mass
stars, have a low surface temperature, and are therefore red; on the other hand, high luminosity stars on the
main sequence, which are high-mass stars, have a high temperature, and are therefore blue.
Once a star exhausts its supply of hydrogen at its core, the core collapses until nuclear reactions begin
that convert helium into carbon and oxygen. Outer layers of these stars continue to convert hydrogen into
helium. A star in this stage produces more power than when it was on the main sequence, so its outer layers
expand, making the star many times larger than previously. Such stars are in their red-giant phase, and while
they produce more power, their larger surface area means that they are cooler and redder than before.
After a star exhausts its supply of nuclear fuel, its core collapses until either the star achieves a stable
configuration, with its internal pressure counteracting gravity as it cools to zero temperature, or the star
collapses to a black hole. This core collapse is generally accompanied with the expulsion of the outer layers of
the star, because the amount of gravitational energy released in the collapse provides a pressure that more than
counteracts the gravitational forces on the outer layers. A moderately-sized star, for instance, a star the size of
the Sun, collapses to a stable star, called a degenerate dwarf, that is roughly the radius of Earth. The core of a
larger star collapses to a radius of about 15 km; this releases tremendous amounts of energy, leading to a
supernova explosion that drives away the remainder of the star. The remnant star left behind is a neutron star. If
a star is large enough, its core collapses to a black hole. What happens when this occurs is very speculative.
Degenerate dwarf stars have two possible fates. If the star is alone, it will cool gradually to invisibility. If
the star is in a binary system, then the star may undergo a thermonuclear detonation to produce a type Ia
supernova. Two different theories for this event exist; in one, the companion is a main sequence star, and in
the other, it is a degenerate dwarf. If the companion star is a main sequence star entering the red giant phase,
then, as it gradually expands, it will dump some of its atmosphere onto the degenerate dwarf. If enough
material is transferred, the degenerate dwarf becomes unstable to collapse. When the collapse commences, the
higher pressure at the star's center causes the carbon and oxygen to fuse into iron and other heavier elements,
resulting in a thermonuclear explosion. If the companion star is a degenerate dwarf, then the two stars
eventually merge when the system loses enough energy through gravitational radiation. As before, this
produces a stellar collapse, which leads to a thermonuclear explosion. Either way, a type Ia supernova is
produced, and the degenerate star is totally destroyed.
The neutron star has only one fate, and that is to cool to invisibility.
The most complex behavior seen among stars is stellar pulsation. While most stars maintain a stable
configuration that permits the steady transport of nuclear energy to the surface of the star, some stars never find
this configuration, and instead they oscillate in size. When one of these stars is at its smallest, energy within the
star builds up, raising the pressure within the star and driving the outer layers of the star to greater radii.
At some point, depending upon the details of the physics driving the pulsation, the transport of energy becomes
more efficient, and the energy within the star leaves the star faster than it is produced. This drives the star back
to its smallest size. Stellar pulsation is therefore driven by the physics of the radiative transport.

1. The standard units for describing stars are the solar mass, the solar radius, and the astronomical unit (AU).

A star-forming region in the Large Magellanic Cloud.

For at least a portion of its life, a star shines due to thermonuclear fusion of hydrogen into helium in its core,
releasing energy that traverses the star's interior and then radiates into outer space. Almost all naturally occurring
elements heavier than helium are created by stellar nucleosynthesis during the star's lifetime, and for some stars by
supernova nucleosynthesis when it explodes. Near the end of its life, a star can also contain degenerate matter.
Astronomers can determine the mass, age, metallicity (chemical composition), and many other properties of a star
by observing its motion through space, its luminosity, and its spectrum. The total mass of a star is the main
factor that determines its evolution and eventual fate. Other characteristics of a star, including diameter and
temperature, change over its life, while the star's environment affects its rotation and movement. A plot of the
temperature of many stars against their luminosities produces a plot known as a Hertzsprung-Russell diagram (H-R
diagram). Plotting a particular star on that diagram allows the age and evolutionary state of that star to be
determined.
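As an illustration of how such a diagram is constructed (a minimal sketch with approximate, illustrative temperatures and luminosities; matplotlib is assumed to be available):

import matplotlib.pyplot as plt

# Minimal Hertzsprung-Russell diagram sketch with a few stars at approximate,
# illustrative temperatures (K) and luminosities (solar units).
stars = {
    "Sun":          (5778, 1.0),
    "Sirius A":     (9940, 25.0),
    "Betelgeuse":   (3500, 1.0e5),
    "Proxima Cen":  (3000, 1.7e-3),
}

temps = [t for t, _ in stars.values()]
lums = [l for _, l in stars.values()]

plt.scatter(temps, lums)
for name, (t, l) in stars.items():
    plt.annotate(name, (t, l))
plt.gca().invert_xaxis()          # hotter stars on the left, by convention
plt.yscale("log")
plt.xlabel("Effective temperature (K)")
plt.ylabel("Luminosity (solar units)")
plt.title("Sketch of a Hertzsprung-Russell diagram")
plt.show()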

False-color imagery of the Sun, a G-type main-sequence star, the closest star to Earth.

A star's life begins with the gravitational collapse of a gaseous nebula of material composed primarily of hydrogen,
along with helium and trace amounts of heavier elements. When the stellar core is sufficiently dense, hydrogen
becomes steadily converted into helium through nuclear fusion, releasing energy in the process. The remainder of
the star's interior carries energy away from the core through a combination of radiative and convective heat transfer
processes. The star's internal pressure prevents it from collapsing further under its own gravity. When the hydrogen
fuel at the core is exhausted, a star with a mass of at least 0.4 times the Sun's will expand to become a red giant. In
some cases, it will fuse heavier elements at the core or in shells around the core. As the star expands it throws a part
of its mass, enriched with those heavier
elements, into the interstellar environment, to be recycled later as new stars. Meanwhile, the core becomes a
stellar remnant: a white dwarf, a neutron star, or if it is sufficiently massive a black hole.

Binary and multi-star systems consist of two or more stars that are gravitationally bound and generally move
around each other in stable orbits. When two such stars have a relatively close orbit, their gravitational interaction
can have a significant impact on their evolution. Stars can form part of a much larger gravitationally bound
structure, such as a star cluster or a galaxy.

Observation history of a star

Historically, stars have been important to civilizations throughout the world. They have been part of religious
practices and used for celestial navigation and orientation. Many ancient astronomers believed that stars were
permanently affixed to a heavenly sphere and that they were immutable.
By convention, astronomers grouped stars into constellations and used them to track the motions of the planets and
the inferred position of the Sun.[5]The motion of the Sun against the
background stars (and the horizon) was used to create calendars, which could be used to regulate agricultural
practices.[7]The Gregorian calendar, currently used nearly everywhere in the world, is a solar calendar based on the
angle of the Earth's rotational axis relative to its local star, the Sun.

The oldest accurately dated star chart was the result of ancient Egyptian astronomy in 1534 BC. The earliest
known star catalogues were compiled by the ancient Babylonian astronomers
of Mesopotamia in the late 2nd millennium BC, during the Kassite Period (ca. 1531-1155 BC).

People have seen patterns in the stars since ancient times. This 1690 depiction of the constellation of
Leo, the lion, is by Johannes Hevelius.

The first star catalogue in Greek astronomy was created by Aristillus in approximately 300 BC, with the help of
Timocharis. The star catalog of Hipparchus (2nd century BC) included 1,020
stars, and was used to assemble Ptolemy's star catalogue. Hipparchus is known for the discovery of the first
recorded nova (new star).[12]Many of the constellations and star names in use today derive from Greek astronomy.
In spite of the apparent immutability of the heavens, Chinese astronomers were aware that new stars could appear.
In 185 AD, they were the first to observe and write about a supernova, now known as SN 185. The brightest
stellar event in recorded history was the SN 1006 supernova, which was observed in 1006 and written about by the
Egyptian astronomer Ali ibn Ridwan and several Chinese astronomers. The SN 1054 supernova, which gave birth to
the Crab Nebula, was also observed by Chinese and Islamic astronomers.

Medieval Islamic astronomers gave Arabic names to many stars that are still used today and they invented numerous
astronomical instruments that could compute the positions of the stars. They built the first large observatory research
institutes, mainly for the purpose of producing Zij star catalogues. Among these, the Book of Fixed Stars (964) was
written by the Persian astronomer Abd al-Rahman al-Sufi, who observed a number of stars, star clusters (including
the Omicron Velorum and Brocchi's Clusters) and galaxies (including the Andromeda Galaxy). According to A.
Zahoor, in the 11th century, the Persian polymath scholar Abu Rayhan Biruni described the Milky Way galaxy as a
multitude of fragments having the properties of nebulous stars, and also gave the latitudes of various stars during a
lunar eclipse in 1019.

According to Josep Puig, the Andalusian astronomer Ibn Bajjah proposed that the Milky Way was made up of
many stars that almost touched one another and appeared to be a continuous image due to the effect of
refraction from sublunary material, citing his observation of the conjunction of Jupiter and Mars in 500 AH
(1106/1107 AD) as evidence. Early European astronomers such as Tycho Brahe identified new stars in the night
sky (later termed novae), suggesting that the heavens were not immutable. In 1584 Giordano Bruno suggested
that the stars were like the Sun, and may have other planets, possibly even Earth-like, in orbit around
them,[23] an idea that had been suggested earlier by the ancient Greek philosophers Democritus and Epicurus, and
by medieval Islamic cosmologists such as Fakhr al-Din al-Razi. By
the following century, the idea of the stars being the same as the Sun was reaching a consensus among astronomers.
To explain why these stars exerted no net gravitational pull on the Solar System, Isaac Newton suggested that the
stars were equally distributed in every direction, an idea prompted by the theologian Richard Bentley.

The Italian astronomer Geminiano Montanari recorded observing variations in luminosity of the star Algol in
1667. Edmond Halley published the first measurements of the proper motion of a pair of nearby "fixed" stars,
demonstrating that they had changed positions since the time of the ancient Greek astronomers Ptolemy and
Hipparchus

William Herschel was the first astronomer to attempt to determine the distribution of stars in the sky. During the
1780s he established a series of gauges in 600 directions and counted the stars observed along each line of sight.
From this he deduced that the number of stars steadily increased toward one side of the sky, in the direction of the
Milky Way core. His son John Herschel repeated this study in the southern hemisphere and found a corresponding
increase in the same direction.[28]In addition to his other accomplishments, William Herschel is also noted for his
discovery that some stars do not merely lie along the same line of sight, but are also physical companions that form
binary star systems.

The constellation of Leo as it can be seen by the naked eye. Lines have been added.

The science of stellar spectroscopy was pioneered by Joseph von Fraunhofer and Angelo Secchi. By comparing the
spectra of stars such as Sirius to the Sun, they found differences in the strength and number of their absorption
lines, the dark lines in a stellar spectrum caused by the atmosphere's absorption of specific frequencies. In 1865 Secchi
began classifying stars into spectral types.[29]However, the modern version of the stellar classification scheme was
developed by Annie J. Cannon during the 1900s.

Alpha Centauri A and B over limb of Saturn

The first direct measurement of the distance to a star (61 Cygni at 11.4 light-years) was made in 1838 by Friedrich
Bessel using the parallax technique. Parallax measurements demonstrated the vast separation of the stars in the
heavens. Observation of double stars gained increasing importance during the 19th century. In 1834, Friedrich
Bessel observed changes in the proper motion of the star Sirius and inferred a hidden companion. Edward Pickering
discovered the first spectroscopic binary in 1899 when he observed the periodic splitting of the spectral lines of the
star Mizar in a 104-day period. Detailed observations of many binary star systems were collected by astronomers
such as William Struve and S. W. Burnham, allowing the masses of stars to be determined from computation of
orbital elements. The first solution to the problem of deriving an orbit of binary stars from telescope observations
was made by Felix Savary in 1827. The twentieth century saw increasingly rapid advances in the scientific study of
stars. The photograph became a valuable astronomical tool. Karl Schwarzschild discovered that the color of a star
and, hence, its temperature, could be determined by comparing the visual magnitude against the photographic
magnitude. The development of the photoelectric photometer allowed precise measurements of magnitude at
multiple wavelength intervals. In 1921 Albert A. Michelson made the first measurements of a stellar diameter using
an interferometer on the Hooker telescope at Mount Wilson Observatory.
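The parallax technique itself reduces to a reciprocal relation, distance in parsecs = 1 / parallax in arcseconds. A minimal sketch, assuming a parallax of about 0.29 arcseconds for 61 Cygni (an approximate modern value), recovers roughly the distance quoted above.

# Distance from trigonometric parallax: d [parsecs] = 1 / p [arcseconds].
# 0.29 arcsec is an approximate parallax for 61 Cygni (illustrative value).

LIGHT_YEARS_PER_PARSEC = 3.26

def parallax_distance_ly(parallax_arcsec):
    return (1.0 / parallax_arcsec) * LIGHT_YEARS_PER_PARSEC

print(f"61 Cygni: ~{parallax_distance_ly(0.29):.1f} light-years")  # ~11 light-years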

Important theoretical work on the physical structure of stars occurred during the first decades of the twentieth
century. In 1913, the Hertzsprung-Russell diagram was developed, propelling the astrophysical study of stars.
Successful models were developed to explain the interiors of stars and stellar evolution. Cecilia Payne-Gaposchkin
first proposed that stars were made primarily of hydrogen and helium in her 1925 PhD thesis. The spectra of stars
were further understood through advances in quantum physics. This allowed the chemical composition of the
stellar atmosphere to be determined.

With the exception of supernovae, individual stars have primarily been observed in the Local Group and
especially in the visible part of the Milky Way (as demonstrated by the detailed star catalogues available for our
galaxy). But some stars have been observed in the M100 galaxy of the Virgo Cluster, about 100 million light years
from the Earth. In the Local Supercluster it is possible to see star clusters, and current telescopes could in principle
observe faint individual stars in the Local Group.
However, outside the Local Supercluster of galaxies, neither individual stars nor clusters of stars have been
observed. The only exception is a faint image of a large star cluster containing hundreds of thousands of stars
located at a distance of one billion light years, ten times further than the most distant star cluster previously
observed.

Designations of a star.

The concept of a constellation was known to exist during the Babylonian period. Ancient sky watchers
imagined that prominent arrangements of stars formed patterns, and they associated
these with particular aspects of nature or their myths. Twelve of these formations lay along the band of the
ecliptic and these became the basis of astrology. Many of the more prominent
individual stars were also given names, particularly with Arabic or Latin designations.

As well as certain constellations and the Sun itself, individual stars have their own myths. To the Ancient Greeks,
some "stars", known as planets (Greek πλανήτης (planētēs), meaning "wanderer"), represented various important
deities, from which the names of the planets Mercury, Venus, Mars,
Jupiter and Saturn were taken. (Uranus and Neptune were also Greek
and Roman gods, but neither planet was known in Antiquity because of their low brightness. Their names
were assigned by later astronomers.)

This view contains blue stars known as "blue stragglers", for their apparent location on the Hertzsprung-Russell
diagram.

Circa 1600, the names of the constellations were used to name the stars in the corresponding regions of the sky.
The German astronomer Johann Bayer created a series of star maps and applied Greek letters as designations to the
stars in each constellation. Later a numbering system based on the star's right ascension was invented and added to
John Flamsteed's star catalogue in
his book "Historia coelestis Britannica" (the 1712 edition), whereby this numbering system came to be called
Flamsteed designation or Flamsteed numbering.

The only internationally recognized authority for naming celestial bodies is the International Astronomical
Union (IAU).[43] A number of private companies sell names of stars, which the British Library calls an
unregulated commercial enterprise. The IAU has disassociated itself from this commercial practice, and these
names are neither recognized by the IAU nor used by them. One such star-naming company is the International
Star Registry, which, during the 1980s, was accused of deceptive practice for making it appear that the
assigned name was official. This now-discontinued ISR practice was informally labeled a scam and a
fraud, and the New York City Department of Consumer Affairs issued a violation against ISR for engaging in
a deceptive trade practice.

Units of measurement

Although stellar parameters can be expressed in SI units or CGS units, it is often most convenient to express
mass, luminosity, and radii in solar units, based on the characteristics of the Sun:

solar mass: M☉ = 1.9891 × 10^30 kg[53]
solar luminosity: L☉ = 3.827 × 10^26 W[53]
solar radius: R☉ = 6.960 × 10^8 m[54]

Large lengths, such as the radius of a giant star or the semi-major axis of a binary star system, are often expressed in
terms of the astronomical unit, approximately equal to the mean distance between the Earth and the Sun (150
million km or 93 million miles).
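A small helper sketch for converting between these solar units and SI, using the constants listed above (the AU value is the rounded 150-million-kilometre figure):

# Solar units in SI, taken from the values listed above.
SOLAR_MASS_KG = 1.9891e30      # kg
SOLAR_LUMINOSITY_W = 3.827e26  # W
SOLAR_RADIUS_M = 6.960e8       # m
AU_M = 1.50e11                 # m (rounded mean Earth-Sun distance)

def to_solar_masses(mass_kg):
    return mass_kg / SOLAR_MASS_KG

def to_solar_radii(length_m):
    return length_m / SOLAR_RADIUS_M

# Example: Earth's mass and the Earth-Sun distance in solar units.
print(f"Earth mass ~ {to_solar_masses(5.972e24):.1e} solar masses")
print(f"1 AU       ~ {to_solar_radii(AU_M):.0f} solar radii")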

STAR FORMATION AND EVOLUTION

Stars condense from regions of space of higher density, yet those regions are less dense than within a vacuum
chamber. These regions - known as molecular clouds - consist mostly of hydrogen, with about 23 to 28
percent helium and a few percent heavier elements. One example of such a star-forming region is the Orion
Nebula. Most stars form in groups of dozens to hundreds of thousands of stars. Massive stars in these groups
may powerfully illuminate those clouds, ionizing the hydrogen, and creating H II regions. Such feedback
effects, from star formation, may ultimately disrupt the cloud and prevent further star formation.

Stellar evolution of low-mass (left cycle) and high-mass (right cycle) stars, with examples in italics

All stars spend the majority of their existence as main sequence stars, fueled primarily by the nuclear fusion of
hydrogen into helium within their cores. However, stars of different masses have markedly different properties at
various stages of their development. The ultimate fate of more massive stars differs from that of less massive stars,
as do their luminosities and the impact they have on their environment. Accordingly, astronomers often group stars
by their mass:[57]

Very low mass stars, with masses below 0.5 M☉, are fully convective and distribute helium evenly throughout the whole star while on the main sequence. Therefore, they never undergo shell burning and never become red giants; they cease fusing, become helium white dwarfs, and slowly cool after exhausting their hydrogen.[58] However, as the lifetime of 0.5 M☉ stars is longer than the age of the universe, no such star has yet reached the white dwarf stage.
Low mass stars (including the Sun), with a mass between 0.5 M☉ and 1.8-2.5 M☉ depending on composition, do become red giants as their core hydrogen is depleted and they begin to burn helium in the core in a helium flash; they develop a degenerate carbon-oxygen core later on the asymptotic giant branch; they finally blow off their outer shell as a planetary nebula and leave behind their core in the form of a white dwarf.
Intermediate-mass stars, between 1.8-2.5 M☉ and 5-10 M☉, pass through evolutionary stages similar to low mass stars, but after a relatively short period on the RGB they ignite helium without a flash and spend an extended period in the red clump before forming a degenerate carbon-oxygen core.
Massive stars generally have a minimum mass of 7-10 M☉ (possibly as low as 5-6 M☉). After exhausting the hydrogen at the core these stars become supergiants and go on to fuse elements heavier than helium. They end their lives when their cores collapse and they explode as supernovae.

Star formation

The formation of a star begins with gravitational instability within a molecular cloud, caused by regions of higher
density - often triggered by compression of clouds by radiation from massive stars, expanding bubbles in the
interstellar medium, the collision of different molecular clouds, or the collision of galaxies (as in a starburst
galaxy). When a region reaches a sufficient density of matter to satisfy the criteria for Jeans instability, it begins to
collapse under its own gravitational force.
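An order-of-magnitude sketch of the Jeans criterion (using the standard Jeans-mass expression with assumed values for a cold, dense molecular-cloud core; the exact numerical prefactor varies between treatments):

import math

# Order-of-magnitude Jeans mass,
# M_J = (5*k*T / (G*mu*m_H))**1.5 * (3 / (4*pi*rho))**0.5,
# for an assumed cold, dense molecular-cloud core.  Prefactors differ between
# textbooks, so treat the result as indicative only.

k_B = 1.381e-23      # Boltzmann constant, J/K
G = 6.674e-11        # gravitational constant
m_H = 1.674e-27      # hydrogen atom mass, kg
M_sun = 1.9891e30    # solar mass, kg

T = 10.0             # gas temperature in K (typical cold cloud core)
mu = 2.3             # mean molecular weight for molecular hydrogen plus helium
n = 1e10             # number density in m^-3 (a dense core)

rho = n * mu * m_H
M_J = (5 * k_B * T / (G * mu * m_H)) ** 1.5 * (3 / (4 * math.pi * rho)) ** 0.5
print(f"Jeans mass ~ {M_J / M_sun:.1f} solar masses")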

Artist's conception of the birth of a star within a dense molecular cloud.

As the cloud collapses, individual conglomerations of dense dust and gas form "Bok globules". As a globule
collapses and the density increases, the gravitational energy converts into heat and the temperature rises. When the
protostellar cloud has approximately reached the stable condition of hydrostatic equilibrium, a protostar forms at
the core. These pre-main-sequence stars are often surrounded by a protoplanetary disk and powered mainly by the
conversion of gravitational energy. The period of gravitational contraction lasts about 10 to 15 million years.

A cluster of approximately 500 young stars lies within the nearby W40 stellar nursery.

Early stars of less than 2 M☉ are called T Tauri stars, while those with greater mass are Herbig Ae/Be stars. These
newly formed stars emit jets of gas along their axis of rotation, which may
reduce the angular momentum of the collapsing star and result in small patches of nebulosity known as
Herbig-Haro objects.[63][64] These jets, in combination with radiation from nearby massive stars, may help to drive away the
surrounding cloud from which the star was formed.

Early in their development, T Tauri stars follow the Hayashi track: they contract and decrease in luminosity
while remaining at roughly the same temperature. Less massive T Tauri stars follow this track to the main
sequence, while more massive stars turn onto the Henyey track.

Most stars are observed to be members of binary star systems, and the properties of those binaries are the
result of the conditions in which they formed. A gas cloud must lose its angular momentum in order to
collapse and form a star. The fragmentation of the cloud into multiple stars distributes some of that angular
momentum. The primordial binaries transfer some angular momentum by gravitational interactions during
close encounters with other stars in young stellar clusters. These interactions tend to split apart more widely
separated (soft) binaries while causing hard binaries to become more tightly bound. This produces the
separation of binaries into their two observed population distributions.

Stars spend about 90% of their existence fusing hydrogen into helium in high-temperature and high-pressure
reactions near the core. Such stars are said to be on the main sequence, and are called dwarf stars. Starting at zero-
age main sequence, the proportion of helium in a star's core will steadily increase, the rate of nuclear fusion at the
core will slowly increase, as will the star's temperature and luminosity. The Sun, for example, is estimated to have
increased in luminosity by about 40% since it reached the main sequence 4.6 billion (4.6 × 10^9) years ago.

Every star generates a stellar wind of particles that causes a continual outflow of gas into space. For most stars, the
mass lost is negligible. The Sun loses 10^-14 M☉ every year, or about 0.01% of its total mass over its entire lifespan.
However, very massive stars can lose 10^-7 to 10^-5 M☉ each year, significantly affecting their evolution. Stars that
begin with more than 50 M☉ can lose over half their total mass while on the main sequence.
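A quick arithmetic check of the solar figure quoted above (a sketch, taking a roughly ten-billion-year main-sequence lifetime):

# Check of the solar mass-loss figure: 1e-14 solar masses per year over a
# ~1e10-year main-sequence lifetime is about 0.01% of the Sun's mass.
mass_loss_rate = 1e-14          # solar masses per year
lifetime_years = 1e10           # assumed main-sequence lifetime of the Sun

total_lost = mass_loss_rate * lifetime_years
print(f"total mass lost ~ {total_lost:.0e} solar masses ({total_lost * 100:.2f}% of the Sun)")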

An example of a Hertzsprung-Russell diagram for a set of stars that includes the Sun (center). (See "Classification".)

The time a star spends on the main sequence depends primarily on the amount of fuel it has and the rate at which it
fuses it. The Sun is expected to live for 10 billion (10^10) years. Massive stars consume their fuel very rapidly and are
short-lived. Low mass stars consume their fuel very slowly. Stars less massive than 0.25 M☉, called red dwarfs, are
able to fuse nearly all of their mass, while stars of about 1 M☉ can only fuse about 10% of their mass. The
combination of their slow fuel consumption and relatively large usable fuel supply allows low mass stars to last
about one trillion (10^12) years; the most extreme, of 0.08 M☉, will last for about 12 trillion years. Red dwarfs become
hotter and more luminous as they accumulate helium. When they eventually run out of hydrogen, they contract into
a white dwarf and decline in temperature.[58] However, since the lifespan of such stars is greater than the current age
of the universe (13.8 billion years), no stars under about 0.85 M☉[72] are expected to have moved off the main
sequence.

Besides mass, the elements heavier than helium can play a significant role in the evolution of stars.
Astronomers label all elements heavier than helium "metals", and call the chemical
concentration of these elements in a star, its metallicity. A star's metallicity can influence the time the star
takes to burn its fuel, and controls the formation of its magnetic fields, which
affects the strength of its stellar wind. Older, population II stars have substantially less metallicity than the
younger, population I stars due to the composition of the molecular clouds from which they formed. Over time,
such clouds become increasingly enriched in heavier elements as older stars die and shed portions of their
atmospheres.

Post-main sequence.

As stars of at least 0.4 M☉[2] exhaust their supply of hydrogen at their core, they start to fuse hydrogen in a shell
outside the helium core. Their outer layers expand and cool greatly as they form a red giant. In about 5 billion
years, when the Sun enters the helium burning phase, it will expand to a maximum radius of roughly 1
astronomical unit (150 million kilometres), 250 times its present size, and lose 30% of its current mass.

As the hydrogen shell burning produces more helium, the core increases in mass and temperature. In a red giant of
up to 2.25 M☉, the mass of the helium core becomes degenerate prior to helium fusion. Finally, when the
temperature increases sufficiently, helium fusion begins explosively in what is called a helium flash, and the star
rapidly shrinks in radius, increases its surface temperature, and moves to the horizontal branch of the HR diagram.
For more massive stars, helium core fusion starts before the core becomes degenerate, and the star spends some time
in the red clump, slowly burning helium, before the outer convective envelope collapses and the star then moves to
the horizontal branch.

After the star has fused the helium of its core, the carbon product fuses producing a hot core with an outer shell of
fusing helium. The star then follows an evolutionary path called the asymptotic giant branch (AGB) that parallels the
other described red giant phase, but with a higher luminosity. The more massive AGB stars may undergo a brief
period of carbon fusion before the core becomes degenerate.

Massive stars
Supergiant and Hypergiant

During their helium-burning phase, stars of more than nine solar masses expand to form red supergiants. When this
fuel is exhausted at the core, they continue to fuse elements heavier than helium.

The core contracts and the temperature and pressure rise enough to fuse carbon (see carbon-burning process).
This process continues, with the successive stages being fueled by neon (see neon-burning process), oxygen (see
oxygen-burning process), and silicon (see silicon-burning process). Near the end of the star's life, fusion
continues along a series of onion-layer shells within a massive star. Each shell fuses a different element, with the
outermost shell fusing hydrogen; the next shell fusing helium, and so forth.

The final stage occurs when a massive star begins producing iron. Since iron nuclei are more tightly bound than
any heavier nuclei, any fusion beyond iron does not produce a net release of energy. To a very limited degree
such a process proceeds, but it consumes energy. Likewise, since they are more tightly bound than all lighter
nuclei, such energy cannot be released by fission.[77]In relatively old, very massive stars, a large core of inert iron
will accumulate in the center of the star. The heavier elements in these stars can work their way to the surface,
forming evolved objects known as Wolf-Rayet stars that have a dense stellar wind which sheds the outer
atmosphere.

Supergiant star
Supergiants are among the most massive and most luminous stars. Supergiant stars occupy the top
region of the Hertzsprung-Russell diagram with absolute visual magnitudes between about -3 and -8 and
with temperatures spanning from about 3,500 K to over 20,000 K. The term supergiant, as applied to a
star, does not have a single concrete definition. The term giant star was first coined by Hertzsprung when
it became apparent that the majority of stars fell into two distinct regions of the Hertzsprung-Russell
diagram. One region contained larger and more luminous stars of spectral types A to M and received the
name giant.[1] Subsequently, as they lacked any measurable parallax, it became apparent that some of
these stars were significantly larger and more luminous than the bulk, and the term super-giant arose,
quickly adopted as supergiant.
Spectral luminosity class
Supergiant stars can be identified on the basis of their spectra, with distinctive lines sensitive to high
luminosity and low surface gravity.[5][6] In 1897, Antonia C. Maury divided stars based on the widths
of their spectral lines, with her class "c" identifying stars with the narrowest lines. Although it was not
known at the time, these were the most luminous stars.[7] In 1943 Morgan and Keenan formalised the
definition of spectral luminosity classes, with class I referring to supergiant stars. [8] The same system of
MK luminosity classes is still used today, with refinements based on the increased resolution of modern
spectra.[9] Supergiants occur in every spectral class, from young blue class O supergiants to highly evolved
red class M supergiants. Because they are enlarged compared to main-sequence and giant stars of the
same spectral type, they have lower surface gravities and changes can be observed in their line profiles.
Supergiants are also evolved stars with higher levels of heavy elements than main-sequence stars. This
is the basis of the MK luminosity system which assigns stars to luminosity classes purely from observing
their spectra. In addition to the line changes due to low surface gravity and fusion products, the most
luminous stars have high mass-loss rates and resulting clouds of expelled circumstellar materials which
can produce emission lines, P Cygni profiles, or forbidden lines. The MK system assigns stars to
luminosity classes: Ib for supergiants; Ia for luminous supergiants; and 0 (zero) or Ia+ for hypergiants. In
reality there is a continuum rather than well-defined bands for these classifications, and
classifications such as Iab are used for intermediate luminosity supergiants. Supergiant spectra are
frequently annotated to indicate spectral peculiarities, for example B2 Iae or F5 Ipec.

Evolutionary supergiants
Supergiants can also be defined as a specific phase in the evolutionary history of certain stars. Stars with
initial masses above 8-10 M☉ quickly and smoothly initiate helium core fusion after they have exhausted
their hydrogen, continue fusing heavier elements after helium exhaustion until they develop an iron core,
and then the core collapses to produce a supernova. Once these massive stars leave the main sequence
their atmospheres inflate and they are described as supergiants. Stars initially under 10 M☉ will never
form an iron core and in evolutionary terms do not become supergiants, although they can reach
luminosities thousands of times the Sun's. They cannot fuse carbon and heavier elements after the helium
is exhausted, so they eventually just lose their outer layers, leaving behind a core that becomes a white dwarf. The phase
where these stars have both hydrogen and helium burning shells is referred to as the asymptotic giant
branch (AGB), as stars gradually become more and more luminous class M stars. Stars of 8-10 M☉ may
fuse sufficient carbon on the AGB to produce an oxygen-neon core and an electron-capture supernova,
but astrophysicists categorise these as super-AGB stars rather than supergiants.[10]

Categorisation of evolved stars
There are several categories of evolved star which are not supergiants in evolutionary terms, but may
show supergiant spectral features or have luminosities comparable to supergiants.
Asymptotic-giant-branch (AGB) and post-AGB stars are highly evolved lower-mass red giants with
luminosities that can be comparable to more massive red supergiants, but because of their low mass,
being in a different stage of development (helium shell burning), and their lives ending in a different way
(planetary nebula and white dwarf rather than supernova), astrophysicists prefer to keep them separate.
The dividing line becomes blurred at around 7-10 M☉ (or as high as 12 M☉ in some models[11]) where
stars start to undergo limited fusion of elements heavier than helium. Specialists studying these stars
often refer to them as super AGB stars, since they have many properties in common with AGB such as
thermal pulsing. Others describe them as low-mass supergiants since they start to burn elements heavier
than helium and can explode as supernovae.[12] Many post-AGB stars receive spectral types with
supergiant luminosity classes. For example, RV Tauri has an Ia (bright supergiant) luminosity class
despite being less massive than the sun. Some AGB stars also receive a supergiant luminosity class,
most notably W Virginis variables such as W Virginis itself, stars that are executing a blue loop triggered
by thermal pulsing. A very small number of Mira variables and other late AGB stars have supergiant
luminosity classes, for example α Herculis.
Classical Cepheid variables typically have supergiant luminosity classes, although only the most luminous
and massive will actually go on to develop an iron core. The majority of them are intermediate mass stars
fusing helium in their cores and will eventually transition to the asymptotic giant branch. δ Cephei itself is
an example, with a luminosity of 2,000 L☉ and a mass of 4.5 M☉.
Wolf-Rayet stars are also high-mass luminous evolved stars, hotter than most supergiants and smaller,
visually less bright but often more luminous because of their high temperatures. They have spectra
dominated by helium and other heavier elements, usually showing little or no hydrogen, which is a clue to
their nature as stars even more evolved than supergiants. Just as the AGB stars occur in almost the
same region of the HR diagram as red supergiants, WolfRayet stars can occur in the same region of the
HR diagram as the hottest blue supergiants and main-sequence stars.
The most massive and luminous main-sequence stars are almost indistinguishable from the supergiants
they quickly evolve into. They have almost identical temperatures and very similar luminosities, and only
the most detailed analyses can distinguish the spectral features that show they have evolved away from
the narrow early O-type main-sequence to the nearby area of early O-type supergiants. Such early O-
type supergiants share many features with WNLh Wolf–Rayet stars and are sometimes designated
as slash stars, intermediates between the two types.
Luminous blue variables (LBVs) are a type of star that occur in the same region of the HR diagram as
blue supergiants, but are generally classified separately. They are evolved, expanded, massive, and
luminous stars, often hypergiants, but they have very specific spectral variability which defies the
assignment of a standard spectral type. LBVs only observed at a particular time, or over a period of time
when they are stable, may simply be designated as hot supergiants, or as candidate LBVs due to their
luminosity.
Hypergiants are frequently treated as a different category of star from supergiants, although in all
important respects they are just a more luminous category of supergiant. They are evolved, expanded,
massive and luminous stars like supergiants, but at the most massive and luminous extreme, and with
particular additional properties of undergoing high mass-loss due to their extreme luminosities and
instability. Generally only the more evolved supergiants show hypergiant properties since their instability
increases after high mass-loss and some increase in luminosity.

Some B[e] stars are supergiants, although other B[e] stars are clearly not. Some researchers distinguish
the B[e] objects as separate from supergiants, while others prefer to define massive evolved B[e] stars as
a subgroup of supergiants. The latter has become more common with the understanding that the B[e]
phenomenon arises separately in a number of distinct types of stars, including some that are clearly just a
phase in the life of supergiants.

Properties

The disc and atmosphere of Betelgeuse (ESO)


Supergiants have masses from 8 to 12 times the Sun (M☉) upwards, and luminosities from about 1,000 to
over a million times the Sun (L☉). They vary greatly in radius, usually from 30 to 500, or even in excess of
1,000 solar radii (R☉). They are massive enough to begin helium core burning gently before the core
becomes degenerate, without a flash, and without the strong dredge-ups that lower-mass stars
experience. They go on to successively ignite heavier elements, usually all the way to iron. Also, because
of their high masses, they are destined to explode as supernovae.
The Stefan–Boltzmann law dictates that the relatively cool surfaces of red supergiants radiate much less
energy per unit area than those of blue supergiants; thus, for a given luminosity, red supergiants are larger
than their blue counterparts. Radiation pressure limits the largest cool supergiants to around
1,500 R☉ and the most massive hot supergiants to around a million L☉ (Mbol around −10). Stars near and
occasionally beyond these limits become unstable, pulsate, and experience rapid mass loss.
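
To make the Stefan–Boltzmann relation concrete, the short Python sketch below inverts L = 4πR²σT⁴ to estimate the radius implied by a chosen luminosity and effective temperature. The two example stars (100,000 L☉ at 3,500 K and at 20,000 K) are illustrative values picked here, not measurements of particular supergiants.

import math

SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
L_SUN = 3.828e26         # solar luminosity, W
R_SUN = 6.957e8          # solar radius, m

def radius_in_solar_radii(luminosity_solar, t_eff_kelvin):
    """Radius implied by L = 4*pi*R^2*sigma*T^4, returned in solar radii."""
    luminosity_watts = luminosity_solar * L_SUN
    radius_m = math.sqrt(luminosity_watts / (4.0 * math.pi * SIGMA * t_eff_kelvin ** 4))
    return radius_m / R_SUN

print(radius_in_solar_radii(1e5, 3500))    # cool (red) supergiant: roughly 860 solar radii
print(radius_in_solar_radii(1e5, 20000))   # hot (blue) supergiant: roughly 26 solar radii

At equal luminosity the cooler star comes out dozens of times larger, which is the point made above.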
Surface gravity
The supergiant luminosity class is assigned on the basis of spectral features that are largely a measure of
surface gravity, although also affected by other properties such as microturbulence. Supergiants typically
have surface gravities of around log(g) 2.0 cgs and lower, although bright giants (luminosity class II) have
statistically very similar surface gravities to normal Ib supergiants.[13] Cool luminous supergiants have
lower surface gravities, with the most luminous (and unstable) stars having log(g) around zero. [14] Hotter
supergiants, even the most luminous, have surface gravities around one, due to their higher masses and
smaller radii.
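
Surface gravity follows directly from mass and radius, g = GM/R². The following minimal Python sketch evaluates log(g) in cgs units; the supergiant masses and radii used are illustrative round numbers, not values for specific stars.

import math

G_CGS = 6.674e-8    # gravitational constant, cm^3 g^-1 s^-2
M_SUN = 1.989e33    # solar mass, g
R_SUN = 6.957e10    # solar radius, cm

def log_g(mass_solar, radius_solar):
    """log10 of the surface gravity g = G*M/R^2, in cm/s^2 (cgs)."""
    g = G_CGS * mass_solar * M_SUN / (radius_solar * R_SUN) ** 2
    return math.log10(g)

print(log_g(1, 1))      # Sun: about 4.4
print(log_g(20, 80))    # illustrative blue supergiant: about 1.9
print(log_g(15, 500))   # illustrative luminous red supergiant: about 0.2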
Temperature
There are supergiant stars at all of the main spectral classes and across the whole range of temperatures
from mid-M class stars at around 3,500 K to the hottest O class stars over 40,000 K. Supergiants are
generally not found cooler than mid-M class. This is expected theoretically since they would be
catastrophically unstable. However, there are potential exceptions among extreme stars such as VX
Sagittarii.

Although there are examples of supergiants in every class from O to M, a majority are spectral type B,
more than at all other spectral classes combined. There is a much smaller grouping of very low luminosity
G-type supergiants, intermediate mass stars burning helium in their cores before reaching the asymptotic
giant branch. There is a distinct grouping of high luminosity supergiants at early B (B0-2) and very late O
(O9.5), more common even than main sequence stars of those spectral types.
The relative numbers of blue, yellow, and red supergiants are an indicator of the speed of stellar evolution
and are used as a powerful test of models of the evolution of massive stars.

Luminosity
The supergiants lie more or less on a horizontal band occupying the entire upper portion of the HR
diagram, but there are some variations at different spectral types. These variations are partly due to
different methods for assigning luminosity classes at different spectral types, and partly reflecting actual
physical differences in the stars.
The bolometric luminosity of a star reflects its total output of electromagnetic radiation at all wavelengths.
For very hot and very cool stars, the bolometric luminosity is dramatically higher than the visual
luminosity, sometimes several magnitudes or a factor of five or more. This bolometric correction is
approximately one magnitude for mid B, late K, and early M stars, increasing to three magnitudes (a
factor of 15) for O and mid M stars.
All supergiants are larger and more luminous than main-sequence stars of the same temperature. This means
that hot supergiants lie on a relatively narrow band above bright main-sequence stars. A B0 main-sequence
star has an absolute magnitude of about −5, meaning that all B0 supergiants are significantly brighter
than absolute magnitude −5. Bolometric luminosities for even the faintest blue supergiants are tens of
thousands of times the Sun's (L☉). The brightest can be over a million L☉ and are often unstable, such as
α Cygni variables and luminous blue variables.
The very hottest supergiants, those with early O spectral types, occur in an extremely narrow range of
luminosities above the highly luminous early O main-sequence and giant stars. They are not classified
separately into normal (Ib) and luminous (Ia) supergiants, although they commonly have other spectral
type modifiers such as "f" for nitrogen and helium emission (e.g. O2 If for HD 93129A).
Yellow supergiants can be considerably fainter than absolute magnitude −5, with some examples around
−2 (e.g. 14 Persei). With bolometric corrections around zero, they may only be a few hundred times the
luminosity of the Sun. These are not massive stars though, instead being stars of intermediate mass that
have particularly low surface gravities, often due to instability such as Cepheid pulsations. These
intermediate-mass stars being classified as supergiants during a relatively long-lasting phase of their
evolution account for the large numbers of low-luminosity yellow supergiants. The most luminous yellow
stars, the yellow hypergiants, are amongst the visually brightest stars, with absolute magnitudes around
−9, although still less than a million L☉.
There is a strong upper limit to the luminosity of red supergiants at around half a million L☉. Stars that
would be brighter than this shed their outer layers so rapidly that they remain as hot supergiants after
they leave the main sequence. The majority of red supergiants were 10–15 M☉ main-sequence stars and
now have luminosities below 100,000 L☉, and there are very few bright supergiant (Ia) M-class
stars.[16] The least luminous stars classified as red supergiants are some of the brightest AGB and post-
AGB stars, highly expanded and unstable low-mass stars such as the RV Tauri variables. The majority of
AGB stars are given giant or bright giant luminosity classes, but particularly unstable stars such as W
Virginis variables may be given a supergiant classification (e.g. W Virginis itself). The faintest red
supergiants are around absolute magnitude −3.
Variability

RS Puppis is a supergiant and Classical Cepheid variable.
While most supergiants show some degree of photometric variability, such as Alpha Cygni
variables, semiregular variables, and irregular variables, there are certain well defined types of variables
amongst the supergiants. The instability strip crosses the region of supergiants, and specifically many
yellow supergiants are Classical Cepheid variables. The same region of instability extends to include the
even more luminous yellow hypergiants, an extremely rare and short-lived class of luminous supergiant.
Many R Coronae Borealis variables, although not all, are yellow supergiants, but this variability is due to
their unusual chemical composition rather than a physical instability.
Further types of variable stars, such as RV Tauri variables and PV Telescopii variables, are often
described as supergiants. RV Tau stars are frequently assigned spectral types with a supergiant
luminosity class on account of their low surface gravity, and they are amongst the most luminous of the
AGB and post-AGB stars, having masses similar to the sun. Likewise the even rarer PV Tel variables are
often classified as supergiants, but have lower luminosities than supergiants and peculiar B[e] spectra
extremely deficient in hydrogen. Possibly they are also post-AGB objects, or perhaps "born-again" AGB
stars.
The LBVs are variable with multiple semi-regular periods and less predictable eruptions and giant
outbursts. They are usually supergiants or hypergiants, occasionally with Wolf-Rayet spectra, extremely
luminous, massive, evolved stars with expanded outer layers, but are so distinctive and unusual that they
are often treated as a separate category without being referred to as supergiants or given a supergiant
spectral type. Often their spectral type will be given just as "LBV" because they have peculiar and highly
variable spectral features, with temperatures varying from about 8,000 K in outburst up to 20,000 K or
more when "quiescent".

Chemical abundances
The abundance of various elements at the surface of supergiants is different from less luminous stars.
Supergiants are evolved stars and may have undergone convection of fusion products to the surface.
Cool supergiants show enhanced helium and nitrogen at the surface due to convection of these fusion
products to the surface during the main sequence of very massive stars, due to dredge-ups during shell
burning, and due to the loss of the outer layers of the star. Helium is formed in the core and shell by
fusion of hydrogen and nitrogen accumulates relative to carbon and oxygen during CNO cycle fusion. At
the same time, carbon and oxygen abundances are reduced. Red supergiants can be distinguished from
luminous but less massive AGB stars by unusual chemicals at the surface, enhancement of carbon from
deep third dredge-ups, as well as carbon-13, lithium and s-process elements. Late-phase AGB stars can
become highly oxygen enriched, producing OH masers.
Hotter supergiants show differing levels of nitrogen enrichment. This may be due to different levels of
mixing on the main sequence, for example due to rotation, or because some blue supergiants are newly
evolved from the main sequence while others have previously been through a red supergiant phase.

Post-red supergiant stars have a generally higher level of nitrogen relative to carbon due to convection of
CNO-processed material to the surface and the complete loss of the outer layers. Surface enhancement
of helium is also stronger in post-red supergiants, representing more than a third of the atmosphere.

Hypergiant star,
A hypergiant (luminosity class 0 or Ia+) is among the very rare kinds of stars that typically show
tremendous luminosities and very high rates of mass loss by stellar winds. The term hypergiant is defined
as luminosity class 0 (zero) in the MKK system. However, this is rarely seen in the literature or in
published spectral classifications, except for specific well-defined groups such as the yellow hypergiants,
RSG (red supergiants), or blue B(e) supergiants with emission spectra. More commonly, hypergiants may
be classed as Ia-0 or Ia+, but red supergiants rarely receive these extra spectral classifications.
Astronomers are mostly interested in these stars because they relate to understanding stellar evolution,
especially star formation, stability, and their expected demise as supernovae. In 1956, the
astronomers Feast and Thackeray used the term super-supergiant (later changed into hypergiant) for
stars with an absolute magnitude brighter than MV = −7 (MBol will be larger for very cool and very hot
stars, for example at least −9.7 for a B0 hypergiant). In 1971, Keenan suggested that the term be
used only for supergiants showing at least one broad emission component in Hα, indicating an extended
stellar atmosphere or a relatively large mass loss rate. The Keenan criterion is the one most commonly
used by scientists today.
Observation of a highly luminous star is insufficient for it to be defined as a hypergiant. That requires the
detection of the spectral signatures of atmospheric instability and high mass loss. So it is quite possible
for non-hypergiant supergiant stars to have the same or higher luminosity as a hypergiant of the same
spectral class. Additionally, hypergiants are expected to have characteristic broadening and red-shifting
of their spectral lines producing a distinctive shape known as a P Cygni profile. The use of hydrogen
emission is not helpful for defining the coolest hypergiants, and these are largely classified on luminosity
since mass loss is almost inevitable for the class.

Collapse of a star

As a star's core shrinks, the intensity of radiation from that surface increases, creating such radiation pressure on the
outer shell of gas that it will push those layers away, forming a planetary nebula. If what remains after the outer
atmosphere has been shed is less than 1.4 M☉, it shrinks to a relatively tiny object about the size of Earth, known as
a white dwarf. White dwarfs lack the mass for further gravitational compression to take place. The electron-
degenerate matter inside a white dwarf is no longer a plasma, even though stars are generally referred to as being
spheres of plasma. Eventually, white dwarfs fade into black dwarfs over a very long period of time.

The Crab Nebula, remnants of a supernova that was first observed around 1050 AD

In larger stars, fusion continues until the iron core has grown so large (more than 1.4 M☉) that it can no longer
support its own mass. This core will suddenly collapse as its electrons are driven into its protons, forming
neutrons, neutrinos, and gamma rays in a burst of electron capture and inverse beta decay. The shockwave formed
by this sudden collapse causes the rest of the star to explode in a supernova. Supernovae become so bright that
they may briefly outshine the star's entire home galaxy. When they occur within the Milky Way, supernovae have
historically been observed by naked-eye observers as "new stars" where none seemingly existed before.

A supernova explosion blows away the star's outer layers, leaving a remnant such as the Crab Nebula. The core is
compressed into a neutron star, which sometimes manifests itself as a pulsar or X-ray burster. In the case of the
largest stars, the remnant is a black hole greater than 4 M☉. In a neutron star the matter is in a state known as
neutron-degenerate matter, with a more exotic form of degenerate matter, QCD matter, possibly present in the
core. Within a black hole, the matter is in a state that is not currently understood.

The blown-off outer layers of dying stars include heavy elements, which may be recycled during the formation of
new stars. These heavy elements allow the formation of rocky planets. The outflow from supernovae and the stellar
wind of large stars play an important part in shaping the interstellar medium.

Binary stars

The postmain-sequence evolution of binary stars may be significantly different from the evolution of single stars
of the same mass. If stars in a binary system are sufficiently close, when one of the stars expands to become a red
giant it may overflow its Roche lobe, the region around a star where material is gravitationally bound to that star,
leading to transfer of material to the other. When the Roche lobe is violated, a variety of phenomena can result,
including contact binaries, common-envelope binaries, cataclysmic variables, and type Ia supernovae.

Distribution

In addition to isolated stars, a multi-star system can consist of two or more gravitationally bound stars that orbit
each other. The simplest and most common multi-star system is a binary star, but systems of three or more stars are
also found. For reasons of orbital stability, such multi-star systems are often organized into hierarchical sets of
binary stars.[81] Larger groups called star clusters also exist. These range from loose stellar associations with only a
few stars, up to enormous globular clusters with hundreds of thousands of stars. Such systems orbit our Milky Way
galaxy.

A white dwarf star in orbit around Sirius (artist's impression).

It has been a long-held assumption that the majority of stars occur in gravitationally bound, multiple-star systems.
This is particularly true for very massive O and B class stars, where 80% of the stars are believed to be part of
multiple-star systems. The proportion of single star systems increases with decreasing star mass, so that only 25%
of red dwarfs are known to have stellar companions. As 85% of all stars are red dwarfs, most stars in the Milky
Way are likely single from birth.

Stars are not spread uniformly across the universe, but are normally grouped into galaxies along
with interstellar gas and dust. A typical galaxy contains hundreds of billions of stars, and there are more than 100
billion (10¹¹) galaxies in the observable universe. In 2010, one estimate of the number of stars in the observable
universe was 300 sextillion (3 × 10²³). While it is often believed that stars only exist within galaxies, intergalactic
stars have been discovered.

The nearest star to the Earth, apart from the Sun, is Proxima Centauri, which is 39.9 trillion kilometres, or 4.2 light-
years, away. Travelling at the orbital speed of the Space Shuttle (8 kilometres per second, almost 30,000 kilometres per
hour), it would take about 150,000 years to arrive. This is typical of stellar separations in galactic discs.[87] Stars can
be much closer to each other in the centres of galaxies and in globular clusters, or much farther apart in galactic
halos.
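
The travel-time figure can be checked with a one-line calculation; the Python sketch below uses the distance and speed quoted above (the year length is taken as roughly 3.16 × 10⁷ seconds).

distance_km = 39.9e12        # ~4.2 light-years, as quoted above
speed_km_per_s = 8.0         # Space Shuttle orbital speed
seconds_per_year = 3.156e7

years = distance_km / speed_km_per_s / seconds_per_year
print(round(years))          # ~158,000 years, of the same order as the rough 150,000 quoted above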

Due to the relatively vast distances between stars outside the galactic nucleus, collisions between
stars are thought to be rare. In denser regions such as the core of globular clusters or the galactic center, collisions
can be more common.[88] Such collisions can produce what are known as blue
stragglers. These abnormal stars have a higher surface temperature than other main-sequence stars of the
same luminosity in the cluster to which they belong.

CHARACTERISTICS OF STARS.

Almost everything about a star is determined by its initial mass, including such characteristics as luminosity, size,
evolution, lifespan, and its eventual fate.

Age
Stellar age estimation
Most stars are between 1 billion and 10 billion years old. Some stars may even be close to 13.8 billion years old,
the observed age of the universe. The oldest star yet discovered, HD 140283, nicknamed the Methuselah star, is an
estimated 14.46 ± 0.8 billion years old. (Due to the uncertainty in this value, the star's age does not conflict with
the age of the Universe, determined by the Planck satellite as 13.799 ± 0.021 billion years.)

Some of the well-known stars with their apparent colors and relative sizes.

The more massive the star, the shorter its lifespan, primarily because massive stars have greater pressure on their
cores, causing them to burn hydrogen more rapidly. The most massive stars last
an average of a few million years, while stars of minimum mass (red dwarfs) burn their fuel very slowly and can
last tens to hundreds of billions of years.

Chemical composition
Metallicity and Molecules in stars

When stars form in the present Milky Way galaxy they are composed of about 71% hydrogen and 27% helium, as
measured by mass, with a small fraction of heavier elements. Typically the portion of heavy elements is measured
in terms of the iron content of the stellar atmosphere, as iron is a common element and its absorption lines are
relatively easy to measure. The portion of heavier elements may be an indicator of the likelihood that the star has a
planetary system.

The star with the lowest iron content ever measured is the dwarf HE1327-2326, with only 1/200,000th the iron
content of the Sun.[96] By contrast, the super-metal-rich star μ Leonis has nearly double the abundance of iron of
the Sun, while the planet-bearing star 14 Herculis has nearly triple the iron. There also exist chemically peculiar
stars that show unusual abundances of certain elements in their spectrum, especially chromium and rare earth
elements. Stars with cooler outer atmospheres, including the Sun, can form various diatomic and polyatomic
molecules.

Diameter.

Due to their great distance from the Earth, all stars except the Sun appear to the unaided eye as shining points in
the night sky that twinkle because of the effect of the Earth's atmosphere. The Sun is also a star, but it is close
enough to the Earth to appear as a disk instead, and to provide
daylight. Other than the Sun, the star with the largest apparent size is R Doradus, with an angular diameter of only
0.057 arcseconds.

Stars vary widely in size. In each image in the sequence, the right-most object appears as the left-most object in the
next panel. The Earth appears at right in panel 1 and the Sun is second from the right in panel 3. The rightmost star
at panel 6 is UY Scuti, the largest known star.

The disks of most stars are much too small in angular size to be observed with current ground-based optical
telescopes, and so interferometer telescopes are required to produce images of these objects. Another technique
for measuring the angular size of stars is through occultation. By precisely measuring the drop in brightness of a
star as it is occulted by the Moon (or the rise in brightness when it reappears), the star's angular diameter can be
computed.
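
The conversion from a measured angular diameter to a physical size uses the small-angle formula D = θ·d. The Python sketch below applies it to the 0.057 arcsecond angular diameter of R Doradus quoted above; the ~180 light-year distance is an assumed approximate value used only for illustration.

ARCSEC_PER_RADIAN = 206265.0
KM_PER_LIGHT_YEAR = 9.461e12
SUN_DIAMETER_KM = 1.392e6

def physical_diameter_km(angular_diameter_arcsec, distance_ly):
    """Small-angle formula: physical diameter = angle (in radians) * distance."""
    theta_rad = angular_diameter_arcsec / ARCSEC_PER_RADIAN
    return theta_rad * distance_ly * KM_PER_LIGHT_YEAR

diameter_km = physical_diameter_km(0.057, 180)   # assumed distance of roughly 180 light-years
print(diameter_km / SUN_DIAMETER_KM)             # a few hundred solar diameters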

Stars range in size from neutron stars, which vary anywhere from 20 to 40 km (25 mi) in diameter, to
supergiants like Betelgeuse in the Orion constellation, which has a diameter approximately 1,070 times
that of the Sun, about 1,490,171,880 km (925,949,878 mi). Betelgeuse, however, has a much lower
density than the Sun.

Kinematics

The motion of a star relative to the Sun can provide useful information about the origin and age of a star, as well
as the structure and evolution of the surrounding galaxy. The components of motion of a star consist of the radial
velocity toward or away from the Sun, and the transverse angular movement, which is called its proper motion.

Radial velocity is measured by the Doppler shift of the star's spectral lines and is given in units of km/s. The proper
motion of a star, like its parallax, is determined by precise astrometric measurements, in units of milliarcseconds (mas)
per year. With knowledge of the star's parallax, and hence its distance, the proper motion can be converted into a velocity.
Together with the radial velocity, the total velocity can be calculated. Stars with high rates of proper motion are
likely to be relatively close to the Sun, making them good candidates for parallax measurements.[104]
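
Converting an angular proper motion into a linear (transverse) velocity only needs the distance; the standard factor 4.74 arises from the unit conversion (arcseconds per year and parsecs giving km/s). A minimal Python sketch, using approximate literature values for Barnard's Star as the example:

def transverse_velocity_km_s(proper_motion_arcsec_per_yr, distance_pc):
    """v_t = 4.74 * mu * d converts arcsec/yr and parsecs into km/s."""
    return 4.74 * proper_motion_arcsec_per_yr * distance_pc

# Barnard's Star, approximately: proper motion ~10.3 arcsec/yr at ~1.8 pc
print(transverse_velocity_km_s(10.3, 1.8))   # ~88 km/s

Combined with the radial velocity, the total space velocity follows from the Pythagorean sum of the two components.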

The Pleiades, an open cluster of stars in the constellation of Taurus. These stars share a common motion
through space.

When both rates of movement are known, the space velocity of the star relative to the Sun or the galaxy can be
computed. Among nearby stars, it has been found that younger population I stars have generally lower velocities
than older, population II stars. The latter have elliptical orbits that are inclined to the plane of the galaxy. A
comparison of the kinematics of nearby stars has allowed astronomers to trace their origin to common points in
giant molecular clouds; such groups are referred to as stellar associations.

Magnetic field.

The magnetic field of a star is generated within regions of the interior where convective circulation occurs. This
movement of conductive plasma functions like a dynamo, wherein the movement of electrical charges induces
magnetic fields, as does a mechanical dynamo. Those magnetic fields have a great range that extends throughout and
beyond the star. The strength of the magnetic field varies with the mass and composition of the star, and the amount
of magnetic surface activity depends upon the star's rate of rotation. This surface activity produces starspots, which
are regions of strong magnetic fields and lower-than-normal surface temperatures. Coronal loops are arching
magnetic field flux lines that rise from a star's surface into the star's outer atmosphere, its corona. The coronal loops
can be seen due to the plasma they conduct along their length. Stellar flares are bursts of high-energy particles that
are emitted due to the same magnetic activity.

Surface magnetic field of SU Aur (a young star of T Tauri type), reconstructed by means of Zeeman-
Doppler imaging

Young, rapidly rotating stars tend to have high levels of surface activity because of their magnetic field. The
magnetic field can act upon a star's stellar wind, functioning as a brake to gradually slow the rate of rotation
with time. Thus, older stars such as the Sun have a much slower rate of rotation and a lower level of surface
activity. The activity levels of slowly rotating stars tend to vary in a cyclical manner and can shut down
altogether for periods of time.
During the Maunder minimum, for example, the Sun underwent a 70-year period with almost no sunspot activity.

Mass
Stellar mass

One of the most massive stars known is Eta Carinae, which, with 100–150 times as much mass as the Sun, will
have a lifespan of only several million years. Studies of the most massive open clusters suggest 150 M☉ as an
upper limit for stars in the current era of the universe. This represents an empirical value for the theoretical limit
on the mass of forming stars due to increasing radiation pressure on the accreting gas cloud. Several stars in the
R136 cluster in the Large Magellanic Cloud have been measured with larger masses,[111] but it has been
determined that they could have been created through the collision and merger of massive stars in close binary
systems, sidestepping the 150 M☉ limit on massive star formation.

The reflection nebula NGC 1999 is brilliantly illuminated by V380 Orionis (center), a variable star with about 3.5
times the mass of the Sun. The black patch of sky is a vast hole of empty space and not a dark nebula as previously
thought.

The first stars to form after the Big Bang may have been larger, up to 300 M☉,[113] due to the complete absence of
elements heavier than lithium in their composition. This generation of supermassive population III stars is likely to
have existed in the very early universe (i.e., they are observed to have a high redshift), and may have started the
production of chemical elements heavier than hydrogen that are needed for the later formation of planets and life. In
June 2015, astronomers reported evidence for Population III stars in the Cosmos Redshift 7 galaxy at z =
6.60.

With a mass only 80 times that of Jupiter (MJ), 2MASS J0523-1403 is the smallest known star undergoing
nuclear fusion in its core.[116] For stars with metallicity similar to the Sun's, the theoretical minimum mass the star
can have and still undergo fusion at the core is estimated to be about 75 MJ. When the metallicity is very low,
however, the minimum star size seems to be about 8.3% of the solar mass, or about 87 MJ. Smaller bodies, called
brown dwarfs, occupy a poorly defined grey area between stars and gas giants.

The combination of the radius and the mass of a star determines its surface gravity. Giant stars have a much lower
surface gravity than do main sequence stars, while the opposite is the case for degenerate, compact stars such as
white dwarfs. The surface gravity can influence the appearance of a star's spectrum, with higher gravity causing a
broadening of the absorption lines

Rotation
Stellar rotation

The rotation rate of stars can be determined through spectroscopic measurement, or more exactly determined by
tracking their star spots. Young stars can have a rotation greater than 100 km/s at the equator. The B-class star
Achernar, for example, has an equatorial velocity of about 225 km/s or greater, causing its equator to be slung
outward and giving it an equatorial diameter that is more than 50% greater than between the poles. This rate of
rotation is just below the critical velocity of 300 km/s at which speed the star would break apart. [120]By contrast, the
Sun rotates once every 25–35 days, with an equatorial velocity of 1.994 km/s. A main sequence star's
magnetic field and the stellar wind serve to slow its rotation by a significant amount as it evolves on the main
sequence.

Degenerate stars have contracted into a compact mass, resulting in a rapid rate of rotation. However they
have relatively low rates of rotation compared to what would be expected by conservation of angular
momentum, the tendency of a rotating body to compensate for a contraction in size by increasing its rate of
spin. A large portion of the star's angular momentum is dissipated as a result of mass loss through the stellar
wind. In spite of this, the rate of rotation for a pulsar can be very rapid. The pulsar at the heart of the Crab
nebula, for example, rotates 30 times per second. The rotation rate of the pulsar will gradually slow due to
the emission of radiation.

Temperature

The surface temperature of a main sequence star is determined by the rate of energy production of its core and by
its radius, and is often estimated from the star's color index. The temperature is normally given in terms of an
effective temperature, which is the temperature of an idealized black body that radiates its energy at the same
luminosity per surface area as the star. Note that the effective temperature is only a representative of the surface,
as the temperature increases toward the core.[125] The temperature in the core region of a star is several million
kelvins.

The stellar temperature will determine the rate of ionization of various elements, resulting in characteristic
absorption lines in the spectrum. The surface temperature of a star, along with its visual absolute magnitude and
absorption features, is used to classify a star (see classification below).[33]

Massive main sequence stars can have surface temperatures of 50,000 K. Smaller stars such as the Sun have
surface temperatures of a few thousand K. Red giants have relatively low surface
temperatures of about 3,600 K; but they also have a high luminosity due to their large exterior surface area. [127]

Radiation

The energy produced by stars, a product of nuclear fusion, radiates to space as both
electromagnetic radiation and particle radiation. The particle radiation emitted by a star is manifested as the
stellar wind,[128]which streams from the outer layers as electrically charged
protons and alpha and beta particles. In addition, a steady stream of almost massless neutrinos emanates from the
star's core.

The production of energy at the core is the reason stars shine so brightly: every time two or more atomic nuclei fuse
together to form a single atomic nucleus of a new heavier element, gamma-ray photons are released from the nuclear
fusion product. This energy is converted to other forms of electromagnetic energy of lower frequency, such as
visible light, by the time it reaches the star's outer layers.

The color of a star, as determined by the most intense frequency of the visible light, depends on the temperature of
the star's outer layers, including its photosphere.[129]Besides visible light, stars also emit forms of electromagnetic
radiation that are invisible to the human eye. In fact, stellar electromagnetic radiation spans the entire
electromagnetic spectrum, from the longest wavelengths of radio waves through infrared, visible light, ultraviolet,
to the shortest of X-rays, and gamma rays. From the standpoint of total energy emitted by a star, not all
components of stellar electromagnetic radiation are significant, but all frequencies provide insight into the star's
physics.

Using the stellar spectrum, astronomers can also determine the surface temperature, surface gravity, metallicity and
rotational velocity of a star. If the distance of the star is found, such as by measuring the parallax, then the
luminosity of the star can be derived. The mass, radius, surface gravity, and rotation period can then be estimated
based on stellar models. (Mass can be calculated for stars in binary systems by measuring their orbital velocities and
distances. Gravitational microlensing has been used to measure the mass of a single star.) With these parameters,
astronomers can also estimate the age of the star.

Luminosity

The luminosity of a star is the amount of light and other forms of radiant energy it radiates per unit of time. It has
units of power. The luminosity of a star is determined by its radius and surface temperature. Many stars do not
radiate uniformly across their entire surface. The rapidly rotating star Vega, for example, has a higher energy flux
(power per unit area) at its poles than along its equator.
Patches of the star's surface with a lower temperature and luminosity than average are known as
starspots. Small, dwarf stars such as our Sun generally have essentially featureless disks with only small
starspots. Giant stars have much larger, more obvious starspots, and they also
exhibit strong stellar limb darkening. That is, the brightness decreases towards the edge of the stellar
disk.[134]Red dwarf flare stars such as UV Ceti may also possess prominent starspot features.[135]

Magnitude
Main articles: Apparent magnitude and Absolute magnitude

The apparent brightness of a star is expressed in terms of its apparent magnitude. It is a function of the star's
luminosity, its distance from Earth, and the altering of the star's light as it passes through Earth's atmosphere.
Intrinsic or absolute magnitude is directly related to a star's luminosity: it is the apparent magnitude a star
would have if the distance between the Earth and the star were 10 parsecs (32.6 light-years).
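
That 10-parsec definition gives the usual distance-modulus relation M = m − 5·log10(d / 10 pc). A brief Python sketch, using approximate values for Sirius (apparent magnitude about −1.44 at roughly 2.64 parsecs) as a consistency check:

import math

def absolute_magnitude(apparent_mag, distance_pc):
    """M = m - 5*log10(d / 10 pc), following the 10-parsec definition above."""
    return apparent_mag - 5.0 * math.log10(distance_pc / 10.0)

print(absolute_magnitude(-1.44, 2.64))   # roughly +1.4, in line with the value for Sirius quoted below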

Number of stars brighter than a given apparent magnitude[136]

Apparent magnitude    Number of stars
0                     4
1                     15
2                     48
3                     171
4                     513
5                     1,602
6                     4,800
7                     14,000

Both the apparent and absolute magnitude scales are logarithmic units: one whole number difference in
magnitude is equal to a brightness variation of about 2.5 times[137] (the 5th root of 100, or approximately 2.512).
This means that a first magnitude star (+1.00) is about 2.5 times brighter than a second magnitude (+2.00) star,
and about 100 times brighter than a sixth magnitude star (+6.00). The faintest stars visible to the naked eye under
good seeing conditions are about magnitude +6.

On both apparent and absolute magnitude scales, the smaller the magnitude number, the brighter the star; the larger
the magnitude number, the fainter the star. The brightest stars, on either scale, have negative magnitude numbers.
The variation in brightness (ΔL) between two stars is calculated by subtracting the magnitude number of the
brighter star (mb) from the magnitude number of the fainter star (mf), then using the difference as an exponent for
the base number 2.512; that is to say:

ΔL = 2.512^(mf − mb)
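
The same relation in a minimal Python sketch; the absolute magnitudes of the Sun (+4.83) and Sirius (+1.41) quoted below are used to recover the factor of roughly 23 in luminosity.

def brightness_ratio(m_fainter, m_brighter):
    """Each magnitude step corresponds to a factor of 100**(1/5), about 2.512."""
    return 2.512 ** (m_fainter - m_brighter)

print(brightness_ratio(6.0, 1.0))      # ~100: five magnitudes is a factor of one hundred
print(brightness_ratio(4.83, 1.41))    # ~23: Sirius versus the Sun, in absolute magnitude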

Relative to both luminosity and distance from Earth, a star's absolute magnitude (M) and apparent magnitude (m)
are not equivalent;[137] for example, the bright star Sirius has an apparent magnitude of −1.44, but it has an absolute
magnitude of +1.41.

The Sun has an apparent magnitude of −26.7, but its absolute magnitude is only +4.83. Sirius, the brightest star in
the night sky as seen from Earth, is approximately 23 times more luminous than the Sun, while Canopus, the second
brightest star in the night sky, with an absolute magnitude of −5.53, is approximately 14,000 times more luminous
than the Sun. Despite Canopus being vastly more luminous than Sirius, however, Sirius appears brighter than
Canopus. This is because Sirius is merely 8.6 light-years from the Earth, while Canopus is much farther away, at a
distance of 310 light-years.

As of 2006, the star with the highest known absolute magnitude is LBV 1806-20, with a magnitude of −14.2.
This star is at least 5,000,000 times more luminous than the Sun.[138] The
least luminous stars that are currently known are located in the NGC 6397 cluster. The faintest red dwarfs in the
cluster were magnitude 26, while a 28th magnitude white dwarf was also
discovered. These faint stars are so dim that their light is as bright as a birthday candle on the Moon when
viewed from the Earth.[139]

Classification

Surface temperature ranges for different stellar classes[140]

Class    Temperature         Sample star
O        33,000 K or more    Zeta Ophiuchi
B        10,500–30,000 K     Rigel
A        7,500–10,000 K      Altair
F        6,000–7,200 K       Procyon A
G        5,500–6,000 K       Sun
K        4,000–5,250 K       Epsilon Indi
M        2,600–3,850 K       Proxima Centauri


Stellar classification

The current stellar classification system originated in the early 20th century, when stars were classified from A to Q
based on the strength of the hydrogen line.[141] It was thought that the hydrogen line strength was a simple linear
function of temperature. In fact, the relationship was more complicated: the line strengthened with increasing
temperature, peaked near 9,000 K, and then declined at higher temperatures. When the classifications were reordered
by temperature, the scheme more closely resembled the modern one.

Stars are given a single-letter classification according to their spectra, ranging from type O, which are very hot, to
M, which are so cool that molecules may form in their atmospheres. The main classifications in order of
decreasing surface temperature are: O, B, A, F, G, K, and M. A variety of rare spectral types are given special
classifications. The most common of these are types L and T, which classify the coldest low-mass stars and
brown dwarfs. Each letter has 10 sub-divisions, numbered from 0 to 9, in order of decreasing temperature.
However, this system breaks down at extreme high temperatures as classes O0 and O1 may not exist.

In addition, stars may be classified by the luminosity effects found in their spectral lines, which correspond to
their spatial size and are determined by their surface gravity. These range from 0 (hypergiants) through III (giants)
to V (main sequence dwarfs); some authors add VII (white dwarfs). Most stars belong to the main sequence,
which consists of ordinary hydrogen-burning stars. These fall along a narrow, diagonal band when graphed
according to their absolute magnitude and spectral type. The Sun is a main sequence G2V yellow dwarf of
intermediate temperature and ordinary size.

Additional nomenclature, in the form of lower-case letters added to the end of the spectral type, indicates
peculiar features of the spectrum. For example, an "e" can indicate the presence of emission lines, "m" represents
unusually strong levels of metals, and "var" can mean variations in the spectral type.

White dwarf stars have their own class that begins with the letter D. This is further sub-divided
into the classes DA, DB, DC, DO, DZ, and DQ, depending on the types of prominent lines found in the spectrum.
This is followed by a numerical value that indicates the temperature. [144]

Variable stars.

Variable stars have periodic or random changes in luminosity because of intrinsic or extrinsic properties. Of the
intrinsically variable stars, the primary types can be subdivided into three principal groups.

During their stellar evolution, some stars pass through phases where they can become pulsating variables.
Pulsating variable stars vary in radius and luminosity over time, expanding and contracting with periods ranging
from minutes to years, depending on the size of the star. This category includes Cepheid and Cepheid-like stars,
and long-period variables such as Mira.

Eruptive variables are stars that experience sudden increases in luminosity because of flares or mass ejection
events.[145]This group includes protostars, Wolf-Rayet stars, and flare stars, as
well as giant and supergiant stars.

The asymmetrical appearance of Mira, an oscillating variable star.

Cataclysmic or explosive variable stars are those that undergo a dramatic change in their properties. This group
includes novae and supernovae. A binary star system that includes a nearby white dwarf can produce certain
types of these spectacular stellar explosions, including the nova and the Type Ia supernova. The explosion is
created when the white dwarf accretes hydrogen from the companion star, building up mass until the hydrogen
undergoes fusion. Some novae are also recurrent, having periodic outbursts of moderate amplitude.

Stars can also vary in luminosity because of extrinsic factors, such as eclipsing binaries, as well as rotating stars
that produce extreme starspots. A notable example of an eclipsing binary is Algol, which regularly varies in
magnitude from 2.3 to 3.5 over a period of 2.87 days.

Structure of a star.

The interior of a stable star is in a state of hydrostatic equilibrium: the forces on any small volume almost exactly
counterbalance each other. The balanced forces are inward gravitational force and an outward force due to the
pressure gradient within the star. The pressure gradient is established by the temperature gradient of the plasma; the
outer part of the star is cooler than the core. The temperature at the core of a main sequence or giant star is at least
on the order of 10⁷ K. The resulting
temperature and pressure at the hydrogen-burning core of a main sequence star are sufficient for nuclear fusion to
occur and for sufficient energy to be produced to prevent further collapse of the star.
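
An order-of-magnitude check on that core temperature can be made from hydrostatic equilibrium: for an ideal, hydrogen-dominated gas the central temperature is roughly T ≈ G·M·m_p / (k_B·R). The Python sketch below evaluates this rough estimate for solar values; it is a back-of-the-envelope scaling, not a stellar-structure calculation.

# Rough virial / hydrostatic estimate of a star's central temperature.
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
K_B = 1.381e-23        # Boltzmann constant, J/K
M_PROTON = 1.673e-27   # proton mass, kg
M_SUN = 1.989e30       # solar mass, kg
R_SUN = 6.957e8        # solar radius, m

t_central = G * M_SUN * M_PROTON / (K_B * R_SUN)
print(f"{t_central:.1e} K")   # ~2e7 K, i.e. on the order of 10^7 K as stated above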

As atomic nuclei are fused in the core, they emit energy in the form of gamma rays. These photons interact with
the surrounding plasma, adding to the thermal energy at the core. Stars on the main sequence convert hydrogen
into helium, creating a slowly but steadily increasing proportion of helium in the core. Eventually the helium
content becomes predominant, and energy production ceases at the core. Instead, for stars of more than 0.4 M☉,
fusion occurs in a slowly expanding shell around the degenerate helium core.

Internal structures of main sequence stars, with convection zones shown as arrowed cycles and radiative zones as
red flashes. To the left a low-mass red dwarf, in the center a mid-sized yellow dwarf, and, at the right, a
massive blue-white main sequence star.

In addition to hydrostatic equilibrium, the interior of a stable star will also maintain an energy balance of thermal
equilibrium. There is a radial temperature gradient throughout the interior that results in a flux of energy flowing
toward the exterior. The outgoing flux of energy leaving any layer within the star will exactly match the incoming
flux from below.

The radiation zone is the region of the stellar interior where the flux of energy outward is dependent on radiative
heat transfer, since convective heat transfer is inefficient in that zone. In this region the plasma will not be
perturbed, and any mass motions will die out. If this is not the case, however, then the plasma becomes unstable
and convection will occur, forming a convection zone. This can occur, for example, in regions where very high
energy fluxes occur, such as near the core or in areas with high opacity (making radiative heat transfer
inefficient) as in the outer envelope.

The occurrence of convection in the outer envelope of a main sequence star depends on the star's mass. Stars with
several times the mass of the Sun have a convection zone deep within the interior and a radiative zone in the outer
layers. Smaller stars such as the Sun are just the opposite, with the convective zone located in the outer layers. Red
dwarf stars with less than 0.4 M☉ are convective throughout, which prevents the accumulation of a helium core.
For most stars the convective zones will also vary over time as the star ages and the constitution of the interior is
modified.[148]

This diagram shows a cross-section of the Sun.

The photosphere is that portion of a star that is visible to an observer. This is the layer at which the plasma of the
star becomes transparent to photons of light. From here, the energy generated at the core becomes free to
propagate into space. It is within the photosphere that sun spots, regions of lower than average temperature,
appear.

Above the level of the photosphere is the stellar atmosphere. In a main sequence star such as the Sun, the lowest
level of the atmosphere, just above the photosphere, is the thin chromosphere region, where spicules appear and
stellar flares begin. Above this is the transition region, where the temperature rapidly increases within a distance of
only 100 km (62 mi). Beyond this is the corona, a volume of super-heated plasma that can extend outward to
several million kilometres. The existence of a corona appears to be dependent on a convective zone in the outer
layers of the star.[150] Despite its high temperature, the corona emits very little light, due to its low gas density.
The corona region of the Sun is normally only visible during a solar eclipse.

From the corona, a stellar wind of plasma particles expands outward from the star, until it interacts with the
interstellar medium. For the Sun, the influence of its solar wind extends throughout a bubble-shaped region
called the heliosphere.

Nuclear fusion reaction pathways

Stellar nucleosynthesis

A variety of nuclear fusion reactions take place in the cores of stars, depending upon their mass and composition.
When nuclei fuse, the mass of the fused product is less than the mass of the original parts. This lost mass is
converted to electromagnetic energy, according to the mass–energy equivalence relationship E = mc².[1]

Overview of the proton-proton chain

The hydrogen fusion process is temperature-sensitive, so a moderate increase in the core temperature will
result in a significant increase in the fusion rate. As a result, the core temperature of main sequence stars
only varies from 4 million kelvin for a small M-class star to 40 million kelvin for a massive O-class
star.[126]

The carbon-nitrogen-oxygen cycle

In the Sun, with a 10-million-kelvin core, hydrogen fuses to form helium in the proton–proton chain reaction:

4 ¹H → 2 ²H + 2 e⁺ + 2 νe   (2 × 0.4 MeV)
2 e⁺ + 2 e⁻ → 2 γ   (2 × 1.0 MeV)
2 ¹H + 2 ²H → 2 ³He + 2 γ   (2 × 5.5 MeV)
2 ³He → ⁴He + 2 ¹H   (12.9 MeV)

These reactions result in the overall reaction:

4 ¹H → ⁴He + 2 e⁺ + 2 γ + 2 νe   (26.7 MeV)

where e⁺ is a positron, γ is a gamma-ray photon, νe is a neutrino, and H and He are isotopes of hydrogen and helium,
respectively. The energy released by this reaction is in millions of electron volts, which is actually only a tiny
amount of energy. However, enormous numbers of these reactions occur constantly, producing all the energy
necessary to sustain the star's radiation output. In comparison, the combustion of two hydrogen gas molecules with
one oxygen gas molecule releases only 5.7 eV.
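
The 26.7 MeV figure can be recovered from the mass defect of the overall reaction. The Python sketch below uses standard atomic masses (in unified atomic mass units); working with atomic rather than nuclear masses automatically accounts for the annihilation of the two positrons.

M_H1 = 1.007825    # atomic mass of hydrogen-1, u
M_HE4 = 4.002602   # atomic mass of helium-4, u
MEV_PER_U = 931.494

mass_defect = 4 * M_H1 - M_HE4
print(mass_defect * MEV_PER_U)   # ~26.7 MeV, matching the overall reaction above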

Minimum stellar mass required for fusion[154]

Element     Minimum mass (solar masses)
Hydrogen    0.01
Helium      0.4
Carbon      5
Neon        8

In more massive stars, helium is produced in a cycle of reactions catalyzed by carbon, called the carbon–nitrogen–
oxygen cycle.

In evolved stars with cores at 100 million kelvin and masses between 0.5 and 10 M☉, helium can
be transformed into carbon in the triple-alpha process, which uses the intermediate element beryllium:

⁴He + ⁴He + 92 keV → ⁸Be*
⁴He + ⁸Be* + 67 keV → ¹²C*
¹²C* → ¹²C + γ + 7.4 MeV

For an overall reaction of:

3 ⁴He → ¹²C + γ + 7.2 MeV

In massive stars, heavier elements can also be burned in a contracting core through the neon-burning process and
the oxygen-burning process. The final stage in the stellar nucleosynthesis process is the silicon-burning process,
which results in the production of the stable isotope iron-56. Further fusion would be an endothermic process that
consumes energy, and so further energy can only be produced through gravitational collapse.

The example below shows the amount of time required for a star of 20 M☉ to consume all of its
nuclear fuel. As an O-class main sequence star, it would be 8 times the solar radius and 62,000 times the Sun's
luminosity.[155]

Fuel material    Temperature (million kelvins)    Density (kg/cm³)    Burn duration (years)
H                37                               0.0045              8.1 million
He               188                              0.97                1.2 million
C                870                              170                 976
Ne               1,570                            3,100               0.6
O                1,980                            5,550               1.25
S/Si             3,340                            33,400              0.0315[156]

STAR EVOLUTION or STELLAR EVOLUTION

Stellar evolution is the process by which a star changes over the course of time. Depending on the mass of the star,
its lifetime can range from a few million years for the most massive to trillions of years for the least massive,
which is considerably longer than the age of the universe. The table shows the lifetimes of stars as a function of
their masses.[1]All stars are born from collapsing clouds of gas and dust, often called nebulae or molecular clouds.
Over the course of millions of years, these protostars settle down into a state of equilibrium, becoming what is
known as a main-sequence star.

Nuclear fusion powers a star for most of its life. Initially the energy is generated by the fusion of hydrogen atoms at
the core of the main-sequence star. Later, as the preponderance of atoms at the core becomes helium, stars like the
Sun begin to fuse hydrogen along a spherical shell surrounding the core. This process causes the star to gradually
grow in size, passing through the subgiant stage until it reaches the red giant phase. Stars with at least half the mass
of the Sun can also begin to generate energy through the fusion of helium at their core, whereas more-massive stars
can fuse heavier elements along a series of concentric shells. Once a star like the Sun has exhausted its nuclear fuel,
its core collapses into a dense white dwarf and the outer layers are expelled as a planetary nebula. Stars with around
ten or more times the mass of the Sun can explode in a supernova as their inert iron cores collapse into an extremely
dense neutron star or black hole. Although the universe is not old enough for any of the smallest red dwarfs to have
reached the end of their lives, stellar models suggest they will slowly become brighter and hotter before running out
of hydrogen fuel and becoming low-mass white dwarfs.

Stellar evolution is not studied by observing the life of a single star, as most stellar changes occur too slowly to be
detected, even over many centuries. Instead, astrophysicists come to understand how stars evolve by observing
numerous stars at various points in their lifetime, and by simulating stellar structure using computer models.

The life cycle of a star. Source: NASA.

Protostar.

A protostar is a very young star that is still gathering mass from its parent molecular cloud. The protostellar phase
is the earliest one in the process of stellar evolution. For a one solar-mass star it lasts about 1,000,000 years. The
phase begins when a molecular cloud first collapses under the force of self-gravity. It ends when the protostar
blows back the infalling gas and is revealed as an optically visible pre-main-sequence star, which later contracts to
become a main sequence star.

Protostar

A star spends a brief childhood as a protostar, a star powered purely by its own gravitational contraction.
In this prologue to its life on the main sequence, the star achieves hydrostatic equilibrium, where its internal
pressure fully counteracts its self-gravity. The protostar begins its evolution to the main sequence at a
luminosity far above its main-sequence luminosity, but with a photospheric temperature that is not much
smaller than the main-sequence value. On a Hertzsprung-Russell diagram, which is a plot of a star's luminosity
against the star's photospheric temperature, a protostar evolves along a line of nearly-constant temperature and
falling luminosity. This track, which is nearly-vertical on the Hertzsprung-Russell diagram, is called a Hayashi
track.

A protostar has a simple evolution because it has a simple internal structure. Energy is transported from
the core of the protostar to the photosphere through convection. This process links the gas temperature within
the protostar to the gas density. The pressure exerted by a gas depends on both the temperature and the density
of the gas; for an ideal gas, which describes fully-ionized hydrogen and helium, the pressure is proportional to the
temperature times the number density of particles. With convection tying temperature to density, the pressure
within a protostar varies only with density. Ionized hydrogen and helium exert a pressure that is proportional to
the density of the gas raised to the 5/3 power. The temperature is proportional to the density to the 2/3 power.
A star with such a simple relationship between pressure and density has a polytropic structure.
The density of a polytropic star peaks at the center of the star and falls to zero at a finite radius. The
ratio of the density at a given fraction of a stellar radius from the star's center relative to the density at the
center is independent of the star's mass or radius. For example, the ratio of the density at half a stellar radius
to the density at the center of the star has a value that is the same for all protostars. Because the temperature
within the protostar varies with the density, the ratio of the temperature at a given fraction of a stellar radius
relative to the temperature at the center of a protostar is also independent of the star's mass and radius, so the
temperature ratio is the same for all protostars.
As a protostar radiates, it shrinks in size to generate the energy that replaces the radiated energy. This
shrinkage increases the self-gravity of the protostar, which is accompanied by an increase in the pressure at the
protostar's core. This balance between pressure and gravitational force maintains the protostar's hydrostatic
equilibrium and creates a relationship between the temperature at the protostar's center and the gravitational
potential of the star: the temperature of the gas at the center of the star is proportional to the mass of the star
divided by the radius of the star. As the radius of the star shrinks, the temperature at the center rises inversely,
so if the radius of the star decreases by a factor of 2, the temperature at the center of the star increases by a
factor of 2. The density of the star at the center also increases, since the mass of the star is confined to smaller
and smaller volumes, but this increase goes as the inverse cube of the radius. Because the structure of a
protostar is independent of the protostar's radius, the temperature and the density throughout a protostar
increase as the star shrinks.
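
Those two scalings are easy to state numerically: at fixed mass the central temperature goes as 1/R and the central density as 1/R³. A minimal Python sketch of the homologous-contraction bookkeeping described above:

def contraction_factors(radius_shrink_factor):
    """Return (temperature factor, density factor) when the radius shrinks by the given factor."""
    return radius_shrink_factor, radius_shrink_factor ** 3

temperature_x, density_x = contraction_factors(2)
print(temperature_x, density_x)   # radius halved: central temperature doubles, central density rises 8-fold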
The increase in temperature within a protostar does not appear at the photosphere. In fact, the
photospheric temperature changes very little as a protostar shrinks in size. The reason is that the photosphere is
not at a fixed fraction of a radius within a star. Its position is set by the ability of light to freely escape from the
protostar, which depends on both the density and temperature of the gas. Because the gas density at a given
fraction of a radius increases as the star shrinks, the ability of light to escape decreases, the protostar becomes
more opaque, and the photosphere moves farther from the star's center in terms of fractional radius. The
temperature drop that accompanies this shift of the photosphere outward is sufficient to counteract the rise in
temperature throughout the protostar caused by the protostar's contraction. For this reason, the temperature at
the photosphere changes little as a protostar shrinks.
With the temperature at the photosphere nearly constant, the rate at which a protostar cools is
proportional to the photosphere's surface area. This means that a protostar is most luminous when it first
achieves hydrostatic equilibrium, and it grows less luminous as it shrinks. The initial luminosity is several
orders of magnitude larger than that of a main-sequence star of equivalent mass. As the protostar shrinks in size, the
amount of thermal energy within the protostar increases inversely with radius. The decreasing luminosity and
the increasing reservoir of thermal energy cause the rate of shrinkage to slow dramatically. In this way, the
physics of the photosphere, which sets the position of the photosphere within the protostar by controlling a
protostar's opaqueness, controls the evolution of a protostar.
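Because the photospheric temperature stays nearly constant, the luminosity tracks the surface area through the blackbody relation L = 4*pi*R^2*sigma*T^4. The sketch below evaluates this relation for a shrinking protostar; the 3,500 K photospheric temperature used here is an assumed, representative Hayashi-track value rather than a figure from the text.

import math

SIGMA = 5.670374e-8      # Stefan-Boltzmann constant, W m^-2 K^-4
R_SUN = 6.957e8          # solar radius, m
L_SUN = 3.828e26         # solar luminosity, W

def luminosity(radius_rsun, t_eff):
    """Blackbody luminosity L = 4*pi*R^2*sigma*T^4, returned in solar units."""
    r = radius_rsun * R_SUN
    return 4.0 * math.pi * r**2 * SIGMA * t_eff**4 / L_SUN

# Hold the photospheric temperature fixed (assumed ~3,500 K) and shrink
# the radius: the luminosity falls roughly as R^2.
for r in (70, 35, 10, 2):
    print(f"R = {r:3d} R_sun -> L = {luminosity(r, 3500):7.1f} L_sun")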

The protostar therefore begins its brief life as a brilliant star that wanes in luminosity on a timescale of
hundreds of years. A one solar-mass star at the beginning of the protostar stage can have 1,000 times the Sun's
luminosity, 0.6 times the Sun's photosphere temperature, and 70 times the Sun's radius (0.3 AU). As a protostar
shrinks, its internal temperature and density reach a point that permits the thermonuclear fusion of deuterium,
which releases a slight amount of energy into the star, energy that is insufficient to halt the shrinkage of the
star. As the luminosity of the star drops, parts of the star become stable against convection. For a solar-mass
star, convection ceases at the core, and energy is transported out of the core through radiative diffusion.
Eventually a protostar approaches the size and luminosity of a main-sequence star. By this time, the protostar
changes its luminosity and size on a 10 million-year timescale. Over this time, the thermonuclear fusion of
hydrogen commences, which stabilizes the star's size and raises its photospheric temperature. The star settles
onto the main sequence. As a main-sequence star, it is somewhat hotter and considerably less luminous than it
was as a protostar.
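A rough consistency check of the "hundreds of years" timescale quoted above can be made with the Kelvin-Helmholtz (thermal) timescale, t_KH ~ G*M^2/(R*L), using the protostar values given in the text; this is an order-of-magnitude sketch only.

G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30         # kg
R_SUN = 6.957e8          # m
L_SUN = 3.828e26         # W
YEAR = 3.156e7           # s

# Values quoted in the text for a one solar-mass protostar at the start
# of its contraction: ~70 R_sun and ~1,000 L_sun.
M, R, L = 1.0 * M_SUN, 70 * R_SUN, 1000 * L_SUN
t_kh = G * M**2 / (R * L) / YEAR
print(f"Kelvin-Helmholtz timescale ~ {t_kh:.0f} yr")   # a few hundred years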

The modern picture of protostars, summarized above, was first suggested by Chushiro Hayashi.[2] In the first
models, the size of protostars was greatly overestimated. Subsequent numerical calculations[3][4][5] clarified the
issue, and showed that protostars are only modestly larger than main-sequence stars of the same mass. This
basic theoretical result has been confirmed by observations, which find that the largest pre-main-sequence stars
are also of modest size.

Protostellar evolution

Infant star CARMA-7 and its jets are located approximately 1400 light-years from Earth within the
Serpens South star cluster.

Star formation

Star formation begins in relatively small molecular clouds called dense cores.[7] Each dense core is initially in
balance between self-gravity, which tends to compress the object, and both gas pressure and magnetic pressure,
which tend to inflate it. As the dense core accrues mass from its larger, surrounding cloud, self-gravity begins to
overwhelm pressure, and collapse begins. Theoretical modeling of an idealized spherical cloud initially supported
only by gas pressure indicates that the collapse process spreads from the inside toward the outside.[8] Spectroscopic
observations of dense cores that do not yet contain stars indicate that contraction indeed occurs. So far, however,
the predicted outward spread of the collapse region has not been observed.
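The balance between self-gravity and gas pressure is often summarized by the Jeans mass; once a core's mass exceeds it, collapse can proceed. The sketch below uses textbook-style dense-core conditions (10 K and about 10^4 particles per cubic centimetre), which are illustrative assumptions rather than values from the text, and it ignores magnetic support.

import math

K_B = 1.380649e-23       # Boltzmann constant, J/K
G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
M_H = 1.6735e-27         # hydrogen atom mass, kg
M_SUN = 1.989e30         # kg

def jeans_mass(temp_k, n_per_cm3, mu=2.33):
    """Jeans mass in solar masses:
    M_J = (5kT/(G*mu*m_H))^(3/2) * (3/(4*pi*rho))^(1/2)."""
    rho = n_per_cm3 * 1e6 * mu * M_H              # mass density, kg m^-3
    a = (5.0 * K_B * temp_k / (G * mu * M_H)) ** 1.5
    b = (3.0 / (4.0 * math.pi * rho)) ** 0.5
    return a * b / M_SUN

print(f"M_J ~ {jeans_mass(10.0, 1e4):.1f} M_sun")   # a few solar masses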

The gas that collapses toward the center of the dense core first builds up a low-mass protostar, and then a
protoplanetary disk orbiting the object. As the collapse continues, an increasing amount of gas impacts the disk
rather than the star, a consequence of angular momentum conservation. Exactly how material in the disk spirals
inward onto the protostar is not yet understood, despite a great deal of theoretical effort. This problem is
illustrative of the larger issue of accretion disk theory, which plays a role in much of astrophysics.

HBC 1 is a young pre-main-sequence star

Regardless of the details, the outer surface of a protostar consists at least partially of shocked gas that has fallen
from the inner edge of the disk. The surface is thus very different from the relatively quiescent photosphere of a
pre-main sequence or main-sequence star. Within its deep interior, the protostar has lower temperature than an
ordinary star. At its center, hydrogen is not yet undergoing nuclear fusion. Theory predicts, however, that the
hydrogen isotope deuterium is undergoing fusion, creating helium-3. The heat from this fusion reaction tends to
inflate the protostar, and thereby helps determine the size of the youngest observed pre-main-sequence stars.

The energy generated from ordinary stars comes from the nuclear fusion occurring at their centers. Protostars also
generate energy, but it comes from the radiation liberated at the shocks on its surface and on the surface of its
surrounding disk. The radiation thus created must traverse the interstellar dust in the surrounding dense core. The
dust absorbs all impinging photons and reradiates them at longer wavelengths. Consequently, a protostar is not
detectable at optical wavelengths, and cannot be placed in the Hertzsprung-Russell diagram, unlike the more
evolved pre-main-sequence stars.

The actual radiation emanating from a protostar is predicted to be in the infrared and millimeter regimes. Point-like
sources of such long-wavelength radiation are commonly seen in regions that are obscured by molecular clouds. It
is commonly believed that those conventionally labeled as Class 0 or Class I sources are protostars. However, there
is still no definitive evidence for this identification.

Chemical composition

The Sun as a protostar had the same composition as today, which is 71.1% hydrogen, 27.4% helium, and 1.5%
heavier elements, by mass.

Observed classes of young stars

For details of observational classification, see Young stellar object.

Class   Peak emission     Duration (years)
0       submillimeter     10^4
I       far-infrared      10^5
II      near-infrared     10^6[14]
III     visible           10^7

Brown dwarfs and sub-stellar objects


Brown dwarf;

Brown dwarfs are substellar objects that occupy the mass range between the heaviest gas giant planets and the
lightest stars, of approximately 13 to 75-80 Jupiter masses (MJ), or approximately 2.5x10^28 kg to about
1.5x10^29 kg. Below this range are the sub-brown dwarfs, and above it are the lightest red dwarfs (M9V). Brown
dwarfs may be fully convective, with no layers or chemical differentiation by depth.

Unlike stars on the main sequence, brown dwarfs are not massive enough to sustain nuclear fusion of ordinary
hydrogen (1H) to helium in their cores. They are, however, thought to fuse deuterium (2H) and to fuse lithium (7Li) if
their mass is above a debated threshold of 13 MJ and 65 MJ, respectively. It is also debated whether brown dwarfs
would be better defined by their formation processes rather than by their supposed nuclear fusion reactions.

Stars are categorized by spectral class, with brown dwarfs designated as types M, L, T, and Y. Despite their name,
brown dwarfs are of different colors. Many brown dwarfs would likely appear magenta to the human eye, or
possibly orange/red. Brown dwarfs are not very luminous at visible wavelengths.

Planets are known to orbit some brown dwarfs: 2M1207b, MOA-2007-BLG-192Lb, and 2MASSJ044144b.

At a distance of about 6.5 light-years, the nearest known brown dwarf is Luhman 16, a binary system of brown
dwarfs discovered in 2013. DENIS-P J082303.1-491201 b is listed as the most-massive known exoplanet (as of
March 2014) in NASA's exoplanet archive, despite having a mass (28.5±1.9 MJ) more than twice the 13-Jupiter-
mass cutoff between planets and brown dwarfs.

Artist's concept of a T-type brown dwarf

Comparison: most brown dwarfs are only slightly larger than Jupiter (10-15%) but up to 80 times more
massive due to greater density. The Sun is not to scale and would be larger.

Protostars with masses less than roughly 0.08 M☉ (1.6x10^29 kg) never reach temperatures high enough for nuclear
fusion of hydrogen to begin. These are known as brown dwarfs. The International Astronomical Union defines
brown dwarfs as stars massive enough to fuse deuterium at some point in their lives (13 Jupiter masses (MJ),
2.5x10^28 kg, or 0.0125 M☉). Objects smaller than 13 MJ are classified as sub-brown dwarfs (but if they orbit around
another stellar object they are classified as planets). Both types, deuterium-burning and not, shine dimly and die
away slowly, cooling gradually over hundreds of millions of years.
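The kilogram figures quoted above follow directly from the solar and Jovian masses; the short check below, using standard constants, reproduces them.

M_SUN = 1.989e30         # kg
M_JUP = 1.898e27         # kg

h_limit_kg = 0.08 * M_SUN    # hydrogen-burning limit quoted above
d_limit_kg = 13 * M_JUP      # deuterium-burning limit quoted above

print(f"0.08 M_sun = {h_limit_kg:.2e} kg = {h_limit_kg / M_JUP:.0f} M_J")
print(f"13 M_J     = {d_limit_kg:.2e} kg = {d_limit_kg / M_SUN:.4f} M_sun")
# -> about 1.6e29 kg (~84 M_J) and 2.5e28 kg (~0.0125 M_sun), matching
#    the figures quoted in the text.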

The smaller object is Gliese 229B, about 20 to 50 times the mass of Jupiter, orbiting the star Gliese 229.
It is in the constellation Lepus, about 19 light years from Earth.

The objects now called "brown dwarfs" were theorized to exist in the 1960s by Shiv S. Kumar and were
originally called black dwarfs a classification for dark substellar objects floating freely in space that were not
massive enough to sustain hydrogen fusion. However: a) the term black dwarf was already in use to refer to a
cold white dwarf; b) red dwarfs fuse hydrogen, and c) these objects may be luminous at visible wavelengths early
in their lives. Because of this, alternative names for these objects were proposed, including planetar and substar.
In 1975, JillTarter suggested the term "brown dwarf", using brown as an approximate color.

The term black dwarf still refers to a white dwarf that has cooled to the point that it no longer emits significant
amounts of light. However, the time required for even the lowest-mass white dwarf to cool to this temperature is
calculated to be longer than the current age of the universe; hence such objects are thought not to exist yet.

Early theories concerning the nature of the lowest-mass stars and the hydrogen-burning limit suggested that a
population I object with a mass less than 0.07 solar masses (M☉) or a population II object less than 0.09 M☉
would never go through normal stellar evolution and would become a completely degenerate star.[12] The first
self-consistent calculation of the hydrogen-burning minimum mass confirmed a value between 0.07 and 0.08 solar
masses for population I objects. The discovery of deuterium burning down to 0.012
solar masses and the impact of dust formation in the cool outer atmospheres of brown dwarfs in the late 1980s
brought these theories into question. However, such objects were hard to find as they emit almost no visible light.
Their strongest emissions are in the infrared (IR) spectrum, and ground-based IR detectors were too imprecise at
that time to readily identify any brown dwarfs.

Since then, numerous searches by various methods have sought these objects. These methods included multi-color
imaging surveys around field stars, imaging surveys for faint companions of main-sequence dwarfs and white
dwarfs, surveys of young star clusters, and radial velocity monitoring for close companions.

For many years, efforts to discover brown dwarfs were fruitless. In 1988, however, a faint companion to a star
known as GD 165 was found in an infrared search of white dwarfs. The spectrum of the companion GD 165B was
very red and enigmatic, showing none of the features expected of a low-mass red dwarf. It became clear that GD
165B would need to be classified as a much cooler object than the latest M dwarfs then known. GD 165B remained
unique for almost a decade until the advent of the Two Micron All Sky Survey (2MASS) which discovered many
objects with similar colors and spectral features.

Today, GD 165B is recognized as the prototype of a class of objects now called "L dwarfs". Although the discovery
of the coolest dwarf was highly significant at the time, it was debated whether GD 165B would be classified as a
brown dwarf or simply a very-low-mass star, because observationally it is very difficult to distinguish between the
two.

Soon after the discovery of GD 165B, other brown-dwarf candidates were reported. Most failed to live up to their
candidacy, however, because the absence of lithium showed them to be stellar objects. True stars burn their lithium
within a little over 100 Myr, whereas brown dwarfs (which can, confusingly, have temperatures and luminosities
similar to true stars) will not. Hence, the detection of lithium in the atmosphere of an object older than 100 Myr
ensures that it is a brown dwarf.

In 1995, the study of brown dwarfs changed substantially with the discovery of two indisputable substellar objects
(Teide 1 and Gliese 229B),[17][18] which were identified by the presence of the 670.8 nm lithium line. The latter was
found to have a temperature and luminosity well below the stellar range. Its near-infrared spectrum clearly
exhibited a methane absorption band at 2 micrometres, a feature that had previously only been observed in the
atmospheres of giant planets and that of Saturn's moon Titan. Methane absorption is not expected at the
temperatures of main-sequence stars. This discovery helped to establish yet another spectral class even cooler than
L dwarfs, known as "T dwarfs", for which Gliese 229B is the prototype.

The first confirmed brown dwarf was discovered by Spanish astrophysicists Rafael Rebolo (head of team), María
Rosa Zapatero Osorio, and Eduardo Martín in 1994.[19] This object, found in the Pleiades open cluster, received the
name Teide 1. The discovery article was submitted to Nature in spring 1995, and published on September 14,
1995.[17][20] Nature highlighted "Brown dwarfs discovered, official" in the front page of that issue.

Teide 1 was discovered in images collected by the IAC team on January 6, 1994 using the 80 cm telescope (IAC
80) at Teide Observatory and its spectrum was first recorded in December 1994 using the 4.2 m William Herschel
Telescope at Roque de los Muchachos Observatory (La Palma). The distance, chemical composition, and age of
Teide 1 could be established because of its membership in the young Pleiades star cluster. Using the most advanced
stellar and substellar evolution models available at that time, the team estimated for Teide 1 a mass of 55 MJ,
which is below the stellar-mass limit. The object became a reference in subsequent young brown dwarf related
works.

In theory, a brown dwarf below 65 MJ is unable to burn lithium by thermonuclear fusion at any time during its
evolution. This fact is one of the lithium test principles used to judge the substellar nature of low-luminosity and
low-surface-temperature astronomical bodies.

High-quality spectral data acquired by the Keck 1 telescope in November 1995 showed that Teide 1 still had the
initial lithium abundance of the original molecular cloud from which Pleiades stars formed, proving the lack of
thermonuclear fusion in its core. These observations confirmed that Teide 1 is a brown dwarf, as well as the
efficiency of the spectroscopic lithium test.

For some time, Teide 1 was the smallest known object outside the Solar System that had been identified by
direct observation. Since then, over 1,800 brown dwarfs have been identified, [21] even some very close to Earth
like Epsilon Indi Ba and Bb, a pair of brown dwarfs gravitationally bound to a Sun-like star 12 light-years from
the Sun, and Luhman 16, a binary system of brown dwarfs at 6.5 light-years.

Theory

Subgiant;

A subgiant is a star that is brighter than a normal main-sequence star of the same spectral class, but not as bright
as true giant stars. The term subgiant is applied both to a particular spectral luminosity class and to a stage in the
evolution of a star.

The term subgiant was first used in 1930 for class G and early K stars with absolute magnitudes between +2.5 and
+4. These were noted as being part of a continuum of stars between obvious main-sequence stars such as the Sun
and obvious giant stars such as Aldebaran, although less numerous than either the main sequence or the giant
stars.[1]

The Yerkes spectral classification system is a two-dimensional scheme that uses a letter and number combination
to denote the temperature of a star (e.g. A5 or M1) and a Roman numeral to indicate the luminosity relative to
other stars of the same temperature. Luminosity-class-IV stars are the subgiants, located between main-sequence
stars (luminosity class V) and red giants (luminosity class III).

Rather than defining absolute features, a typical approach to determining a spectral luminosity class is to
compare similar spectra against standard stars. Many line ratios and profiles are sensitive to gravity, and
therefore make useful luminosity indicators, but some of the most useful spectral features for each spectral class
are:[2][3]

O: relative strength of N III emission and He II absorption, strong emission is more luminous
B: Balmer line profiles and strength of O II lines
A: Balmer line profiles, broader wings means less luminous
F: line strengths of Fe, Ti, and Sr
G: Sr and Fe line strengths, and wing widths in the Ca H and K lines
K: Ca H&K line profiles, Sr/Fe line ratios, and MgH and TiO line strengths
M: strength of the 422.6 nm Ca line and TiO bands

Morgan and Keenan listed examples of stars in luminosity class IV when they established the two-dimensional
classification scheme:

B0: Cassiopeiae, Scorpii


B0.5: Scorpii
B1: Persei, Cephei
B2: Orionis, Scorpii, Ophiuchi, Scorpii
B2.5: Pegasi, Cassiopeiae
B3: Herculis
B5: Herculis
A2: Aurigae, Ursae Majoris, Serpentis
A3: Herculis
F2: Geminorum, Serpentis
F5: Procyon, 110 Herculis
F6: Bootis, Bootis, Serpentis
F8: 50 Andromedae, Draconis
G0: Bootis, Herculis
G2: Cancri
G5: Herculis
G8: Aquilae
K0: Cephei
K1: Cephei

Later analysis showed that some of these were blended spectra from double stars and some were variable, and the
standards have been expanded to many more stars, but many of the original stars are still considered standards of
the subgiant luminosity class. O class stars and stars cooler than K1 are rarely given subgiant luminosity classes.

Subgiant branch

Stellar evolutionary tracks:

the 5 M☉ track shows a hook and a subgiant branch crossing the Hertzsprung gap
the 2 M☉ track shows a hook and a pronounced subgiant branch
lower-mass tracks show very short, long-lasting subgiant branches

The subgiant branch is a stage in the evolution of low to intermediate mass stars. Stars with a subgiant spectral
type are not always on the evolutionary subgiant branch, and vice versa. For example, the stars FK Com and 31
Com both lie in the Hertzsprung Gap and are likely evolutionary subgiants, but both are often assigned giant
luminosity classes. The spectral classification can be influenced by metallicity, rotation, unusual chemical
peculiarities, etc. The initial stages of the subgiant branch in a star like the Sun are prolonged, with little external
indication of the internal changes. Approaches to identifying evolutionary subgiants include chemical
abundances, such as lithium (which is diluted in subgiants), and coronal emission strength.

As the fraction of hydrogen remaining in the core of a main sequence star decreases, the core temperature
increases and so the rate of fusion increases. This causes stars to evolve slowly to high luminosities as they age
and broadens the main sequence band in the Hertzsprung-Russell diagram.

Once a main sequence star ceases to fuse hydrogen in its core, the core begins to collapse under its own weight. This
causes it to increase in temperature and hydrogen fuses in a shell outside the core, which provides more energy than
core hydrogen burning. Low- and intermediate-mass stars expand and cool until at about 5,000 K they begin to
increase in luminosity in a stage known as the red-giant
branch. The transition from the main sequence to the red giant branch is known as the subgiant branch. The shape
and duration of the subgiant branch varies for stars of different masses, due to differences in the internal
configuration of the star.

Very-low-mass stars

Stars less massive than about 0.4 M☉ are convective throughout most of the star. These stars continue to fuse
hydrogen in their cores until essentially the entire star has been converted to helium, and they do not develop into
subgiants. Stars of this mass have main-sequence lifetimes many times longer than the current age of the
Universe.[7]

0.4 M☉ to 1 M☉

H-R diagram for globular cluster M5, showing a short but densely-populated subgiant branch of stars slightly
less massive than the Sun

Stars less massive than the Sun have non-convective cores with a strong temperature gradient from the centre
outwards. When they exhaust hydrogen at the centre of the star, a thick shell of hydrogen outside the central core
continues to fuse without interruption. The star is considered to be a subgiant at this point although there is little
change visible from the exterior.

The helium core mass is below the Schönberg-Chandrasekhar limit and it remains in thermal equilibrium with the
fusing hydrogen shell. Its mass continues to increase and the star very slowly expands as the hydrogen shell
migrates outwards. Any increase in energy output from the shell goes into expanding the envelope of the star and the
luminosity stays approximately constant. The subgiant branch for these stars is short, horizontal, and heavily
populated, as visible in very old clusters.

After several billion years, the helium core becomes too massive to support its own weight and becomes
degenerate. Its temperature increases, the rate of fusion in the hydrogen shell increases, the outer layers become
strongly convective, and the luminosity increases at approximately the same effective temperature. The star is now
on the red giant branch.

Mass above 1 M☉

Stars more massive than the sun have a convective core on the main sequence. They develop a more massive
helium core, taking up a larger fraction of the star, before they exhaust the hydrogen in the entire convective region.
Fusion in the star ceases entirely and the core begins to contract and increase in temperature. The entire star
contracts and increases in temperature, with the radiated luminosity actually increasing despite the lack of fusion.
This continues for several million years before the core becomes hot enough to ignite hydrogen in a shell, which
reverses the temperature and luminosity increase and the star starts to expand and cool. This hook is
generally defined as the end of the main sequence and the start of the subgiant branch in these stars.

The core of stars below about 2 M☉ is still below the Schönberg-Chandrasekhar mass, but hydrogen shell fusion
quickly increases the mass of the core beyond that limit. More-massive stars already have cores above the
Schönberg-Chandrasekhar mass when they leave the main sequence. The exact initial mass at which stars will
show a hook and at which they will leave the main sequence with cores above the Schönberg-Chandrasekhar limit
depends on the metallicity and the degree of overshooting in the convective core. Low metallicity causes the central
part of even low mass cores to be convectively unstable, and overshooting causes the core to be larger when
hydrogen becomes exhausted.

Once the core exceeds the Schönberg-Chandrasekhar limit, it can no longer remain in thermal equilibrium with the hydrogen shell. It
contracts and the outer layers of the star expand and cool. The energy to expand the outer envelope causes the
radiated luminosity to decrease. When the outer layers cool sufficiently, they become opaque and force convection
to begin outside the fusing shell. The expansion stops and the radiated luminosity begins to increase, which is
defined as the start of the red giant branch for these stars. Stars with an initial mass of approximately 1-2 M☉ can
develop a degenerate helium core before this point, and that will cause the star to enter the red giant branch as for
lower-mass stars.

The core contraction and envelope expansion is very rapid, taking only a few million years. In this time the
temperature of the star will cool from its main-sequence value of 6,000-30,000 K to around 5,000 K. Relatively
few stars are seen in this stage of their evolution, and there is an apparent gap in the H-R diagram known as the
Hertzsprung gap. It is most obvious in clusters from a few hundred million to a few billion years old.

Massive stars

Beyond about 8-12 M☉, depending on metallicity, stars have hot, massive convective cores on the main sequence due
to CNO cycle fusion. Hydrogen shell fusion and subsequent core helium fusion begin quickly following core
hydrogen exhaustion, before the star can reach the red giant branch. Such stars, for example early B main
sequence stars, experience a brief and shortened subgiant branch before becoming supergiants. They may also be
assigned a giant spectral luminosity class during this transition.

In very massive O class main sequence stars, the transition from main sequence to giant to supergiant occurs
over a very narrow range of temperature and luminosity, sometimes even before core hydrogen fusion has
ended, and the subgiant class is rarely used. Values for the surface gravity, log(g), of O class stars are around
3.6 cgs for giants and 3.9 for dwarfs. For comparison, typical log(g) values for K class stars are 1.59
(Aldebaran) and 4.37 (α Centauri B), leaving plenty of scope to classify subgiants such as Cephei with
log(g) of 3.47. Examples of massive subgiant stars include 2 Orionis A and the primary star of the Circini
system, both class O stars with masses of over 20 M☉.

Properties of massive stars.

This table shows the typical lifetimes on the main sequence (MS) and subgiant branch (SB), as well as any hook
duration between core hydrogen exhaustion and the onset of shell burning, for stars with different initial masses,
all at solar metallicity (Z = 0.02). Also shown are the helium core mass, surface effective temperature, radius, and
luminosity at the start and end of the subgiant branch for each star. The end of the subgiant branch is defined to be
when the core becomes degenerate or when the luminosity starts to increase. [8]

Mass   Example    MS      Hook    SB      |  Start of subgiant branch              |  End of subgiant branch
(M☉)              (Gyr)   (Myr)   (Myr)   |  He core  Teff     Radius  Luminosity  |  He core  Teff    Radius  Luminosity
                                          |  (M☉)     (K)      (R☉)    (L☉)        |  (M☉)     (K)     (R☉)    (L☉)
0.6    61 Cyg B   58.8    N/A     5,100   |  0.047    4,763    0.9     0.9         |  0.10     4,634   1.2     0.6
1.0    The Sun    9.3     N/A     2,600   |  0.025    5,766    1.2     1.5         |  0.13     5,034   2.0     2.2
2.0    Sirius     1.2     10      22      |  0.240    7,490    3.6     36.6        |  0.25     5,220   5.4     19.6
5.0    Alkaid     0.1     0.4     15      |  0.806    14,544   6.3     1,571.4     |  0.83     4,737   43.8    866.0
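As a quick plausibility check, the radius, temperature, and luminosity columns of the table can be compared through the blackbody relation L = 4*pi*R^2*sigma*T^4. The sketch below does this for the solar-mass row; the small differences from the tabulated luminosities are consistent with rounding.

import math

SIGMA = 5.670374e-8      # Stefan-Boltzmann constant, W m^-2 K^-4
R_SUN, L_SUN = 6.957e8, 3.828e26

def lum(radius_rsun, t_eff):
    """Blackbody luminosity in solar units for the tabulated R and Teff."""
    return 4 * math.pi * (radius_rsun * R_SUN) ** 2 * SIGMA * t_eff ** 4 / L_SUN

print(f"1.0 M_sun, start of SB: L ~ {lum(1.2, 5766):.1f} L_sun (table: 1.5)")
print(f"1.0 M_sun, end of SB:   L ~ {lum(2.0, 5034):.1f} L_sun (table: 2.2)")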

In general, stars with lower metallicity are smaller and hotter than stars with higher metallicity. For subgiants, this
is complicated by different ages and core masses at the main sequence turnoff. Low metallicity stars develop a
larger helium core before leaving the main sequence, hence lower mass stars show a hook at the start of the
subgiant branch. The helium core mass of a Z=0.001 (extreme population II) 1 M☉ star at the end of the main
sequence is nearly double that of a Z=0.02 (population I) star. The low metallicity star is also over 1,000 K hotter
and over twice as luminous at the start of the subgiant branch. The difference in temperature is less pronounced at
the end of the subgiant branch, but the low metallicity star is larger and nearly four times as luminous. Similar
differences exist in the evolution of stars with other masses, and key values such as the mass of a star that will
become a supergiant instead of reaching the red giant branch are lower at low metallicity.

Subgiants in the H-R diagram

The Hertzsprung-Russell diagram (Richard Powell)

H-R diagram of the entire Hipparcos catalog

A Hertzsprung-Russell (H-R) diagram is a scatter plot of stars with temperature or spectral type on the x-axis and
absolute magnitude or luminosity on the y-axis. H-R diagrams of all stars show a clear diagonal main sequence
band containing the majority of stars, a significant number of red giants (and white dwarfs if sufficiently faint stars
are observed), with relatively few stars in other parts of the diagram.

Subgiants occupy a region above (i.e. more luminous than) the main sequence stars and below the giant stars.
There are relatively few on most H-R diagrams because the time spent as a subgiant is much less than the time
spent on the main sequence or as a giant star. Hot class B subgiants are barely distinguishable from the main
sequence stars, while cooler subgiants fill a relatively large gap between cool main sequence stars and the red
giants. Below approximately spectral type K3 the region between the main sequence and red giants is entirely
empty, with no subgiants.

Old open clusters showing a subgiant branch between the main sequence turnoff and the red giant branch,
with a hook at the younger M67 turnoff

Stellar evolutionary tracks can be plotted on an H-R diagram. For a particular mass, these trace the position of a star
throughout its life, and show a track from the initial main sequence position, along the subgiant branch, to the giant
branch. When an H-R diagram is plotted for a group of stars which all have the same age, such as a cluster, the
subgiant branch may be visible as a band of stars between the main sequence turnoff point and the red giant branch.
The subgiant branch is only visible if the cluster is sufficiently old that 1-8 M☉ stars have evolved away from the
main sequence, which requires several billion years. Globular clusters such as ω Centauri and old open clusters such
as M67 are sufficiently old that they show a pronounced subgiant branch in their color-magnitude diagrams. ω
Centauri actually shows several separate subgiant branches for reasons that are still not fully understood, but which
appear to represent stellar populations of different ages within the cluster.

Variability

Several types of variable star include subgiants:

Beta Cephei variables, early B main sequence and subgiant stars


Slowly pulsating B-type stars, mid to late B main sequence and subgiant stars
Delta Scuti variables, late A and early F main sequence and subgiant stars

Subgiants more massive than the Sun cross the Cepheid instability strip, called the first crossing since they may
cross the strip again later on a blue loop. In the 2-3 M☉ range, this includes Delta Scuti variables such as β Cas. At
higher masses the stars would pulsate as Classical Cepheid variables while crossing the instability strip, but massive
subgiant evolution is very rapid and it is difficult to detect examples. SV Vulpeculae has been proposed as a
subgiant on its first crossing but was subsequently determined to be on its second crossing.

Planets

Planets in orbit around subgiant stars include Kappa Andromedae b.

The standard mechanism for star birth is through the gravitational collapse of a cold interstellar cloud of gas and
dust. As the cloud contracts it heats due to the Kelvin-Helmholtz mechanism. Early in the process the contracting
gas quickly radiates away much of the energy, allowing the collapse to continue. Eventually, the central region
becomes sufficiently dense to trap radiation. Consequently, the central temperature and density of the collapsed
cloud increases dramatically with time, slowing the contraction, until the conditions are hot and dense enough for
thermonuclear reactions to occur in the core of the protostar. For most stars, gas and radiation pressure generated
by the thermonuclear fusion reactions within the core of the star will support it against any further gravitational
contraction. Hydrostatic equilibrium is reached and the star will spend most of its lifetime fusing hydrogen into
helium as a main-sequence star.

If, however, the mass of the protostar is less than about 0.08 M☉, normal hydrogen thermonuclear fusion reactions
will not ignite in the core. Gravitational contraction does not heat the small protostar very effectively, and before
the temperature in the core can increase enough to trigger fusion, the density reaches the point where electrons
become closely packed enough to create quantum electron degeneracy pressure. According to brown dwarf
interior models, such a protostar is never massive enough or dense enough to reach the conditions needed to
sustain hydrogen fusion: the infalling matter is prevented, by electron degeneracy pressure, from reaching the
densities and pressures needed.

Further gravitational contraction is prevented and the result is a "failed star", or brown dwarf that simply cools off
by radiating away its internal thermal energy.

High-mass brown dwarfs versus low-mass stars

Lithium is generally present in brown dwarfs and not in low-mass stars. Stars, which reach the high temperature
necessary for fusing hydrogen, rapidly deplete their lithium. Fusion of lithium-7 and a proton occurs producing two
helium-4 nuclei. The temperature necessary for this reaction is just below that necessary for hydrogen fusion.
Convection in low-mass stars ensures that lithium in the whole volume of the star is eventually depleted. Therefore,
the presence of the lithium spectral line in a candidate brown dwarf is a strong indicator that it is indeed a substellar
object. The use of lithium to distinguish candidate brown dwarfs from low-mass stars is commonly referred to as the
lithium test, and was pioneered by Rafael Rebolo, Eduardo Martín and Antonio Magazzù. However, lithium is also
seen in very young stars, which have not yet had enough time to burn it all. Heavier stars, like the Sun, can also
retain lithium in their outer atmospheres, which never get hot enough to deplete it and whose convective layer does
not mix with the core, where the lithium would be rapidly depleted. Those larger stars are also distinguishable from
brown dwarfs by their size and luminosity. Conversely, brown dwarfs at the high end of their mass range can be hot enough to deplete
their lithium when they are young. Dwarfs of mass greater than 65 MJ can burn their lithium by the time they are
half a billion years old, thus the lithium test is not perfect.
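The decision logic of the lithium test, including the caveats just described, can be summarized schematically as below. The 100 Myr and 65 MJ thresholds are the approximate figures quoted in the text, and real classification relies on detailed spectra and evolutionary models; this is only an illustrative sketch.

def lithium_test(lithium_detected, age_myr, mass_mj=None):
    """Rough interpretation of a lithium detection, per the caveats above."""
    if lithium_detected:
        if age_myr < 100:
            return "inconclusive: young stars have not yet burned their lithium"
        return "likely substellar (brown dwarf)"
    # No lithium detected
    if mass_mj is not None and mass_mj > 65 and age_myr > 500:
        return "inconclusive: massive, old brown dwarfs can also deplete lithium"
    return "likely stellar"

print(lithium_test(True, age_myr=120))                  # likely substellar
print(lithium_test(False, age_myr=1000, mass_mj=70))    # inconclusive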

Unlike stars, older brown dwarfs are sometimes cool enough that, over very long periods of time, their atmospheres
can gather observable quantities of methane which cannot form in hotter objects. Dwarfs confirmed in this fashion
include Gliese 229B.

Main-sequence stars cool, but eventually reach a minimum bolometric luminosity that they can sustain through
steady fusion. This varies from star to star, but is generally at least 0.01% that of the Sun. Brown dwarfs cool and
darken steadily over their lifetimes: sufficiently old brown dwarfs will be too faint to be detectable.

Iron rain as part of atmospheric convection processes is possible only in brown dwarfs, and not in small stars. The
spectroscopy research into iron rain is still ongoing, but not all brown dwarfs will always display this atmospheric
anomaly. In 2013, a heterogeneous iron-containing atmosphere was
imaged around the B component in the close Luhman 16 system.

Low-mass brown dwarfs versus high-mass planets

An artistic concept of the brown dwarf around the star HD 29587, a companion known as HD 29587 b, and
estimated to be about 55 Jupiter masses.

Brown dwarfs are all roughly the same radius as Jupiter. At the high end of their mass range (60-90 MJ), the
volume of a brown dwarf is governed primarily by electron-degeneracy pressure,[24] as it is in white dwarfs; at the
low end of the range (10 MJ), their volume is governed primarily by Coulomb pressure, as it is in planets. The net
result is that the radii of brown dwarfs vary by only 10-15% over the range of possible masses. This can make
distinguishing them from planets difficult.

In addition, many brown dwarfs undergo no fusion; those at the low end of the mass range (under 13 MJ) are never
hot enough to fuse even deuterium, and even those at the high end of the mass range (over 60 MJ) cool quickly
enough that after 10 million years they no longer undergo fusion.

X-ray and infrared spectra are telltale signs of brown dwarfs. Some emit X-rays; and all "warm" dwarfs continue to
glow tellingly in the red and infrared spectra until they cool to planet-like temperatures (under 1000 K).

Gas giants have some of the characteristics of brown dwarfs. Like the Sun, Jupiter and Saturn are both made
primarily of hydrogen and helium. Saturn is nearly as large as Jupiter, despite having
only 30% the mass. Three of the giant planets in the Solar System (Jupiter, Saturn, and Neptune) emit much more
heat than they receive from the Sun.[25] And all four giant planets have their
own "planetary systems", their moons. Like stars, brown dwarfs form independently, but lack sufficient mass to
"ignite" as do stars. Like all stars, they can occur singly or in close proximity to other stars. Some orbit stars and
can, like planets, have eccentric orbits.

Currently, the International Astronomical Union considers an object above 13 MJ, which is the limiting mass for
thermonuclear fusion of deuterium, to be a brown dwarf, whereas an object under that mass (and orbiting a star
or stellar remnant) is considered a planet.[26]

The 13 Jupiter-mass cutoff is a rule of thumb rather than something of precise physical significance. Larger objects
will burn most of their deuterium and smaller ones will burn only a little, and the 13 Jupiter-mass value is
somewhere in between. The amount of deuterium burnt also depends to some extent on the composition of the
object, specifically on the amount of helium and deuterium present and on the fraction of heavier elements, which
determines the atmospheric opacity and thus the radiative cooling rate.

The Extrasolar Planets Encyclopaedia includes objects up to 25 Jupiter masses, and the Exoplanet Data
Explorer up to 24 Jupiter masses.

Sub-brown dwarf

A size comparison between the Sun, a young sub-brown dwarf, and Jupiter. As the sub-brown dwarf ages,
it will gradually cool and shrink

Objects below 13 MJ, called sub-brown dwarfs or planetary-mass brown dwarfs, form in the same manner as stars
and brown dwarfs (i.e. through the collapse of a gas cloud) but have a mass below the limiting mass for
thermonuclear fusion of deuterium. Some researchers call them free-floating planets, whereas others call them
planetary-mass brown dwarfs.

Observations

Classification of brown dwarfs

Spectral class M

Artist's vision of a late-M dwarf

There are brown dwarfs with a spectral class of M6.5 or later. They are also called late-M dwarfs.

Spectral class L

Artist's vision of an L-dwarf

The defining characteristic of spectral class M, the coolest type in the long-standing classical stellar sequence, is
an optical spectrum dominated by absorption bands of titanium(II) oxide (TiO) and vanadium(II) oxide (VO)
molecules. However, GD 165B, the cool companion to the white dwarf GD 165, had none of the hallmark TiO
features of M dwarfs. The subsequent identification of many objects like GD 165B ultimately led to the definition
of a new spectral class, the L dwarfs, defined in the red optical region of the spectrum not by metal-oxide
absorption bands (TiO, VO), but by metal hydride bands (FeH, CrH, MgH, CaH) and prominent alkali metal
lines (Na I, K I, Cs I, Rb I). As of 2013, over 900 L dwarfs have been identified,[21] most by wide-field surveys:
the Two Micron All Sky Survey (2MASS), the Deep Near Infrared Survey of the Southern Sky (DENIS), and the
Sloan Digital Sky Survey (SDSS).

Spectral class T

Artist's vision of a T-dwarf

As GD 165B is the prototype of the L dwarfs, Gliese 229B is the prototype of a second new spectral class, the T
dwarfs. Whereas near-infrared (NIR) spectra of L dwarfs show strong absorption bands of H2O and carbon
monoxide (CO), the NIR spectrum of Gliese 229B is dominated by absorption bands from methane (CH4),
features that were only found in the giant planets of the Solar System and Titan. CH4, H2O, and molecular
hydrogen (H2) collision-induced absorption (CIA)
give Gliese 229B blue near-infrared colors. Its steeply sloped red optical spectrum also lacks the FeH and CrH
bands that characterize L dwarfs and instead is influenced by exceptionally broad absorption features from the alkali
metals Na and K. These differences led Kirkpatrick to propose the T spectral class for objects exhibiting H- and K-
band CH4 absorption. As of 2013, 355 T dwarfs are known.[21] NIR classification schemes for T dwarfs have recently
been developed by Adam Burgasser and Tom Geballe. Theory suggests that L dwarfs are a mixture of very-low-
mass stars and sub-stellar objects (brown dwarfs), whereas the T dwarf class is composed entirely of brown dwarfs.
Because of the absorption of sodium and potassium in the green part of the spectrum of T dwarfs, the actual
appearance of T dwarfs to human visual perception is estimated to be not brown, but the color of magenta coal tar
dye.[32][33] T-class brown dwarfs, such as WISE 0316+4307, have been detected over 100 light-years from the
Sun.

Spectral class Y

Artist's vision of a Y-dwarf

There is some doubt as to what, if anything, should be included in the class Y dwarfs. They are expected to be much
cooler than T-dwarfs. They have been modelled,[36] though there is no
well-defined spectral sequence yet with prototypes.

In 2009, the coolest known brown dwarfs had estimated effective temperatures between 500 and 600 K, and have
been assigned the spectral class T9. Three examples are the brown dwarfs CFBDS J005910.90-011401.3, ULAS
J133553.45+113005.2, and ULAS J003402.77-005206.7.[37] The spectra of these objects display absorption around
1.55 micrometers.[37] Delorme et al. have suggested that this feature is due to absorption from ammonia and that this
should be taken as indicating the T-Y transition, making these objects of type Y0. However, the feature is difficult
to distinguish from absorption by water and methane,[37] and other authors have stated that the assignment of class
Y0 is premature.

In April 2010, two newly discovered ultracool sub-brown dwarfs (UGPS 0722-05 and SDWFS1433+35) were
proposed as prototypes for spectral class Y0.

In February 2011, Luhman et al. reported the discovery of a "brown dwarf" companion to a nearby white dwarf
with a temperature of c. 300 K and mass of 7 MJ.[35] Though of planetary mass, Rodriguez et al. suggest it is
unlikely to have formed in the same manner as planets.

Shortly after that, Liu et al. published an account of a "very cold" (c. 370 K) brown dwarf orbiting another very-
low-mass brown dwarf and noted that "Given its low luminosity, atypical colors and cold temperature, CFBDS
J1458+10B is a promising candidate for the hypothesized Y spectral class."

In August 2011, scientists using data from NASA's Wide-field Infrared Survey Explorer (WISE) discovered six "Y
dwarfs": star-like bodies with temperatures as cool as the human body.

WISE 0458+6434 is the first ultra-cool brown dwarf (green dot) discovered by WISE. The green and blue comes
from infrared wavelengths mapped to visible colors.

WISE data has revealed hundreds of new brown dwarfs. Of these, fourteen are classified as cool Ys.[21] One of the
Y dwarfs, called WISE 1828+2650, was, as of August 2011, the record holder for the coldest brown dwarf;
emitting no visible light at all, this type of object resembles free-floating planets more than stars. WISE 1828+2650
was initially estimated to have an atmospheric temperature cooler than 300 K;[44] for comparison, the upper end of
room temperature is 298 K (25 °C; 77 °F). Its temperature has since been revised and newer estimates put it in the
range of 250 to 400 K (-23 to 127 °C; -10 to 260 °F).

In April 2014, WISE 0855-0714 was announced with a temperature profile estimated around 225 to 260 K
(-48 to -13 °C; -55 to 8 °F) and a mass of 3 to 10 MJ.[46] It was also unusual in that its observed parallax meant
a distance close to 7.2±0.7 light-years from the Solar System.

Spectral and atmospheric properties of brown dwarfs

The majority of flux emitted by L and T dwarfs is in the 1 to 2.5 micrometre near-infrared range. Low and
decreasing temperatures through the late M-, L-, and T-dwarf sequence result in a rich near-infrared spectrum
containing a wide variety of features, from relatively narrow lines of neutral atomic species to broad molecular
bands, all of which have different dependencies on temperature, gravity, and metallicity. Furthermore, these low
temperature conditions favor condensation out of the gas state and the formation of grains.

Typical atmospheres of known brown dwarfs range in temperature from 2,200 down to 750 K. Compared to stars,
which warm themselves with steady internal fusion, brown dwarfs cool quickly over time; more massive dwarfs
cool more slowly than less massive ones.

Observational techniques

Brown dwarfs Teide 1, Gliese 229B, and WISE 1828+2650 compared to red dwarf Gliese 229A, Jupiter and
our Sun

Coronagraphs have recently been used to detect faint objects orbiting bright visible stars, including Gliese 229B.
Sensitive telescopes equipped with charge-coupled devices (CCDs) have been used to search distant star clusters
for faint objects, including Teide 1.

Wide-field searches have identified individual faint objects, such as Kelu-1 (30 ly away).

Brown dwarfs are often discovered in surveys to discover extrasolar planets. Methods of detecting extrasolar
planets work for brown dwarfs as well, although brown dwarfs are much easier to detect.

Milestones

1995: First brown dwarf verified. Teide 1, an M8 object in the Pleiades cluster, is picked out with a CCD in
the Spanish Observatory of Roque de los Muchachos of the Instituto de Astrofísica de Canarias.

First methane brown dwarf verified. Gliese 229B is discovered orbiting red dwarf Gliese 229A (20 ly away)
using an adaptive optics coronagraph to sharpen images from the 60-inch (1.5 m) reflecting telescope at Palomar
Observatory on Southern California's Mt. Palomar; follow-up infrared spectroscopy made with their 200-inch (5
m) Hale telescope shows an abundance of methane.

1998: First X-ray-emitting brown dwarf found. Cha Halpha 1, an M8 object in the Chamaeleon I dark
cloud, is determined to be an X-ray source, similar to convective late-type stars.

15 December 1999: First X-ray flare detected from a brown dwarf. A team at the University of California
monitoring LP 944-20 (60 MJ, 16 ly away) via the Chandra X-ray Observatory, catches a 2-hour flare.
27 July 2000: First radio emission (in flare and quiescence) detected from a brown dwarf. A team
of students at the Very Large Array reported their observations of LP 944-20 in the 15 March 2001
issue of the journal Nature.
25 April 2014: Coldest known brown dwarf discovered. WISE 0855-0714 is 7.2 light-years away (the 7th
closest system to the Sun) and has a temperature between -48 and -13 degrees Celsius.

Brown dwarf as an X-ray source

Chandra image of LP 944-20 before flare and during flare

X-ray flares detected from brown dwarfs since 1999 suggest changing magnetic fields within them, similar to
those in very-low-mass stars.

With no strong central nuclear energy source, the interior of a brown dwarf is in a rapid boiling, or convective state.
When combined with the rapid rotation that most brown dwarfs exhibit, convection sets up conditions for the
development of a strong, tangled magnetic field near the surface. The flare observed by Chandra from LP 944-20
could have its origin in the turbulent magnetized hot material beneath the brown dwarf's surface. A sub-surface flare
could conduct heat to the atmosphere, allowing electric currents to flow and produce an X-ray flare, like a stroke of
lightning. The absence of X-rays from LP 944-20 during the non-flaring period is also a significant result. It sets the
lowest observational limit on steady X-ray power produced by a brown dwarf, and shows that coronas cease to exist
as the surface temperature of a brown dwarf cools below about 2,800 K and its atmosphere becomes electrically neutral.

Using NASA's Chandra X-ray Observatory, scientists have detected X-rays from a low-mass brown dwarf in a
multiple star system. This is the first time that a brown dwarf this close to its parent star(s) (Sun-like stars TWA 5A)
has been resolved in X-rays. "Our Chandra data show that the X-rays originate from the brown dwarf's coronal
plasma which is some 3 million degrees Celsius", said Yohko Tsuboi of Chuo University in Tokyo.[50]"This brown
dwarf is as bright as the Sun today in X-ray light, while it is fifty times less massive than the Sun", said Tsuboi.
"This observation, thus, raises the possibility that even massive planets might emit X-rays by themselves
during their youth!"

Recent developments

The brown dwarf Cha 110913-773444, located 500 light years away in the constellation Chamaeleon, may be in
the process of forming a miniature planetary system. Astronomers from Pennsylvania State University have
detected what they believe to be a disk of gas and dust similar to the one hypothesized to have formed the Solar
System. Cha 110913-773444 is the smallest brown dwarf found to date (8 MJ), and if it formed a planetary system,
it would be the smallest known object to have one. Their findings were published in the December 10, 2005 issue
of Astrophysical Journal Letters.

Recent observations of known brown dwarf candidates have revealed a pattern of brightening and dimming of
infrared emissions that suggests relatively cool, opaque cloud patterns obscuring a hot interior that is stirred by
extreme winds. The weather on such bodies is thought to be extremely violent, comparable to but far exceeding
Jupiter's famous storms.

On January 8, 2013 astronomers using NASA's Hubble and Spitzer space telescopes probed the stormy
atmosphere of a brown dwarf named 2MASS J22282889-431026, creating the most detailed "weather map" of a
brown dwarf thus far. It shows wind-driven, planet-sized clouds. The new research is a stepping stone toward a
better understanding not only brown dwarfs, but also of the atmospheres of planets beyond the Solar System.
NASA's WISE mission has detected 200 new brown dwarfs. There are actually fewer brown
dwarfs in our cosmic neighborhood than previously thought. Rather than one star for every brown dwarf,
there may be as many as six stars for every brown dwarf.

Planets around brown dwarfs

Artist's impression of a disc of dust and gas around a brown dwarf.

The super-Jupiter planetary-mass objects 2M1207b and 2MASS J044144 that are orbiting brown dwarfs at large
orbital distances may have formed by cloud collapse rather than accretion and so may be sub-brown dwarfs rather
than planets, which is inferred from relatively large masses and large orbits. The first discovery of a low-mass
companion orbiting a brown dwarf (Cha Hα 8) at a small orbital distance using the radial velocity technique paved
the way for the detection of planets around brown dwarfs on orbits of a few AU or smaller.[55][56] However, with a
mass ratio between the companion and primary in Cha Hα 8 of about 0.3, this system rather resembles a binary star.
Then, in 2013, the first planetary-mass companion (OGLE-2012-BLG-0358L b) in a relatively small orbit was
discovered orbiting a brown dwarf. In 2015, the first terrestrial-mass planet orbiting a
brown dwarf was found, OGLE-2013-BLG-0723LBb.

Disks around brown dwarfs have been found to have many of the same features as disks around stars; therefore, it
is expected that there will be accretion-formed planets around brown dwarfs.[59] Given the small mass of brown
dwarf disks, most planets will be terrestrial planets rather than gas giants. If a giant planet orbits a brown dwarf
across our line of sight, then, because they have approximately the same diameter, this would give a large signal
for detection by transit. The accretion zone for planets around a brown dwarf is very close to the brown dwarf
itself, so tidal forces would have a strong effect.
Planets around brown dwarfs are likely to be carbon planets depleted of water.

A 2016 study, based upon observations with Spitzer, estimates that 175 brown dwarfs need to be monitored in
order to guarantee (at 95% confidence) at least one detection of a planet.
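Sample-size figures of this kind follow from simple binomial reasoning: if each monitored target independently yields a detection with probability p, then the chance of at least one detection among N targets is 1 - (1 - p)^N. Working backwards from the 175-target figure implies p of roughly 1.7 percent; that per-target probability is an inferred illustration, not a number from the study.

import math

def targets_needed(p, confidence=0.95):
    """Number of targets N so that P(at least one detection) >= confidence,
    assuming independent per-target detection probability p."""
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - p))

print(targets_needed(0.017))          # ~175 targets
print(1 - (1 - 0.017) ** 175)         # ~0.95 probability of >= 1 detection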

Habitability

Habitability for hypothetical planets orbiting brown dwarfs has been studied. Computer models suggest that the
conditions for these bodies to host habitable planets are very stringent, the habitable zone being narrow and
decreasing with time due to the cooling of the brown dwarf. The orbits there would have to be of extremely low
eccentricity (of the order of 10^-6) to avoid strong tidal forces that would trigger a greenhouse effect on the planets,
rendering them uninhabitable.
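The closeness of such habitable zones can be illustrated by scaling the Earth-Sun distance with the square root of the luminosity; the 10^-4 L_sun used below is an assumed, representative brown-dwarf luminosity, not a figure from the text.

import math

def habitable_zone_au(l_over_lsun):
    """Distance at which a planet receives roughly Earth-like insolation,
    scaling the Earth-Sun distance as sqrt(L/L_sun)."""
    return math.sqrt(l_over_lsun)

d = habitable_zone_au(1e-4)    # assumed brown-dwarf luminosity
print(f"~{d:.3f} AU, i.e. about {d * 215:.1f} solar radii from the dwarf")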

Superlative brown dwarfs

List of brown dwarfs

WD 0137-349 B: first confirmed brown dwarf to have survived the primary's red giant phase.
In 1984, it was postulated by some astronomers that the Sun may be orbited by an undetected
brown dwarf (sometimes referred to as Nemesis) that could interact with the Oort cloud just as passing
stars can. However, this theory has fallen out of favor.[66]

Table of firsts

(Each entry gives: record; name; spectral type; RA/Dec; constellation; notes.)

First discovered: Teide 1 (Pleiades open star cluster); M8; 3h47m18.0s +24°22'31"; Taurus; imaged in 1989 and 1994.
First imaged with coronography: Gliese 229 B; T6.5; 06h10m34.62s -21°51'52.1"; Lepus; discovered 1994.
First with planemo: 2MASSW J1207334-393254; M8; 12h07m33.47s -39°32'54.0"; Centaurus.
First with a planemo in orbit: 2M1207; planet discovered in 2004.
First with a dust disk: (no entry given).
First with bipolar outflow: (no entry given).
First field type (solitary): Teide 1; M8; 3h47m18.0s +24°22'31"; Taurus; 1995.
First as a companion to a normal star: Gliese 229 B; T6.5; 06h10m34.62s -21°51'52.1"; Lepus; 1995.
First spectroscopic binary brown dwarf: PPL 15 A, B;[67] M6.5; Taurus; Basri and Martin 1999.
First eclipsing binary brown dwarf: 2M0535-05;[68][69] M6.5; Orion; Stassun et al. 2006, 2007 (distance ~450 pc).
First binary brown dwarf of T type: Epsilon Indi Ba, Bb;[70] T1 + T6; Indus; distance 3.626 pc.
First trinary brown dwarf: DENIS-P J020529.0-115925 A/B/C; L5, L8 and T0; 02h05m29.40s -11°59'29.7"; Cetus; mentioned by Delfosse et al. 1997.
First halo brown dwarf: 2MASS J05325346+8246465; sdL7; 05h32m53.46s +82°46'46.5"; Gemini; Adam J. Burgasser et al. 2003.
First with late-M spectrum: Teide 1; M8; 3h47m18.0s +24°22'31"; Taurus; 1995.
First with L spectrum: (no entry given).
First with T spectrum: Gliese 229 B; T6.5; 06h10m34.62s -21°51'52.1"; Lepus; 1995.
Latest-T spectrum: ULAS J0034-00;[71] T9; Cetus; 2007.
First with Y spectrum: CFBDS0059;[38] ~Y0; 2008 (also classified as a T9 dwarf, due to its close resemblance to other T dwarfs[71]).
First X-ray-emitting: Cha Halpha 1; M8; Chamaeleon; 1998.
First X-ray flare: LP 944-20; M9V; 03h39m35.22s -35°25'44.1"; Fornax; 1999.
First radio emission (in flare and quiescence): LP 944-20;[48] M9V; 03h39m35.22s -35°25'44.1"; Fornax; 2000.
Coolest radio-flaring brown dwarf: 2MASSI J10475385+2124234; T6.5; 10h47m53.85s +21°24'23.4"; Leo; Route & Wolszczan 2012.
First potential brown dwarf auroras discovered: LSR J1835+3259; M8.5; Lyra; 2015.
First detection of differential rotation in a brown dwarf: TVLM 513-46546; M9; 15h01m08.3s +22°50'02"; Boötes; equator rotates faster than poles by 0.022 radians/day; Wolszczan & Route 2014.



Table of extremes

(Each entry gives: record; name; spectral type; RA/Dec; constellation; notes.)

Oldest: (no entry given).
Youngest: (no entry given).
Heaviest: (no entry given).
Metal-rich: (no entry given).
Metal-poor: 2MASS J05325346+8246465; sdL7; 05h32m53.46s +82°46'46.5"; Gemini; distance is ~10-30 pc, metallicity is 0.1-0.01 Z☉.
Lightest: OTS 44; M9.5; Chamaeleon; has a mass range of 11.5-15 MJ, distance is ~550 ly.
Largest: (no entry given).
Smallest: EBLM J0555-57Ab.[72][73]
Fastest rotating: WISEPC J112254.73+255021.5; T6; 11h22m54.73s +25°50'21.5"; Leo; rotational period of 17, 35, or 52 min; Route & Wolszczan 2016.
Farthest: WISP 0307-7243;[74] T4.5; 03h07m45.12s -72°43'57.5"; distance 400 pc.
Nearest: Luhman 16; distance ~6.5 ly.
Brightest: Teegarden's star; M6.5; J magnitude 8.4.
Dimmest: WISE 1828+2650; Y2; J magnitude 23.
Hottest: (no entry given).
Coolest: WISE 0855-0714;[75] temperature -48 to -13 °C.
Most dense: COROT-3b,[76] a transiting brown dwarf; it has 22 MJ with a diameter 1.01±0.07 times that of Jupiter, and is slightly denser than osmium at standard conditions.
Least dense: (no entry given).

A dense starfield in Sagittarius

MAIN SEQUENCE

Main sequence

In astronomy, the main sequence is a continuous and distinctive band of stars that appears on plots of stellar color
versus brightness. These color-magnitude plots are known as Hertzsprung-Russell diagrams after their
co-developers, Ejnar Hertzsprung and Henry Norris Russell. Stars on this band are known as main-sequence stars
or "dwarf" stars. These are the most numerous true stars in the universe, and include the Earth's Sun.

After a star has formed, it generates thermal energy in the dense core region through nuclear fusion of hydrogen
atoms into helium. During this stage of the star's lifetime, it is located along the main sequence at a position
determined primarily by its mass, but also based upon its chemical composition and other factors. All main-
sequence stars are in hydrostatic equilibrium, where outward thermal pressure from the hot core is balanced by the
inward pressure of gravitational collapse from the overlying layers. The strong dependence of the rate of energy
generation in the core on the temperature and pressure helps to sustain this balance. Energy generated at the core
makes its way to the surface and is radiated away at the photosphere. The energy is carried by either radiation or
convection, with the latter occurring in regions with steeper temperature gradients, higher opacity or both.

The main sequence is sometimes divided into upper and lower parts, based on the dominant process that a star
uses to generate energy. Stars below about 1.5 times the mass of the Sun (or 1.5 solar masses, M☉) primarily fuse
hydrogen atoms together in a series of stages to form helium, a sequence called the proton–proton chain. Above this
mass, in the upper main sequence, the nuclear fusion process mainly uses atoms of carbon, nitrogen and oxygen as
intermediaries in the CNO cycle that produces helium from hydrogen atoms. Main-sequence stars with more than two
solar masses undergo convection in their core regions, which acts to stir up the newly created helium and maintain
the proportion of fuel needed for fusion to occur. Below this mass, stars have cores that are entirely radiative
with convective zones near the surface. With decreasing stellar mass, the proportion of the star forming a
convective envelope steadily increases, whereas main-sequence stars below 0.4 M☉ undergo convection throughout
their mass. When core convection does not occur, a helium-rich core develops surrounded by an outer layer of hydrogen.

In general, the more massive a star is, the shorter its lifespan on the main sequence. After the hydrogen fuel at the
core has been consumed, the star evolves away from the main sequence on the HR diagram. The behavior of a star
now depends on its mass, with stars below 0.23 M☉ becoming white dwarfs directly, whereas stars with up to ten
solar masses pass through a red giant stage. More massive stars can explode as a supernova or collapse directly
into a black hole.

For a more-massive protostar, the core temperature will eventually reach 10 million kelvin, initiating the
proton–proton chain reaction and allowing hydrogen to fuse, first to deuterium and then to helium. In stars of
slightly over 1 M☉ (2.0×10^30 kg), the carbon–nitrogen–oxygen fusion reaction (CNO cycle) contributes a large portion of the
energy generation. The onset of nuclear fusion leads relatively quickly to a hydrostatic equilibrium in which
energy released by the core maintains a high gas pressure, balancing the weight of the star's matter and preventing
further gravitational collapse. The star thus evolves rapidly to a stable state, beginning the main-sequence phase of
its evolution.

A new star will sit at a specific point on the main sequence of the Hertzsprung–Russell diagram, with the main-
sequence spectral type depending upon the mass of the star. Small, relatively cold, low-mass red dwarfs fuse
hydrogen slowly and will remain on the main sequence for hundreds of billions of years or longer, whereas
massive, hot O-type stars will leave the main sequence after just a few million years. A mid-sized yellow dwarf
star, like the Sun, will remain on the main sequence for about 10 billion years. The Sun is thought to be in the
middle of its main sequence lifespan.
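
Since the available fuel scales roughly with a star's mass while the burn rate scales with its luminosity, the main-sequence lifetime can be estimated very crudely as t ≈ 10 Gyr × (M/M☉)/(L/L☉). The short Python sketch below illustrates this; the 10 Gyr normalization and the assumed mass–luminosity relation L ∝ M^3.5 are rough illustrative assumptions, not values taken from this text.

# A minimal sketch (illustrative assumptions only) of main-sequence lifetimes from
# t ~ t_sun * (M / L), with L/L_sun ~ (M/M_sun)**3.5 assumed for the mass-luminosity relation.

T_SUN_GYR = 10.0  # approximate main-sequence lifetime of the Sun, in Gyr

def ms_lifetime_gyr(mass_solar: float, ml_exponent: float = 3.5) -> float:
    """Rough main-sequence lifetime in Gyr for a star of the given mass (in M_sun)."""
    luminosity_solar = mass_solar ** ml_exponent      # assumed mass-luminosity relation
    return T_SUN_GYR * mass_solar / luminosity_solar  # available fuel / burn rate

if __name__ == "__main__":
    for m in (0.5, 1.0, 2.0, 10.0, 25.0):
        print(f"M = {m:5.1f} M_sun  ->  t_MS ~ {ms_lifetime_gyr(m):10.4f} Gyr")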

The evolutionary tracks of stars with different initial masses on the Hertzsprung–Russell diagram. The tracks
start once the star has evolved to the main sequence and stop when fusion stops (for massive stars) and at the
end of the red giant branch (for stars of 1 M☉ and less).
A yellow track is shown for the Sun, which will become a red giant after its main-sequence phase ends before
expanding further along the asymptotic giant branch, which will be the last phase in which the Sun undergoes
fusion.

History

Hot and brilliant O-type main-sequence stars in star-forming regions. These are all regions of star
formation that contain many hot young stars including several bright stars of spectral type O.

In the early part of the 20th century, information about the types and distances of stars became more readily
available. The spectra of stars were shown to have distinctive features, which allowed them to be categorized.
Annie Jump Cannon and Edward C. Pickering at Harvard

College Observatory developed a method of categorization that became known as the Harvard Classification
Scheme, published in the Harvard Annals in 1901.

In Potsdam in 1906, the Danish astronomer Ejnar Hertzsprung noticed that the reddest stars classified as K and
M in the Harvard scheme could be divided into two distinct groups. These stars are either much brighter than the
Sun, or much fainter. To distinguish these groups, he called them "giant" and "dwarf" stars. The following year he
began studying star clusters: large groupings of stars that are co-located at approximately the same distance. He
published the first plots of color versus luminosity for these stars. These plots showed a prominent and continuous
sequence of stars, which he named the Main Sequence.

At Princeton University, Henry Norris Russell was following a similar course of research. He was studying the
relationship between the spectral classification of stars and their actual brightness as corrected for distance, their
absolute magnitude. For this purpose he used a set of stars that had reliable parallaxes and many of which had been
categorized at Harvard. When he plotted the spectral types of these stars against their absolute magnitude, he
found that dwarf stars followed a distinct relationship. This allowed the real brightness of a dwarf star to be
predicted with reasonable accuracy.

Of the red stars observed by Hertzsprung, the dwarf stars also followed the spectra-luminosity relationship
discovered by Russell. However, the giant stars are much brighter than dwarfs and so do not follow the same
relationship. Russell proposed that the "giant stars must have low density or great surface-brightness, and the
reverse is true of dwarf stars". The same curve also showed that there were very few faint white stars.

In 1933, Bengt Strömgren introduced the term Hertzsprung–Russell diagram to denote a luminosity–
spectral class diagram.[9] This name reflected the parallel development of this technique by both
Hertzsprung and Russell earlier in the century.

As evolutionary models of stars were developed during the 1930s, it was shown that, for stars of a uniform
chemical composition, a relationship exists between a star's mass and its luminosity and radius. That is, for a given
mass and composition, there is a unique solution for determining the star's radius and luminosity. This became
known as the Vogt–Russell theorem, named after Heinrich Vogt and Henry Norris Russell. By this theorem, when a
star's chemical composition and its position on the main sequence is known, so too is the star's mass and radius.
(However, it was subsequently discovered that the theorem breaks down somewhat for stars of non-uniform
composition.)

A refined scheme for stellar classification was published in 1943 by William Wilson Morgan and Philip Childs
Keenan.[11] The MK classification assigned each star a spectral type (based on the Harvard classification) and a
luminosity class. The Harvard classification had been developed by assigning a different letter to each star based on
the strength of the hydrogen spectral line, before the relationship between spectra and temperature was known.
When ordered by temperature and when duplicate classes were removed, the spectral types of stars followed, in
order of decreasing temperature with colors ranging from blue to red, the sequence O, B, A, F, G, K and M. (A
popular mnemonic for memorizing this sequence of stellar classes is "Oh Be A

Fine Girl/Guy, Kiss Me".) The luminosity class ranged from I to V, in order of decreasing luminosity. Stars
of luminosity class V belonged to the main sequence.

Formation of main sequence

Star formation, Protostar, and Pre-main-sequence star

When a protostar is formed from the collapse of a giant molecular cloud of gas and dust in the local interstellar
medium, the initial composition is homogeneous throughout, consisting of about 70% hydrogen, 28% helium and
trace amounts of other elements, by mass.[13]The initial mass of the star depends on the local conditions within the
cloud. (The mass distribution of newly formed stars is described empirically by the initial mass function.)[14]During
the initial collapse, this pre-main-sequence star generates energy through gravitational contraction. Upon reaching a
suitable density, energy generation is begun at the core using an exothermic nuclear fusion process that converts
hydrogen into helium.[12]

When nuclear fusion of hydrogen becomes the dominant energy production process and the excess energy gained
from gravitational contraction has been lost,[15] the star lies along a curve on the Hertzsprung–Russell diagram (or
HR diagram) called the standard main sequence. Astronomers will sometimes refer to this stage as "zero age
main sequence", or ZAMS.[16]The ZAMS curve can be calculated using computer models of stellar properties at
the point when
stars begin hydrogen fusion. From this point, the brightness and surface temperature of stars typically increase
with age.[17]

A star remains near its initial position on the main sequence until a significant amount of hydrogen in the core
has been consumed, then begins to evolve into a more luminous star. (On the HR diagram, the evolving star
moves up and to the right of the main sequence.) Thus the main sequence represents the primary hydrogen-
burning stage of a star's lifetime.[12]

Properties of main sequence

The majority of stars on a typical HR diagram lie along the main-sequence curve. This line is pronounced because
both the spectral type and the luminosity depend only on a star's mass, at least to zeroth-order approximation, as
long as it is fusing hydrogen at its core, and that is what almost all stars spend most of their "active" lives
doing.[18]

The temperature of a star determines its spectral type via its effect on the physical properties of plasma in its
photosphere. A star's energy emission as a function of wavelength is influenced by both its temperature and
composition. A key indicator of this energy distribution is given by the color index, B − V, which measures the
star's magnitude in blue (B) and green-yellow (V) light by means of filters.[note 1] This difference in magnitude
provides a measure of a star's temperature.

Dwarf terminology

Main-sequence stars are called dwarf stars, but this terminology is partly historical and can be somewhat confusing.
For the cooler stars, dwarfs such as red dwarfs, orange dwarfs, and yellow dwarfs are indeed much smaller and
dimmer than other stars of those colors. However, for hotter blue and white stars, the size and brightness difference
between so-called "dwarf" stars that are on the main sequence and the so-called "giant" stars that are not becomes
smaller; for the hottest stars it is not directly observable. For those stars the terms "dwarf" and "giant" refer to
differences in spectral lines which indicate if a star is on the main sequence or off it.
Nevertheless, very hot main-sequence stars are still sometimes called dwarfs, even though they have roughly the
same size and brightness as the "giant" stars of that temperature.[19]

The common use of "dwarf" to mean main sequence is confusing in another way, because there are dwarf stars
which are not main-sequence stars. For example, a white dwarf is the dead core that is left after a star has shed
its outer layers; it is much smaller than a main-sequence star, roughly the size of Earth. These
represent the final evolutionary stage of many main-sequence stars.[20]

Parameters

Comparison of main sequence stars of each spectral class

By treating the star as an idealized energy radiator known as a black body, the luminosity L and radius R can be
related to the effective temperature Teff by the Stefan–Boltzmann law:

L = 4πR^2 σTeff^4

where σ is the Stefan–Boltzmann constant. As the position of a star on the HR diagram shows its approximate
luminosity, this relation can be used to estimate its radius.[21]
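
The relation above can be inverted to estimate a radius from an observed luminosity and effective temperature. The following Python sketch does this for a black body; the physical constants and the Sirius A numbers used in the example are standard approximate values assumed here only for illustration.

# A small sketch inverting L = 4*pi*R^2*sigma*Teff^4 to estimate a stellar radius.
import math

SIGMA = 5.670e-8      # Stefan-Boltzmann constant, W m^-2 K^-4
L_SUN = 3.828e26      # solar luminosity, W
R_SUN = 6.957e8       # solar radius, m

def radius_from_L_T(lum_solar: float, teff_kelvin: float) -> float:
    """Return the radius (in solar radii) implied by L and Teff for a black body."""
    lum_watts = lum_solar * L_SUN
    radius_m = math.sqrt(lum_watts / (4.0 * math.pi * SIGMA * teff_kelvin ** 4))
    return radius_m / R_SUN

if __name__ == "__main__":
    # Rough values for Sirius A, assumed only as an example: L ~ 25 L_sun, Teff ~ 9940 K
    print(f"Estimated radius of Sirius A: ~{radius_from_L_T(25.0, 9940.0):.2f} R_sun")

For these example values the estimate comes out near 1.7 R_sun, in line with the measured radius of Sirius A.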

The mass, radius and luminosity of a star are closely interlinked, and their respective values can be approximated
by three relations. First is the Stefan–Boltzmann law, which relates the luminosity L, the radius R and the surface
temperature Teff. Second is the mass–luminosity relation, which relates the luminosity L and the mass M. Finally,
the relationship between M and R is close to linear. The ratio of M to R increases by a factor of only three over
2.5 orders of magnitude of M. This relation is roughly proportional to the star's inner temperature TI, and its
extremely slow increase reflects the fact that the rate of energy generation in the core strongly depends on this
temperature, whereas it has to fit the mass–luminosity relation. Thus, a too high or too low temperature will result
in stellar instability.

A better approximation is to take ε = L/M, the energy generation rate per unit mass, as ε is
proportional to TI^15, where TI is the core temperature. This is suitable for stars at least as massive as the Sun,
exhibiting the CNO cycle, and gives the better fit R ∝ M^0.78.[22]
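
The quoted fit R ∝ M^0.78 can be turned into a one-line estimator. In the sketch below the relation is normalized so that one solar mass gives one solar radius; that normalization is an assumption made here for illustration, not something stated in the text.

# A short sketch of the approximate main-sequence fit R proportional to M**0.78, normalized to the Sun.

def radius_from_mass(mass_solar: float, exponent: float = 0.78) -> float:
    """Approximate main-sequence radius (in R_sun) from mass (in M_sun)."""
    return mass_solar ** exponent

if __name__ == "__main__":
    for m in (0.5, 1.0, 2.0, 10.0, 40.0):
        print(f"M = {m:5.1f} M_sun  ->  R ~ {radius_from_mass(m):6.2f} R_sun")

For 40 M_sun this gives about 18 R_sun, close to the O6 entry in the table of sample parameters below.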

Sample parameters

The table below shows typical values for stars along the main sequence. The values of luminosity (L), radius
(R) and mass (M) are relative to the Sun, a dwarf star with a spectral classification of G2 V. The actual values
for a star may vary by as much as 20–30% from the values listed below.[23]

Table of main-sequence stellar parameters[24]

Stellar Class | Radius R/R☉ | Mass M/M☉ | Luminosity L/L☉ | Temperature (K) | Examples[25]
O6 | 18 | 40 | 500,000 | 38,000 | Theta1 Orionis C
B0 | 7.4 | 18 | 20,000 | 30,000 | Phi1 Orionis
B5 | 3.8 | 6.5 | 800 | 16,400 | Pi Andromedae A
A0 | 2.5 | 3.2 | 80 | 10,800 | Alpha Coronae Borealis A
A5 | 1.7 | 2.1 | 20 | 8,620 | Beta Pictoris
F0 | 1.3 | 1.7 | 6 | 7,240 | Gamma Virginis
F5 | 1.2 | 1.3 | 2.5 | 6,540 | Eta Arietis
G0 | 1.05 | 1.10 | 1.26 | 5,920 | Beta Comae Berenices
G2 | 1.00 | 1.00 | 1.00 | 5,780 | Sun[note 2]
G5 | 0.93 | 0.93 | 0.79 | 5,610 | Alpha Mensae
K0 | 0.85 | 0.78 | 0.40 | 5,240 | 70 Ophiuchi A
K5 | 0.74 | 0.69 | 0.16 | 4,410 | 61 Cygni A[26]
M0 | 0.63 | 0.47 | 0.063 | 3,920 | Gliese 185[27]
M5 | 0.32 | 0.21 | 0.0079 | 3,120 | EZ Aquarii A
M8 | 0.13 | 0.10 | 0.0008 | 2,660 | Van Biesbroeck's star[28]

ENERGY GENERATION

Stellar nucleosynthesis
Stellar nucleosynthesis is the process by which the natural abundances of the chemical elements within stars change
due to nuclear fusion reactions in the cores and their overlying mantles. Stars are said to evolve (age) with changes
in the abundances of the elements within. Core fusion increases the atomic weight of elements and reduces the
number of particles, which would lead to a pressure loss except that gravitation leads to contraction, an increase of
temperature, and a balance of forces.[1] A star loses most of its mass when material is ejected late in its stellar
lifetime, thereby increasing the abundance of elements heavier than helium in the interstellar medium. The term
supernova nucleosynthesis is used to describe the creation of elements during the evolution and explosion of a
presupernova star, a concept put forth by Fred Hoyle in 1954.[2] A stimulus to the development of the theory of
nucleosynthesis was the discovery of variations in the abundances of elements found in the universe. Those
abundances, when plotted on a graph as a function of atomic number of the element, have a jagged sawtooth shape
that varies by factors of tens of millions, which suggested a natural process rather than a random one. Such a graph
of the abundances is shown in the article on the history of nucleosynthesis theory. Of the several processes of
nucleosynthesis, stellar nucleosynthesis is the dominating contributor to elemental abundances in the universe.

A second stimulus to understanding the processes of stellar nucleosynthesis occurred during the
20th century, when it was realized that the energy released from nuclear fusion reactions accounted for the
longevity of the Sun as a source of heat and light.[3] The fusion of nuclei in a
star, starting from its initial hydrogen and helium abundance, provides that energy, and the synthesis of new nuclei is a
byproduct of that fusion process. This became clear during the decade prior to World War II. The fusion-produced
nuclei are restricted to those only slightly heavier than the fusing nuclei; thus they do not contribute heavily to the
natural abundances of the elements. Nonetheless, this insight raised the plausibility of explaining all of the natural
abundances of elements in this way. The prime energy producer in our Sun is the fusion of hydrogen to form helium,
which occurs at a solar-core temperature of 14 million kelvin.

In 1920, Arthur Eddington, on the basis of the precise measurements of atomic masses by F. W. Aston and a
preliminary suggestion by Jean Perrin, proposed that stars obtained their energy
from nuclear fusion of hydrogen to form helium and raised the possibility that the heavier elements are
produced in stars.[4][5][6] This was a preliminary step toward the idea of
nucleosynthesis. In 1928, George Gamow derived what is now called the Gamow factor, a quantum-mechanical
formula that gave the probability of bringing two nuclei sufficiently close for the strong nuclear force to overcome
the Coulomb barrier. The Gamow factor was used in the decade that followed by Atkinson and Houtermans and
later by Gamow himself and Edward Teller to derive the rate at which nuclear reactions would proceed at the high
temperatures believed to exist in stellar interiors.

In 1939, in a paper entitled "Energy Production in Stars", Hans Bethe analyzed the different possibilities for
reactions by which hydrogen is fused into helium.[7] He defined two processes that he believed to be the sources of
energy in stars. The first one, the proton–proton chain reaction, is the dominant energy source in stars with masses
up to about the mass of the Sun. The second process, the carbon–nitrogen–oxygen cycle, which was also considered
by Carl Friedrich von Weizsäcker in 1938, is most important in more massive stars. These works concerned the
energy generation capable of keeping stars hot. A clear physical description of the proton–proton chain and of the
CNO cycle appears in a 1968 textbook.[8] Bethe's two papers did not address the creation of heavier nuclei, however.
That theory was begun by Fred Hoyle in 1946 with his argument that a collection of very hot nuclei would assemble
into iron.[9] Hoyle followed that in
1954 with a large paper describing how advanced fusion stages within stars would synthesize elements between
carbon and iron in mass.[10] This is the dominant work in stellar
nucleosynthesis.[11] It provided the roadmap to how the most abundant elements on Earth had been synthesized
from initial hydrogen and helium, making clear how those abundant elements increased their galactic abundances
as the galaxy aged.

Quickly, Hoyle's theory was expanded to other processes, beginning with the publication of a celebrated review
paper in 1957 by Burbidge, Burbidge, Fowler and Hoyle (commonly referred to as the B2FH paper).[12]This
review paper collected and refined earlier research into a heavily cited picture that gave promise of accounting for
the observed relative abundances of the elements; but it did not itself enlarge Hoyle's 1954 picture for the origin of
primary nuclei as much as many assumed, except in the understanding of nucleosynthesis of those elements
heavier than iron. Significant improvements were made by Alastair G. W. Cameron and by Donald D. Clayton.
Cameron presented his own independent approach[13](following Hoyle's approach for the most part) of
nucleosynthesis. He introduced computers into time-dependent
calculations of evolution of nuclear systems. Clayton calculated the first time-dependent models of the S-
process[14]and of the R-process,[15]as well as of the burning of silicon into the abundant
alpha-particle nuclei and iron-group elements,[16]and discovered radiogenic chronologies[17]for determining the age
of the elements. The entire research field expanded rapidly in the 1970s.

Key reactions

Cross section of a supergiant showing nucleosynthesis and elements formed.

A version of the periodic table indicating the origins including stellar nucleosynthesis of the
elements. All elements above 103 (lawrencium) are also manmade and are not included. Source;
[Link] work.

The most important reactions in stellar nucleosynthesis:

Hydrogen fusion:
o Deuterium fusion
o The proton–proton chain
o The carbon–nitrogen–oxygen cycle
Helium fusion:
o The triple-alpha process
o The alpha process
Fusion of heavier elements:
o Lithium burning: a process found most commonly in brown dwarfs
o Carbon-burning process
o Neon-burning process
o Oxygen-burning process
o Silicon-burning process
Production of elements heavier than iron:
o Neutron capture:
The R-process
The S-process
o Proton capture:
The Rp-process
The P-process
o Photodisintegration

Hydrogen fusion

Proton–proton chain reaction, CNO cycle, and Deuterium fusion

Illustration of the proton–proton chain reaction sequence

Overview of the CNO-I cycle. The helium nucleus is released at the top-left step.

Hydrogen fusion (nuclear fusion of four protons to form a helium-4 nucleus) is the dominant process that
generates energy in the cores of main-sequence stars. It is also called "hydrogen burning", which should not be
confused with the chemical combustion of hydrogen in an oxidizing atmosphere. There are two predominant
processes by which stellar hydrogen fusion occurs: proton-proton chain and the carbon-nitrogen-oxygen (CNO)
cycle. Ninety percent of all stars, with the exception of white dwarfs, are fusing hydrogen by these two processes.

In the cores of lower-mass main-sequence stars such as the Sun, the dominant energy production process is the
proton–proton chain reaction. This creates a helium-4 nucleus through a sequence of chain reactions that begin with
the fusion of two protons to form a deuterium nucleus (one proton plus one neutron) along with an ejected positron
and neutrino.[19] In each complete fusion cycle, the proton–proton chain reaction releases about 26.2 MeV.[19] The
proton–proton chain reaction cycle is relatively insensitive to temperature; a 10% rise of temperature would increase
energy production by this method by 46%, hence, this hydrogen fusion process can occur in up to a third of the
star's radius and occupy half the star's mass. For stars above 35% of the Sun's mass,[20] the energy flux toward the
surface is sufficiently low and energy transfer from the core region remains by radiative heat transfer, rather than
by convective heat transfer.[21] As a result, there is little mixing of fresh hydrogen into the core or fusion products
outward.
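
The quoted energy release can be checked directly from the mass defect of the net reaction 4 ¹H → ⁴He. The atomic masses used below are standard reference values assumed for this worked example.

# A worked check that fusing four hydrogen atoms into one helium-4 atom releases ~26-27 MeV.

M_H1  = 1.007825    # atomic mass of hydrogen-1, in unified atomic mass units (u)
M_HE4 = 4.002602    # atomic mass of helium-4, in u
U_TO_MEV = 931.494  # energy equivalent of 1 u, in MeV

mass_defect_u = 4 * M_H1 - M_HE4          # mass converted to energy per helium nucleus formed
q_value_mev = mass_defect_u * U_TO_MEV    # total energy release, ~26.7 MeV

print(f"Mass defect: {mass_defect_u:.6f} u  ->  Q ~ {q_value_mev:.1f} MeV per helium-4 nucleus")
# A small part of this (roughly 2% for the proton-proton chain) is carried away by
# neutrinos, which is why about 26.2 MeV is quoted as deposited in the star per cycle.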

In higher-mass stars, the dominant energy production process is the CNO cycle, which is a catalytic cycle that uses
nuclei of carbon, nitrogen and oxygen as intermediaries and in the end produces a helium nucleus as with the
proton–proton chain. During a complete CNO cycle, 25.0 MeV of energy is released. The difference in energy
production of this cycle, compared to the proton–proton chain reaction, is accounted for by the energy lost through
neutrino emission.[19] The CNO cycle is very temperature sensitive; a 10% rise of temperature would produce a
350% rise in energy production. About 90% of the CNO cycle energy generation occurs within the inner 15% of the
star's mass, hence it is strongly concentrated at the core. This results in such an intense outward energy flux that
convective energy transfer becomes more important than does radiative transfer. As a result, the core region
becomes a convection zone, which stirs the hydrogen fusion region and keeps it well mixed with the surrounding
proton-rich region.[23] This core convection occurs in stars where the CNO cycle contributes more than 20%
of the total energy. As the star ages and the core temperature increases, the region occupied by the convection
zone slowly shrinks from 20% of the mass down to the inner 8% of the mass.
Our Sun produces 10% of its energy from the CNO cycle.

The type of hydrogen fusion process that dominates in a star is determined by the temperature dependency
differences between the two reactions. The proton–proton chain reaction starts at temperatures of about 4×10^6 K,
making it the dominant fusion mechanism in smaller stars. A self-maintaining CNO chain requires a higher
temperature of approximately 16×10^6 K, but thereafter it increases more rapidly in efficiency as the temperature
rises than does the proton–proton reaction. Above approximately 17×10^6 K, the CNO cycle becomes the dominant
source of energy. This temperature is achieved in the cores of main-sequence stars with at least
1.3 times the mass of the Sun.[26] The Sun itself has a core temperature of about 15.7×10^6 K. As a main-sequence star
ages, the core temperature will rise, resulting in a steadily increasing contribution from its CNO cycle.
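
The temperature sensitivities described above can be illustrated with a toy comparison. The sketch below assumes simple power laws, roughly T^4 for the proton–proton chain and T^15 for the CNO cycle, normalized so that the two rates are equal at about 17 million K; real reaction rates are far more complicated, so this is only a qualitative illustration.

# A toy comparison (illustrative power laws only) of PP-chain and CNO-cycle energy
# generation rates as a function of core temperature.

T_CROSS = 17.0e6  # K, assumed crossover temperature where the two rates are taken as equal

def pp_rate(temp_k: float) -> float:
    """Relative proton-proton chain rate (dimensionless), assumed to scale as T^4."""
    return (temp_k / T_CROSS) ** 4

def cno_rate(temp_k: float) -> float:
    """Relative CNO-cycle rate (dimensionless), assumed to scale as T^15."""
    return (temp_k / T_CROSS) ** 15

if __name__ == "__main__":
    for t in (4.0e6, 10.0e6, 15.7e6, 17.0e6, 20.0e6, 25.0e6):
        dominant = "CNO" if cno_rate(t) > pp_rate(t) else "PP"
        print(f"T = {t/1e6:5.1f} MK   PP = {pp_rate(t):9.3f}   CNO = {cno_rate(t):12.4f}   dominant: {dominant}")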

Helium fusion
Triple-alpha process and Alpha process

Main sequence stars accumulate helium in their cores as a result of hydrogen fusion, but the core does not become
hot enough to initiate helium fusion. Helium fusion first begins when a star leaves the red giant branch after
accumulating sufficient helium in its core to ignite it. In stars around the mass of the Sun, this begins at the tip of the
red giant branch with a helium flash from a degenerate helium core and the star moves to the horizontal branch
where it burns helium in its core. More massive stars ignite helium in their cores without a flash and execute a blue
loop before reaching the asymptotic giant branch. Despite the name, stars on a blue loop from the red giant branch
are typically yellow giants, possibly Cepheid variables. They fuse helium until the core is largely carbon and
oxygen. The most massive stars become supergiants when they leave

the main sequence and quickly start helium fusion as they become red supergiants. After helium is exhausted in the
core of a star, helium fusion will continue in a shell around the carbon–oxygen core.

In all cases, helium is fused to carbon via the triple-alpha process. This can then form oxygen, neon, and heavier
elements via the alpha process. In this way, the alpha process preferentially produces elements with even numbers
of protons by the capture of helium nuclei. Elements with odd numbers of protons are formed by other fusion
pathways.

All main-sequence stars have a core region where energy is generated by nuclear fusion. The temperature and
density of this core are at the levels necessary to sustain the energy production that will support the remainder of
the star. A reduction of energy production would cause the overlaying mass to compress the core, resulting in an
increase in the fusion rate because of higher temperature and pressure. Likewise an increase in energy production
would cause the star to expand, lowering the pressure at the core. Thus the star forms a self-regulating system in
hydrostatic equilibrium that is stable over the course of its main sequence lifetime.

This graph shows the logarithm of the relative energy output (ε) for the proton–proton (PP), CNO and triple-α
fusion processes at different temperatures. The dashed line shows the combined energy generation of the PP
and CNO processes within a star. At the Sun's core temperature, the PP process is more efficient.

Hydrostatic equilibrium.

In fluid mechanics, a fluid is said to be in hydrostatic equilibrium or hydrostatic
balance when it is at rest, or when the flow velocity at each point is constant over time. This occurs when external
forces such as gravity are balanced by a pressure-gradient force.[1] For instance, the pressure-gradient force
prevents gravity from collapsing Earth's atmosphere into a thin, dense shell, whereas gravity prevents the pressure-
gradient force from diffusing the atmosphere into space.

Hydrostatic equilibrium is the current distinguishing criterion between dwarf planets and small Solar System bodies,
and has other roles in astrophysics and planetary geology. This qualification typically means that the object is
symmetrically rounded into a spheroid or ellipsoid
shape, where any irregular surface features are due to a relatively thin solid crust. There are 31 observationally
confirmed such objects (apart from the Sun), sometimes called planemos,[2] in the Solar System, seven more[3] that
are virtually certain, and a hundred or so more that are likely.[3]

Mathematical consideration

If the highlighted volume of fluid is not moving, the forces on it upwards must equal the forces
downwards.

Derivation from force summation

Newton's laws of motion state that a volume of a fluid that is not in motion or that is in a state of constant velocity
must have zero net force on it. This means the sum of the forces in a given direction must be opposed by an equal
sum of forces in the opposite direction. This force balance is called a hydrostatic equilibrium.

The fluid can be split into a large number of cuboid volume elements; by considering a single element, the
action of the fluid can be derived.

There are three forces. The force downwards onto the top of the cuboid from the pressure, P, of the fluid above it is,
from the definition of pressure,

F_top = −P_top · A

Similarly, the force on the volume element from the pressure of the fluid below pushing upwards is

F_bottom = P_bottom · A

Finally, the weight of the volume element causes a force downwards. If the density is ρ, the volume is V and
g the standard gravity, then:

F_weight = −ρ · g · V = −ρ · g · A · h

The volume of this cuboid is equal to the area of the top or bottom, times the height: the formula for
finding the volume of a cuboid.

By balancing these forces, the total force on the fluid is

F_total = F_bottom + F_top + F_weight = P_bottom · A − P_top · A − ρ · g · A · h

This sum equals zero if the fluid's velocity is constant. Dividing by A,

0 = P_bottom − P_top − ρ · g · h

Or,

P_top − P_bottom = −ρ · g · h

P_top − P_bottom is a change in pressure, and h is the height of the volume element, a change in the distance above
the ground. By saying these changes are infinitesimally small, the equation can be written in differential form:

dP = −ρ · g · dh

Density changes with pressure, and gravity changes with height, so the equation would be:

dP = −ρ(P) · g(h) · dh
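
The differential relation can be checked numerically. The sketch below integrates dP = −ρ g dh for an isothermal ideal-gas atmosphere with assumed Earth-like values and compares the result with the analytic exponential solution; all of the numbers are illustrative assumptions, not data from this text.

# A numeric check of hydrostatic equilibrium for an isothermal ideal-gas atmosphere.
import math

G0 = 9.81          # m s^-2, surface gravity (assumed constant with height here)
T = 288.0          # K, assumed uniform temperature
M_AIR = 0.0289     # kg mol^-1, assumed mean molar mass of air
R_GAS = 8.314      # J mol^-1 K^-1, gas constant
P0 = 101325.0      # Pa, assumed surface pressure

def density(pressure: float) -> float:
    """Ideal-gas density at the assumed uniform temperature."""
    return pressure * M_AIR / (R_GAS * T)

def integrate_pressure(height_m: float, steps: int = 100000) -> float:
    """Euler integration of dP/dh = -rho(P) * g from the surface up to height_m."""
    dh = height_m / steps
    p = P0
    for _ in range(steps):
        p -= density(p) * G0 * dh
    return p

if __name__ == "__main__":
    h = 10000.0                              # 10 km
    scale_height = R_GAS * T / (M_AIR * G0)  # ~8.4 km for these values
    analytic = P0 * math.exp(-h / scale_height)
    print(f"numeric : {integrate_pressure(h):8.0f} Pa at {h/1000:.0f} km")
    print(f"analytic: {analytic:8.0f} Pa (scale height ~{scale_height/1000:.1f} km)")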

Derivation from Navier–Stokes equations

Note finally that this last equation can be derived by solving the three-dimensional Navier–Stokes equations
for the equilibrium situation where

u = v = ∂P/∂x = ∂P/∂y = 0

Then the only non-trivial equation is the z-equation, which now reads

∂P/∂z + ρg = 0
Thus, hydrostatic balance can be regarded as a particularly simple equilibrium solution of the Navier–Stokes
equations.

Derivation from general relativity

By plugging the energy–momentum tensor for a perfect fluid into the Einstein field equations, and using the
conservation condition ∇_μ T^μν = 0, one can derive the Tolman–Oppenheimer–Volkoff (TOV) equation for the
structure of a static, spherically symmetric relativistic star in isotropic coordinates.

In practice, the pressure P and the density ρ are related by an equation of state of the form f(P, ρ) = 0, with f
specific to the makeup of the star. M(r) is a foliation of spheres weighted by the mass density ρ(r), with the
largest sphere having radius r:

M(r) = 4π ∫₀^r ρ(r′) r′^2 dr′

Per standard procedure in taking the nonrelativistic limit, we let c → ∞, so that the relativistic correction factors
tend to unity. Therefore, in the nonrelativistic limit the TOV equation reduces to Newton's hydrostatic equilibrium:

dP/dh = −G M(h) ρ(h) / h^2

(we have made the trivial notation change h = r and have used f(P, ρ) = 0 to express ρ in terms of P). A similar
equation can be computed for rotating, axially symmetric stars, which in its gauge-independent form consists of
two equations rather than one (for instance, if, as usual when treating stars, one chooses spherical coordinates as
basis coordinates, the index i runs over the coordinates r and θ).
Applications

Fluids

The hydrostatic equilibrium pertains to hydrostatics and the principles of equilibrium of fluids. A hydrostatic
balance is a particular balance for weighing substances in water. Hydrostatic balance allows the discovery of their
specific gravities.

Astrophysics

In any given layer of a star, there is a hydrostatic equilibrium between the outward thermal pressure from below
and the weight of the material above pressing inward. The isotropic gravitational field compresses the star into the
most compact shape possible. A rotating star in hydrostatic equilibrium is an oblate spheroid up to a certain
(critical) angular velocity. An extreme example of this phenomenon is the star Vega, which has a rotation period of
12.5 hours. Consequently, Vega is about 20% larger at the equator than at the poles. A star with an angular velocity
above the critical angular velocity becomes a Jacobi (scalene) ellipsoid, and at still faster rotation it is no longer
ellipsoidal but piriform or oviform, with yet other shapes beyond that, though shapes beyond scalene are not stable.

If the star has a massive nearby companion object then tidal forces come into play as well, distorting the star
into a scalene shape when rotation alone would make it a spheroid. An example of this is Beta Lyrae.

Hydrostatic equilibrium is also important for the intracluster medium, where it restricts the amount of fluid
that can be present in the core of a cluster of galaxies.

We can also use the principle of hydrostatic equilibrium to estimate the velocity dispersion of dark matter in
clusters of galaxies. Only baryonic matter (or, rather, the collisions thereof) emits X-ray radiation. The absolute
X-ray luminosity per unit volume is proportional to the square of the baryonic density, multiplied by a function of
the baryonic temperature and fundamental constants. The baryonic density satisfies the hydrostatic equation above,
with the pressure gradient balanced by the gravity of the total enclosed mass (baryonic plus dark); the mass
integral is a measure of the total mass of the cluster, with r being the proper distance to the center of the cluster.

Using the ideal gas law (with a characteristic mass for the baryonic gas particles), rearranging, multiplying by
r^2 divided by the baryonic density, and differentiating with respect to r yields an expression for the enclosed
mass in terms of the observable temperature and density profiles.

If we make the assumption that cold dark matter particles have an isotropic velocity distribution, then the same
derivation applies to these particles, and their density satisfies a non-linear differential equation of the same form.
With perfect X-ray and distance data, we could calculate the baryon density at each point in the cluster and thus
the dark matter density. We could then calculate the velocity dispersion of the dark matter.

The central density ratio depends on the redshift z of the cluster, on the angular width of the cluster and on the
proper distance to the cluster. Values for the ratio range from 0.11 to 0.14 for various surveys.[6]

Planetary geology

The concept of hydrostatic equilibrium has also become important in determining whether an astronomical object is
a planet, dwarf planet, or small Solar System body. According to the definition of planet adopted by the
International Astronomical Union in 2006, one defining characteristic of planets and dwarf planets is that they are
objects that have sufficient gravity to overcome their own rigidity and assume hydrostatic equilibrium. Such a body
will normally have the differentiated interior and geology of a world (a planemo), though near-hydrostatic bodies
such as the proto-planet 4 Vesta may also be differentiated. Sometimes the equilibrium shape is an oblate spheroid,
as is the case with Earth. However, in the cases of moons in synchronous orbit, nearly unidirectional tidal forces
create a scalene ellipsoid. Also, the dwarf planet Haumea is scalene due to its rapid rotation.

It had been thought that icy objects with a diameter larger than roughly 400 km are usually in hydrostatic
equilibrium, whereas those smaller than that are not. Icy objects need less mass for hydrostatic equilibrium than
rocky objects. The smallest object that is known to have an equilibrium shape is the icy moon Mimas at 397 km,
whereas the largest object known to have an obviously non-equilibrium shape is the rocky asteroid Pallas at 532 km
(582 × 556 × 500 ± 18 km). However, Mimas is not actually in hydrostatic equilibrium for its current rotation. The
smallest body confirmed to be in hydrostatic equilibrium is the icy moon Rhea, at 1,528 km, whereas the largest
body known to not be in hydrostatic equilibrium is the icy moon Iapetus, at 1,470 km.

Because the terrestrial planets and dwarf planets (and likewise the larger satellites, like the Moon and Io) have
irregular surfaces, this definition evidently has some flexibility, but a specific means of quantifying an object's
shape by this standard has not yet been announced. Local irregularities may be consistent with global equilibrium.
For example, the massive base of the tallest mountain on Earth, Mauna Kea, has deformed and depressed the level
of the surrounding crust, so that the overall distribution of mass approaches equilibrium. The amount of leeway
afforded the definition could affect the classification of the asteroid Vesta, which may have solidified while in
hydrostatic equilibrium but was subsequently significantly deformed by large impacts (now 572.6 × 557.2 × 446.4
km).[7]

Atmospherics

In the atmosphere, the pressure of the air decreases with increasing altitude. This pressure difference causes an
upward force called the pressure-gradient force. The force of gravity balances this out, keeping the atmosphere
bound to Earth and maintaining pressure differences with altitude.

Main-sequence stars employ two types of hydrogen fusion processes, and the rate of energy generation from each
type depends on the temperature in the core region. Astronomers divide the main sequence into upper and lower
parts, based on which of the two is the dominant fusion process. In the lower main sequence, energy is primarily
generated as the result of the proton-proton chain, which directly fuses hydrogen together in a series of stages to
produce helium.[30] Stars in the upper main sequence have sufficiently high core temperatures to efficiently use the
CNO cycle. This process uses atoms of carbon, nitrogen and oxygen as intermediaries in the process of fusing
hydrogen into helium.

At a stellar core temperature of 18 million kelvin, the PP process and CNO cycle are equally efficient, and each
type generates half of the star's net luminosity. As this is the core temperature of a star with about 1.5 M☉, the upper
main sequence consists of stars above this mass. Thus, roughly speaking, stars of spectral class F or cooler belong to
the lower main sequence, while A-type stars or hotter are upper main-sequence stars. The transition in primary
energy production from one form to the other spans a range difference of less than a single solar mass. In the Sun, a
one solar-mass star, only 1.5% of the energy is generated by the CNO cycle. By contrast, stars with 1.8 M☉ or above
generate almost their entire energy output through the CNO cycle.

The observed upper limit for a main-sequence star is 120–200 M☉.[33] The theoretical explanation for this limit is that
stars above this mass can not radiate energy fast enough to remain stable, so any additional mass will be ejected in a
series of pulsations until the star reaches a stable limit.[34] The lower limit for sustained proton–proton nuclear fusion
is about 0.08 M☉ or 80 times the mass of Jupiter.[30] Below this threshold are sub-stellar objects that can not sustain
hydrogen fusion, known as brown dwarfs.[35]

Structure

Stellar structure

This diagram shows a cross-section of a Sun-like star, showing the internal structure.

Stars of different mass and age have varying internal structures. Stellar structure models describe the
internal structure of a star in detail and make detailed predictions about the luminosity, the color and the
futureevolution of the star.

Energy transport

The different transport mechanisms of low-mass, intermediate-mass, and high-mass stars.

Different layers of the stars transport heat up and outwards in different ways, primarily convection and
radiative transfer, but thermal conduction is important in white dwarfs.
Convection is the dominant mode of energy transport when the temperature gradient is steep enough so
that a given parcel of gas within the star will continue to rise if it rises slightly via an adiabatic process. In
this case,

the rising parcel is buoyant and continues to rise if it is warmer than the surrounding gas; if the rising
parcel is cooler than the surrounding gas, it will fall back to its original height.[1] (A toy numeric check of this buoyancy criterion is sketched after this list.) In regions with a low
temperature gradient and a low enough opacity to allow energy transport via radiation, radiation is the
dominant mode of energy transport.
The internal structure of a main sequence star depends upon the mass of the star.
In stars with masses of 0.3–1.5 solar masses (M☉), including the Sun, hydrogen-to-helium fusion occurs
primarily via proton-proton chains, which do not establish a steep temperature gradient. Thus,
radiation dominates in the inner portion of solar mass stars. The outer portion of solar mass stars is
cool enough that hydrogen is neutral and thus opaque to ultraviolet photons, so convection
dominates. Therefore, solar mass stars have radiative cores with convective envelopes in the outer
portion of the star.
In massive stars (greater than about 1.5 M☉), the core temperature is above about 1.8×10^7 K, so
hydrogen-to-helium fusion occurs primarily via the CNO cycle. In the CNO cycle, the energy generation
rate scales as the temperature to the 15th power, whereas the rate scales as the temperature to the 4th
power in the proton–proton chains.[2] Due to the strong temperature sensitivity of the CNO cycle, the
temperature gradient in the inner portion of the star is steep enough to make the core convective. In the
outer portion of the star, the temperature gradient is shallower but the temperature is high enough that the
hydrogen is nearly fully ionized, so the star remains transparent to ultraviolet radiation. Thus, massive
stars have a radiative envelope.
The lowest mass main sequence stars have no radiation zone; the dominant energy transport
mechanism throughout the star is convection.[3]
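
The buoyancy criterion in the list above amounts to comparing the actual temperature gradient in a layer with the adiabatic gradient, which for an ideal gas in hydrostatic equilibrium can be written g/c_p. The toy Python sketch below makes that comparison; the gravity, specific heat and test gradients are assumed illustrative numbers, not solar data.

# A toy check of the convection criterion: a layer is convectively unstable when the
# magnitude of its actual temperature gradient exceeds the adiabatic gradient g / c_p.

def is_convective(actual_gradient_k_per_m: float, gravity: float, c_p: float) -> bool:
    """True if the actual gradient (K per metre) exceeds the adiabatic gradient."""
    adiabatic_gradient = gravity / c_p
    return actual_gradient_k_per_m > adiabatic_gradient

if __name__ == "__main__":
    g = 274.0       # m s^-2, roughly the Sun's surface gravity (illustrative)
    c_p = 2.1e4     # J kg^-1 K^-1, assumed specific heat of a monatomic hydrogen gas
    for grad in (0.005, 0.02, 0.05):   # K per metre, assumed test gradients
        state = "convective" if is_convective(grad, g, c_p) else "radiative (stable)"
        print(f"|dT/dr| = {grad:.3f} K/m  ->  {state}")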

Equations of stellar structure


The simplest commonly used model of stellar structure is the spherically symmetric quasi-static model,
which assumes that a star is in a steady state and that it is spherically symmetric. It contains four basic
first-order differential equations: two represent how matter and pressure vary with radius; two represent
how temperature and luminosity vary with radius.
In forming the stellar structure equations (exploiting the assumed spherical symmetry), one considers the
matter density ρ(r), temperature T(r), total pressure (matter plus radiation) P(r), luminosity l(r), and
energy generation rate per unit mass ε(r) in a spherical shell of a thickness dr at a distance r from
the center of the star. The star is assumed to be in local thermodynamic equilibrium (LTE) so the
temperature is identical for matter and photons. Although LTE does not strictly hold because the
temperature a given shell "sees" below itself is always hotter than the temperature above, this
approximation is normally excellent because the photon mean free path, λ, is much smaller than the length
over which the temperature varies considerably, i.e. λ ≪ T/|∇T|.

First is a statement of hydrostatic equilibrium: the outward force due to the pressure gradient within the
star is exactly balanced by the inward force due to gravity. This is sometimes referred to as stellar
equilibrium:

dP(r)/dr = −G m(r) ρ(r) / r^2

where m(r) is the cumulative mass inside the shell at r and G is the gravitational constant. The cumulative
mass increases with radius according to the mass continuity equation:

dm(r)/dr = 4π r^2 ρ(r)

Integrating the mass continuity equation from the star center (r = 0) to the radius of the star (r = R)
yields the total mass of the star.

Considering the energy leaving the spherical shell yields the energy equation:

dl(r)/dr = 4π r^2 ρ(r) (ε − ε_ν)

where ε_ν is the luminosity produced in the form of neutrinos (which usually escape the star without
interacting with ordinary matter) per unit mass. Outside the core of the star, where no nuclear reactions
occur, no energy is generated, so the luminosity is constant.

The energy transport equation takes differing forms depending upon the mode of energy transport. For
conductive energy transport (appropriate for a white dwarf), the energy equation is

dT(r)/dr = −l(r) / (4π r^2 k)

where k is the thermal conductivity.


In the case of radiative energy transport, appropriate for the inner portion of a solar mass main sequence
star and the outer envelope of a massive main sequence star,

dT(r)/dr = −3 κ(r) ρ(r) l(r) / (64π σ r^2 T(r)^3)

where κ is the opacity of the matter, σ is the Stefan–Boltzmann constant, and the Boltzmann
constant is set to one.
The case of convective energy transport does not have a known rigorous mathematical formulation, and
involves turbulence in the gas. Convective energy transport is usually modeled using mixing length theory.
This treats the gas in the star as containing discrete elements which roughly retain the temperature,
density, and pressure of their surroundings but move through the star as far as a characteristic length,
called the mixing length.[5] For a monatomic ideal gas, when the convection is adiabatic, meaning that the
convective gas bubbles don't exchange heat with their surroundings, mixing length theory yields

dT(r)/dr = (1 − 1/γ) (T(r)/P(r)) dP(r)/dr

where γ = c_p/c_v is the adiabatic index, the ratio of specific heats in the gas. (For a fully ionized ideal
gas, γ = 5/3.) When the convection is not adiabatic, the true temperature gradient is not given by this
equation. For example, in the Sun the convection at the base of the convection zone, near the core, is
adiabatic but that near the surface is not. The mixing length theory contains two free parameters which
must be set to make the model fit observations, so it is a phenomenological theory rather than a rigorous
mathematical formulation.[6]
Also required are the equations of state, relating the pressure, opacity and energy generation rate to
other local variables appropriate for the material, such as temperature, density, chemical composition,
etc. Relevant equations of state for pressure may have to include the perfect gas law, radiation pressure,
pressure due to degenerate electrons, etc. Opacity cannot be expressed exactly by a single formula. It is
calculated for various compositions at specific densities and temperatures and presented in tabular
form.[7] Stellar structure codes (meaning computer programs calculating the model's variables) either
interpolate in a density–temperature grid to obtain the opacity needed, or use a fitting function based on the tabulated values. A
similar situation occurs for accurate calculations of the pressure equation of state. Finally, the nuclear
energy generation rate is computed from nuclear physics experiments, using reaction networks to
compute reaction rates for each individual reaction step and equilibrium abundances for each isotope in
the gas.[6][8]
Combined with a set of boundary conditions, a solution of these equations completely describes the
behavior of the star. Typical boundary conditions set the values of the observable parameters
appropriately at the surface (r = R) and center (r = 0) of the star: P(R) = 0, meaning the pressure at the
surface of the star is zero; m(0) = 0, meaning there is no mass inside the center of the star, as required if
the mass density remains finite; m(R) = M, the total mass of the star is the star's mass; and T(R) = T_eff,
the temperature at the surface is the effective temperature of the star.
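
As an illustration of how two of these equations are solved together, the sketch below marches the mass continuity and hydrostatic equilibrium equations outward from the centre for a toy star. A polytropic equation of state P = K ρ^(5/3) stands in for the full microphysics, the constant K and the central density are arbitrary illustrative choices, and the luminosity and temperature equations are omitted entirely.

# A minimal sketch integrating dm/dr = 4*pi*r^2*rho and dP/dr = -G*m*rho/r^2 outward
# from the centre of a toy polytropic star until the pressure drops to zero.
import math

G = 6.674e-11   # m^3 kg^-1 s^-2
K = 3.0e6       # polytropic constant (assumed, SI units)
GAMMA = 5.0 / 3.0

def rho_of_p(pressure: float) -> float:
    """Invert the assumed polytropic equation of state P = K * rho**GAMMA."""
    return (pressure / K) ** (1.0 / GAMMA) if pressure > 0.0 else 0.0

def integrate_structure(rho_central: float, dr: float = 1.0e4):
    """March outward until the pressure reaches zero; return (radius, total mass)."""
    r = dr
    p = K * rho_central ** GAMMA
    m = 4.0 / 3.0 * math.pi * r ** 3 * rho_central
    while p > 0.0:
        rho = rho_of_p(p)
        m += 4.0 * math.pi * r ** 2 * rho * dr     # mass continuity
        p -= G * m * rho / r ** 2 * dr             # hydrostatic equilibrium
        r += dr
    return r, m

if __name__ == "__main__":
    radius, mass = integrate_structure(rho_central=1.0e5)  # assumed central density, kg/m^3
    print(f"toy model: R ~ {radius/1e3:.0f} km, M ~ {mass:.3e} kg")
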
Although stellar evolution models nowadays describe the main features of color–magnitude diagrams,
important improvements have to be made in order to remove uncertainties which are linked to the limited
knowledge of transport phenomena. The most difficult challenge remains the numerical treatment of
turbulence. Some research teams are developing simplified modelling of turbulence in 3D calculations.

Rapid evolution
The above simplified model is not adequate without modification in situations when the composition
changes are sufficiently rapid. The equation of hydrostatic equilibrium may need to be modified by
adding a radial acceleration term if the radius of the star is changing very quickly, for example if the
star is radially pulsating.[9]Also, if the nuclear burning is not stable, or the star's core is rapidly
collapsing, an entropy term must be added to the energy equation.[10]

Evolutionary tracks

Stellar evolution

This shows the HertzsprungRussell diagrams for two open clusters. NGC 188 (blue) is older, and shows a lower
turn off from the main sequence than that seen in M67 (yellow). The dots outside the two sequences are
mostly foreground and background stars with no relation to the clusters.

When a main-sequence star consumes the hydrogen at its core, the loss of energy generation causes its
gravitational collapse to resume. Stars with less than 0.23 M☉[3] are predicted to directly become white dwarfs
when energy generation by nuclear fusion of hydrogen at their core comes to a halt. In stars between this
threshold and 10 M☉, the hydrogen surrounding the
helium core reaches sufficient temperature and pressure to undergo fusion, forming a hydrogen-burning shell. In
consequence of this change, the outer envelope of the star expands and decreases in temperature, turning it into a
red giant. At this point the star is evolving off the main sequence and entering the giant branch. The path which the
star now follows across the HR diagram, to the upper right of the main sequence, is called an evolutionary track.

The helium core of a red giant continues to collapse until it is entirely supported by electron degeneracy
pressure, a quantum mechanical effect that restricts how closely matter can be compacted. For stars of more
than about 0.5 M☉ the core eventually reaches a temperature where it becomes hot enough to burn helium into
carbon via the triple-alpha process.[57][58] Stars with more than 5–7.5 M☉ can additionally fuse elements with
higher atomic numbers.
For stars with ten or more solar masses, this process can lead to an increasingly dense core that finally collapses,
ejecting the star's overlying layers in a Type II supernova explosion, Type Ib supernova or Type Ic supernova.

When a cluster of stars is formed at about the same time, the life span of these stars will depend on their individual
masses. The most massive stars will leave the main sequence first, followed steadily in sequence by stars of ever
lower masses. Thus the stars will evolve in order of their position on the main sequence, proceeding from the most
massive at the left toward the right of the HR diagram. The current position where stars in this cluster are leaving
the main sequence is known as the turn-off point. By knowing the main sequence lifespan of stars at this point, it
becomes possible to estimate the age of the cluster.
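
The turn-off age estimate described above can be sketched with a rough lifetime scaling of the form t ≈ 10 Gyr × (M/M☉)^−2.5, which follows from assuming L ∝ M^3.5. Both the exponent and the 10 Gyr normalization are illustrative assumptions, not values taken from this text.

# A short sketch of estimating a cluster's age from its main-sequence turn-off mass.

def turnoff_age_gyr(turnoff_mass_solar: float) -> float:
    """Approximate cluster age in Gyr from the mass now leaving the main sequence."""
    return 10.0 * turnoff_mass_solar ** -2.5   # assumed lifetime scaling

if __name__ == "__main__":
    for m_to in (1.0, 1.3, 2.0, 5.0):
        print(f"turn-off mass {m_to:.1f} M_sun  ->  age ~ {turnoff_age_gyr(m_to):6.2f} Gyr")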

HERTZSPRUNG–RUSSELL DIAGRAM


Source: Richard Powell, The Hertzsprung–Russell Diagram

An observational Hertzsprung–Russell diagram with 22,000 stars plotted from the Hipparcos Catalogue
and 1,000 from the Gliese Catalogue of nearby stars. Stars tend to fall only into certain regions of the
diagram. The most prominent is the diagonal, going from the upper-left (hot and bright) to the lower-
right (cooler and less bright), called the main sequence. In the lower-left is where white dwarfs are
found, and above the main sequence are the subgiants, giants and supergiants. The Sun is found on
the main sequence at luminosity 1 (absolute magnitude 4.8) and B−V color index 0.66 (temperature
5780 K, spectral type G2V).

Hertzsprung–Russell diagram. A plot of luminosity (absolute magnitude) against the colour of the stars,
ranging from the high-temperature blue-white stars on the left side of the diagram to the low-temperature
red stars on the right side. "This diagram below is a plot of 22000 stars from the Hipparcos
Catalogue together with 1000 low-luminosity stars (red and white dwarfs) from the Gliese Catalogue of
Nearby Stars. The ordinary hydrogen-burning dwarf stars like the Sun are found in a band running from
top-left to bottom-right called the Main Sequence. Giant stars form their own clump on the upper-right
side of the diagram. Above them lie the much rarer bright giants and supergiants. At the lower-left is the
band of white dwarfs - these are the dead cores of old stars which have no internal energy source and
over billions of years slowly cool down towards the bottom-right of the diagram."

HR diagrams for two open clusters, M67 and NGC 188, showing the main-sequence turn-off at
different ages.

The Hertzsprung–Russell diagram, abbreviated HR diagram or HRD, is a scatter


plot of stars showing the relationship between the stars' absolute magnitudes or luminosities versus their
stellar classifications or effective temperatures. More simply, it plots each star on a graph measuring the
star's brightness against its temperature (color). It does not map any locations of stars.
The diagram was created circa 1910 by Ejnar Hertzsprung and Henry Norris Russell and represents a
major step towards an understanding of stellar evolution, or "the way in which stars undergo sequences of
dynamic and radical changes over time".
In the nineteenth century, large-scale photographic spectroscopic surveys of stars were performed at
Harvard College Observatory, producing spectral classifications for tens of thousands of stars,
culminating ultimately in the Henry Draper Catalogue. In one segment of this work Antonia Maury
included divisions of the stars by the width of their spectral lines. Hertzsprung noted that stars described
with narrow lines tended to have smaller proper motions than the others of the same spectral
classification. He took this as an indication of greater luminosity for the narrow-line stars, and computed
secular parallaxes for several groups of these, allowing him to estimate their absolute magnitude.
In 1910 Hans Rosenberg published a diagram plotting the apparent magnitude of stars in the Pleiades
cluster against the strengths of the Calcium K line and two Hydrogen Balmer lines. These spectral lines
serve as a proxy for the temperature of the star, an early form of spectral classification. The apparent
magnitude of stars in the same cluster is equivalent to their absolute magnitude and so this early diagram
was effectively a plot of luminosity against temperature. The same type of diagram is still used today as a
means of showing the stars in clusters without having to initially know their distance and luminosity.
Hertzsprung had already been working with this type of

diagram, but his first publications showing it were not until 1911. This was also the form of the
diagram using apparent magnitudes of a cluster of stars all at the same distance.
Russell's early (1913) versions of the diagram included Maury's giant stars identified by Hertzsprung,
those nearby stars with parallaxes measured at the time, stars from the Hyades (a nearby open cluster),
and several moving groups, for which the moving cluster method could be used to derive distances and
thereby obtain absolute magnitudes for those stars.

Forms of diagrams
There are several forms of the HertzsprungRussell diagram, and the nomenclature is not very well
defined. All forms share the same general layout: stars of greater luminosity are toward the top of the
diagram, and stars with higher surface temperature are toward the left side of the diagram.
The original diagram displayed the spectral type of stars on the horizontal axis and the absolute visual
magnitude on the vertical axis. The spectral type is not a numerical quantity, but the sequence of spectral
types is a monotonic series that reflects the stellar surface temperature. Modern observational versions
of the chart replace spectral type by a color index (in diagrams made in the middle of the 20th Century,
most often the B-V color) of the stars. This type of diagram is what is often called an observational
Hertzsprung–Russell diagram, or specifically a color-magnitude diagram (CMD), and it is often used by
observers. In cases where the stars are known to be at identical distances such as within a star cluster, a
color-magnitude diagram is often used to describe the stars of the cluster with a plot in which the vertical
axis is the apparent magnitude of the stars. For cluster members, by assumption there is a single additive
constant difference between their apparent and absolute magnitudes, called the distance modulus, for all
of that cluster of stars. Early studies of nearby open clusters (like the Hyades and Pleiades) by
Hertzsprung and Rosenberg produced the first CMDs, antedating by a few years Russell's influential
synthesis of the diagram collecting data for all stars for which absolute magnitudes could be determined.
Another form of the diagram plots the effective surface temperature of the star on one axis and the
luminosity of the star on the other, almost invariably in a log-log plot. Theoretical calculations
of stellar structure and the evolution of stars produce plots that match those from observations. This type
of diagram could be called temperature-luminosity diagram, but this term is hardly ever used; when the
distinction is made, this form is called the theoretical Hertzsprung–Russell diagram instead. A peculiar
characteristic of this form of the HR diagram is that the temperatures are plotted from high temperature
to low temperature, which aids in comparing this form of the HR diagram with the observational form.

An HR diagram showing many well-known stars in the Milky Way galaxy. Source: ESO, http// [Link]/public/images/eso0728c/

Although the two types of diagrams are similar, astronomers make a sharp distinction between the two.
The reason for this distinction is that the exact transformation from one to the other is not trivial. To go
between effective temperature and color requires a color-temperature relation, and constructing that is
difficult; it is known to be a function of stellar composition and can be affected by other factors like stellar
rotation. When converting luminosity or absolute bolometric magnitude to apparent or absolute visual magnitude, one requires a bolometric correction, which may or may not come from the same source as the color-temperature relation. One also needs to know the distance to the observed objects (i.e., the distance modulus) and the effects of interstellar obscuration, both in the color (reddening) and in the apparent magnitude (extinction). For some stars, circumstellar dust also affects
colors and apparent brightness. The ideal of direct comparison of theoretical predictions of stellar
evolution to observations thus has additional uncertainties incurred in the conversions between
theoretical quantities and observations.
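As an illustration of what a color-temperature relation looks like in practice, the short sketch below evaluates one published empirical approximation (the Ballesteros 2012 fit) that maps the B-V color index to an effective temperature. The formula, the function name, and the sample B-V values are illustrative assumptions added to this text, not part of the discussion above.

# A minimal sketch (illustrative): one empirical color-temperature relation,
# the Ballesteros (2012) approximation, mapping B-V color to effective temperature.
# The sample stars and their B-V values are assumed for the example.

def bv_to_teff(bv):
    """Approximate effective temperature in kelvin from the B-V color index."""
    return 4600.0 * (1.0 / (0.92 * bv + 1.7) + 1.0 / (0.92 * bv + 0.62))

for name, bv in [("hot B-type star", -0.20), ("Sun-like star", 0.65), ("cool M dwarf", 1.50)]:
    print(f"{name}: B-V = {bv:+.2f} -> T_eff ~ {bv_to_teff(bv):.0f} K")

# A Sun-like B-V of 0.65 gives roughly 5800 K, which illustrates why hotter
# (bluer) stars sit toward the left of an observational HR diagram.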

Interpretation
Most of the stars occupy the region in the diagram along the line called the main sequence. During the stage of their lives in which stars are found on the main-sequence line, they are fusing hydrogen in their cores. The next concentration of stars is on the horizontal branch (helium fusion in the core and hydrogen burning in a shell surrounding the core). Another prominent feature is the Hertzsprung gap, located in the region between spectral types A5 and G0 and between +1 and −3 absolute magnitudes (i.e. between the top of the main sequence and the giants in the horizontal branch). RR Lyrae variable stars can be found to the left of this gap. Cepheid variables reside in the upper section of the instability strip.

An HR diagram with the instability strip and its components highlighted. Source: Rursus (own work).

The H-R diagram can be used by scientists to roughly measure how far away a star cluster is from
Earth. This can be done by comparing the apparent magnitudes of the stars in the cluster to the
absolute magnitudes of stars with known distances (or of model stars). The observed group is then
shifted in the vertical direction, until the two main sequences overlap. The difference in magnitude that
was bridged in order to match the two groups is called the distance modulus and is a direct measure for
the distance (ignoring extinction). This technique is known as main sequence fittingand is a type of
spectroscopic parallax.
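To make main sequence fitting concrete, the sketch below (added here as an illustration; the magnitudes are made-up values, not real cluster data) shifts an observed sequence onto a reference sequence, reads off the distance modulus m - M, and converts it to a distance with d = 10^((m - M)/5 + 1) parsecs.

# A minimal sketch of main-sequence fitting with invented numbers.
# For cluster members, apparent and absolute magnitudes differ by one additive
# constant, the distance modulus mu = m - M, and d = 10**(mu/5 + 1) parsecs.
import statistics

observed_m  = [9.8, 10.6, 11.4, 12.1]   # apparent magnitudes of cluster stars (assumed)
reference_M = [3.1,  3.9,  4.7,  5.4]   # absolute magnitudes of reference main-sequence stars of the same colors (assumed)

# The vertical shift that overlaps the two sequences is the distance modulus.
mu = statistics.mean(m - M for m, M in zip(observed_m, reference_M))
distance_pc = 10.0 ** (mu / 5.0 + 1.0)
print(f"distance modulus ~ {mu:.2f} mag -> distance ~ {distance_pc:.0f} pc (extinction ignored)")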

Diagram's role in the development of stellar physics

Contemplation of the diagram led astronomers to speculate that it might demonstrate stellar evolution, the
main suggestion being that stars collapsed from red giants to dwarf stars, then moving down along the
line of the main sequence in the course of their lifetimes. Stars were thought therefore to radiate energy
by converting gravitational energy into radiation through the Kelvin–Helmholtz mechanism. This
mechanism resulted in an age for the Sun of only tens of millions of years, creating a conflict over the
age of the Solar System between astronomers, and biologists and geologists who had evidence that the
Earth was far older than that. This conflict was only resolved in the 1930s when nuclear fusion was
identified as the source of stellar energy.
However, following Russell's presentation of the diagram to a meeting of the Royal Astronomical Society in 1912, Arthur Eddington was inspired to use it as a basis for developing ideas on stellar physics. In 1926, in his book The Internal Constitution of the Stars, he explained the physics of
how stars fit on the diagram. This was a particularly remarkable development since at that time the
major problem of stellar theory, the source of a star's energy, was still unsolved. Thermonuclear energy, and even the fact that stars are largely composed of hydrogen, had yet to be discovered. Eddington managed
to sidestep this problem by concentrating on the thermodynamics of radiative transport of energy in
stellar interiors. So, Eddington predicted that dwarf stars remain in an essentially static position on the
main sequence for most of their lives. In the 1930s and 1940s, with an understanding of hydrogen
fusion, came a physically based theory of evolution to red giants, and white dwarfs. By this time, study
of the HertzsprungRussell diagram did not drive such developments but merely allowed stellar
evolution to be presented graphically.

Mature stars

Eventually the core exhausts its supply of hydrogen and the star begins to evolve off of the main sequence. Without the outward pressure generated by the fusion of hydrogen to counteract the force of gravity, the core contracts until either electron degeneracy pressure becomes sufficient to oppose gravity or the core becomes hot enough (around 100 MK) for helium fusion to begin. Which of these happens first depends upon the star's mass.

Electron degeneracy pressure

Electron degeneracy pressure is a particular manifestation of the more general phenomenon of


quantum degeneracy pressure. The Pauli exclusion principle disallows two identical half-integer spin
particles (electrons and all other fermions) from simultaneously occupying the same quantum state. The
result is an emergent pressure against compression of matter into smaller volumes of space. Electron
degeneracy pressure results from the same underlying mechanism that defines the electron orbital
structure of elemental matter. For bulk matter with no net electric charge, the attraction between electrons
and nuclei exceeds (at any scale) the mutual repulsion of electrons plus the mutual repulsion of nuclei; so
absent electron degeneracy pressure, the matter would collapse into a single nucleus. In 1967, Freeman
Dyson showed that solid matter is stabilized by quantum degeneracy pressure rather than electrostatic
repulsion. Because of this, electron degeneracy creates a barrier to the gravitational collapse of dying
stars and is responsible for the formation of white dwarfs.

When electrons are squeezed together too closely, the exclusion principle requires them to have
different energy levels. To add another electron to a given volume requires raising an electron's energy
level to make room, and this requirement for energy to compress the material manifests as a pressure.
Electron degeneracy pressure in a material can be computed as[4]

P = (3π²)^(2/3) ħ² N^(5/3) / (5 m_e),

where ħ is the reduced Planck constant, m_e is the mass of the electron, and N is the free electron density (the number of free electrons per unit volume). When particle energies reach relativistic levels, a modified formula is required.
This pressure is derived from the energy of each electron with wave number k = 2π/λ, having energy E = ħ²k²/(2 m_e), with every possible momentum state of an electron within this volume up to the Fermi energy being occupied.
This degeneracy pressure is omnipresent and is in addition to the normal gas pressure P=NkT/V. At
commonly encountered densities, this pressure is so low that it can be neglected. Matter is electron
degenerate when the density (n/V) is high enough, and the temperature low enough, that the sum is
dominated by the degeneracy pressure.
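To see how the two pressures just mentioned compare, the sketch below (an illustration added here, with an assumed white-dwarf-like electron density and temperature) evaluates the degeneracy pressure formula above alongside the ordinary gas pressure P = NkT/V.

# A minimal sketch (illustrative): compare electron degeneracy pressure with the
# ordinary thermal gas pressure for an assumed white-dwarf-like electron density.
import math

hbar = 1.054571817e-27   # reduced Planck constant, erg s
m_e  = 9.1093837015e-28  # electron mass, g
k_B  = 1.380649e-16      # Boltzmann constant, erg/K

N = 1.0e30               # assumed free electrons per cm^3 (white-dwarf-like)
T = 1.0e7                # assumed temperature, K

P_degenerate = (3.0 * math.pi**2) ** (2.0 / 3.0) * hbar**2 * N ** (5.0 / 3.0) / (5.0 * m_e)
P_thermal    = N * k_B * T   # P = N k T / V, with N already a number density

print(f"degeneracy pressure ~ {P_degenerate:.2e} dyn/cm^2")
print(f"thermal pressure    ~ {P_thermal:.2e} dyn/cm^2")
# At this density the degeneracy term dominates, so the matter is degenerate in
# the sense described above; at ordinary stellar densities the thermal term wins.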
Perhaps useful in appreciating electron degeneracy pressure is the Heisenberg uncertainty principle, which states that

Δx Δp ≥ ħ/2,

where Δx is the uncertainty of the position measurements and Δp is the uncertainty (standard deviation) of the momentum measurements. A material subjected to ever-increasing pressure will compact more, and, for electrons within it, their delocalization, Δx, will decrease. Thus, as dictated by the uncertainty principle, the spread in the momenta of the electrons, Δp, will grow. Thus, no matter how
low the temperature drops, the electrons must be traveling at this "Heisenberg speed", contributing to
the pressure. When the pressure due to this "Heisenberg motion" exceeds that of the pressure from
the thermal motions of the electrons, the electrons are referred to as degenerate, and the material is
termed degenerate matter.
Electron degeneracy pressure will halt the gravitational collapse of a star if its mass is below
the Chandrasekhar limit (1.39 solar masses[5]). This is the pressure that prevents a white dwarf star
from collapsing. A star exceeding this limit and without significant thermally generated pressure will continue to collapse to form either a neutron star or a black hole, because the degeneracy pressure
provided by the electrons is weaker than the inward pull of gravity.

Helium Fusion.

The next thermonuclear fusion stage following the fusion of hydrogen into helium in the core of a star is
the fusion of helium into carbon. This process is known as the triple-alpha process, because it converts three helium-4 nuclei (α particles) into a single carbon-12 nucleus. This process operates efficiently above 100
million degrees Kelvin; its strong temperature-dependence means that it can convert all the helium within the
core of a star into carbon and heavier elements in less than a year for temperatures above 200 million degrees
Kelvin.
At the temperatures where the triple-alpha process is effective, other fusion processes efficiently combine
helium-4 with carbon-12 into heavier elements. The principal processes step through the sequence of nuclei that are multiples of the He4 nucleus: C12 → O16 → Ne20 → Mg24 → Si28 → S32 → Ar36.
Two secondary fusion chains occur if hydrogen, carbon-13, or nitrogen-14 is present in the gas. One normally expects the last two elements to be present if the star underwent the CNO hydrogen-burning process. One process combines C13 and He4 to give O16 and a neutron, which can decay into a proton and an electron. The second combines N14 with He4 to give O18 and a positron. The oxygen-18 undergoes fusion with helium-4 to produce neon-21 and neon-22, with the production of neon-21 accompanied by the production of a neutron. These processes produce hydrogen, which is burned to helium-4 through the CNO process.
The helium fusion simulator on this page shows the change in composition of a gas experiencing the
nuclear fusion of helium-4. The default initial composition is pure helium-4, although this can be modified by
the reader. The simulation follows the nuclear fusion until all of the helium-4 is consumed. While many more elements than those shown on the composition plot are generated during the fusion of helium-4, only those elements with nucleon fractions above the lower bound of the plot are displayed. The power generated
by the various fusion processes and the total power generated through nuclear fusion are presented in the
power plot.
The end products depend on the temperature of the gas. For the highest temperatures allowed by the
simulator, the end product of fusion is primarily carbon-12, but for the lowest temperatures allowed in the
simulation, it is primarily oxygen-16.

Parameters

The simulator follows the evolution of 21 isotopes, but only five of these can have the initial values of their nucleon parts set by the reader: hydrogen, helium-4, carbon-12, nitrogen-14, and oxygen-16. The remaining isotopes have their initial nucleon fractions set to 10^-15.
The nucleon density is defined to be the total number of protons and neutrons per unit volume. For instance, the contribution of helium-4 to the nucleon density is 4 times the number of helium nuclei per unit volume. Nucleon density is used because the number of nucleons is conserved in a fusion reaction. In the simulation the total nucleon density is fixed at 10^5 g-moles (a g-mole being an Avogadro's number, 6.022169 × 10^23, of nucleons) per cubic centimeter.
The initial composition is expressed as nucleon parts, meaning a ratio relative to the other nucleons. For instance, in the table of initial composition, if the hydrogen and helium nucleon parts were 0.8 and 0.2, then for every 8 nucleons that are in hydrogen nuclei, there would be 2 that are in helium nuclei.

The temperature is given in units of tens of millions of degrees Kelvin, and can be set from 100 million
degrees to 350 million degrees.
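As a concrete reading of these definitions, the sketch below (added here as an illustration; the 0.8/0.2 split is the example from the paragraph above, and the variable names are made up) converts nucleon parts into nucleon densities and number densities of nuclei for the fixed total nucleon density described in the text.

# A minimal sketch (illustrative): convert nucleon parts into nucleon densities
# and number densities of nuclei, using the fixed total of 1e5 g-moles of
# nucleons per cubic centimeter described above.
N_A = 6.022169e23                  # nucleons per g-mole (value used in the text)
total_nucleon_density = 1e5 * N_A  # nucleons per cm^3

parts    = {"H1": 0.8, "He4": 0.2}  # nucleon parts (the example from the text)
nucleons = {"H1": 1,   "He4": 4}    # nucleons per nucleus

total_parts = sum(parts.values())
for isotope, part in parts.items():
    nucleon_density = total_nucleon_density * part / total_parts  # nucleons per cm^3
    number_density  = nucleon_density / nucleons[isotope]         # nuclei per cm^3
    print(f"{isotope}: {nucleon_density:.2e} nucleons/cm^3, {number_density:.2e} nuclei/cm^3")

# Helium-4 carries 20 percent of the nucleons here but only about 6 percent of
# the nuclei, since each He4 nucleus contains four nucleons.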
Fusion of Helium

Alpha Fusion Chain


Once all of the hydrogen in a gas is converted into helium-4, fusion stops until the temperature rises to about 10^8 K. At this temperature, helium-4 is converted into heavier elements, predominantly carbon-12 and oxygen-16, both of which are multiples of helium-4 in their proton and neutron composition. To create these isotopes, beryllium-8 must first be created from two helium-4 nuclei, but this unstable isotope, with a lifetime of only 2.6 × 10^-16 seconds, rapidly decays back into helium-4.
The short lifetime of beryllium-8 ensures that the creation and decay of beryllium-8 are in equilibrium. This means that the density of beryllium-8 is set by the thermodynamic properties of the gas, specifically the temperature and the density of the gas; the creation and decay rates drop out of the problem. As a practical matter, because the amount of energy required to create beryllium-8 is large, 92.1 keV, the density of beryllium-8 relative to helium-4 is minuscule: for a temperature of 10^8 K and a helium-4 density of 10^5 g cm^-3, the ratio of beryllium-8 nuclei to helium-4 nuclei will be around 10^-9. The density of beryllium-8 is proportional to T^(-3/2) e^(-40 keV/kT). This temperature dependence implies that a small change in temperature produces a large change in the beryllium-8 density; for a temperature of 10^8 K (about 9 keV), a 15% change in temperature produces roughly a factor of 2 change in the beryllium-8 density.
While beryllium-8 is present, its creation is a small energy sink. To release energy, carbon-12 and heavier elements must be created. Carbon-12 is created when helium-4 combines with beryllium-8. In this interaction, the carbon-12 nucleus is left in an energetic state from which it decays, releasing a gamma-ray. The conversion of beryllium-8 into carbon-12 releases 7.37 MeV.
The conversion of helium-4 into carbon-12 is therefore accomplished through the following two
reactions:
He4 + He4 → Be8
Be8 + He4 → C12 + γ

The process of converting three helium-4 nuclei into a single carbon-12 nucleus releases a total of 7.27 MeV,
all of which remains trapped within the star. This fusion chain can be treated as a single process; it is then
called the triple-alpha process (an alpha particle is a helium-4 nucleus). The triple-alpha reaction rate is
proportional to the cube of the helium-4 density. Because of the strong temperature dependence of the
beryllium-8 density, the triple-alpha reaction rate is much more temperature dependent than any of the
hydrogen fusion rates. Within a star, helium fusion provides sufficient energy to support a star when the core
temperature rises to about 100 million degrees. The practical effect of this is that helium fusion within stars
occurs over a very narrow range of temperatures.
For temperatures that enable the triple-alpha process to proceed, other nuclear reactions are possible
involving helium that create elements with atomic masses that are multiples of 4. These processes are as
follows:
C12 + He4 → O16 + γ
O16 + He4 → Ne20 + γ
Ne20 + He4 → Mg24 + γ

Each of these reactions releases energy. The creation of oxygen-16 generates 7.16 MeV, while the generation of neon-20 generates 4.73 MeV. The next two elements release even more energy, with 9.32 MeV from the creation of magnesium-24 and 9.98 MeV from the creation of silicon-28. The creation of sulfur-32 and argon-36
generates 6.95 MeV and 6.65 MeV respectively. These large amounts of energy point to the stability of these
isotopes.
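The energy releases quoted above can be checked against tabulated atomic mass excesses, since the energy released by a reaction is the total mass excess of the reactants minus that of the products. The sketch below is an illustration added to this text; the mass-excess values are standard tabulated numbers, rounded to a few keV.

# A minimal sketch (illustrative): recover the quoted helium-burning energies
# from tabulated atomic mass excesses (MeV). Q = sum(reactants) - sum(products).
MASS_EXCESS = {  # MeV, standard tabulated values, rounded
    "He4": 2.425, "Be8": 4.942, "C12": 0.000, "O16": -4.737,
    "Ne20": -7.042, "Mg24": -13.933,
}

def q_value(reactants, products):
    """Energy released in MeV; a negative value means the reaction is endothermic."""
    return sum(MASS_EXCESS[n] for n in reactants) - sum(MASS_EXCESS[n] for n in products)

print(f"3 He4 -> C12           : Q = {q_value(['He4'] * 3, ['C12']):+.2f} MeV")        # ~ +7.27
print(f"He4 + He4 -> Be8       : Q = {q_value(['He4', 'He4'], ['Be8']):+.3f} MeV")     # ~ -0.092
print(f"C12 + He4 -> O16 + g   : Q = {q_value(['C12', 'He4'], ['O16']):+.2f} MeV")     # ~ +7.16
print(f"O16 + He4 -> Ne20 + g  : Q = {q_value(['O16', 'He4'], ['Ne20']):+.2f} MeV")    # ~ +4.73
print(f"Ne20 + He4 -> Mg24 + g : Q = {q_value(['Ne20', 'He4'], ['Mg24']):+.2f} MeV")   # ~ +9.32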
Because the triple-alpha process switches on so rapidly with temperature, all stellar cores that are fusing
helium have essentially the same temperature, so that the ratios of carbon-12 to oxygen-16 to neon-20 to magnesium-24 within a stellar core are essentially the same for all stellar cores.
In the universe, the third, fourth, fifth, and sixth most abundant elements are oxygen, neon,
nitrogen, and carbon. The triple-alpha process and the CNO process of hydrogen fusion are responsible
for this, with the triple-alpha process creating the carbon, oxygen, and neon, and the CNO process
creating the nitrogen from the carbon and oxygen.

Secondary Helium Fusion Processes


The CNO hydrogen fusion process converts carbon-12 and oxygen-16 into four other isotopes as
hydrogen is converted into helium-4. These isotopes are carbon-13, nitrogen-14, nitrogen-15, and oxygen-15.
Two of these isotopes, carbon-13 and nitrogen-14, can be destroyed by combining with helium-4 during the
helium fusion stage. During these reactions, neutrons are released that either combine with other isotopes to
form heavier elements or decay to a proton and an electron. Because the CNO isotopes are present in only
small quantities in a star, the amount of energy released through their fusion with helium-4 is generally
negligible; the importance of these fusion processes is in their effect on the isotopes found in the universe. The
absorption of a neutron by a nucleus can produce isotopes away from the C12 → O16 → Ne20 → Mg24 path.
The destruction of carbon-13 proceeds through the following reaction with helium-4:
C13 + He4 → O16 + n

In this reaction, the carbon absorbs a helium nucleus and releases a neutron to become oxygen-16, releasing
2.21 MeV of energy.
The destruction of nitrogen-14 through the absorption of helium-4 creates the unstable nucleus
fluorine-18, which decays to oxygen-18. These reactions are as follows:
N14 + He4 → F18 + γ
F18 → O18 + e+ + νe

The energy released in these processes is 4.42 MeV.


The oxygen-18 created from nitrogen-14 can be destroyed by absorbing a helium-4 nucleus. This
interaction has two branches, one that creates neon-21, and a second that creates neon-22. The first of these
reactions is as follows:
O18 + He4 → Ne21 + n

This reaction is endothermic, absorbing a total of 0.699 MeV of energy from the gas.
The second reaction is as follows:

O18 + He4 → Ne22 + γ
Ne22 + He4 → Mg25 + n

The first of these reactions is exothermic, generating 9.67 MeV of energy. The reaction producing the magnesium-25 is endothermic, swallowing 0.48 MeV of energy.

Helium Fusion Rates


The helium fusion processes divide into two sets: the primary processes, which create isotopes that are in
composition multiples of He4, and the secondary processes, which convert carbon-13 and nitrogen-14 into
heavier isotopes. Virtually all of the energy created during the burning of helium-4 is released through the
primary processes. The secondary processes are responsible for creating isotopes that are not multiples of
helium-4, either directly or through the release of neutrons that combine with the nuclei in the gas to create
neutron-rich isotopes.

Primary Processes
The rate at which helium is converted into heavier elements is set by the triple-alpha process, which is
the process that combines three helium-4 nuclei into a single carbon-12 nucleus. This rate is faster than any of
the rates that convert C12 and its products into heavier nuclei, so the triple-alpha process destroys most of the
He4 in a gas and generates most of the power generated through helium fusion.
The triple-alpha process is much more temperature dependent than the other primary helium fusion processes.
This strong dependence arises because the intermediate state of the reaction, the creation of beryllium-8
through the fusion of two He4 nuclei, is endothermic, and because the Be8 rapidly decays back into helium-4,
which makes the equilibrium density of beryllium-8 highly temperature dependent. The strong temperature
dependence of the triple-alpha reaction sets a narrow range for the core temperature of a star undergoing
helium fusion.

Once carbon-12 is created, it can combine with He4 to give oxygen-16. The oxygen in turn combines with He4
to give neon-20. This fusion chain continues to argon-36. But the rates for each of these processes are
considerably lower than for the triple-alpha process when the temperature is above 100 million degrees
Kelvin. This means that the C12 created in the triple-alpha process cannot be burned away before all of the He4 in the gas is exhausted. This implies that the end product of helium fusion is predominantly C12 for
temperatures above 100 million degrees Kelvin.

For temperatures below 100 million degrees, the rate of converting carbon-12 into oxygen-16 exceeds the
triple-alpha process. Under this circumstance, all of the C12 created in the triple-alpha process is converted
into O16, so that the end product of helium fusion is principally O16.

The reaction rates for heavier elements exceed the reaction rates for the triple-alpha process for
temperatures above 400 million degrees Kelvin, but because the C12 + He4 fusion rate remains below the triple-alpha rate, a bottleneck is created by the low rate of O16 production. In this temperature regime, the triple-alpha rate still governs the rate at which He4 is burned. The small amount of O16 that is created is rapidly converted
into Ne20 and Mg24.

This figure shows the reaction rates for helium fusion. The triple-alpha reaction rate, He4(2He4,γ)C12, is calculated for a helium-4 density of 10^5 g cm^-3. This rate is plotted on the plots for both the primary and the secondary processes. The temperature may be expressed either in degrees Kelvin or in kilo-electron volts. The nuclear reaction notation is described in the note on notation below.

Secondary Processes
The secondary processes convert C13 and N14, both products of the CNO hydrogen fusion process, into
O16 and O18 respectively. While these reactions are unimportant from the standpoint of power generation,
because the density of C13 and N14 is small in a star, they do have an effect on the isotopic composition of the
gas in a star. In the case of the burning of C13, this isotope of carbon is lost and a neutron is released that can
be absorbed by other nuclei to create heavier elements. The creation of O18 from N14 only changes the
isotopic composition of the gas, but the O18 can combine with He4 to create Ne21 and Ne22. The creation of a
Ne21 nucleus is accompanied by the release of a neutron.

The reaction rates for the destruction of C13 and N14 are both greater than the helium-4 reaction
rate, so these elements are burned before the triple-alpha process can consume a significant amount of He 4.
The creation of Ne21 is an endothermic reaction, so, as with the triple-alpha process, energy
conservation gives this process a strong temperature dependence. Both it and the process that creates Ne22 are
insignificant until the temperature rises above 200 million degrees Kelvin. By 300 million degrees Kelvin,
both fusion processes burn all of the O18 before the helium-4 is exhausted. Of these processes, the process
creating Ne21 is the more important.

A Note on Notation
In the figure a compact notation for nuclear reactions is used. The general form is A(b,c)D, which is equivalent to A + b gives c + D. So the reaction for creating deuterium is written as H1(H1, e+ νe)H2, which means H1 + H1 gives H2 plus a positron and a neutrino.
Also note that the symbol for a neutrino, ν, and the symbol for a gamma-ray, γ, may look very similar. The key for distinguishing them is that the neutrino is involved only in reactions that involve an electron (e-) or a positron (e+).

A Comment on Reaction Rates


The rates given in the figure are based on formulae given in Astrophysical Formulae by Lang.[1]
[1] Lang, Kenneth R. Astrophysical Formulae: A Compendium for the Physicist and the Astrophysicist. 2nd edition. New York: Springer-Verlag, 1980.

Fusion of Carbon and Oxygen

Carbon and Oxygen Fusion chain

Following the complete burning of helium-4 into carbon, oxygen, and other elements within the core of a
star, the core begins to collapse again until the next fusion stage is reached: the burning of carbon into heavier
elements. This stage is then followed by the fourth stage of thermonuclear fusion, the burning of oxygen into
heavier elements. Each of these stages is much more complex than either the hydrogen or the helium burning
stages, because the number of fusion processes and the variety of fusion products are much richer than in the
fusion of hydrogen and helium. Among the products are protons, which burn further through the CNO
hydrogen fusion process, neutrons, which can combine with atomic nuclei to produce heavier isotopes, and
helium-4, which can burn through one of the processes detailed on the helium fusion page. On this page, only
the most important processes are presented.

Carbon Fusion
In the carbon-fusion stage, two carbon-12 nuclei fuse to create heavier elements. Carbon preferentially
interacts only with itself, unlike helium, which interacts with heavy elements such as carbon-12 and oxygen-
16. In particular, there is no appreciable interaction between carbon-12 and
oxygen-16. The primary nuclei created through carbon fusion are sodium-23 (Na23) and neon-20 (Ne20).
Carbon fusion begins at about 600 to 700 million degrees (50 to 60 keV). The most energetic carbon-carbon reaction liberates approximately 13 MeV of energy as magnesium-24 (Mg24) is created. Other carbon-carbon reactions liberate considerably less energy than this, and in some cases consume energy. Much of this energy escapes from the star as neutrinos, even though none of the principal carbon fusion reactions emit neutrinos. The principal reactions are as follows:
C12 + C12 → Mg24 + γ
C12 + C12 → Na23 + p
C12 + C12 → Ne20 + He4
C12 + C12 → Mg23 + n
C12 + C12 → O16 + 2 He4
Of these processes, the first three are exothermic, releasing 13.93 MeV, 2.24 MeV, and 4.62 MeV respectively, and the last two are endothermic, absorbing 2.60 MeV and 0.11 MeV of energy respectively.

Oxygen Fusion
In oxygen fusion, two oxygen nuclei fuse to create elements with atomic mass at or below the mass of sulfur-32. Many different nuclei are created in this process, although silicon-28 (Si28) is the major product from the nuclear fusion of oxygen.
Oxygen fusion begins at about 1 billion degrees (90 keV). The energy released is more uncertain than
for the carbon burning, but it is comparable in value. Neutrino production is so great for oxygen fusion that
most of the energy liberated is transported out of the core by the neutrinos, so only a small part of the energy released in oxygen fusion is available to replace energy that is radiatively transported out of the star.
The principal oxygen fusion reactions are as follows:
O16 + O16 → S32 + γ
O16 + O16 → P31 + p
O16 + O16 → S31 + n
O16 + O16 → Si28 + He4
O16 + O16 → Mg24 + 2 He4
The first four reactions are exothermic, releasing 16.54 MeV, 7.68 MeV, 1.46 MeV, and 9.59 MeV
respectively. The last reaction is endothermic, absorbing 0.39 MeV of energy.
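The same mass-excess bookkeeping sketched earlier for helium burning also reproduces the carbon- and oxygen-burning energies quoted above; the short check below is an illustration added here, again using standard rounded mass-excess values.

# A minimal sketch (illustrative): check a few of the carbon- and oxygen-burning
# Q-values quoted above from standard tabulated mass excesses (MeV, rounded).
ME = {"C12": 0.000, "O16": -4.737, "Na23": -9.530, "Mg24": -13.933,
      "Si28": -21.493, "S32": -26.016, "He4": 2.425, "p": 7.289}

def q(reactants, products):
    return sum(ME[x] for x in reactants) - sum(ME[x] for x in products)

print(f"C12 + C12 -> Mg24 + g    : {q(['C12', 'C12'], ['Mg24']):+.2f} MeV")               # ~ +13.93
print(f"C12 + C12 -> Na23 + p    : {q(['C12', 'C12'], ['Na23', 'p']):+.2f} MeV")          # ~ +2.24
print(f"C12 + C12 -> O16 + 2 He4 : {q(['C12', 'C12'], ['O16', 'He4', 'He4']):+.2f} MeV")  # ~ -0.11
print(f"O16 + O16 -> S32 + g     : {q(['O16', 'O16'], ['S32']):+.2f} MeV")                # ~ +16.54
print(f"O16 + O16 -> Si28 + He4  : {q(['O16', 'O16'], ['Si28', 'He4']):+.2f} MeV")        # ~ +9.59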

Radiative process in stellar interiors

Four processes are principally responsible for creating, thermalizing, and impeding the flow of
radiation in the interior of a star. From those that are dominant at high temperature to those dominant at low,
these processes are Compton scattering, bremsstrahlung emission and absorption, photo-ionization and
recombination, and atomic line emission and absorption. Each of these interactions is described by a
probability.
Each possible interaction can be thought of as one of a pair of interactions, one forward, the other
reverse; this pairing permits the interaction of radiation with matter to obey the laws of thermodynamics. A
process that creates a photon is paired with an inverse process that destroys a photon of the same energy, and
the probability of creating a photon of a given energy is precisely tied to the probability of its destruction.
These properties of the interaction between matter and radiation are the reasons that hot thermal matter will
modify the spectrum of electromagnetic radiation until the spectrum is black-body. When the photons acquire a
black-body spectrum, the creation and destruction of photons of a given energy are in balance. The scattering
process does not create or destroy photons; rather, it allows the exchange of energy between radiation and
matter. The forward and reverse scattering processes, where the inverse process is the forward process reversed
in time, have probabilities that are precisely related, with the forward and reverse processes occurring at the
same rate when the radiation has a black-body spectrum.

Compton Scattering
A photon can scatter, exchanging energy and momentum, with a free electron, that is, an electron that is not bound to an atom. This process is called Compton scattering. While the process does not destroy or create
photons, it does keep the photons in thermal equilibrium with the electrons of a star, and it slows the diffusion
of radiation from the core of a star. Compton scattering is the dominant radiative process for photons that are
hard x-rays (energies above several keV) and gamma-rays. It is the dominant process for the thermalization and
transport of radiation when the temperature is above several tens of millions of degrees, where a large fraction
of the photons in a black-body spectrum are x-rays.
The probability that a photon Compton scatters with an electron is proportional to the electron density.
The probability is independent of photon energy as long as the energy is well below the electron rest-mass
energy. When the energy exceeds the electron rest-mass energy, the probability of a scattering between a
photon and electron decreases almost inverse-proportionally with photon energy; this effect is unimportant in
stars, which have black-body photons that are far below the electron rest-mass energy.

Bremsstrahlung
A free electron moves along a hyperbolic path past an ion, curving towards the ion. As the electron
accelerates through this path, it emits electromagnetic radiation, and if electromagnetic waves are passing by at
the time, the electron can absorb electromagnetic radiation. The radiation created in this way is called
bremsstrahlung, or braking radiation. The absorption of radiation through this process is often called free-free
absorption, referring to the state of the electron before and after the event.
The rate per unit volume at which radiation is created through bremsstrahlung is proportional to the free-
electron density times the ion density. It is inversely-proportional to the square-root of the temperature,
because slow electrons follow sharper paths than do fast electrons, producing more electromagnetic radiation.
This means that this mechanism becomes less efficient as the temperature rises. On the other hand, as the
density increases, the rate at which electromagnetic energy is released rapidly rises.

Photo-ionization
Electromagnetic radiation can free an electron that is bound within an atom; the only requirement is that
the photon must carry an energy at least equal to the binding energy of the electron. The absorption of a
photon through photo-ionization is often called bound-free absorption. The probability of this interaction
occurring is greatest for photons carrying the binding energy of an electron. As the energy of the photon
increases above the binding energy, the probability of it freeing the electron decreases.
Ionization and its inverse, recombination, are important for hydrogen and helium over a narrow range of
temperatures. At high temperatures, the atoms quickly dissociate, so that in equilibrium virtually no hydrogen
and helium with bound electrons exists. On the other hand, at low temperatures, there are no photons that can
dissociate a hydrogen or helium atom. It is only in the narrow ranges of temperatures that keep the density of
neutral hydrogen, neutral helium, and partially-ionized helium about equal to the density of free electrons that
ionization and recombination of hydrogen and helium dominate the diffusion of radiation.
While the ionization of hydrogen and helium occurs at temperatures characterized by ultraviolet radiation, many other elements within a star ensure that ionization and recombination play an important role in hindering the diffusion of radiation at higher temperatures. The ionization of the most tightly-bound electrons of iron plays a particularly important role in the diffusion of x-rays.

Atomic Lines
At low temperatures, most of the electrons are bound within atoms and the average energy of the
photons is too low to ionize an atom. Under these conditions, the radiation interacts with atoms by forcing the
bound electrons to change their orbits within atoms. These interactions are resonant, meaning that they occur
at very particular photon energies, where the frequency of the electron's motion within the atom matches the
frequency of the light. In fact, the energies at which the interactions occur are the energies that separate pairs
of quantized electron orbits within an atom. Unlike the orbit of a planet around the Sun, an electron can only
orbit the nucleus of an atom at very specific energies.
For an electron to move from one state to the next, it must acquire or lose the amount of energy that
separates the two states. This can be done through a collision between two atoms, and it can be done through
the absorption or emission of a photon. When the second happens, the result is the creation and destruction of
photons at very specific energies. In practice, these interactions occur over narrow ranges of energies, partly
from the Doppler shift of the line from the random motion of the ions, and partly from the property of quantum
mechanics that an electron energy state becomes a narrow continuum of values when an electron spends a finite
amount of time in that state. Atoms therefore emit and absorb photons over narrow ranges of energies that have
widths associated with the widths of the electron energy states.

RADIATIVE TRANSPORT

Radiative transport in stellar interiors


The radiation field in the interior of a star always has a black-body spectrum, because the interactions
between matter and radiation rapidly bring the electromagnetic radiation into thermal equilibrium with the
electrons and ions. In the absence of convection, this radiation field provides the mechanism of transporting
energy out of the star. The radiation diffuses in the direction of lower temperature, which means it diffuses to
the star's surface, where it can freely escape into space.
Diffusion is a random-walk process. For instance, a photon that interacts only through Compton scattering will move a small distance in one direction, scatter with an electron, and then move a similar distance in a new, random direction. To move a large distance from its starting point, meaning many times the distance traveled between scatterings, the photon must random walk a distance many times
longer. This random walk also holds for absorption and emission, because photons are emitted into random
directions; energy absorbed from a photon moving in one direction will be released as a photon moving in a
new direction.
An estimate of how far a photon must random walk to travel a given distance away from a starting point can be calculated from the ratio of the distance traveled away from a source divided by the average distance
traveled between scatterings. This ratio would give the number of scatterings if the photon had continued on in
the same direction after each scattering. In a random walk, a photon must travel this ratio squared times the
average distance between scatterings to move our required distance from its source. This is equivalent to
random walking our required distance times the ratio of this distance to the average distance traveled between
scatterings. In a star, the distance between scatterings is very small, while the distance across the star is very
large, so a photon must random-walk a distance that is many times the distance across a star. At the core of the
Sun, a photon undergoes 10^15 Compton scatterings per second, and over 1 second, it diffuses about 5 meters from its starting point. A photon in the Sun requires about 10^30 scatterings to escape the core, which takes of
order 10 million years.
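A rough version of this random-walk estimate can be written out explicitly. The sketch below is an order-of-magnitude illustration added here: the scattering rate is the 10^15 per second figure quoted above, while the core radius of roughly 0.2 solar radii is an assumed value.

# A minimal order-of-magnitude sketch (assumed inputs): photon random walk out
# of the solar core, using the ratio-squared rule described in the text.
c = 3.0e8                      # speed of light, m/s
scatter_rate = 1.0e15          # scatterings per second (figure quoted above)
mean_free_path = c / scatter_rate      # average distance between scatterings, m

R_core = 0.2 * 7.0e8           # assumed core radius, ~0.2 solar radii, m
n_scatterings = (R_core / mean_free_path) ** 2   # (distance / mean free path) squared
escape_time_s = n_scatterings / scatter_rate

year = 3.156e7                 # seconds per year
print(f"mean free path ~ {mean_free_path:.1e} m")
print(f"scatterings    ~ {n_scatterings:.1e}")
print(f"escape time    ~ {escape_time_s / year:.1e} yr")
# These crude inputs give of order 10^29-10^30 scatterings and several million
# years, consistent with the figures quoted in the paragraph above.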

The radiation diffuses most rapidly where the interaction of radiation with matter is weakest. The
diffusion is fastest when Compton scattering is the only mechanism of interaction. As the temperature drops,
so that bremsstrahlung, photo-ionization, and atomic transition processes become prevalent, the interaction
between radiation and matter becomes more frequent, and the diffusion becomes slower.
Diffusion is faster at some photon frequencies than others. For instance, diffusion at the frequency of an atomic transition is much slower than at a frequency well away from an atomic transition. At higher
temperatures, diffusion is fastest for the highest-energy photons, because low-energy photons interact with
electrons through the bremsstrahlung process more strongly than do the high-energy photons.
The diffusion of energy is always in the direction of lower temperature. The reasons are that as the temperature drops, the density of photons drops, because the density of photons is proportional to T^3, and the average energy carried by a photon drops proportionally with temperature. The number of photons random-walking from a high-temperature area into a low-temperature area is therefore much larger than the number of photons random-walking from a low-temperature area to a high-temperature area, and the average energy carried by the high-temperature photons to the low-temperature area is greater than the average energy carried from the low-temperature area to the high-temperature area by low-temperature photons.
The power diffusing through the radius r inside the star is described by the equation

L(r) = -4πr² (4ac / (3κρ)) T³ dT/dr.

In this equation, a = 7.565 × 10^-15 erg cm^-3 deg^-4 is the radiation constant (related to the Stefan-Boltzmann constant by a = 4σ/c), c is the speed of light, ρ is the mass density of the material, κ is the Rosseland mean opacity, and T is the temperature. This equation explicitly shows that the power flowing through a particular radius is proportional to the temperature gradient at that radius. What is not shown is all of the complex physics associated with the interaction of radiation with matter; this is hidden in the Rosseland mean opacity. The Rosseland mean opacity describes the strength of the interactions between radiation and matter, with the weakest interactions contributing most strongly to the parameter. It is a function of both density and temperature. The only time it has a simple form is when the dominant contribution is from Compton scattering for temperatures well below the electron rest-mass energy. In this case, the Rosseland mean opacity is a constant. The diffusion equation is one of the principal equations for deriving the internal structure of a star.

Convection in stellar interiors

The temperature gradient in a star determines the rate at which radiative diffusion transports energy
out of a star. If this gradient becomes steep enough, the plasma in this region becomes unstable to
convection. Convection therefore sets a bound on the temperature gradient in a star. Because convection
imposes a constraint on the temperature gradient, it imposes a limit on the amount of energy transported by
the diffusion of radiation; convection is the dominant mechanism for transporting energy in convectively-
unstable regions.
The mechanism that gives rise to convection is the same for all pressure-supported atmospheres trapped
in a gravitational potential. If a gas in a gravitational potential is static, then the pressure at any point within the gas is equal to the pressure exerted by the overlying material. The temperature and density, however, are set by other factors, such as the radiative transport of energy. For an ideal gas, such as is found in the interior of a star, the gas pressure increases with either an increase in temperature
or density. A volume of gas of any temperature can therefore be in pressure balance with its surroundings if the
density is adjusted to compensate. For instance, if a particular volume is hotter than its surroundings, then its
density will be lower than that of its surroundings. It is this property that drives convection, because the lower-density
region is buoyant, and it will rise to a higher altitude.
Whether or not a region is unstable to convection depends on the precise temperature structure of the
region. Let us assume, for instance, that a gravitationally-bound gas has a pressure, temperature, and density
that drop with altitude. What happens when we take a small volume of this gas and push it to higher altitude?
Does it sink back to its original position, or does it keep rising? If the first occurs, the gas is stable to
convection; if the second occurs, the gas is unstable to convection.
When our volume of gas is lifted to a higher altitude, the gas expands to maintain pressure balance with
its surroundings. This expansion decreases not only the density, but also the temperature, because our gas volume is
doing work on its surroundings as it expands. If the temperature drops faster than the temperature of the
surrounding gas, the density of our volume will be greater than that of the surroundings. The volume would
therefore be less buoyant, and would sink back into place. But if the temperature of our volume is greater than
the surrounding temperature, the density of the volume will be less than that of the surroundings, and our
volume would be more buoyant than the surrounding gas. Our volume would continue rising, and convection
would commence in the gas. The atmosphere in this case is unstable to convection. From this we see that the
stability of the gas is dependent on the temperature gradient. If the temperature gradient becomes too steep,
convection begins.
This thought experiment shows that the density of a star must decrease as one moves out from the core to
the surface. In a star, the pressure decreases as one moves outward, because the pressure is set by the weight of
the overlying layers. If we take our volume element and move it up, both the density and temperature of the
element must drop to achieve this lower pressure: the density drops because of the expansion, and the
temperature drops because of the work done on the surroundings during the expansion. But if the surrounding density is constant or increases with altitude, then our volume has a lower density than the surroundings, and it is more buoyant than its surroundings. A density that increases with altitude is therefore
unstable to convection.
In our thought experiment of lifting a gas volume to higher altitude, we are defining a temperature
structure for the atmosphere through an adiabatic process, which is defined as a process where no heat is
exchanged with the surroundings. If the actual temperature over a region of a star falls faster than this
adiabatic temperature, the region is unstable to convection.
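This stability condition is often summarized as the Schwarzschild criterion. The statement below is a standard textbook form added here for reference, written with the adiabatic index Γ that appears in the polytrope discussion later; it is not quoted from this text. Convection sets in when the actual temperature gradient is steeper than the adiabatic one:

\left| \frac{dT}{dr} \right|_{\mathrm{actual}} \;>\; \left| \frac{dT}{dr} \right|_{\mathrm{adiabatic}} = \left( 1 - \frac{1}{\Gamma} \right) \frac{T}{P} \left| \frac{dP}{dr} \right|.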
Most stars have regions of convection. In main-sequence stars that are the size of the Sun or smaller, this region is in the outer layers of the star. In main-sequence stars that are more massive than the Sun, the core of the star is convective.
Convection is important not only because it transports energy, but because it mixes the gas in a star. For
stars with convective cores, the products of nuclear fusion are mixed with lighter elements from regions not
supporting nuclear fusion. This mixing prolongs core nuclear fusion in a star.

Polytropic stars.

In physics we develop insight by distilling a problem down to its essence. For stellar structure, this
essence is found by expressing the pressure within a star in terms of density alone and solving the resulting
equation of hydrostatic equilibrium. This type of model is referred to in the scientific literature as a polytropic
stellar model. In this type of analysis, we are ignoring the details of how energy is transported out of the star.
Temperature is present, but only implicitly.

The advantage of this type of analysis is that we find a simple solution for the stellar density with radius
that is not too far off from the results found by solving the complete set of equations that describe the
structure of a star. Because of this, polytropic stellar models are used as the initial models in computer codes
that recursively solve for stellar structure. We also learn from this type of analysis that not all physically
realized equations for pressure permit a static stellar solution; this points to the mechanism that drives core
collapse in dying stars, which leads to supernovae and the creation of neutron stars and black holes.
The polytropic pressure law used in defining a polytropic stellar model is

P = K ρ^Γ,

where P is the pressure, ρ is the density, K is a constant, and Γ is the adiabatic index.
There is a tremendous amount of physics buried in the adiabatic index. If we consider the case of a fully ionized plasma that is at the stability limit to convection, we find a pressure law with Γ = 5/3. If we consider a star that is pressure-supported in part by radiation, with the fraction of pressure provided by radiation held constant, we find that Γ = 4/3. These two values mark the most general range of values seen in real stars. These two values also bound the range of Γ for degenerate gases, which are gases of low enough temperature that the effects of quantum mechanics, particularly the effects of the Pauli exclusion principle, determine the pressure-density relationship. For a non-relativistic degenerate gas, Γ = 5/3, and for a relativistic degenerate gas, Γ = 4/3. But while 5/3 and 4/3 mark the bounds of the most general range of values seen in real stars, there are circumstances when Γ can go much lower, below the critical value of 1.2, for example when a gas is becoming ionized.

The figure on this page shows the density as a function of radius for four different values of Γ. The plot shows the density in units of central density as a function of normalized radius for stars of a single mass (the units of radius are explained at the end of this page). The primary point to notice is that there is very little difference among these models for the core density. Most of the mass is confined within about the same radius, regardless of the value of Γ. Where the differences occur is at the outer edge of each star. For stars with a large value of Γ, the density drops rapidly to 0. As Γ becomes smaller, the core of the star becomes slightly smaller while the envelope of the star becomes much larger. As the value of Γ goes to 1.2, the radius of the star's surface goes to infinity, even though the mass of the star remains constant; this is best seen in the figure by plotting θ, which is related to density by the equation below. Stars with Γ ≤ 1.2 have no static solution.
From these solutions, we see that when a star maintains a particular adiabatic index as its core shrinks,
the outer surface of the star also shrinks by the same factor. But if the adiabatic index of a star becomes
smaller as the star's core shrinks, the radius of its surface may increase. This happens when a star moves
from the main sequence into the giant phase; the core shrinks until the nuclear fusion of helium commences,
and the radius of the star expands.
The absence of a hydrostatic solution for Γ ≤ 1.2 is interpreted as a condition for stellar collapse. This is the origin of core collapse in massive stars that have burned most of their nuclear fuel. As the core of such a star shrinks and becomes hotter, atoms at the core disintegrate and combine with electrons to form free neutrons, causing the adiabatic index to drop below 1.2. The core of such a star will collapse until either a state is reached with Γ > 1.2, which forms a neutron star, or the core forms a black hole.

A similar situation is encountered in protostars of un-ionized gas. Once the core temperature reaches a
temperature that allows ionization, the adiabatic index drops below 1.2, and the protostar collapses until the
whole star ionizes, returning the adiabatic index to a value above 1.2.

Figure Notes
Three pieces of physics enter into the derivation of the polytrope equation: the hydrostatic equation, which is the equation for pressure balancing gravitational force, the polytropic pressure equation, which is given above, and the mass of the star inside a given radius. The equation that one finds is a nonlinear second-order differential equation. Rather than being written as an equation of density versus radius, the equation is written as an equation for θ in terms of ξ, where θ is related to the density ρ by

ρ = ρ0 θ^(1/(Γ - 1)),

and ξ is related to radius r by

r = a ξ.

The term a in this equation is a function of mass, central density, and adiabatic index. The radius in the diagram is normalized so that each curve represents the same mass and the curve for Γ = 1.2 has a = 1.
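For readers who want to reproduce curves like the ones described here, the sketch below (an illustration added to this text) integrates the Lane-Emden form of the polytrope equation for θ(ξ) with a simple stepping scheme; the step size and the particular values of Γ are arbitrary choices.

# A minimal sketch (illustrative): integrate the Lane-Emden equation for a
# polytrope with P = K rho**Gamma, i.e. polytropic index n = 1/(Gamma - 1).
# Here rho = rho0 * theta**n and r = a * xi, as in the Figure Notes above.

def lane_emden_surface(n, dxi=1e-4):
    """Step theta(xi) outward until theta reaches zero (the stellar surface)."""
    # Series expansion near the center gives the starting values at xi = dxi.
    xi, theta, dtheta = dxi, 1.0 - dxi**2 / 6.0, -dxi / 3.0
    while theta > 0.0:
        # Lane-Emden equation: theta'' = -theta**n - (2/xi) * theta'
        d2theta = -theta**n - 2.0 * dtheta / xi
        dtheta += d2theta * dxi
        theta  += dtheta * dxi
        xi     += dxi
    return xi   # dimensionless surface radius

for gamma in (5.0 / 3.0, 4.0 / 3.0, 1.25):
    n = 1.0 / (gamma - 1.0)
    print(f"Gamma = {gamma:.3f} (n = {n:.1f}): surface at xi ~ {lane_emden_surface(n):.2f}")

# For Gamma = 5/3 the surface falls near xi ~ 3.7, for Gamma = 4/3 near xi ~ 6.9,
# and as Gamma approaches 1.2 (n -> 5) the surface radius grows without bound.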


Low-mass stars

What happens after a low-mass star ceases to produce energy through fusion has not been directly observed;
the universe is around 13.8 billion years old, which is less time (by several orders of magnitude, in some cases)
than it takes for fusion to cease in such stars.

Recent astrophysical models suggest that red dwarfs of 0.1 M☉ may stay on the main sequence for some six to twelve trillion years, gradually increasing in both temperature and luminosity, and take several hundred billion more to collapse, slowly, into a white dwarf. Such stars will not become red giants as they are fully
convective and will not develop a degenerate helium core with a shell burning hydrogen. Instead, hydrogen fusion
will proceed until almost the whole star is helium.

Internal structures of main-sequence stars; convection zones are shown with arrowed cycles and radiative zones with red flashes. To the left, a low-mass red dwarf; in the center, a mid-sized yellow dwarf; and at the right, a massive blue-white main-sequence star.

Slightly more massive stars do expand into red giants, but their helium cores are not massive enough to reach the
temperatures required for helium fusion so they never reach the tip of the red giant branch. When hydrogen shell
burning finishes, these stars move directly off the red giant branch like a post-asymptotic-giant-branch (AGB) star,
but at lower luminosity, to become a white dwarf. A star of about 0.5 M☉ will be able to reach temperatures high
enough to fuse helium, and these "mid-sized" stars go on to further stages of evolution beyond the red giant
branch.[10]

Mid-sized stars

The evolutionary track of a solar mass, solar metallicity, star from main sequence to post-AGB

Stars of roughly 0.5–10 M☉ become red giants, which are large non-main-sequence stars of stellar classification K or M. Red giants lie along the right edge of the Hertzsprung–Russell diagram due to their red color and large luminosity. Examples include Aldebaran in the constellation Taurus and Arcturus in the constellation of Boötes.

Mid-sized stars are red giants during two different phases of their post-main-sequence evolution: red-giant-branch
stars, whose inert cores are made of helium, and asymptotic-giant-branch stars, whose inert cores are made of
carbon. Asymptotic-giant-branch stars have helium-burning shells inside the hydrogen-burning shells, whereas red-
giant-branch stars have hydrogen-burning shells only. Between these two phases, stars spend a period on the
horizontal branch with a helium-fusing core. Many of these helium-fusing stars cluster towards the cool end of the
horizontal branch as K-type giants and are referred to as red clump giants.

Subgiant phase

A subgiant is a star that is brighter than a normal main-sequence star of the same spectral class, but not as bright
as true giant stars. The term subgiant is applied both to a particular spectral luminosity class and to a stage in the
evolution of a star.

When a star exhausts the hydrogen in its core, it leaves the main sequence and begins to fuse hydrogen in a shell
outside the core. The core increases in mass as the shell produces more helium. Depending on the mass of the
helium core, this continues for several million to one or two billion years, with the star expanding and cooling at a
similar or slightly lower luminosity to its main sequence state. Eventually either the core becomes degenerate, in
stars around the mass of the sun, or the outer layers cool sufficiently to become opaque, in more massive stars.
Either of these changes causes the hydrogen shell to increase in temperature and the luminosity of the star to increase, at
which point the star expands onto the red giant branch.

Red-giant-branch phase

Typical stellar evolution for 0.8–8 M☉

The expanding outer layers of the star are convective, with the material being mixed by turbulence from near the
fusing regions up to the surface of the star. For all but the lowest-mass stars, the fused material has remained deep
in the stellar interior prior to this point, so the convecting envelope makes fusion products visible at the star's
surface for the first time. At this stage of evolution, the results are subtle, with the largest effects, alterations to the
isotopes of hydrogen and helium, being unobservable. The effects of the CNO cycle appear at the surface during
the first dredge-up, with lower 12C/13C ratios and altered proportions of carbon and nitrogen. These are detectable
with spectroscopy and have been measured for many evolved stars.

The helium core continues to grow on the red giant branch. It is no longer in thermal equilibrium, either
degenerate or above the Schoenberg-Chandrasekhar limit, so it increases in temperature which causes the rate of
fusion in the hydrogen shell to increase. The star increases in luminosity towards the tip of the red-giant branch.
Red giant branch stars with a degenerate helium core all reach the tip with very similar core masses and very
similar luminosities, although the more massive of the red giants become hot enough to ignite helium fusion
before that point.

Horizontal branch

In the helium cores of stars in the 0.8 to 2.0 solar mass range, which are largely supported by electron degeneracy
pressure, helium fusion will ignite on a timescale of days in a helium flash. In the nondegenerate cores of more
massive stars, the ignition of helium fusion occurs relatively slowly with no flash. The nuclear power released
during the helium flash is very large, on the order of 10 8 times the luminosity of the Sun for a few days[12]and 1011
times the luminosity of the Sun (roughly the luminosity of the Milky Way Galaxy) for a few seconds.[14]However,
the

196
energy is consumed by the thermal expansion of the initially degenerate core and thus cannot be seen from outside
the star.[12][14][15] Due to the expansion of the core, the hydrogen fusion in the overlying layers slows and total energy generation decreases. The star contracts, although not all the way to the main sequence, and it migrates to the horizontal branch on the Hertzsprung-Russell diagram, gradually shrinking
in radius and increasing its surface temperature.

Core helium flash stars evolve to the red end of the horizontal branch but do not migrate to higher temperatures
before they gain a degenerate carbon-oxygen core and start helium shell burning. These stars are often observed as
a red clump of stars in the colour-magnitude diagram of a cluster, hotter and less luminous than the red giants.
Higher-mass stars with larger helium cores move along the horizontal branch to higher temperatures, some
becoming unstable pulsating stars in the yellow instability strip (RR Lyrae variables), whereas some become even
hotter and can form a blue tail or blue hook to the horizontal branch. The morphology of the horizontal branch
depends on parameters such as metallicity, age, and helium content, but the exact details are still being modelled.

Asymptotic-giant-branch phase
Asymptotic giant branch

After a star has consumed the helium at the core, hydrogen and helium fusion continues in shells around a hot core
of carbon and oxygen. The star follows the asymptotic giant branch on the Hertzsprung-Russell diagram, paralleling the original red-giant evolution, but with even faster energy generation (which lasts for a shorter time). Although
helium is being burnt in a shell, the majority of the energy is produced by hydrogen burning in a shell further from
the core of the star. Helium from these hydrogen burning shells drops towards the center of the star and periodically
the energy output from the helium shell increases dramatically. This is known as a thermal pulse and they occur
towards the end of the asymptotic-giant-branch phase, sometimes even into the post-asymptotic-giant-branch phase.
Depending on mass and composition, there may be several to hundreds of thermal pulses.

There is a phase on the ascent of the asymptotic-giant-branch where a deep convective zone forms and can bring
carbon from the core to the surface. This is known as the second dredge-up, and in some stars there may even be a third dredge-up. In this way a carbon star is formed: a very cool, strongly reddened star showing strong carbon lines in its spectrum. A process known as hot bottom burning may convert carbon into oxygen and nitrogen before
it can be dredged to the surface, and the interaction between these processes determines the observed luminosities
and spectra of carbon stars in particular clusters.

Another well-known class of asymptotic-giant-branch stars are the Mira variables, which pulsate with well-defined periods of tens to hundreds of days and large amplitudes up to about 10 magnitudes (in the visual; total luminosity changes by a much smaller amount). More-massive stars become more luminous, with longer pulsation periods, leading to enhanced mass loss, and they become heavily obscured at visual wavelengths. These
stars can be observed as OH/IR stars, pulsating in the infra-red and showing OH maser activity. These stars are
clearly oxygen rich, in contrast to the carbon stars, but both must be produced by dredge ups.

Post-AGB

The Cat's Eye Nebula, a planetary nebula formed by the death of a star with about the same mass as the Sun

These mid-range stars ultimately reach the tip of the asymptotic-giant-branch and run out of fuel for shell burning.
They are not sufficiently massive to start full-scale carbon fusion, so they contract again, going through a period of
post-asymptotic-giant-branch superwind to produce a planetary nebula with an extremely hot central star. The
central star then cools to a white dwarf. The expelled gas is relatively rich in heavy elements created within the star
and may be particularly oxygen or carbon enriched, depending on the type of the star. The gas builds up in an
expanding shell called a circumstellar envelope and cools as it moves away from the star, allowing dust particles
and molecules to form. With the high infrared energy input from the central star, ideal conditions are formed in
these circumstellar envelopes for maser excitation.

It is possible for thermal pulses to be produced once post-asymptotic-giant-branch evolution has begun, producing
a variety of unusual and poorly understood stars known as born-again asymptotic-giant-branch stars. These may
result in extreme horizontal-branch stars (subdwarf
B stars), hydrogen deficient post-asymptotic-giant-branch stars, variable planetary nebula central stars, and R
Coronae Borealis variables.

Massive stars
Supergiant

The Crab Nebula, the shattered remnants of a star which exploded as a supernova, the light of which
reached Earth in 1054 AD

In massive stars, the core is already large enough at the onset of the hydrogen burning shell that helium ignition will
occur before electron degeneracy pressure has a chance to become prevalent. Thus, when these stars expand and
cool, they do not brighten as much as lower-mass stars; however, they were much brighter than lower-mass stars to
begin with, and are thus still brighter than the red giants formed from less-massive stars. These stars are unlikely to survive as red supergiants; instead they will destroy themselves as Type II supernovae.

Extremely massive stars (more than approximately 40 M☉), which are very luminous and thus have very rapid stellar
winds, lose mass so rapidly due to radiation pressure that they tend to strip off their own envelopes before they can
expand to become red supergiants, and thus retain extremely high surface temperatures (and blue-white color) from
their main-sequence time onwards. The largest stars of the current generation are about 100-150 M☉ because the outer layers would be expelled by the extreme radiation. Although lower-mass stars normally do not burn off their
outer layers so rapidly, they can likewise avoid becoming red giants or red supergiants if they are in binary systems
close enough so that the companion star strips off the envelope as it expands, or if they rotate rapidly enough so that
convection extends all the way from the core to the surface, resulting in the absence of a separate core and envelope
due to thorough mixing.

The core grows hotter and denser as it gains material from fusion of hydrogen at the base of the envelope. In all
massive stars, electron degeneracy pressure is insufficient to halt collapse by itself, so as each major element is
consumed in the center, progressively heavier elements ignite, temporarily halting collapse. If the core of the star is
not too massive (less than approximately 1.4 M☉, taking into account mass loss that has occurred by this time), it may then form a white dwarf (possibly
surrounded by a planetary nebula) as described above for less-massive stars, with the difference that the white
dwarf is composed chiefly of oxygen, neon, and magnesium.

The onion-like layers of a massive, evolved star just before core collapse. (Not to scale.)

Above a certain mass (estimated at approximately 2.5 M☉, for a progenitor star of around 10 M☉), the core
will reach the temperature (approximately 1.1 gigakelvins) at which neon partially breaks down to form oxygen
and helium, the latter of which immediately fuses with some of the remaining neon to form magnesium; then
oxygen fuses to form sulfur, silicon, and smaller amounts of other elements. Finally, the temperature gets high
enough that any nucleus can be partially broken down, most commonly releasing an alpha particle (helium
nucleus) which immediately fuses with another nucleus, so that several nuclei are effectively rearranged into a
smaller number of heavier nuclei, with net release of energy because the addition of fragments to nuclei exceeds
the energy required to break them off the parent nuclei.

A star with a core mass too great to form a white dwarf but insufficient to achieve sustained conversion of neon to
oxygen and magnesium will undergo core collapse (due to electron capture) before achieving fusion of the heavier elements.[21] Both heating and cooling caused by electron capture onto minor constituent elements (such as aluminum and sodium) prior to collapse may have a significant impact on total energy generation within the star shortly before collapse.[22] This may produce a noticeable effect on the abundance of elements and isotopes ejected
in the subsequent supernova.

SUPERNOVA
Supernova

Once the nucleosynthesis process arrives at iron-56, the continuation of this process consumes energy (the
addition of fragments to nuclei releases less energy than required to break them off the parent nuclei). If the mass
of the core exceeds the Chandrasekhar limit, electron degeneracy pressure will be unable to support its weight
against the force of gravity, and the core will undergo sudden, catastrophic collapse to form a neutron star or (in
the case of cores that exceed
the Tolman-Oppenheimer-Volkoff limit), a black hole. Through a process that is not completely understood, some
of the gravitational potential energy released by this core collapse is converted into a Type Ib, Type Ic, or Type II
supernova. It is known that the core collapse produces a massive surge of neutrinos, as observed with supernova SN
1987A. The extremely energetic neutrinos fragment some nuclei; some of their energy is consumed in releasing
nucleons, including neutrons, and some of their energy is transformed into heat and kinetic energy, thus augmenting
the shock wave started by rebound of some of the infalling material from the collapse of the core. Electron capture
in very dense parts of the infalling matter may produce additional neutrons. Because some of the rebounding matter
is bombarded by the neutrons, some of its nuclei capture them, creating a spectrum of heavier-than-iron material
including the radioactive elements up to (and likely beyond) uranium.[23]Although non-exploding red giants can
produce significant quantities of elements heavier than iron using neutrons released in side reactions of earlier
nuclear reactions, the abundance of elements heavier than iron (and in particular, of certain isotopes of elements
that have multiple stable or long-lived isotopes) produced in such reactions is quite different from that produced in
a supernova. Neither abundance alone matches that found in the Solar System, so both supernovae and ejection of
elements from red giants are required to explain the observed abundance of heavy elements and isotopes thereof.

The energy transferred from collapse of the core to rebounding material not only generates heavy elements, but
provides for their acceleration well beyond escape velocity, thus causing a Type Ib, Type Ic, or Type II supernova.
Note that current understanding of this energy transfer is still not satisfactory; although current computer models of
Type Ib, Type Ic, and Type II supernovae account for part of the energy transfer, they are not able to account for
enough energy transfer to produce the observed ejection of material.

Some evidence gained from analysis of the mass and orbital parameters of binary neutron stars (which require two
such supernovae) hints that the collapse of an oxygen-neon-magnesium core may produce a supernova that differs
observably (in ways other than size) from a supernova produced by the collapse of an iron core. [25]

The most massive stars that exist today may be completely destroyed by a supernova with an energy greatly
exceeding its gravitational binding energy. This rare event, caused by pair-instability, leaves behind no black hole
remnant.[26]In the past history of the universe, some stars were even larger than the largest that exists today, and
they would immediately collapse into a black hole at the end of their lives, due to photodisintegration.

Stellar evolution of low-mass (left cycle) and high-mass (right cycle) stars, with examples in italics

Stellar remnants

After a star has burned out its fuel supply, its remnants can take one of three forms, depending on the mass during its
lifetime.

White and black dwarfs



White dwarf

For a star of 1 M☉, the resulting white dwarf is of about 0.6 M☉, compressed into approximately the volume of the
Earth. White dwarfs are stable because the inward pull of gravity is balanced by the degeneracy pressure of the
star's electrons, a consequence of the Pauli exclusion principle. Electron degeneracy pressure provides a rather soft
limit against further compression; therefore, for a given chemical composition, white dwarfs of higher mass have a
smaller volume. With no fuel left to burn, the star radiates its remaining heat into space for billions of years.
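The inverse mass-volume trend can be sketched with the non-relativistic degenerate-electron polytrope, for which the radius scales roughly as M^(-1/3). The short Python sketch below is illustrative only; the normalisation (0.6 M☉ mapped to roughly one Earth radius, as in the text) is an assumption, not a fitted relation.

def white_dwarf_radius_km(mass_solar, r0_km=6371.0, m0_solar=0.6):
    """Approximate white dwarf radius, assuming R scales as M**(-1/3)."""
    return r0_km * (mass_solar / m0_solar) ** (-1.0 / 3.0)

for m in (0.6, 1.0, 1.3):
    print(f"{m:.1f} solar masses -> roughly {white_dwarf_radius_km(m):,.0f} km")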

Pauli exclusion principle

Connection to quantum state symmetry


The Pauli exclusion principle with a single-valued many-particle wavefunction is equivalent to requiring the wavefunction to be antisymmetric under exchange. An antisymmetric two-particle state is represented as a sum of states in which one particle is in state |x⟩ and the other in state |y⟩, and is given by:

|ψ⟩ = Σ_{x,y} A(x,y) |x,y⟩

and antisymmetry under exchange means that A(x,y) = -A(y,x). This implies A(x,y) = 0 when x = y, which is Pauli exclusion. It is true in any basis, since unitary changes of basis keep antisymmetric matrices antisymmetric, although strictly speaking the quantity A(x,y) is not a matrix but an antisymmetric second-order tensor.

Conversely, if the diagonal quantities A(x,x) are zero in every basis, then the wavefunction component

A(x,y) = ⟨ψ|x,y⟩

is necessarily antisymmetric. To prove it, consider the matrix element

⟨ψ| ((|x⟩ + |y⟩) ⊗ (|x⟩ + |y⟩))

This is zero, because the two particles have zero probability to both be in the superposition state |x⟩ + |y⟩. But this is equal to

⟨ψ|x,x⟩ + ⟨ψ|x,y⟩ + ⟨ψ|y,x⟩ + ⟨ψ|y,y⟩

The first and last terms on the right side are diagonal elements and are zero, and the whole sum is equal to zero. So the wavefunction matrix elements obey:

⟨ψ|x,y⟩ + ⟨ψ|y,x⟩ = 0

or

A(x,y) = -A(y,x).
Pauli principle in advanced quantum theory

According to the spin-statistics theorem, particles with integer spin occupy symmetric quantum states,
and particles with half-integer spin occupy antisymmetric states; furthermore, only integer or half-
integer values of spin are allowed by the principles of quantum mechanics. In relativistic quantum field
theory, the Pauli principle follows from applying a rotation operator in imaginary time to particles of half-
integer spin.
In one dimension, bosons, as well as fermions, can obey the exclusion principle. A one-dimensional
Bose gas with delta-function repulsive interactions of infinite strength is equivalent to a gas of free
fermions. The reason for this is that, in one dimension, exchange of particles requires that they pass
through each other; for infinitely strong repulsion this cannot happen. This model is described by a quantum nonlinear Schrödinger equation. In momentum space the exclusion principle is valid also for finite repulsion in a Bose gas with delta-function interactions,[7] as well as for interacting spins and the Hubbard model in one dimension, and for other models solvable by Bethe ansatz. The ground state in
models solvable by Bethe ansatz is a Fermi sphere.

Consequences of the Pauli exclusion principle


Atoms and the Pauli principle

The Pauli exclusion principle helps explain a wide variety of physical phenomena. One particularly
important consequence of the principle is the elaborate electron shell structure of atoms and the way
atoms share electrons, explaining the variety of chemical elements and their chemical combinations.
An electrically
neutral atom contains bound electrons equal in number to the protons in the nucleus. Electrons, being
fermions, cannot occupy the same quantum state as other electrons, so electrons have to "stack" within
an atom, i.e. have different spins while at the same electron orbital as described below.
An example is the neutral helium atom, which has two bound electrons, both of which can occupy the
lowest-energy (1s) states by acquiring opposite spin; as spin is part of the quantum state of the electron,
the two electrons are in different quantum states and do not violate the Pauli principle. However, the spin
can take only two different values (eigenvalues). In a lithium atom, with three bound electrons, the third
electron cannot reside in a 1s state, and must occupy one of the higher-energy 2s states instead.
Similarly, successively larger elements must have shells of successively higher energy. The chemical
properties of an element largely depend on the number of electrons in the outermost shell; atoms with
different numbers of occupied electron shells but
the same number of electrons in the outermost shell have similar properties, which gives rise to the periodic table of the elements.
Astrophysics and the Pauli principle

Dyson and Lenard did not consider the extreme magnetic or gravitational forces that occur in some
astronomical objects. In 1995 Elliott Lieb and coworkers showed that the Pauli principle still leads to
stability in intense magnetic fields such as in neutron stars, although at a much higher density than in
ordinary matter.[15] It is a consequence of general relativity that, in sufficiently intense gravitational
fields, matter collapses to form a black hole.
Astronomy provides a spectacular demonstration of the effect of the Pauli principle, in the form
of white dwarf and neutron stars. In both bodies, atomic structure is disrupted by extreme pressure, but
the stars are held in hydrostatic equilibrium by degeneracy pressure, also known as Fermi pressure. This
exotic form of matter is known as degenerate matter. The immense gravitational force of a star's mass is
normally held in equilibrium by thermal pressure caused by heat produced
in thermonuclear fusion in the star's core. In white dwarfs, which do not undergo nuclear fusion, an
opposing force to gravity is provided by electron degeneracy pressure. In neutron stars, subject to even
stronger gravitational forces, electrons have merged with protons to form neutrons. Neutrons are
capable of producing an even higher degeneracy pressure, neutron degeneracy pressure, albeit over a
shorter range. This can stabilize neutron stars from further collapse, but at a smaller size and higher
density than a white dwarf. Neutron stars are the most "rigid" objects known; their Young's modulus (or more accurately, bulk modulus) is 20 orders of magnitude larger than that of diamond. However, even this enormous rigidity can be overcome by the gravitational field of a massive star or by the pressure of a supernova, leading to the formation of a black hole.

A white dwarf is very hot when it first forms, more than 100,000 K at the surface and even hotter
in its interior. It is so hot that a lot of its energy is lost in the form of neutrinos for the first 10 million years of
its existence, but will have lost most of its energy after a billion years. [27]

The chemical composition of the white dwarf depends upon its mass. A star of a few solar masses will
ignite carbon fusion to form magnesium, neon, and smaller amounts of other elements, resulting in a white
dwarf composed chiefly of oxygen, neon, and magnesium,
provided that it can lose enough mass to get below the Chandrasekhar limit (see below), and provided that the
ignition of carbon is not so violent as to blow the star apart in a supernova. [28]A
star of mass on the order of magnitude of the Sun will be unable to ignite carbon fusion, and will produce a white
dwarf composed chiefly of carbon and oxygen, and of mass too low to collapse unless matter is added to it later
(see below). A star of less than about half the mass of the Sun will be unable to ignite helium fusion (as noted
earlier), and will produce a white dwarf composed chiefly of helium.

Black dwarf

In the end, all that remains is a cold dark mass sometimes called a black dwarf. However, the universe is not
old enough for any black dwarfs to exist yet.

If the white dwarf's mass increases above the Chandrasekhar limit, which is 1.4 M☉ for a white dwarf composed chiefly of carbon, oxygen, neon, and/or magnesium, then electron degeneracy
pressure fails due to electron capture and the star collapses. Depending upon the chemical composition and pre-
collapse temperature in the center, this will lead either to collapse into a neutron star or runaway ignition of carbon
and oxygen. Heavier elements favor continued core collapse, because they require a higher temperature to ignite,
because electron capture onto these elements and their fusion products is easier; higher core temperatures favor
runaway nuclear reaction, which halts core collapse and leads to a Type Ia supernova.[29]These supernovae may be
many times brighter than the Type II supernova marking the death of a massive star, even though the latter has the
greater total energy release. This instability to collapse means that no white dwarf more massive than approximately 1.4 M☉ can exist (with a possible minor exception for very rapidly spinning white dwarfs, whose centrifugal force
due to rotation partially counteracts the weight of their matter). Mass transfer in a binary system may cause an
initially stable white dwarf to surpass the Chandrasekhar limit.

If a white dwarf forms a close binary system with another star, hydrogen from the larger companion may accrete
around and onto a white dwarf until it gets hot enough to fuse in a runaway reaction at its surface, although the
white dwarf remains below the Chandrasekhar limit. Such an explosion is termed a nova.

NEUTRON STAR

A neutron star is the collapsed core of a large (10-29 solar mass) star. Neutron stars are the smallest and densest stars known to exist.[1] Though neutron stars typically have a radius on the order of 10 kilometres (6.2 mi), they can have masses of about twice that of the Sun. They result from the supernova explosion of a massive star, combined with gravitational collapse, that compresses the core past white dwarf density to that of atomic nuclei. Most of the basic models for these objects imply that neutron stars are composed almost entirely of neutrons, which are subatomic particles with no net electrical charge and with slightly larger mass than protons. They are supported against further collapse by neutron degeneracy pressure, a phenomenon described by the Pauli exclusion principle. If the remnant has too great a density, which occurs above an upper limit for neutron star mass of 2-3 solar masses, it will continue collapsing to form a black hole.
Neutron stars that can be observed are very hot and typically have a surface temperature of around 600,000 K.[2][3][4][5][a] They are so dense that a normal-sized matchbox containing neutron-star material would have a mass of approximately 3 billion tonnes, the same mass as a 0.5 cubic kilometre chunk of the Earth (a cube with edges of about 800 metres).[6][7] Their magnetic fields are between 10⁸ and 10¹⁵ times as strong as that of the Earth. The gravitational field at the neutron star's surface is about 2×10¹¹ times that of the Earth.
As the star's core collapses, its rotation rate increases as a result of conservation of angular momentum,
hence newly formed neutron stars rotate at up to several hundred times per second. Some neutron stars
emit beams of electromagnetic radiation that make them detectable as pulsars. Indeed, the discovery of pulsars by Jocelyn Bell Burnell in 1967 was the first observational suggestion that neutron stars exist. The
radiation from pulsars is thought to be primarily emitted from regions near their magnetic poles. If the
magnetic poles do not coincide with the rotational axis of the neutron star, the emission beam will sweep
the sky, and when seen from a distance, if the observer is somewhere in the path of the beam, it will
appear as pulses of radiation coming from a fixed point in space (the so-called "lighthouse effect"). The
fastest-spinning neutron star known is PSR
J1748-2446ad, rotating at a rate of 716 times a second[8][9]or 43,000 revolutions per minute, giving a
linear speed at the surface on the order of 0.24 c (i.e. nearly a quarter the speed of light).
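The quoted surface speed follows from simple circular kinematics, v = 2πRf, but it depends on the assumed radius; the sketch below uses 16 km, which is an assumption (typical model radii span roughly 10-16 km), so the 0.24 c figure should be read as an order-of-magnitude estimate.

import math

C = 2.998e8  # speed of light, m/s

def equatorial_speed(spin_hz, radius_m):
    # v = 2 * pi * R * f for a point on the equator
    return 2.0 * math.pi * radius_m * spin_hz

v = equatorial_speed(716, 16e3)      # PSR J1748-2446ad spin, assumed 16 km radius
print(f"{v:.2e} m/s, i.e. about {v / C:.2f} c")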
There are thought to be around 100 million neutron stars in the Milky Way, a figure obtained by estimating
the number of stars that have undergone supernova explosions.[10]However, most are old and cold, and
neutron stars can only be easily detected in certain instances, such as if they are a pulsar or part of a
binary system. Slow-rotating and non-accreting neutron stars are virtually undetectable; however, since the Hubble Space Telescope detection of RX J185635-3754, a few nearby neutron stars that appear to emit only thermal radiation have been detected. Soft gamma repeaters are conjectured to be a type of
neutron star with very strong magnetic fields, known as magnetars, or alternatively, neutron stars with
fossil disks around them.

Neutron stars in binary systems can undergo accretion which typically makes the system bright in x-
rays while the material falling onto the neutron star can form hotspots that rotate in and out of view in
identified X-ray pulsar systems. Additionally, such accretion can "recycle" old pulsars and potentially
cause them to gain mass and spin-up to very fast rotation rates, forming the so-called millisecond
pulsars. These binary systems will continue to evolve, and eventually the companions can
become compact objects such as white dwarfs or neutron stars themselves, though other possibilities
include a complete destruction of the companion through ablation or merger. The merger of binary
neutron stars may be the source of short-duration gamma-ray bursts and are likely strong sources
of gravitational waves. Though as of 2016 no direct detection of the gravitational waves from such an
event has been made, gravitational waves have been indirectly detected in a system where two neutron
stars orbit each other.

Formation

Simplistic representation of the formation of neutron stars.


Any main-sequence star with an initial mass of above 8 times the mass of the Sun (8 M☉) has the potential to
produce a neutron star. As the star evolves away from the main sequence, subsequent nuclear burning
produces an iron-rich core. When all nuclear fuel in the core has been exhausted, the core must be
supported by degeneracy pressure alone. Further deposits of mass from shell burning cause the core to
exceed the Chandrasekhar limit. Electron-degeneracy pressure is overcome and the core collapses
further, sending temperatures soaring to over 5×10⁹ K. At these temperatures, photodisintegration (the
breaking up of iron nuclei into alpha particles by high-energy gamma rays) occurs. As the temperature
climbs even higher, electrons and protons combine to form neutrons via electron capture, releasing a
flood of neutrinos. When densities reach the nuclear density of 4×10¹⁷ kg/m³, neutron degeneracy pressure
halts the contraction. The infalling outer envelope of the star is halted and flung outwards by a flux of
neutrinos produced in the creation of the neutrons, becoming a supernova. The remnant left is a neutron star. If the remnant has a mass greater than about 3 M☉, it collapses further to become a black hole.
As the core of a massive star is compressed during a Type II supernova, Type Ib or Type Ic supernova,
and collapses into a neutron star, it retains most of its angular momentum. But, because it has only a tiny
fraction of its parent's radius (and therefore its moment of inertia is sharply reduced), a neutron star is
formed with very high rotation speed, and then over a very long period it slows. Neutron stars are known
that have rotation periods from about 1.4 ms to 30 s. The neutron star's density also gives it very high surface gravity, with typical values ranging from 10¹² to 10¹³ m/s² (more than 10¹¹ times that of Earth).[5] One measure of such immense gravity is the fact that neutron stars have an escape velocity
ranging from 100,000 km/s to 150,000 km/s, that is, from a third to half the speed of light. The neutron
star's gravity accelerates infalling matter to tremendous speed. The force of its impact would likely
destroy the object's component atoms, rendering all the matter identical, in most respects, to the rest of
the neutron star.
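Both the surface gravity and the escape velocity quoted above follow from the Newtonian formulas g = GM/R² and v_esc = √(2GM/R). The sketch below uses a fiducial 1.1 M☉, 13 km star; these numbers are illustrative assumptions, not measurements of any particular object.

import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # solar mass, kg

def surface_gravity(m_kg, r_m):
    return G * m_kg / r_m**2

def escape_velocity(m_kg, r_m):
    return math.sqrt(2.0 * G * m_kg / r_m)

m, r = 1.1 * M_SUN, 13e3
print(f"g ~ {surface_gravity(m, r):.1e} m/s^2")           # of order 10^12 m/s^2
print(f"v_esc ~ {escape_velocity(m, r) / 1e3:,.0f} km/s")  # roughly half the speed of light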

Schematic of stellar evolution

Properties of a neutron star
Mass and temperature
A neutron star has a mass of at least 1.1 and perhaps up to 3 solar masses (M☉).[13][14] The maximum observed mass of neutron stars is about 2.01 M☉. In general, compact stars of less than 1.39 M☉ (the Chandrasekhar limit) are white dwarfs, whereas compact stars with a mass between 1.4 M☉ and 3 M☉ (the Tolman-Oppenheimer-Volkoff limit) should be neutron stars (though there is an interval of a few tenths of a solar mass where the masses of low-mass neutron stars and high-mass white dwarfs can overlap). Between 3 M☉ and 5 M☉, hypothetical intermediate-mass stars such as quark stars and electroweak stars have been proposed, but none have been shown to exist. Beyond 10 M☉ the stellar remnant will overcome the neutron degeneracy pressure and gravitational collapse will usually occur to produce a black hole, though the smallest observed mass of a stellar black hole is about 5 M☉.[15]
The temperature inside a newly formed neutron star is from around 10¹¹ to 10¹² kelvin.[16] However, the huge number of neutrinos it emits carries away so much energy that the temperature of an isolated neutron star falls within a few years to around 10⁶ kelvin.[16] At this lower temperature, most of the light generated by a neutron star is in X-rays.
Density and pressure

Neutron stars have overall densities of 3.7×10¹⁷ to 5.9×10¹⁷ kg/m³ (2.6×10¹⁴ to 4.1×10¹⁴ times the density of the Sun),[b] which is comparable to the approximate density of an atomic nucleus of 3×10¹⁷ kg/m³.[17] The neutron star's density varies from about 1×10⁹ kg/m³ in the crust, increasing with depth to about 6×10¹⁷ or 8×10¹⁷ kg/m³ (denser than an atomic nucleus) deeper inside.[16] A neutron star is so dense that one teaspoon (5 millilitres) of its material would have a mass over 5.5×10¹² kg (about 1,100 tonnes per nanolitre), about 900 times the mass of the Great Pyramid of Giza. The pressure increases from 3×10³³ to 1.6×10³⁵ Pa from the inner crust to the centre.
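The teaspoon figure is simply density times volume. In the sketch below a central density of about 1.1×10¹⁸ kg/m³ is assumed (the bulk densities quoted above are a few times 10¹⁷ kg/m³, so the exact mass depends on which density is taken):

CENTRAL_DENSITY = 1.1e18      # kg/m^3, assumed value near the centre
TEASPOON_VOLUME = 5e-6        # 5 millilitres expressed in cubic metres

mass_kg = CENTRAL_DENSITY * TEASPOON_VOLUME
print(f"~{mass_kg:.1e} kg per teaspoon")   # ~5.5e12 kg, matching the figure in the text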
The equation of state of matter at such high densities is not precisely known because of the theoretical
difficulties associated with extrapolating the likely behavior of quantum chromodynamics,
superconductivity, and superfluidity of matter in such states along with the empirical difficulties of
observing the characteristics of neutron stars that are at least hundreds of parsecs away.
Giant nucleus
A neutron star has some of the properties of an atomic nucleus, including density (within an order of
magnitude) and being composed of nucleons. In popular scientific writing, neutron stars are therefore
sometimes described as giant nuclei. However, in other respects, neutron stars and atomic nuclei are
quite different. In particular, a nucleus is held together by the strong interaction, whereas a neutron
star is held together by gravity, and thus the density and structure of neutron stars can be more
variable.
Magnetic field

Neutron stars have strong magnetic fields. The magnetic field strength on the surface of neutron stars has been estimated to be at least in the range of 10⁸ to 10¹⁵ gauss (10⁴ to 10¹¹ tesla).[19] In comparison, the magnitude at Earth's surface ranges from 25 to 65 microteslas (0.25 to 0.65 gauss),[20] making the field at least 10⁸ times as strong as that of Earth. Variations in magnetic field strength are most likely the main factor that allows different types of neutron stars to be distinguished by their spectra, and explains the periodicity of pulsars. The neutron stars known as magnetars have the strongest magnetic fields, in the range of 10⁸ to 10¹¹ tesla,[21] and have become the widely accepted hypothesis for the neutron star types soft gamma repeaters (SGRs)[22] and anomalous X-ray pulsars (AXPs).[23]
The origins of the strong magnetic field are as yet unclear.[19] One hypothesis is that of "flux freezing", or conservation of the original magnetic flux during the formation of the neutron star.[19] If an
object has a certain magnetic flux over its surface area, and that area shrinks to a smaller area, but the
magnetic flux is conserved, then the magnetic field would correspondingly increase. Likewise, a
collapsing star begins with a much larger surface area than the resulting neutron star, and conservation of
magnetic flux would result in a far stronger magnetic field. However, this simple explanation does not fully
explain magnetic field strengths of neutron stars.
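The flux-freezing estimate can be sketched directly: if the flux B·R² is conserved, the field grows as (R_initial/R_final)². The progenitor-core radius and seed field below are illustrative assumptions only.

def frozen_flux_field_gauss(b_initial_gauss, r_initial_km, r_final_km):
    # conserve magnetic flux B * R^2 during the collapse
    return b_initial_gauss * (r_initial_km / r_final_km) ** 2

# e.g. a 100 gauss core of radius 1e5 km collapsing to a 10 km neutron star
print(f"~{frozen_flux_field_gauss(100.0, 1e5, 10.0):.0e} gauss")   # ~1e10 gauss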
Gravity and equation of state

Gravitational light deflection at a neutron star. Due to relativistic light deflection, more than half of the surface is visible (each chequered patch here represents 30 by 30 degrees).[24] In natural units, the mass of the depicted star is 1 and its radius 4, or twice its Schwarzschild radius.

The gravitational field at a neutron star's surface is about 2×10¹¹ times stronger than on Earth, at around 2.0×10¹² m/s².[25] Such a strong gravitational field acts as a gravitational lens and bends the radiation emitted by the neutron star such that parts of the normally invisible rear surface become visible.[24] If the radius of the neutron star is 3GM/c² or less, then the photons may be trapped in an orbit,
thus making the whole surface of that neutron star visible from a single vantage point, along with
destabilizing photon orbits at or below the 1 radius distance of the star.
A fraction of the mass of a star that collapses to form a neutron star is released in the supernova
explosion from which it forms (from the law of mass-energy equivalence, E = mc²). The energy
comes from the gravitational binding energy of a neutron star. Hence, the gravitational force of a
typical neutron star is huge. If an object were to fall from a height of one meter on a neutron star 12
kilometers in radius, it would reach the ground at around 1.4 million meters per second.[26]
Because of the enormous gravity, time dilation between a neutron star and Earth is significant. For
example, eight years could pass on the surface of a neutron star, yet ten years would have passed on
Earth, not including the time-dilation effect of its very rapid rotation.[27]
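For a non-rotating star the surface time-dilation factor is the Schwarzschild expression √(1 - r_s/R) with r_s = 2GM/c². The 1.4 M☉ mass and 11.5 km radius in the sketch are assumptions chosen only to reproduce the roughly 0.8 factor (eight years per ten Earth years) mentioned above.

import math

G = 6.674e-11      # m^3 kg^-1 s^-2
C = 2.998e8        # m/s
M_SUN = 1.989e30   # kg

def surface_time_factor(m_kg, r_m):
    r_s = 2.0 * G * m_kg / C**2        # Schwarzschild radius
    return math.sqrt(1.0 - r_s / r_m)  # proper time per unit coordinate time

print(f"{surface_time_factor(1.4 * M_SUN, 11.5e3):.2f}")   # ~0.80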
Neutron star relativistic equations of state describe the relation of radius vs. mass for various models.[28] The most likely radii for a given neutron star mass are bracketed by models AP4 (smallest radius) and MS2 (largest radius). BE is the ratio of gravitational binding-energy mass equivalent to the observed neutron star gravitational mass of M kilograms with radius R metres,[29]

BE = 0.60 β / (1 - β/2),   where β = G M / (R c²).

Given current values[30]

G ≈ 6.674×10⁻¹¹ m³ kg⁻¹ s⁻²,   c² ≈ 8.988×10¹⁶ m² s⁻²,   M☉ ≈ 1.989×10³⁰ kg,[30]

and star masses M commonly reported as multiples of one solar mass, M_x = M/M☉, the relativistic fractional binding energy of a neutron star is approximately

BE ≈ 886.0 M_x / (R - 738.3 M_x),   with R in metres.

A 2 M☉ neutron star would not be more compact than a 10,970 m radius (AP4 model). Its mass-fraction gravitational binding energy would then be 0.187 (18.7%, exothermic). This is not near 0.6/2 = 0.3 (30%).
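The 18.7% figure can be checked numerically with the approximation above (BE = 0.6β/(1 - β/2), β = GM/(Rc²)); the 2 M☉, 10,970 m example is taken from the text, while the rounding of G, c and M☉ is mine.

G = 6.674e-11      # m^3 kg^-1 s^-2
C = 2.998e8        # m/s
M_SUN = 1.989e30   # kg

def fractional_binding_energy(m_kg, r_m):
    beta = G * m_kg / (r_m * C**2)
    return 0.6 * beta / (1.0 - 0.5 * beta)

print(f"{fractional_binding_energy(2.0 * M_SUN, 10_970.0):.3f}")   # ~0.187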

The equation of state for a neutron star is still not known. It is assumed that it differs significantly from
that of a white dwarf, whose equation of state is that of a degenerate gas that can be described in close
agreement with special relativity. However, with a neutron star the increased effects of general relativity
can no longer be ignored. Several equations of state have been proposed (FPS, UU, APR, L, SLy, and
others) and current research is still attempting to constrain the theories to make predictions of neutron
star matter.[5][31]This means that the relation between density and mass is not fully known, and this causes
uncertainties in radius estimates. For example, a 1.5 M☉ neutron star could have a radius of 10.7, 11.1, 12.1 or 15.1 kilometres (for EOS FPS, UU, APR or L respectively).[31]

Structure

Cross-section of a neutron star. Densities are in terms of ρ₀, the saturation nuclear matter density, where nucleons begin to touch.


Current understanding of the structure of neutron stars is defined by existing mathematical models, but it
might be possible to infer some details through studies of neutron-star oscillations. Asteroseismology, a
study applied to ordinary stars, can reveal the inner structure of neutron stars by analyzing observed
spectra of stellar oscillations.[5]
Current models indicate that matter at the surface of a neutron star is composed of ordinary atomic
nuclei crushed into a solid lattice with a sea of electrons flowing through the gaps between them. It is
possible that the nuclei at the surface are iron, due to iron's high binding energy per nucleon.[32] It is also possible that heavy elements, such as iron, simply sink beneath the surface, leaving only light nuclei like helium and hydrogen.[32] If the surface temperature exceeds 10⁶ kelvin (as in the case of a young pulsar), the surface should be fluid instead of the solid phase that might exist in cooler neutron stars (temperature < 10⁶ kelvin).[32]
The "atmosphere" of a neutron star is hypothesized to be at most several micrometers thick, and its
dynamics are fully controlled by the neutron star's magnetic field. Below the atmosphere one encounters
a solid "crust". This crust is extremely hard and very smooth (with maximum surface irregularities of ~5
mm), due to the extreme gravitational field.[33]The expected hierarchy of phases of nuclear matter in the
inner crust has been characterized as nuclear pasta.[34]
Proceeding inward, one encounters nuclei with ever-increasing numbers of neutrons; such nuclei would
decay quickly on Earth, but are kept stable by tremendous pressures. As this process continues at
increasing depths, the neutron drip becomes overwhelming, and the concentration of free neutrons
increases rapidly. In that region, there are nuclei, free electrons, and free neutrons. The nuclei become
increasingly small (gravity and pressure overwhelming the strong force) until the core is reached, by
definition the point where mostly neutrons exist.
The composition of the superdense matter in the core remains uncertain. One model describes the core
as superfluid neutron-degenerate matter (mostly neutrons, with some protons and electrons). More exotic
forms of matter are possible, including degenerate strange matter (containing strange quarks in addition
to up and down quarks), matter containing high-energy pions and kaons in addition to neutrons,[5]or
ultra-dense quark-degenerate matter.

Radiation

Animation of a rotating pulsar. The sphere in the middle represents the neutron star, the curves
indicate the magnetic field lines and the protruding cones represent the emission zones.

PULSARS

Neutron stars are detected from their electromagnetic radiation. Neutron stars are usually
observed to pulse radio waves and other electromagnetic radiation, and neutron stars observed
with pulses are called pulsars.
Pulsars' radiation is thought to be caused by particle acceleration near their magnetic poles, which
need not be aligned with the rotational axis of the neutron star. It is thought that a large electrostatic field
builds up near the magnetic poles, leading to electron emission. These electrons are magnetically
accelerated along the field lines, leading to curvature radiation, with the radiation being strongly
polarized towards the plane of curvature. In addition, high energy photons can interact with lower energy
photons and the magnetic field for electron-positron pair production, which through electron-positron annihilation leads to further high-energy photons.
The radiation emanating from the magnetic poles of neutron stars can be described as magnetospheric
radiation, in reference to the magnetosphere of the neutron star.[36] It is not to be confused with magnetic dipole radiation, which is emitted because the magnetic axis is not aligned with the rotational axis, with a
radiation frequency the same as the neutron star's rotational frequency.
If the axis of rotation of the neutron star differs from the magnetic axis, external viewers will only see these beams of radiation whenever the magnetic axis points towards them during the neutron star's rotation. Therefore, periodic pulses are observed, at the same rate as the rotation of the neutron star.
Non-pulsating neutron stars

In addition to pulsars, neutron stars have also been identified with no apparent periodicity of
their radiation.[37]This seems to be a characteristic of the X-ray sources known as Central
Compact Objects in Supernova remnants (CCOs in SNRs), which are thought to be young,
radio-quiet isolated neutron stars.
Spectra

In addition to radio emissions, neutron stars have also been identified in other parts of the electromagnetic spectrum, including visible light, near infrared, ultraviolet, X-rays and gamma rays.[36] Pulsars observed in X-rays are known as X-ray pulsars if accretion-powered, while those identified in visible light are known as optical pulsars. The majority of neutron stars detected, including those identified in optical, X-ray and gamma rays, also emit radio waves;[39] the Crab Pulsar produces electromagnetic emissions across the spectrum.[39] However, there exist neutron stars called radio-quiet neutron stars, with no radio emissions detected.

Rotation
Neutron stars rotate extremely rapidly after their formation due to the conservation of angular
momentum; like spinning ice skaters pulling in their arms, the slow rotation of the original star's core
speeds up as it shrinks. A newborn neutron star can rotate many times a second.
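The ice-skater argument can be put into numbers: for a uniform sphere the moment of inertia scales as MR², so conserving angular momentum while the core shrinks gives P_new = P_old·(R_new/R_old)². The initial period and radii below are illustrative assumptions, not values for a specific star.

def collapsed_period_s(p_initial_s, r_initial_km, r_final_km):
    # conserve L = I * omega with I proportional to M * R^2
    return p_initial_s * (r_final_km / r_initial_km) ** 2

# e.g. a core rotating once per 1000 s shrinking from 10,000 km to 10 km
print(f"~{collapsed_period_s(1000.0, 1e4, 10.0) * 1e3:.0f} ms per rotation")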

Spin down

PP-dot diagram for known rotation-powered pulsars (red), anomalous X-ray pulsars (green),
high-energy emission pulsars (blue) and binary pulsars (pink)


Over time, neutron stars slow, as their rotating magnetic fields in effect radiate energy associated with
the rotation; older neutron stars may take several seconds for each revolution. This is called spin down.
The rate at which a neutron star slows its rotation is usually constant and very small.
The periodic time (P) is the rotational period, the time for one rotation of a neutron star. The spin-down rate, the rate of slowing of rotation, is then given the symbol Ṗ (P-dot), the derivative of P with respect to time. It is defined as the periodic time increase per unit time; it is a dimensionless quantity, but can be given the units of s·s⁻¹ (seconds per second).
The spin-down rate (P-dot) of neutron stars usually falls within the range of 10⁻²² to 10⁻⁹ s·s⁻¹, with the shorter-period (faster-rotating) observable neutron stars usually having smaller P-dot. However, as a
neutron star ages, the neutron star slows (P increases) and the rate of slowing decreases (P-dot
decreases). Eventually, the rate of rotation becomes too slow to power the radio-emission mechanism,
and the neutron star can no longer be detected.
P and P-dot allow minimum magnetic fields of neutron stars to be estimated. P and P-dot can also be used to calculate the characteristic age of a pulsar, but this gives an estimate which is somewhat larger than the true age when applied to young pulsars.
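The characteristic age referred to here is conventionally τ = P/(2Ṗ), which assumes pure magnetic-dipole braking and a birth period much shorter than the present one. The Crab-like example values below are assumptions for illustration; the result overestimates the true age of that pulsar (roughly 960 years), as the caveat above notes.

SECONDS_PER_YEAR = 3.156e7

def characteristic_age_years(period_s, pdot):
    # tau = P / (2 * Pdot), converted from seconds to years
    return period_s / (2.0 * pdot) / SECONDS_PER_YEAR

print(f"~{characteristic_age_years(0.033, 4.2e-13):,.0f} years")   # ~1,200 years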

P and P-dot can also be combined with the neutron star's moment of inertia to estimate a quantity called the spin-down luminosity, which is given the symbol Ė (E-dot). It is not the measured luminosity, but rather the calculated loss rate of rotational energy that would manifest itself as radiation. For neutron stars where the spin-down luminosity is comparable to the actual luminosity, the neutron stars are said to be "rotation powered".[35][36] The observed luminosity of the Crab Pulsar is comparable to the spin-
down luminosity, supporting the model that rotational kinetic energy powers the radiation from
it.[35]With neutron stars such as magnetars, where the actual luminosity exceeds the spin-down
luminosity by about a factor of one hundred, it is assumed that the luminosity is powered by
magnetic dissipation, rather than being rotation powered.
P and P-dot can also be plotted for neutron stars to create a PP-dot diagram. It encodes a tremendous
amount of information about the pulsar population and its properties, and has been likened to the Hertzsprung-Russell diagram in its importance for neutron stars.
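The spin-down luminosity itself is the loss rate of rotational kinetic energy, Ė = 4π²IṖ/P³. The fiducial moment of inertia of 10³⁸ kg·m² and the Crab-like P and Ṗ below are conventional illustrative values, not measurements quoted in the text.

import math

def spin_down_luminosity_watts(period_s, pdot, moment_of_inertia=1e38):
    # E_dot = 4 * pi^2 * I * Pdot / P^3
    return 4.0 * math.pi**2 * moment_of_inertia * pdot / period_s**3

print(f"~{spin_down_luminosity_watts(0.033, 4.2e-13):.1e} W")   # a few times 10^31 W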
Spin up

Neutron star rotational speeds can increase, a process known as spin up. Sometimes neutron stars absorb orbiting matter from companion stars, increasing the rotation rate and reshaping the neutron star into an oblate spheroid. This causes an increase in the rate of rotation of the neutron star to over a hundred times per second in the case of millisecond pulsars.
The most rapidly rotating neutron star currently known, PSR J1748-2446ad, rotates at 716 rotations
per second.[42]However, a recent paper reported the detection of an X-ray burst oscillation, which
provides an indirect measure of spin, of 1122 Hz from the neutron star XTE J1739-285,[43]suggesting
1122 rotations a second. However, at present, this signal has only been seen once, and should be
regarded as tentative until confirmed in another burst from that star.
Glitches and starquakes

NASA artist's conception of a "starquake", or "stellar quake".

Sometimes a neutron star will undergo a glitch, a sudden small increase of its rotational speed or spin up. Glitches are thought to be the effect of a starquake: as the rotation of the neutron star slows, its shape becomes more spherical. Due to the stiffness of the "neutron" crust, this happens as discrete
events when the crust ruptures, creating a starquake similar to earthquakes. After the starquake, the
star will have a smaller equatorial radius, and because angular momentum is conserved, its rotational
speed has increased.
Starquakes occurring in magnetars, with a resulting glitch, are the leading hypothesis for the gamma-ray sources known as soft gamma repeaters.
Recent work, however, suggests that a starquake would not release sufficient energy for a neutron star
glitch; it has been suggested that glitches may instead be caused by transitions of vortices in the
theoretical superfluid core of the neutron star from one metastable energy state to a lower one, thereby
releasing energy that appears as an increase in the rotation rate.

"Anti-glitches"

An "anti-glitch", a sudden small decrease in rotational speed, or spin down, of a neutron star has also
been reported. It occurred in a magnetar, that in one case produced an X-ray luminosity increase of a
factor of 20, and a significant spin-down rate change. Current neutron star models do not predict this
behavior. If the cause was internal, it suggests differential rotation of solid outer crust and the superfluid
component of the inner of the magnetar's structure.

Population and distances

Central neutron star at the heart of the Crab Nebula

At present, there are about 2000 known neutron stars in the Milky Way and the Magellanic Clouds, the
majority of which have been detected as radio pulsars. Neutron stars are mostly concentrated along the
disk of the Milky Way although the spread perpendicular to the disk is large because the supernova
explosion process can impart high translational speeds (400 km/s) to the newly formed neutron star.
Some of the closest known neutron stars are RX J1856.5-3754, which is about 400 light years away, and PSR J0108-1431 at about 424 light years.[48] RX J1856.5-3754 is a member of a close group of
neutron stars called The Magnificent Seven. Another nearby neutron star that was detected transiting the
backdrop of the constellation Ursa Minor has been nicknamed Calvera by its Canadian and American
discoverers, after the villain in the 1960 film The Magnificent Seven. This rapidly moving object was
discovered using the ROSAT/Bright Source Catalog.

Binary neutron star systems

Circinus X-1: X-ray light rings from a binary neutron star (24 June 2015; Chandra X-ray
Observatory)

About 5% of all known neutron stars are members of a binary system. The formation and evolution of
binary neutron stars can be a complex process. Neutron stars have been observed in binaries with
ordinary main-sequence stars, red giants, white dwarfs or other neutron stars. According to modern
theories of binary evolution it is expected that neutron stars also exist in binary systems with black hole companions. Mergers of binaries containing two neutron stars, or a neutron star and a black hole, are expected to be prime sources for the emission of detectable gravitational waves.
X-ray binaries
X-ray binary

Binary systems containing neutron stars often emit X-rays, which are emitted by hot gas as it falls
towards the surface of the neutron star. The source of the gas is the companion star, the outer layers of
which can be stripped off by the gravitational force of the neutron star if the two stars are sufficiently
close. As the neutron star accretes this gas its mass can increase; if enough mass is accreted the
neutron star may collapse into a black hole.
Neutron star binary mergers and nucleosynthesis

Binaries containing two neutron stars are observed to shrink as gravitational waves are emitted.[51] Ultimately the neutron stars will come into contact and coalesce. The coalescence of binary neutron stars is one of the leading models for the origin of short gamma-ray bursts. Strong evidence for this model came from the observation of a kilonova associated with the short-duration gamma-ray burst GRB 130603B. The light emitted in the kilonova is believed to come from the radioactive decay of material ejected in the merger of the two neutron stars. This material may be responsible for the production of many of the chemical elements beyond iron,[53] as opposed to the supernova nucleosynthesis theory.

Planets

An artist's conception of a pulsar planet with bright aurorae.

Neutron stars can host exoplanets. These can be original, circumbinary, captured, or the result of a second round of planet formation. Pulsars can also strip the atmosphere off a star, leaving a planetary-mass remnant, which may be understood as a chthonian planet or a stellar object depending on interpretation. For pulsars, such pulsar planets can be detected with the pulsar timing method, which allows for high precision and detection of much smaller planets than with other methods.
Two systems have been definitively confirmed. The first exoplanets ever to be detected were the three
planets Draugr, Poltergeist and Phobetor around PSR B1257+12, discovered in 1992-1994. Of these,
Draugr is the smallest exoplanet ever detected, at a mass of twice that of the Moon. Another system is
PSR B1620-26, where a circumbinary
planet orbits a neutron star-white dwarf binary system. Also, there are several unconfirmed candidates.
Pulsar planets receive little visible light, but massive amounts of ionizing radiation and high-energy stellar
wind, which makes them rather hostile environments.

History of discoveries

The first direct observation of a neutron star in visible light. The neutron star is RX J1856.5-3754.

In 1934, Walter Baade and Fritz Zwicky proposed the existence of neutron stars,[54][d] only a year after the discovery of the neutron by Sir James Chadwick.[57] In seeking an explanation for the origin of a
supernova, they tentatively proposed that in supernova explosions ordinary stars are turned into stars
that consist of extremely closely packed neutrons that they called neutron stars. Baade and Zwicky
correctly proposed at that time that the release of the gravitational binding energy of the neutron stars
powers the supernova: "In the supernova process, mass in bulk is annihilated". Neutron stars were
thought to be too faint to be detectable and little work was done on them until November 1967, when
Franco Pacini pointed out that if the neutron stars were spinning and had large magnetic fields, then
electromagnetic waves would be emitted. Unbeknown to him, radio astronomer Antony Hewish and his
research assistant Jocelyn Bell at Cambridge were shortly to detect radio pulses from stars that are now
believed to be highly magnetized, rapidly spinning neutron stars, known as pulsars.
In 1965, Antony Hewish and Samuel Okoye discovered "an unusual source of high radio
brightness temperature in the Crab Nebula".[58]This source turned out to be the Crab Pulsar that
resulted from the great supernova of 1054.
In 1967, Iosif Shklovsky examined the X-ray and optical observations of Scorpius X-1 and correctly
concluded that the radiation comes from a neutron star at the stage of accretion.[59]
In 1967, Jocelyn Bell Burnell and Antony Hewish discovered regular radio pulses from PSR B1919+21.
This pulsar was later interpreted as an isolated, rotating neutron star. The energy source of the pulsar is
the rotational energy of the neutron star. The majority of known neutron stars (about 2000, as of 2010)
have been discovered as pulsars, emitting regular radio pulses.
In 1971, Riccardo Giacconi, Herbert Gursky, Ed Kellogg, R. Levinson, E. Schreier, and H. Tananbaum
discovered 4.8 second pulsations in an X-ray source in the constellation Centaurus, Cen X-3.[60]They
interpreted this as resulting from a rotating hot neutron star. The energy source is gravitational and
results from a rain of gas falling onto the surface of the neutron star from a companion star or the
interstellar medium.
In 1974, Antony Hewish was awarded the Nobel Prize in Physics "for his decisive role in the
discovery of pulsars" without Jocelyn Bell who shared in the discovery.[61]

In 1974, Joseph Taylor and Russell Hulse discovered the first binary pulsar, PSR B1913+16, which
consists of two neutron stars (one seen as a pulsar) orbiting around their center of mass. Einstein's
general theory of relativity predicts that massive objects in short binary orbits should emit gravitational
waves, and thus that their orbit should decay with time. This was indeed observed, precisely as general
relativity predicts, and in 1993, Taylor and Hulse were awarded the Nobel Prize in Physics for this
discovery.


In 1982, Don Backer and colleagues discovered the first millisecond pulsar, PSR B1937+21.[63]This
object spins 642 times per second, a value that placed fundamental constraints on the mass and
radius of neutron stars. Many millisecond pulsars were later discovered, but PSR B1937+21 remained
the fastest-spinning known pulsar for 24 years, until PSR J1748-2446ad (which spins more than 700
times a second) was discovered.
In 2003, Marta Burgay and colleagues discovered the first double neutron star system where both
components are detectable as pulsars, PSR J0737-3039.[64]The discovery of this system allows a
total of 5 different tests of general relativity, some of these with unprecedented precision.
In 2010, Paul Demorest and colleagues measured the mass of the millisecond pulsar PSR J1614-2230 to be 1.97 ± 0.04 M☉, using Shapiro delay. This was substantially higher than any previously measured neutron star mass (1.67 M☉, see PSR J1903+0327), and places strong constraints on the interior composition of neutron stars.
In 2013, John Antoniadis and colleagues measured the mass of PSR J0348+0432 to be 2.01 ± 0.04 M☉, using white dwarf spectroscopy. This confirmed the existence of such massive stars using a different method.
Furthermore, this allowed, for the first time, a test of general relativity using such a massive neutron star.

Subtypes table

Neutron star
- Isolated neutron star (INS): not in a binary system.[38]
  - Rotation-powered pulsar (RPP or "radio pulsar"): neutron stars that emit directed pulses of radiation towards us at regular intervals (due to their strong magnetic fields).[38]
    - Rotating radio transient (RRAT): thought to be pulsars which emit more sporadically and/or with higher pulse-to-pulse variability than the bulk of the known pulsars.
  - Magnetar: a neutron star with an extremely strong magnetic field (1,000 times more than a regular neutron star) and long rotation periods (5 to 12 seconds).
    - Soft gamma repeater (SGR)
    - Anomalous X-ray pulsar (AXP)
  - Radio-quiet neutron stars
    - X-ray dim isolated neutron stars
    - Central Compact Objects in Supernova remnants (CCOs in SNRs): young, radio-quiet, non-pulsating X-ray sources, thought to be isolated neutron stars surrounded by supernova remnants.[38]
- X-ray pulsars or "accretion-powered pulsars": a class of X-ray binaries.
  - Low-mass X-ray binary pulsars: a class of low-mass X-ray binaries (LMXB), a pulsar with a main-sequence star, white dwarf or red giant.
    - Millisecond pulsar (MSP) ("recycled pulsar")
      - Sub-millisecond pulsar[69]
    - X-ray burster: a neutron star with a low-mass binary companion from which matter is accreted, resulting in irregular bursts of energy from the surface of the neutron star.
  - Intermediate-mass X-ray binary pulsars: a class of intermediate-mass X-ray binaries (IMXB), a pulsar with an intermediate-mass star.
  - High-mass X-ray binary pulsars: a class of high-mass X-ray binaries (HMXB), a pulsar with a massive star.
- Binary pulsars: a pulsar with a binary companion, often a white dwarf or neutron star.

Theorized compact stars with similar properties:
- Protoneutron star (PNS), theorized.[70]
- Exotic star
  - Quark star: currently a hypothetical type of neutron star composed of quark matter, or strange matter. As of 2008, there are three candidates.
  - Electroweak star: currently a hypothetical type of extremely heavy neutron star, in which the quarks are converted to leptons through the electroweak force, but the gravitational collapse of the neutron star is prevented by radiation pressure. As of 2010, there is no evidence for their existence.
  - Preon star: currently a hypothetical type of neutron star composed of preon matter. As of 2008, there is no evidence for the existence of preons.

Examples of neutron stars.

PSR J0108-1431, the closest known neutron star
LGM-1, the first recognized radio pulsar
PSR B1257+12, the first neutron star discovered with planets (a millisecond pulsar)
SWIFT J1756.9-2508, a millisecond pulsar with a stellar-type companion of planetary-range mass (below brown dwarf)
PSR B1509-58, source of the "Hand of God" photo shot by the Chandra X-ray Observatory
PSR J0348+0432, the most massive neutron star with a well-constrained mass, 2.01 ± 0.04 M☉

Neutron star

Bubble-like shock wave still expanding from a supernova explosion 15,000 years ago.

Ordinarily, atoms are mostly electron clouds by volume, with very compact nuclei at the center (proportionally, if
atoms were the size of a football stadium, their nuclei would be the size of dust mites). When a stellar core
collapses, the pressure causes electrons and protons to fuse by electron capture. Without electrons, which keep
nuclei apart, the neutrons collapse into a dense ball (in some ways like a giant atomic nucleus), with a thin
overlying layer of degenerate matter (chiefly iron unless matter of different composition is added later). The
neutrons resist further compression by the Pauli Exclusion Principle, in a way analogous to electron degeneracy
pressure, but stronger.

223
These stars, known as neutron stars, are extremely small (on the order of 10 km in radius, no bigger than a
large city) and are phenomenally dense. Their period of rotation shortens dramatically as the stars shrink (due to
conservation of angular momentum); observed rotational periods of neutron stars range from about 1.5 milliseconds
(over 600 revolutions per second) to several seconds.[30] When these rapidly rotating stars' magnetic poles are aligned
with the Earth, we detect a pulse of radiation each revolution. Such neutron stars are called pulsars, and were the
first neutron stars to be discovered. Though electromagnetic radiation detected from pulsars is most often in the
form of radio waves, pulsars have also been detected at visible, X-ray, and gamma ray wavelengths.
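The scale of these numbers is easy to check with a back-of-the-envelope calculation. The short sketch below (Python) estimates the mean density of such a star and the equatorial speed implied by a 1.5 millisecond rotation period; the fiducial mass of 1.4 solar masses and radius of 10 km are assumed round values for illustration, not figures taken from this text's references.

import math

M_SUN = 1.989e30          # kg, one solar mass

# Assumed fiducial values for illustration only
mass = 1.4 * M_SUN        # kg
radius = 10e3             # m (10 km)
period = 1.5e-3           # s (the fastest observed rotations are ~1.5 ms)

volume = 4.0 / 3.0 * math.pi * radius**3
density = mass / volume                        # mean density, kg/m^3
v_equator = 2.0 * math.pi * radius / period    # equatorial rotation speed, m/s

print(f"mean density ~ {density:.2e} kg/m^3")        # roughly 7e17 kg/m^3
print(f"equatorial speed ~ {v_equator/3e8:.2f} c")   # a sizeable fraction of the speed of light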

Main-sequence stars

The structure of a main-sequence star is quite simple: at the core of the star, hydrogen is converted into
helium through nuclear fusion. Some of this nuclear energy escapes directly into space as neutrinos, and the
remainder is trapped within the core as thermal energy and electromagnetic radiation. This energy is
transported through lower temperature and density layers to the surface.
All main sequence stars have regions that are stable to convection and regions that are unstable. A star
with a mass about that of the Sun or less has a core that is stable to convection, but its outer layers are
unstable. The distance of the boundary between the convective and the stable layers from the center of the star
depends on the mass of the star, with the boundary closer to the center in smaller stars. The very smallest stars
are fully convective. Stars the size of the Sun are convective in only the outermost layers. Because of the
stability of their cores, stars the size of the Sun and most stars that are smaller have no mixing of the fusion-
created helium with the hydrogen outside of their cores. This leads over time to a composition gradient within
the cores of small stars, with the very center of the star becoming helium-rich, and the region immediately outside
of the core remaining hydrogen-rich.
Stars that are more massive than the Sun have an outer layer that is stable to convection and a core that is
unstable; the boundary between these two layers moves outward towards the surface as the mass of the star
increases, but it never reaches the surface. Because of this convective interior, the helium created through
nuclear fusion is carried out of the core, and hydrogen from regions outside of the core is carried into the core,
where it undergoes fusion. This causes a massive main-sequence star to convert more hydrogen into helium
than is present within its core. Over time, the boundary between the convective interior and the stable outer
layers pulls closer to the center of the stars. This leads to a composition gradient, similar to that produced in a
low-mass star, but extending out to larger radii.
The size of a star's core is set by the star's mass and core temperature. The larger the mass, the larger the
core, but the larger the core temperature, the smaller the core. This last point may at first appear
counterintuitive. Keep in mind that we are considering stars in static equilibrium, so the pressure at the core of
a star must be counteracted by the gravitational force on the core by the overlaying material. At the core radius,
the gravitational force is proportional to the mass of the core divided by the square of the core radius. If the
mass of the star is held constant, the gravitational force at the core can only be increased by shrinking the core.
The mechanism that transports radiation out of a star and the nature of the pressure within a star, whether
dominated by radiation pressure or gas pressure, determines the precise structure and radius of a star. Because
these factors change with mass, there is no simple equation that relates the stellar mass and core temperature to
the surface radius and temperature. If a star is fully convective and dominated by gas pressure, or if it is fully
supported by radiation pressure, then the radius of a star is proportional to the radius of its core, and the surface
temperature is proportional to the core temperature.

224
By defining the mass and the core temperature of a star, we can calculate a structure for the star. But the
core temperature is not an arbitrary parameter; its value adjusts until the rate of energy generation through
nuclear fusion equals the rate of energy loss at the star's surface. The nuclear reaction rates are strong functions
of temperature, with the rate rising rapidly with temperature. They are also proportional to the square of the
density. This means that as the core of a star shrinks, both the density and the temperature increase, and the rate
of nuclear fusion increases dramatically. Because the core density is inversely proportional to the cube of the
core radius, and because the core temperature increases inversely with core radius, the nuclear reaction rate
increases faster than Rc^-6. The surface temperature of the star increases proportionally with the core temperature
as long as the character of the energy transport and of the pressure do not change. The surface radius is then
proportional to the core radius, and the surface temperature is proportional to the core temperature. Because the
cooling at the surface is proportional to R^2 T^4, a feature of thermal cooling, and because the core temperature is
inversely proportional to the core radius, the cooling rate increases as the star shrinks as Rc^-2. The energy
generation at the core therefore increases much more rapidly than the energy loss at the surface, and so as the
star shrinks, it finds a core temperature that balances heat generation with cooling.
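As a toy illustration of this balance (not a calculation from this text's references), the sketch below assumes heating that scales as Rc^-6 and surface cooling that scales as Rc^-2 and solves for the core radius at which they are equal. The normalization constants A_HEAT and B_COOL are arbitrary placeholders.

# Toy balance between nuclear heating and surface cooling in a contracting core.
# Scalings follow the text: heating ~ Rc^-6, cooling ~ Rc^-2 (core temperature ~ 1/Rc).
A_HEAT = 1.0e-3   # heating normalization (arbitrary units)
B_COOL = 1.0      # cooling normalization (arbitrary units)

def heating(rc):
    return A_HEAT * rc**-6

def cooling(rc):
    return B_COOL * rc**-2

# Bisection: heating dominates at small radius and cooling at large radius,
# so there is a single equilibrium radius where the two rates match.
lo, hi = 1e-3, 1e3
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if heating(mid) > cooling(mid):
        lo = mid    # heating dominates here: the equilibrium lies at a larger radius
    else:
        hi = mid
print(f"equilibrium core radius ~ {0.5*(lo+hi):.4f}"
      f" (analytic value: {(A_HEAT/B_COOL)**0.25:.4f})")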
In practice, because the energy generation rate is such a strong function of temperature, the core
temperature of a star is a weak function of its mass. A star of one-tenth the Sun's mass will have a core
temperature of around 4 million degrees, and a star with 50 times the Sun's mass will have a core temperature
of around 40 million degrees. So over a mass range of a factor of 500, the core temperature increases by only a
factor of 10. The core temperature determines which of the two hydrogen fusion processes, the PP chain and
the CNO cycle, is dominant. The PP chain dominates energy production in stars of less than one-third of the
Sun's mass. The CNO cycle dominates in stars with more than 1.3 times the Sun's mass.

Red giant star evolution.

A star in its red giant phase remains in this phase to the end of its life of thermonuclear fusion. A red giant
evolves through several cycles, each beginning when one nuclear fuel is exhausted at the star's core, causing
the core of the star to shrink until its temperature is sufficiently high to cause the next heavier element to burn.
Cycle by cycle, the star burns at its core first helium, then carbon, and then oxygen. Beyond oxygen,
thermonuclear fusion is too rapid to create a stable, long-lived star, so burning of these heavier elements to iron
is part of a star's collapse to a neutron star or black hole. From the outside, each cycle appears as an expansion
of the photosphere and the reddening of the star, followed by an increase in luminosity, a contraction of the
photosphere, and a shift in color to the blue.
During these cycles of core fusion, thermonuclear fusion of lighter elements continues in shells that
surround the core. The outermost shell burns hydrogen. Additional shells of burning material are created at the
end of each cycle, starting with the creation of a helium-burning shell, followed by creation of a carbon
burning shell. Often most of the energy generated by a red giant comes from these shells, with the hydrogen
shell generating the largest portion of this power. Despite this, thermonuclear burning at the core determines
the structure and appearance of the star.
The size of a star's core is related directly to the core temperature. The reason is that when the pressure
exactly balances gravitational force, as it does at the center of a star, the kinetic energy in the gas is equal to the
magnitude of the gravitational potential energy. The temperature is therefore inversely proportional to the
radius of the core. The transition from hydrogen burning to helium burning, which requires the temperature to
rise from around 2 × 10^7 K to 10^8 K, requires the core to contract by a factor of 5. The transition from helium
to carbon requires the temperature to rise to 6 × 10^8 K, which requires the core to contract by another factor of
6. The transition from carbon to oxygen only requires an increase in temperature to 10^9 K, which only requires
the core to contract by a factor of 1.7. Transitions to helium burning and to carbon burning therefore produce dramatic
changes to the structure of a star.
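Since the core temperature scales as the inverse of the core radius in this argument, the contraction factor for each transition is simply the ratio of the new ignition temperature to the old one. A quick check of the factors quoted above (a Python sketch; the ignition temperatures are the approximate values given in the text):

# Contraction factor for each burning transition, assuming T_core ~ 1/R_core,
# so R_old / R_new = T_new / T_old. Temperatures in kelvin, approximate values
# quoted in the text.
transitions = [
    ("hydrogen -> helium", 2e7, 1e8),
    ("helium   -> carbon", 1e8, 6e8),
    ("carbon   -> oxygen", 6e8, 1e9),
]
for name, t_old, t_new in transitions:
    print(f"{name}: core contracts by a factor of {t_new / t_old:.1f}")
# prints factors of 5.0, 6.0, and 1.7, as stated above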
When burning stops with the depletion of a fuel and the core starts contracting, the temperature gradient
in the star between the core and the photosphere steepens. This causes heat to diffuse to the photosphere
faster. This energy goes to expanding the outer envelope of the star, which flattens the temperature gradient.
The photosphere of a red giant therefore expands to counter the effect of a shrinking, hot core on the
temperature gradient through the star. At the same time, because much of the energy generated by the
shrinking of the core and by the burning of light elements in shells surrounding the core goes into
gravitational potential energy of the expanding outer regions of the star, the luminosity of the star declines.
This means less energy is emitted from a larger photosphere, so the photosphere becomes cooler and redder.
The cooling of the photosphere ceases before the core reaches the temperature for renewed fusion. At
some point, the temperature of the photosphere drops so low that the photosphere ceases to be a plasma; all of
the free electrons recombine with the ions to create a neutral gas. This has a dramatic effect on how radiation
propagates out of the star. The atmosphere of a star is opaque because of H⁻, a hydrogen ion containing two
electrons. It may seem odd that such an ion could exist in a hot environment, but in fact it exists in small
quantities when free electrons are available for capture by neutral hydrogen. This extra electron causes the
hydrogen to easily absorb light, making the atmosphere opaque. When an atmosphere becomes a neutral gas, the free
electrons disappear, H⁻ cannot form in the atmosphere, and the atmosphere becomes transparent. We see through this neutral gas
down to where the temperature is still high enough to keep the gas ionized and opaque. This effect occurs
when the photosphere of a red giant is around 3,000 K. Once this point is reached, the temperature of a red
giant photosphere remains fixed as the core shrinks. The photosphere's radius, however, continues to
increase, so the red giant becomes brighter.

Eventually the core becomes hot enough to reignite thermonuclear fusion. Once started, the fusion
spreads through the core, halting and reversing to some extent the contraction of the core. The evolution of the
star is then governed by thermonuclear fusion, both within the core and within the shells surrounding the core.
From outside, the evolution reverses to some extent the reddening during core shrinkage. As core
thermonuclear fuel is consumed, the core contracts slightly, the star becomes more luminous, the photosphere
radius shrinks, and the photosphere temperature rises. The star therefore appears to become brighter and bluer
as core nuclear fuel is consumed.
The effect of a contracting core is most pronounced in the transition from the main-sequence to the
helium-burning stage. For a 3 solar mass star, the photosphere temperature drops from 12,000 K to 4,000 K in
this transition, while for a 9 solar mass star, the photosphere temperature drops from 25,000 K to 4,000 K. The
reverse of this cooling as helium fusion commences is considerably less pronounced. The photosphere of a 3
solar mass star heats back up to 6,000 K as helium fusion proceeds, while the photosphere of a 9 solar mass
star heats back up to 14,000 K.
The amount of energy liberated by burning helium and other heavier elements is considerably less than
that liberated by burning hydrogen. Helium fusion liberates 9% of the energy per nucleon that hydrogen
liberates. Because the power generated during helium fusion is higher than during core hydrogen burning, the
time a star spends burning helium at its core is less than 9% of the time spent on the main-sequence. The
carbon and oxygen burning stages are considerably shorter than the helium-burning stage, as carbon burning
releases no more than 90% of the energy per nucleon of helium burning, and oxygen burning releases less than
this amount. As a consequence, a star is a red giant for about 10% to 25% of its fusion life.
The time for a core to shrink from one burning stage to the next is much shorter than each of the burning
stages. For instance, a 9 solar mass star transitions from the main sequence to core helium burning in 1% of the
time that the star is on the main sequence. A 3 solar mass star makes this same transition in 5% of the time that
it is on the main sequence. This relatively rapid transition means that only a minority of the red giants we see
are in this transition phase.

Binary stars
Stars are often born in groups. Early in the history of the universe, many stars were born in compact
clusters of hundreds of thousands or millions of stars called globular clusters. More recent star formation
within the Galactic disk gives rise to open clusters of hundreds or thousands of stars. But stars are also born
in smaller groupings of two or three.
Many of the bright, nearby stars are members of binary or triplet star systems. For instance, the brightest
star in the sky, Sirius, is a member of a binary-star system. The second-closest star to Earth, Rigil Kentaurus (α
Centauri), has a slightly dimmer companion star. Another bright, nearby star, Procyon (α Canis Minoris), has a
13th magnitude companion star. These systems are not unusual. In fact, multiple star systems of main-
sequence stars are far more common than single main-sequence stars in the Galactic disk. The binary main-
sequence star systems slightly outnumber single main-sequence stars. The ratio of binary systems to triplet
and quadruplet systems is [Link].[1] This means that only 34% of the main sequence stars in the Galactic disk
have no companion stars.
Generally a binary star looks like a single star to the eye. At a distance of 5 parsecs, a pair of stars
separated by 200 AU would have a separation on the sky of only 40 arc seconds, which is about the angle
spanned by Saturn's rings. This separation is easily resolved with a telescope. Binary stars that can be resolved
with a telescope are angularly-resolved binaries. But most binary systems are too distant to resolve with a
telescope. These systems betray their binary nature through their spectra. As the stars in a binary system orbit
one-another, their spectra are Doppler shifted, so that one sees the spectral lines of one star shifted in
frequency relative to the spectral lines of the other star. Binary systems that reveal themselves in this way are
called spectroscopic binaries.

Binary stars, when they are widely separated, are described as the action of Newtonian gravity on two
point-masses. Each star moves in an elliptical orbit, and the motion of one star relative to the other also traces
an ellipse. The relationship between the period and the semimajor axis (the average of the maximum and
minimum separation of the stars) of a binary system is given by Kepler's laws: the square of the period is
proportional to the cube of the semimajor axis. The physics of binary star motion is therefore very simple
when the stars are far enough apart that their tidal influence on each-other is negligible. This simple physics
makes the binary star the best tool for weighing stars.
The size of a binary star system is more like the size of the Solar System than the separation
between stars in the stellar neighborhood. The orbital periods of the majority of binary stars are
between 1/3 and 300,000 years, with the median at 14 years.[2] Only a tiny fraction of binary stars
have periods shorter than 1 day or longer than 1 million years. For a binary system with a total mass
of 1 solar mass, the median orbital period of 14 years corresponds to a semimajor axis of only 6
AU, which is slightly more than Jupiter's distance from the Sun. For a 1/3 year period, the semimajor
axis is 0.5 AU, and for a 300,000 year period, it is 4,500 AU. These separations increase with an
increase in the total mass of the system as the total mass to the one-third power, so the semimajor
axis of a 10-solar-mass binary system is only 2.2 times greater than that of a 1-solar-mass system
with the same period. With Solar-System like values, the semimajor axis of a binary system is tiny
compared to the average separation of more than a parsec (206,000 AU) between the stars of the
Galactic disk.
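These numbers follow directly from Kepler's third law in the convenient units of years, AU, and solar masses, where a^3 = M_total P^2. The short sketch below (Python; the periods and masses are the example values from this paragraph) reproduces the quoted semimajor axes:

# Kepler's third law with a in AU, P in years, M_total in solar masses:
#   a^3 = M_total * P^2   =>   a = (M_total * P**2) ** (1/3)
def semimajor_axis_au(period_years, total_mass_msun=1.0):
    return (total_mass_msun * period_years**2) ** (1.0 / 3.0)

for period in (1.0 / 3.0, 14.0, 300_000.0):   # example periods from the text
    print(f"P = {period:>10.3f} yr -> a = {semimajor_axis_au(period):8.1f} AU")
# ~0.5 AU, ~5.8 AU, and ~4,500 AU respectively

# A 10-solar-mass binary with the same period is wider by only 10^(1/3):
print(f"mass factor 10 -> separation factor {10 ** (1.0/3.0):.2f}")   # ~2.2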
The eccentricities of binary star orbits fall into two classes. For binary stars with periods longer than 3
years, the orbits are generally very elliptical, with most having eccentricities e ranging between 0.3 and 0.9 (a
circular orbit has e = 0, and a parabolic orbit has e = 1. Mercury, the Solar System planet with the most
eccentric orbit, has e = 0.2). For periods of less than 3 years, the orbits are much more circular, with a large
majority of the orbits having eccentricities between 0.15 and
0.45. This effect is attributed to the tidal dissipation of orbital energy in these tightly-bound systems, which
causes the orbits to become circular. In binary systems with periods of less than 1 day, the tidal dissipation of
energy is so efficient that the orbits have eccentricities of 0.
Besides having circular orbits, the stars in the most tightly-bound systems are close enough to tidally
distort and heat each-other. If the stars in such a binary system are so close that each star fills its Roche lobe
and the photospheres touch, the system is a contact binary star; if the stars are so close that one fills its Roche
lobe, but the other does not, then the system is a semi-detached binary star; if neither star fills its Roche lobe,
the system is a detached binary star. The evolution of these binary stars is complex, with some evolving into
the brilliant compact binary systems that contain compact objects, such as degenerate dwarfs, neutron stars,
and black holes.
One final, striking aspect of binary stars is the relative masses of the stars in a system. For binary systems
with orbital periods longer than 100 years, the secondary (less massive) star tends to be of very low mass, just
as the stars in the Galactic disk tend to be of very low-mass, but for systems with orbital periods less than 100
years, the secondary star's mass tends to be close to the mass of the primary star. This difference in the
secondary's mass with orbital period suggests that the long-period binaries are either created by a different
process than the short-period binaries, or the process that creates binary stars behaves much differently when
creating a larger system than when creating a small system.
The commonness and small size of the binary stars in our Galaxy have implications for the theories of
star formation. The majority of stars are members of binary systems, so binary systems form very easily
within the Galactic disk. The size of a binary system is generally about the size of our Solar System; this has
led astrophysicists to associate the separation between stars with the size of the cloud that gave birth to the
stars in the binary system. The idea that a binary system is born when its stars are born is supported by more
recent observations of binary systems that contain a T Tauri star. These stars are very young variable stars of
between 0.1 and 3 solar masses that have not yet settled down onto the main sequence. They are from one million to several tens of millions of years old. T
Tauri stars are found to have companion stars with the same frequency as main-sequence stars, and the
distribution of their periods is similar to the binary stars containing main-sequence stars. A T Tauri star and its
companion are of the same age.[3] These properties support the idea that stars are born with their companions,
as it is unlikely that they could acquire companions so rapidly after birth. Binary stars therefore tell us
something about how stars are born.
[1] Abt, Helmut A., and Levy, Saul G. "Multiplicity Among Solar-Type Stars." The Astrophysical Journal Supplement Series 30 (March 1976): 273-306.
[2] Duquennoy, A., and Mayor, M. "Multiplicity Among Solar-Type Stars in the Solar Neighborhood: II. Distribution of the Orbital Elements in an Unbiased Sample." Astronomy and Astrophysics 248 (1991): 485-524.
[3] White, R.J., and Ghez, A.M. "Observational Constraints on the Formation and Evolution of Binary Stars." The Astrophysical Journal 556 (20 July 2001): 265-295.

GRAVITATIONAL COLLAPSE

Gravitational collapse is the contraction of an astronomical object due to the influence of its


own gravity, which tends to draw matter inward toward the center of mass. Gravitational collapse is a
fundamental mechanism for structure formation in the universe. Over time an initial, relatively smooth
distribution of matter will collapse to form pockets of higher density, typically creating a hierarchy of
condensed structures such as clusters of galaxies, stellar groups, stars and planets.
A star is born through the gradual gravitational collapse of a cloud of interstellar matter. The
compression caused by the collapse raises the temperature until thermonuclear fusion occurs at the
center of the star, at which point the collapse gradually comes to a halt as the outward thermal pressure
balances the gravitational forces. The star then exists in a state of dynamic equilibrium. Once all its
energy sources are exhausted, a star will again collapse until it reaches a new equilibrium state.

Gravitational collapse of a massive star, resulting in a Type II supernova

BIG CRUNCH.

The Big Crunch is one possible scenario for the ultimate fate of the universe, in which the
metric expansion of space eventually reverses and the universe recollapses, ultimately
causing the cosmic scale factor to reach zero or causing a reformation of the universe
starting with another Big Bang.

Overview
If the universe's expansion speed does not exceed the escape velocity, then the mutual gravitational
attraction of all its matter will eventually cause it to contract. If entropy continues to increase in the
contracting phase

229
(see Ergodic hypothesis), the contraction would appear very different from the time reversal of the
expansion. While the early universe was highly uniform, a contracting universe would become
increasingly clumped. Eventually all matter would collapse into black holes, which would then coalesce
producing a unified black hole or Big Crunch singularity.
The exact details of the events that would take place before such final collapse depend on the length
of both the expansion phase as well as the previous contraction phase; the longer both lasted, the
more events expected to take place in an ever-expanding universe would happen; nonetheless it's
expected that the contraction phase would not immediately be noticed by hypothetical observers
because of the delay caused by the speed of light, that the temperature of the cosmic microwave
background would rise during contraction symmetrically compared to the previous expansion phase,
and that the events that took place during the Big Bang would take place in the opposite order.[2] For a
contracting universe similar to ours in composition, it is expected that superclusters would merge
among themselves, followed by galaxy clusters and later galaxies. By the time stars were so close
together that collisions among them were frequent, the temperature of the cosmic microwave
background would have increased so much that stars would be unable to expel their internal heat,
slowly cooking until they exploded, leaving behind a hot and highly heterogeneous gas whose atoms
would break down into their constituent subatomic particles because of the increasing temperature;
these particles would be absorbed by the already coalescing black holes before the Big Crunch itself.
The Hubble constant measures the current state of expansion in the universe, and the strength of the
gravitational force depends on the density and pressure of matter in the universe, or in other words, the
critical density of the universe. If the density of the universe is greater than the critical density, then the
strength of the gravitational force will stop the universe from expanding and the universe will collapse
back on itself,[1] assuming that there is no repulsive force such as a cosmological constant. Conversely,
if the density of the universe is less than the critical density, the universe will continue to expand and the
gravitational pull will not be enough to stop the universe from expanding. This scenario would result in the
Big Freeze, where the universe cools as it expands and approaches a state of maximum entropy.
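The critical density referred to here is the density at which the universe is spatially flat, rho_c = 3 H^2 / (8 pi G). As a quick numerical sketch (Python), the value can be evaluated for an assumed round Hubble constant of 70 km/s/Mpc, which is an illustrative choice rather than a figure quoted in this text:

import math

G = 6.674e-11                 # gravitational constant, m^3 kg^-1 s^-2
MPC_IN_M = 3.086e22           # one megaparsec in metres
H0 = 70e3 / MPC_IN_M          # assumed Hubble constant, converted to s^-1

# Critical density: rho_c = 3 H0^2 / (8 pi G)
rho_crit = 3.0 * H0**2 / (8.0 * math.pi * G)

M_PROTON = 1.673e-27          # kg, for an intuitive comparison
print(f"critical density ~ {rho_crit:.2e} kg/m^3")                     # ~9e-27 kg/m^3
print(f"equivalent to ~ {rho_crit / M_PROTON:.1f} protons per cubic metre")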

One theory proposes that the universe could collapse to the state where it began and then initiate
another Big Bang,[1] so in this way the universe would last forever, but would pass through phases of
expansion (Big Bang) and contraction (Big Crunch).[4] Another scenario results in a flat universe, which
occurs when the density is exactly the critical density. In this state the universe would always be slowing down,
eventually coming to a stop only after an interminable amount of time. Measurements of the density of the
universe now indicate that it is very close to this critical value, consistent with a flat universe.
Recent experimental evidence (namely the observation of distant supernovae as standard candles, and
the well-resolved mapping of the cosmic microwave background) has led to speculation that the
expansion of the universe is not being slowed down by gravity but rather is accelerating. However, since
the nature of the dark energy that is postulated to drive the acceleration is unknown, it is still possible
(though not observationally supported as of today) that it might eventually reverse its developmental
path and cause a collapse.


Star formation summary


Star formation

An interstellar cloud of gas will remain in hydrostatic equilibrium as long as the kinetic energy of the gas
pressure is in balance with the potential energy of the internal gravitational force. Mathematically this is
expressed using the virial theorem, which states that, to maintain equilibrium, the gravitational potential
energy must equal twice the internal thermal energy. If a pocket of gas is massive enough that the gas
pressure is insufficient to support it, the cloud will undergo gravitational collapse. The mass above which
a cloud will undergo such collapse is called the Jeans mass. This mass depends on the temperature and
density of the cloud, but is typically thousands to tens of thousands of solar masses.
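A common form of the Jeans mass, written in terms of the cloud temperature T and number density n, is M_J ≈ (5 k T / (G μ m_H))^(3/2) (3 / (4 π ρ))^(1/2) with ρ = μ m_H n. The sketch below (Python) evaluates this for an assumed cool atomic cloud at T = 100 K and n = 10 particles per cm³ with a mean molecular weight of about 1.3; these are illustrative fiducial values, not numbers taken from this text.

import math

# Physical constants (SI)
K_B = 1.381e-23       # Boltzmann constant, J/K
G = 6.674e-11         # gravitational constant
M_H = 1.673e-27       # hydrogen mass, kg
M_SUN = 1.989e30      # solar mass, kg

def jeans_mass(temperature_k, n_per_cm3, mu=1.3):
    """Jeans mass in solar masses for a cloud of temperature T and number density n."""
    rho = mu * M_H * n_per_cm3 * 1e6            # mass density, kg/m^3
    mj = ((5.0 * K_B * temperature_k / (G * mu * M_H)) ** 1.5
          * (3.0 / (4.0 * math.pi * rho)) ** 0.5)
    return mj / M_SUN

# Illustrative fiducial values for a cool atomic cloud
print(f"Jeans mass ~ {jeans_mass(100.0, 10.0):.0f} solar masses")
# roughly 1.7e4 solar masses, consistent with "thousands to tens of thousands"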

Stellar remnants

NGC 6745 produces material densities sufficiently extreme to trigger star formation through
gravitational collapse

At what is called the death of the star (when a star has burned out its fuel supply), it will undergo a
contraction that can be halted only if it reaches a new state of equilibrium. Depending on the star's mass
during its lifetime, these stellar remnants can take one of three forms:

White dwarfs, in which gravity is opposed by electron degeneracy pressure[3]


Neutron stars, in which gravity is opposed by neutron degeneracy pressure and short-range
repulsive neutronneutron interactions mediated by the strong force
Black hole, in which there is no force strong enough to resist gravitational collapse
White dwarf
The collapse of the stellar core to a white dwarf takes place over tens of thousands of years, while the
star blows off its outer envelope to form a planetary nebula. If it has a companion star, a white dwarf-
sized object can accrete matter from the companion star until it reaches the Chandrasekhar limit (about
one and a half times the mass of our Sun), at which point gravitational collapse takes over again. While
it might seem that the white dwarf would then collapse to the next stage (a neutron star), it instead undergoes
runaway carbon fusion, blowing completely apart in a Type Ia supernova.
Neutron star

Neutron stars are formed by gravitational collapse of the cores of larger stars, and are the remnant of
other types of supernova. They are so compact that a Newtonian description is inadequate for an
accurate treatment, which requires the use of Einstein's general relativity.
Black holes

Logarithmic plot of mass against mean density (with solar values as origin) showing possible
kinds of stellar equilibrium state. For a configuration in the shaded region, beyond the black
hole limit line, no equilibrium is possible, so runaway collapse will be inevitable.

According to Einstein's theory, for even larger stars, above the Landau-Oppenheimer-Volkoff limit, also
known as the Tolman-Oppenheimer-Volkoff limit (roughly double the mass of our Sun), no known form
of cold matter can provide the force needed to oppose gravity in a new dynamical equilibrium. Hence,
the collapse continues with nothing to stop it.

Simulated view from outside black hole with thin accretion disc, by J. A. Marck

Once a body collapses to within its Schwarzschild radius it forms what is called a black hole,
meaning a space-time region from which not even light can escape. It follows from a theorem of
Roger Penrose[5] that the subsequent formation of some kind of singularity is inevitable.
Nevertheless, according to Penrose's cosmic censorship hypothesis, the singularity will be confined
within the event horizon bounding the black hole, so the space-time region outside will still have a well
behaved geometry, with strong but finite curvature, that is expected[6] to evolve towards a rather
simple form describable by the historic Schwarzschild metric in the spherical limit and by the more
recently discovered Kerr metric if angular momentum is present.
On the other hand, the nature of the kind of singularity to be expected inside a black hole remains rather
controversial. According to some theories, at a later stage, the collapsing object will reach the maximum
possible energy density for a certain volume of space or the Planck density (as there is nothing that can
stop it). This is when the known laws of gravity cease to be valid.[7] There are competing theories as to
what occurs at this point, but it can no longer really be considered gravitational collapse at that stage.[8]
Theoretical minimum radius for a star
The radii of larger-mass neutron stars (about 2.0 solar masses) are estimated to be about 12 km, or
approximately 2.0 times their equivalent Schwarzschild radius.
It might be thought that a sufficiently massive neutron star could exist within its Schwarzschild radius (1.0
SR) and appear like a black hole without having all the mass compressed to a singularity at the center;
however, this is probably incorrect. Within the event horizon, matter would have to move outward faster
than the speed of light in order to remain stable and avoid collapsing to the center. No physical force
therefore can prevent a star smaller than 1.0 SR from collapsing to a singularity (at least within the
currently accepted framework of general relativity; this doesn't hold for the Einstein-Yang-Mills-Dirac
system). A model for nonspherical collapse in general relativity with emission of matter and gravitational
waves has been presented.
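The comparison quoted above is easy to reproduce: the Schwarzschild radius is R_s = 2 G M / c^2, about 3 km per solar mass. A short Python sketch, using the 2.0 solar mass and 12 km figures quoted in this paragraph:

G = 6.674e-11        # m^3 kg^-1 s^-2
C = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # kg

def schwarzschild_radius_km(mass_msun):
    """Schwarzschild radius R_s = 2 G M / c^2, returned in kilometres."""
    return 2.0 * G * mass_msun * M_SUN / C**2 / 1e3

rs = schwarzschild_radius_km(2.0)                        # ~5.9 km for 2.0 solar masses
print(f"Schwarzschild radius of 2.0 M_sun: {rs:.1f} km")
print(f"12 km neutron star radius / R_s:   {12.0 / rs:.1f}")   # ~2.0, as stated above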

If the mass of the stellar remnant is high enough, the neutron degeneracy pressure will be insufficient to prevent
collapse below the Schwarzschild radius. The stellar remnant thus becomes a black hole. The mass at which this
occurs is not known with certainty, but is currently estimated at between 2 and 3 M☉.

Black holes are predicted by the theory of general relativity. According to classical general relativity, no matter or
information can flow from the interior of a black hole to an outside observer, although quantum effects may allow
deviations from this strict rule. The existence of black holes in the universe is well supported, both theoretically and
by astronomical observation.

Because the core-collapse supernova mechanism itself is imperfectly understood, it is still not known whether it is
possible for a star to collapse directly to a black hole without producing a visible supernova, or whether some
supernovae initially form unstable neutron stars which then collapse into black holes; the exact relation between
the initial mass of the star and the final remnant is also not completely certain. Resolution of these uncertainties
requires the analysis of more supernovae and supernova remnants.

Models

A stellar evolutionary model is a mathematical model that can be used to compute the evolutionary phases
of a star from its formation until it becomes a remnant. The mass and chemical composition of the star are
used as the inputs, and the luminosity and surface temperature are the only constraints. The model formulae are based upon the physical understanding of the star,
usually under the assumption of hydrostatic equilibrium. Extensive computer calculations are then run to determine
the changing state of the star over time, yielding a table of data that can be used to determine the evolutionary track
of the star across the Hertzsprung-Russell diagram, along with other evolving properties.[32] Accurate models can be
used to estimate the current age of a star by comparing its physical properties with those of stars along a matching
evolutionary track.
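As a toy illustration of that last step, age estimation by comparison with a model track, the sketch below assumes a precomputed evolutionary track given as a list of (age, luminosity, effective temperature) points and picks the age whose model point lies closest to the observed star in the log L - log Teff plane. The track values here are made-up placeholders for illustration, not the output of any real stellar evolution code.

import math

# Hypothetical evolutionary track: (age in Gyr, L in solar units, Teff in K).
# These numbers are placeholders for illustration only.
track = [
    (0.1, 0.90, 5750.0),
    (2.0, 0.95, 5770.0),
    (4.5, 1.00, 5772.0),
    (7.0, 1.15, 5790.0),
    (9.0, 1.40, 5810.0),
]

def estimate_age(l_obs, teff_obs):
    """Return the track age whose (log L, log Teff) point is nearest the observation."""
    def dist(point):
        _, l_mod, t_mod = point
        return math.hypot(math.log10(l_obs) - math.log10(l_mod),
                          math.log10(teff_obs) - math.log10(t_mod))
    return min(track, key=dist)[0]

print(f"estimated age ~ {estimate_age(1.02, 5775.0)} Gyr")   # picks the nearest point, 4.5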

Theories for the evolution of binary stars

Any theory that aspires to explain how stars are born must also explain why the majority of stars in the
Galactic disk are members of multiple-star systems, with the majority of those being binary-star systems.[1] This is
clearly a property that goes back to stellar birth, because newly-formed stars are also predominately in
multiple-star systems. Stars do not pick up companions as they age. Either stars capture their companions
shortly after birth, or they are born into multiple-star systems.
Binary-star systems have two properties that strongly impact theories of star formation. First, binary-
star systems are small in size, with the separations between stars ranging from much less than 1 AU to several
thousand AU. Second, for systems with short periods, the mass of the smaller of the two stars in a binary
system is generally close to the primary star's mass, averaging half of the primary-star's mass, but, for systems
with long periods, the mass is generally small, with a distribution of values similar to the distribution of
stellar masses of isolated stars.
In recent years theorists have explored four theories for binary-star birth: the capture of one star by
another; the splitting of a star into two stars; the collapse of a star's accretion disk to a companion star, and the
fragmentation of a collapsed molecular cloud into multiple stars.[3] The last three theories treat the birth of a
binary system as part of the birth of a star.
The first theory, the capture of a star by a second star, can explain the creation of binary stars in the dense
globular clusters, where the gravitational potential energy liberated in the formation of a binary star heats the
cluster, but it cannot explain the binary systems in the Galactic disk. The problem is that a star cannot capture
another star unless kinetic energy is expelled from the system. A third star can be the sink for this kinetic
energy, but in the Galactic disk the probability is low that three stars would come together at the same time in a
way that leaves two of these stars bound together. Even with the higher stellar densities in star-forming regions,
the rate of capture is too low to produce a high number of binary systems with young stars. Tidal heating of the
stars can expel kinetic energy from the system, but for tidal forces to dissipate enough energy to cause stellar
capture requires the stars to pass very close to one-another, which is a low-probability event. One way around
this close-encounter problem is to tidally heat the accretion disks orbiting each star instead of the stars
themselves. Accretion disks are observed orbiting newly-formed stars. These disks are seen by the infrared
radiation they emit. They take up angular momentum from the new star, allowing the star to become a slowly-
rotating pressure-supported sphere. One can imagine that as a pair of young stars with accretion disks pass
each-other, they raise tides in each-other's accretion disks, dissipating kinetic energy. Simulations of this
process, however, find that in such an event the accretion disks are disrupted without extracting enough kinetic
energy to gravitationally bind the stars. For these reasons, one does not expect stars to capture one-another at a
great rate, and certainly not rapidly enough to account for the binary-star systems containing stars that are only
several million years old.
The second theory, that a rapidly-rotating star can split into two stars, is over a century old. It
appears to be out of favor in the broad community, although some researchers are still pursuing it. The idea is
that a rapidly-spinning spherical star is unstable, distorting first into a bar shape, and then into a barbell shape.
The mass that accumulates at each end of the barbell becomes a

235
star, so that the system evolves into a contact binary star. As each star contracts to its main-sequence size, the
binary system becomes detached. The problem is in getting the original star to evolve from a bar shape to a
barbell shape; numerical simulations tend to find that angular momentum within the star is redistributed, and
the star changes from a bar shape to a sphere orbited by an accretion disk.
The third theory, in which a second star forms from the accretion disk orbiting a newly-formed star, resembles
the theory for planetary birth. As stated earlier, stars are born surrounded by accretion disks. The planets
around the Sun and around other stars formed from these accretion disks. Compared to a star, the planets in a
planetary system are very small. Can an accretion disk give birth to something as large as a star? Theorists
have shown that such a birth is possible if the accretion disk is more massive than the central star it is orbiting.
The idea is that after the central star forms from a molecular cloud, the accretion disk surrounding it continues
to accumulate gas from the cloud. When the accretion disk becomes more massive than the central star, the
disk becomes unstable, with the gas in the disk clumping to one side of the disk. This instability is driven by
the self-gravity of the accretion disk. Eventually all of the gas in the disk flows to one part of the disk to form
the second star. The advantage of such a theory is that it naturally produces binary stars rather than systems
with three or four stars, and it explains why the size of a binary-star system is comparable to the size of a
planetary system.
The final theory, in which a molecular cloud collapses and fragments to form multiple stars, takes advantage of the
fact that as a cloud contracts, the length over which it is stable contracts more rapidly. Under the current
theories of star formation, a star forms when the densest regions of a molecular cloud collapse through their
own self-gravity. Whether a cloud is stable against collapse depends on whether it is larger or smaller than the
Jeans length, which is set by the temperature and density of the gas. If a static cloud is larger than the Jeans
length, it will collapse. If the cloud is much larger than the Jeans length, it will collapse into several pieces,
with the initial size of each piece of order the Jeans length. The interesting feature of this fragmentation is that
if a cloud cools as it contracts, the Jeans length for the cloud becomes much smaller than the size of the cloud.
This has led theorists to believe that a collapsing molecular cloud of several solar masses could fragment and
give birth to a multiple-star system. The problem in this simple picture, however, is that as long as the cloud is
collapsing, it cannot fragment. Computer simulations have shown that the density gradients that form as a cloud
collapses prevent the cloud from fragmenting. Only if the original cloud ceases its collapse and stabilizes
itself at a smaller scale can fragmentation and collapse occur. Angular momentum provides the mechanism that
halts collapse, causing the cloud to settle down as a large rotating disk. This disk fragments and collapses to
form several stars that are gravitationally bound to each-other. The size of the system is set by the size of the
stabilized cloud. Whether one can preferentially form binary stars with such a theory is not yet known.
As often happens in astrophysics with frequently-occurring phenomena, the birth of a binary star is
difficult to replicate in the laboratory, which in this case is within a computer's guts. Astrophysicists are unable
to follow the computer realizations of the last three theories for a sufficient length of time to see stars form.
The problem is in the wide range of scales encountered in the problem. Spatially, one is following the three-
dimensional evolution of a cloud that is parsecs in size down to a handful of stars separated by several AU and
with radii of less than 0.01 AU, so the scale changes by a factor of 10 million. One is also following processes
that occur on a variety of timescales. In particular, the gravitational free-fall timescale in the densest regions of
a molecular cloud is several orders of magnitude shorter than the orbital timescale for the system. These
widely-varying scales, which must be resolved within a computer simulation, are why a theory that is over one
hundred years old cannot be definitively eliminated from consideration.
How many theories do we need? The different characteristics of binary systems with periods of more
than 100 years from those of systems with periods of less than 100 years suggests that two different
mechanisms give us binary stars (100 years corresponds to a semimajor axis of around 22 AU for a 1 solar mass system). The researchers who found this difference between long- and short-period
systems in the 1970s suggested that a short-period binary system is formed when a rapidly-rotating star splits in
two, and a long-period system is formed when a molecular cloud fragments.[1] More recently, researchers
studying systems of young stars have claimed that molecular cloud fragmentation as the source of all binary-
star systems is most consistent with their data.[2] The issue remains unsettled, so we find ourselves with three
plausible theories for the origin of binary stars in the Galactic disk, two of which may be at work creating binary
stars. If only one mechanism is at work, it is likely the fragmentation of molecular clouds, but if two are at
work, then the collapse of an unstable accretion disk now seems the most likely process creating the short-
period binary systems.

MATHEMATICAL MODEL OF STELLAR EVOLUTION.

Blue stars.
Blue stars are the most luminous stars. They are located in the upper left of the main sequence in the Hertzsprung-
Russell diagram. These stars are also the most massive and the largest of all the main sequence stars. Their
large mass, however, is not able to compensate for the high luminosity. This means that, although their supply
of hydrogen is larger than that of stars like the Sun, they will burn up this supply at such a high rate that their total lifetime is
much shorter than for medium- or low-mass stars.

Why is it important to look for these short-lived stars? Since their total lifetime is so short, they will not have had
the opportunity to move very far from the place where they were born. Consequently, if we want to search for
galactic regions in which star formation is possible, we have to look for places with luminous blue stars. This is not
always possible in our own Galaxy where our view is blocked by interstellar clouds, but there are more than enough
other galaxies. Luminous blue stars are found in the outer regions of spiral galaxies. The spiral components of these
galaxies actually have a blue color because the luminosity of these blue stars is so high that it compensates for their
small number. The spiral arms also have another important component, namely extended hydrogen and dust clouds,
whose presence leads to the suggestion that stars are born in these clouds by condensation and gravitational
collapse. A number of pictures taken of dark clouds in the Galaxy clearly show spherical condensations, called
protostars, with young stars often found in the immediate surroundings of these protostars.

A normal consequence of the contraction of a gas cloud is an increase in its temperature. During the first stages,
however, the radiation is able to escape, so the temperature and pressure in the cloud stay at a low level. The cloud
will therefore continue to collapse, eventually fragmenting into a number of smaller contracting clouds. By the time
the central density becomes high enough to make the center opaque to infrared radiation, the gravitational field is
strong enough to compensate for the increasing pressure. The collapse is now inevitable and will only come to an end
when the central temperature reaches several million kelvin. This is the minimum temperature needed to start the nuclear
fusion of hydrogen into helium, a process that releases an enormous amount of energy. This energy is transported
through the cloud and radiated away into space. The flow of energy also restores hydrostatic equilibrium. The cloud
has now become a normal core-hydrogen-burning star (Bodif and De Loore, 1985).

An important amount of kinetic energy is released during the contraction stage. This energy must be removed
without heating the cloud; otherwise the rising pressure would prevent any further contraction. CO
molecules play an especially important role as a cooling mechanism in the dust clouds. In this context, a simple
model for star formation can be illustrated as follows.

The model contains three active components: cool atomic clouds, cool molecular clouds, and active young stars.
Each of these components may interact with the other components or with the rest of the Galaxy. The model is
therefore an open system connected with two mass reservoirs outside the system. A schematic view of the system is
presented in Fig. 1.

Fig. 1. A schematic view of the various components of the star formation model.

The main component of the cool atomic clouds is neutral hydrogen, the most abundant chemical element in the
Galaxy. Neutral hydrogen clouds are found in the spiral arms and disk of the Galaxy. Their characteristic radio
emission at a wavelength of 21 cm enables them to be detected and mapped.

The density varies over a large range, but direct star formation in these clouds seems not to occur. The cooling
capacity of these clouds is not large enough to allow a sufficient condensation. This component is connected to an
unlimited reservoir of new atomic gas outside the system.

Cool molecular clouds mainly consist of molecular hydrogen, H2. The densities vary enormously and are generally
much higher than in neutral clouds. The temperature in such a cloud decreases as its density increases, as a
consequence of the large cooling capacity of the CO molecules. Molecular clouds have smaller dimensions than neutral
clouds. They are found in the spiral arms, often in the vicinity of OB associations, which are groups of young blue
stars (Bodif, 1986).

Young, active stars are mostly accompanied by hot ionized HII gas. These stars strongly affect the surrounding gas
clouds and are responsible for shock waves in these clouds. In this way, new condensation regions may be formed
in the molecular clouds of the system. The presence of young stars therefore has a positive effect on the stellar birth rate. The capacity of influencing the other components
ends when these young stars evolve to neutron stars. Although these remnants are still physically present in the
system, their masses have stopped playing an active role in the star formation process. We therefore say that this
mass has left the active star formation system. The second reservoir is hence a waste reservoir containing the stellar
remnants.

Equations of interaction between the components


The three mass components will be described by the variable S for the total mass of active stars, M for the total
mass of molecular clouds, and A for the total mass of atomic clouds. It is assumed that the total mass of the
system remains constant; thus, we assume that the amount of mass lost by stellar evolution is exactly replaced by
fresh atomic clouds entering the star formation region from the rest of the Galaxy. If we call the total mass of the
system T, we may write

(1)  T = A + M + S.

There are three kinds of interaction for the atomic cloud component A. First, there is a constant replenishment by
new atomic clouds in an amount equal to the amount of mass leaving the active system by stellar evolution. The
amount of new gas may therefore be considered as proportional to the amount of stellar mass S. We will call the
proportionality constant of this process K1. Secondly, the atomic component is increased as young, active stars lose
mass by stellar wind. This process is also proportional to the number of stars and therefore to the amount of stellar
mass. The proportionality constant for this process is K2. The third interaction is the transformation of atomic into
molecular clouds. This process is clearly proportional to the amount of atomic gas A, but since the transformation
becomes more and more effective with the cooling capacity of the cloud, and since this capacity increases with the
square of the density of the molecular content, we assume that the transformation of atomic into molecular gas is
proportional to the square of the molecular mass. This third process is a loss of atomic gas and is therefore written
with a minus sign in the differential equation, with a proportionality constant K3:

(2)  dA/dt = K1 S + K2 S - K3 A M^2.

The rate of star formation may be considered as being proportional to the nth power of the density of the molecular
cloud. Values of n can be selected between 0.5 and 3.5, or between 1 and 2. It is assumed that the presence of other
young stars is a necessary condition for star formation, since they will perturb the molecular cloud and provoke
condensations. In this way we may also state that the star formation rate is proportional to the number of active
stars already present. Let us call the proportionality constant K4. This process increases the mass of the stellar
component. Two other processes decrease it: stellar evolution, for which we may use K1, and mass loss by stellar
wind, for which we again use K2. Both processes are proportional to the amount of stellar mass. Thus, the equation
describing the variation of stellar mass in the system is

(3)  dS/dt = K4 S M^n - K1 S - K2 S.

Finally, the variation of the total molecular mass is given by the two processes already described: the transformation of
atomic into molecular gas, which increases the variable M, and star formation, which decreases the
amount of molecular mass. The equation for M will be

(4)  dM/dt = K3 A M^2 - K4 S M^n.

It is easy to see that every process in one of the three differential equations is compensated by the same process with
an opposite sign in one of the other equations. This balance guarantees the conservation of the total amount of
mass.
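This conservation property can be checked mechanically: summing the right-hand sides of Eqs. (2)-(4) must give zero. A small symbolic sketch in Python with sympy, using the symbols defined in the equations above:

import sympy as sp

A, M, S, n = sp.symbols("A M S n", positive=True)
K1, K2, K3, K4 = sp.symbols("K1 K2 K3 K4", positive=True)

dA = K1 * S + K2 * S - K3 * A * M**2      # Eq. (2)
dS = K4 * S * M**n - K1 * S - K2 * S      # Eq. (3)
dM = K3 * A * M**2 - K4 * S * M**n        # Eq. (4)

# dA/dt + dM/dt + dS/dt should simplify to zero, so T = A + M + S is conserved.
print(sp.simplify(dA + dM + dS))          # prints 0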

The three independent variables A, M and S in the previous section are linked by the fact that we consider the total
mass T of the star formation system as constant. One may, for instance, replace S by T - A - M in Eqs. (2) and (4).
The system then reduces to only two first-order equations. Furthermore, it is possible to transform these two
remaining equations by introducing new dimensionless variables and a new dimensionless time coordinate x:

(5.1)  a = A/T

(5.2)  m = M/T

(5.3)  s = S/T

(5.4)  x = (K1 + K2) t.

This implies that

(6)  a + m + s = 1,

at every moment. The parameters K1, K2, K3, and K4 are also transformed as a consequence of the new
dimensionless variables and combine into two new parameters:

(7.1)  k1 = K3 T^2 / (K1 + K2),

and

(7.2)  k2 = K4 T^n / (K1 + K2).

When two out of the three variables are known, the third is also known, since the sum of the three is always one. The
differential equations, after elimination of s and after introducing the new variables, become

(8.1)  da/dx = 1 - a - m - k1 m^2 a,

(8.2)  dm/dx = k1 m^2 a + k2 m^n (a - 1 + m).

These are the equations which can be solved. They contain three free parameters: k1, k2 and n. The results may
be represented graphically in two ways. It is, of course, possible to plot the three variables, the atomic, molecular,
and stellar content, as functions of time. This is done for a number of models whose results will be discussed later.
Another possible method, often used in all applied sciences, is to work with phase diagrams. Consider, for instance, a
mass hanging on a spring. The mass will jump up and down, periodically oscillating around the state of rest. We
may once again plot the position and the velocity as functions of time in two separate drawings, but another way
is to plot the position X on the horizontal axis and the corresponding velocity V on the vertical axis. We will then
obtain a point moving on a circle in the X-V plane. If the oscillations are damped, the radius will slowly decrease
until the circle shrinks to a point on the X-axis. This means that the position is constant and the velocity is zero. The
mass has come to rest.

Diagrams in which two functions of time are plotted on the two axes are called phase diagrams, just as if the time
had been eliminated from the two functions so that we have obtained one single relation containing the two
variables. Phase diagrams are especially suited to the study of periodic or oscillating systems.

Mathematical procedure: star formation


Purpose
Computing a model of a star-forming region with negative and positive feedback mechanisms, where the model
includes three components: cool atomic (HI) gas, molecular gas and dust, and young stars with their associated HII regions.

Input
1. n,
2. k1,
3. k2,
4. m0,
5. s0,
6. a0.
Output
One of the two different regimes that simulate the three components of a star-forming region:

1. evolution towards a stationary state, or

2. evolution towards a limit cycle.

List of the procedure
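
Purely as an illustration of the inputs and outputs described above (this is not the author's original listing, and the amplitude-decay test used below to tell the two regimes apart is a simple heuristic assumed for this sketch):

# Illustrative sketch of the star-formation procedure described above.
# Inputs: n, k1, k2 and the initial fractions m0, s0, a0; output: a label for
# the regime the model settles into.
import numpy as np
from scipy.integrate import solve_ivp

def star_formation_regime(n, k1, k2, m0, s0, a0, x_end=200.0):
    # s0 is implied by a0 and m0 through a + m + s = 1; it is accepted only to
    # mirror the input list above.
    def rhs(x, y):
        a, m = y
        return [(1.0 - a - m) - k1 * m**2 * a,
                k1 * m**2 * a + k2 * m**n * (a - 1.0 + m)]

    sol = solve_ivp(rhs, (0.0, x_end), [a0, m0], dense_output=True, rtol=1e-8)
    x = np.linspace(0.5 * x_end, x_end, 2000)      # look only at late times
    a, m = sol.sol(x)

    # Compare the oscillation amplitude of m in the first and second halves of
    # the late-time window: a shrinking amplitude indicates damping towards a
    # stationary state, a persistent amplitude indicates a limit cycle.
    half = len(x) // 2
    amp1 = m[:half].max() - m[:half].min()
    amp2 = m[half:].max() - m[half:].min()
    if amp2 < 0.01 or amp2 < 0.1 * amp1:
        return "evolution towards a stationary state"
    return "evolution towards a limit cycle"

# Example calls with the parameter sets quoted in the text:
print(star_formation_regime(1.0, 10, 10, m0=0.15, s0=0.15, a0=0.15))
print(star_formation_regime(1.7, 20, 25, m0=0.7, s0=0.2, a0=0.1))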

Numerical examples and conclusion

Evolution towards a stationary state


The model evolves towards a stationary state for certain combinations of the three free parameters. For fixed
values of n and k2, there are some critical values of k1 above which the models evolve to such a state. The model
first starts with some periodic oscillations. These oscillations are then damped, and after a certain relaxation
time, all variables (i.e., the atomic, molecular, and stellar content) reach constant values. This means that all the
interactions between the three components are still present, but in such a way that their combined effects cancel out. The decrease of
the molecular fraction by star formation, for instance, is perfectly balanced by the increase caused by the
transformation of atomic into molecular gas. The rate of star formation is also constant (Hellings, 1994).
For example, consider the following values:

Parameters
n = 1.0, k1 = 10, k2 = 10.

Initial conditions
m0 = 0.15, s0 = 0.15, a0 = 0.15.
The results are represented in Figs. 2a and 2b. We start with a galactic subsystem with a high stellar mass fraction of
0.75. These stars all evolve together to an inactive state, meaning that their masses are regenerated in the system in
the form of atomic gas. This is why the atomic content increases very sharply in the beginning (see Fig. 2a). All this
atomic gas is then transformed completely into molecular gas, a process visible as the diagonal decrease in the
second part of Fig. 2a. Finally, the system evolves, after some very quickly damped oscillations, towards a stationary
state with mass fractions of about 0.40 for the molecular clouds, 0.35 for the stellar content, and
0.25 for the atomic clouds. These final and stable fractions are represented by the constant levels in Fig. 2b.

Fig. 2a. Phase diagram of star formation model. Source: M. A. Sharaf.

Fig. 2b. Source: M. A. Sharaf.

Evolution towards a limit cycle

Another possible final state of the stellar formation model is the limit cycle. In this case, the model evolves into an
oscillating but stable state. We present the two following examples:

Example (1):

Parameters
n = 1.7, k1 = 20, k2 = 25.
Initial conditions

m0 = 0.7, s0 = 0.2, a0 = 0.1.
Example (2):

Parameters
n = 1.5, k1 = 8, k2 = 15.
Initial conditions
m0 = 0.3, s0 = 0.3, a0 = 0.4.
The results are shown in Figs. 3 and 4, respectively. The period and amplitude of the oscillations are constant. In
particular, this means for the stellar fraction that the birth rate is not constant in time. Star formation is concentrated
at certain moments, as is easily seen in the time-dependency plots of Figs. 3b and 4b. In Fig. 3 there are always active stars in
the system, at least about 5%. The oscillations of Fig. 4 are more violent, and this model is characterized by long
periods in which the major part of the mass is atomic gas, and nearly all the rest molecular. Almost no active stars
are present. Then there is a quite sudden transformation of almost all the atomic gas into molecular gas,
immediately followed by a burst of star formation. Then all these stars evolve, more or less together, leaving the active
star formation system. Their mass is replaced by fresh atomic gas, which becomes again the most important
component.

Fig. 3a. Phase diagram of star formation model. Source: M. A. Sharaf.

Figs. 3b, 4a and 4b. Source: M. A. Sharaf.

Measuring stellar and dark mass fractions in spiral galaxies

In almost all galaxy formation scenarios non-baryonic dark matter plays an important role. Today's numerical
simulations of cosmological structure evolution quite successfully reproduce the observed galaxy distribution in the
universe (e.g. Kauffmann et al., 1999). While galaxies form and evolve inside dark halos, their physical appearance
depends strongly on the local star formation and merging history. At the same time the halos evolve and merge as well.
According to the simulations, we expect that the dark matter is of comparable importance in the inner parts of
galaxies (Navarro, Frenk, White, 1996/97) and it thus has a considerable influence on the kinematics.
These predictions are in contrast to some studies which indicate that galactic stellar disks - at least of barred spiral
galaxies - alone dominate the kinematics of the inner regions (e.g. Debattista, Sellwood, 2000). Apparently this is also
the case in our own Milky Way (Gerhard, 1999).

Determining individual mass fractions of the luminous and dark matter is not a straightforward task. The rotation curve
of a disk galaxy is only sensitive to the total amount of gravitating matter, but does not allow one to distinguish the two
mass density profiles. For a detailed analysis it is necessary to adopt more refined methods to separate out the different
profiles. Previous investigations used, for example, knowledge of the kinematics of rotating bars (Weiner, 1999) or the
geometry of gravitational lens systems (Maller et al., 2000).
Here we would like to exploit the fact that the stellar mass in disk galaxies is often organized in spiral arms, thus in
clearly non-axisymmetric structures. On the other hand, in most proposed scenarios, the dark matter is non-collisional
and dominated by random motions. It is not susceptible to spiral structure and is distributed like the stars in elliptical
galaxies. If the stellar mass dominates, the arms could induce considerable non-circular motions in the gas, which
should become visible as velocity wiggles in observed gas kinematics. Using hydrodynamical gas simulations we are
able to predict these velocity wiggles and compare them to the observations. Hence the contribution of the perturbative
forces with respect to the total forces can be determined quantitatively and can be used to constrain the disk-to-halo
mass ratio.

Observations

For this analysis we need data to provide us with information on the stellar mass distribution and on the gas
kinematics of a sample of galaxies. To map the stellar surface mass density it is most desirable to take near infrared
(NIR) images of the galaxies, because dust extinction and population effects are minimized (e.g. Rix, Rieke, 1993).
During two observing runs in May '99 and March '00 we obtained photometric data for ~20 nearby NGC galaxies.
We used the Omega Prime camera at the Calar Alto 3.5m telescope with the K-band filter (K'). It provides us with a
field of view of 6.76' x 6.76'. Figure 1 shows the K-band image of the Messier galaxy M99.
The kinematic data was obtained with the TWIN, a longslit spectrograph for the 3.5m telescope. To reach a
reasonable coverage of the galaxies' velocity field, we needed to take 8 slit positions across the entire disk of the
galaxies (also displayed in Figure 1). So far we were able to collect complete sets of longslit spectra for only 4
galaxies, mostly due to only moderate weather conditions during the spectroscopy runs.

Fig. 1. K'-band image of M99 with the eight slit positions overlaid.

Source: T. Kranz, A. Slyz, H.-W. Rix, Max-Planck-Institut für Astronomie, Heidelberg, Germany

First results

As a pilot project, we analyzed the data of NGC 4254 (M99). Assuming a constant stellar
mass-to-light ratio, the gravitational potential due to the stellar mass fraction was
calculated by direct integration over the whole mass distribution taken from the NIR
image. The mass-to-light ratio for the maximum-disk contribution was scaled to the
measured rotation curve. For the dark matter contribution we assumed an isothermal halo
with a core. To combine the two components we chose a stellar mass fraction and added
the halo with the variable parameters adjusted to give a best fit to the rotation curve.
We used this potential as an input for the hydrodynamical gas simulations. Figure
2 presents the resulting gas surface density, as it settles in the potential.
The morphology of the gas distribution is very sensitive to the velocity with which the
spiral pattern of the galaxy rotates (the pattern speed). In Figure 2 we show the result of the
simulation with the spiral structure that best matches the K-band image morphology. We
find quite good agreement.
In Figure 3 a comparison of the modelled and the measured rotation curves is presented
for one particular position angle. The four panels show the rotation curves for different
disk mass fractions, ranging from almost none to the maximum-disk case.
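
The text does not spell out the halo profile; one common parameterization of a cored ("non-singular") isothermal halo, used here purely as an illustration and not necessarily the authors' exact choice, has density rho(r) = rho0 / (1 + (r/r_c)^2), whose circular velocity is

\[
v_{\rm halo}^{2}(r) = 4\pi G \rho_0 r_c^{2}\left[1-\frac{r_c}{r}\arctan\!\left(\frac{r}{r_c}\right)\right],
\]

which rises from zero at small radii and approaches a constant value at large r; the core radius r_c and central density rho0 then play the role of the "variable parameters adjusted to give a best fit to the rotation curve" mentioned above.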

Fig. 2

Conclusions and discussion

Although there is quite some scatter in the observed data, we find that the
velocity jumps, which are apparent in the simulations for the 85% case, are
too large to be in agreement with the measurements. On the other hand, the
simulated velocity wiggles in the 20% case seem too small to match the
observations. The inner part of the simulated rotation curve (< 0.5') is
dominated by the dynamics of the small bar, which is present at the center
of the galaxy. Its pattern speed might be different from that of the
spirals and thus lead to a mismatch in the inner part of the rotation curve.

We conclude that an axisymmetric dark halo is needed to explain the
kinematics of the stellar disk. The influence of the stellar disk is
submaximal, in the sense that we do not find velocity wiggles in the observed
kinematics as strong as would be expected if the stellar disk were the major
gravitating source inside the inner few disk scale lengths.
How this conclusion might apply to other spiral galaxies will be the next
issue of this project. We plan to extend our analysis first to
the 3 other galaxies for which we already have complete data sets.
Finally, we intend to draw our final conclusions on the basis of a sample
consisting of 8-10 members. This should be sufficient to determine
reliable results about the luminous and dark mass distributions in spiral
galaxies.

Panels: 20% disk, 44% disk, 60% disk, 85% disk.

Figure 3.

Comparison of observed (data points) and simulated (continuous lines) kinematics. Shown are results from different
runs with variable disk-to-halo fraction*) for one slit position (90° black / 270° red). As we decrease the importance of
the disk, the velocity wiggles in the simulated rotation curves become less and less prominent. In the 85%-disk case we
find velocity jumps of >30 km/s - more than we observe. The variations in the 20%-disk case already tend to be too
small to successfully match with the observed data.

*)
The disk fraction is defined in the following way:
v_tot^2 = f_disk * v_disk^2 + v_halo^2, where v_disk and v_halo are the rotation velocities of the two components.
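
A minimal sketch of how this definition is used in practice (the velocity values below are placeholders, not the authors' data): for a chosen disk fraction the stellar-disk curve is scaled, and the halo contribution is whatever is left over in quadrature.

# Illustrative sketch (placeholder numbers, not the authors' data): combining a
# scaled stellar-disk rotation curve with a halo so that
# v_tot^2 = f_disk * v_disk^2 + v_halo^2.
import numpy as np

def halo_velocity(v_tot, v_disk, f_disk):
    """Halo rotation velocity implied by the observed total curve, the
    maximum-disk stellar curve, and a chosen disk fraction f_disk."""
    v_halo_sq = v_tot**2 - f_disk * v_disk**2
    if np.any(v_halo_sq < 0):
        raise ValueError("f_disk too large: disk alone exceeds the observed curve")
    return np.sqrt(v_halo_sq)

# Placeholder curves in km/s at a few radii (illustrative only):
v_tot  = np.array([150.0, 180.0, 190.0, 195.0])
v_disk = np.array([140.0, 160.0, 150.0, 130.0])   # maximum-disk stellar curve

for f_disk in (0.20, 0.44, 0.60, 0.85):           # the fractions shown in Fig. 3
    print(f_disk, halo_velocity(v_tot, v_disk, f_disk).round(1))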

Source; T. Kranz, A. Slyz, H.-W. Rix

Max-Planck-Institut für Astronomie, Heidelberg, Germany

METALLICITY

The globular cluster M80. Stars in globular clusters are mainly older metal-poor members
of Population II.
In astronomy and physical cosmology, the metallicity or Z is the fraction of mass of a star or other kind of
astronomical object that is not in hydrogen(X) or helium (Y).[1][2] Most of the physical matter in the universe is
in the form of hydrogen and helium, so astronomers use the word "metals" as a convenient short term for "all
elements except hydrogen and helium".[3] This usage is distinct from the usual physical definition of a
solid metal. For example, stars and nebulae with relatively high abundances of carbon, nitrogen, oxygen,
and neon are called "metal-rich" in astrophysical terms, even though those elements are non-metals in
chemistry.
The distinction between hydrogen and helium on the one hand and metals on the other is relevant because
the primordial universe is believed to have contained virtually no metals, which were later synthesised within
stars.
Metallicity within stars and other astronomical objects is an approximate estimation of their chemical
abundances that change over time by the mechanisms of stellar evolution,[4] and therefore provide an
indication of their age.[5] In cosmological terms, the universe is chemically evolving. According to the Big
Bang Theory, the early universe first consisted of hydrogen and helium, with trace amounts
of lithium and beryllium, but no heavier elements. Through the process of stellar evolution stars first
generate energy by synthesising metals from hydrogen and helium by nuclear reactions, then disperse most
of their mass by stellar winds or explode as supernovae, dispersing the new metals into the universe.[6] It is
believed that older generations of stars generally have lower metallicities than those of younger
generations,[7] having been formed in the metal-poor early universe.
Observed changes in the chemical abundances of different types of stars, based on the spectral peculiarities
that were later attributed to metallicity, led astronomer Walter Baade in 1944 to propose the existence of two
different populations of stars.[8] These became commonly known as Population I (metal-rich)
and Population II (metal-poor) stars. A third stellar population was introduced in 1978, known
as Population III stars.[9][10][11] These extremely metal-poor stars were theorised to have been the 'first-born'
stars created in the universe.

Star metallicity and planets


A star's metallicity measurement is one parameter that helps determine if a star will have planets and the
type of planets, as there is a direct correlation between metallicity and the type of planets a star may have.
Measurements have demonstrated the connection between a star's metallicity and gas giant planets,
like Jupiter and Saturn. The more metals in a star and thus its planetary system and proplyd, the more likely
the system may have gas giant planets and rocky planets. Current models show that the metallicity along
with the correct planetary system temperature and distance from the star are key to planet
and planetesimal formation. Metallicity also affects a star's color temperature: metal-poor stars are bluer and
metal-rich stars are redder. The Sun, with 8 planets and 5 dwarf planets, is used as the reference, with a
[Fe/H] of 0.00. Other stars are noted with a positive or negative value. A star with [Fe/H] = 0.0 has the same
iron abundance as the Sun. A star with [Fe/H] = -1.0 has one tenth the heavy-element abundance of the Sun.
At [Fe/H] = +1, the heavy element abundance is 10 times the Sun's value. Surveys of stellar populations
show that older stars generally have lower metallicity.[12][13][14][15][16]

Definition
Stellar composition, as determined by spectroscopy, is usually simply defined by the parameters X, Y and Z.
Here X is the mass fraction of hydrogen, Y is the mass fraction of helium, and Z is the mass fraction of all the
remaining chemical elements, so that

X + Y + Z = 1.

In most stars, nebulae and other astronomical sources, hydrogen and helium are the two dominant
elements. The hydrogen mass fraction is generally expressed as X = m_H / M, where M is the total
mass of the system and m_H is the mass of the hydrogen it contains. Similarly, the helium mass
fraction is denoted as Y = m_He / M. The remainder of the elements are collectively referred to as
'metals', and the metallicity (the mass fraction of elements heavier than helium) can be calculated as

Z = (sum of m_i over all elements heavier than helium) / M = 1 - X - Y.
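
For illustration (with hypothetical round numbers, not measured values): a star with X = 0.70 and Y = 0.28 has

\[
Z = 1 - X - Y = 1 - 0.70 - 0.28 = 0.02,
\]

that is, two percent of its mass in elements heavier than helium.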

For the surface of the Sun, these parameters are measured to have the following approximate values:[17]
hydrogen mass fraction X ≈ 0.74, helium mass fraction Y ≈ 0.25, and metallicity Z ≈ 0.013.

It should be noted that due to the effects of stellar evolution, neither the initial composition nor
the present day bulk composition of the Sun is the same as its present-day surface
composition.
The metallicity of many astronomical objects cannot be measured directly. Instead, proxies are
used to obtain an indirect estimate. For example, an observer might measure the oxygen
content of a galaxy (for example using the brightness of an oxygen emission line) directly, then
compare that value with models to estimate the total metallicity.

Calculation
The overall stellar metallicity is often defined using the total iron-content of the star "[Fe/H]":
though iron is not the most abundant heavy element (oxygen is), it is among the easiest to
measure with spectral data in the visible spectrum. The abundance ratio is defined as
the logarithm of the ratio of a star's iron abundance compared to that of the Sun and is
expressed thus:

[Fe/H] = log10(N_Fe / N_H)_star - log10(N_Fe / N_H)_Sun,

where N_Fe and N_H are the number of iron and hydrogen atoms per unit of volume
respectively. The unit often used for metallicity is the "dex", a contraction of
'decimal exponent'.[18] By this formulation, stars with a higher metallicity
than the Sun have a positive logarithmic value, whereas those with a lower metallicity than the
Sun have a negative value. The logarithm is based on powers of 10; stars with a value of +1
have ten times the metallicity of the Sun (10^1). Conversely, those with a value of -1 have one-
tenth (10^-1), while those with a value of -2 have a hundredth (10^-2), and so on.[3] Young
Population I stars have significantly higher iron-to-hydrogen ratios than older Population II
stars. Primordial Population III stars are estimated to have a metallicity of less than -6.0, that
is, less than a millionth of the abundance of iron in the Sun.
The same sort of notation is used to express differences in the individual elements from the
solar proportion. For example, the notation "[O/Fe]" represents the difference between the logarithm of
the star's oxygen abundance relative to that of the Sun and the logarithm of the star's iron
abundance relative to that of the Sun:

[O/Fe] = [O/H] - [Fe/H] = log10(N_O / N_Fe)_star - log10(N_O / N_Fe)_Sun.

The point of this notation is that if a mass of gas is diluted with pure hydrogen, then its [Fe/H]
value will decrease (because there are fewer iron atoms per hydrogen atom after the dilution),
but for all other elements X, the [X/Fe] ratios will remain unchanged. By contrast, if a mass of
gas is polluted with some amount of oxygen, then its [Fe/H] will remain unchanged but its
[O/Fe] ratio will increase. In general, a given stellar nucleosynthetic process alters the
proportions of only a few elements or isotopes, so a star or gas sample with nonzero [X/Fe]
values may be showing the signature of particular nuclear processes.
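
As a short worked example with hypothetical numbers: a star whose iron-to-hydrogen number ratio is twice the solar value, and whose oxygen-to-hydrogen ratio is exactly solar, has

\[
[\mathrm{Fe/H}] = \log_{10} 2 \approx +0.30, \qquad
[\mathrm{O/H}] = 0, \qquad
[\mathrm{O/Fe}] = [\mathrm{O/H}] - [\mathrm{Fe/H}] \approx -0.30 ,
\]

the signature of a process that has enriched iron relative to oxygen.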

Relation between Z and [Fe/H]


These two ways of expressing the metallic content of a star are related through the equation

log10(Z/X)_star = [M/H] + log10(Z/X)_Sun,

where [M/H] is the star's total metal abundance (i.e. of all elements heavier than helium), defined
as a more general expression than the one for [Fe/H]:

[M/H] = log10(N_M / N_H)_star - log10(N_M / N_H)_Sun.

The iron abundance and the total metal abundance are often assumed to be related through a
constant A as

[M/H] = A x [Fe/H],

where A assumes values between 0.9 and 1. Using the formulas presented above, the relation
between Z and [Fe/H] can finally be written as

log10(Z/X)_star = A x [Fe/H] + log10(Z/X)_Sun.
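
A minimal numerical sketch of this relation follows (the default solar ratio (Z/X)_Sun and the stellar hydrogen fraction used below are assumed approximate values, not taken from the text):

# Illustrative sketch: converting [Fe/H] into a mass-fraction metallicity Z via
# log10(Z/X) = A * [Fe/H] + log10(Z/X)_Sun.
# The default values of zx_sun and x_star are assumed approximate numbers;
# substitute the values from your preferred solar abundance compilation.
def z_from_feh(feh, a=1.0, x_star=0.74, zx_sun=0.018):
    """Metal mass fraction Z for a given [Fe/H] (in dex)."""
    zx_star = zx_sun * 10.0 ** (a * feh)   # (Z/X) of the star
    return zx_star * x_star                # Z = (Z/X) * X

# A star at [Fe/H] = -0.5 with a roughly solar hydrogen mass fraction:
print(round(z_from_feh(-0.5), 4))          # about 0.0042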

Metallicity distribution function

The metallicity distribution function (MDF) is an important concept in stellar and galactic evolution. It is a curve
showing what proportion of the stars in a population, such as a cluster or galaxy, have a particular metallicity
([Fe/H], the relative abundance of iron and hydrogen).[1][2][3][4][5][6][7]
MDFs are used to test different theories of galactic evolution. Much of the iron in a star will have come from
earlier Type Ia supernovae. Other (alpha) metals can be produced in core-collapse supernovae.

EXPERIMENT.
A THOUGHT EXPERIMENT ON THE FORMATION OF A STAR FROM HYDROGEN, HUMAN SKIN AND HELIUM.

ABSTRACT
This is a thought experiment on the formation of stars, sometimes described as a paradox, devised by the astronomer and
physicist Ariny Amos. It addresses the problem of the Copenhagen interpretation of quantum mechanics applied to everyday objects, and the
disparity between religious notions and the scientific Big Bang theory's explanation of the origin of celestial objects such as stars.
The scenario presents Ariny Amos's skin, cut and measured, that may be simultaneously both alive and dead, as
a cut and measured piece in square, circular, rectangular, triangular or prismatic form, in a state known as a quantum
superposition (the superposition principle), as a result of being linked to a random subatomic event that may or may not occur. This
thought experiment is also often featured in theoretical discussions of the interpretations of quantum mechanics. Erwin
Schrödinger coined the term Verschränkung (entanglement) in the course of developing the thought experiment of his
Schrödinger's cat, intended as a discussion of the EPR article, named after its authors Albert
Einstein, Podolsky and Rosen, in 1935. The EPR article highlighted the bizarre nature of quantum superpositions, in which a
quantum system such as an atom or photon can exist as a combination of multiple states corresponding to different possible
outcomes. The prevailing theory, called the Copenhagen interpretation of quantum mechanics, says that a quantum system
remains in this superposition until it interacts with, or is observed by, the external world. A star can easily form and evolve
by nuclear fusion of the hydrogen and helium in Ariny Amos's skin on direct contact with an electric field, as electrons emitted toward the
skin with hydrogen and helium explode; the star can be visible in space and at the skin, with light scattered once radiation is
emitted to the sky, whereby stars and milky ways form and evolve in galaxies. The experiment involves waves emitted to space in an
attempt to find response stimuli results, and the Big Bang theory for the formation of a star from nucleosynthesis and expansion of
the world. The experiment was repeated many times in different locations, including on 6th-7th November 2017 at Lake Victoria in East Africa with
world leaders; the results were discussed and concluded. Calculations were done using the perturbation theory equation
of Paul Dirac, the Drake equation, cosmological perturbation theory, and Isaac Newton's law of universal gravitation.

INTRODUCTION
In physics and systems theory, the human skin contains hydrogen and helium, the major raw materials for star formation and
evolution at the speed of light. The superposition principle,[1] also known as the superposition property, states that, for all linear
systems, the net response at a given place and time caused by two or more stimuli is the sum of the responses that would have
been caused by each stimulus individually; so that if input A produces response X and input B produces response Y, then input
(A + B) produces response (X + Y). The homogeneity and additivity properties together are called the superposition principle.
By writing a very general stimulus (in a linear system) as the superposition of stimuli of a specific, simple form, often the
response becomes easier to compute.
In Fourier analysis, the stimulus is written as the superposition of infinitely many sinusoids. Due to the superposition principle,
each of these sinusoids can be analyzed separately, and its individual response can be computed. (The response is itself a
sinusoid, with the same frequency as the stimulus, but generally a different amplitude and phase.) According to the
superposition principle, the response to the original stimulus is the sum (or integral) of all the individual sinusoidal responses.
As another common example, in Green's function analysis, the stimulus is written as the superposition of infinitely
many impulse functions, and the response is then a superposition of impulse responses by nuclear fusion and fission results.
Fourier analysis is particularly common for waves. For example, in electromagnetic theory, ordinary light is described as a
superposition of plane waves (waves of fixed frequency, polarization, and direction). As long as the superposition principle
holds (which is often but not always; see nonlinear optics), the behavior of any light wave can be understood as a superposition
of the behavior of these simpler plane waves.
Waves are usually described by variations in some parameter through space and time; for example, height in a water
wave, pressure in a sound wave, or the electromagnetic field in a light wave. The value of this parameter is called
the amplitude of the wave, and the wave itself is a function specifying the amplitude at each point.
In any system with waves, the waveform at a given time is a function of the sources (i.e., external forces, if any, that create or
affect the wave) and initial conditions of the system. In many cases (for example, in the classic wave equation), the equation
describing the wave is linear. When this is true, the superposition principle can be applied. That means that the net amplitude
caused by two or more waves traversing the same space is the sum of the amplitudes that would have been produced by the
individual waves separately. For example, two waves traveling towards each other will pass right through each other without
any distortion on the other side. (See image at top.)
With regard to wave superposition, Richard Feynman wrote:[2] No-one has ever been able to define the difference
between interference and diffraction satisfactorily. It is just a question of usage, and there is no specific, important physical
difference between them. The best we can do, roughly speaking, is to say that when there are only a few sources, say two,
interfering, then the result is usually called interference, but if there is a large number of them, it seems that the word diffraction
is more often used.
Other authors elaborate:[3] The difference is one of convenience and convention. If the waves to be superposed originate from a
few coherent sources, say, two, the effect is called interference. On the other hand, if the waves to be superposed originate by
subdividing a wavefront into infinitesimal coherent wavelets (sources), the effect is called diffraction. That is, the difference
between the two phenomena is [a matter] of degree only, and basically they are two limiting cases of superposition effects.
Yet another source concurs:[4] Inasmuch as the interference fringes observed by Young were the diffraction pattern of the double
slit, this chapter [Fraunhofer diffraction] is therefore a continuation of Chapter 8 [Interference]. On the other hand, few opticians
would regard the Michelson interferometer as an example of diffraction. Some of the important categories of diffraction relate to
the interference that accompanies division of the wavefront, so Feynman's observation to some extent reflects the difficulty that
we may have in distinguishing division of amplitude and division of wavefront.
The phenomenon of interference between waves is based on this idea. When two or more waves traverse the same space, the
net amplitude at each point is the sum of the amplitudes of the individual waves. In some cases, such as in noise-cancelling
headphones, the summed variation has a smaller amplitude than the component variations; this is called destructive
interference. In other cases, such as in Line Array, the summed variation will have a bigger amplitude than any of the
components individually; this is called constructive interference.
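
As a simple numerical illustration of the superposition of waves (a generic sketch, not tied to the experiment described in this book): two sinusoids of equal amplitude are summed once in phase and once half a period out of phase.

# Generic illustration of wave superposition: constructive (in phase) and
# destructive (opposite phase) interference of two sinusoids.
import numpy as np

t = np.linspace(0.0, 1.0, 1000)                       # time axis, arbitrary units
f = 5.0                                               # frequency, arbitrary units
wave1 = np.sin(2 * np.pi * f * t)
wave2_in_phase = np.sin(2 * np.pi * f * t)            # same phase as wave1
wave2_opposite = np.sin(2 * np.pi * f * t + np.pi)    # shifted by half a period

constructive = wave1 + wave2_in_phase                 # amplitudes add: peak ~2
destructive = wave1 + wave2_opposite                  # amplitudes cancel: peak ~0

print(np.max(np.abs(constructive)))                   # close to 2.0
print(np.max(np.abs(destructive)))                    # close to 0.0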
In most realistic physical situations, the equation governing the wave is only approximately linear. In these situations, the
superposition principle only approximately holds. As a rule, the accuracy of the approximation tends to improve as the
amplitude of the wave gets smaller, as with the waves the electronic device emits.
In quantum mechanics, a principal task is to compute how a certain type of wave propagates and behaves. The wave is
described by a wave function, and the equation governing its behavior is called the Schrödinger equation. A primary approach
to computing the behavior of a wave function is to write it as a superposition (called "quantum superposition") of (possibly
infinitely many) other wave functions of a certain type (stationary states) whose behavior is particularly simple. Since the
Schrödinger equation is linear, the behavior of the original wave function can be computed through the superposition principle.

RAW MATERIALS,

Helium,
Hydrogen,
Human skin,
Electronic device,
Star counter telescope,
Stop clock.

APPARATUS FOR THE THOUGHT EXPERIMENT ON THE FORMATION OF STARS.

Apparatus for the experiment on the formation of stars. Source: Ariny Amos, own work.
The blue square is the main electricity switch; yellow is the fuse cable for connection and an alternate switch; the wire in light
green is attached to the human skin (in red), which contains hydrogen and helium; the energy device is the electronic
device.

RAW MATERIALS DESCRIPTION.

RAW MATERIAL 1.
HELIUM DESCRIPTION

Helium is a chemical element with symbol He and atomic number 2. It is a colorless, odorless, tasteless, non-
toxic, inert, monatomic gas, the first in the noble gas group in the periodic table. Its boiling point is the lowest
among all the elements.
After hydrogen, helium is the second lightest and second most abundant element in the observable universe, being
present at about 24% of the total elemental mass, which is more than 12 times the mass of all the heavier elements
combined. Its abundance is similar to this figure in the Sun and in Jupiter. This is due to the very high nuclear
binding energy (per nucleon) of helium-4 with respect to the next three elements after helium. This helium-4
binding energy also accounts for why it is a product of both nuclear fusion and radioactive decay. Most helium in
the universe is helium-4, and is believed to have been formed during the Big Bang. Large amounts of new helium are
being created by nuclear fusion of hydrogen in stars.
Helium is named for the Greek god of the Sun, Helios. It was first detected as an unknown yellow spectral
line signature in sunlight during a solar eclipse in 1868 by Georges Rayet,[5] Captain C. T. Haig,[6] Norman R.
Pogson,[7] and Lieutenant John Herschel,[8] and was subsequently confirmed by French astronomer Jules
Janssen.[9] Janssen is often jointly credited with detecting the element along with Norman Lockyer. Janssen
recorded the helium spectral line during the solar eclipse of 1868 while Lockyer observed it from Britain. Lockyer was
the first to propose that the line was due to a new element, which he named. The formal discovery of the
element was made in 1895 by two Swedish chemists, Per Teodor Cleve and Nils Abraham Langlet, who found
helium emanating from the uranium ore cleveite. In 1903, large reserves of helium were found in natural gas
fields in parts of the United States, which is by far the largest supplier of the gas today
Liquid helium is used in cryogenics (its largest single use, absorbing about a quarter of production), particularly in the
cooling of superconducting magnets, with the main commercial application being in MRI scanners. Helium's other
industrial uses (as a pressurizing and purge gas, as a protective atmosphere for arc welding, and in processes such as
growing crystals to make silicon wafers) account for half of the gas produced. A well-known but minor use is as
a lifting gas in balloons and airships.[10] As with any gas whose density differs from that of air, inhaling a small
volume of helium temporarily changes the timbre and quality of the human voice. In scientific research, the behavior
of the two fluid phases of helium-4 (helium I and helium II) is important to researchers studying quantum
mechanics (in particular the property of superfluidity) and to those looking at the phenomena, such
as superconductivity, produced in matter near absolute zero.

On Earth it is relatively rare: 5.2 ppm by volume in the atmosphere. Most terrestrial helium present today is created
by the natural radioactive decay of heavy radioactive elements (thorium and uranium, although there are other
examples), as the alpha particles emitted by such decays consist of helium-4 nuclei. This radiogenic helium is
trapped with natural gas in concentrations as great as 7% by volume, from which it is extracted commercially by a
low-temperature separation process called fractional distillation. Previously, terrestrial helium (a non-renewable
resource, because once released into the atmosphere it readily escapes into space) was thought to be in
increasingly short supply.

Scientific discoveries of helium, and terrestrial Helium.


The first evidence of helium was observed on August 18, 1868, as a bright yellow line with a wavelength of 587.49
nanometers in the spectrum of the chromosphere of the Sun. The line was detected by French astronomer Jules
Janssen during a total solar eclipse in Guntur, India.[17][18] This line was initially assumed to be sodium. On October
20 of the same year, English astronomer Norman Lockyer observed a yellow line in the solar spectrum, which he
named the D3 Fraunhofer line because it was near the known D1 and D2 lines of sodium.[19][20] He concluded that it
was caused by an element in the Sun unknown on Earth. Lockyer and English chemist Edward Frankland named the
element with the Greek word for the Sun.

On March 26, 1895, Scottish chemist Sir William Ramsay isolated helium on Earth by treating the
mineral cleveite (a variety of uraninite with at least 10% rare earth elements) with mineral acids, and so discovered
terrestrial helium. Ramsay was looking for argon but, after separating nitrogen and oxygen from the gas liberated
by sulfuric acid, he noticed a bright yellow line that matched the D3 line observed in the spectrum of the
Sun.[20][25][26][27] These samples were identified as helium by Lockyer and British physicist William Crookes.[28][29] It
was independently isolated from cleveite in the same year by chemists Per Teodor Cleve and Abraham
Langlet in Uppsala, Sweden, who collected enough of the gas to accurately determine its atomic
weight.[18][30][31] Helium was also isolated by the American geochemist William Francis Hillebrand prior to Ramsay's
discovery when he noticed unusual spectral lines while testing a sample of the mineral uraninite. Hillebrand,
however, attributed the lines to nitrogen.[32] His letter of congratulations to Ramsay offers an interesting case of
discovery and near-discovery in science.
In 1907, Ernest Rutherford and Thomas Royds demonstrated that alpha particles are helium nuclei by allowing the
particles to penetrate the thin glass wall of an evacuated tube, then creating a discharge in the tube to study the
spectra of the new gas inside.[34] In 1908, helium was first liquefied by Dutch physicist Heike Kamerlingh Onnes by
cooling the gas to less than one kelvin. He tried to solidify it by further reducing the temperature but failed because
helium does not solidify at atmospheric pressure. Onnes' student Willem Hendrik Keesom was eventually able to
solidify 1 cm3 of helium in 1926 by applying additional external pressure.

In 1938, Russian physicist Pyotr Leonidovich Kapitsa discovered that helium-4 has almost no viscosity at
temperatures near absolute zero, a phenomenon now called superfluidity.
Extraction and use

Historical marker, denoting a massive helium find near Dexter, Kansas.


After an oil drilling operation in 1903 in Dexter, Kansas, produced a gas geyser that would not burn, Kansas state
geologist Erasmus Haworth collected samples of the escaping gas and took them back to the University of
Kansas at Lawrence where, with the help of chemists Hamilton Cady and David McFarland, he discovered that the
gas consisted of, by volume, 72% nitrogen, 15% methane (a combustible percentage, but only with sufficient oxygen),
1% hydrogen, and 12% an unidentifiable gas.[18][41] With further analysis, Cady and McFarland discovered that 1.84%
of the gas sample was helium.[42][43] This showed that despite its overall rarity on Earth, helium was concentrated in
large quantities under the American Great Plains, available for extraction as a byproduct of natural gas.[44]
This enabled the United States to become the world's leading supplier of helium. Following a suggestion by
Sir Richard Threlfall, the United States Navy sponsored three small experimental helium plants during World War
I. The goal was to supply barrage balloons with the non-flammable, lighter-than-air gas. A total of
5,700 m3 (200,000 cu ft) of 92% helium was produced in the program even though less than a cubic meter of the gas
had previously been obtained.[20] Some of this gas was used in the world's first helium-filled airship, the U.S. Navy's C-
7, which flew its maiden voyage from Hampton Roads, Virginia, to Bolling Field in Washington, D.C., on December
1, 1921,[45] nearly two years before the Navy's first rigid helium-filled airship, the Naval Aircraft Factory-built USS
Shenandoah, flew in September 1923.
Although the extraction process, using low-temperature gas liquefaction, was not developed in time to be significant
during World War I, production continued. Helium was primarily used as a lifting gas in lighter-than-air craft. During
World War II, the demand increased for helium for lifting gas and for shielded arc welding. The helium mass
spectrometer was also vital in the atomic bomb Manhattan Project.
The government of the United States set up the National Helium Reserve in 1925 at Amarillo, Texas, with the
goal of supplying military airships in time of war and commercial airships in peacetime.[20] Because of the Helium
Control Act (1927), which banned the export of scarce helium on which the US then had a production monopoly,
together with the prohibitive cost of the gas, the Hindenburg, like all German Zeppelins, was forced to use
hydrogen as the lift gas. The helium market after World War II was depressed but the reserve was expanded in the
1950s to ensure a supply of liquid helium as a coolant to create oxygen/hydrogen rocket fuel (among other uses)
during the Space Race and Cold War. Helium use in the United States in 1965 was more than eight times the peak
wartime consumption.
After the "Helium Acts Amendments of 1960" (Public Law 86777), the U.S. Bureau of Mines arranged for five
private plants to recover helium from natural gas. For this helium conservation program, the Bureau built a 425-mile
(684 km) pipeline from Bushton, Kansas, to connect those plants with the government's partially depleted Cliffside
gas field near Amarillo, Texas. This helium-nitrogen mixture was injected and stored in the Cliffside gas field until
needed, at which time it was further purified.
By 1995, a billion cubic meters of the gas had been collected and the reserve was US$1.4 billion in debt, prompting
the Congress of the United States in 1996 to phase out the reserve.[18][49] The resulting "Helium Privatization Act of
1996"[50](Public Law 104273) directed the United States Department of the Interior to empty the reserve, with
sales starting by 2005.
Helium produced between 1930 and 1945 was about 98.3% pure (2% nitrogen), which was adequate for airships. In
1945, a small amount of 99.9% helium was produced for welding use. By 1949, commercial quantities of Grade A
99.95% helium were available.
For many years, the United States produced more than 90% of commercially usable helium in the world, while
extraction plants in Canada, Poland, Russia, and other nations produced the remainder. In the mid-1990s, a new plant
in Arzew, Algeria, producing 17 million cubic meters (600 million cubic feet) began operation, with enough
production to cover all of Europe's demand. Meanwhile, by 2000, the consumption of helium within the U.S. had
risen to more than 15 million kg per year.[53] In 2004-2006, additional plants in Ras Laffan, Qatar, and Skikda,
Algeria were built. Algeria quickly became the second leading producer of helium. [54] Through this time, both helium
consumption and the costs of producing helium increased.[55] From 2002 to 2007 helium prices doubled.
As of 2012, the United States National Helium Reserve accounted for 30 percent of the world's helium.[57] The
reserve was expected to run out of helium in 2018.[57] Despite that, a proposed bill in the United States
Senate would allow the reserve to continue to sell the gas. Other large reserves were in the Hugoton in Kansas,
United States, and nearby gas fields of Kansas and the panhandles of Texas and Oklahoma. New helium plants
were scheduled to open in 2012 in Qatar, Russia, and the US state of Wyoming, but they were not expected to ease
the shortage.
In 2013, Qatar started up the world's largest helium unit, although the 2017 Qatar diplomatic crisis severely
affected helium production there. 2014 was widely acknowledged to be a year of over-supply in the helium business,
following years of renowned shortages. Nasdaq reported (2015) that for Air Products, an international corporation
that sells gases for industrial use, helium volumes remain under economic pressure due to feedstock supply
constraints.

Characteristics
The helium atom
: Helium atom

The helium atom. Depicted are the nucleus (pink) and the electron cloud distribution (black). The nucleus (upper
right) in helium-4 is in reality spherically symmetric and closely resembles the electron cloud, although for more
complicated nuclei this is not always the case.

Helium in quantum mechanics


In the perspective of quantum mechanics, helium is the second simplest atom to model, following the hydrogen
atom. Helium is composed of two electrons in atomic orbitals surrounding a nucleus containing two protons and
(usually) two neutrons. As in Newtonian mechanics, no system that consists of more than two particles can be solved
with an exact analytical mathematical approach (see 3-body problem) and helium is no exception. Thus, numerical
mathematical methods are required, even to solve the system of one nucleus and two electrons. Such computational
chemistry methods have been used to create a quantum mechanical picture of helium electron binding which is
accurate to within < 2% of the correct value, in a few computational steps. [62] Such models show that each electron in
helium partly screens the nucleus from the other, so that the effective nuclear charge Z which each electron sees, is
about 1.69 units, not the 2 charges of a classic "bare" helium nucleus.
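
The screening value of about 1.69 quoted above can be reproduced by a standard variational estimate (a textbook calculation, included here only as an illustration): treating each electron as occupying a hydrogen-like 1s orbital with an effective charge Z_eff and minimizing the total energy gives

\[
Z_{\mathrm{eff}} = Z - \tfrac{5}{16} = 2 - 0.3125 = 1.6875 \approx 1.69 ,
\qquad
E_{\min} \approx -2\,Z_{\mathrm{eff}}^{2}\,(13.6\ \mathrm{eV}) \approx -77.5\ \mathrm{eV},
\]

within about two percent of the measured ground-state energy of helium (about -79.0 eV), consistent with the accuracy quoted above.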
The related stability of the helium-4 nucleus and electron shell
The nucleus of the helium-4 atom is identical with an alpha particle. High-energy electron-scattering experiments
show its charge to decrease exponentially from a maximum at a central point, exactly as does the charge density of
helium's own electron cloud. This symmetry reflects similar underlying physics: the pair of neutrons and the pair of
protons in helium's nucleus obey the same quantum mechanical rules as do helium's pair of electrons (although the
nuclear particles are subject to a different nuclear binding potential), so that all these fermions fully
occupy 1s orbitals in pairs, none of them possessing orbital angular momentum, and each cancelling the other's
intrinsic spin. Adding another of any of these particles would require angular momentum and would release
substantially less energy (in fact, no nucleus with five nucleons is stable). This arrangement is thus energetically
extremely stable for all these particles, and this stability accounts for many crucial facts regarding helium in nature.
For example, the stability and low energy of the electron cloud state in helium accounts for the element's chemical
inertness, and also the lack of interaction of helium atoms with each other, producing the lowest melting and boiling
points of all the elements.
In a similar way, the particular energetic stability of the helium-4 nucleus, produced by similar effects, accounts for
the ease of helium-4 production in atomic reactions that involve either heavy-particle emission or fusion. Some stable
helium-3 (2 protons and 1 neutron) is produced in fusion reactions from hydrogen, but it is a very small fraction
compared to the highly favorable helium-4.

The unusual stability of the helium-4 nucleus is also important cosmologically: it explains the fact that in the first few
minutes after the Big Bang, as the "soup" of free protons and neutrons which had initially been created in about 6:1
ratio cooled to the point that nuclear binding was possible, almost all first compound atomic nuclei to form were
helium-4 nuclei. So tight was helium-4 binding that helium-4 production consumed nearly all of the free neutrons in a
few minutes, before they could beta-decay, and also leaving few to form heavier atoms such as lithium, beryllium, or
boron. Helium-4 nuclear binding per nucleon is stronger than in any of these elements
(see nucleogenesis and binding energy) and thus, once helium had been formed, no energetic drive was available
to make elements 3, 4 and 5. It was barely energetically favorable for helium to fuse into the next element with a
lower energy per nucleon, carbon. However, due to lack of intermediate elements, this process requires three helium
nuclei striking each other nearly simultaneously (see triple alpha process). There was thus no time for significant
carbon to be formed in the few minutes after the Big Bang, before the early expanding universe cooled to the
temperature and pressure point where helium fusion to carbon was no longer possible. This left the early universe
with a very similar ratio of hydrogen/helium as is observed today (3 parts hydrogen to 1 part helium-4 by mass), with
nearly all the neutrons in the universe trapped in helium-4.
All heavier elements (including those necessary for rocky planets like the Earth, and for carbon-based or other life)
have thus been created since the Big Bang in stars which were hot enough to fuse helium itself. All elements other
than hydrogen and helium today account for only 2% of the mass of atomic matter in the universe. Helium-4, by
contrast, makes up about 23% of the universe's ordinary matter, nearly all the ordinary matter that is not hydrogen.
Gas and plasma phases

Helium discharge tube shaped like the element's atomic symbol

Helium is the second least reactive noble gas after neon, and thus the second least reactive of all elements. It
is chemically inert and monatomic in all standard conditions. Because of helium's relatively low molar (atomic) mass,
its thermal conductivity, specific heat, and sound speed in the gas phase are all greater than any other gas
except hydrogen. For these reasons and the small size of helium monatomic molecules, helium diffuses through
solids at a rate three times that of air and around 65% that of hydrogen.
Helium is the least water-soluble monatomic gas, and one of the least water-soluble of any gas (CF4, SF6, and
C4F8 have lower mole fraction solubilities: 0.3802, 0.4394, and 0.2372 x2/10^-5, respectively, versus helium's 0.70797
x2/10^-5),[65] and helium's index of refraction is closer to unity than that of any other gas. Helium has a
negative Joule-Thomson coefficient at normal ambient temperatures, meaning it heats up when allowed to freely
expand. Only below its Joule-Thomson inversion temperature (of about 32 to 50 K at 1 atmosphere) does it cool
upon free expansion.[20] Once precooled below this temperature, helium can be liquefied through expansion cooling.
Most extraterrestrial helium is found in a plasma state, with properties quite different from those of atomic helium.
In a plasma, helium's electrons are not bound to its nucleus, resulting in very high electrical conductivity, even when
the gas is only partially ionized. The charged particles are highly influenced by magnetic and electric fields. For
example, in the solar wind together with ionized hydrogen, the particles interact with the Earth's magnetosphere,
giving rise to Birkeland currents and the aurora.[67]
Liquid helium

Liquefied helium. This helium is not only liquid, but has been cooled to the point of superfluidity. The drop of liquid at
the bottom of the glass represents helium spontaneously escaping from the container over the side, to empty out of
the container. The energy to drive this process is supplied by the potential energy of the falling helium.

Liquid helium
Unlike any other element, helium will remain liquid down to absolute zero at normal pressures. This is a direct effect
of quantum mechanics: specifically, the zero point energy of the system is too high to allow freezing. Solid helium
requires a temperature of 1-1.5 K (about -272 °C or -457 °F) at about 25 bar (2.5 MPa) of pressure.[68] It is often hard
to distinguish solid from liquid helium since the refractive indices of the two phases are nearly the same. The solid has
a sharp melting point and has a crystalline structure, but it is highly compressible; applying pressure in a laboratory
can decrease its volume by more than 30%.[69] With a bulk modulus of about 27 MPa[70] it is ~100 times more
compressible than water. Solid helium has a density of 0.214 ± 0.006 g/cm3 at 1.15 K and 66 atm; the projected
density at 0 K and 25 bar (2.5 MPa) is 0.187 ± 0.009 g/cm3.[71] At higher temperatures, helium will solidify with
sufficient pressure. At room temperature, this requires about 114,000 atm.
Helium I
Below its boiling point of 4.22 kelvins and above the lambda point of 2.1768 kelvins, the isotope helium-4 exists in a
normal colorless liquid state, called helium I.[20] Like other cryogenic liquids, helium I boils when it is heated and
contracts when its temperature is lowered. Below the lambda point, however, helium does not boil, and it expands as
the temperature is lowered further.
Helium I has a gas-like index of refraction of 1.026 which makes its surface so hard to see that floats
of Styrofoam are often used to show where the surface is.[20] This colorless liquid has a very low viscosity and a
density of 0.145-0.125 g/mL (between about 0 and 4 K), which is only one-fourth the value expected from classical
physics.[20] Quantum mechanics is needed to explain this property and thus both states of liquid helium (helium I
and helium II) are called quantum fluids, meaning they display atomic properties on a macroscopic scale. This may be
an effect of its boiling point being so close to absolute zero, preventing random molecular motion (thermal energy)
from masking the atomic properties.
Helium II
Superfluid helium-4
Liquid helium below its lambda point (called helium II) exhibits very unusual characteristics. Due to its high thermal
conductivity, when it boils, it does not bubble but rather evaporates directly from its surface. Helium-3 also has
a superfluid phase, but only at much lower temperatures; as a result, less is known about the properties of the
isotope.[20]

Unlike ordinary liquids, helium II will creep along surfaces in order to reach an equal level; after a short while, the
levels in the two containers will equalize. The Rollin film also covers the interior of the larger container; if it were not
sealed, the helium II would creep out and escape.
Helium II is a superfluid, a quantum mechanical state (see: macroscopic quantum phenomena) of matter with
strange properties. For example, when it flows through capillaries as thin as 10^-7 to 10^-8 m it has no
measurable viscosity.[18] However, when measurements were done between two moving discs, a viscosity
comparable to that of gaseous helium was observed. Current theory explains this using the two-fluid model for
helium II. In this model, liquid helium below the lambda point is viewed as containing a proportion of helium atoms in
a ground state, which are superfluid and flow with exactly zero viscosity, and a proportion of helium atoms in an
excited state, which behave more like an ordinary fluid.
In the fountain effect, a chamber is constructed which is connected to a reservoir of helium II by a sintered disc
through which superfluid helium leaks easily but through which non-superfluid helium cannot pass. If the interior of
the container is heated, the superfluid helium changes to non-superfluid helium. In order to maintain the equilibrium
fraction of superfluid helium, superfluid helium leaks through and increases the pressure, causing liquid to fountain
out of the container.
The thermal conductivity of helium II is greater than that of any other known substance, a million times that of
helium I and several hundred times that of copper. This is because heat conduction occurs by an exceptional
quantum mechanism. Most materials that conduct heat well have a valence band of free electrons which serve to
transfer the heat. Helium II has no such valence band but nevertheless conducts heat well. The flow of heat is
governed by equations that are similar to the wave equation used to characterize sound propagation in air. When
heat is introduced, it moves at 20 meters per second at 1.8 K through helium II as waves in a phenomenon known
as second sound.
Helium II also exhibits a creeping effect. When a surface extends past the level of helium II, the helium II moves along
the surface, against the force of gravity. Helium II will escape from a vessel that is not sealed by creeping along the
sides until it reaches a warmer region where it evaporates. It moves in a 30 nm-thick film regardless of surface
material. This film is called a Rollin film and is named after the man who first characterized this trait, Bernard V.
Rollin.[20][76][77] As a result of this creeping behavior and helium II's ability to leak rapidly through tiny openings, it is
very difficult to confine liquid helium. Unless the container is carefully constructed, the helium II will creep along the
surfaces and through valves until it reaches somewhere warmer, where it will evaporate. Waves propagating across a
Rollin film are governed by the same equation as gravity waves in shallow water, but rather than gravity, the
restoring force is the van der Waals force.[78] These waves are known as third sound.[79]
Isotopes
Isotopes of helium
There are nine known isotopes of helium, but only helium-3 and helium-4 are stable. In the Earth's atmosphere,
there is one 3He atom for every million 4He atoms. Unlike most elements, helium's isotopic abundance varies greatly by origin, due to the different formation
processes. The most common isotope, helium-4, is produced on Earth by alpha decay of heavier radioactive
elements; the alpha particles that emerge are fully ionized helium-4 nuclei. Helium-4 is an unusually stable nucleus
because its nucleons are arranged into complete shells. It was also formed in enormous quantities during Big Bang
nucleosynthesis.
Helium-3 is present on Earth only in trace amounts; most of it has been present since Earth's formation, though some falls to Earth
trapped in cosmic dust.[81] Trace amounts are also produced by the beta decay of tritium.[82] Rocks from the Earth's
crust have isotope ratios varying by as much as a factor of ten, and these ratios can be used to investigate the origin
of rocks and the composition of the Earth's mantle. 3He is much more abundant in stars as a product of nuclear
fusion. Thus in the interstellar medium, the proportion of 3He to 4He is about 100 times higher than on
Earth.[83] Extraplanetary material, such as lunar and asteroid regolith, have trace amounts of helium-3 from being
bombarded by solar winds. The Moon's surface contains helium-3 at concentrations on the order of 10 ppb, much
higher than the approximately 5 ppt found in the Earth's atmosphere.[84][85] A number of people, starting with Gerald
Kulcinski in 1986,[86] have proposed to explore the moon, mine lunar regolith, and use the helium-3 for fusion.
Liquid helium-4 can be cooled to about 1 kelvin using evaporative cooling in a 1-K pot. Similar cooling of helium-3,
which has a lower boiling point, can achieve about 0.2 kelvin in a helium-3 refrigerator. Equal mixtures of liquid
3He and 4He below 0.8 K separate into two immiscible phases due to their dissimilarity (they follow
different quantum statistics: helium-4 atoms are bosons while helium-3 atoms are fermions). Dilution
refrigerators use this immiscibility to achieve temperatures of a few millikelvins. It is possible to produce exotic
helium isotopes, which rapidly decay into other substances. The shortest-lived heavy helium isotope is helium-5, with
a half-life of 7.6x10^-22 s. Helium-6 decays by emitting a beta particle and has a half-life of 0.8 second. Helium-7 also
emits a beta particle as well as a gamma ray. Helium-7 and helium-8 are created in certain nuclear reactions.
Helium-6 and helium-8 are known to exhibit a nuclear halo.

Compounds
Helium compounds

Structure of the helium hydride ion, HHe+

Structure of the suspected fluoroheliate anion, OHeF-


Helium has a valence of zero and is chemically unreactive under all normal conditions. [69] It is an electrical insulator
unless ionized. As with the other noble gases, helium has metastable energy levels that allow it to remain ionized in
an electrical discharge with a voltage below its ionization potential.[20] Helium can form unstable compounds,
known as excimers, with tungsten, iodine, fluorine, sulfur, and phosphorus when it is subjected to a glow discharge,
to electron bombardment, or reduced to plasma by other means. The molecular compounds HeNe, HgHe10, and WHe2, and the molecular ions He₂⁺, He₂²⁺, HeH+, and HeD+ have been created this way. HeH+ is also stable in its ground state, but is extremely reactive; it is the strongest Brønsted acid known, and therefore can exist only in isolation, as it will protonate any molecule or
counteranion it contacts. This technique has also produced the neutral molecule He2, which has a large number
of band systems, and HgHe, which is apparently held together only by polarization forces.
Van der Waals compounds of helium can also be formed with cryogenic helium gas and atoms of some other
substance, such as LiHe and He2.
Theoretically, other true compounds may be possible, such as helium fluorohydride (HHeF) which would be analogous
to HArF, discovered in 2000.[89] Calculations show that two new compounds containing a helium-oxygen bond could
be stable.[90] Two new molecular species, predicted using theory, CsFHeO and N(CH3)4FHeO, are derivatives of a metastable FHeO− anion first theorized in 2005 by a group from Taiwan. If confirmed by experiment, the only
remaining element with no known stable compounds would be neon.
Helium atoms have been inserted into the hollow carbon cage molecules (the fullerenes) by heating under high
pressure. The endohedral fullerene molecules formed are stable at high temperatures. When chemical derivatives
of these fullerenes are formed, the helium stays inside.[92] If helium-3 is used, it can be readily observed by
helium nuclear magnetic resonance spectroscopy.[93] Many fullerenes containing helium-3 have been reported.
Although the helium atoms are not attached by covalent or ionic bonds, these substances have distinct properties
and a definite composition, like all stoichiometric chemical compounds.
Under high pressures helium can form compounds with various other elements. Helium-nitrogen clathrate (He(N2)11)
crystals have been grown at room temperature at pressures ca. 10 GPa in a diamond anvil

cell.[94] The insulating electride Na2He has been shown to be thermodynamically stable at pressures above 113 GPa.
It has a fluorite structure.

Occurrence and production


Natural abundance
Although it is rare on Earth, helium is the second most abundant element in the known Universe (after hydrogen),
constituting 23% of its baryonic mass.[18] The vast majority of helium was formed by Big Bang nucleosynthesis one
to three minutes after the Big Bang. As such, measurements of its abundance contribute to cosmological models.
In stars, it is formed by the nuclear fusion of hydrogen in proton-proton chain reactions and the CNO cycle, part
of stellar nucleosynthesis.
In the Earth's atmosphere, the concentration of helium by volume is only 5.2 parts per million. The concentration is
low and fairly constant despite the continuous production of new helium because most helium in the Earth's
atmosphere escapes into space by several processes. In the Earth's heterosphere, a part of the upper atmosphere,
helium and other lighter gases are the most abundant elements.
Most helium on Earth is a result of radioactive decay. Helium is found in large amounts in minerals
of uranium and thorium, including cleveite, pitchblende, carnotite and monazite, because they emit alpha particles
(helium nuclei, He2+) with which electrons immediately combine as soon as the particle is stopped by the rock. In this
way an estimated 3000 metric tons of helium are generated per year throughout the lithosphere. In the Earth's crust,
the concentration of helium is 8 parts per billion. In seawater, the concentration is only 4 parts per trillion. There are
also small amounts in mineral springs, volcanic gas, and meteoric iron. Because helium is trapped in the subsurface
under conditions that also trap natural gas, the greatest natural concentrations of helium on the planet are found in
natural gas, from which most commercial helium is extracted. The concentration varies in a broad range from a few
ppm to more than 7% in a small gas field in San Juan County, New Mexico.
As of 2011 the world's helium reserves were estimated at 40 billion cubic meters, with a quarter of that being in
the South Pars / North Dome Gas-Condensate field owned jointly by Qatar and Iran. In 2015 and 2016 more
probable reserves were announced to be under the Rocky Mountains in North America [107] and in east Africa.[108]
Modern extraction and distribution
For large-scale use, helium is extracted by fractional distillation from natural gas, which can contain as much as 7%
helium. Since helium has a lower boiling point than any other element, low temperature and high pressure are used
to liquefy nearly all the other gases (mostly nitrogen and methane). The resulting crude helium gas is purified by
successive exposures to lowering temperatures, in which almost all of the remaining nitrogen and other gases are
precipitated out of the gaseous mixture. Activated charcoal is used as a final purification step, usually resulting in
99.995% pure Grade-A helium.[20] The principal impurity in Grade-A helium is neon. In a final production step, most of
the helium that is produced is liquefied via a cryogenic process. This is necessary for applications requiring liquid
helium and also allows helium suppliers to reduce the cost of long distance transportation, as the largest liquid helium
containers have more than five times the capacity of the largest gaseous helium tube trailers.
In 2008, approximately 169 million standard cubic meters (SCM) of helium were extracted from natural gas or
withdrawn from helium reserves with approximately 78% from the United States, 10% from Algeria, and most of the
remainder from Russia, Poland and Qatar. By 2013, increases in helium production in Qatar (under the
company RasGas managed by Air Liquide) had increased Qatar's fraction of world helium production to 25%, and
made it the second largest exporter after the United States. An estimated 54 billion cubic feet (1.5 × 10⁹ m³) of helium
was detected in Tanzania in 2016.
In the United States, most helium is extracted from natural gas of the Hugoton and nearby gas fields in Kansas,
Oklahoma, and the Panhandle Field in Texas. Much of this gas was once sent by pipeline to the National Helium
Reserve, but since 2005 this reserve is being depleted and sold off, and is expected to be largely depleted by
2021, under the October 2013 Responsible Helium Administration and Stewardship Act (H.R. 527).
Diffusion of crude natural gas through special semipermeable membranes and other barriers is another method to
recover and purify helium. In 1996, the U.S. had proven helium reserves, in such gas well complexes, of about 147
billion standard cubic feet (4.2 billion SCM).[117] At rates of use at that time (72 million SCM per year in the U.S.; see
pie chart below) this would have been enough helium for about 58 years of U.S. use, and less than this (perhaps 80%
of the time) at world use rates, although factors in saving and processing impact effective reserve numbers.
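The 58-year figure above is simply the quoted reserve divided by the annual usage rate; a minimal Python sketch of that arithmetic, using the 1996 values from the paragraph above:

    # Rough reserve-lifetime estimate from the 1996 figures quoted above.
    us_reserves_scm = 4.2e9        # proven U.S. helium reserves, standard cubic meters
    us_use_per_year_scm = 72e6     # U.S. consumption at 1996 rates, SCM per year

    years_of_us_use = us_reserves_scm / us_use_per_year_scm
    print(f"Reserves would last about {years_of_us_use:.0f} years at U.S. rates")
    # -> about 58 years, matching the estimate in the text; consumption at
    #    world rates would exhaust the same reserve sooner.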
Helium must be extracted from natural gas because it is present in air at only a fraction of that of neon, yet the
demand for it is far higher. It is estimated that if all neon production were retooled to save helium, that 0.1% of the

world's helium demands would be satisfied. Similarly, only 1% of the world's helium demands could be satisfied by re-
tooling all air distillation plants. Helium can be synthesized by bombardment of lithium or boron with high-velocity
protons, or by bombardment of lithium with deuterons, but these processes are a completely uneconomical method
of production.
Helium is commercially available in either liquid or gaseous form. As a liquid, it can be supplied in small insulated
containers called dewars which hold as much as 1,000 liters of helium, or in large ISO containers which have nominal
capacities as large as 42 m3 (around 11,000 U.S. gallons). In gaseous form, small quantities of helium are supplied in
high-pressure cylinders holding as much as 8 m3 (approx. 282 standard cubic feet), while large quantities of high-
pressure gas are supplied in tube trailers which have capacities of as much as 4,860 m3 (approx. 172,000 standard
cubic feet).
Conservation advocates
According to helium conservationists like Nobel laureate physicist Robert Coleman Richardson, writing in 2010, the
free market price of helium has contributed to "wasteful" usage (e.g. for helium balloons). Prices in the 2000s had
been lowered by the decision of the U.S. Congress to sell off the country's large helium stockpile by
2015.[120] According to Richardson, the price needed to be multiplied by 20 to eliminate the excessive wasting of
helium. In their book The Future of Helium as a Natural Resource (Routledge, 2012), Nuttall, Clarke and Glowacki
also proposed to create an International Helium Agency (IHA) to build a sustainable market for this precious
commodity.[121]
Applications

The largest single use of liquid helium is to cool the superconducting magnets in modern MRI scanners.

While balloons are perhaps the best known use of helium, they are a minor part of all helium use. [49] Helium is used
for many purposes that require some of its unique properties, such as its low boiling point, low density,
low solubility, high thermal conductivity, or inertness. Of the 2014 world helium production of about 32 million kg (180 million standard cubic meters) per year, the largest use (about 32% of the total) was in cryogenic applications, most of which involve cooling the superconducting magnets in medical MRI scanners and NMR spectrometers.[123] Other major uses were pressurizing and purging systems, welding, maintenance of
controlled atmospheres, and leak detection. Other uses by category were relatively minor fractions.[122]
Controlled atmospheres
Helium is used as a protective gas in growing silicon and germanium crystals, in titanium and zirconium production,
and in gas chromatography,[69] because it is inert. Because of its inertness, thermally and calorically
perfect nature, high speed of sound, and high value of the heat capacity ratio, it is also useful in supersonic wind
tunnels[124] and impulse facilities.[125]
Gas tungsten arc welding
Helium is used as a shielding gas in arc welding processes on materials that at welding temperatures are
contaminated and weakened by air or nitrogen.[18] A number of inert shielding gases are used in gas tungsten arc
welding, but helium is used instead of cheaper argon especially for welding materials that have higher heat
conductivity, like aluminium or copper.
Minor uses
Industrial leak detection

A dual chamber helium leak detection machine
One industrial application for helium is leak detection. Because helium diffuses through solids three times faster
than air, it is used as a tracer gas to detect leaks in high-vacuum equipment (such as cryogenic tanks) and high-
pressure containers.[126] The tested object is placed in a chamber, which is then evacuated and filled with helium. The
helium that escapes through the leaks is detected by a sensitive device (a helium mass spectrometer), even at leak rates as small as 10⁻⁹ mbar·L/s (10⁻¹⁰ Pa·m³/s). The measurement procedure is normally automatic and is called
helium integral test. A simpler procedure is to fill the tested object with helium and to manually search for leaks with
a hand-held device.[127]
Helium leaks through cracks should not be confused with gas permeation through a bulk material. While helium has
documented permeation constants (thus a calculable permeation rate) through glasses, ceramics, and synthetic
materials, inert gases such as helium will not permeate most bulk metals. [128]
Flight

Because of its low density and incombustibility, helium is the gas of choice to fill airships such as the Goodyear
blimp.
Because it is lighter than air, airships and balloons are inflated with helium for lift. While hydrogen gas is more
buoyant, and escapes permeating through a membrane at a lower rate, helium has the advantage of being non-
flammable, and indeed fire-retardant. Another minor use is in rocketry, where helium is used as an ullage medium to
displace fuel and oxidizers in storage tanks and to condense hydrogen and oxygen to make rocket fuel. It is also used
to purge fuel and oxidizer from ground support equipment prior to launch and to pre-cool liquid hydrogen in space
vehicles. For example, the Saturn V rocket used in the Apollo program needed about 370,000 m3 (13 million cubic
feet) of helium to launch.
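The lift provided by a gas comes from the difference between the density of air and that of the lifting gas. A minimal Python sketch comparing helium and hydrogen, using approximate gas densities at 0 °C and 1 atm (the density values are standard figures, not taken from this text):

    # Approximate buoyant lift per cubic meter of lifting gas at 0 degC, 1 atm.
    rho_air = 1.29       # kg/m^3 (standard value, assumed)
    rho_helium = 0.18    # kg/m^3 (standard value, assumed)
    rho_hydrogen = 0.09  # kg/m^3 (standard value, assumed)

    lift_he = rho_air - rho_helium      # ~1.11 kg of lift per m^3
    lift_h2 = rho_air - rho_hydrogen    # ~1.20 kg of lift per m^3
    print(f"Helium lift:   {lift_he:.2f} kg/m^3")
    print(f"Hydrogen lift: {lift_h2:.2f} kg/m^3")
    # Hydrogen is only roughly 8% more buoyant, which is why non-flammable
    # helium is preferred for airships despite its higher cost.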
Minor commercial and recreational uses
Helium as a breathing gas has no narcotic properties, so helium mixtures such as trimix, heliox and heliair are used
for deep diving to reduce the effects of narcosis, which worsen with increasing depth. As pressure increases with depth, the density of the breathing gas also increases, and the low molecular weight of helium considerably reduces the effort of breathing by lowering the density of the mixture. This reduces the Reynolds number of the flow, leading to a reduction of turbulent flow and an increase in laminar flow, which requires less work of breathing. At depths below 150 metres (490 ft), divers breathing helium-oxygen mixtures begin to experience tremors and a decrease in psychomotor function, symptoms of high-pressure nervous syndrome.[133] This effect may be countered to some extent by adding an amount of narcotic gas such as hydrogen or nitrogen to a helium-oxygen
mixture.
Helium-neon lasers, a type of low-powered gas laser producing a red beam, had various practical applications
which included barcode readers and laser pointers, before they were almost universally replaced by cheaper diode
lasers.
For its inertness and high thermal conductivity, neutron transparency, and because it does not form radioactive
isotopes under reactor conditions, helium is used as a heat-transfer medium in some gas-cooled nuclear
reactors.[126]

Helium, mixed with a heavier gas such as xenon, is useful for thermoacoustic refrigeration due to the resulting
high heat capacity ratio and low Prandtl number. The inertness of helium has environmental advantages over
conventional refrigeration systems which contribute to ozone depletion or global warming.
Helium is also used in some hard disk drives.
Scientific uses of helium
The use of helium reduces the distorting effects of temperature variations in the space between lenses in
some telescopes, due to its extremely low index of refraction.[20] This method is especially used in solar
telescopes where a vacuum tight telescope tube would be too heavy.
Helium is a commonly used carrier gas for gas chromatography.
The age of rocks and minerals that contain uranium and thorium can be estimated by measuring the level of
helium with a process known as helium dating.[18][20]
Helium at low temperatures is used in cryogenics and in certain cryogenic applications. As examples of
applications, liquid helium is used to cool certain metals to the extremely low temperatures required
for superconductivity, such as in superconducting magnets for magnetic resonance imaging.
The Large Hadron Collider at CERN uses 96 metric tons of liquid helium to maintain the temperature at
1.9 kelvin.

Inhalation and safety
Effects
Neutral helium at standard conditions is non-toxic, plays no biological role and is found in trace amounts in human
blood.
The speed of sound in helium is nearly three times the speed of sound in air. Because the fundamental
frequency of a gas-filled cavity is proportional to the speed of sound in the gas, when helium is inhaled there is a
corresponding increase in the resonant frequencies of the vocal tract.[18][141] The fundamental
frequency (sometimes called pitch) does not change, since this is produced by direct vibration of the vocal folds,
which is unchanged.[142] However, the higher resonant frequencies cause a change in timbre, resulting in a reedy,
duck-like vocal quality. The opposite effect, lowering resonant frequencies, can be obtained by inhaling a dense gas
such as sulfur hexafluoride or xenon.
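The voice-pitch effect described above can be estimated from the ideal-gas speed of sound, c = sqrt(γRT/M). A minimal Python sketch comparing helium, air, and sulfur hexafluoride; the heat-capacity ratios and molar masses are standard values assumed here, not taken from this text:

    from math import sqrt

    R = 8.314   # J/(mol*K)
    T = 293.0   # K, room temperature

    # (heat capacity ratio gamma, molar mass in kg/mol) -- standard values, assumed
    gases = {"helium": (1.66, 0.004), "air": (1.40, 0.029), "SF6": (1.10, 0.146)}

    for name, (gamma, M) in gases.items():
        c = sqrt(gamma * R * T / M)   # ideal-gas speed of sound
        print(f"{name:>7}: c = {c:5.0f} m/s")
    # Helium comes out near 1000 m/s versus ~340 m/s for air, so the resonant
    # frequencies of the vocal tract rise by roughly a factor of three, while
    # dense SF6 lowers them, as described in the text.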
Hazards
Inhaling helium can be dangerous if done to excess, since helium is a simple asphyxiant and so displaces oxygen
needed for normal respiration.[18][143] Fatalities have been recorded, including a youth who suffocated in Vancouver in
2003 and two adults who suffocated in South Florida in 2006.[144][145] In 1998, an Australian girl (her age is not known)
from Victoria fell unconscious and temporarily turned blue after inhaling the entire contents of a party
balloon.[146][147][148] Inhaling helium directly from pressurized cylinders or even balloon filling valves is extremely
dangerous, as high flow rate and pressure can result in barotrauma, fatally rupturing lung tissue.[143][149]
Death caused by helium is rare. The first media-recorded case was that of a 15-year-old girl from Texas who died in
1998 from helium inhalation at a friend's party; the exact type of helium death is unidentified. [146][147][148]
In the United States only two fatalities were reported between 2000 and 2004, including a man who died in North
Carolina of barotrauma in 2002.[144][149] A youth asphyxiated in Vancouver during 2003, and a 27-year-old man in
Australia had an embolism after breathing from a cylinder in 2000.[144] Since then two adults asphyxiated in South
Florida in 2006,[144][145][150] and there were cases in 2009 and 2010, one a Californian youth who was found with a bag
over his head, attached to a helium tank,[151] and another teenager in Northern Ireland died of
asphyxiation.[152] At Eagle Point, Oregon a teenage girl died in 2012 from barotrauma at a party.[153][154][155][156] A girl
from Michigan died from hypoxia later in the year.[157]
On February 4, 2015 it was revealed that during the recording of their main TV show on January 28, a 12-year-old
member (name withheld) of the Japanese all-girl singing group 3B Junior suffered an air embolism, losing consciousness and falling into a coma as a result of air bubbles blocking the flow of blood to the brain, after inhaling a large quantity of helium as part of a game. The incident was not made public until a week later.[158][159] The staff of TV Asahi held an emergency press conference to communicate that the member had been taken to hospital and was showing signs of recovery, such as moving her eyes and limbs, but that her consciousness had not yet sufficiently returned. Police launched an investigation into the neglect of safety measures.[160][161]
On July 13, 2017 CBS News reported that a political operative who reportedly attempted to recover e-mails missing
from the Clinton server, Peter W. Smith, "apparently" committed suicide in May at a hotel room in Rochester,

Minnesota and that his death was recorded as "asphyxiation due to displacement of oxygen in confined space with
helium".[162] More details followed in the Chicago Tribune.[163]
The safety issues for cryogenic helium are similar to those of liquid nitrogen; its extremely low temperatures can
result in cold burns, and the liquid-to-gas expansion ratio can cause explosions if no pressure-relief devices are
installed. Containers of helium gas at 5 to 10 K should be handled as if they contain liquid helium due to the rapid and
significant thermal expansion that occurs when helium gas at less than 10 K is warmed to room temperature.[69]
At high pressures (more than about 20 atm or two MPa), a mixture of helium and oxygen (heliox) can lead to high-
pressure nervous syndrome, a sort of reverse-anesthetic effect; adding a small amount of nitrogen to the mixture
can alleviate the problem.[164][133]

RAW MATERIAL 2, HYDROGEN GAS,


DESCRIPTION OF HYDROGEN
Hydrogen is a chemical element with symbol H and atomic number 1. With a standard atomic weight of
circa 1.008, hydrogen is the lightest element on the periodic table. Its monatomic form (H) is the most
abundant chemical substance in the Universe, constituting roughly 75% of all baryonic mass. Non-
remnant stars are mainly composed of hydrogen in the plasma state. The most common isotope of hydrogen,
termed protium (name rarely used, symbol 1H), has one proton and no neutrons.
The universal emergence of atomic hydrogen first occurred during the recombination epoch. At standard
temperature and pressure, hydrogen is a colorless, odorless, tasteless, non-toxic, nonmetallic,
highly combustible diatomic gas with the molecular formula H2. Since hydrogen readily forms covalent compounds
with most nonmetallic elements, most of the hydrogen on Earth exists in molecular forms such as water or organic
compounds. Hydrogen plays a particularly important role in acid-base reactions because most acid-base reactions
involve the exchange of protons between soluble molecules. In ionic compounds, hydrogen can take the form of a
negative charge (i.e., anion) when it is known as a hydride, or as a positively charged (i.e., cation) species denoted
by the symbol H+. The hydrogen cation is written as though composed of a bare proton, but in reality, hydrogen
cations in ionic compounds are always more complex. As the only neutral atom for which the Schrödinger
equation can be solved analytically, study of the energetics and bonding of the hydrogen atom has played a key role
in the development of quantum mechanics.
Hydrogen gas was first artificially produced in the early 16th century by the reaction of acids on metals. In 1766–81, Henry Cavendish was the first to recognize that hydrogen gas was a discrete substance,[11] and that it produces
water when burned, the property for which it was later named: in Greek, hydrogen means "water-former".
Industrial production is mainly from steam reforming natural gas, and less often from more energy-intensive
methods such as the electrolysis of water.[12]Most hydrogen is used near the site of its production, the two largest
uses being fossil fuel processing (e.g., hydrocracking) and ammonia production, mostly for the fertilizer market.
Hydrogen is a concern in metallurgy as it can embrittle many metals, complicating the design of pipelines and storage
tanks.
Hydrogen gas (dihydrogen or molecular hydrogen) is highly flammable and will burn in air at a very wide range of concentrations, between 4% and 75% by volume.[16] The enthalpy of combustion is −286 kJ/mol:
2 H2(g) + O2(g) → 2 H2O(l) + 572 kJ (286 kJ/mol)
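As a rough illustration of what this enthalpy of combustion means per unit mass, a minimal Python sketch (the molar mass of H2 is a standard value; the gasoline comparison in the comment is approximate):

    # Energy released by burning hydrogen, per mole and per kilogram,
    # using the 286 kJ/mol enthalpy of combustion quoted above.
    delta_h_kj_per_mol = 286.0   # kJ released per mol of H2 burned (from the text)
    molar_mass_h2 = 2.016e-3     # kg/mol (standard value, assumed)

    energy_per_kg = delta_h_kj_per_mol / molar_mass_h2   # kJ/kg
    print(f"~{energy_per_kg/1000:.0f} MJ per kg of H2")
    # -> roughly 142 MJ/kg, about three times the energy density by mass of gasoline.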
Hydrogen gas forms explosive mixtures with air in concentrations of 4–74% and with chlorine at 5–95%. The explosive reactions may be triggered by spark, heat, or sunlight. The hydrogen autoignition temperature, the temperature of spontaneous ignition in air, is 500 °C (932 °F).[18] Pure hydrogen-oxygen flames emit ultraviolet light
and with high oxygen mix are nearly invisible to the naked eye, as illustrated by the faint plume of the Space Shuttle
Main Engine, compared to the highly visible plume of a Space Shuttle Solid Rocket Booster, which uses
an ammonium perchlorate composite. The detection of a burning hydrogen leak may require a flame detector;
such leaks can be very dangerous. Hydrogen flames in other conditions are blue, resembling blue natural gas flames.
The destruction of the Hindenburg airship was a notorious example of hydrogen combustion and the cause is still
debated. The visible orange flames in that incident were the result of a rich mixture of hydrogen to oxygen combined
with carbon compounds from the airship skin.
H2 reacts with every oxidizing element. Hydrogen can react spontaneously and violently at room temperature
with chlorineand fluorine to form the corresponding hydrogen halides, hydrogen chloride and hydrogen fluoride,
which are also potentially dangerous acids.
Electron energy levels

Depiction of a hydrogen atom with size of central proton shown, and the atomic diameter shown as about twice
the Bohr model radius (image not to scale)
The ground state energy level of the electron in a hydrogen atom is −13.6 eV,[21] which is equivalent to an
ultraviolet photon of roughly 91 nm wavelength.
The energy levels of hydrogen can be calculated fairly accurately using the Bohr model of the atom, which
conceptualizes the electron as "orbiting" the proton in analogy to the Earth's orbit of the Sun. However, the atomic
electron and proton are held together by electromagnetic force, while planets and celestial objects are held
by gravity. Because of the discretization of angular momentum postulated in early quantum mechanics by Bohr,
the electron in the Bohr model can only occupy certain allowed distances from the proton, and therefore only certain
allowed energies.
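A minimal Python sketch of the Bohr-model energy ladder described above; it reproduces the −13.6 eV ground state and the roughly 91 nm ionization wavelength quoted earlier (the physical constants are standard values, assumed here):

    # Bohr-model energy levels of hydrogen and the photon needed to ionize
    # the ground state, as described in the text.
    h = 6.626e-34      # Planck constant, J*s (standard value, assumed)
    c = 2.998e8        # speed of light, m/s (standard value, assumed)
    eV = 1.602e-19     # J per electronvolt (standard value, assumed)
    E1 = -13.6         # ground-state energy, eV (from the text)

    def bohr_energy(n):
        """Energy of level n in eV: E_n = -13.6 eV / n^2."""
        return E1 / n**2

    for n in range(1, 5):
        print(f"n={n}: {bohr_energy(n):7.3f} eV")

    # Wavelength of a photon carrying the full 13.6 eV ionization energy:
    lam = h * c / (13.6 * eV)
    print(f"ionization wavelength ~ {lam*1e9:.0f} nm")   # ~91 nm, ultraviolet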
A more accurate description of the hydrogen atom comes from a purely quantum mechanical treatment that uses
the Schrödinger equation, Dirac equation or even the Feynman path integral formulation to calculate
the probability density of the electron around the proton. The most complicated treatments allow for the small
effects of special relativity and vacuum polarization. In the quantum mechanical treatment, the electron in a
ground state hydrogen atom has no angular momentum at all, illustrating how the "planetary orbit" differs from
electron motion.
Elemental molecular forms
Spin isomers of hydrogen


Molecular hydrogen occurs in two isomeric forms, one with its two proton spins aligned parallel (orthohydrogen), the
other with its two proton spins aligned antiparallel (parahydrogen). [1] These two forms are often referred to as spin
isomers, since they differ not in chemical structure (like most isomers) but rather in nuclear spin state.
Parahydrogen is in a lower energy state than is orthohydrogen. At room temperature and thermal equilibrium,
thermal excitation causes hydrogen to consist of approximately 75% orthohydrogen and 25% parahydrogen. When
hydrogen is liquefied at low temperature, there is a slow spontaneous transition to a predominantly para ratio, with
the released energy having implications for storage. Essentially pure parahydrogen form can be obtained at very low
temperatures, but it is not possible to obtain a sample containing more than 75% orthohydrogen by cooling.

Nuclear spin states of hydrogen.
Each hydrogen molecule (H2) consists of two hydrogen atoms linked by a covalent bond. If we neglect the small
proportion of deuterium and tritium which may be present, each hydrogen atom consists of one proton and
one electron. Each proton has an associated magnetic moment, which is associated with the proton's spin of 1/2. In
the H2 molecule, the spins of the two hydrogen nuclei (protons) couple to form a triplet state known
as orthohydrogen, and a singlet state known as parahydrogen.
The triplet orthohydrogen state has total nuclear spin I = 1, so that its component along a defined axis can take the three values MI = 1, 0, or −1. The corresponding nuclear spin wavefunctions are $|{\uparrow\uparrow}\rangle$, $\tfrac{1}{\sqrt{2}}\left(|{\uparrow\downarrow}\rangle + |{\downarrow\uparrow}\rangle\right)$, and $|{\downarrow\downarrow}\rangle$ (in standard bra-ket notation). Each orthohydrogen energy level then has a (nuclear) spin degeneracy of three, meaning that it corresponds to three states of the same energy (in the absence of a magnetic field). The singlet parahydrogen state has nuclear spin quantum numbers I = 0 and MI = 0, with wavefunction $\tfrac{1}{\sqrt{2}}\left(|{\uparrow\downarrow}\rangle - |{\downarrow\uparrow}\rangle\right)$. Since there is only one possibility, each parahydrogen level has a spin degeneracy of one
and is said to be non degenerate.
The ratio between the ortho and para forms is about 3:1 at standard temperature and pressure, a reflection of the ratio of spin degeneracies. However, if chemical equilibrium between the two forms is established, the para form dominates at low temperatures (approx. 99.8% at 20 K).
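The equilibrium figures quoted above (roughly 3:1 ortho:para at room temperature, about 99.8% para at 20 K) can be reproduced from the rotational partition functions discussed in the next subsection. A minimal Python sketch, assuming the standard rigid-rotor energy levels and a characteristic rotational temperature of about 87.6 K for H2 (a standard value, not taken from this text):

    from math import exp

    THETA_ROT = 87.6   # characteristic rotational temperature of H2, K (standard value, assumed)

    def z(parity, T, jmax=40):
        """Rotational partition function restricted to even (para) or odd (ortho) J."""
        js = range(0, jmax, 2) if parity == "even" else range(1, jmax, 2)
        return sum((2*J + 1) * exp(-J*(J + 1) * THETA_ROT / T) for J in js)

    def ortho_fraction(T):
        """Equilibrium ortho fraction: ortho carries nuclear spin degeneracy 3, para carries 1."""
        z_o, z_p = 3 * z("odd", T), z("even", T)
        return z_o / (z_o + z_p)

    for T in (300, 77, 20):
        print(f"T = {T:3d} K: {ortho_fraction(T)*100:5.1f}% ortho")
    # -> ~75% ortho at 300 K (the 3:1 "normal" mixture) and only ~0.2% ortho
    #    at 20 K, matching the figures in the text.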

Thermal properties
Since protons have spin 1/2, they are fermions, and the permutational antisymmetry of the total H2 wavefunction imposes restrictions on the possible rotational states the two forms of H2 can adopt. Orthohydrogen, with symmetric
nuclear spin functions, can only have rotational wavefunctions that are antisymmetric with respect to permutation of
the two protons, corresponding to odd values of the rotational quantum number J; conversely, parahydrogen with
an antisymmetric nuclear spin function, can only have rotational wavefunctions that are symmetric with respect to
permutation of the two protons, corresponding to even J. Applying the rigid rotor approximation, the energies and degeneracies of the rotational states are given by
$$E_J = \frac{\hbar^2}{2 I_m}\,J(J+1) = k_B\,\theta_{\mathrm{rot}}\,J(J+1), \qquad g_J = 2J+1,$$
where $I_m$ is the molecular moment of inertia and $\theta_{\mathrm{rot}}$ the characteristic rotational temperature. The rotational partition function is conventionally written as
$$Z_{\mathrm{rot}} = \sum_{J=0}^{\infty} (2J+1)\, e^{-E_J/k_B T}.$$
However, as long as these two spin isomers are not in equilibrium, it is more useful to write separate partition functions for each:
$$Z_{\mathrm{para}} = \sum_{\mathrm{even}\ J} (2J+1)\, e^{-E_J/k_B T}, \qquad Z_{\mathrm{ortho}} = 3 \sum_{\mathrm{odd}\ J} (2J+1)\, e^{-E_J/k_B T}.$$
The factor of 3 in the partition function for orthohydrogen accounts for the spin degeneracy associated with the I = 1 spin state; when equilibrium between the spin isomers is possible, then a general partition function incorporating this degeneracy difference can be written as
$$Z_{\mathrm{equil}} = \sum_{\mathrm{even}\ J} (2J+1)\, e^{-E_J/k_B T} + 3 \sum_{\mathrm{odd}\ J} (2J+1)\, e^{-E_J/k_B T}.$$
The molar rotational energies and heat capacities are derived for any of these cases from
$$E_{\mathrm{rot}} = R T^2\,\frac{\partial \ln Z}{\partial T}, \qquad C_{\mathrm{rot}} = \frac{\partial E_{\mathrm{rot}}}{\partial T}.$$
(Figures, not reproduced here, plot the molar rotational energies and molar heat capacities for ortho- and parahydrogen, and for the "normal" ortho/para (3:1) and equilibrium mixtures.)


Because of the antisymmetry-imposed restriction on possible rotational states, orthohydrogen has residual rotational
energy at low temperature wherein nearly all the molecules are in the J = 1 state (molecules in the symmetric spin-
triplet state cannot fall into the lowest, symmetric rotational state) and possesses nuclear-spin entropy due to the
triplet state's threefold degeneracy. The residual energy is significant because the rotational energy levels are
relatively widely spaced in H2; the gap between the first two levels when expressed in temperature units is twice the
characteristic rotational temperature for H2:
$$\frac{\Delta E_{0\to 1}}{k_B} = 2\,\theta_{\mathrm{rot}} \approx 175\ \mathrm{K}.$$
This is the T = 0 intercept seen in the molar energy of orthohydrogen. Since "normal" room-temperature hydrogen is a 3:1 ortho:para mixture, its molar residual rotational energy at low temperature is (3/4) × 2Rθ_rot = 1091 J/mol, which is somewhat larger than the enthalpy of vaporization of normal hydrogen, 904 J/mol at the boiling point, Tb = 20.369 K.[5] Notably, the boiling points of parahydrogen and normal (3:1) hydrogen are nearly equal; for parahydrogen ΔHvap = 898 J/mol at Tb = 20.277 K, and it follows that nearly all the residual rotational energy of orthohydrogen is retained in the liquid state.
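The comparison above, between the residual rotational energy of the normal 3:1 mixture and the enthalpy of vaporization, can be checked in a few lines. A minimal Python sketch, assuming the standard value of roughly 87.6 K for the characteristic rotational temperature of H2:

    R = 8.314          # J/(mol*K)
    theta_rot = 87.6   # characteristic rotational temperature of H2, K (standard value, assumed)
    dH_vap = 904.0     # enthalpy of vaporization of normal hydrogen, J/mol (from the text)

    # Residual rotational energy of a 3:1 ortho:para mixture at low temperature,
    # i.e. (3/4) of 2*R*theta_rot as stated above.
    E_residual = 0.75 * 2 * R * theta_rot
    print(f"residual rotational energy ~ {E_residual:.0f} J/mol")   # ~1092 J/mol, close to the 1091 J/mol quoted
    print(f"heat of vaporization       ~ {dH_vap:.0f} J/mol")
    # The conversion heat exceeds the heat of vaporization, which is why slow
    # ortho-to-para conversion can boil off stored liquid hydrogen.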
However, orthohydrogen is thermodynamically unstable at low temperatures and spontaneously converts into
parahydrogen. The process lacks any natural de-excitation radiation mode, so it is slow in the absence of a catalyst
which can facilitate interconversion of the singlet and triplet spin states. At room temperature, hydrogen contains
75% orthohydrogen, a proportion which the liquefaction process preserves if carried out in the absence of
a catalyst like ferric oxide, activated carbon, platinized asbestos, rare earth metals, uranium compounds, chromic
oxide, or some nickel compounds to accelerate the conversion of the liquid hydrogen into parahydrogen.
Alternatively, additional refrigeration equipment can be used to slowly absorb the heat that the orthohydrogen
fraction will (more slowly) release as it spontaneously converts into parahydrogen. If orthohydrogen is not removed
from rapidly liquefied hydrogen, without a catalyst, the heat released during its decay can boil off as much as 50% of
the original liquid.
The two forms of molecular hydrogen were first proposed by Werner Heisenberg and Friedrich Hund in 1927.
Taking into account this theoretical framework, pure parahydrogen was first synthesized by Paul Harteck and Karl
Friedrich Bonhoeffer in 1929.[6] When Heisenberg was awarded the 1932 Nobel prize in physics for the creation of
quantum mechanics, this discovery of the "allotropic forms of hydrogen" was singled out as its most noteworthy
application.[7] Modern isolation of pure parahydrogen has since been achieved using rapid in-vacuum deposition of
millimeters thick solid parahydrogen (pH2) samples which are notable for their excellent optical qualities.[8]

Use of hydrogen in NMR


Nuclear magnetic resonance (NMR) is a physical phenomenon in which nuclei in a magnetic field absorb and re-
emit electromagnetic radiation. This energy is at a specific resonance frequency which depends on the strength of
the magnetic field and the magnetic properties of the isotope of the atoms; in practical applications, the frequency is
similar to VHF and UHF television broadcasts (60–1000 MHz). NMR allows the observation of specific quantum
mechanical magnetic properties of the atomic nucleus. Many scientific techniques exploit NMR phenomena to
study molecular physics, crystals, and non-crystalline materials through nuclear magnetic resonance
spectroscopy. NMR is also routinely used in advanced medical imaging techniques, such as in magnetic resonance
imaging (MRI).
When an excess of parahydrogen is used during hydrogenation reactions (instead of the normal mixture of
orthohydrogen to parahydrogen of 3:1), the resultant product exhibits hyperpolarized signals in
proton NMR spectra, an effect termed PHIP (Parahydrogen Induced Polarisation) or, equivalently, PASADENA
(Parahydrogen And Synthesis Allow Dramatically Enhanced Nuclear Alignment) (named for first recognition of the
effect by Bowers and Weitekamp of Caltech), a phenomenon that has been used to study the mechanism of
hydrogenation reactions.

Other substances with spin isomers


Other molecules and functional groups containing two hydrogen atoms, such as water and methylene, also have
ortho- and para- forms (e.g. orthowater and parawater), but this is of little significance for their thermal
properties.[12] Their ortho-para ratios differ from that of dihydrogen.
Molecular oxygen (O2) also exists in three lower-energy triplet states and one singlet state, as ground-state paramagnetic triplet
oxygen and energized highly reactive diamagnetic singlet oxygen. These states arise from the spins of
their unpaired electrons, not their protons or nuclei.

First tracks observed in liquid hydrogen bubble chamber at the Bevatron

There exist two different spin isomers of hydrogen diatomic molecules that differ by the relative spin of their
nuclei. In the orthohydrogen form, the spins of the two protons are parallel and form a triplet state with a molecular
spin quantum number of 1 (1/2 + 1/2); in the parahydrogen form the spins are antiparallel and form a singlet with a molecular spin quantum number of 0 (1/2 − 1/2). At standard temperature and pressure, hydrogen gas contains about
25% of the para form and 75% of the ortho form, also known as the "normal form". The equilibrium ratio of
orthohydrogen to parahydrogen depends on temperature, but because the ortho form is an excited state and has a
higher energy than the para form, it is unstable and cannot be purified. At very low temperatures, the equilibrium
state is composed almost exclusively of the para form. The liquid and gas phase thermal properties of pure
parahydrogen differ significantly from those of the normal form because of differences in rotational heat capacities,
as discussed more fully in spin isomers of hydrogen. The ortho/para distinction also occurs in other hydrogen-
containing molecules or functional groups, such as water and methylene, but is of little significance for their thermal
properties.

The uncatalyzed interconversion between para and ortho H2 increases with increasing temperature; thus rapidly
condensed H2 contains large quantities of the high-energy ortho form that converts to the para form very
slowly.[29] The ortho/para ratio in condensed H2 is an important consideration in the preparation and storage of liquid
hydrogen: the conversion from ortho to para is exothermic and produces enough heat to evaporate some of the
hydrogen liquid, leading to loss of liquefied material. Catalysts for the ortho-para interconversion, such as ferric
oxide, activated carbon, platinized asbestos, rare earth metals, uranium compounds, chromic oxide, or some
nickel[30] compounds, are used during hydrogen cooling.[31]

Phases
Gaseous hydrogen
Liquid hydrogen
Slush hydrogen
Solid hydrogen
Metallic hydrogen
Compounds

Covalent and organic compounds
While H2 is not very reactive under standard conditions, it does form compounds with most elements. Hydrogen can
form compounds with elements that are more electronegative, such as halogens (e.g., F, Cl, Br, I), or oxygen; in
these compounds hydrogen takes on a partial positive charge.[32] When bonded to fluorine, oxygen, or nitrogen,
hydrogen can participate in a form of medium-strength noncovalent bonding with the hydrogen of other similar
molecules, a phenomenon called hydrogen bonding that is critical to the stability of many biological

molecules.[33][34] Hydrogen also forms compounds with less electronegative elements, such as metals and metalloids,
where it takes on a partial negative charge. These compounds are often known as hydrides.[35]
Hydrogen forms a vast array of compounds with carbon called the hydrocarbons, and an even vaster array
with heteroatomsthat, because of their general association with living things, are called organic compounds.[36] The
study of their properties is known as organic chemistry[37] and their study in the context of living organisms is
known as biochemistry.[38] By some definitions, "organic" compounds are only required to contain carbon. However,
most of them also contain hydrogen, and because it is the carbon-hydrogen bond which gives this class of compounds
most of its particular chemical characteristics, carbon-hydrogen bonds are required in some definitions of the word
"organic" in chemistry.[36] Millions of hydrocarbons are known, and they are usually formed by complicated synthetic
pathways that seldom involve elementary hydrogen.
Hydrides
Compounds of hydrogen are often called hydrides, a term that is used fairly loosely. The term "hydride" suggests that
the H atom has acquired a negative or anionic character, denoted H−, and is used when hydrogen forms a compound
with a more electropositive element. The existence of the hydride anion, suggested by Gilbert N. Lewis in 1916 for
group 1 and 2 salt-like hydrides, was demonstrated by Moers in 1920 by the electrolysis of molten lithium
hydride (LiH), producing a stoichiometric quantity of hydrogen at the anode.[39] For hydrides other than group 1 and 2
metals, the term is quite misleading, considering the low electronegativity of hydrogen. An exception in group 2
hydrides is BeH2, which is polymeric. In lithium aluminium hydride, the AlH4− anion carries hydridic centers firmly attached to the Al(III).
Although hydrides can be formed with almost all main-group elements, the number and combination of possible
compounds varies widely; for example, more than 100 binary borane hydrides are known, but only one binary
aluminium hydride. Binary indium hydride has not yet been identified, although larger complexes exist.
In inorganic chemistry, hydrides can also serve as bridging ligands that link two metal centers in a coordination
complex. This function is particularly common in group 13 elements, especially in boranes (boron hydrides)
and aluminium complexes, as well as in clustered carboranes.

Protons and acids


Oxidation of hydrogen removes its electron and gives H+, which contains no electrons and a nucleus which is usually
composed of one proton. That is why H+ is often called a proton. This species is central to discussion of acids. Under
the Brønsted-Lowry theory, acids are proton donors, while bases are proton acceptors.
A bare proton, H+, cannot exist in solution or in ionic crystals because of its unstoppable attraction to other atoms or
molecules with electrons. Except at the high temperatures associated with plasmas, such protons cannot be removed
from the electron clouds of atoms and molecules, and will remain attached to them. However, the term 'proton' is
sometimes used loosely and metaphorically to refer to positively charged or cationic hydrogen attached to other
species in this fashion, and as such is denoted "H+" without any implication that any single protons exist freely as a
species.
To avoid the implication of the naked "solvated proton" in solution, acidic aqueous solutions are sometimes
considered to contain a less unlikely fictitious species, termed the "hydronium ion" (H3O+). However, even in this
case, such solvated hydrogen cations are more realistically conceived as being organized into clusters that form
species closer to H9O4+. Other oxonium ions are found when water is in acidic solution with other solvents.
Although exotic on Earth, one of the most common ions in the universe is the H3+ ion, known as protonated molecular hydrogen or the trihydrogen cation.
Isotopes of hydrogen

Protium, the most common isotope of hydrogen, has one proton and one electron. Unique among all stable isotopes,
it has no neutrons
Hydrogen has three naturally occurring isotopes, denoted 1H, 2H and 3H. Other, highly unstable nuclei (4H to 7H) have been synthesized in the laboratory but not observed in nature.

1H is the most common hydrogen isotope with an abundance of more than 99.98%. Because the nucleus of this
isotope consists of only a single proton, it is given the descriptive but rarely used formal name protium.[48]
2H, the other stable hydrogen isotope, is known as deuterium and contains one proton and one neutron in the
nucleus. All deuterium in the universe is thought to have been produced at the time of the Big Bang, and has
endured since that time. Deuterium is not radioactive, and does not represent a significant toxicity hazard. Water
enriched in molecules that include deuterium instead of normal hydrogen is called heavy water. Deuterium and its
compounds are used as a non-radioactive label in chemical experiments and in solvents for 1H-NMR spectroscopy.
Heavy water is used as a neutron moderator and coolant for nuclear reactors. Deuterium is also a potential fuel for
commercial nuclear fusion.
3H is known as tritium and contains one proton and two neutrons in its nucleus. It is radioactive, decaying
into helium-3 through beta decay with a half-life of 12.32 years. It is so radioactive that it can be used in luminous
paint, making it useful in such things as watches. The glass prevents the small amount of radiation from getting
out.[51] Small amounts of tritium are produced naturally by the interaction of cosmic rays with atmospheric gases;
tritium has also been released during nuclear weapons tests. It is used in nuclear fusion reactions, as a tracer
in isotope geochemistry, and in specialized self-powered lighting devices. Tritium has also been used in chemical
and biological labeling experiments as a radiolabel.
Hydrogen is the only element that has different names for its isotopes in common use today. During the early study of
radioactivity, various heavy radioactive isotopes were given their own names, but such names are no longer used,
except for deuterium and tritium. The symbols D and T (instead of 2H and 3H) are sometimes used for deuterium and
tritium, but the corresponding symbol for protium, P, is already in use for phosphorus and thus is not available for
protium.[57] In its nomenclatural guidelines, the International Union of Pure and Applied Chemistry allows any of
D, T, 2H, and 3H to be used, although 2H and 3H are preferred.
The exotic atom muonium (symbol Mu), composed of an antimuon and an electron, is also sometimes considered
as a light radioisotope of hydrogen, due to the mass difference between the antimuon and the electron.[59] Muonium was discovered in 1960. During the muon's 2.2 μs lifetime, muonium can enter into compounds such as muonium chloride (MuCl) or sodium muonide (NaMu), analogous to hydrogen chloride and sodium hydride respectively.

Discovery and use of hydrogen

In 1671, Robert Boyle discovered and described the reaction between iron filings and dilute acids, which results in
the production of hydrogen gas. In 1766, Henry Cavendish was the first to recognize hydrogen gas as a discrete
substance, by naming the gas from a metal-acid reaction "inflammable air". He speculated that "inflammable air"
was in fact identical to the hypothetical substance called "phlogiston" and further found in 1781 that the gas produces water when burned. He is usually given credit for the discovery of hydrogen as an element. In
1783, Antoine Lavoisier gave the element the name hydrogen (from the Greek hydro- meaning "water" and -genes meaning "creator") when he and Laplace reproduced Cavendish's finding that water is produced when
hydrogen is burned.

Antoine-Laurent de Lavoisier, who gave hydrogen its name


Lavoisier produced hydrogen for his experiments on mass conservation by reacting a flux of steam with
metallic iron through an incandescent iron tube heated in a fire. Anaerobic oxidation of iron by the protons of water
at high temperature can be schematically represented by the following set of reactions:
Fe + H2O → FeO + H2
2 Fe + 3 H2O → Fe2O3 + 3 H2
3 Fe + 4 H2O → Fe3O4 + 4 H2
Many metals such as zirconium undergo a similar reaction with water leading to the production of hydrogen.
Hydrogen was liquefied for the first time by James Dewar in 1898 by using regenerative cooling and his invention,
the vacuum flask.[7] He produced solid hydrogen the next year. Deuterium was discovered in December 1931
by Harold Urey, and tritium was prepared in 1934 by Ernest Rutherford, Mark Oliphant, and Paul Harteck. Heavy
water, which consists of deuterium in the place of regular hydrogen, was discovered by Urey's group in
1932.[7] François Isaac de Rivaz built the first de Rivaz engine, an internal combustion engine powered by a
mixture of hydrogen and oxygen in 1806. Edward Daniel Clarke invented the hydrogen gas blowpipe in 1819.
The Döbereiner lamp and limelight were invented in 1823.
The first hydrogen-filled balloon was invented by Jacques Charles in 1783. Hydrogen provided the lift for the first
reliable form of air-travel following the 1852 invention of the first hydrogen-lifted airship by Henri Giffard.[7] German
count Ferdinand von Zeppelin promoted the idea of rigid airships lifted by hydrogen that later were
called Zeppelins; the first of which had its maiden flight in 1900.[7] Regularly scheduled flights started in 1910 and by
the outbreak of World War I in August 1914, they had carried 35,000 passengers without a serious incident.
Hydrogen-lifted airships were used as observation platforms and bombers during the war.
The first non-stop transatlantic crossing was made by the British airship R34 in 1919. Regular passenger service
resumed in the 1920s and the discovery of helium reserves in the United States promised increased safety, but the
U.S. government refused to sell the gas for this purpose. Therefore, H2 was used in the Hindenburg airship, which
was destroyed in a midair fire over New Jersey on 6 May 1937.[7] The incident was broadcast live on radio and
filmed. Ignition of leaking hydrogen is widely assumed to be the cause, but later investigations pointed to the ignition
of the aluminized fabric coating by static electricity. But the damage to hydrogen's reputation as a lifting gas was
already done.
In the same year, the first hydrogen-cooled turbogenerator went into service, with gaseous hydrogen as a coolant in the rotor and the stator, at Dayton, Ohio, by the Dayton Power & Light Co.; because of the thermal
conductivity of hydrogen gas, this is the most common type in its field today.
The nickel hydrogen battery was used for the first time in 1977 aboard the U.S. Navy's Navigation technology
satellite-2 (NTS-2). For example, the ISS, Mars Odyssey and the Mars Global Surveyor are equipped with nickel-
hydrogen batteries. In the dark part of its orbit, the Hubble Space Telescope is also powered by nickel-hydrogen
batteries, which were finally replaced in May 2009, more than 19 years after launch and 13 years beyond their design
life.
Role in quantum theory

Hydrogen emission spectrum lines in the visible range. These are the four visible lines of the Balmer series
Because of its simple atomic structure, consisting only of a proton and an electron, the hydrogen atom, together
with the spectrum of light produced from it or absorbed by it, has been central to the development of the theory
of atomic structure.[73] Furthermore, study of the corresponding simplicity of the hydrogen molecule and the corresponding cation H2+ brought understanding of the nature of the chemical bond, which followed shortly after the quantum mechanical
treatment of the hydrogen atom had been developed in the mid-1920s.
One of the first quantum effects to be explicitly noticed (but not understood at the time) was a Maxwell observation
involving hydrogen, half a century before full quantum mechanical theory arrived. Maxwell observed that
the specific heat capacity of H2 unaccountably departs from that of a diatomic gas below room temperature and
begins to increasingly resemble that of a monatomic gas at cryogenic temperatures. According to quantum theory,
this behavior arises from the spacing of the (quantized) rotational energy levels, which are particularly wide-spaced in
H2 because of its low mass. These widely spaced levels inhibit equal partition of heat energy into rotational motion in
hydrogen at low temperatures. Diatomic gases composed of heavier atoms do not have such widely spaced levels and
do not exhibit the same effect.
Antihydrogen (H̄) is the antimatter counterpart to hydrogen. It consists of an antiproton and a positron. Antihydrogen is the only type of antimatter atom to have been produced as of 2015.

HYDROGEN SPECTRAL LINES,

The emission spectrum of atomic hydrogen is divided into a number of spectral series, with wavelengths given by
the Rydberg formula. These observed spectral lines are due to the electron making transitions between two energy levels in the
atom. The classification of the series by the Rydberg formula was important in the development of quantum mechanics. The
spectral series are important in astronomical spectroscopy for detecting the presence of hydrogen and calculating red shifts.

The spectral series of hydrogen, on a logarithmic scale.



Physics of hydrogen

Electron transitions and their resulting wavelengths for hydrogen. Energy levels are not to scale.


To a good approximation, a hydrogen atom can be thought of as consisting of an electron orbiting its nucleus.
The electromagnetic force between the electron and the nuclear proton leads to a set of quantum states for the electron, each
with its own energy. These states were visualized by the Bohr modelof the hydrogen atom as being distinct orbits around the
nucleus. Each energy state, or orbit, is designated by an integer n, as shown in the figure.
Spectral emission occurs when an electron transitions, or jumps, from a higher energy state to a lower energy state. To
distinguish the two states, the lower energy state is commonly designated as n′, and the higher energy state is designated as n. The energy of an emitted photon corresponds to the energy difference between the two states. Because the energy of
each state is fixed, the energy difference between them is fixed, and the transition will always produce a photon with the same
energy.

The spectral lines are grouped into series according to n′. Lines are named sequentially starting from the longest wavelength/lowest frequency of the series, using Greek letters within each series. For example, the 2 → 1 line is called "Lyman-alpha" (Ly-α), while the 7 → 3 line is called "Paschen-delta" (Pa-δ).

Energy level diagram of electrons in the hydrogen atom

There are emission lines from hydrogen that fall outside of these series, such as the 21 cm line. These emission lines
correspond to much rarer atomic events such as hyperfine transitions. The fine structure also results in single spectral lines
appearing as two or more closely grouped thinner lines, due to relativistic corrections.

Rydberg formula

The energy differences between levels in the Bohr model, and hence the wavelengths of emitted or absorbed photons, are given by the Rydberg formula:
$$\frac{1}{\lambda} = Z^2 R \left( \frac{1}{n'^2} - \frac{1}{n^2} \right)$$
where Z is the atomic number, i.e. the number of protons in the atomic nucleus of this element; n is the upper energy level; n′ is the lower energy level; and R is the Rydberg constant (1.097373 × 10⁷ m⁻¹).[4] Meaningful values are returned only when n is greater than n′. Note that this equation is valid for all hydrogen-like species, i.e. atoms having only a single electron, and the particular case of hydrogen spectral lines is given by Z = 1.
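A minimal Python sketch of the Rydberg formula as written above; it reproduces the Lyman and Balmer wavelengths tabulated in the series below, and the Z parameter shows how the same formula extends to hydrogen-like ions such as He+ (discussed later under "Extension to other systems"):

    R_INF = 1.0973731e7   # Rydberg constant, m^-1 (from the text)

    def wavelength_nm(n_upper, n_lower, Z=1):
        """Vacuum wavelength of the n_upper -> n_lower transition for a
        hydrogen-like species of atomic number Z (Rydberg formula)."""
        inv_lambda = Z**2 * R_INF * (1.0/n_lower**2 - 1.0/n_upper**2)
        return 1e9 / inv_lambda

    print(f"Lyman-alpha (2->1):    {wavelength_nm(2, 1):.2f} nm")   # ~121.5 nm
    print(f"Balmer H-alpha (3->2): {wavelength_nm(3, 2):.1f} nm")   # ~656 nm
    print(f"Lyman series limit:    {wavelength_nm(10**6, 1):.2f} nm")  # ~91.2 nm
    print(f"He+ (2->1):            {wavelength_nm(2, 1, Z=2):.1f} nm")  # ~30 nm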

Series

Lyman series (n′ = 1)

Lyman series of hydrogen atom spectral lines in the ultraviolet



The series is named after its discoverer, Theodore Lyman, who discovered the spectral lines from 1906 to 1914. All the
wavelengths in the Lyman series are in the ultraviolet band.

n	λ, vacuum (nm)
2	121.57
3	102.57
4	97.254
5	94.974
6	93.780
∞	91.175 [7]

Balmer series (n′ = 2)

The four visible hydrogen emission spectrum lines in the Balmer series. H-alpha is the red line at the right.

Named after Johann Balmer, who discovered the Balmer formula, an empirical equation to predict the Balmer series, in
1885. Balmer lines are historically referred to as "H-alpha", "H-beta", "H-gamma" and so on, where H is the element
hydrogen. Four of the Balmer lines are in the technically "visible" part of the spectrum, with wavelengths longer than
400 nm and shorter than 700 nm. Parts of the Balmer series can be seen in the solar spectrum. H-alpha is an important
line used in astronomy to detect the presence of hydrogen.

n	λ, air (nm)
3	656.3
4	486.1
5	434.0
6	410.2
7	397.0
∞	364.6 [7]

Paschen series (Bohr series, n′ = 3)

Named after the German physicist Friedrich Paschen who first observed them in 1908. The Paschen lines all lie in
the infrared band.[9] This series overlaps with the next (Brackett) series, i.e. the shortest line in the Brackett series has a
wavelength that falls among the Paschen series. All subsequent series overlap.

n	λ, air (nm)
4	1875
5	1282
6	1094
7	1005
8	954.6
9	922.9
∞	820.4 [7]

Brackett series (n′ = 4)

Named after the American physicist Frederick Sumner Brackett who first observed the spectral lines in 1922. [10]

n	λ, air (nm)
5	4051
6	2625
7	2166
8	1944
9	1817
∞	1458 [7]

Pfund series (n′ = 5)

Experimentally discovered in 1924 by August Herman Pfund.[11]

n	λ, vacuum (nm)
6	7460
7	4654
8	3741
9	3297
10	3039
∞	2279 [12]

Humphreys series (n′ = 6)

Discovered in 1953 by American physicist Curtis J. Humphreys.[13]

n	λ, vacuum (μm)
7	12.37
8	7.503
9	5.908
10	5.129
11	4.673
∞	3.282 [12]

Further series (n′ > 6)
Further series are unnamed, but follow the same pattern as dictated by the Rydberg equation. Series are increasingly
spread out and occur in increasing wavelengths. The lines are also increasingly faint, corresponding to increasingly rare

atomic events. The seventh series of atomic hydrogen was first demonstrated experimentally at infrared wavelengths in
1972 by John Strong and Peter Hansen at the University of Massachusetts Amherst.

Extension to other systems

The concepts of the Rydberg formula can be applied to any system with a single particle orbiting a nucleus, for example
a He+ ion or a muonium exotic atom. The equation must be modified based on the system's Bohr radius; emissions will be
of a similar character but at a different range of energies.
All other atoms possess at least two electrons in their neutral form, and the interactions between these electrons make
analysis of the spectrum by such simple methods as described here impractical. The deduction of the Rydberg formula
was a major step in physics, but it was a long time before an extension to the spectra of other elements could be accomplished.

Natural occurrence

NGC 604, a giant region of ionized hydrogen in the Triangulum Galaxy

Hydrogen, as atomic H, is the most abundant chemical element in the universe, making up 75% of normal
matter by mass and more than 90% by number of atoms. (Most of the mass of the universe, however, is not in the
form of chemical-element type matter, but rather is postulated to occur as yet-undetected forms of mass such
as dark matter and dark energy.[77]) This element is found in great abundance in stars and gas
giant planets. Molecular clouds of H2 are associated with star formation. Hydrogen plays a vital role in
powering stars through the proton–proton reaction and the CNO cycle of nuclear fusion.[78]
Throughout the universe, hydrogen is mostly found in the atomic and plasma states, with properties quite different
from those of molecular hydrogen. As a plasma, hydrogen's electron and proton are not bound together, resulting in
very high electrical conductivity and high emissivity (producing the light from the Sun and other stars). The charged
particles are highly influenced by magnetic and electric fields. For example, in the solar wind they interact with the
Earth's magnetosphere giving rise to Birkeland currents and the aurora. Hydrogen is found in the neutral atomic
state in the interstellar medium. The large amount of neutral hydrogen found in the damped Lyman-alpha systems is
thought to dominate the cosmological baryonic density of the Universe up to redshift z=4.[79]
Under ordinary conditions on Earth, elemental hydrogen exists as the diatomic gas, H2. However, hydrogen gas is very
rare in the Earth's atmosphere (1 ppm by volume) because of its light weight, which enables it to escape from
Earth's gravity more easily than heavier gases. Nevertheless, hydrogen is the third most abundant element on the Earth's
surface, mostly in the form of chemical compounds such as hydrocarbons and water.[42] Hydrogen gas is produced
by some bacteria and algae and is a natural component of flatus, as is methane, itself a hydrogen source of
increasing importance.

A molecular form called protonated molecular hydrogen (H3+) is found in the interstellar medium, where it is
generated by ionization of molecular hydrogen from cosmic rays. This charged ion has also been observed in the
upper atmosphere of the planet Jupiter. The ion is relatively stable in the environment of outer space due to the low
temperature and density. H3+ is one of the most abundant ions in the Universe, and it plays a notable role in the
chemistry of the interstellar medium.[82] Neutral triatomic hydrogen H3 can exist only in an excited form and is
unstable.[83] By contrast, the positive hydrogen molecular ion (H2+) is a rare molecule in the universe.

Production

Hydrogen production
H2 is produced in chemistry and biology laboratories, often as a by-product of other reactions; in industry for
the hydrogenation of unsaturated substrates; and in nature as a means of expelling reducing equivalents in
biochemical reactions.
Steam reforming
Hydrogen can be prepared in several different ways, but economically the most important processes involve removal
of hydrogen from hydrocarbons; about 95% of hydrogen production came from steam reforming around the year
2000.[84] Commercial bulk hydrogen is usually produced by the steam reforming of natural gas.[85] At high
temperatures (1000–1400 K, 700–1100 °C or 1300–2000 °F), steam (water vapor) reacts with methane to
yield carbon monoxide and H2:
CH4 + H2O → CO + 3 H2
This reaction is favored at low pressures but is nonetheless conducted at high pressures (2.0 MPa, 20 atm or
600 inHg). This is because high-pressure H2 is the most marketable product and pressure swing adsorption (PSA)
purification systems work better at higher pressures. The product mixture is known as "synthesis gas" because it is
often used directly for the production of methanol and related compounds. Hydrocarbons other than methane can
be used to produce synthesis gas with varying product ratios. One of the many complications to this highly optimized
technology is the formation of coke or carbon:
CH4 → C + 2 H2
Consequently, steam reforming typically employs an excess of H2O. Additional hydrogen can be recovered from the
steam by use of carbon monoxide through the water gas shift reaction, especially with an iron oxide catalyst. This
reaction is also a common industrial source of carbon dioxide:
CO + H2O → CO2 + H2
Other important methods for H2 production include partial oxidation of hydrocarbons:
2 CH4 + O2 → 2 CO + 4 H2
and the coal reaction, which can serve as a prelude to the shift reaction above:
C + H2O → CO + H2
Hydrogen is sometimes produced and consumed in the same industrial process, without being separated. In the
Haber process for the production of ammonia, hydrogen is generated from natural gas. Electrolysis of brine to yield
chlorine also produces hydrogen as a co-product.
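As a rough illustration (an added sketch, not taken from the source), the reactions above set a stoichiometric upper bound on the hydrogen yield from methane: reforming gives 3 H2 per CH4 and the water gas shift converts the CO into a fourth, i.e. CH4 + 2 H2O → CO2 + 4 H2 overall. In Python:

M_CH4 = 16.04   # molar mass of methane, g/mol
M_H2 = 2.016    # molar mass of hydrogen, g/mol

mol_h2_per_mol_ch4 = 4                                  # 3 from reforming + 1 from the shift
kg_h2_per_kg_ch4 = mol_h2_per_mol_ch4 * M_H2 / M_CH4    # mass ratio

print(round(kg_h2_per_kg_ch4, 2))   # about 0.50 kg of H2 per kg of CH4, before real-world losses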
Metal–acid
In the laboratory, H2 is usually prepared by the reaction of dilute non-oxidizing acids on some reactive metals such as
zinc with Kipp's apparatus:
Zn + 2 H+ → Zn2+ + H2
Aluminium can also produce H2 upon treatment with bases:
2 Al + 6 H2O + 2 OH− → 2 Al(OH)4− + 3 H2
Electrolysis of water
The electrolysis of water is a simple method of producing hydrogen. A low-voltage current is run through the water,
and gaseous oxygen forms at the anode while gaseous hydrogen forms at the cathode. Typically the cathode is made
from platinum or another inert metal when producing hydrogen for storage. If, however, the gas is to be burnt on site,
oxygen is desirable to assist the combustion, and so both electrodes would be made from inert metals. (Iron, for
instance, would oxidize, and thus decrease the amount of oxygen given off.) The theoretical maximum efficiency
(electricity used vs. energetic value of hydrogen produced) is in the range 80–94%:
2 H2O(l) → 2 H2(g) + O2(g)
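A minimal sketch (added here for illustration; the function name and the assumption of 100% current efficiency are not from the source) of the ideal electrolysis yield follows from Faraday's law, since two electrons are transferred per H2 molecule:

F = 96485.0    # Faraday constant, C per mol of electrons
M_H2 = 2.016   # molar mass of H2, g/mol

def h2_grams(current_amps, seconds, current_efficiency=1.0):
    """Ideal mass of H2 (g) evolved at the cathode for a given current and time."""
    moles_h2 = current_efficiency * current_amps * seconds / (2 * F)
    return moles_h2 * M_H2

print(round(h2_grams(100, 3600), 1))   # 100 A for one hour -> roughly 3.8 g of H2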

An alloy of aluminium and gallium in pellet form added to water can be used to generate hydrogen. The process also
produces alumina, but the expensive gallium, which prevents the formation of an oxide skin on the pellets, can be re-
used. This has important potential implications for a hydrogen economy, as hydrogen can be produced on-site and
does not need to be transported.
Thermochemical
There are more than 200 thermochemical cycles which can be used for water splitting. Around a dozen of these
cycles, such as the iron oxide cycle, cerium(IV) oxide–cerium(III) oxide cycle, zinc–zinc oxide cycle, sulfur–iodine
cycle, copper–chlorine cycle and hybrid sulfur cycle, are under research and in the testing phase for producing hydrogen
and oxygen from water and heat without using electricity. A number of laboratories (including in France, Germany,
Greece, Japan, and the USA) are developing thermochemical methods to produce hydrogen from solar energy and
water.
Anaerobic corrosion
Under anaerobic conditions, iron and steel alloys are slowly oxidized by the protons of water, which are concomitantly
reduced to molecular hydrogen (H2). The anaerobic corrosion of iron leads first to the formation of ferrous
hydroxide (green rust) and can be described by the following reaction:
Fe + 2 H2O → Fe(OH)2 + H2
In turn, under anaerobic conditions, the ferrous hydroxide (Fe(OH)2) can be oxidized by the protons of water to
form magnetite and molecular hydrogen. This process is described by the Schikorr reaction:
3 Fe(OH)2 → Fe3O4 + 2 H2O + H2
ferrous hydroxide → magnetite + water + hydrogen
The well crystallized magnetite (Fe3O4) is thermodynamically more stable than the ferrous hydroxide (Fe(OH)2 ).
This process occurs during the anaerobic corrosion of iron and steel in oxygen-free groundwater and in
reducing soils below the water table.
Geological occurrence: the serpentinization reaction
In the absence of atmospheric oxygen (O2), in deep geological conditions prevailing far away from the Earth's atmosphere,
hydrogen (H2) is produced during the process of serpentinization by the anaerobic oxidation, by the water
protons (H+), of the ferrous iron (Fe2+) in the silicate present in the crystal lattice of fayalite (Fe2SiO4, the olivine iron
end-member). The corresponding reaction leading to the formation of magnetite (Fe3O4), quartz (SiO2) and
hydrogen (H2) is the following:
3 Fe2SiO4 + 2 H2O → 2 Fe3O4 + 3 SiO2 + 2 H2
fayalite + water → magnetite + quartz + hydrogen
This reaction closely resembles the Schikorr reaction observed in the anaerobic oxidation of ferrous
hydroxide in contact with water.
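As a quick consistency check (an illustrative sketch added here, not part of the source; count_atoms is a name chosen for this example), the element counts on each side of the Schikorr and serpentinization reactions above can be tallied in Python:

from collections import Counter

def count_atoms(species):
    """species: list of (coefficient, {element: atoms per formula unit}) pairs."""
    total = Counter()
    for coeff, formula in species:
        for element, n in formula.items():
            total[element] += coeff * n
    return total

# Schikorr reaction: 3 Fe(OH)2 -> Fe3O4 + 2 H2O + H2
lhs = count_atoms([(3, {"Fe": 1, "O": 2, "H": 2})])
rhs = count_atoms([(1, {"Fe": 3, "O": 4}), (2, {"H": 2, "O": 1}), (1, {"H": 2})])
print(lhs == rhs)   # True: both sides carry 3 Fe, 6 O, 6 H

# Serpentinization: 3 Fe2SiO4 + 2 H2O -> 2 Fe3O4 + 3 SiO2 + 2 H2
lhs = count_atoms([(3, {"Fe": 2, "Si": 1, "O": 4}), (2, {"H": 2, "O": 1})])
rhs = count_atoms([(2, {"Fe": 3, "O": 4}), (3, {"Si": 1, "O": 2}), (2, {"H": 2})])
print(lhs == rhs)   # True: 6 Fe, 3 Si, 14 O and 4 H on each side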

Formation in transformers
Of all the fault gases formed in power transformers, hydrogen is the most common and is generated under most
fault conditions; thus, formation of hydrogen is an early indication of serious problems in the transformer's life cycle.

Applications
Consumption in processes
Large quantities of H2 are needed in the petroleum and chemical industries. The largest application of H2 is for the
processing ("upgrading") of fossil fuels, and in the production of ammonia. The key consumers of H2 in the
petrochemical plant include hydrodealkylation, hydrodesulfurization, and hydrocracking. H2 has several other
important uses. H2 is used as a hydrogenating agent, particularly in increasing the level of saturation of unsaturated
fats and oils (found in items such as margarine), and in the production of methanol. It is similarly the source of
hydrogen in the manufacture of hydrochloric acid. H2 is also used as a reducing agent of metallic ores.
Hydrogen is highly soluble in many rare earth and transition metals and is soluble in both nanocrystalline
and amorphous metals. Hydrogen solubility in metals is influenced by local distortions or impurities in the crystal
lattice. These properties may be useful when hydrogen is purified by passage through hot palladium disks, but the
gas's high solubility is a metallurgical problem, contributing to the embrittlement of many metals,[13] complicating the
design of pipelines and storage tanks. Apart from its use as a reactant, H2 has wide applications in physics and
engineering. It is used as a shielding gas in welding methods such as atomic hydrogen welding. H2 is used as the
rotor coolant in electrical generators at power stations, because it has the highest thermal conductivity of any
gas. Liquid H2 is used in cryogenic research, including superconductivity studies. Because H2 is lighter than air,
having a little more than 1/14 of the density of air, it was once widely used as a lifting gas in balloons and airships.[101]
In more recent applications, hydrogen is used pure or mixed with nitrogen (sometimes called forming gas) as a tracer
gas for minute leak detection. Applications can be found in the automotive, chemical, power generation, aerospace,
and telecommunications industries. Hydrogen is an authorized food additive (E 949) that allows food package leak
testing, among other anti-oxidizing properties.
Hydrogen's rarer isotopes also each have specific applications. Deuterium (hydrogen-2) is used in nuclear fission
applications as a moderator to slow neutrons, and in nuclear fusion reactions.[7] Deuterium compounds have
applications in chemistry and biology in studies of reaction isotope effects. Tritium (hydrogen-3), produced

in nuclear reactors, is used in the production of hydrogen bombs, as an isotopic label in the biosciences, and as
a radiation source in luminous paints.
The triple point temperature of equilibrium hydrogen is a defining fixed point on the ITS-90 temperature scale at
13.8033 kelvins.
Coolant: Hydrogen-cooled turbo generator
Hydrogen is commonly used in power stations as a coolant in generators due to a number of favorable properties that
are a direct result of its light diatomic molecules. These include low density, low viscosity, and the highest specific
heat and thermal conductivity of all gases.
Energy carrier: Hydrogen economy and Hydrogen infrastructure
Hydrogen is not an energy resource, except in the hypothetical context of commercial nuclear fusion power plants
using deuterium or tritium, a technology presently far from development. The Sun's energy comes from nuclear
fusion of hydrogen, but this process is difficult to achieve controllably on Earth. Elemental hydrogen from solar,
biological, or electrical sources requires more energy to make than is obtained by burning it, so in these cases
hydrogen functions as an energy carrier, like a battery. Hydrogen may be obtained from fossil sources (such as
methane), but these sources are unsustainable.
The energy density per unit volume of both liquid hydrogen and compressed hydrogen gas at any practicable
pressure is significantly less than that of traditional fuel sources, although the energy density per unit fuel mass is
higher.[108] Nevertheless, elemental hydrogen has been widely discussed in the context of energy, as a possible
future carrier of energy on an economy-wide scale. For example, CO2 sequestration followed by carbon capture
and storage could be conducted at the point of H2 production from fossil fuels.[112] Hydrogen used in transportation
would burn relatively cleanly, with some NOx emissions,[113] but without carbon emissions.[112] However, the
infrastructure costs associated with full conversion to a hydrogen economy would be substantial. [114] Fuel cells can
convert hydrogen and oxygen directly to electricity more efficiently than internal combustion engines.
Semiconductor industry
Hydrogen is employed to saturate broken ("dangling") bonds of amorphous silicon and amorphous carbon, which
helps stabilize material properties.[116] It is also a potential electron donor in various oxide materials, including ZnO, SnO2, CdO, MgO,
ZrO2, HfO2, La2O3, Y2O3, TiO2, SrTiO3, LaAlO3, SiO2, Al2O3, ZrSiO4, HfSiO4, and SrZrO3.

Biological reactions
Biohydrogen and Biological hydrogen production (Algae)
H2 is a product of some types of anaerobic metabolism and is produced by several microorganisms, usually via
reactions catalyzed by iron- or nickel-containing enzymes called hydrogenases. These enzymes catalyze the
reversible redox reaction between H2 and its component two protons and two electrons. Creation of hydrogen gas
occurs in the transfer of reducing equivalents produced during pyruvate fermentation to water. The natural cycle of
hydrogen production and consumption by organisms is called the hydrogen cycle.
Water splitting, in which water is decomposed into its component protons, electrons, and oxygen, occurs in the light
reactions in all photosynthetic organisms. Some such organisms, including the alga Chlamydomonas
reinhardtii and cyanobacteria, have evolved a second step in the dark reactions in which protons and electrons are
reduced to form H2 gas by specialized hydrogenases in the chloroplast.[123] Efforts have been undertaken to
genetically modify cyanobacterial hydrogenases to efficiently synthesize H 2 gas even in the presence of
oxygen.[124] Efforts have also been undertaken with genetically modified alga in a bioreactor.

Safety and precautions


Hydrogen safety
Hydrogen poses a number of hazards to human safety, from potential detonations and fires when mixed with air to
being an asphyxiant in its pure, oxygen-free form.[126] In addition, liquid hydrogen is a cryogen and presents
dangers (such as frostbite) associated with very cold liquids.[127] Hydrogen dissolves in many metals, and, in addition
to leaking out, may have adverse effects on them, such as hydrogen embrittlement,[128] leading to cracks and
explosions.[129] Hydrogen gas leaking into external air may spontaneously ignite. Moreover, hydrogen fire, while being
extremely hot, is almost invisible, and thus can lead to accidental burns. [130]
Even interpreting the hydrogen data (including safety data) is confounded by a number of phenomena. Many physical
and chemical properties of hydrogen depend on the parahydrogen/orthohydrogen ratio (it often takes days or
weeks at a given temperature to reach the equilibrium ratio, for which the data is usually given). Hydrogen

detonation parameters, such as critical detonation pressure and temperature, strongly depend on the container
geometry.

RAW MATERIAL 3; HUMAN SKIN


DESCRIPTION.
The human skin is the outer covering of the body. In humans, it is the largest organ of the integumentary system. The skin has
up to seven layers of ectodermal tissue and guards the underlying muscles, bones, ligaments and internal organs.[1] Human skin
is similar to that of most other mammals. Though nearly all human skin is covered with hair follicles, it can appear hairless.
There are two general types of skin, hairy and glabrous skin (hairless).[2] The adjective cutaneous literally means "of the skin"
(from Latin cutis, skin).

Chemical composition of the human skin: mainly carbon, nitrogen, hydrogen and oxygen, with small amounts of
phosphorus, iron, sodium, magnesium, sulphur, calcium and chlorine, and traces of many others. The
composition of skin is virtually the same as the average for the body.

Because it interfaces with the environment, skin plays an important role in immunity, protecting the body against pathogens[3] and
excessive water loss.[4] Its other functions are insulation, temperature regulation, sensation, synthesis of vitamin D, and the
protection of vitamin B folates. Severely damaged skin will try to heal by forming scar tissue. This is often discolored and
depigmented.
In humans, skin pigmentation varies among populations, and skin type can range from dry to oily. Such variety provides a
rich and diverse habitat for the roughly 1,000 species of bacteria, from 19 phyla, present on the human skin.

Structure
Skin has mesodermal cells, pigmentation, such as melanin provided by melanocytes, which absorb some of the potentially
dangerous ultraviolet radiation (UV) in sunlight. It also contains DNA repair enzymes that help reverse UV damage, such that
people lacking the genes for these enzymes suffer high rates of skin cancer. One form predominantly produced by UV
light, malignant melanoma, is particularly invasive, causing it to spread quickly, and can often be deadly. Human skin
pigmentation varies among populations in a striking manner. This has led to the classification of people(s) on the basis of skin
color.
The skin is the largest organ in the human body. For the average adult human, the skin has a surface area of between 1.5-2.0
square meters (16.1-21.5 sq ft.). The thickness of the skin varies considerably over all parts of the body, and between men and
women and the young and the old. An example is the skin on the forearm which is on average 1.3 mm in the male and 1.26 mm
in the female.[8] The average square inch (6.5 cm²) of skin holds 650 sweat glands, 20 blood vessels, 60,000 melanocytes, and
more than 1,000 nerve endings. The average human skin cell is about 30 micrometers in diameter, but there are variants. A skin
cell usually ranges from 25 to 40 micrometers, depending on a variety of factors.
Skin is composed of three primary layers: the epidermis, the dermis and the hypodermis.[8]

Epidermis
Epidermis, "epi" coming from the Greek meaning "over" or "upon", is the outermost layer of the skin. It forms the waterproof,
protective wrap over the body's surface which also serves as a barrier to infection and is made up of stratified
squamous epithelium with an underlying basal lamina.
The epidermis contains no blood vessels, and cells in the deepest layers are nourished almost exclusively by diffused oxygen
from the surrounding air[10] and to a far lesser degree by blood capillaries extending to the outer layers of the dermis. The main
type of cells which make up the epidermis are Merkel cells, keratinocytes, with melanocytes and Langerhans cells also present.
The epidermis can be further subdivided into the following strata (beginning with the outermost layer): corneum, lucidum (only
in palms of hands and bottoms of feet), granulosum, spinosum, basale. Cells are formed through mitosis at the basale layer.
The daughter cells (see cell division) move up the strata changing shape and composition as they die due to isolation from their
blood source. The cytoplasm is released and the protein keratin is inserted. They eventually reach the corneum and slough off

(desquamation). This process is called "keratinization". This keratinized layer of skin is responsible for keeping water in the
body and keeping other harmful chemicals and pathogensout, making skin a natural barrier to infection.

2D projection of a 3D OCT tomogram of the skin at the fingertip, depicting the stratum corneum (~500 μm thick) with the
stratum disjunctum on top and the stratum lucidum in the middle. At the bottom are the superficial parts of the dermis.
The sweat ducts are clearly visible.

Source; Wikid77, still slice of a rotating Optical Coherence Tomography (OCT) image (Wikimedia Commons).
Comments.
The epidermis contains no blood vessels, and is nourished by diffusion from the dermis. The main types of cells which make up
the epidermis are keratinocytes, melanocytes, Langerhans cells and Merkel cells. The epidermis helps the skin to regulate
body temperature.
Layers
Epidermis is divided into several layers where cells are formed through mitosis at the innermost layers. They move up the strata
changing shape and composition as they differentiate and become filled with keratin. They eventually reach the top layer
called stratum corneum and are sloughed off, or desquamated. This process is called keratinization and takes place within
weeks. The outermost layer of the epidermis consists of 25 to 30 layers of dead cells.
Sublayers
Epidermis is divided into the following 5 sublayers or strata:

Stratum corneum
Stratum lucidum
Stratum granulosum
Stratum spinosum
Stratum germinativum (also called "stratum basale").
Blood capillaries are found beneath the epidermis, and are linked to an arteriole and a venule. Arterial shunt vessels may
bypass the network in ears, the nose and fingertips.

Dermis
The dermis is the layer of skin beneath the epidermis that consists of connective tissue and cushions the body from stress and
strain. The dermis is tightly connected to the epidermis by a basement membrane. It also harbors many nerve endings that
provide the sense of touch and heat. It contains the hair follicles, sweat glands, sebaceous glands, apocrine glands, lymphatic
vessels and blood vessels. The blood vessels in the dermis provide nourishment and waste removal from its own cells as well
as from the Stratum basale of the epidermis.

The dermis is structurally divided into two areas: a superficial area adjacent to the epidermis, called the papillary region, and a
deep thicker area known as the reticular region.
Papillary region
The papillary region is composed of loose areolar connective tissue. It is named for its fingerlike projections called papillae, that
extend toward the epidermis. The papillae provide the dermis with a "bumpy" surface that interdigitates with the epidermis,
strengthening the connection between the two layers of skin.
In the palms, fingers, soles, and toes, the influence of the papillae projecting into the epidermis forms contours in the skin's
surface. These epidermal ridges occur in patterns (see: fingerprint) that are genetically and epigenetically determined and are
therefore unique to the individual, making it possible to use fingerprints or footprints as a means of identification.

Reticular region
The reticular region lies deep to the papillary region and is usually much thicker. It is composed of dense irregular connective
tissue, and receives its name from the dense concentration of collagenous, elastic, and reticular fibers that weave throughout it.
These protein fibers give the dermis its properties of strength, extensibility, and elasticity.
Also located within the reticular region are the roots of the hairs, sebaceous glands, sweat glands, receptors, nails, and blood
vessels.
Tattoo ink is held in the dermis. Stretch marks, often from pregnancy and obesity, are also located in the dermis.

Subcutaneous tissue

The subcutaneous tissue (also hypodermis and subcutis) is not part of the skin, and lies below the dermis of the cutis. Its
purpose is to attach the skin to underlying bone and muscle as well as supplying it with blood vessels and nerves. It consists of
loose connective tissue, adipose tissue and elastin. The main cell types
are fibroblasts, macrophages and adipocytes (subcutaneous tissue contains 50% of body fat). Fat serves as padding and
insulation for the body.

Cross-section

Skin layers, of both hairy and hairless skin

Development

Skin color
Human skin shows high skin color variety from the darkest brown to the lightest pinkish-white hues. Human skin shows higher
variation in color than any other single mammalian species and is the result of natural selection. Skin pigmentation in humans
evolved to primarily regulate the amount of ultraviolet radiation (UVR) penetrating the skin, controlling its biochemical effects.
The actual skin color of different humans is affected by many substances, although the single most important substance
determining human skin color is the pigment melanin. Melanin is produced within the skin in cells called melanocytes and it is
the main determinant of the skin color of darker-skinned humans. The skin color of people with light skin is determined mainly
by the bluish-white connective tissue under the dermis and by the hemoglobin circulating in the veins of the dermis. The red
color underlying the skin becomes more visible, especially in the face, when, as a consequence of physical exercise or the
stimulation of the nervous system (anger, fear), arterioles dilate.
There are at least five different pigments that determine the color of the skin. These pigments are present at different levels and
places.

Melanin: It is brown in color and present in the basal layer of the epidermis.
Melanoid: It resembles melanin but is present diffusely throughout the epidermis.
Carotene: This pigment is yellow to orange in color. It is present in the stratum corneum and fat cells of dermis
and superficial fascia.
Hemoglobin (also spelled haemoglobin): It is found in blood and is not a pigment of the skin but develops a purple color.
Oxyhemoglobin: It is also found in blood and is not a pigment of the skin. It develops a red color.
There is a correlation between the geographic distribution of UV radiation (UVR) and the distribution of indigenous skin
pigmentation around the world. Areas that receive higher amounts of UVR, generally located nearer the equator, tend to have
darker-skinned populations. Areas that are far from the tropics and closer to the poles have a lower intensity of
UVR, which is reflected in lighter-skinned populations.[15]
In the same population it has been observed that adult human females are considerably lighter in skin pigmentation than males.
Females need more calcium during pregnancy and lactation and vitamin D which is synthesized from sunlight helps in
absorbing calcium. For this reason it is thought that females may have evolved to have lighter skin in order to help their bodies
absorb more calcium.
The Fitzpatrick scale[17][18] is a numerical classification schema for human skin color developed in 1975 as a way to classify the
typical response of different types of skin to ultraviolet (UV) light:

Type    Typical response to UV                  Typical features
I       Always burns, never tans                Pale, fair, freckles
II      Usually burns, sometimes tans           Fair
III     May burn, usually tans                  Light brown
IV      Rarely burns, always tans               Olive brown
V       Moderate constitutional pigmentation    Brown
VI      Marked constitutional pigmentation      Black
Aging

A typical rash

Skin infected with scabies

As skin ages, it becomes thinner and more easily damaged. Intensifying this effect is the decreasing ability of skin to heal itself
as a person ages.
Among other things, skin aging is noted by a decrease in volume and elasticity. There are many internal and external causes to
skin aging. For example, aging skin receives less blood flow and lower glandular activity.

A validated comprehensive grading scale has categorized the clinical findings of skin aging as laxity (sagging), rhytids
(wrinkles), and the various facets of photoaging, including erythema (redness), and telangiectasia, dyspigmentation (brown
discoloration), solar elastosis (yellowing), keratoses (abnormal growths) and poor texture.
Cortisol causes degradation of collagen, accelerating skin aging.
Anti-aging supplements are used to treat skin aging.
Photoaging
Photoaging has two main concerns: an increased risk for skin cancer and the appearance of damaged skin. In younger skin,
sun damage will heal faster since the cells in the epidermis have a faster turnover rate, while in the older population the skin
becomes thinner and the epidermis turnover rate for cell repair is lower which may result in the dermis layer being damaged.

Functions of the human skin

Skin performs the following functions:

1. Protection: an anatomical barrier from pathogens and damage between the internal and external environment in
bodily defense; Langerhans cells in the skin are part of the adaptive immune system. Perspiration
contains lysozyme, which breaks the bonds within the cell walls of bacteria.
2. Sensation: contains a variety of nerve endings that react to heat and cold, touch, pressure, vibration, and tissue
injury; see somatosensory system and haptics.
3. Heat regulation: the skin contains a blood supply far greater than its requirements which allows precise control of
energy loss by radiation, convection and conduction. Dilated blood vessels increase perfusion and heat loss, while
constricted vessels greatly reduce cutaneous blood flow and conserve heat.
4. Control of evaporation: the skin provides a relatively dry and semi-impermeable barrier to fluid loss.[4] Loss of this
function contributes to the massive fluid loss in burns.
5. Aesthetics and communication: others see our skin and can assess our mood, physical state and attractiveness.
6. Storage and synthesis: acts as a storage center for lipids and water, as well as a means of synthesis of vitamin Dby
action of UV on certain parts of the skin.
7. Excretion: sweat contains urea, however its concentration is 1/130th that of urine, hence excretion by sweating is at
most a secondary function to temperature regulation.
8. Absorption: the cells comprising the outermost 0.25–0.40 mm of the skin are "almost exclusively supplied by external
oxygen", although the "contribution to total respiration is negligible". In addition, medicine can be administered through
the skin, by ointments or by means of adhesive patch, such as the nicotine patch or iontophoresis. The skin is an
important site of transport in many other organisms.
9. Water resistance: The skin acts as a water-resistant barrier so essential nutrients are not washed out of the body.

Skin flora

The human skin is a rich environment for microbes. Around 1000 species of bacteria from 19 bacterial phyla have been found.
Most come from only four phyla: Actinobacteria (51.8%), Firmicutes (24.4%), Proteobacteria (16.5%),
and Bacteroidetes (6.3%). Propionibacteria and Staphylococci species were the main species in sebaceous areas. There are
three main ecological areas: moist, dry and sebaceous. In moist places on the body Corynebacteria together
with Staphylococci dominate. In dry areas, there is a mixture of species, dominated by β-Proteobacteria and
Flavobacteriales. Ecologically, sebaceous areas had greater species richness than moist and dry ones. The
areas with the least similarity between people in species were the spaces between fingers, the spaces between toes, axillae,
and the umbilical cord stump. Most similar were beside the nostril, the nares (inside the nostril), and on the back.
Reflecting upon the diversity of the human skin researchers on the human skin microbiome have observed: "hairy, moist
underarms lie a short distance from smooth dry forearms, but these two niches are likely as ecologically dissimilar as
rainforests are to deserts."
The NIH has launched the Human Microbiome Project to characterize the human microbiota which includes that on the skin
and the role of this microbiome in health and disease.
Microorganisms like Staphylococcus epidermidis colonize the skin surface. The density of skin flora depends on region of the
skin. The disinfected skin surface gets recolonized from bacteria residing in the deeper areas of the hair follicle, gut and
urogenital openings.

Clinical significance

Diseases of the skin include skin infections and skin neoplasms (including skin cancer).
Dermatology is the branch of medicine that deals with conditions of the skin.

Society and culture

Hygiene and skin care


Exfoliation (cosmetology)
The skin supports its own ecosystems of microorganisms, including yeasts and bacteria, which cannot be removed by any
amount of cleaning. Estimates place the number of individual bacteria on the surface of one square inch (6.5 square cm) of
human skin at 50 million, though this figure varies greatly over the average 20 square feet (1.9 m²) of human skin. Oily surfaces,
such as the face, may contain over 500 million bacteria per square inch (6.5 cm²). Despite these vast quantities, all of the
bacteria found on the skin's surface would fit into a volume the size of a pea. [25] In general, the microorganisms keep one
another in check and are part of a healthy skin. When the balance is disturbed, there may be an overgrowth and infection, such
as when antibiotics kill microbes, resulting in an overgrowth of yeast. The skin is continuous with the inner epithelial lining of the
body at the orifices, each of which supports its own complement of microbes.
Cosmetics should be used carefully on the skin because these may cause allergic reactions. Each season requires suitable
clothing in order to facilitate the evaporation of the sweat. Sunlight, water and air play an important role in keeping the skin
healthy.
Oily skin
Oily skin is caused by over-active sebaceous glands, that produce a substance called sebum, a naturally healthy skin
lubricant.[1] When the skin produces excessive sebum, it becomes heavy and thick in texture. Oily skin is typified by shininess,
blemishes and pimples. The oily-skin type is not necessarily bad, since such skin is less prone to wrinkling, or other signs of
aging,[1] because the oil helps to keep needed moisture locked into the epidermis (outermost layer of skin).
The negative aspect of the oily-skin type is that oily complexions are especially susceptible to clogged pores, blackheads, and
buildup of dead skin cells on the surface of the skin. [1] Oily skin can be sallow and rough in texture and tends to have large,
clearly visible pores everywhere, except around the eyes and neck. [1]

Permeability

Human skin has a low permeability; that is, most foreign substances are unable to penetrate and diffuse through the skin. Skin's
outermost layer, the stratum corneum, is an effective barrier to most inorganic nanosized particles.[26][27] This protects the body
from external particles such as toxins by not allowing them to come into contact with internal tissues. However, in some cases it
is desirable to allow particles entry to the body through the skin. Potential medical applications of such particle transfer has
prompted developments in nanomedicine and biology to increase skin permeability. One application of transcutaneous particle
delivery could be to locate and treat cancer. Nanomedical researchers seek to target the epidermis and other layers of active
cell division where nanoparticles can interact directly with cells that have lost their growth-control mechanisms (cancer cells).
Such direct interaction could be used to more accurately diagnose properties of specific tumors or to treat them by delivering
drugs with cellular specificity.

Nanoparticles

Nanoparticles 40 nm in diameter and smaller have been successful in penetrating the skin.[28][29][30] Research confirms that
nanoparticles larger than 40 nm do not penetrate the skin past the stratum corneum. Most particles that do penetrate will diffuse
through skin cells, but some will travel down hair follicles and reach the dermis layer.
The permeability of skin relative to different shapes of nanoparticles has also been studied. Research has shown that spherical
particles have a better ability to penetrate the skin compared to oblong (ellipsoidal) particles because spheres are symmetric in
all three spatial dimensions.[30] One study compared the two shapes and recorded data that showed spherical particles located
deep in the epidermis and dermis whereas ellipsoidal particles were mainly found in the stratum corneum and epidermal
layers.[31] Nanorods are used in experiments because of their unique fluorescent properties but have shown mediocre
penetration.
Nanoparticles of different materials have shown skins permeability limitations. In many experiments, gold nanoparticles 40 nm
in diameter or smaller are used and have shown to penetrate to the epidermis. Titanium oxide (TiO2), zinc oxide(ZnO),
and silver nanoparticles are ineffective in penetrating the skin past the stratum corneum. [32][33] Cadmium selenide(CdSe) quantum
dots have proven to penetrate very effectively when they have certain properties. Because CdSe is toxic to living organisms,
the particle must be covered in a surface group. An experiment comparing the permeability of quantum dots coated
in polyethylene glycol (PEG), PEG-amine, and carboxylic acid concluded the PEG and PEG-amine surface groups allowed for
the greatest penetration of particles. The carboxylic acid coated particles did not penetrate past the stratum corneum. [31]

Increasing permeability

Scientists previously believed that the skin was an effective barrier to inorganic particles. Damage from mechanical stressors
was believed to be the only way to increase its permeability.[34] Recently, however, simpler and more effective methods for
increasing skin permeability have been developed. For example, ultraviolet radiation (UVR) has been used to slightly damage
the surface of skin, causing a time-dependent defect allowing easier penetration of nanoparticles.[35] The UVR's high energy
causes a restructuring of cells, weakening the boundary between the stratum corneum and the epidermal layer.[35][36] The
damage to the skin is typically measured by the transepidermal water loss (TEWL), though it may take 3–5 days for the TEWL
to reach its peak value. When the TEWL reaches its highest value, the maximum density of nanoparticles is able to permeate
the skin. Studies confirm that UVR damaged skin significantly increases the permeability. [35][36] The effects of increased
permeability after UVR exposure can lead to an increase in the number of particles that permeate the skin. However, the
specific permeability of skin after UVR exposure relative to particles of different sizes and materials has not been determined.[35]
Other skin damaging methods used to increase nanoparticle penetration include tape stripping, skin abrasion, and chemical
enhancement. Tape stripping is the process in which tape is applied to skin then lifted to remove the top layer of skin. Skin
abrasion is done by shaving the top 5-10 micrometers off the surface of the skin. Chemical enhancement is the process in
which chemicals such as polyvinylpyrrolidone (PVP), dimethyl sulfoxide (DMSO), and oleic acid are applied to the surface of
the skin to increase permeability.
Electroporation is the application of short pulses of electric fields on skin and has proven to increase skin permeability. The
pulses are high voltage and on the order of milliseconds when applied. Charged molecules penetrate the skin more frequently
than neutral molecules after the skin has been exposed to electric field pulses. Results have shown molecules on the order of
100 micrometers to easily permeate electroporated skin.

Applications

A large area of interest in nanomedicine is the transdermal patch because of the possibility of a painless application of
therapeutic agents with very few side effects. Transdermal patches have been limited to administer a small number of drugs,
such as nicotine, because of the limitations in permeability of the skin. Development of techniques that increase skin
permeability has led to more drugs that can be applied via transdermal patches and more options for patients.
Increasing the permeability of skin allows nanoparticles to penetrate and target cancer cells. Nanoparticles along with multi-
modal imaging techniques have been used as a way to diagnose cancer non-invasively. Skin with high permeability allowed
quantum dots with an antibody attached to the surface for active targeting to successfully penetrate and identify
cancerous tumors in mice. Tumor targeting is beneficial because the particles can be excited using fluorescence
microscopy and emit light energy and heat that will destroy cancer cells.

Sunblock and sunscreen

Sunblock and sunscreen are different but important skin-care products, though both offer protection from the sun.
Sunblock: Sunblock is opaque and stronger than sunscreen, since it is able to block most of the UVA/UVB rays and radiation
from the sun, and does not need to be reapplied several times in a day. Titanium dioxide and zinc oxide are two of the
important ingredients in sunblock.
Sunscreen: Sunscreen is more transparent once applied to the skin and also has the ability to protect against UVA/UVB rays,
although the sunscreen's ingredients break down at a faster rate once exposed to sunlight, and some of the
radiation is able to penetrate to the skin. In order for sunscreen to be more effective it is necessary to reapply it
consistently and to use one with a higher sun protection factor.

Diet

Vitamin A, also known as retinoids, benefits the skin by normalizing keratinization, downregulating sebum production which
contributes to acne, and reversing and treating photodamage, striae, and cellulite.
Vitamin D and analogs are used to downregulate the cutaneous immune system and epithelial proliferation while promoting
differentiation.
Vitamin C is an antioxidant that regulates collagen synthesis, forms barrier lipids, regenerates vitamin E, and provides
photoprotection.
Vitamin E is a membrane antioxidant that protects against oxidative damage and also provides protection against
harmful UV rays.
Several scientific studies confirmed that changes in baseline nutritional status affects skin condition.
The Mayo Clinic lists foods they state help the skin: yellow, green, and orange fruits and vegetables; fat-
free dairy products; whole-grain foods; fatty fish, nuts.

PROCEDURE.
Measurements must be made of the raw materials and of the electricity received by the raw material, the human
skin, as the electronic device is switched on to the human skin and directed at the sky's galaxies so that superposition
(quantum superposition) is attained. With the stop clock switched on at the start and off at the end, the astronomer
identifies the stars and Milky Ways at the galaxies.
RESULTS
The scientific theory of the Big Bang is validated as correct, as the galaxies display stars and Milky Ways.
THE TEN BRIGHTEST STARS WITHIN TEN PARSECS

The brightest star in the sky, Sirius, happens to be rather close to the Sun at only 2.6 parsecs. This is an
oddity, because most of the bright stars in the sky are very distant. Out of the 357 known stars within 10 parsecs
of the Sun, only 8 are close enough and luminous enough to have an apparent visual magnitude less than 2. If one
were to construct such a table for stars with apparent visual magnitudes less than or equal to 6, one would only
find 53 stars. These characteristics of the closest stars underlie the points that most stars are much less luminous
than the Sun and most bright stars in the night sky are very distant.
The table shows the 10 brightest stars within 10 parsecs of the Sun. This table is derived from the
Preliminary Version of the Third Catalog of Nearby Stars of Gliese and Jahreiss (1991),[1] which contains all
known stars within 25 parsecs of the Sun. Of the stars within 10 parsecs, the table shows all that have an apparent
visual magnitude less than 3 and all four A type stars (A is the spectral
classification), which are the four brightest main-sequence stars. The table also contains the sole red giant
within 10 parsecs, the star Pollux. That this star is a red giant is indicated by the Roman numeral
III in the spectral classification of the star. All stars in the table but Pollux are main sequence stars. Several
of these stars are marked on the Hertzsprung-Russell diagram for the nearest stars.
Several stars in this list are members of binary star systems, but only Rigil Kentaurus has both stars in
the table. The brighter star in a binary is labeled with an A, and the companion star is labeled with a B.
The first column in the table gives the common names of each star. The second column gives the catalog
number from the Catalog of Nearby Stars. The third column gives the distance based on a star's annual parallax.
The fourth column gives the apparent visual magnitude (V) of each star. The absolute visual magnitude (MV) is
given in the fifth column. The color index (B-V), which is the difference of the B (blue) apparent magnitude
and the V apparent magnitude, is given in the sixth
column. The final column gives the spectral type and luminosity class of each star.
Brightest Stars within 10 Parsecs.

10 Brightest Stars within 10 Parsecs

Name                                Catalog    Distance     V       MV     B−V    Stellar
                                    Number     (pc)                               Type

Sirius A (α Canis Majoris A)        Gl 244     2.63       −1.43    1.47    0.00   A1 V
Rigil Kentaurus A (α Centauri A)    Gl 559     1.34       −0.01    4.38    0.64   G2 V
Vega (α Lyrae)                      Gl 721     7.72        0.03    0.59    0.00   A0 V
Procyon A (α Canis Minoris)         Gl 280     3.50        0.38    2.66    0.42   F5 IV–V
Altair (α Aquilae)                  Gl 768     5.00        0.77    2.29    0.22   A7 IV–V
Pollux (β Geminorum)                Gl 286     9.97        1.14    1.15    1.00   K0 IIIb
Fomalhaut (α Piscis Austrini)       Gl 881     6.51        1.16    2.09    0.09   A3 V
Rigil Kentaurus B (α Centauri B)    Gl 559     1.34        1.34    5.71    0.84   K0 V
η Boötis                            Gl 534     9.55        2.68    2.78    0.58   G0 IV
β Hydri                             Gl 19      6.45        2.80    3.76    0.62   G2 IV

The ten brightest stars within ten parsecs

Source; Astrophysics spectator.
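The V, MV, and distance columns above are tied together by the standard distance modulus, MV = V − 5 log10(d / 10 pc). A small Python check (an added illustration, not part of the source table; absolute_magnitude is a name chosen here) reproduces the tabulated absolute magnitudes:

import math

def absolute_magnitude(V, distance_pc):
    """Absolute visual magnitude from apparent magnitude and distance in parsecs."""
    return V - 5 * math.log10(distance_pc / 10.0)

print(round(absolute_magnitude(-1.43, 2.63), 2))   # Sirius A: about 1.47, matching the MV column
print(round(absolute_magnitude(1.14, 9.97), 2))    # Pollux: about 1.15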

ENERGY EXCESS IN NUCLEI.

Nuclear fusion in stars takes elements with a high rest mass per nucleon and transforms them into
elements with a low rest mass per nucleon (proton or neutron). In the table given below, the rest mass excess
per nucleon is given for elements of different charge and atomic number. The values are given as an energy
difference in MeV from the rest mass energy per nucleon of carbon-12.
In this table, the nuclei with charge and atomic numbers that are an integer multiple of helium-4's
charge and atomic number are highlighted in red. Of the remaining atoms, those that are intermediate states of
the proton-proton process of converting hydrogen into helium are highlighted in green. The remaining atoms
that are involved in the CNO cycle for the conversion of hydrogen into helium are highlighted in blue.
This table highlights several points about the elements that impact nuclear fusion within stars. First, the
largest excess is in hydrogen, at 7.3 MeV, and the greatest energy release is in the conversion of hydrogen into
helium, which releases 6.7 MeV of energy per nucleon. The next point is that elements that are multiples of the
helium charge and atomic number are energetically favored over other isotopic states. The final point is that the
rest mass energy per nucleon in beryllium-8 (0.618 MeV) is slightly greater than that in helium-4 (0.606 MeV), which
makes beryllium-8 unstable to decay into two helium-4 nuclei.
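As a worked check added here (not part of the source), the hydrogen-to-helium figure quoted above follows directly from the per-nucleon excesses in the table:

excess_H = 7.289    # MeV per nucleon for hydrogen-1, from the table
excess_He4 = 0.606  # MeV per nucleon for helium-4, from the table

per_nucleon = excess_H - excess_He4
print(round(per_nucleon, 2))       # about 6.68 MeV per nucleon, the ~6.7 MeV quoted in the text
print(round(4 * per_nucleon, 1))   # about 26.7 MeV released per helium-4 nucleus assembled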

Rest Mass Excess


A H He Li Be B C N O
1 7.289
2 6.567
3 4.983 4.977
4 7.055 0.606
5 6.218 2.291 2.336
6 2.933 2.348 3.063
7 3.719 2.130 2.253 4.000
8 4.000 2.618 0.618 2.865
9 2.773 1.261 1.380 3.221
10 1.261 1.205 1.566
11 1.835 0.789 0.968
12 1.114 0.000 1.447
13 1.274 0.240 0.411

14 0.216 0.205 0.572
15 0.658 0.007 0.191
16 0.355 0.296 0.682
17 0.463 0.048 0.115
18 0.043 0.048 0.296
19 0.175 0.078 0.092
20 0.190 0.001 0.352
21 0.002 0.273
22 0.365
23 0.224
24 0.248
A H He Li Be B C N O F Ne

Rest mass excess.

Source; Astrophysics spectator.

ELEMENTAL ABUNDANCE.

The observed elemental abundances in the Solar System are given in the table at the bottom of this page
as the number of atoms of the listed element divided by the number of hydrogen atoms. The error in the
measured elemental abundance is given as a percentage of the derived value. The condensation temperature is
given for those elements that are known to condense. All values given in the following table are derived from
Newsom (1995).[1]
The abundances of elements in the solar system carry the signature of thermonuclear fusion within
stars. While the most abundant elements are hydrogen (H) and helium (He), reflecting the equilibrium
composition of the early universe, the high abundances of carbon (C), oxygen (O), neon (Ne), and magnesium
(Mg) reflect the stages of nuclear fusion involving the conversion of helium into heavier elements, while the
high abundance of nitrogen (N) reflects the CNO process of hydrogen fusion, which converts carbon and
oxygen into nitrogen.
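For readers used to the astronomers' logarithmic scale, the number fractions in the table below convert via A(X) = 12 + log10(N_X/N_H), with A(H) = 12 by definition. A small Python illustration added here (the helper name log_abundance is chosen for this example, not taken from the source):

import math

def log_abundance(fraction_of_H):
    """Logarithmic abundance A(X) = 12 + log10(N_X / N_H)."""
    return 12 + math.log10(fraction_of_H)

print(round(log_abundance(9.75e-2), 2))   # helium: about 10.99
print(round(log_abundance(8.53e-4), 2))   # oxygen: about 8.93
print(round(log_abundance(4.68e-5), 2))   # iron (solar value in the table): about 7.67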

Solar Abundances

                      Solar                      Asteroid
Z    Chemical         Fraction       Error       Fraction       Error     T
     Symbol           of H           (%)         of H           (%)       (K)

1    H                1
2    He               9.75×10⁻²      8.4
3    Li               1.45×10⁻¹¹     30.0        2.05×10⁻⁹      9.2       1225
4    Be               1.41×10⁻¹¹     26.0        2.62×10⁻¹¹     9.5
5    B                4.00×10⁻¹⁰     100.0       7.60×10⁻¹⁰     10.0
6    C                3.62×10⁻⁴      9.6
7    N                1.12×10⁻⁴      9.6
8    O                8.53×10⁻⁴      8.4
9    F                3.02×10⁻⁸      100.0       3.63×10⁻⁸      15.0      736
10   Ne               1.23×10⁻⁴      14.0
11   Na               2.14×10⁻⁶      7.0         2.06×10⁻⁶      7.1       970
12   Mg               3.80×10⁻⁵      12.0        3.85×10⁻⁵      3.8       1340
13   Al               2.95×10⁻⁶      17.0        3.04×10⁻⁶      3.6       1650
14   Si               3.55×10⁻⁵      12.0        3.58×10⁻⁵      4.4       1311
15   P                2.82×10⁻⁷      10.0        3.73×10⁻⁷      10.0      1151
16   S                1.62×10⁻⁵      15.0        1.85×10⁻⁵      13.0      648
17   Cl               3.00×10⁻⁷      100.0       1.88×10⁻⁷      15.0      863
18   Ar               3.62×10⁻⁶      6.0
19   K                1.32×10⁻⁷      35.0        1.35×10⁻⁷      7.7       1000
20   Ca               2.29×10⁻⁶      5.0         2.19×10⁻⁶      7.1       1518
21   Sc               1.26×10⁻⁹      23.0        1.23×10⁻⁹      8.6       1644
22   Ti               9.77×10⁻⁸      5.0         8.60×10⁻⁸      5.0       1549
23   V                1.00×10⁻⁸      5.0         1.05×10⁻⁸      5.1       1450
24   Cr               4.68×10⁻⁷      7.0         4.84×10⁻⁷      7.6       1277
25   Mn               2.45×10⁻⁷      7.0         3.42×10⁻⁷      9.6       1190
26   Fe               4.68×10⁻⁵      7.0         3.23×10⁻⁵      2.7       1336
27   Co               8.32×10⁻⁸      10.0        8.06×10⁻⁸      6.6       1351
28   Ni               1.78×10⁻⁶      10.0        1.77×10⁻⁶      5.1       1354
29   Cu               1.62×10⁻⁸      10.0        1.87×10⁻⁸      11.0      1037
30   Zn               3.98×10⁻⁸      20.0        4.52×10⁻⁸      4.4       825
31   Ga               7.59×10⁻¹⁰     26.0        1.35×10⁻⁹      6.9       918
32   Ge               2.57×10⁻⁹      38.0        4.27×10⁻⁹      9.6       825

[1] Newsom, Horton E., "Composition of the Solar System, Planets, Meteorites, and Major Terrestrial
Reservoirs." In Global Earth Physics: A Handbook of Physical Constants, edited by T. J. Ahrens, 159–189.
AGU Reference Shelf, No. 1. Washington: American Geophysical Union, 1995.
Source; Astrophysics spectator.

DISCUSSION OF HUMAN SKIN EXPERIMENT ON THE FORMATION OF STARS.

DISCUSSION ON STAR FORMATION.


In nuclear physics and nuclear chemistry, nuclear fission is either a nuclear reaction or a radioactive
decay process in which the nucleus of an atom splits into smaller parts (lighter nuclei). The fission process often
produces free neutrons and gamma photons, and releases a very large amount of energy even by the energetic
standards of radioactive decay.
Nuclear fission of heavy elements was discovered on December 17, 1938 by German Otto Hahn and his
assistant Fritz Strassmann, and explained theoretically in January 1939 by Lise Meitner and her nephew Otto
Robert Frisch. Frisch named the process by analogy with biological fission of living cells. It is an exothermic
reaction which can release large amounts of energy both as electromagnetic radiation and as kinetic energy of
the fragments (heating the bulk material where fission takes place). In order for fission to produce energy, the
total binding energy of the resulting elements must be greater than that of the starting element.

Fission is a form of nuclear transmutation because the resulting fragments are not the same element as the
original atom. The two nuclei produced are most often of comparable but slightly different sizes, typically with a
mass ratio of products of about 3 to 2, for common fissile isotopes.[1][2] Most fissions are binary fissions (producing
two charged fragments), but occasionally (2 to 4 times per 1000 events), three positively charged fragments are
produced, in a ternary fission. The smallest of these fragments in ternary processes ranges in size from a proton to
an argon nucleus.

Apart from fission induced by a neutron, harnessed and exploited by humans, a natural form of
spontaneous radioactive decay (not requiring a neutron) is also referred to as fission, and occurs especially in very
high-mass-number isotopes. Spontaneous fission was discovered in 1940
by Flyorov, Petrzhak and Kurchatov[3] in Moscow, when they decided to confirm that, without bombardment by
neutrons, the fission rate of uranium was indeed negligible, as predicted by Niels Bohr; it was not.
The unpredictable composition of the products (which vary in a broad probabilistic and somewhat chaotic manner)
distinguishes fission from purely quantum-tunneling processes such as proton emission, alpha decay, and cluster
decay, which give the same products each time. Nuclear fission produces energy for nuclear power and drives the
explosion of nuclear weapons. Both uses are possible because certain substances called nuclear fuels undergo
fission when struck by fission neutrons, and in turn emit neutrons when they break apart. This makes possible a
self-sustaining nuclear chain reaction that releases energy at a controlled rate in a nuclear reactor or at a very
rapid uncontrolled rate in a nuclear weapon.
The amount of free energy contained in nuclear fuel is millions of times the amount of free energy contained in a
similar mass of chemical fuel such as gasoline, making nuclear fission a very dense source of energy. The products
of nuclear fission, however, are on average far more radioactive than the heavy elements which are normally
fissioned as fuel, and remain so for significant amounts of time, giving rise to a nuclear waste problem. Concerns

over nuclear waste accumulation and over the destructive potential of nuclear weapons are a counterbalance to
the peaceful desire to use fission as an energy source, and give rise to ongoing political debate over nuclear
power.

Physical overview
Mechanism

A visual representation of an induced nuclear fission event where a slow-moving neutron is absorbed by the
nucleus of a uranium-235 atom, which fissions into two fast-moving lighter elements (fission products) and
additional neutrons. Most of the energy released is in the form of the kinetic velocities of the fission products and
the neutrons.

Fission product yields by mass for thermal neutron fission of U-235, Pu-239, a combination of the two typical of
current nuclear power reactors, and U-233 used in the thorium cycle.
Source; JWB at [Link]

Nuclear fission can occur without neutron bombardment as a type of radioactive decay. This type of fission
(called spontaneous fission) is rare except in a few heavy isotopes. In engineered nuclear devices, essentially all
nuclear fission occurs as a "nuclear reaction" a bombardment-driven process that results from the collision of
two subatomic particles. In nuclear reactions, a subatomic particle collides with an atomic nucleus and causes
changes to it. Nuclear reactions are thus driven by the mechanics of bombardment, not by the relatively
constant exponential decay and half-life characteristic of spontaneous radioactive processes.
Many types of nuclear reactions are currently known. Nuclear fission differs importantly from other types of
nuclear reactions, in that it can be amplified and sometimes controlled via a nuclear chain reaction (one type of
general chain reaction). In such a reaction, free neutrons released by each fission event can trigger yet more
events, which in turn release more neutrons and cause more fissions.
The chemical element isotopes that can sustain a fission chain reaction are called nuclear fuels, and are said to
be fissile. The most common nuclear fuels are 235U (the isotope of uranium with an atomic mass of 235 and of
use in nuclear reactors) and 239Pu (the isotope of plutonium with an atomic mass of 239). These fuels break apart
into a bimodal range of chemical elements with atomic masses centering near 95 and 135 u (fission products).
Most nuclear fuels undergo spontaneous fission only very slowly, decaying instead mainly via an alpha-beta
decay chain over periods of millennia to eons. In a nuclear reactor or nuclear weapon, the overwhelming
majority of fission events are induced by bombardment with another particle, a neutron, which is itself produced by
prior fission events.
Nuclear fissions in fissile fuels are the result of the nuclear excitation energy produced when a fissile nucleus
captures a neutron. This energy, resulting from the neutron capture, is a result of the attractive nuclear
force acting between the neutron and nucleus. It is enough to deform the nucleus into a double-lobed "drop," to
the point that nuclear fragments exceed the distances at which the nuclear force can hold two groups of charged
nucleons together and, when this happens, the two fragments complete their separation and then are driven
further apart by their mutually repulsive charges, in a process which becomes irreversible with greater and greater
distance. A similar process occurs in fissionable isotopes (such as uranium-238), but in order to fission, these
isotopes require additional energy provided by fast neutrons (such as those produced by nuclear
fusion in thermonuclear weapons).
The liquid drop model of the atomic nucleus predicts equal-sized fission products as an outcome of nuclear
deformation. The more sophisticated nuclear shell model is needed to mechanistically explain the route to the
more energetically favorable outcome, in which one fission product is slightly smaller than the other. A theory of
the fission based on shell model has been formulated by Maria Goeppert Mayer.
The most common fission process is binary fission, and it produces the fission products noted above, at 95±15 and
135±15 u. However, the binary process happens merely because it is the most probable. In anywhere from 2 to 4
fissions per 1000 in a nuclear reactor, a process called ternary fission produces three positively charged fragments
(plus neutrons) and the smallest of these may range from so small a charge and mass as a proton (Z=1), to as large a
fragment as argon (Z=18). The most common small fragments, however, are composed of 90% helium-4 nuclei with
more energy than alpha particles from alpha decay (so-called "long range alphas" at ~ 16 MeV), plus helium-6
nuclei, and tritons (the nuclei of tritium). The ternary process is less common, but still ends up producing significant
helium-4 and tritium gas buildup in the fuel rods of modern nuclear reactors. [4]
Energetics
Input

The stages of binary fission in a liquid drop model.

Energy input deforms the nucleus into a fat "cigar" shape, then a "peanut" shape, followed by binary fission as the
two lobes exceed the short-range nuclear force attraction distance, then are pushed apart and away by their
electrical charge. In the liquid drop model, the two fission fragments are predicted to be the same size. The nuclear
shell model allows for them to differ in size, as usually experimentally observed.
The fission of a heavy nucleus requires a total input energy of about 7 to 8 million electron volts (MeV) to initially
overcome the nuclear force which holds the nucleus in a spherical or nearly spherical shape, and from there,
deform it into a two-lobed ("peanut") shape in which the lobes are able to continue to separate from each other,
pushed by their mutual positive charge, in the most common process of binary fission (two positively charged
fission products + neutrons). Once the nuclear lobes have been pushed to a critical distance, beyond which the
short range strong force can no longer hold them together, the process of their separation proceeds from the
energy of the (longer range) electromagnetic repulsion between the fragments. The result is two fission fragments
moving away from each other, at high energy.
About 6 MeV of the fission-input energy is supplied by the simple binding of an extra neutron to the heavy nucleus
via the strong force; however, in many fissionable isotopes, this amount of energy is not enough for fission.
Uranium-238, for example, has a near-zero fission cross section for neutrons of less than one MeV energy. If no
additional energy is supplied by any other mechanism, the nucleus will not fission, but will merely absorb the
neutron, as happens when U-238 absorbs slow and even some fraction of fast neutrons, to become U-239. The
remaining energy to initiate fission can be supplied by two other mechanisms: one of these is more kinetic energy
of the incoming neutron, which is increasingly able to fission a fissionable heavy nucleus as it exceeds a kinetic
energy of one MeV or more (so-called fast neutrons). Such high energy neutrons are able to fission U-238 directly
(see thermonuclear weapon for application, where the fast neutrons are supplied by nuclear fusion). However,
this process cannot happen to a great extent in a nuclear reactor, as too small a fraction of the fission neutrons
produced by any type of fission have enough energy to efficiently fission U-238 (fission neutrons have
a mode energy of 2 MeV, but a median of only 0.75 MeV, meaning that half of them fall below even this insufficient
energy). Among the heavy actinide elements, however, those isotopes that have an odd number of neutrons (such
as U-235 with 143 neutrons) bind an extra neutron with an additional 1 to 2 MeV of energy over an isotope of the
same element with an even number of neutrons (such as U-238 with 146 neutrons). This extra binding energy is
made available as a result of the mechanism of neutron pairing effects. This extra energy results from the Pauli
exclusion principle allowing an extra neutron to occupy the same nuclear orbital as the last neutron in the nucleus,
so that the two form a pair. In such isotopes, therefore, no neutron kinetic energy is needed, for all the necessary
energy is supplied by absorption of any neutron, either of the slow or fast variety (the former are used in
moderated nuclear reactors, and the latter are used in fast neutron reactors, and in weapons). As noted above,
the subgroup of fissionable elements that may be fissioned efficiently with their own fission neutrons (thus
potentially causing a nuclear chain reaction in relatively small amounts of the pure material) are termed "fissile."
Examples of fissile isotopes are U-235 and plutonium-239.
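The fissile/fissionable distinction described in this paragraph can be made concrete with a rough numerical comparison. The sketch below (Python) checks whether neutron capture alone supplies enough excitation energy to exceed the fission barrier of the compound nucleus; the excitation and barrier values are approximate, commonly quoted textbook numbers used here as assumptions for illustration, not data taken from this text.

    # Rough illustration of why U-235 is fissile but U-238 is only fissionable.
    # Excitation energies from neutron capture and fission barriers are approximate
    # textbook values (assumptions for illustration), in MeV.
    cases = {
        # reaction: (excitation gained by absorbing a slow neutron, approx. fission barrier)
        "U-235 + n -> U-236": (6.5, 6.2),
        "U-238 + n -> U-239": (4.8, 6.6),
    }
    for reaction, (excitation, barrier) in cases.items():
        extra_needed = max(0.0, barrier - excitation)
        if extra_needed == 0:
            verdict = "fissions even with slow (thermal) neutrons"
        else:
            # rough; the observed effective threshold for U-238 is about 1 MeV
            verdict = f"needs roughly {extra_needed:.1f} MeV of neutron kinetic energy (fast neutrons)"
        print(f"{reaction}: {verdict}")

This is only an energy-balance sketch; in practice cross sections, not just barrier heights, determine how efficiently a given neutron energy induces fission.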
Output
Typical fission events release about two hundred million eV (200 MeV) of energy for each fission event. The exact
isotope which is fissioned, and whether or not it is fissionable or fissile, has only a small impact on the amount of
energy released. This can be easily seen by examining the curve of binding energy (image below), and noting that
the average binding energy of the actinide nuclides beginning with uranium is around 7.6 MeV per nucleon. Looking
further left on the curve of binding energy, where the fission products cluster, it is easily observed that the binding
energy of the fission products tends to center around 8.5 MeV per nucleon. Thus, in any fission event of an isotope
in the actinide's range of mass, roughly 0.9 MeV is released per nucleon of the starting element. The fission of U-235
by a slow neutron yields nearly identical energy to the fission of U-238 by a fast neutron. This energy release profile
holds true for thorium and the various minor actinides as well.
By contrast, most chemical oxidation reactions (such as burning coal or TNT) release at most a few eV per event.
So, nuclear fuel contains at least ten million times more usable energy per unit mass than does chemical fuel.
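As a rough check of this mass-for-mass comparison, the short sketch below (Python) converts ~200 MeV per fission into energy per kilogram of U-235 and compares it with a chemical explosive. The TNT energy density used is the standard convention, and the physical constants are well-known values; none of the figures come from measurements reported in this text.

    # Energy density of fission fuel versus a chemical explosive (order-of-magnitude check).
    AVOGADRO = 6.022e23          # atoms per mole
    MEV_TO_J = 1.602e-13         # joules per MeV
    energy_per_fission_J = 200 * MEV_TO_J          # ~200 MeV per fission (from the text)
    atoms_per_kg_U235 = AVOGADRO * 1000 / 235      # (grams per kg / molar mass) * Avogadro
    energy_per_kg_fission = energy_per_fission_J * atoms_per_kg_U235   # ~8e13 J/kg
    energy_per_kg_TNT = 4.184e6                    # J/kg, conventional "ton of TNT" value
    print(f"Fission: {energy_per_kg_fission:.2e} J/kg")
    print(f"TNT    : {energy_per_kg_TNT:.2e} J/kg")
    print(f"Ratio  : {energy_per_kg_fission / energy_per_kg_TNT:.1e}")   # ~2e7

The ratio comes out in the tens of millions, consistent with the "at least ten million times" figure quoted above.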
The energy of nuclear fission is released as kinetic energy of the fission products and fragments, and
as electromagnetic radiation in the form of gamma rays; in a nuclear reactor, the energy is converted to heat as
the particles and gamma rays collide with the atoms that make up the reactor and its working fluid,
usually water or occasionally heavy water or molten salts.
When a uranium nucleus fissions into two daughter nuclei fragments, about 0.1 percent of the mass of the uranium
nucleus[7] appears as the fission energy of ~200 MeV. For uranium-235 (total mean fission energy 202.5 MeV),
typically ~169 MeV appears as the kinetic energy of the daughter nuclei, which fly apart at about 3% of the speed
of light, due to Coulomb repulsion. Also, an average of 2.5 neutrons are emitted, with a mean kinetic energy per
neutron of ~2 MeV (total of 4.8 MeV).[8] The fission reaction also releases ~7 MeV in prompt gamma ray photons.
The latter figure means that a nuclear fission explosion or criticality accident emits about 3.5% of its energy as
gamma rays, less than 2.5% of its energy as fast neutrons (total of both types of radiation ~ 6%), and the rest as
kinetic energy of fission fragments (this appears almost immediately when the fragments impact surrounding
matter, as simple heat). In an atomic bomb, this heat may serve to raise the temperature of the bomb core to
100 million kelvin and cause secondary emission of soft X-rays, which convert some of this energy to ionizing
radiation. However, in nuclear reactors, the fission fragment kinetic energy remains as low-temperature heat,
which itself causes little or no ionization.
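The approximate energy budget quoted in this paragraph can be tabulated and cross-checked. The sketch below (Python) simply reuses the figures already given in the passage to recover the ~3.5% gamma and under-2.5% fast-neutron fractions.

    # Approximate prompt energy budget of U-235 fission (values quoted in the text, in MeV).
    budget = {
        "kinetic energy of fission fragments": 169.0,
        "kinetic energy of prompt neutrons (about 2.5 neutrons at ~2 MeV each)": 4.8,
        "prompt gamma rays": 7.0,
    }
    total = 202.5   # total mean fission energy quoted for U-235, including delayed contributions
    for label, mev in budget.items():
        print(f"{label}: {mev:6.1f} MeV ({100 * mev / total:4.1f}% of total)")
    print(f"prompt total: {sum(budget.values()):.1f} MeV of {total} MeV")

The prompt total of roughly 181 MeV matches the figure given a few paragraphs below for the promptly released fraction.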
So-called neutron bombs (enhanced radiation weapons) have been constructed which release a larger fraction of
their energy as ionizing radiation (specifically, neutrons), but these are all thermonuclear devices which rely on the
nuclear fusion stage to produce the extra radiation. The energy dynamics of pure fission bombs always remain at
about 6% yield of the total in radiation, as a prompt result of fission.
The total prompt fission energy amounts to about 181 MeV, or ~ 89% of the total energy which is eventually
released by fission over time. The remaining ~ 11% is released in beta decays which have various half-lives, but
begin as a process in the fission products immediately; and in delayed gamma emissions associated with these beta
decays. For example, in uranium-235 this delayed energy is divided into about 6.5 MeV in betas, 8.8 MeV
in antineutrinos (released at the same time as the betas), and finally, an additional 6.3 MeV in delayed gamma
emission from the excited beta-decay products (for a mean total of ~10 gamma ray emissions per fission, in all).
Thus, about 6.5% of the total energy of fission is released some time after the event, as non-prompt or delayed
ionizing radiation, and the delayed ionizing energy is about evenly divided between gamma and beta ray energy.
In a reactor that has been operating for some time, the radioactive fission products will have built up to steady
state concentrations such that their rate of decay is equal to their rate of formation, so that their fractional total
contribution to reactor heat (via beta decay) is the same as these radioisotopic fractional contributions to the
energy of fission. Under these conditions, the 6.5% of fission which appears as delayed ionizing radiation (delayed
gammas and betas from radioactive fission products) contributes to the steady-state reactor heat production under
power. It is this output fraction which remains when the reactor is suddenly shut down (undergoes scram). For this
reason, the reactor decay heat output begins at 6.5% of the full reactor steady state fission power, once the
reactor is shut down. However, within hours, due to decay of these isotopes, the decay power output is far less.
See decay heat for detail.
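To put the 6.5% decay-heat figure in perspective, here is a small sketch (Python). The 3000 MW thermal rating is a hypothetical example value, and the power-law fit used for the decline after shutdown is the commonly quoted empirical Way-Wigner approximation, included here only as a rough illustration, not a result stated in this text.

    # Decay heat immediately after shutdown, and a rough estimate of its decline.
    P_thermal_MW = 3000.0                  # hypothetical full-power thermal output
    decay_fraction_at_scram = 0.065        # ~6.5% of full power (from the text)
    print(f"Decay heat at shutdown: {P_thermal_MW * decay_fraction_at_scram:.0f} MW")

    def decay_heat_fraction(t_s, T_s):
        """Approximate decay-heat fraction t_s seconds after shutdown, for a reactor
        operated for T_s seconds (empirical Way-Wigner fit; rough by design)."""
        return 0.0622 * (t_s ** -0.2 - (t_s + T_s) ** -0.2)

    one_year = 3.15e7                      # assumed prior operating time, seconds
    for t, label in ((3600, "1 hour"), (86400, "1 day"), (604800, "1 week")):
        frac = decay_heat_fraction(t, one_year)
        print(f"{label:>7} after scram: ~{100 * frac:.1f}% of full power "
              f"(~{P_thermal_MW * frac:.0f} MW)")

The point of the sketch is the trend: from several percent of full power at shutdown, decay heat falls to around one percent within hours and well below that within days, which is why it still must be removed long after the chain reaction stops.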
The remainder of the delayed energy (8.8 MeV/202.5 MeV = 4.3% of total fission energy) is emitted as
antineutrinos, which as a practical matter, are not considered "ionizing radiation." The reason is that energy
released as antineutrinos is not captured by the reactor material as heat, and escapes directly through all materials
(including the Earth) at nearly the speed of light, and into interplanetary space (the amount absorbed is minuscule).
Neutrino radiation is ordinarily not classed as ionizing radiation, because it is almost entirely not absorbed and
therefore does not produce effects (although the very rare neutrino event is ionizing). Almost all of the rest of the
radiation (6.5% delayed beta and gamma radiation) is eventually converted to heat in a reactor core or its shielding.
Some processes involving neutrons are notable for absorbing or finally yielding energy; for example, neutron
kinetic energy does not yield heat immediately if the neutron is captured by a uranium-238 atom to breed
plutonium-239, but this energy is emitted if the plutonium-239 is later fissioned. On the other hand, so-
called delayed neutrons emitted as radioactive decay products with half-lives up to several minutes, from fission-
daughters, are very important to reactor control, because they give a characteristic "reaction" time for the total
nuclear reaction to double in size, if the reaction is run in a "delayed-critical" zone which deliberately relies on
these neutrons for a supercritical chain-reaction (one in which each fission cycle yields more neutrons than it
absorbs). Without their existence, the nuclear chain-reaction would be prompt critical and increase in size faster
than it could be controlled by human intervention. In this case, the first experimental atomic reactors would have
run away to a dangerous and messy "prompt critical reaction" before their operators could have manually shut
them down (for this reason, designer Enrico Fermi included radiation-counter-triggered control rods, suspended
by electromagnets, which could automatically drop into the center of Chicago Pile-1). If these delayed neutrons
are captured without producing fissions, they produce heat as well.
Product nuclei and binding energy
Main articles: fission product and fission product yield
In fission there is a preference to yield fragments with even proton numbers, which is called the odd-even effect on
the fragments' charge distribution. However, no odd-even effect is observed on fragment mass
number distribution. This result is attributed to nucleon pair breaking.
In nuclear fission events the nuclei may break into any combination of lighter nuclei, but the most common event is
not fission to equal mass nuclei of about mass 120; the most common event (depending on isotope and process) is
a slightly unequal fission in which one daughter nucleus has a mass of about 90 to 100 u and the other the
remaining 130 to 140 u.[10] Unequal fissions are energetically more favorable because this allows one product to be
closer to the energetic minimum near mass 60 u (only a quarter of the average fissionable mass), while the other
nucleus with mass 135 u is still not far out of the range of the most tightly bound nuclei (another statement of this,
is that the atomic binding energy curve is slightly steeper to the left of mass 120 u than to the right of it).
Origin of the active energy and the curve of binding energy

The "curve of binding energy": A graph of binding energy per nucleon of common isotopes.
Nuclear fission of heavy elements produces exploitable energy because the specific binding energy (binding energy
per mass) of intermediate-mass nuclei with atomic numbers and atomic masses close to 62Ni and 56Fe is greater
than the nucleon-specific binding energy of very heavy nuclei, so that energy is released when heavy nuclei are
broken apart. The total rest mass of the fission products (Mp) from a single reaction is less than the mass of the
original fuel nucleus (M). The excess mass Δm = M − Mp is the invariant mass of the energy that is released
as photons (gamma rays) and kinetic energy of the fission fragments, according to the mass–energy
equivalence formula E = Δmc².
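As a quick consistency check of this mass-energy bookkeeping, the sketch below (Python, using only well-known constants) converts the ~200 MeV released per fission back into a mass defect and compares it with the mass of the fissioning nucleus.

    # Mass defect corresponding to ~200 MeV of fission energy (E = Δm c²).
    MEV_PER_U = 931.494          # energy equivalent of one atomic mass unit, in MeV
    q_fission_MeV = 200.0        # typical energy release per fission (from the text)
    delta_m_u = q_fission_MeV / MEV_PER_U          # ≈ 0.21 u
    parent_mass_u = 236.0        # U-235 plus the captured neutron, roughly
    print(f"Mass defect: {delta_m_u:.3f} u "
          f"({100 * delta_m_u / parent_mass_u:.2f}% of the fissioning nucleus)")

The result, about 0.09% of the nuclear mass, agrees with the "about 0.1 percent" figure quoted earlier for uranium fission.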
The variation in specific binding energy with atomic number is due to the interplay of the two
fundamental forces acting on the component nucleons (protons and neutrons) that make up the nucleus. Nuclei
are bound by an attractive nuclear force between nucleons, which overcomes the electrostatic
repulsion between protons. However, the nuclear force acts only over relatively short ranges (a
few nucleon diameters), since it follows an exponentially decaying Yukawa potential which makes it insignificant
at longer distances. The electrostatic repulsion is of longer range, since it decays by an inverse-square rule, so that
nuclei larger than about 12 nucleons in diameter reach a point that the total electrostatic repulsion overcomes the
nuclear force and causes them to be spontaneously unstable. For the same reason, larger nuclei (more than about
eight nucleons in diameter) are less tightly bound per unit mass than are smaller nuclei; breaking a large nucleus
into two or more intermediate-sized nuclei releases energy. The origin of this energy is the nuclear force, which
acts more efficiently in intermediate-sized nuclei because each nucleon has more neighbors within the short range
of this attractive force. The fragments are therefore more tightly bound per nucleon than the original heavy
nucleus, and the difference in binding energy is set free.
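The shape of the binding energy curve discussed here can be reproduced approximately with the semi-empirical (liquid drop) mass formula. The sketch below (Python) uses one commonly quoted coefficient set, which is an assumption for illustration; the exact coefficients vary between published fits.

    # Semi-empirical mass formula: binding energy per nucleon (rough liquid-drop estimate).
    def binding_energy_MeV(A, Z):
        aV, aS, aC, aA, aP = 15.75, 17.8, 0.711, 23.7, 11.2   # one common fit, MeV (approximate)
        B = (aV * A - aS * A ** (2 / 3)
             - aC * Z * (Z - 1) / A ** (1 / 3)
             - aA * (A - 2 * Z) ** 2 / A)
        if A % 2 == 0:                                        # pairing term
            B += aP / A ** 0.5 if Z % 2 == 0 else -aP / A ** 0.5
        return B

    for name, A, Z in [("Fe-56", 56, 26), ("Zr-95", 95, 40), ("Xe-139", 139, 54), ("U-235", 235, 92)]:
        print(f"{name:>6}: ~{binding_energy_MeV(A, Z) / A:.2f} MeV per nucleon")
    # Fission products near ~8.3-8.7 MeV per nucleon versus ~7.6 MeV per nucleon for uranium
    # give roughly 0.9 MeV per nucleon, i.e. ~200 MeV per fission, as stated above.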
Also because of the short range of the strong binding force, large stable nuclei must contain proportionally more
neutrons than do the lightest elements, which are most stable with a 1 to 1 ratio of protons and neutrons. Nuclei
which have more than 20 protons cannot be stable unless they have more than an equal number of neutrons. Extra
neutrons stabilize heavy elements because they add to strong-force binding (which acts between all nucleons)
without adding to proton-proton repulsion. Fission products have, on average, about the same ratio of neutrons
and protons as their parent nucleus, and are therefore usually unstable to beta decay (which changes neutrons to
protons) because they have proportionally too many neutrons compared to stable isotopes of similar mass.
This tendency for fission product nuclei to beta-decay is the fundamental cause of the problem of radioactive high
level waste from nuclear reactors. Fission products tend to be beta emitters, emitting fast-moving electrons to
conserve electric charge, as excess neutrons convert to protons in the fission-product atoms. See Fission
products (by element) for a description of fission products sorted by element.
Chain reactions

A schematic nuclear fission chain reaction. 1. A uranium-235 atom absorbs a neutron and fissions into two new
atoms (fission fragments), releasing three new neutrons and some binding energy. 2. One of those neutrons is
absorbed by an atom of uranium-238 and does not continue the reaction. Another neutron is simply lost and does
not collide with anything, also not continuing the reaction. However, the one neutron does collide with an atom of
uranium-235, which then fissions and releases two neutrons and some binding energy. 3. Both of those neutrons
collide with uranium-235 atoms, each of which fissions and releases between one and three neutrons, which can
then continue the reaction.
Nuclear chain reaction
Several heavy elements, such as uranium, thorium, and plutonium, undergo both spontaneous fission, a form
of radioactive decay and induced fission, a form of nuclear reaction. Elemental isotopes that undergo induced
fission when struck by a free neutron are called fissionable; isotopes that undergo fission when struck by a slow-
moving thermal neutron are also called fissile. A few particularly fissile and readily obtainable isotopes
(notably 233U, 235U and 239Pu) are called nuclear fuels because they can sustain a chain reaction and can be
obtained in large enough quantities to be useful.
All fissionable and fissile isotopes undergo a small amount of spontaneous fission which releases a few free
neutrons into any sample of nuclear fuel. Such neutrons would escape rapidly from the fuel and become free
neutrons, with a mean lifetime of about 15 minutes before decaying to protons and beta particles. However,
neutrons almost invariably impact and are absorbed by other nuclei in the vicinity long before this happens (newly
created fission neutrons move at about 7% of the speed of light, and even moderated neutrons move at about
8 times the speed of sound). Some neutrons will impact fuel nuclei and induce further fissions, releasing yet more
neutrons. If enough nuclear fuel is assembled in one place, or if the escaping neutrons are sufficiently contained,
then these freshly emitted neutrons outnumber the neutrons that escape from the assembly, and a sustained
nuclear chain reaction will take place.
An assembly that supports a sustained nuclear chain reaction is called a critical assembly or, if the assembly is
almost entirely made of a nuclear fuel, a critical mass. The word "critical" refers to a cusp in the behavior of
the differential equation that governs the number of free neutrons present in the fuel: if less than a critical mass is
present, then the amount of neutrons is determined by radioactive decay, but if a critical mass or more is present,
then the amount of neutrons is controlled instead by the physics of the chain reaction. The actual mass of a critical
mass of nuclear fuel depends strongly on the geometry and surrounding materials.
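The "cusp" in behavior at criticality can be illustrated with a toy generation-by-generation model. The sketch below (Python) ignores delayed neutrons, geometry, and neutron transport entirely, and simply multiplies the neutron population by an effective multiplication factor k each generation; the specific k values and starting population are arbitrary example numbers, not data from this text.

    # Toy model: neutron population over successive fission generations.
    def population_after(generations, k, n0=1000):
        """Multiply the neutron count by the effective multiplication factor k each generation."""
        n = float(n0)
        for _ in range(generations):
            n *= k
        return n

    for k in (0.95, 1.00, 1.05):
        label = "subcritical" if k < 1 else "critical" if k == 1 else "supercritical"
        print(f"k = {k:.2f} ({label}): after 100 generations, "
              f"{population_after(100, k):.3g} neutrons from 1000")

Even a few percent above or below k = 1 leads, after a hundred generations, to either a vanishing or an enormously amplified neutron population, which is the qualitative point of the critical-mass concept.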
Not all fissionable isotopes can sustain a chain reaction. For example, 238U, the most abundant form of uranium, is
fissionable but not fissile: it undergoes induced fission when impacted by an energetic neutron with over 1 MeV of
kinetic energy. However, too few of the neutrons produced by 238U fission are energetic enough to induce further
fissions in 238U, so no chain reaction is possible with this isotope. Instead, bombarding 238U with slow neutrons
causes it to absorb them (becoming 239U) and decay by beta emission to 239Np which then decays again by the
same process to 239Pu; that process is used to manufacture 239Pu in breeder reactors. In-situ plutonium production
also contributes to the neutron chain reaction in other types of reactors after sufficient plutonium-239 has been
produced, since plutonium-239 is also a fissile element which serves as fuel. It is estimated that up to half of the
power produced by a standard "non-breeder" reactor is produced by the fission of plutonium-239 produced in
place, over the total life-cycle of a fuel load.
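The breeding route from 238U to 239Pu described above proceeds through two beta decays. The sketch below (Python) integrates that two-step chain numerically; the half-lives used (about 23.5 minutes for 239U and about 2.36 days for 239Np) are standard approximate values, quoted here as assumptions rather than figures from this text.

    # Two-step beta-decay chain after neutron capture: U-239 -> Np-239 -> Pu-239.
    import math

    HALF_LIFE_U239_H = 23.45 / 60      # hours (about 23.5 minutes, approximate)
    HALF_LIFE_NP239_H = 2.356 * 24     # hours (about 2.36 days, approximate)
    lam_u = math.log(2) / HALF_LIFE_U239_H
    lam_np = math.log(2) / HALF_LIFE_NP239_H

    u, np_, pu = 1.0, 0.0, 0.0         # start with a unit amount of freshly captured U-239
    dt = 0.01                          # time step, hours
    t = 0.0
    while t < 14 * 24:                 # follow the chain for two weeks
        du = -lam_u * u * dt
        dnp = (lam_u * u - lam_np * np_) * dt
        u, np_, pu = u + du, np_ + dnp, pu + lam_np * np_ * dt
        t += dt
    print(f"After 14 days, ~{100 * pu:.0f}% of the captured atoms have become Pu-239")

Within roughly two weeks of a capture event, essentially all of the intermediate nuclei have decayed to plutonium-239, which is why in-place plutonium buildup becomes significant over the life of a fuel load.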
Fissionable, non-fissile isotopes can be used as fission energy source even without a chain reaction.
Bombarding 238U with fast neutrons induces fissions, releasing energy as long as the external neutron source is
present. This is an important effect in all reactors where fast neutrons from the fissile isotope can cause the fission
of nearby 238U nuclei, which means that some small part of the 238U is "burned-up" in all nuclear fuels, especially in
fast breeder reactors that operate with higher-energy neutrons. That same fast-fission effect is used to augment
the energy released by modern thermonuclear weapons, by jacketing the weapon with 238U to react with
neutrons released by nuclear fusion at the center of the device. But the explosive effects of nuclear fission chain
reactions can be reduced by using substances like moderators which slow down the speed of secondary neutrons.
Fission reactors

The cooling towers of the Philippsburg Nuclear Power Plant, in Germany.


Critical fission reactors are the most common type of nuclear reactor. In a critical fission reactor, neutrons
produced by fission of fuel atoms are used to induce yet more fissions, to sustain a controllable amount of energy
release. Devices that produce engineered but non-self-sustaining fission reactions are subcritical fission reactors.
Such devices use radioactive decay or particle accelerators to trigger fissions. Critical fission reactors are built for
three primary purposes, which typically involve different engineering trade-offs to take advantage of either the heat
or the neutrons produced by the fission chain reaction. Power reactors are intended to produce heat for nuclear
power, either as part of a generating station or a local power system such as a nuclear submarine. Research
reactors are intended to produce neutrons and/or activate radioactive sources for scientific, medical, engineering,
or other research purposes. Breeder reactors are intended to produce nuclear fuels in bulk from more
abundant isotopes. The better known fast breeder reactor makes 239Pu (a nuclear fuel) from the naturally very
abundant 238U (not a nuclear fuel). Thermal breeder reactors previously tested using 232Th to breed the fissile
isotope 233U (thorium fuel cycle) continue to be studied and developed.
While, in principle, all fission reactors can act in all three capacities, in practice the tasks lead to conflicting
engineering goals and most reactors have been built with only one of the above tasks in mind. (There are several
early counter-examples, such as the Hanford N reactor, now decommissioned). Power reactors generally convert
the kinetic energy of fission products into heat, which is used to heat a working fluid and drive a heat engine that
generates mechanical or electrical power. The working fluid is usually water with a steam turbine, but some designs
use other materials such as gaseous helium. Research reactors produce neutrons that are used in various ways,
with the heat of fission being treated as an unavoidable waste product. Breeder reactors are a specialized form of
research reactor, with the caveat that the sample being irradiated is usually the fuel itself, a mixture of 238U
and 235U. For a more detailed description of the physics and operating principles of critical fission reactors,
see nuclear reactor physics. For a description of their social, political, and environmental aspects, see nuclear
power.
Fission bombs

The mushroom cloud of the atomic bomb dropped on Nagasaki, Japan on August 9, 1945, rose over 18
kilometres (11 mi) above the bomb's hypocenter. An estimated 39,000 people were killed by the atomic
bomb,[11] of whom 23,145–28,113 were Japanese factory workers, 2,000 were Korean slave laborers, and 150
were Japanese combatants.
One class of nuclear weapon, a fission bomb (not to be confused with the fusion bomb), otherwise known as
an atomic bomb or atom bomb, is a fission reactor designed to liberate as much energy as possible as rapidly as
possible, before the released energy causes the reactor to explode (and the chain reaction to stop). Development of
nuclear weapons was the motivation behind early research into nuclear fission: the Manhattan
Project during World War II (September 1, 1939 to September 2, 1945) carried out most of the early scientific work
on fission chain reactions, culminating in the three events involving fission bombs that occurred during the war. The
first fission bomb, codenamed "The Gadget", was detonated during the Trinity Test in the desert of New
Mexico on July 16, 1945. Two other fission bombs, codenamed "Little Boy" and "Fat Man", were used
in combat against the Japanese cities of Hiroshima and Nagasaki on August 6 and 9, 1945, respectively.
Even the first fission bombs were thousands of times more explosive than a comparable mass of chemical
explosive. For example, Little Boy weighed a total of about four tons (of which 60 kg was nuclear fuel) and was 11
feet (3.4 m) long; it also yielded an explosion equivalent to about 15 kilotons of TNT, destroying a large part of the
city of Hiroshima. Modern nuclear weapons (which include a thermonuclear fusion as well as one or more fission
stages) are hundreds of times more energetic for their weight than the first pure fission atomic bombs (see nuclear
weapon yield), so that a modern single missile warhead bomb weighing less than 1/8 as much as Little Boy (see for
example W88) has a yield of 475,000 tons of TNT, and could bring destruction to about 10 times the city area.
While the fundamental physics of the fission chain reaction in a nuclear weapon is similar to the physics of a
controlled nuclear reactor, the two types of device must be engineered quite differently (see nuclear reactor
physics). A nuclear bomb is designed to release all its energy at once, while a reactor is designed to generate a
steady supply of useful power. While overheating of a reactor can lead to, and has led to, meltdown and steam
explosions, the much lower uranium enrichment makes it impossible for a nuclear reactor to explode with the
same destructive power as a nuclear weapon. It is also difficult to extract useful power from a nuclear bomb,
although at least one rocket propulsion system, Project Orion, was intended to work by exploding fission bombs
behind a massively padded and shielded spacecraft.
The strategic importance of nuclear weapons is a major reason why the technology of nuclear fission is politically
sensitive. Viable fission bomb designs are, arguably, within the capabilities of many, being relatively simple from an
engineering viewpoint. However, the difficulty of obtaining fissile nuclear material to realize the designs is the key
to the relative unavailability of nuclear weapons to all but modern industrialized governments with special
programs to produce fissile materials (see uranium enrichment and nuclear fuel cycle).
Discovery of nuclear fission
The discovery of nuclear fission occurred in 1938 in the buildings of the Kaiser Wilhelm Society for Chemistry, today
part of the Free University of Berlin, following nearly five decades of work on the science of radioactivity and the
elaboration of new nuclear physics that described the components of atoms. In 1911, Ernest
Rutherford proposed a model of the atom in which a very small, dense and positively
charged nucleus of protons (the neutron had not yet been discovered) was surrounded by orbiting, negatively
charged electrons (the Rutherford model). Niels Bohr improved upon this in 1913 by reconciling the quantum
behavior of electrons (the Bohr model). Work by Henri Becquerel, Marie Curie, Pierre Curie, and Rutherford
further elaborated that the nucleus, though tightly bound, could undergo different forms of radioactive decay,
and thereby transmute into other elements. (For example, by alpha decay: the emission of an alpha particle, two
protons and two neutrons bound together into a particle identical to a helium nucleus.)
Some work in nuclear transmutation had been done. In 1917, Rutherford was able to accomplish transmutation of
nitrogen into oxygen, using alpha particles directed at nitrogen (14N + α → 17O + p). This was the first observation of
a nuclear reaction, that is, a reaction in which particles from one decay are used to transform another atomic
nucleus. Eventually, in 1932, a fully artificial nuclear reaction and nuclear transmutation was achieved by
Rutherford's colleagues Ernest Walton and John Cockcroft, who used artificially accelerated protons against
lithium-7, to split this nucleus into two alpha particles. The feat was popularly known as "splitting the atom",
although it was not the modern nuclear fission reaction later discovered in heavy elements, which is discussed
below.[16] Meanwhile, the possibility of combining nuclei (nuclear fusion) had been studied in connection with
understanding the processes which power stars. The first artificial fusion reaction had been achieved by Mark
Oliphant in 1932, using two accelerated deuterium nuclei (each consisting of a single proton bound to a single
neutron) to create a helium-3 nucleus.
After English physicist James Chadwick discovered the neutron in 1932, Enrico Fermi and his colleagues
in Rome studied the results of bombarding uranium with neutrons in 1934. Fermi concluded that his experiments
had created new elements with 93 and 94 protons, which the group dubbed ausonium and hesperium. However,
not all were convinced by Fermi's analysis of his results. The German chemist Ida Noddack notably suggested in
print in 1934 that, instead of creating a new, heavier element 93, "it is conceivable that the nucleus breaks up
into several large fragments." However, Noddack's conclusion was not pursued at the time.

The experimental apparatus with which Otto Hahn and Fritz Strassmann discovered nuclear fission in 1938
After the Fermi publication, Otto Hahn, Lise Meitner, and Fritz Strassmann began performing similar
experiments in Berlin. Meitner, an Austrian Jew, lost her citizenship with the "Anschluss", the occupation and
annexation of Austria into Nazi Germany in March 1938, but she fled in July 1938 to Sweden and started a
correspondence by mail with Hahn in Berlin. By coincidence, her nephew Otto Robert Frisch, also a refugee, was
also in Sweden when Meitner received a letter from Hahn dated 19 December describing his chemical proof that
some of the product of the bombardment of uranium with neutrons was barium. Hahn suggested a bursting of the
nucleus, but he was unsure of what the physical basis for the results was. Barium had an atomic mass 40% less
than uranium, and no previously known methods of radioactive decay could account for such a large difference in
the mass of the nucleus. Frisch was skeptical, but Meitner trusted Hahn's ability as a chemist. Marie Curie had been
separating barium from radium for many years, and the techniques were well-known. According to Frisch:
Was it a mistake? No, said Lise Meitner; Hahn was too good a chemist for that. But how could barium be formed
from uranium? No larger fragments than protons or helium nuclei (alpha particles) had ever been chipped away
from nuclei, and to chip off a large number not nearly enough energy was available. Nor was it possible that the
uranium nucleus could have been cleaved right across. A nucleus was not like a brittle solid that can be cleaved or
broken; George Gamow had suggested early on, and Bohr had given good arguments that a nucleus was much
more like a liquid drop. Perhaps a drop could divide itself into two smaller drops in a more gradual manner, by first
becoming elongated, then constricted, and finally being torn rather than broken in two? We knew that there were
strong forces that would resist such a process, just as the surface tension of an ordinary liquid drop tends to resist
its division into two smaller ones. But nuclei differed from ordinary drops in one important way: they were
electrically charged, and that was known to counteract the surface tension.
The charge of a uranium nucleus, we found, was indeed large enough to overcome the effect of the surface tension
almost completely; so the uranium nucleus might indeed resemble a very wobbly unstable drop, ready to divide
itself at the slightest provocation, such as the impact of a single neutron. But there was another problem. After
separation, the two drops would be driven apart by their mutual electric repulsion and would acquire high speed
and hence a very large energy, about 200 MeV in all; where could that energy come from? ...Lise Meitner... worked
out that the two nuclei formed by the division of a uranium nucleus together would be lighter than the original
uranium nucleus by about one-fifth the mass of a proton. Now whenever mass disappears energy is created,
according to Einstein's formula E = mc², and one-fifth of a proton mass was just equivalent to 200 MeV. So here was
the source for that energy; it all fitted!
In short, Meitner and Frisch had correctly interpreted Hahn's results to mean that the nucleus of uranium had split
roughly in half. Frisch suggested the process be named "nuclear fission," by analogy to the process of living cell
division into two cells, which was then called binary fission. Just as the term nuclear "chain reaction" would later
be borrowed from chemistry, so the term "fission" was borrowed from biology.
On 22 December 1938, Hahn and Strassmann sent a manuscript to Naturwissenschaften reporting that they had
discovered the element barium after bombarding uranium with neutrons. Simultaneously, they communicated
these results to Meitner in Sweden. She and Frisch correctly interpreted the results as evidence of nuclear
fission.[24] Frisch confirmed this experimentally on 13 January 1939.[25][26] For proving that the barium resulting from
his bombardment of uranium with neutrons was the product of nuclear fission, Hahn was awarded the Nobel Prize
for Chemistry in 1944 (the sole recipient) "for his discovery of the fission of heavy nuclei". (The award was actually
given to Hahn in 1945, as "the Nobel Committee for Chemistry decided that none of the year's nominations met the
criteria as outlined in the will of Alfred Nobel." In such cases, the Nobel Foundation's statutes permit that year's
prize be reserved until the following year.)[27]

News spread quickly of the new discovery, which was correctly seen as an entirely novel physical effect with great
scientific, and potentially practical, possibilities. Meitner's and Frisch's interpretation of the discovery of Hahn and
Strassmann crossed the Atlantic Ocean with Niels Bohr, who was to lecture at Princeton University. I.I.
Rabi and Willis Lamb, two Columbia University physicists working at Princeton, heard the news and carried it
back to Columbia. Rabi said he told Enrico Fermi; Fermi gave credit to Lamb. Bohr soon thereafter went from
Princeton to Columbia to see Fermi. Not finding Fermi in his office, Bohr went down to the cyclotron area and
found Herbert L. Anderson. Bohr grabbed him by the shoulder and said: "Young man, let me explain to you about
something new and exciting in physics."[28] It was clear to a number of scientists at Columbia that they should try to
detect the energy released in the nuclear fission of uranium from neutron bombardment. On 25 January 1939, a
Columbia University team conducted the first nuclear fission experiment in the United States, [29] which was done in
the basement of Pupin Hall; the members of the team were Herbert L. Anderson, Eugene T. Booth, John R.
Dunning, Enrico Fermi, G. Norris Glasoe, and Francis G. Slack. The experiment involved placing uranium oxide
inside of an ionization chamber and irradiating it with neutrons, and measuring the energy thus released. The
results confirmed that fission was occurring and hinted strongly that it was the isotope uranium 235 in particular
that was fissioning. The next day, the Fifth Washington Conference on Theoretical Physics began in Washington,
D.C. under the joint auspices of the George Washington University and the Carnegie Institution of
Washington. There, the news on nuclear fission was spread even further, which fostered many more experimental
demonstrations.[30]
During this period the Hungarian physicist Leó Szilárd, who was residing in the United States at the time, realized
that the neutron-driven fission of heavy atoms could be used to create a nuclear chain reaction. Such a reaction
using neutrons was an idea he had first formulated in 1933, upon reading Rutherford's disparaging remarks about
generating power from his team's 1932 experiment using protons to split lithium. However, Szilárd had not been
able to achieve a neutron-driven chain reaction with neutron-rich light atoms. In theory, if in a neutron-driven chain
reaction the number of secondary neutrons produced was greater than one, then each such reaction could trigger
multiple additional reactions, producing an exponentially increasing number of reactions. It was thus a possibility
that the fission of uranium could yield vast amounts of energy for civilian or military purposes (i.e., electric power
generation or atomic bombs).
Szilard now urged Fermi (in New York) and Frédéric Joliot-Curie (in Paris) to refrain from publishing on the
possibility of a chain reaction, lest the Nazi government become aware of the possibilities on the eve of what would
later be known as World War II. With some hesitation Fermi agreed to self-censor. But Joliot-Curie did not, and in
April 1939 his team in Paris, including Hans von Halban and Lew Kowarski, reported in the journal Nature that
the number of neutrons emitted with nuclear fission of 235U was 3.5 per fission.[31] (They later
corrected this to 2.6 per fission.) Simultaneous work by Szilard and Walter Zinn confirmed these results. The results
suggested the possibility of building nuclear reactors(first called "neutronic reactors" by Szilard and Fermi) and
even nuclear bombs. However, much was still unknown about fission and chain reaction systems.

Fission chain reaction realized

Drawing of the first artificial reactor, Chicago Pile-1.


"Chain reactions" at that time were a known phenomenon in chemistry, but the analogous process in nuclear
physics, using neutrons, had been foreseen as early as 1933 by Szilárd, although Szilárd at that time had no idea
with what materials the process might be initiated. Szilárd considered that neutrons would be ideal for such a
situation, since they lacked an electrostatic charge.
With the news of fission neutrons from uranium fission, Szilárd immediately understood the possibility of a nuclear
chain reaction using uranium. In the summer, Fermi and Szilard proposed the idea of a nuclear reactor (pile) to
mediate this process. The pile would use natural uranium as fuel. Fermi had shown much earlier that neutrons were
far more effectively captured by atoms if they were of low energy (so-called "slow" or "thermal" neutrons), because
for quantum reasons it made the atoms look like much larger targets to the neutrons. Thus to slow down the
secondary neutrons released by the fissioning uranium nuclei, Fermi and Szilard proposed a graphite "moderator,"
against which the fast, high-energy secondary neutrons would collide, effectively slowing them down. With enough
uranium, and with pure-enough graphite, their "pile" could theoretically sustain a slow-neutron chain reaction. This
would result in the production of heat, as well as the creation of radioactive fission products.
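The effect of a graphite moderator can be quantified with the standard logarithmic energy decrement for elastic scattering. The sketch below (Python) estimates how many elastic collisions with carbon-12 are needed, on average, to slow a typical ~2 MeV fission neutron to thermal energy (~0.025 eV); the formula is the usual textbook expression for a target nucleus of mass number A.

    # Average number of elastic collisions with carbon needed to thermalize a fission neutron.
    import math

    def log_energy_decrement(A):
        """Mean logarithmic energy loss per elastic collision with a nucleus of mass number A."""
        return 1 + ((A - 1) ** 2 / (2 * A)) * math.log((A - 1) / (A + 1))

    xi = log_energy_decrement(12)                  # carbon-12 (graphite)
    E_start, E_thermal = 2.0e6, 0.025              # neutron energies in eV
    n_collisions = math.log(E_start / E_thermal) / xi
    print(f"xi for carbon: {xi:.3f}; ~{n_collisions:.0f} collisions to reach thermal energy")

Roughly a hundred or so collisions suffice, which is why a sufficiently large and pure block of graphite can thermalize fission neutrons before too many are lost to absorption.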
In August 1939, Szilard and fellow Hungarian refugee physicists Edward Teller and Eugene Wigner thought that the Germans
might make use of the fission chain reaction and were spurred to attempt to attract the attention of the United
States government to the issue. Towards this, they persuaded German-Jewish refugee Albert Einstein to lend his
name to a letter directed to President Franklin Roosevelt. The Einstein–Szilárd letter suggested the possibility of
a uranium bomb deliverable by ship, which would destroy "an entire harbor and much of the surrounding
countryside." The President received the letter on 11 October 1939 shortly after World War II began in Europe,
but two years before U.S. entry into it. Roosevelt ordered that a scientific committee be authorized for overseeing
uranium work and allocated a small sum of money for pile research.
In England, James Chadwick proposed an atomic bomb utilizing natural uranium, based on a paper by Rudolf
Peierls with the mass needed for critical state being 30–40 tons. In America, J. Robert Oppenheimer thought that a
cube of uranium deuteride 10 cm on a side (about 11 kg of uranium) might "blow itself to hell." In this design it was
still thought that a moderator would need to be used for nuclear bomb fission (this turned out not to be the case if
the fissile isotope was separated). In December, Werner Heisenberg delivered a report to the German Ministry of
War on the possibility of a uranium bomb. Most of these models were still under the assumption that the bombs
would be powered by slow neutron reactions, and thus be similar to a reactor undergoing a meltdown.
In Birmingham, England, Frisch teamed up with Peierls, a fellow German-Jewish refugee. They had the idea of using
a purified mass of the uranium isotope 235U, whose fission cross section had just been determined and was found to
be much larger than that of 238U or natural uranium (which is 99.3% the latter isotope). Assuming that the cross section for
fast-neutron fission of 235U was the same as for slow neutron fission, they determined that a pure 235U bomb could
have a critical mass of only 6 kg instead of tons, and that the resulting explosion would be tremendous. (The
amount actually turned out to be 15 kg, although several times this amount was used in the actual uranium (Little
Boy) bomb). In February 1940 they delivered the Frisch–Peierls memorandum. Ironically, they were still officially
considered "enemy aliens" at the time. Glenn Seaborg, Joseph W. Kennedy, Arthur Wahl, and Italian-Jewish
refugee Emilio Segrè shortly thereafter discovered 239Pu in the decay products of 239U produced by
bombarding 238U with neutrons, and determined it to be a fissile material, like 235U.
The possibility of isolating uranium-235 was technically daunting, because uranium-235 and uranium-238 are
chemically identical, and vary in their mass by only the weight of three neutrons. However, if a sufficient quantity of
uranium-235 could be isolated, it would allow for a fast neutron fission chain reaction. This would be extremely
explosive, a true "atomic bomb." The discovery that plutonium-239 could be produced in a nuclear reactor pointed
towards another approach to a fast neutron fission bomb. Both approaches were extremely novel and not yet well
understood, and there was considerable scientific skepticism at the idea that they could be developed in a short
amount of time.
On June 28, 1941, the Office of Scientific Research and Development was formed in the U.S. to mobilize
scientific resources and apply the results of research to national defense. In September, Fermi assembled his first
nuclear "pile" or reactor, in an attempt to create a slow neutron-induced chain reaction in uranium, but the
experiment failed to achieve criticality, due to a lack of proper materials, or to insufficient quantities of the proper
materials that were available.
Producing a fission chain reaction in natural uranium fuel was found to be far from trivial. Early nuclear reactors did
not use isotopically enriched uranium, and in consequence they were required to use large quantities of highly
purified graphite as neutron moderation materials. Use of ordinary water (as opposed to heavy water) in nuclear
reactors requires enriched fuel: the partial separation and relative enrichment of the rare 235U isotope from the
far more common 238U isotope. Typically, reactors also require inclusion of extremely chemically pure neutron
moderator materials such as deuterium (in heavy water), helium, beryllium, or carbon, the latter usually
as graphite. (The high purity for carbon is required because many chemical impurities, such as the boron-
10 component of natural boron, are very strong neutron absorbers and thus poison the chain reaction and end it
prematurely.)
Production of such materials at industrial scale had to be solved for nuclear power generation and weapons
production to be accomplished. Up to 1940, the total amount of uranium metal produced in the USA was not more
than a few grams, and even this was of doubtful purity; of metallic beryllium not more than a few kilograms; and
concentrated deuterium oxide (heavy water) not more than a few kilograms. Finally, carbon had never been
produced in quantity with anything like the purity required of a moderator.
The problem of producing large amounts of high purity uranium was solved by Frank Spedding using
the thermite or "Ames" process. Ames Laboratory was established in 1942 to produce the large amounts of
natural (unenriched) uranium metal that would be necessary for the research to come. The critical nuclear chain-
reaction success of the Chicago Pile-1(December 2, 1942) which used unenriched (natural) uranium, like all of the
atomic "piles" which produced the plutonium for the atomic bomb, was also due specifically to Szilard's realization
that very pure graphite could be used for the moderator of even natural uranium "piles". In wartime Germany,
failure to appreciate the qualities of very pure graphite led to reactor designs dependent on heavy water, which in
turn was denied the Germans by Allied attacks in Norway, where heavy water was produced. These difficulties
among many others prevented the Nazis from building a nuclear reactor capable of criticality during the war,
although they never put as much effort as the United States into nuclear research, focusing on other technologies
(see German nuclear energy project for more details).

Nuclear fusion
In nuclear physics, nuclear fusion is a reaction in which two or more atomic nuclei come close enough to form
one or more different atomic nuclei and subatomic particles (neutrons or protons). The difference in mass between
the products and reactants is manifested as the release of large amounts of energy. This difference in mass arises
due to the difference in atomic "binding energy" between the atomic nuclei before and after the reaction. Fusion
is the process that powers active or "main sequence" stars, or other high magnitude stars.
The fusion process that produces a nucleus lighter than iron-56 or nickel-62 will generally yield a net energy
release. These elements have the smallest mass per nucleon and the largest binding energy per nucleon,
respectively. Fusion of light elements toward these releases energy (an exothermic process), while fusion that
produces nuclei heavier than these elements results in energy being retained by the product nucleons, and the
resulting reaction is endothermic. The opposite is true for the reverse process, nuclear fission. This means that
the lighter elements, such as hydrogen and helium, are in general more fusible; while the heavier elements, such
as uranium and plutonium, are more fissionable. The extreme astrophysical event of a supernova can produce
enough energy to fuse nuclei into elements heavier than iron.
Following the discovery of quantum tunneling by physicist Friedrich Hund, in 1929 Robert Atkinson and Fritz
Houtermans used the measured masses of light elements to predict that large amounts of energy could be
released by fusing small nuclei. Building upon the nuclear transmutation experiments by Ernest Rutherford,
carried out several years earlier, the laboratory fusion of hydrogen isotopes was first accomplished by Mark
Oliphant in 1932. During the remainder of that decade the steps of the main cycle of nuclear fusion in stars were
worked out by Hans Bethe. Research into fusion for military purposes began in the early 1940s as part of
the Manhattan Project. Fusion was accomplished in 1951 with the Greenhouse Item nuclear test. Nuclear fusion
on a large scale in an explosion was first carried out on November 1, 1952, in the Ivy Mike hydrogen bomb test.
Research into developing controlled thermonuclear fusion for civil purposes also began in earnest in the 1950s,
and it continues to this day.
The nuclear binding energy curve. The formation of nuclei with masses up to iron-56 releases energy, while
forming those that are heavier requires energy input, because the nuclei below iron-56 have high binding energies
per nucleon and the heavier ones have lower binding energies, as illustrated above. (Source: Fastfission.)
The Sun is a main-sequence star, and thus generates its energy by nuclear fusion of hydrogen nuclei
into helium. In its core, the Sun fuses 620 million metric tons of hydrogen each second.
Process

Fusion of deuterium with tritium, creating helium-4, freeing a neutron, and releasing 17.59 MeV as kinetic energy of
the products while a corresponding amount of mass disappears, in agreement with E = Δmc², where Δm is
the decrease in the total rest mass of the particles.[1]
The origin of the energy released in fusion of light elements is due to interplay of two opposing forces, the nuclear
force which combines together protons and neutrons, and the Coulomb force, which causes protons to repel each
other. The protons are positively charged and repel each other but they nonetheless stick together, demonstrating
the existence of another force referred to as nuclear attraction.[2] Light nuclei (or nuclei smaller than iron and
nickel) are sufficiently small and proton-poor to allow the nuclear force to overcome the repulsive Coulomb force.
This is because the nucleus is sufficiently small that all nucleons feel the short-range attractive force at least as
strongly as they feel the infinite-range Coulomb repulsion. Building up these nuclei from lighter nuclei
by fusion thus releases the extra energy from the net attraction of these particles. For larger nuclei, however, no
energy is released, since the nuclear force is short-range and cannot continue to act across still larger atomic nuclei.
Thus, energy is no longer released when such nuclei are made by fusion; instead, energy is required as input to such
processes.
Fusion reactions create the light elements that power the stars and produce virtually all elements in a process
called nucleosynthesis. The Sun is a main-sequence star, and thus generates its energy by nuclear fusion of
hydrogen nuclei into helium. In its core, the Sun fuses about 620 million metric tons of hydrogen into roughly
616 million metric tons of helium each second. The fusion of lighter elements in stars releases energy and the mass that always
accompanies it. For example, in the fusion of two hydrogen nuclei to form helium, 0.7% of the mass is carried away
from the system in the form of kinetic energy of an alpha particle or other forms of energy, such as
electromagnetic radiation.[3]
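The 0.7% figure can be tied back to the hydrogen-burning rate quoted above. The sketch below (Python) works backward from the Sun's luminosity, using standard constants (the nominal solar luminosity and the speed of light); it is a consistency check rather than a derivation from this text.

    # How much hydrogen must the Sun fuse per second to supply its luminosity?
    L_SUN = 3.828e26             # watts (nominal solar luminosity)
    C = 2.998e8                  # speed of light, m/s
    MASS_LOSS_FRACTION = 0.007   # ~0.7% of fused hydrogen mass becomes energy (from the text)

    mass_to_energy_rate = L_SUN / C ** 2                   # kg of mass converted per second
    hydrogen_burn_rate = mass_to_energy_rate / MASS_LOSS_FRACTION
    print(f"Mass converted to energy: {mass_to_energy_rate / 1e9:.1f} million metric tons/s")
    print(f"Hydrogen fused:           {hydrogen_burn_rate / 1e9:.0f} million metric tons/s")

The result, roughly 600 million metric tons of hydrogen per second, is consistent with the figure quoted above for the Sun's core.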
Research into controlled fusion, with the aim of producing fusion power for the production of electricity, has been
conducted for over 60 years. It has been accompanied by extreme scientific and technological difficulties, but has
resulted in progress. At present, controlled fusion reactions have been unable to produce break-even (self-
sustaining) controlled fusion.[4]Workable designs for a reactor that theoretically will deliver ten times more fusion
energy than the amount needed to heat plasma to the required temperatures are in development (see ITER). The
ITER facility is expected to finish its construction phase in 2019. It will start commissioning the reactor that same
year and initiate plasma experiments in 2020, but is not expected to begin full deuterium-tritium fusion until
2027.[5]
It takes considerable energy to force nuclei to fuse, even those of the lightest element, hydrogen. This is because
all nuclei have a positive charge due to their protons, and as like charges repel, nuclei strongly resist being pushed
close together. When accelerated to high enough speeds, nuclei can overcome this electrostatic repulsion and be
brought close enough that the attractive nuclear force is greater than the repulsive Coulomb force. As
the strong force grows very rapidly once beyond that critical distance, the fusing nucleons "fall" into one another
and the result is fusion and net energy produced. The fusion of lighter nuclei, which creates a heavier nucleus and often
a free neutron or proton, generally releases more energy than it takes to force the nuclei together; this is
an exothermic process that can produce self-sustaining reactions. The US National Ignition Facility, which uses
laser-driven inertial confinement fusion, was designed with a goal of break-even fusion.
The first large-scale laser target experiments were performed in June 2009 and ignition experiments began in early
2011.[6][7]
Energy released in most nuclear reactions is much larger than in chemical reactions, because the binding
energy that holds a nucleus together is far greater than the energy that holds electrons to a nucleus. For example,
the ionization energy gained by adding an electron to a hydrogen nucleus is 13.6 eV, less than one-millionth of
the 17.6 MeV released in the deuterium–tritium (D–T) reaction shown in the adjacent diagram. The complete
conversion of one gram of matter would release 9×10^13 joules of energy. Fusion reactions have an energy
density many times greater than nuclear fission; the reactions produce far greater energy per unit of mass even
though individual fission reactions are generally much more energetic than individual fusion ones, which are
themselves millions of times more energetic than chemical reactions. Only direct conversion of mass into energy,
such as that caused by the annihilatory collision of matter and antimatter, is more energetic per unit of mass than
nuclear fusion.
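The orders of magnitude quoted in this paragraph can be checked with a short calculation. The following is a minimal Python sketch (not part of the original text) using only E = mc²; the 0.7% mass fraction is the figure quoted above.

    c = 2.998e8                       # speed of light, m/s
    E_one_gram = 1e-3 * c**2          # complete conversion of one gram of matter
    print(f"1 g of matter: {E_one_gram:.2e} J")            # ~9e13 J, as stated above

    # hydrogen fusion releases roughly 0.7% of the fuel mass as energy
    E_per_kg_fuel = 0.007 * 1.0 * c**2
    print(f"per kg of hydrogen fused: {E_per_kg_fuel:.2e} J")   # ~6.3e14 J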

Nuclear fusion in stars

The proton-proton chain dominates in stars the size of the Sun or smaller.

The CNO cycle dominates in stars heavier than the Sun.


The most important fusion process in nature is the one that powers stars, stellar nucleosynthesis. In the 20th
century, it was realized that the energy released from nuclear fusion reactions accounted for the longevity of
the Sun and other stars as a source of heat and light. The fusion of nuclei in a star, starting from its initial hydrogen
and helium abundance, provides that energy and synthesizes new nuclei as a byproduct of the fusion process. The
prime energy producer in the Sun is the fusion of hydrogen to form helium, which occurs at a solar-core
temperature of 14 million kelvin. The net result is the fusion of four protons into one alpha particle, with the
release of two positrons, two neutrinos(which changes two of the protons into neutrons), and energy. Different
reaction chains are involved, depending on the mass of the star. For stars the size of the sun or smaller, the proton-
proton chain dominates. In heavier stars, the CNO cycle is more important.

As a star uses up a substantial fraction of its hydrogen, it begins to synthesize heavier elements. However, the
heaviest elements are synthesized by fusion that occurs as a more massive star undergoes a violent supernova at
the end of its life, a process known as supernova nucleosynthesis.

Requirements
Details and supporting references on the material in this section can be found in textbooks on nuclear physics or
nuclear fusion.[8]
A substantial energy barrier of electrostatic forces must be overcome before fusion can occur. At large distances,
two naked nuclei repel one another because of the repulsive electrostatic force between their positively
chargedprotons. If two nuclei can be brought close enough together, however, the electrostatic repulsion can be
overcome by the quantum effect in which nuclei can tunnel through coulomb forces.
When a nucleon such as a proton or neutron is added to a nucleus, the nuclear force attracts it to all the other
nucleons of the nucleus (if the atom is small enough), but primarily to its immediate neighbours due to the short
range of the force. The nucleons in the interior of a nucleus have more neighboring nucleons than those on the
surface. Since smaller nuclei have a larger surface area-to-volume ratio, the binding energy per nucleon due to
the nuclear force generally increases with the size of the nucleus but approaches a limiting value corresponding to
that of a nucleus with a diameter of about four nucleons. It is important to keep in mind that nucleons are quantum
objects. So, for example, since two neutrons in a nucleus are identical to each other, the goal of distinguishing one
from the other, such as which one is in the interior and which is on the surface, is in fact meaningless, and the
inclusion of quantum mechanics is therefore necessary for proper calculations.
The electrostatic force, on the other hand, is an inverse-square force, so a proton added to a nucleus will feel an
electrostatic repulsion from all the other protons in the nucleus. The electrostatic energy per nucleon due to the
electrostatic force thus increases without limit as the atomic number of the nucleus grows.

The electrostatic force between the positively charged nuclei is repulsive, but when the separation is small
enough, the nuclei can tunnel through the barrier by the quantum effect. Therefore, the prerequisite for fusion is that the two
nuclei be brought close enough together for a long enough time for quantum tunnelling to act.
The net result of the opposing electrostatic and strong nuclear forces is that the binding energy per nucleon
generally increases with increasing size, up to the elements iron and nickel, and then decreases for heavier nuclei.
Eventually, the binding energy becomes negative and very heavy nuclei (all with more than 208 nucleons,
corresponding to a diameter of about 6 nucleons) are not stable. The four most tightly bound nuclei, in decreasing order of binding energy per nucleon, are ⁶²Ni, ⁵⁸Fe, ⁵⁶Fe, and ⁶⁰Ni.[9] Even though the nickel isotope ⁶²Ni is more stable, the iron isotope ⁵⁶Fe is an order of magnitude more common. This is due to the fact that there is no easy way for stars to create ⁶²Ni through the alpha process.
An exception to this general trend is the helium-4 nucleus, whose binding energy is higher than that of lithium, the
next heaviest element. This is because protons and neutrons are fermions, which according to the Pauli exclusion
principlecannot exist in the same nucleus in exactly the same state. Each proton or neutron's energy state in a
nucleus can accommodate both a spin up particle and a spin down particle. Helium-4 has an anomalously large
binding energy because its nucleus consists of two protons and two neutrons, so all four of its nucleons can be in

the ground state. Any additional nucleons would have to go into higher energy states. Indeed, the helium-4 nucleus
is so tightly bound that it is commonly treated as a single particle in nuclear physics, namely, the alpha particle.
The situation is similar if two nuclei are brought together. As they approach each other, all the protons in one nucleus repel all the protons in the other. Not until the two nuclei actually come close enough for long enough can the strong nuclear force take over (by way of tunneling). Consequently, even when the final energy state is
lower, there is a large energy barrier that must first be overcome. It is called the Coulomb barrier.
The Coulomb barrier is smallest for isotopes of hydrogen, as their nuclei contain only a single positive charge.
A diproton is not stable, so neutrons must also be involved, ideally in such a way that a helium nucleus, with its
extremely tight binding, is one of the products.
Using deuterium-tritium fuel, the resulting energy barrier is about 0.1 MeV. In comparison, the energy needed to
remove an electron from hydrogen is 13.6 eV, about 7500 times less energy. The (intermediate) result of the
fusion is an unstable 5He nucleus, which immediately ejects a neutron with 14.1 MeV. The recoil energy of the
remaining 4He nucleus is 3.5 MeV, so the total energy liberated is 17.6 MeV. This is many times more than what was
needed to overcome the energy barrier.

The fusion reaction rate increases rapidly with temperature until it maximizes and then gradually drops off. The D-T rate peaks at a lower temperature (about 70 keV, or 800 million kelvin) and at a higher value than other reactions
commonly considered for fusion energy.
The reaction cross section is a measure of the probability of a fusion reaction as a function of the relative
velocity of the two reactant nuclei. If the reactants have a distribution of velocities, e.g. a thermal distribution, then
it is useful to perform an average over the distributions of the product of cross section and velocity. This average is
called the 'reactivity', denoted ⟨σv⟩. The reaction rate (fusions per volume per time) is ⟨σv⟩ times the product of the reactant number densities:

    f = n₁ n₂ ⟨σv⟩

If a species of nuclei is reacting with itself, such as the D-D reaction, then the product n₁n₂ must be replaced by (1/2)n².

⟨σv⟩ increases from virtually zero at room temperature up to meaningful magnitudes at temperatures of 10-100 keV. At these temperatures, well above typical ionization energies (13.6 eV in the hydrogen case), the fusion reactants exist in a plasma state.

The significance of ⟨σv⟩ as a function of temperature in a device with a particular energy confinement time is found by considering the Lawson criterion. This is an extremely challenging barrier to overcome on Earth, which explains why fusion research has taken many years to reach the current advanced technical state.[10]
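As a rough illustration of how the reactivity enters the volumetric reaction rate f = n₁n₂⟨σv⟩, here is a minimal Python sketch (not from the original text). The densities and the D-T reactivity value are assumed, order-of-magnitude numbers chosen only for illustration.

    # assumed, order-of-magnitude inputs (not from the source text)
    n_D = 5e19            # deuterium density, m^-3
    n_T = 5e19            # tritium density, m^-3
    sigma_v = 2.5e-22     # assumed D-T reactivity <sigma v> near 15 keV, m^3/s
    E_fus = 17.6e6 * 1.602e-19       # energy released per D-T fusion, J

    rate = n_D * n_T * sigma_v       # fusions per m^3 per second
    print(f"{rate:.2e} reactions/m^3/s, {rate * E_fus:.2e} W/m^3")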

Methods for achieving fusion


Thermonuclear fusion

If the matter is sufficiently heated (hence being plasma), the fusion reaction may occur due to collisions with
extreme thermal kinetic energies of the particles. In the form of thermonuclear weapons, thermonuclear fusion is
the only fusion technique so far to yield undeniably large amounts of useful fusion energy. Usable amounts of
thermonuclear fusion energy released in a controlled manner have yet to be achieved. In nature, this is what
produces energy in stars through stellar nucleosynthesis.

Thermonuclear fusion is a way to achieve nuclear fusion by using extremely high temperatures. There are two
forms of thermonuclear fusion: uncontrolled, in which the resulting energy is released in an uncontrolled manner,
as it is in thermonuclear weapons ("hydrogen bombs") and in most stars; and controlled, where the fusion
reactions take place in an environment allowing some or all of the energy released to be harnessed for constructive
purposes. This article focuses on the latter.
Temperature requirements
Temperature is a measure of the average kinetic energy of particles, so by heating the material it will gain energy.
After reaching sufficient temperature, given by the Lawson criterion, the energy of accidental collisions within
the plasma is high enough to overcome the Coulomb barrier and the particles may fuse together.
In a deuterium-tritium fusion reaction, for example, the energy necessary to overcome the Coulomb barrier is
0.1 MeV. Converting between energy and temperature shows that the 0.1 MeV barrier would be overcome at a
temperature in excess of 1.2 billion kelvins.
There are two effects that lower the actual temperature needed. One is the fact that temperature is
the average kinetic energy, implying that some nuclei at this temperature would actually have much higher energy
than 0.1 MeV, while others would be much lower. It is the nuclei in the high-energy tail of the velocity
distribution that account for most of the fusion reactions. The other effect is quantum tunnelling. The nuclei do
not actually have to have enough energy to overcome the Coulomb barrier completely. If they have nearly enough
energy, they can tunnel through the remaining barrier. For these reasons fuel at lower temperatures will still
undergo fusion events, at a lower rate.
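The energy-to-temperature conversion used above (0.1 MeV corresponding to roughly 1.2 billion kelvins) follows from T = E/k_B. A minimal Python check, not part of the original text:

    k_B = 1.381e-23          # Boltzmann constant, J/K
    eV  = 1.602e-19          # joules per electronvolt
    T = 0.1e6 * eV / k_B     # temperature equivalent of the 0.1 MeV barrier
    print(f"0.1 MeV ~ {T:.2e} K")    # ~1.2e9 K, matching the text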
Thermonuclear fusion is one of the methods being researched in the attempts to produce fusion power. If
thermonuclear fusion becomes favorable to use, it would reduce the world's carbon footprint significantly.

Confinement
The key problem in achieving thermonuclear fusion is how to confine the hot plasma. Due to the high temperature,
the plasma can not be in direct contact with any solid material, so in fact it has to be located in a vacuum. Also,
high temperatures imply high pressures. Moreover, the plasma tends to expand immediately and some force is
necessary to act against it. This force can be either gravitation in stars, magnetic forces in magnetic confinement
fusion reactors, or the fusion reaction may occur before the plasma starts to expand, so in fact the
plasma's inertia is keeping the material together.

Gravitational confinement
Stellar nucleosynthesis
One force capable of confining the fuel well enough to satisfy the Lawson criterion is gravity. The mass needed,
however, is so great that gravitational confinement is only found in stars; the least massive stars capable of sustained fusion are red dwarfs, while brown dwarfs are able to fuse deuterium and lithium if they are of sufficient mass. In stars heavy enough, after the supply of hydrogen is exhausted in their cores, their cores (or a shell around the core) start fusing helium to carbon. In the most massive stars (at least 8-11 solar masses), the process is continued until some of their energy is produced by fusing lighter elements to iron. As iron has one of the highest binding energies, reactions producing heavier elements are generally endothermic. Therefore, significant amounts of heavier elements are not formed during stable periods of massive star evolution, but are formed in supernova explosions. Some lighter stars also form these elements in the outer parts of the stars over long periods of time, by absorbing energy from fusion in the interior of the star and by absorbing neutrons that are emitted from the fusion process.
All of the elements heavier than iron have some potential energy to release, in theory. At the extremely heavy end
of element production, these heavier elements can produce energy in the process of being split again back toward
the size of iron, in the process of nuclear fission. Nuclear fission thus releases energy which has been stored,
sometimes billions of years before, during stellar nucleosynthesis.

Magnetic confinement
Electrically charged particles (such as fuel ions) will follow magnetic field lines (see Guiding centre). The fusion
fuel can therefore be trapped using a strong magnetic field. A variety of magnetic configurations exist, including the
toroidal geometries of tokamaks and stellarators and open-ended mirror confinement systems.

Inertial confinement
Inertial confinement fusion
A third confinement principle is to apply a rapid pulse of energy to a large part of the surface of a pellet of fusion
fuel, causing it to simultaneously "implode" and heat to very high pressure and temperature. If the fuel is dense
enough and hot enough, the fusion reaction rate will be high enough to burn a significant fraction of the fuel before
it has dissipated. To achieve these extreme conditions, the initially cold fuel must be explosively compressed.
Inertial confinement is used in the hydrogen bomb, where the driver is x-rays created by a fission bomb. Inertial
confinement is also attempted in "controlled" nuclear fusion, where the driver is a laser, ion, or electron beam, or
a Z-pinch. Another method is to use conventional high explosive material to compress a fuel to fusion
conditions.[1][2] The UTIAS explosive-driven-implosion facility was used to produce stable, centred and focused
hemispherical implosions[3] to generate neutrons from D-D reactions. The simplest and most direct method proved
to be in a predetonated stoichiometric mixture of deuterium-oxygen. The other successful method was using a
miniature Voitenko compressor,[4] where a plane diaphragm was driven by the implosion wave into a secondary
small spherical cavity that contained pure deuterium gas at one atmosphere.

Electrostatic confinement
There are also electrostatic confinement fusion devices. These devices confine ions using electrostatic fields. The
best known is the Fusor. This device has a cathode inside an anode wire cage. Positive ions fly towards the negative
inner cage, and are heated by the electric field in the process. If they miss the inner cage they can collide and fuse.
Ions typically hit the cathode, however, creating prohibitively high conduction losses. Also, fusion rates in fusors are
very low due to competing physical effects, such as energy loss in the form of light radiation. [6] Designs have been
proposed to avoid the problems associated with the cage, by generating the field using a non-neutral cloud. These
include a plasma oscillating device,[7] a penning trap and the polywell.[8] The technology is relatively immature,
however, and many scientific and engineering questions remain.

Inertial confinement fusion
Inertial confinement fusion (ICF) is a type of fusion energy research that attempts to initiate nuclear fusion
reactions by heating and compressing a fuel target, typically in the form of a pellet that most often contains a
mixture of deuterium and tritium.

Inertial electrostatic confinement


Inertial electrostatic confinement
Inertial electrostatic confinement is a set of devices that use an electric field to heat ions to fusion conditions. The
most well known is the fusor. Starting in 1999, a number of amateurs have been able to do amateur fusion using
these homemade devices. Other IEC devices include: the Polywell, MIX POPS and Marble concepts.

Beam-beam or beam-target fusion


If the energy to initiate the reaction comes from accelerating one of the nuclei, the process is called beam-
target fusion; if both nuclei are accelerated, it is beam-beam fusion.
Accelerator-based light-ion fusion is a technique using particle accelerators to achieve particle kinetic energies
sufficient to induce light-ion fusion reactions. Accelerating light ions is relatively easy, and can be done in an
efficient manner, requiring only a vacuum tube, a pair of electrodes, and a high-voltage transformer; fusion can be
observed with as little as 10 kV between the electrodes. The key problem with accelerator-based fusion (and with
cold targets in general) is that fusion cross sections are many orders of magnitude lower than Coulomb interaction
cross sections. Therefore, the vast majority of ions expend their energy emitting bremsstrahlung radiation and the
ionization of atoms of the target. Devices referred to as sealed-tube neutron generators are particularly relevant

to this discussion. These small devices are miniature particle accelerators filled with deuterium and tritium gas in an
arrangement that allows ions of those nuclei to be accelerated against hydride targets, also containing deuterium
and tritium, where fusion takes place, releasing a flux of neutrons. Hundreds of neutron generators are produced
annually for use in the petroleum industry where they are used in measurement equipment for locating and
mapping oil reserves.
Muon-catalyzed fusion
Muon-catalyzed fusion is a fusion process that occurs at ordinary temperatures. It was studied in detail by Steven
Jones in the early 1980s. Net energy production from this reaction has been unsuccessful because of the high
energy required to create muons, their short 2.2 µs half-life, and the high chance that a muon will bind to the
new alpha particle and thus stop catalyzing fusion.
Other principles

The Tokamak à configuration variable, a research fusion reactor, at the École Polytechnique Fédérale de Lausanne (Switzerland).
Some other confinement principles have been investigated.
Antimatter-initialized fusion uses small amounts of antimatter to trigger a tiny fusion explosion. This has been
studied primarily in the context of making nuclear pulse propulsion, and pure fusion bombs feasible. This is not
near becoming a practical power source, due to the cost of manufacturing antimatter alone.
Pyroelectric fusion was reported in April 2005 by a team at UCLA. The scientists used a pyroelectric crystal
heated from −34 to 7 °C (−29 to 45 °F), combined with a tungsten needle to produce an electric field of about
25 gigavolts per meter to ionize and accelerate deuterium nuclei into an erbium deuteride target. At the estimated
energy levels,[19] the D-D fusion reaction may occur, producing helium-3 and a 2.45 MeV neutron. Although it
makes a useful neutron generator, the apparatus is not intended for power generation since it requires far more
energy than it produces.
Hybrid nuclear fusion-fission (hybrid nuclear power) is a proposed means of generating power by use of a
combination of nuclear fusion and fission processes. The concept dates to the 1950s, and was briefly advocated
by Hans Bethe during the 1970s, but largely remained unexplored until a revival of interest in 2009, due to the
delays in the realization of pure fusion.[24] Project PACER, carried out at Los Alamos National
Laboratory (LANL) in the mid-1970s, explored the possibility of a fusion power system that would involve exploding
small hydrogen bombs (fusion bombs) inside an underground cavity. As an energy source, the system is the only
fusion power system that could be demonstrated to work using existing technology. However it would also require
a large, continuous supply of nuclear bombs, making the economics of such a system rather questionable.
Important reactions
Astrophysical reaction chains
At the temperatures and densities in stellar cores the rates of fusion reactions are notoriously slow. For example, at solar core temperature (T ≈ 15 MK) and density (160 g/cm³), the energy release rate is only about 276 W/m³, roughly a quarter of the volumetric rate at which a resting human body generates heat.[25] Thus, reproduction of stellar core conditions in a lab for nuclear fusion power production is completely impractical. Because nuclear reaction rates depend on density as well as temperature, and most fusion schemes operate at relatively low densities, those methods are strongly dependent on higher temperatures. The fusion rate as a function of temperature (≈ exp(−E/kT)) leads to the need to achieve temperatures in terrestrial reactors 10-100 times higher than in stellar interiors: T ≈ (0.1-1.0)×10⁹ K.
Criteria and candidates for terrestrial reactions
In artificial fusion, the primary fuel is not constrained to be protons and higher temperatures can be used, so
reactions with larger cross-sections are chosen. Another concern is the production of neutrons, which activate the

reactor structure radiologically, but also have the advantages of allowing volumetric extraction of the fusion energy
and tritium breeding. Reactions that release no neutrons are referred to as aneutronic.
To be a useful energy source, a fusion reaction must satisfy several criteria. It must:
Be exothermic
This limits the reactants to the low Z (number of protons) side of the curve of binding energy. It also makes helium-4 (⁴He) the most common product because of its extraordinarily tight binding, although ³He and ³H also show up.
Involve low atomic number (Z) nuclei
This is because the electrostatic repulsion must be overcome before the nuclei are close enough to fuse.
Have two reactants
At anything less than stellar densities, three body collisions are too improbable. In inertial confinement, both stellar
densities and temperatures are exceeded to compensate for the shortcomings of the third parameter of the Lawson
criterion, ICF's very short confinement time.
Have two or more products
This allows simultaneous conservation of energy and momentum without relying on the electromagnetic force.
Conserve both protons and neutrons
The cross sections for the weak interaction are too small.
Few reactions meet these criteria. The following are those with the largest cross sections:[26]
(1)    ²₁D + ³₁T    →  ⁴₂He (3.5 MeV) + n⁰ (14.1 MeV)
(2i)   ²₁D + ²₁D    →  ³₁T (1.01 MeV) + p⁺ (3.02 MeV)          50%
(2ii)               →  ³₂He (0.82 MeV) + n⁰ (2.45 MeV)          50%
(3)    ²₁D + ³₂He   →  ⁴₂He (3.6 MeV) + p⁺ (14.7 MeV)
(4)    ³₁T + ³₁T    →  ⁴₂He + 2 n⁰ + 11.3 MeV
(5)    ³₂He + ³₂He  →  ⁴₂He + 2 p⁺ + 12.9 MeV
(6i)   ³₂He + ³₁T   →  ⁴₂He + p⁺ + n⁰ + 12.1 MeV                57%
(6ii)               →  ⁴₂He (4.8 MeV) + ²₁D (9.5 MeV)           43%
(7i)   ²₁D + ⁶₃Li   →  2 ⁴₂He + 22.4 MeV
(7ii)               →  ³₂He + ⁴₂He + n⁰ + 2.56 MeV
(7iii)              →  ⁷₃Li + p⁺ + 5.0 MeV
(7iv)               →  ⁷₄Be + n⁰ + 3.4 MeV
(8)    p⁺ + ⁶₃Li    →  ⁴₂He (1.7 MeV) + ³₂He (2.3 MeV)
(9)    ³₂He + ⁶₃Li  →  2 ⁴₂He + p⁺ + 16.9 MeV
(10)   p⁺ + ¹¹₅B    →  3 ⁴₂He + 8.7 MeV

For reactions with two products, the energy is divided between them in inverse proportion to their masses, as
shown. In most reactions with three products, the distribution of energy varies. For reactions that can result in
more than one set of products, the branching ratios are given.

Some reaction candidates can be eliminated at once. The D-⁶Li reaction has no advantage compared to p⁺-¹¹B because it is roughly as difficult to burn but produces substantially more neutrons through D-D side reactions. There is also a p⁺-⁷Li reaction, but its cross section is far too low, except possibly when Ti > 1 MeV, but at such high temperatures an endothermic, direct neutron-producing reaction also becomes very significant. Finally there is also a p⁺-⁹Be reaction, which is not only difficult to burn, but ⁹Be can be easily induced to split into two alpha particles and a neutron.
In addition to the fusion reactions, the following reactions with neutrons are important in order to "breed" tritium
in "dry" fusion bombs and some proposed fusion reactors:
n⁰ + ⁶₃Li  →  ³₁T + ⁴₂He + 4.784 MeV
n⁰ + ⁷₃Li  →  ³₁T + ⁴₂He + n⁰ − 2.467 MeV

The latter of the two equations was unknown when the U.S. conducted the Castle Bravo fusion bomb test in 1954.
Being just the second fusion bomb ever tested (and the first to use lithium), the designers of the Castle Bravo
"Shrimp" had understood the usefulness of Lithium-6 in tritium production, but had failed to recognize that

Lithium-7 fission would greatly increase the yield of the bomb. While Li-7 has a small neutron cross-section for low
neutron energies, it has a higher cross section above 5 MeV.[27] The 15 Mt yield was 150% greater than the
predicted 6 Mt and caused unexpected exposure to fallout.
To evaluate the usefulness of these reactions, in addition to the reactants, the products, and the energy released,
one needs to know something about the cross section. Any given fusion device has a maximum plasma pressure it
can sustain, and an economical device would always operate near this maximum. Given this pressure, the largest
fusion output is obtained when the temperature is chosen so that ⟨σv⟩/T² is a maximum. This is also the temperature at which the value of the triple product nTτ required for ignition is a minimum, since that required value is inversely proportional to ⟨σv⟩/T² (see Lawson criterion). (A plasma is "ignited" if the fusion reactions produce enough power to maintain the temperature without external heating.) This optimum temperature and the value of ⟨σv⟩/T² at that temperature is given for a few of these reactions in the following table.

Note that many of the reactions form chains. For instance, a reactor fueled with ³T and ³He creates some ²D, which is then possible to use in the ²D + ³He reaction if the energies are "right". An elegant idea is to combine the reactions (8) and (9). The ³He from reaction (8) can react with ⁶Li in reaction (9) before completely thermalizing. This produces an energetic proton, which in turn undergoes reaction (8) before thermalizing. Detailed analysis shows that this idea would not work well, but it is a good example of a case where the usual assumption of a Maxwellian plasma is not appropriate.

Any of the reactions above can in principle be the basis of fusion power production. In addition to the temperature and cross section discussed above, we must consider the total energy of the fusion products Efus, the energy of the charged fusion products Ech, and the atomic number Z of the non-hydrogenic reactant.

Specification of the ²D + ²D reaction entails some difficulties, though. To begin with, one must average over the two branches (2i) and (2ii). More difficult is to decide how to treat the ³T and ³He products. ³T burns so well in a deuterium plasma that it is almost impossible to extract from the plasma. The ²D + ³He reaction is optimized at a much higher temperature, so the burnup at the optimum ²D + ²D temperature may be low. Therefore, it seems reasonable to assume the ³T but not the ³He gets burned up and adds its energy to the net reaction, which means the total reaction would be the sum of (2i), (2ii), and (1):

    5 ²D  →  ⁴He + 2 n⁰ + ³He + p⁺ + 24.9 MeV

For calculating the power of a reactor (in which the reaction rate is determined by the D-D step), we count the fusion energy per D-D reaction as Efus = (4.03 MeV + 17.6 MeV)×50% + (3.27 MeV)×50% = 12.5 MeV and the energy in charged particles as Ech = (4.03 MeV + 3.5 MeV)×50% + (0.82 MeV)×50% = 4.2 MeV. (Note: if the tritium ion reacts with a deuteron while it still has a large kinetic energy, then the kinetic energy of the helium-4 produced may be quite different from 3.5 MeV,[28] so this calculation of energy in charged particles is only an approximation of the average.) The amount of energy per deuteron consumed is 2/5 of this, or 5.0 MeV (a specific energy of about 225 million MJ per kilogram of deuterium).
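The bookkeeping in the previous paragraph can be reproduced in a few lines of Python (not part of the original text); the deuteron mass is the standard value, and the computed specific energy comes out at the same order of magnitude as the figure quoted above.

    # energies (MeV) of branches (2i), (2ii) and the follow-on D-T reaction (1)
    E_2i, E_2ii, E_1 = 4.03, 3.27, 17.6
    Ech_2i, Ech_2ii, Ech_1 = 4.03, 0.82, 3.5     # charged-particle shares, MeV

    E_fus = 0.5 * (E_2i + E_1) + 0.5 * E_2ii             # ~12.5 MeV per D-D reaction
    E_ch  = 0.5 * (Ech_2i + Ech_1) + 0.5 * Ech_2ii       # ~4.2 MeV per D-D reaction
    E_per_deuteron = 2.0 / 5.0 * E_fus                   # ~5.0 MeV
    print(E_fus, E_ch, E_per_deuteron)

    MeV = 1.602e-13          # joules
    m_D = 3.344e-27          # deuteron mass, kg
    specific = E_per_deuteron * MeV / m_D / 1e12         # million MJ per kg of deuterium
    print(f"{specific:.0f} million MJ/kg")               # ~240, same order as the ~225 quoted above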

Another unique aspect of the D-D reaction is that there is only one reactant, which must be taken into account when calculating the reaction rate.

With this choice, we tabulate parameters for four of the most important reactions.

The last column is the neutronicity of the reaction, the fraction of the fusion energy released as neutrons. This is
an important indicator of the magnitude of the problems associated with neutrons like radiation damage, biological
shielding, remote handling, and safety. For the first two reactions it is calculated as (Efus-Ech)/Efus. For the last two
reactions, where this calculation would give zero, the values quoted are rough estimates based on side reactions
that produce neutrons in a plasma in thermal equilibrium.
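For the first two reactions the neutronicity definition above reduces to a one-line calculation; a minimal Python sketch (not from the original text) using the D-T and catalyzed D-D energies given earlier:

    def neutronicity(E_fus, E_ch):
        return (E_fus - E_ch) / E_fus

    print(neutronicity(17.6, 3.5))   # D-T:            ~0.80
    print(neutronicity(12.5, 4.2))   # catalyzed D-D:  ~0.66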
Of course, the reactants should also be mixed in the optimal proportions. This is the case when each reactant ion
plus its associated electrons accounts for half the pressure. Assuming that the total pressure is fixed, this means
that the density of the non-hydrogenic ion is smaller than that of the hydrogenic ion by a factor 2/(Z+1). Therefore, the rate for these reactions is reduced by the same factor, on top of any differences in the values of ⟨σv⟩/T². On the other hand, because the D-D reaction has only one reactant, its rate is twice as high as when the fuel is divided between two different hydrogenic species, thus creating a more efficient reaction.
Thus there is a "penalty" of 2/(Z+1) for non-hydrogenic fuels arising from the fact that they require more electrons, which take up pressure without participating in the fusion reaction. (It is usually a good assumption that the electron temperature will be nearly equal to the ion temperature. Some authors, however, discuss the possibility that the electrons could be maintained substantially colder than the ions. In such a case, known as a "hot ion mode", the "penalty" would not apply.) There is at the same time a "bonus" of a factor 2 for D-D because each ion can react with any of the other ions, not just a fraction of them.

We can now compare these reactions in the following table.

The maximum value of ⟨σv⟩/T² is taken from a previous table. The "penalty/bonus" factor is that related to a non-hydrogenic reactant or a single-species reaction. The values in the column "reactivity" are found by dividing 1.24×10⁻²⁴ by the product of the second and third columns. It indicates the factor by which the other reactions occur more slowly than the D-T reaction under comparable conditions. The column "Lawson criterion" weights these results with Ech and gives an indication of how much more difficult it is to achieve ignition with these reactions, relative to the difficulty for the D-T reaction. The last column is labeled "power density" and weights the practical reactivity with Efus. It indicates how much lower the fusion power density of the other reactions is compared to the D-T reaction and can be considered a measure of the economic potential.

Bremsstrahlung losses in quasineutral, isotropic plasmas


The ions undergoing fusion in many systems will essentially never occur alone but will be mixed with electrons that
in aggregate neutralize the ions' bulk electrical charge and form a plasma. The electrons will generally have a
temperature comparable to or greater than that of the ions, so they will collide with the ions and emit x-
ray radiation of 10-30 keV energy, a process known as Bremsstrahlung.
The huge size of the Sun and stars means that the x-rays produced in this process will not escape and will deposit
their energy back into the plasma. They are said to be opaque to x-rays. But any terrestrial fusion reactor will
be optically thin for x-rays of this energy range. X-rays are difficult to reflect but they are effectively absorbed (and
converted into heat) in less than a millimetre thickness of stainless steel (which is part of a reactor's shield). This means the
bremsstrahlung process is carrying energy out of the plasma, cooling it.
The ratio of fusion power produced to x-ray radiation lost to walls is an important figure of merit. This ratio is
generally maximized at a much higher temperature than that which maximizes the power density (see the previous
subsection). The following table shows estimates of the optimum temperature and the power ratio at that
temperature for several reactions.

The actual ratios of fusion to Bremsstrahlung power will likely be significantly lower for several reasons. For one,
the calculation assumes that the energy of the fusion products is transmitted completely to the fuel ions, which
then lose energy to the electrons by collisions, which in turn lose energy by Bremsstrahlung. However, because the
fusion products move much faster than the fuel ions, they will give up a significant fraction of their energy directly
to the electrons. Secondly, the ions in the plasma are assumed to be purely fuel ions. In practice, there will be a
significant proportion of impurity ions, which will then lower the ratio. In particular, the fusion products
themselves must remain in the plasma until they have given up their energy, and will remain some time after that in
any proposed confinement scheme. Finally, all channels of energy loss other than Bremsstrahlung have been
neglected. The last two factors are related. On theoretical and experimental grounds, particle and energy
confinement seem to be closely related. In a confinement scheme that does a good job of retaining energy, fusion
products will build up. If the fusion products are efficiently ejected, then energy confinement will be poor, too.
The temperatures maximizing the fusion power compared to the Bremsstrahlung are in every case higher than the
temperature that maximizes the power density and minimizes the required value of the fusion triple product. This

will not change the optimum operating point for D-T very much because the Bremsstrahlung fraction is low, but it will push the other fuels into regimes where the power density relative to D-T is even lower and the required confinement even more difficult to achieve. For D-D and D-³He, Bremsstrahlung losses will be a serious, possibly prohibitive problem. For ³He-³He, p⁺-⁶Li and p⁺-¹¹B the Bremsstrahlung losses appear to make a fusion reactor using these fuels with a quasineutral, isotropic plasma impossible. Some ways out of this dilemma are considered, and rejected, in fundamental limitations on plasma fusion systems not in thermodynamic equilibrium. This limitation does not apply to non-neutral and anisotropic plasmas.

CALCULATIONS FOR THE SKIN EXPERIMENT FOR THE FORMATION OF STARS.


SCHRÖDINGER EQUATION.
Description of the main equations, theories, and principles.
In quantum mechanics, the Schrödinger equation is a mathematical equation that describes the changes over time of a physical system in which quantum effects, such as wave-particle duality, are significant. The equation is a mathematical formulation for studying quantum mechanical systems. It is considered a central result in the study of quantum systems and its derivation was a significant landmark in developing the theory of quantum mechanics. It was named after Erwin Schrödinger, who derived the equation in 1925 and published it in 1926, forming the basis for his work that resulted in Schrödinger being awarded the Nobel Prize in Physics in 1933.[1][2] The equation is a type of differential equation known as a wave equation, which serves as a mathematical model of the movement of waves.
In classical mechanics, Newton's second law (F = ma) is used to make a mathematical prediction as to what path a given
system will take following a set of known initial conditions. In quantum mechanics, the analogue of Newton's law is
Schrödinger's equation for a quantum system (usually atoms, molecules, and subatomic particles whether free, bound, or
localised). It is not a simple algebraic equation, but in general a linear partial differential equation, describing the time-
evolution of the system's wave function (also called a "state function").
The concept of a wavefunction is a fundamental postulate of quantum mechanics. Using these postulates, Schrödinger's
equation can be derived from the fact that the time-evolution operator must be unitary and must therefore be generated by
the exponential of a self-adjoint operator, which is the quantum Hamiltonian. This derivation is explained below.
In the Copenhagen interpretation of quantum mechanics, the wave function is the most complete description that can be
given of a physical system. Solutions to Schrdinger's equation describe not only molecular, atomic, and subatomic systems,
but also macroscopic systems, possibly even the whole universe. ffSchrdinger's equation is central to all applications of
quantum mechanics including quantum field theory which combines special relativity with quantum mechanics. Theories
of quantum gravity, such as string theory, also do not modify Schrdinger's equation.
The Schrödinger equation is not the only way to study quantum mechanical systems and make predictions, as there are other
quantum mechanical formulations such as matrix mechanics, introduced by Werner Heisenberg, and path integral

formulation, developed chiefly by Richard Feynman. Paul Dirac incorporated matrix mechanics and the Schrödinger equation
into a single formulation.

Equation

Time-dependent equation
The form of the Schrödinger equation depends on the physical situation (see below for special cases). The most general form is the time-dependent Schrödinger equation, which gives a description of a system evolving with time:[5]:143

    iℏ ∂/∂t Ψ(r, t) = Ĥ Ψ(r, t)

A wave function that satisfies the nonrelativistic Schrödinger equation with V = 0. In other words, this corresponds to a particle traveling freely

through empty space. The real part of the wave function is plotted here.

where i is the imaginary unit, ℏ is the reduced Planck constant (which is h/2π), the symbol ∂/∂t indicates a partial derivative with respect to time t, Ψ (the Greek letter psi) is the wave function of the quantum system, r and t are the position vector and time respectively, and Ĥ is the Hamiltonian operator (which characterizes the total energy of any given wave function and takes different forms depending on the situation).

Each of these three rows is a wave function which satisfies the time-dependent Schrödinger equation for a harmonic oscillator. Left: The

real part (blue) and imaginary part (red) of the wave function. Right: The probability distribution of finding the particle with this wave

function at a given position. The top two rows are examples of stationary states, which correspond to standing waves. The bottom row

is an example of a state which is not a stationary state. The right column illustrates why stationary states are called "stationary".

The most famous example is the nonrelativistic Schrödinger equation for a single particle moving in an electric field (but not a magnetic field; see the Pauli equation):[6]

    iℏ ∂/∂t Ψ(r, t) = [ −ℏ²/(2μ) ∇² + V(r, t) ] Ψ(r, t)

where μ is the particle's "reduced mass", V is its potential energy, ∇² is the Laplacian (a differential operator), and Ψ is the wave function (more precisely, in this context, it is called the "position-space wave function"). In plain language, it means "total energy equals kinetic energy plus potential energy", but the terms take unfamiliar forms for reasons explained below. Given the particular differential operators involved, this is a linear partial differential equation. It is also a diffusion equation, but unlike the heat equation, this one is also a wave equation given the imaginary unit present in the transient term.
The term "Schrdinger equation" can refer to both the general equation (first box above), or the specific nonrelativistic version
(second box above and variations thereof). The general equation is indeed quite general, used throughout quantum
mechanics, for everything from the Dirac equation to quantum field theory, by plugging in various complicated expressions for
the Hamiltonian. The specific nonrelativistic version is a simplified approximation to reality, which is quite accurate in many
situations, but very inaccurate in others (see relativistic quantum mechanics and relativistic quantum field theory).
To apply the Schrödinger equation, the Hamiltonian operator is set up for the system, accounting for the kinetic and potential energy of the particles constituting the system, then inserted into the Schrödinger equation. The resulting partial differential
equation is solved for the wave function, which contains information about the system.
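As an illustration of how the time-dependent equation is used in practice, here is a minimal Python sketch (not part of the original text) that evolves a free-particle (V = 0) wave packet with a split-step Fourier method, in units where ℏ = m = 1; the grid and packet parameters are arbitrary illustrative choices.

    import numpy as np

    N, L = 1024, 200.0
    dx = L / N
    x = (np.arange(N) - N // 2) * dx
    k = 2 * np.pi * np.fft.fftfreq(N, d=dx)

    # Gaussian wave packet with mean momentum k0 (hbar = m = 1)
    k0, sigma = 1.0, 5.0
    psi = np.exp(-x**2 / (2 * sigma**2)) * np.exp(1j * k0 * x)
    psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)        # normalise

    dt, steps = 0.05, 400
    phase = np.exp(-1j * k**2 * dt / 2)                # free evolution is exact in k-space
    for _ in range(steps):
        psi = np.fft.ifft(np.fft.fft(psi) * phase)

    print("norm =", np.sum(np.abs(psi)**2) * dx)       # stays ~1 (unitary evolution)
    print("<x>  =", np.sum(x * np.abs(psi)**2) * dx)   # drifts to ~ k0 * dt * steps = 20

The packet drifts at its group velocity and spreads slowly while its norm is preserved, which is exactly the behaviour the time-dependent equation encodes.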

Time-independent equation

The time-dependent Schrödinger equation described above predicts that wave functions can form standing waves, called stationary states (also called "orbitals", as in atomic orbitals or molecular orbitals). These states are important in their own right, and if the stationary states are classified and understood, then it becomes easier to solve the time-dependent Schrödinger equation for any state. Stationary states can also be described by a simpler form of the Schrödinger equation, the time-independent Schrödinger equation. (This is only used when the Hamiltonian itself is not dependent on time explicitly.
However, even in this case the total wave function still has a time dependency.)

    Ĥ Ψ = E Ψ

In words, the equation states:

When the Hamiltonian operator acts on a certain wave function Ψ, and the result is proportional to the same wave function Ψ, then Ψ is a stationary state, and the proportionality constant, E, is the energy of the state Ψ.
The time-independent Schrödinger equation is discussed further below. In linear algebra terminology, this equation is an eigenvalue equation.

As before, the most famous manifestation is the nonrelativistic Schrödinger equation for a single particle moving in an electric field (but not a magnetic field):

    [ −ℏ²/(2μ) ∇² + V(r) ] Ψ(r) = E Ψ(r)

with definitions as above.
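As an illustration of the eigenvalue character of the time-independent equation, the following minimal Python sketch (not from the original text) discretizes ĤΨ = EΨ for a harmonic oscillator with finite differences, in units where ℏ = m = ω = 1; the exact levels are n + 1/2.

    import numpy as np

    N, L = 1000, 16.0
    x = np.linspace(-L / 2, L / 2, N)
    dx = x[1] - x[0]
    V = 0.5 * x**2                                   # harmonic potential

    # H = -1/2 d^2/dx^2 + V as a tridiagonal matrix (finite differences)
    H = (np.diag(1.0 / dx**2 + V)
         + np.diag(-0.5 / dx**2 * np.ones(N - 1), 1)
         + np.diag(-0.5 / dx**2 * np.ones(N - 1), -1))

    print(np.linalg.eigvalsh(H)[:4])                 # ~ [0.5, 1.5, 2.5, 3.5]

The discrete, evenly spaced eigenvalues are a simple numerical demonstration of the energy quantization discussed further below.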

Derivation

In the modern understanding of quantum mechanics, Schrödinger's equation may be derived as follows.[7] If the wave function at time t is given by Ψ(t), then by the linearity of quantum mechanics the wave function at time t′ must be given by Ψ(t′) = U(t′, t) Ψ(t), where U(t′, t) is a linear operator. Since time-evolution must preserve the norm of the wave function, it follows that U(t′, t) must be a member of the unitary group of operators acting on wave functions. We also know that when t′ = t, we must have U(t, t) = 1 (the identity operator). Therefore, expanding the operator for t′ close to t, we can write U(t′, t) ≈ 1 − (i/ℏ) H (t′ − t), where H is a Hermitian operator. This follows from the fact that the Lie algebra corresponding to the unitary group comprises Hermitian operators. Taking the limit as the time-difference becomes very small, we obtain Schrödinger's equation.
So far, H is only an abstract Hermitian operator. However using the correspondence principle it is possible to show that, in the
classical limit, the expectation value of H is indeed the classical energy. The correspondence principle does not completely fix
the form of the quantum Hamiltonian due to the uncertainty principle and therefore the precise form of the quantum
Hamiltonian must be fixed empirically.

Implications

The Schrödinger equation and its solutions introduced a breakthrough in thinking about physics. Schrödinger's equation was the first of its type, and solutions led to consequences that were very unusual and unexpected for the time.

Total, kinetic, and potential energy

The overall form of the equation is not unusual or unexpected, as it uses the principle of the conservation of energy. The
terms of the nonrelativistic Schrödinger equation can be interpreted as total energy of the system, equal to the system kinetic
energy plus the system potential energy. In this respect, it is just the same as in classical physics.

Quantization

The Schrödinger equation predicts that if certain properties of a system are measured, the result may be quantized, meaning
that only specific discrete values can occur. One example is energy quantization: the energy of an electron in an atom is
always one of the quantized energy levels, a fact discovered via atomic spectroscopy. (Energy quantization is
discussed below.) Another example is quantization of angular momentum. This was an assumption in the earlier Bohr model
of the atom, but it is a prediction of the Schrödinger equation.
Another result of the Schrödinger equation is that not every measurement gives a quantized result in quantum mechanics. For example, position, momentum, time, and (in some situations) energy can have any value across a continuous range.[8]:165-167

Measurement and uncertainty

Measurement in quantum mechanics, Heisenberg uncertainty principle, and Interpretations of quantum mechanics
In classical mechanics, a particle has, at every moment, an exact position and an exact momentum. These values
change deterministically as the particle moves according to Newton's laws. Under the Copenhagen interpretation of quantum
mechanics, particles do not have exactly determined properties, and when they are measured, the result is randomly drawn
from a probability distribution. The Schrödinger equation predicts what the probability distributions are, but fundamentally
cannot predict the exact result of each measurement.
The Heisenberg uncertainty principle is the statement of the inherent measurement uncertainty in quantum mechanics. It
states that the more precisely a particle's position is known, the less precisely its momentum is known, and vice versa.
The Schrödinger equation describes the (deterministic) evolution of the wave function of a particle. However, even if the wave function is known exactly, the result of a specific measurement on the wave function is uncertain.
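A small numerical check (not from the original text) of the uncertainty relation for a Gaussian wave packet, which saturates Δx·Δp = ℏ/2; units with ℏ = 1, and all grid parameters are assumed values.

    import numpy as np

    N, L, sigma = 4096, 80.0, 2.0
    dx = L / N
    x = (np.arange(N) - N // 2) * dx
    psi = np.exp(-x**2 / (4 * sigma**2))
    psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)      # normalise

    p = 2 * np.pi * np.fft.fftfreq(N, d=dx)          # momentum grid (hbar = 1)
    prob_p = np.abs(np.fft.fft(psi))**2
    prob_p /= prob_p.sum()

    dx_rms = np.sqrt(np.sum(x**2 * np.abs(psi)**2) * dx)   # <x> = 0 by symmetry
    dp_rms = np.sqrt(np.sum(p**2 * prob_p))                # <p> = 0 by symmetry
    print(dx_rms * dp_rms)                                 # ~0.5 = hbar/2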

Quantum tunneling


Quantum tunneling through a barrier. A particle coming from the left does not have enough energy to climb the barrier. However, it can

sometimes "tunnel" to the other side.



In classical physics, when a ball is rolled slowly up a large hill, it will come to a stop and roll back, because it doesn't have
enough energy to get over the top of the hill to the other side. However, the Schrödinger equation predicts that there is a
small probability that the ball will get to the other side of the hill, even if it has too little energy to reach the top. This is
called quantum tunneling. It is related to the distribution of energy: although the ball's assumed position seems to be on one
side of the hill, there is a chance of finding it on the other side.
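The tunnelling probability for the textbook case of a rectangular barrier can be evaluated directly. The following Python sketch (not from the original text) uses the standard transmission formula for a particle with energy E below a barrier of height V0 and width a, in units ℏ = m = 1; the parameter values are assumed.

    import math

    def transmission(E, V0, a):
        """Transmission through a rectangular barrier, E < V0, hbar = m = 1."""
        kappa = math.sqrt(2 * (V0 - E))
        s2 = math.sinh(kappa * a) ** 2
        return 1.0 / (1.0 + V0**2 * s2 / (4 * E * (V0 - E)))

    for a in (1.0, 2.0, 4.0):                        # assumed barrier widths
        print(a, transmission(E=1.0, V0=2.0, a=a))   # drops roughly like exp(-2*kappa*a)

The probability is small but nonzero, and it falls off roughly exponentially with the barrier width, which is why tunnelling matters for fusion only when nuclei are brought very close together.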

Particles as waves

Matter wave, Wave-particle duality, and Double-slit experiment

A human-skin (or double-slit) experiment showing the accumulation of electrons on a screen as time passes.

Results of a double-slit-experiment performed by Dr. Tonomura showing the build-up of an interference pattern of
single electrons. Numbers of electrons are 11 (a), 200 (b), 6000 (c), 40000 (d), 140000 (e).

The nonrelativistic Schrödinger equation is a type of partial differential equation called a wave equation. Therefore, it is often said particles can exhibit behavior usually attributed to waves. In some modern interpretations this description is reversed: the quantum state, i.e. the wave, is the only genuine physical reality, and under the appropriate conditions it can show features of particle-like behavior. However, Ballentine[9]:Chapter 4, p.99 shows that such an interpretation has problems. Ballentine points out that whilst it is arguable to associate a physical wave with a single particle, there is still only one Schrödinger wave equation for many particles. He points out:
"If a physical wave field were associated with a particle, or if a particle were identified with a wave packet, then
corresponding to N interacting particles there should be N interacting waves in ordinary three-dimensional space.
But according to (4.6) that is not the case; instead there is one "wave" function in an abstract 3N-dimensional
configuration space. The misinterpretation of psi as a physical wave in ordinary space is possible only because the
most common applications of quantum mechanics are to one-particle states, for which configuration space and
ordinary space are isomorphic."

Two-slit diffraction is a famous example of the strange behaviors that waves regularly display, that are not intuitively
associated with particles. The overlapping waves from the two slits cancel each other out in some locations, and
reinforce each other in other locations, causing a complex pattern to emerge. Intuitively, one would not expect this
pattern from firing a single particle at the slits, because the particle should pass through one slit or the other, not a
complex overlap of both.
However, since the Schrödinger equation is a wave equation, a single particle fired through a double-slit does show this
same pattern (figure on right). Note: The experiment must be repeated many times for the complex pattern to emerge.
Although this is counterintuitive, the prediction is correct; in particular, electron diffraction and neutron diffraction are well
understood and widely used in science and engineering.
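The interference pattern itself is easy to sketch numerically. The following Python fragment (not from the original text) evaluates the ideal two-slit intensity I(θ) ∝ cos²(πd sinθ/λ) in the far-field approximation; the wavelength and slit separation are assumed values and the single-slit envelope is ignored.

    import numpy as np

    wavelength = 50e-12        # assumed electron de Broglie wavelength, ~50 pm
    d = 1e-6                   # assumed slit separation, 1 micrometre
    theta = np.linspace(-2e-4, 2e-4, 9)                       # small angles, rad
    I = np.cos(np.pi * d * np.sin(theta) / wavelength) ** 2   # two-slit fringes
    for t, i in zip(theta, I):
        print(f"theta = {t:+.1e} rad  ->  relative intensity {i:.2f}")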
Related to diffraction, particles also display superposition and interference.
The superposition property allows the particle to be in a quantum superposition of two or more quantum states at the
same time. However, it is noted that a "quantum state" in quantum mechanics means the probability that a system will
be, for example at a position x, not that the system will actually be at position x. It does not imply that the particle itself
may be in two classical states at once. Indeed, quantum mechanics is generally unable to assign values for properties
prior to measurement at all.

Interpretation of the wave function

Interpretations of quantum mechanics


The Schrödinger equation provides a way to calculate the wave function of a system and how it changes dynamically in time. However, the Schrödinger equation does not directly say what, exactly, the wave function is. Interpretations of quantum
mechanics address questions such as what the relation is between the wave function, the underlying reality, and the results
of experimental measurements.
An important aspect is the relationship between the Schrödinger equation and wavefunction collapse. In the oldest Copenhagen interpretation, particles follow the Schrödinger equation except during wavefunction collapse, during
which they behave entirely differently. The advent of quantum decoherence theory allowed alternative approaches (such as
the Everett many-worlds interpretation and consistent histories), wherein the Schrödinger equation is always satisfied, and wavefunction collapse should be explained as a consequence of the Schrödinger equation.

Historical background and development of the Schrödinger equation.

Erwin Schrödinger
See also: Theoretical and experimental justification for the Schrödinger equation
Following Max Planck's quantization of light (see black body radiation), Albert Einstein interpreted Planck's quanta to be photons, particles of light, and proposed that the energy of a photon is proportional to its frequency, one of the first signs of wave-particle duality. Since energy and momentum are related in the same way as frequency and wavenumber in special relativity, it followed that the momentum p of a photon is inversely proportional to its wavelength λ, or proportional to its wavenumber k:

    p = h/λ = ℏk

where h is Planck's constant. Louis de Broglie hypothesized that this is true for all particles, even particles which have mass such as electrons. He showed that, assuming that the matter waves propagate along with their particle counterparts, electrons form standing waves, meaning that only certain discrete rotational frequencies about the nucleus of an atom are allowed.[11] These quantized orbits correspond to discrete energy levels, and de Broglie reproduced the Bohr model formula for the energy levels. The Bohr model was based on the assumed quantization of angular momentum L according to:

    L = n h/(2π) = nℏ

According to de Broglie, the electron is described by a wave, and a whole number of wavelengths must fit along the circumference of the electron's orbit:

    nλ = 2πr

This approach essentially confined the electron wave in one dimension, along a circular orbit of radius r.
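A quick numerical check (not from the original text) of this picture for the hydrogen ground state: with the standard constants, the de Broglie wavelength of an electron with 13.6 eV of kinetic energy matches the circumference 2πa₀ of the first Bohr orbit (n = 1).

    import math
    h, m_e, eV, a0 = 6.626e-34, 9.109e-31, 1.602e-19, 5.29e-11

    p = math.sqrt(2 * m_e * 13.6 * eV)     # momentum of a 13.6 eV electron
    lam = h / p                            # de Broglie wavelength
    print(lam, 2 * math.pi * a0)           # both ~3.3e-10 m: one wavelength per orbit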
In 1921, prior to de Broglie, Arthur C. Lunn at the University of Chicago had used the same argument based on the completion of the relativistic energy-momentum 4-vector to derive what we now call the de Broglie relation.[12] Unlike de Broglie, Lunn went on to formulate the differential equation now known as the Schrödinger equation, and solve for its energy eigenvalues for the hydrogen atom. Unfortunately the paper was rejected by the Physical Review, as recounted by Kamen.
Following up on de Broglie's ideas, physicist Peter Debye made an offhand comment that if particles behaved as waves, they should satisfy some sort of wave equation. Inspired by Debye's remark, Schrödinger decided to find a proper 3-dimensional wave equation for the electron. He was guided by William R. Hamilton's analogy between mechanics and optics, encoded in the observation that the zero-wavelength limit of optics resembles a mechanical system: the trajectories of light rays become sharp tracks that obey Fermat's principle, an analog of the principle of least action. A modern version of his reasoning is reproduced below. The equation he found is:[15]

    iℏ ∂/∂t Ψ(r, t) = −ℏ²/(2m) ∇² Ψ(r, t) + V(r) Ψ(r, t)

However, by that time, Arnold Sommerfeld had refined the Bohr model with relativistic corrections.[16][17] Schrödinger used the relativistic energy-momentum relation to find what is now known as the Klein-Gordon equation in a Coulomb potential (in natural units):

    (E + e²/r)² ψ(x) = −∇² ψ(x) + m² ψ(x)

He found the standing waves of this relativistic equation, but the relativistic corrections disagreed with Sommerfeld's formula.
Discouraged, he put away his calculations and secluded himself in an isolated mountain cabin in December 1925.[18]
While at the cabin, Schrödinger decided that his earlier nonrelativistic calculations were novel enough to publish, and decided to leave off the problem of relativistic corrections for the future. Despite the difficulties in solving the differential equation for hydrogen (he had sought help from his friend the mathematician Hermann Weyl[19]:3), Schrödinger showed that his nonrelativistic version of the wave equation produced the correct spectral energies of hydrogen in a paper published in 1926.[19]:1[20] In the equation, Schrödinger computed the hydrogen spectral series by treating a hydrogen atom's electron as a wave Ψ(x, t), moving in a potential well V created by the proton. This computation accurately reproduced the energy levels of the Bohr model. In a paper, Schrödinger himself explained this equation as follows:

The already ... mentioned psi-function.... is now the means for predicting probability of
measurement results. In it is embodied the momentarily attained sum of theoretically
based future expectation, somewhat as laid down in a catalog.

Erwin Schrödinger[21]

This 1926 paper was enthusiastically endorsed by Einstein, who saw the matter-waves as an intuitive depiction of nature, as
opposed to Heisenberg's matrix mechanics, which he considered overly formal.

The Schrödinger equation details the behavior of Ψ but says nothing of its nature. Schrödinger tried to interpret it as a charge density in his fourth paper, but he was unsuccessful.[23]:219 In 1926, just a few days after Schrödinger's fourth and final paper was published, Max Born successfully interpreted Ψ as the probability amplitude, whose absolute square is equal to probability density.[23]:220 Schrödinger, though, always opposed a statistical or probabilistic approach, with its associated discontinuities (much like Einstein, who believed that quantum mechanics was a statistical approximation to an underlying deterministic theory), and never reconciled with the Copenhagen interpretation.[24]
Louis de Broglie in his later years proposed a real-valued wave function connected to the complex wave function by a proportionality constant and developed the de Broglie-Bohm theory.

The wave equation for particles

Wave-particle duality
The Schrödinger equation is a diffusion equation,[25] whose solutions are functions which describe wave-like motions. Wave equations in physics can normally be derived from other physical laws: the wave equation for mechanical vibrations on strings and in matter can be derived from Newton's laws, where the wave function represents the displacement of matter, and electromagnetic waves from Maxwell's equations, where the wave functions are electric and magnetic fields. The basis for Schrödinger's equation, on the other hand, is the energy of the system and a separate postulate of quantum mechanics: the wave function is a description of the system.[26] The Schrödinger equation is therefore a new concept in itself; as Feynman put it:

Where did we get that (equation) from? Nowhere. It is not possible to derive it from anything you know. It came
out of the mind of Schrödinger.

Richard Feynman[27]

The foundation of the equation is structured to be a linear differential equation based on classical energy conservation, and
consistent with the de Broglie relations. The solution is the wave function Ψ, which contains all the information that can be
known about the system. In the Copenhagen interpretation, the modulus of Ψ is related to the probability the particles are in
some spatial configuration at some instant of time. Solving the equation for Ψ can be used to predict how the particles will
behave under the influence of the specified potential and with each other.
The Schrödinger equation was developed principally from the de Broglie hypothesis, a wave equation that would describe
particles,[28] and can be constructed as shown informally in the following sections.[29] For a more rigorous description of
Schrödinger's equation, see also Resnick et al.[30]

Consistency with energy conservation

The total energy E of a particle is the sum of kinetic energy T and potential energy V; this sum is also the usual
expression for the Hamiltonian H in classical mechanics:

E = T + V = H

Explicitly, for a particle in one dimension with position x, mass m and momentum p, and potential energy V which
generally varies with position and time t:

E = p²/(2m) + V(x, t) = H

For three dimensions, the position vector r and momentum vector p must be used:

E = p·p/(2m) + V(r, t) = H
This formalism can be extended to any fixed number of particles: the total energy of the system is then the total kinetic
energy of the particles plus the total potential energy, again the Hamiltonian. However, there can be interactions between
the particles (an N-body problem), so the potential energy V can change as the spatial configuration of particles changes,
and possibly with time. The potential energy, in general, is not the sum of the separate potential energies for each particle; it
is a function of all the spatial positions of the particles. Explicitly:

E = Σn pn·pn/(2mn) + V(r1, r2, …, rN, t) = H
Linearity

The simplest wavefunction is a plane wave of the form:

Ψ(r, t) = A e^(i(k·r - ωt))

where A is the amplitude, k the wavevector, and ω the angular frequency of the plane wave. In general, physical
situations are not purely described by plane waves, so for generality the superposition principle is required; any wave can be
made by superposition of sinusoidal plane waves. So if the equation is linear, a linear combination of plane waves is also an
allowed solution. Hence a necessary and separate requirement is that the Schrödinger equation is a linear differential
equation.

For discrete k the sum is a superposition of plane waves:

Ψ(r, t) = Σn An e^(i(kn·r - ωn t))

for some real amplitude coefficients An, and for continuous k the sum becomes an integral, the Fourier transform of a
momentum space wavefunction:[31]

Ψ(r, t) = (2π)^(-3/2) ∫ Φ(k) e^(i(k·r - ωt)) d3k

where d3k = dkx dky dkz is the differential volume element in k-space, and the integrals are taken over all k-space. The
momentum wavefunction Φ(k) arises in the integrand since the position and momentum space wavefunctions are Fourier
transforms of each other.
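As an illustration of this superposition principle, the short sketch below (added here, not part of the original text) builds a one-dimensional wave packet numerically as a discrete sum of plane waves with Gaussian amplitudes An. The grid sizes, the packet centre k0 and the spread sigma_k are arbitrary example values, and natural units ħ = m = 1 are assumed.

    # Illustrative sketch: a 1D wave packet as a discrete superposition of plane waves.
    # Natural units (hbar = m = 1) and a Gaussian amplitude profile are assumptions of this example.
    import numpy as np

    x = np.linspace(-20.0, 20.0, 2000)            # position grid
    dx = x[1] - x[0]
    k = np.linspace(-5.0, 5.0, 400)               # discrete wavevectors k_n
    k0, sigma_k = 1.5, 0.3                        # centre and spread of the packet (assumed values)
    A = np.exp(-(k - k0)**2 / (2 * sigma_k**2))   # amplitude coefficients A_n

    # Psi(x) = sum_n A_n exp(i k_n x), a discrete version of the Fourier integral above
    psi = (A[:, None] * np.exp(1j * np.outer(k, x))).sum(axis=0)
    psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)   # normalize the total probability to 1

    print("norm =", np.sum(np.abs(psi)**2) * dx)      # close to 1
    print("<x>  =", np.sum(x * np.abs(psi)**2) * dx)  # close to 0 for this symmetric packet

The narrower the spread sigma_k in wavenumber, the wider the resulting packet in position, in line with the uncertainty relation discussed further below.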

Consistency with the De Broglie relations

Diagrammatic summary of the quantities related to the wavefunction, as used in de Broglie's hypothesis and the development of the Schrödinger

equation.[28]

Einstein's light quanta hypothesis (1905) states that the energy E of a photon is proportional to the frequency ν (or angular
frequency, ω = 2πν) of the corresponding quantum wavepacket of light:

E = hν = ħω

Likewise de Broglie's hypothesis (1924) states that any particle can be associated with a wave, and that the momentum p of
the particle is inversely proportional to the wavelength λ of such a wave (or proportional to the wavenumber, k = 2π/λ), in
one dimension, by:

p = h/λ = ħk

while in three dimensions, the wavelength λ is related to the magnitude of the wavevector k:

p = ħk,   |k| = 2π/λ

The Planck–Einstein and de Broglie relations illuminate the deep connections between energy and time, and space and
momentum, and express wave–particle duality. In practice, natural units in which ħ = 1 are used, since the de
Broglie relations then reduce to identities, allowing momentum, wavenumber, energy and frequency to be used interchangeably,
to prevent duplication of quantities, and to reduce the number of dimensions of related quantities. For familiarity, SI units are still
used in this article.
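A brief numerical sketch of these relations follows; it is not from the original text, and the 500 THz photon and 100 eV electron are arbitrary example values chosen only to give familiar orders of magnitude.

    # Planck-Einstein and de Broglie relations with CODATA constants (illustrative values).
    from scipy.constants import h, m_e, e
    import numpy as np

    nu = 5.0e14                                  # optical light frequency in Hz (assumed)
    E_photon = h * nu                            # E = h * nu
    p_electron = np.sqrt(2 * m_e * 100.0 * e)    # p = sqrt(2 m T) for a 100 eV electron (assumed)
    lam = h / p_electron                         # lambda = h / p

    print(f"photon energy       E = {E_photon:.3e} J ({E_photon/e:.2f} eV)")
    print(f"electron wavelength λ = {lam*1e9:.3f} nm")   # about 0.12 nm, i.e. atomic scale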
Schrödinger's insight, late in 1925, was to express the phase of a plane wave as a complex phase factor using these
relations:

Ψ = A e^(i(k·r - ωt)) = A e^(i(p·r - Et)/ħ)

and to realize that the first-order partial derivatives were:

with respect to space:

∇Ψ = (i/ħ) p A e^(i(p·r - Et)/ħ) = (i/ħ) p Ψ

with respect to time:

∂Ψ/∂t = -(i/ħ) E A e^(i(p·r - Et)/ħ) = -(i/ħ) E Ψ
Another postulate of quantum mechanics is that all observables are represented by linear Hermitian operators which act on
the wavefunction, and that the eigenvalues of the operator are the values the observable takes. The previous derivatives are
consistent with the energy operator, corresponding to the time derivative,

Ê Ψ = iħ ∂Ψ/∂t = E Ψ

where E are the energy eigenvalues, and the momentum operator, corresponding to the spatial derivatives (the gradient ∇),

p̂ Ψ = -iħ ∇Ψ = p Ψ

where p is a vector of the momentum eigenvalues. In the above, the "hats" (^) indicate these observables are operators, not
simply ordinary numbers or vectors. The energy and momentum operators are differential operators, while the potential
energy function V is just a multiplicative factor.
Substituting the energy and momentum operators into the classical energy conservation equation obtains the operator:

E = p·p/(2m) + V   →   Ê = p̂·p̂/(2m) + V

so, in terms of derivatives with respect to time and space, acting this operator on the wavefunction immediately led
Schrödinger to his equation:

iħ ∂Ψ/∂t = -(ħ²/2m) ∇²Ψ + V Ψ
Wave–particle duality can be assessed from these equations as follows. The kinetic energy T is related to the square of
momentum p. As the particle's momentum increases, the kinetic energy increases more rapidly, but since the
wavenumber |k| increases, the wavelength λ decreases. In terms of ordinary scalar and vector quantities (not operators):

T = p·p/(2m) = ħ²|k|²/(2m)

The kinetic energy is also proportional to the second spatial derivatives, so it is also proportional to the magnitude of
the curvature of the wave; in terms of operators:

T̂ Ψ = -(ħ²/2m) ∇²Ψ

As the curvature increases, the amplitude of the wave alternates between positive and negative more rapidly, and also
shortens the wavelength. So the inverse relation between momentum and wavelength is consistent with the energy the
particle has, and so the energy of the particle has a connection to a wave, all in the same mathematical formulation.

Wave and particle motion

Increasing levels of wavepacket localization, meaning the particle has a more localized position. (Source: Maschen, own work.)
Quantum-mechanical travelling wavefunctions. (Source: Maschen, own work.)
Perfect localization of a particle.

In the limit ħ → 0, the particle's position and momentum become known exactly. This is equivalent to the classical particle.

Schrödinger required that a wave packet solution near position r with wavevector near k will move along the trajectory
determined by classical mechanics for times short enough for the spread in k (and hence in velocity) not to substantially
increase the spread in r. Since, for a given spread in k, the spread in velocity is proportional to Planck's constant ħ, it is
sometimes said that in the limit as ħ approaches zero, the equations of classical mechanics are restored from quantum
mechanics.[32] Great care is required in how that limit is taken, and in what cases.

The limiting short-wavelength case is equivalent to ħ tending to zero, because this is the limiting case of increasing the wave packet
localization to the definite position of the particle (see the images above). Using the Heisenberg uncertainty principle for position
and momentum, the product of the uncertainties in position and momentum becomes zero as ħ → 0:

σx σpx ≥ ħ/2

where σ denotes the (root mean square) measurement uncertainty in x and px (and similarly for the y and z directions),
which implies the position and momentum can be known to arbitrary precision in this limit.
The Schrödinger equation in its general form

iħ ∂Ψ/∂t = Ĥ Ψ

is closely related to the Hamilton–Jacobi equation (HJE)

-∂S/∂t = H(q, ∇S, t)

where S is the action and H is the Hamiltonian function (not operator). Here the generalized coordinates qi for i = 1, 2,
3 (used in the context of the HJE) can be set to the position in Cartesian coordinates as r = (q1, q2, q3) = (x, y, z).[32]

Substituting

Ψ = √ρ e^(iS/ħ)

where ρ is the probability density, into the Schrödinger equation and then taking the limit ħ → 0 in the resulting equation
yields the Hamilton–Jacobi equation.
The implications are:

The motion of a particle, described by a (short-wavelength) wave packet solution to the
Schrödinger equation, is also described by the Hamilton–Jacobi equation of motion.
The Schrödinger equation includes the wavefunction, so its wave packet solution implies the
position of a (quantum) particle is fuzzily spread out in wave fronts. On the contrary, the Hamilton–
Jacobi equation applies to a (classical) particle of definite position and momentum; instead, the
position and momentum at all times (the trajectory) are deterministic and can be simultaneously
known.

Nonrelativistic quantum mechanics

The quantum mechanics of particles without accounting for the effects of special relativity, for example particles propagating
at speeds much less than light, is known as nonrelativistic quantum mechanics. Following are several forms of
Schrödinger's equation in this context for different situations: time independence and dependence, one and three spatial
dimensions, and one and N particles.
In actuality, the particles constituting the system do not have the numerical labels used in theory. The language of
mathematics forces us to label the positions of particles one way or another, otherwise there would be confusion between
symbols representing which variables are for which particle.[30]

Time independent

If the Hamiltonian is not an explicit function of time, the equation is separable into a product of spatial and temporal parts. In
general, the wavefunction takes the form:

Ψ(space coords, t) = ψ(space coords) τ(t)

where ψ(space coords) is a function of all the spatial coordinate(s) of the particle(s) constituting the system only,
and τ(t) is a function of time only.

Substituting this form of Ψ into the Schrödinger equation for the relevant number of particles in the relevant number of dimensions and
solving by separation of variables implies the general solution of the time-dependent equation has the form:[15]

Ψ(space coords, t) = ψ(space coords) e^(-iEt/ħ)

Since the time-dependent phase factor is always the same, only the spatial part needs to be solved for in time-independent
problems. Additionally, the energy operator Ê = iħ ∂/∂t can always be replaced by the energy eigenvalue E; thus the time-
independent Schrödinger equation is an eigenvalue equation for the Hamiltonian operator:[5]:143ff

Ĥ ψ = E ψ
This is true for any number of particles in any number of dimensions (in a time-independent potential). This case describes
the standing wave solutions of the time-dependent equation, which are the states with definite energy (instead of a probability
distribution of different energies). In physics, these standing waves are called "stationary states" or "energy eigenstates"; in
chemistry they are called "atomic orbitals" or "molecular orbitals". Superpositions of energy eigenstates change their
properties according to the relative phases between the energy levels.
For bound states, the energy eigenvalues from this equation form a discrete spectrum of values, so mathematically energy must be quantized.
More specifically, the energy eigenstates form a basis: any wavefunction may be written as a sum over the discrete energy
states or an integral over continuous energy states, or more generally as an integral over a measure. This is the spectral
theorem in mathematics, and in a finite state space it is just a statement of the completeness of the eigenvectors of
a Hermitian matrix.
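Because Ĥψ = Eψ becomes an ordinary matrix eigenvalue problem once the Hamiltonian is discretised, it can be solved numerically. The sketch below is an illustration added here (not part of the source text): it uses a second-order finite-difference grid for a particle in a box of width L with ħ = m = 1 assumed, and compares the lowest eigenvalues with the exact values n²π²/(2L²).

    # Finite-difference solution of the time-independent Schrödinger equation
    # for a particle in a box (V = 0 inside, ψ = 0 at the walls); hbar = m = 1 assumed.
    import numpy as np

    N, L = 1000, 1.0
    x = np.linspace(0.0, L, N + 2)[1:-1]   # interior grid points only
    dx = x[1] - x[0]

    # Kinetic term -(1/2) d²/dx² discretised with the standard three-point stencil
    diag = np.full(N, 1.0 / dx**2)
    off  = np.full(N - 1, -0.5 / dx**2)
    H = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)

    E_numeric = np.linalg.eigvalsh(H)[:3]
    E_exact = (np.arange(1, 4) * np.pi)**2 / (2 * L**2)
    print(E_numeric)   # close to [4.93, 19.74, 44.41]
    print(E_exact)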

One-dimensional examples

For a particle in one dimension, the Hamiltonian is:

Ĥ = p̂²/(2m) + V(x),   with   p̂ = -iħ d/dx

and substituting this into the general Schrödinger equation gives:

E ψ(x) = -(ħ²/2m) d²ψ(x)/dx² + V(x) ψ(x)

This is the only case in which the Schrödinger equation is an ordinary differential equation, rather than a partial differential equation.
The general solutions are always of the form:

Ψ(x, t) = ψ(x) e^(-iEt/ħ)
For N particles in one dimension, the Hamiltonian is:

Ĥ = -(ħ²/2) Σn (1/mn) ∂²/∂xn² + V(x1, x2, …, xN)

where the position of particle n is xn. The corresponding Schrödinger equation is:

E ψ = -(ħ²/2) Σn (1/mn) ∂²ψ/∂xn² + V(x1, x2, …, xN) ψ

so the general solutions have the form:

Ψ(x1, …, xN, t) = ψ(x1, …, xN) e^(-iEt/ħ)

For non-interacting distinguishable particles,[33] the potential of the system only influences each particle separately, so the total
potential energy is the sum of potential energies for each particle:

V(x1, x2, …, xN) = Σn V(xn)

and the wavefunction can be written as a product of the wavefunctions for each particle:

ψ(x1, x2, …, xN) = Πn ψ(xn)
For non-interacting identical particles, the potential is still a sum, but the wavefunction is a bit more complicated: it is a sum over
the permutations of products of the separate wavefunctions, to account for particle exchange. In general, for interacting
particles, the above decompositions are not possible.
Free particle

For no potential, V = 0, the particle is free and the equation reads:[5]:151ff

E ψ = -(ħ²/2m) d²ψ/dx²

which has oscillatory solutions for E > 0 (the Cn are arbitrary constants):

ψE(x) = C1 e^(i x √(2mE)/ħ) + C2 e^(-i x √(2mE)/ħ)

and exponential solutions for E < 0:

ψ(x) = C1 e^(x √(2m|E|)/ħ) + C2 e^(-x √(2m|E|)/ħ)
The exponentially growing solutions have an infinite norm, and are not physical. They are not allowed in a finite volume with
periodic or fixed boundary conditions.
See also free particle and wavepacket for more discussion on the free particle.
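The two families of solutions quoted above can be checked symbolically. The following sketch is an illustration added here: it writes k² = 2mE/ħ² for E > 0 and κ² = 2m|E|/ħ² for E < 0 (an abbreviation assumed for this example) and asks SymPy for the general solutions of ψ'' = -k²ψ and ψ'' = κ²ψ.

    # Symbolic check of the free-particle solutions (oscillatory vs. exponential).
    import sympy as sp

    x = sp.symbols('x')
    k, kappa = sp.symbols('k kappa', positive=True)   # k^2 = 2mE/hbar^2, kappa^2 = 2m|E|/hbar^2
    psi = sp.Function('psi')

    oscillatory = sp.dsolve(sp.Eq(psi(x).diff(x, 2), -k**2 * psi(x)), psi(x))
    exponential = sp.dsolve(sp.Eq(psi(x).diff(x, 2), kappa**2 * psi(x)), psi(x))
    print(oscillatory)   # sine/cosine (equivalently complex-exponential) solutions for E > 0
    print(exponential)   # growing and decaying real exponentials for E < 0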
Constant potential

Animation of a de Broglie wave incident on a barrier. (Source: Jean-Christophe BENOIST.)
An example of the tunnel effect: the evolution of the wave function of an electron through a potential barrier.

For a constant potential, V = V0, the solution is oscillatory for E > V0 and exponential for E < V0, corresponding to
energies that are allowed or disallowed in classical mechanics. Oscillatory solutions have a classically allowed energy and
correspond to actual classical motions, while the exponential solutions have a disallowed energy and describe a small
amount of quantum bleeding into the classically disallowed region, due to quantum tunneling. If the potential V0 grows to
infinity, the motion is classically confined to a finite region. Viewed far enough away, every solution is reduced to an
exponential; the condition that the exponential is decreasing restricts the energy levels to a discrete set, called the allowed
energies.[31]
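For E < V0, the exponential "bleeding" into the barrier has a characteristic decay constant κ = √(2m(V0 - E))/ħ. The sketch below is an added illustration with assumed example values (a 1 eV electron and a 5 eV barrier); it shows that the resulting decay length is on the ångström scale, which is why tunneling matters at atomic distances.

    # Decay constant of the evanescent wave inside a constant barrier (assumed example values).
    from scipy.constants import hbar, m_e, e
    import numpy as np

    E, V0 = 1.0 * e, 5.0 * e                      # electron energy and barrier height in joules
    kappa = np.sqrt(2 * m_e * (V0 - E)) / hbar    # kappa = sqrt(2m(V0 - E))/hbar, in 1/m
    print(f"decay length 1/kappa = {1e9 / kappa:.3f} nm")   # roughly 0.1 nm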

Harmonic oscillator

A harmonic oscillator in classical mechanics (A–B) and quantum mechanics (C–H). In (A–B), a ball, attached to a spring, oscillates back and
forth. (C–H) are six solutions to the Schrödinger equation for this situation. The horizontal axis is position, the vertical axis is the real part
(blue) or imaginary part (red) of the wavefunction. Stationary states, or energy eigenstates, which are solutions to the time-independent
Schrödinger equation, are shown in C, D, E, F, but not G or H. (Source: Sbyrnes321, own work.)

Main article: Quantum harmonic oscillator


The Schrödinger equation for this situation is

E ψ = -(ħ²/2m) d²ψ/dx² + (1/2) m ω² x² ψ

It is a notable quantum system to solve, since the solutions are exact (though complicated, in terms of Hermite polynomials),
and it can describe, or at least approximate, a wide variety of other systems, including vibrating atoms, molecules,[34] and atoms
or ions in lattices,[35] as well as other potentials near their equilibrium points. It is also the basis of perturbation methods in
quantum mechanics.
There is a family of solutions in the position basis; they are

ψn(x) = (1/√(2^n n!)) (mω/(πħ))^(1/4) e^(-mωx²/(2ħ)) Hn(x √(mω/ħ))

where n = 0, 1, 2, …, and the functions Hn are the Hermite polynomials.
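The sketch below is added as an illustration (with ħ = m = ω = 1 assumed): it evaluates the first few oscillator eigenfunctions ψn using the physicists' Hermite polynomials from SciPy and verifies numerically that they are orthonormal.

    # Orthonormality check of the harmonic-oscillator eigenfunctions (hbar = m = omega = 1 assumed).
    import numpy as np
    from scipy.special import eval_hermite, factorial

    def psi(n, x):
        norm = np.pi**(-0.25) / np.sqrt(2.0**n * factorial(n))
        return norm * np.exp(-x**2 / 2.0) * eval_hermite(n, x)

    x = np.linspace(-10.0, 10.0, 4001)
    dx = x[1] - x[0]
    for i in range(3):
        for j in range(3):
            overlap = np.sum(psi(i, x) * psi(j, x)) * dx
            print(i, j, round(float(overlap), 6))   # close to 1 when i = j, close to 0 otherwise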


Three-dimensional examples

The extension from one dimension to three dimensions is straightforward: all position and momentum operators are replaced
by their three-dimensional expressions and the partial derivative with respect to space is replaced by the gradient operator.
The Hamiltonian for one particle in three dimensions is:

Ĥ = p̂·p̂/(2m) + V(r) = -(ħ²/2m) ∇² + V(r)

generating the equation:

E ψ(r) = -(ħ²/2m) ∇²ψ(r) + V(r) ψ(r)

with stationary state solutions of the form:

Ψ(r, t) = ψ(r) e^(-iEt/ħ)

where the position of the particle is r. Two useful coordinate systems for solving the Schrödinger equation are Cartesian
coordinates, so that r = (x, y, z), and spherical polar coordinates, so that r = (r, θ, φ), although other orthogonal
coordinates are useful for solving the equation for systems with certain geometric symmetries.

For N particles in three dimensions, the Hamiltonian is:

Ĥ = -(ħ²/2) Σn (1/mn) ∇n² + V(r1, r2, …, rN)

where the position of particle n is rn and the gradient operators are partial derivatives with respect to the particle's position
coordinates. In Cartesian coordinates, for particle n, the position vector is rn = (xn, yn, zn) while the gradient and Laplacian
operator are respectively:

∇n = (∂/∂xn, ∂/∂yn, ∂/∂zn),   ∇n² = ∂²/∂xn² + ∂²/∂yn² + ∂²/∂zn²

The Schrödinger equation is:

E ψ = -(ħ²/2) Σn (1/mn) ∇n²ψ + V(r1, r2, …, rN) ψ

with stationary state solutions:

Ψ(r1, …, rN, t) = ψ(r1, …, rN) e^(-iEt/ħ)

Again, for non-interacting distinguishable particles the potential is the sum of particle potentials

V(r1, r2, …, rN) = Σn V(rn)

and the wavefunction is a product of the particle wavefunctions

ψ(r1, r2, …, rN) = Πn ψ(rn)
For non-interacting identical particles, the potential is a sum but the wavefunction is a sum over permutations of products.
The previous two equations do not apply to interacting particles.

Following are examples where exact solutions are known. See the main articles for further details.
Hydrogen atom

This form of the Schrödinger equation can be applied to the hydrogen atom:[26][28]

E ψ = -(ħ²/2μ) ∇²ψ - (e²/(4πε0 r)) ψ

where e is the electron charge, r is the position of the electron (r = |r| is the magnitude of the position), the potential term is
due to the Coulomb interaction, wherein ε0 is the electric constant (permittivity of free space), and

μ = me mp/(me + mp)

is the 2-body reduced mass of the hydrogen nucleus (just a proton) of mass mp and the electron of mass me. The negative
sign arises in the potential term since the proton and electron are oppositely charged. The reduced mass is used in place of the
electron mass since the electron and proton together orbit each other about a common centre of mass, and constitute
a two-body problem to solve. The motion of the electron is of principal interest here, so the equivalent one-body problem is
the motion of the electron using the reduced mass.
The wavefunction for hydrogen is a function of the electron's coordinates, and in fact can be separated into functions of each
coordinate.[36] Usually this is done in spherical polar coordinates:

ψ(r, θ, φ) = R(r) Yℓm(θ, φ)

where R are radial functions and Yℓm(θ, φ) are spherical harmonics of degree ℓ and order m. This is the only atom for which the Schrödinger equation has been
solved exactly. Multi-electron atoms require approximate methods. The family of solutions is:[37]

ψnℓm(r, θ, φ) = √[ (2/(n a0))³ (n - ℓ - 1)!/(2n (n + ℓ)!) ] e^(-r/(n a0)) (2r/(n a0))^ℓ L^(2ℓ+1)_(n-ℓ-1)(2r/(n a0)) Yℓm(θ, φ)

where:

a0 = 4πε0 ħ²/(me e²)

is the Bohr radius,

L^(2ℓ+1)_(n-ℓ-1) are the generalized Laguerre polynomials of degree n - ℓ - 1, and

n, ℓ, m are the principal, azimuthal, and magnetic quantum numbers respectively, which take the
values:

n = 1, 2, 3, …,   ℓ = 0, 1, …, n - 1,   m = -ℓ, …, +ℓ

NB: generalized Laguerre polynomials are defined differently by different authors; see the main articles on them and the
hydrogen atom.
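The energy eigenvalues that accompany these solutions depend only on n, En = -μe⁴/(8ε0²h²n²). The short sketch below is an added illustration using CODATA constants; it evaluates the first few levels and reproduces the familiar value of about -13.6 eV for the ground state.

    # Hydrogen energy levels E_n = -mu e^4 / (8 eps0^2 h^2 n^2), using the reduced mass mu.
    from scipy.constants import m_e, m_p, e, epsilon_0, h

    mu = m_e * m_p / (m_e + m_p)   # two-body reduced mass of electron and proton
    for n in range(1, 4):
        E_n = -mu * e**4 / (8 * epsilon_0**2 * h**2 * n**2)
        print(f"n = {n}:  E = {E_n / e:.3f} eV")   # about -13.6, -3.4, -1.5 eV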

Two-electron atoms or ions

The equation for any two-electron system, such as the neutral helium atom (He, Z = 2), the negative hydrogen ion (H-, Z =
1), or the positive lithium ion (Li+, Z = 3), is:[29]

E ψ = -ħ² [ (1/(2μ)) (∇1² + ∇2²) + (1/M) ∇1·∇2 ] ψ + (e²/(4πε0)) [ 1/r12 - Z/r1 - Z/r2 ] ψ

where r1 is the position of one electron (r1 = |r1| is its magnitude), r2 is the position of the other electron (r2 = |r2| is the
magnitude), r12 = |r12| is the magnitude of the separation between them given by

r12 = |r2 - r1|

μ is again the two-body reduced mass of an electron with respect to the nucleus of mass M, so this time

μ = me M/(me + M)

and Z is the atomic number of the element (not a quantum number).
The cross-term of the two Laplacians,

-(ħ²/M) ∇1·∇2

is known as the mass polarization term, which arises due to the motion of the atomic nucleus. The wavefunction is a function of the
two electrons' positions:

ψ = ψ(r1, r2)
There is no closed form solution for this equation.

Time dependent

This is the equation of motion for the quantum state. In the most general form, it is written:[5]:143ff

iħ ∂Ψ/∂t = Ĥ Ψ

and the solution, the wavefunction, is a function of all the particle coordinates of the system and time.
Following are specific cases.
For one particle in one dimension, the Hamiltonian

Ĥ = p̂²/(2m) + V(x, t),   with   p̂ = -iħ ∂/∂x

generates the equation:

iħ ∂Ψ/∂t = -(ħ²/2m) ∂²Ψ/∂x² + V(x, t) Ψ

For N particles in one dimension, the Hamiltonian is:

Ĥ = -(ħ²/2) Σn (1/mn) ∂²/∂xn² + V(x1, x2, …, xN, t)

where the position of particle n is xn, generating the equation:

iħ ∂Ψ/∂t = -(ħ²/2) Σn (1/mn) ∂²Ψ/∂xn² + V(x1, x2, …, xN, t) Ψ

For one particle in three dimensions, the Hamiltonian is:

Ĥ = -(ħ²/2m) ∇² + V(r, t)

generating the equation:

iħ ∂Ψ/∂t = -(ħ²/2m) ∇²Ψ + V(r, t) Ψ

For N particles in three dimensions, the Hamiltonian is:

Ĥ = -(ħ²/2) Σn (1/mn) ∇n² + V(r1, r2, …, rN, t)

where the position of particle n is rn, generating the equation:[5]:141

iħ ∂Ψ/∂t = -(ħ²/2) Σn (1/mn) ∇n²Ψ + V(r1, r2, …, rN, t) Ψ
This last equation is in a very high dimension, so the solutions are not easy to visualize.

Solution methods

General techniques:
Perturbation theory
The variational method
Quantum Monte Carlo methods
Density functional theory
The WKB approximation and semi-classical expansion

Methods for special cases:
List of quantum-mechanical systems with analytical solutions
Hartree–Fock method and post-Hartree–Fock methods

Properties
The Schrödinger equation has the following properties: some are useful, but there are shortcomings.
Ultimately, these properties arise from the Hamiltonian used, and from the solutions to the equation.
Linearity
Main article: Linear differential equation
In the development above, the Schrödinger equation was made to be linear for generality, though this has
other implications. If two wave functions Ψ1 and Ψ2 are solutions, then so is any linear combination of the
two:

Ψ = a Ψ1 + b Ψ2
where a and b are any complex numbers (the sum can be extended for any number of wavefunctions).
This property allows superpositions of quantum states to be solutions of the Schrödinger equation. Even
more generally, it holds that a general solution to the Schrödinger equation can be found by taking a
weighted sum over all single-state solutions achievable. For example, consider a wave
function Ψ(x, t) such that the wave function is a product of two functions: one time independent, and one
time dependent. If states of definite energy found using the time-independent Schrödinger equation are
given by ψE(x) with amplitude An and the time-dependent phase factor is given by

e^(-iEn t/ħ)

then a valid general solution is

Ψ(x, t) = Σn An ψEn(x) e^(-iEn t/ħ)

Additionally, the ability to scale solutions allows one to solve for a wave function without normalizing it first.
If one has a set of normalized solutions ψn, then

Ψ = Σn An ψn

can be normalized by ensuring that

Σn |An|² = 1

This is much more convenient than having to verify that

∫ |Ψ(x)|² dx = 1
Real energy eigenstates


For the time-independent equation, an additional feature of linearity follows: if two wave
functions ψ1 and ψ2 are solutions to the time-independent equation with the same energy E, then so is any
linear combination:

ψ = a ψ1 + b ψ2

Two different solutions with the same energy are called degenerate.[31]
In an arbitrary potential, if a wave function ψ solves the time-independent equation, so does its complex
conjugate, denoted ψ*. By taking linear combinations, the real and imaginary parts of ψ are each solutions.
If there is no degeneracy they can only differ by a factor.
In the time-dependent equation, complex conjugate waves move in opposite directions. If Ψ(x, t) is one
solution, then so is Ψ*(x, -t). The symmetry of complex conjugation is called time-reversal symmetry.

Space and time derivatives

Continuity of the wavefunction and its first spatial derivative (in the x direction; y and z coordinates not
shown), at some time t.
The Schrödinger equation is first order in time and second order in space, which describes the time
evolution of a quantum state (meaning it determines the future amplitude from the present).
Explicitly for one particle in 3-dimensional Cartesian coordinates the equation is

iħ ∂Ψ/∂t = -(ħ²/2m) (∂²Ψ/∂x² + ∂²Ψ/∂y² + ∂²Ψ/∂z²) + V(x, y, z, t) Ψ

The first-order time partial derivative implies that the initial value (at t = 0) of the wavefunction

Ψ(x, y, z, t = 0)

is an arbitrary constant. Likewise, the second-order derivatives with respect to space imply that the
wavefunction and its first-order spatial derivatives

Ψ(xb, yb, zb, t),   ∂Ψ/∂x,   ∂Ψ/∂y,   ∂Ψ/∂z   (evaluated at the boundary points)

are all arbitrary constants at a given set of points, where xb, yb, zb are a set of points describing
boundary b (derivatives are evaluated at the boundaries). Typically there are one or two boundaries, such
as the step potential and particle in a box respectively.
As the first-order derivatives are arbitrary, the wavefunction can be a continuously differentiable function of
space, since at any boundary the gradient of the wavefunction can be matched.

On the contrary, wave equations in physics are usually second order in time; notable examples are the family of
classical wave equations and the quantum Klein–Gordon equation.
Local conservation of probability
Main articles: Probability current and Continuity equation
The Schrödinger equation is consistent with probability conservation. Multiplying the Schrödinger equation
on the right by the complex conjugate wavefunction, multiplying the wavefunction to the left of the
complex conjugate of the Schrödinger equation, and subtracting, gives the continuity equation for
probability:[38]

∂ρ/∂t + ∇·j = 0

where

ρ = |Ψ|² = Ψ*Ψ

is the probability density (probability per unit volume; * denotes complex conjugate), and

j = (ħ/(2mi)) (Ψ*∇Ψ - Ψ∇Ψ*)

is the probability current (flow per unit area).


Hence predictions from the Schrödinger equation do not violate probability conservation.
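As a concrete check (added here as an illustration), the probability current of a single plane wave Ψ = A e^(i(kx - ωt)) works out to j = ħk|A|²/m, i.e. the probability density |A|² times the classical velocity p/m; the SymPy sketch below confirms this.

    # Probability current of a plane wave, evaluated symbolically.
    import sympy as sp

    x, t, k, omega, hbar, m = sp.symbols('x t k omega hbar m', positive=True)
    A = sp.symbols('A', positive=True)
    Psi = A * sp.exp(sp.I * (k * x - omega * t))

    j = (hbar / (2 * m * sp.I)) * (sp.conjugate(Psi) * sp.diff(Psi, x)
                                   - Psi * sp.diff(sp.conjugate(Psi), x))
    print(sp.simplify(j))   # A**2*hbar*k/m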
Positive energy
If the potential is bounded from below, meaning there is a minimum value of potential energy, the
eigenfunctions of the Schrödinger equation have energy which is also bounded from below. This can be
seen most easily by using the variational principle, as follows.
For any linear operator Â bounded from below, the eigenvector with the smallest eigenvalue is the
vector ψ that minimizes the quantity

⟨ψ|Â|ψ⟩

over all ψ which are normalized.[38] In this way, the smallest eigenvalue is expressed through
the variational principle. For the Schrödinger Hamiltonian Ĥ bounded from below, the smallest eigenvalue
is called the ground state energy. That energy is the minimum value of

⟨ψ|Ĥ|ψ⟩ = ∫ [ (ħ²/2m) |∇ψ|² + V(x) |ψ|² ] d³x

(using integration by parts). Due to the complex modulus of ψ squared (which is positive definite), the right-hand
side is always greater than the lowest value of V(x). In particular, the ground state energy is positive
when V(x) is everywhere positive.
For potentials which are bounded below and are not infinite over a region, there is a ground state which
minimizes the integral above. This lowest-energy wavefunction is real and positive definite, meaning the
wavefunction can increase and decrease, but is positive for all positions. It physically cannot be negative: if
it were, smoothing out the bends at the sign change (to minimize the wavefunction) rapidly reduces the
gradient contribution to the integral, and hence the kinetic energy, while the potential energy changes
linearly and less quickly. The kinetic and potential energy are both changing at different rates, so the total
energy is not constant, which can't happen (conservation). The solutions are consistent with the Schrödinger
equation if this wavefunction is positive definite.
The lack of sign changes also shows that the ground state is nondegenerate, since if there were two
ground states with common energy E, not proportional to each other, there would be a linear combination
of the two that would also be a ground state, resulting in a zero solution.
Analytic continuation to diffusion
Main article: Path integral formulation
The above properties (positive definiteness of energy) allow the analytic continuation of the Schrödinger
equation to be identified as a stochastic process. This can be interpreted as the Huygens–Fresnel
principle applied to de Broglie waves; the spreading wavefronts are diffusive probability amplitudes.[38] For
a free particle (not subject to a potential) in a random walk, substituting τ = it into the time-dependent
Schrödinger equation gives:[39]

∂Ψ/∂τ = (ħ/2m) ∇²Ψ

which has the same form as the diffusion equation, with diffusion coefficient ħ/2m. In that case, the
diffusivity yields the de Broglie relation in accordance with the Markov process.[40]

Regularity

On the space of square-integrable densities, the Schrödinger semigroup e^(itĤ) is a unitary evolution,
and therefore surjective. The flows satisfy the Schrödinger equation i ∂ψ/∂t = Ĥψ, where the derivative is
taken in the distribution sense. However, since Ĥ is, for most physically reasonable Hamiltonians (e.g., the
Laplace operator, possibly modified by a potential), unbounded in L², this shows that the semigroup flows
lack Sobolev regularity in general. Instead, solutions of the Schrödinger equation satisfy a Strichartz estimate.

Relativistic quantum mechanics


Relativistic quantum mechanics is obtained where quantum mechanics and special
relativity simultaneously apply. In general, one wishes to build relativistic wave equations from the
relativistic energy–momentum relation

E² = (pc)² + (m0 c²)²

instead of classical energy equations. The Klein–Gordon equation and the Dirac equation are two such
equations. The Klein–Gordon equation,

(1/c²) ∂²ψ/∂t² - ∇²ψ + (mc/ħ)² ψ = 0,

was the first such equation to be obtained, even before the nonrelativistic one, and applies to massive
spinless particles. The Dirac equation arose from taking the "square root" of the Klein–Gordon equation by

factorizing the entire relativistic wave operator into a product of two operators; one of these is the
operator for the entire Dirac equation.
The general form of the Schrödinger equation remains true in relativity, but the Hamiltonian is less
obvious. For example, the Dirac Hamiltonian for a particle of mass m and electric charge q in an
electromagnetic field (described by the electromagnetic potentials φ and A) is:

Ĥ = γ⁰ [ c γ·(p̂ - qA) + mc² ] + qφ

in which the γ = (γ1, γ2, γ3) and γ⁰ are the Dirac gamma matrices related to the spin of the particle. The Dirac
equation is true for all spin-1/2 particles, and the solutions to the equation are 4-component spinor
fields with two components corresponding to the particle and the other two to the antiparticle.
For the Klein–Gordon equation, the general form of the Schrödinger equation is inconvenient to use, and
in practice the Hamiltonian is not expressed in a way analogous to the Dirac Hamiltonian. The equations
for relativistic quantum fields can be obtained in other ways, such as starting from a Lagrangian
density and using the Euler–Lagrange equations for fields, or using the representation theory of the Lorentz
group, in which certain representations can be used to fix the equation for a free particle of given spin (and
mass).
In general, the Hamiltonian to be substituted into the general Schrödinger equation is not just a function of
the position and momentum operators (and possibly time), but also of spin matrices. Also, the solutions to
a relativistic wave equation, for a massive particle of spin s, are complex-valued 2(2s + 1)-
component spinor fields.

Quantum field theory


The general equation is also valid and used in quantum field theory, both in relativistic and nonrelativistic
situations. However, the solution is no longer interpreted as a "wave", but should be interpreted as an
operator acting on states existing in a Fock space.

First Order Form


The Schrödinger equation can also be derived from a first-order form[41][42][43] similar to the manner in which
the Klein–Gordon equation can be derived from the Dirac equation. In 1D the first-order equation is given
by

This equation allows for the inclusion of spin in nonrelativistic quantum mechanics. Squaring the above
equation yields the Schrödinger equation in 1D. The matrices obey the following properties:

The three-dimensional version of the equation is given by

Here one matrix is nilpotent and the others are the Dirac gamma matrices.
The Schrödinger equation in 3D can be obtained by squaring the above equation. In the
nonrelativistic limit, the above equation can be derived from the Dirac
equation.

PERTURBATION THEORY

Perturbation theory is another method that can be used to calculate the formations produced in the Ariny Amos skin
experiment described in this work.

Perturbation theory comprises mathematical methods for finding an approximate solution to a problem,
by starting from the exact solution of a related, simpler problem. A critical feature of the technique is a
middle step that breaks the problem into "solvable" and "perturbation" parts.[1] Perturbation theory is
applicable if the problem at hand cannot be solved exactly, but can be formulated by adding a "small"
term to the mathematical description of the exactly solvable problem.

Perturbation theory leads to an expression for the desired solution in terms of a formal power series in
some "small" parameter, known as a perturbation series, that quantifies the deviation from the exactly
solvable problem. The leading term in this power series is the solution of the exactly solvable problem,
while further terms describe the deviation in the solution, due to the deviation from the initial problem.
Formally, we have for the approximation to the full solution A a series in the small parameter (here
called ε), like the following:

A = A0 + ε A1 + ε² A2 + ε³ A3 + ⋯

In this example, A0 would be the known solution to the exactly solvable initial problem at the skin during
the experiment, as the electrons bombard the hydrogen producing scattered star-shaped lights, and A1,
A2, ... represent the higher-order terms, which may be found iteratively by some systematic procedure.
This applies equally to the galaxies of star formation in the sky. For small ε these higher-order terms in the series
become successively smaller.

An approximate "perturbation solution" is obtained by truncating the series, usually by keeping only the
first two terms, the initial solution and the "first-order" perturbation correction:

A ≈ A0 + ε A1
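A minimal worked example (added here for illustration, not taken from the original text): the root of x² - εx - 1 = 0 near x = 1 has the expansion x ≈ 1 + ε/2 + ε²/8, so A0 = 1, A1 = 1/2 and A2 = 1/8. Because the exact root is known in closed form here, the truncated series can be checked directly.

    # Truncated perturbation series for the root of x**2 - eps*x - 1 = 0 near x = 1.
    import numpy as np

    eps = 0.1
    A0, A1, A2 = 1.0, 0.5, 0.125                 # coefficients found by matching powers of eps
    first_order  = A0 + eps * A1                 # A = A0 + eps*A1, the usual truncation
    second_order = A0 + eps * A1 + eps**2 * A2
    exact = (eps + np.sqrt(eps**2 + 4.0)) / 2.0  # exact root, for comparison

    print(first_order, second_order, exact)      # 1.05, 1.05125, 1.0512492...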
General description

Perturbation theory is closely related to methods used in numerical analysis, and it makes the calculation of the formation
of stars possible in Ariny Amos's own skin experiment. The earliest use of what would now be called
perturbation theory was to deal with the otherwise unsolvable mathematical problems of celestial
mechanics: for example the orbit of the Moon, which moves noticeably differently from a simple
Keplerian ellipse because of the competing gravitation of the Earth and the Sun.

Perturbation methods start with a simplified form of the original problem, which is simple enough to be
solved exactly. In celestial mechanics, this is usually a Keplerian ellipse. Under non-relativistic gravity, an
ellipse is exactly correct when there are only two gravitating bodies (say, the Earth and the Moon) but
not quite correct when there are three or more objects (say, the Earth, Moon, Sun, and the rest of the
solar system), and not quite correct when the gravitational interaction is stated using formulations from
general relativity.

The solved, but simplified problem is then "perturbed" to make the conditions that the perturbed
solution actually satisfies closer to the formula in the original problem, such as including the gravitational
attraction of a third body (the Sun). Typically, the "conditions" that represent reality are a formula (or
several) that specifically express some physical law, like Newton's second law, the force-acceleration
equation,

F = m a
In the case of the example, the force F is calculated based on the number of gravitationally relevant
bodies; the acceleration a is obtained, using calculus, from the path of the Moon in its orbit. Both of
these come in two forms: approximate values for force and acceleration, which result from
simplifications, and hypothetical exact values for force and acceleration, which would require the
complete answer to calculate.

The slight changes that result from accommodating the perturbation, which themselves may have been
simplified yet again, are used as corrections to the approximate solution. Because of simplifications
introduced along every step of the way, the corrections are never perfect, and the conditions met by the
corrected solution do not perfectly match the equation demanded by reality. However, even only one
cycle of corrections often provides an excellent approximate answer to what the real solution should be.

There is no requirement to stop at only one cycle of corrections. A partially corrected solution can be re-
used as the new starting point for yet another cycle of perturbations and corrections. In principle, cycles
of finding increasingly better corrections could go on indefinitely. In practice, one typically stops at one or
two cycles of corrections. The usual difficulty with the method is that the corrections progressively make
the new solutions very much more complicated, so each cycle is much more difficult to manage than the
previous cycle of corrections. Isaac Newton is reported to have said, regarding the problem of the
Moon's orbit, that "It causeth my head to ache."[3]

This general procedure is a widely used mathematical tool in the advanced sciences and engineering: start
with a simplified problem and gradually add corrections so that the corrected
problem becomes a closer and closer match to the original one. It is the natural extension to
mathematical functions of the "guess, check, and fix" method first used by older civilisations to compute
certain numbers, such as square roots.
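A tiny illustration of that "guess, check, and fix" idea (added here, not in the source): to compute √26, take the solvable problem √25 = 5 as the starting point and treat the remainder as a small correction, √26 = 5·(1 + 1/25)^(1/2) ≈ 5·(1 + δ/2 - δ²/8 + …) with δ = 1/25.

    # Successive perturbative corrections to sqrt(26), starting from the exact sqrt(25) = 5.
    delta = 1.0 / 25.0

    zeroth = 5.0
    first  = 5.0 * (1.0 + delta / 2.0)
    second = 5.0 * (1.0 + delta / 2.0 - delta**2 / 8.0)

    print(zeroth, first, second)   # 5.0, 5.1, 5.099
    print(26 ** 0.5)               # 5.0990195...; each cycle of correction gets closer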

Examples

Examples for the "mathematical description" are: an algebraic equation,[4] a differential equation (e.g.,
the equations of motion[5] or a wave equation), a free energy (in statistical mechanics), radiative
transfer,[6] a Hamiltonian operator (in quantum mechanics).

Examples for the kind of solution to be found perturbatively: the solution of the equation (e.g., the
trajectory of a particle), the statistical average of some physical quantity (e.g., average magnetization),
the ground state energy of a quantum mechanical problem.

Examples for the exactly solvable problems to start with: linear equations, including linear equations of
motion (harmonic oscillator, linear wave equation), statistical or quantum-mechanical systems of non-
interacting particles (or in general, Hamiltonians or free energies containing only terms quadratic in all
degrees of freedom).

Examples of "perturbations" to deal with: Nonlinear contributions to the equations of motion,


interactions between particles, terms of higher powers in the Hamiltonian/Free Energy.

For physical problems involving interactions between particles, the terms of the perturbation series may
be displayed (and manipulated) using Feynman diagrams.


History of perturbation theory

Perturbation theory was first devised to solve otherwise intractable problems in the calculation of the
motions of planets in the solar system. For instance, Newton's law of universal gravitation explained the
gravitation between two heavenly bodies, but when a third body is added, the problem was, "How does
each body pull on each?" Newton's equation only allowed the mass of two bodies to be analyzed. The
gradually increasing accuracy of astronomical observations led to incremental demands in the accuracy

of solutions to Newton's gravitational equations, which led several notable 18th and 19th century
mathematicians, such as Lagrange and Laplace, to extend and generalize the methods of perturbation
theory. These well-developed perturbation methods were adopted and adapted to solve new problems
arising during the development of quantum mechanics in 20th century atomic and subatomic physics.
Paul Dirac developed perturbation theory in 1927 to evaluate when a particle would be emitted in
radioactive elements. It was later named Fermi's golden rule.

Beginnings in the study of planetary motion

Since the planets are very remote from each other, and since their mass is small as compared to the mass
of the Sun, the gravitational forces between the planets can be neglected, and the planetary motion is
considered, to a first approximation, as taking place along Kepler's orbits, which are defined by the
equations of the two-body problem, the two bodies being the planet and the Sun.[9]

As astronomical data came to be known with much greater accuracy, it became necessary to consider
how the motion of a planet around the Sun is affected by other planets. This was the origin of the three-
body problem; thus, in studying the system Moon–Earth–Sun, the mass ratio between the Moon and the
Earth was chosen as the small parameter. Lagrange and Laplace were the first to advance the view that
the constants which describe the motion of a planet around the Sun are "perturbed", as it were, by the
motion of other planets and vary as a function of time; hence the name "perturbation theory".

Perturbation theory was investigated by the classical scholars (Laplace, Poisson, Gauss), as a result of
which the computations could be performed with very high accuracy. The discovery of the planet
Neptune in 1846 by Urbain Le Verrier, based on the deviations in the motion of the planet Uranus (he sent
the coordinates to Johann Gottfried Galle, who successfully observed Neptune through his telescope),
represented a triumph of perturbation theory.

Perturbation orders

The standard exposition of perturbation theory is given in terms of the order to which the perturbation is
carried out: first-order perturbation theory or second-order perturbation theory, and whether the
perturbed states are degenerate, which requires singular perturbation. In the singular case extra care
must be taken, and the theory is slightly more elaborate.

In chemistry

Many of the ab initio quantum chemistry methods use perturbation theory directly or are closely related
to such methods. Implicit perturbation theory[10] works with the complete Hamiltonian from the very beginning
and never specifies a perturbation operator as such. Møller–Plesset perturbation theory uses the
difference between the Hartree–Fock Hamiltonian and the exact non-relativistic Hamiltonian as the
perturbation. The zero-order energy is the sum of orbital energies. The first-order energy is the Hartree–
Fock energy, and electron correlation is included at second order or higher. Calculations to second, third
or fourth order are very common, and the code is included in most ab initio quantum chemistry
programs. A related but more accurate method is the coupled cluster method.

COSMOLOGICAL PERTURBATION THEORY: AN ALTERNATIVE WAY TO CALCULATE STAR FORMATION IN THE SKIN
EXPERIMENT.

In physical cosmology, cosmological perturbation theory is the theory by which the evolution of
structure is understood in the big bang model. It uses general relativity to compute the gravitational
forces causing small perturbations to grow and eventually seed the formation of stars, quasars, galaxies
and clusters. It only applies to situations in which the universe is predominantly homogeneous, such as
during cosmic inflation and large parts of the big bang. The universe is believed to still be homogeneous
enough that the theory is a good approximation on the largest scales, but on smaller scales more
involved techniques, such as N-body simulations, must be used.

Because of the gauge invariance of general relativity, the correct formulation of cosmological
perturbation theory is subtle. There are currently two distinct approaches to perturbation theory in
classical general relativity:

gauge-invariant perturbation theory based on foliating a space-time with hyper-surfaces, and

1+3 covariant gauge-invariant perturbation theory based on threading a space-time with frames.

Gauge-invariant perturbation theory

The gauge-invariant perturbation theory is based on developments by Bardeen (1980),[1] Kodama and
Sasaki (1984),[2] building on the work of Lifshitz (1946).[3] This is the standard approach to perturbation
theory of general relativity for cosmology.[4] This approach is widely used for the computation of
anisotropies in the cosmic microwave background radiation[5] as part of the physical cosmology program,
and focuses on predictions arising from linearisations that preserve gauge invariance with respect to
Friedmann–Lemaître–Robertson–Walker (FLRW) models. This approach draws heavily on the use of
Newtonian-like analogues and usually has as its starting point the FLRW background around which
perturbations are developed. The approach is non-local and coordinate dependent but gauge invariant,
as the resulting linear framework is built from a specified family of background hyper-surfaces which are
linked by gauge-preserving mappings to foliate the space-time. Although intuitive, this approach does not
deal well with the nonlinearities natural to general relativity.

1+3 covariant gauge-invariant perturbation theory

In relativistic cosmology using the Lagrangian threading dynamics of Ehlers (1971)[6] and Ellis (1971)[7] it is
usual to use the gauge-invariant covariant perturbation theory developed by Hawking (1966)[8] and Ellis
and Bruni (1989).[9] Here rather than starting with a background and perturbing away from that
background one starts with full general relativity and systematically reduces the theory down to one that
is linear around a particular background.[10] The approach is local and both covariant as well as gauge
invariant but can be non-linear because the approach is built around the local comoving observer frame

(see frame bundle) which is used to thread the entire space-time. This approach to perturbation theory
produces differential equations that are of just the right order needed to describe the true physical
degrees of freedom and as such no non-physical gauge modes exist. It is usual to express the theory in a
coordinate free manner. For applications of kinetic theory, because one is required to use the full tangent
bundle, it becomes convenient to use the tetrad formulation of relativistic cosmology. The application of
this approach to the computation of anisotropies in cosmic microwave background radiation[11] requires
the linearization of the full relativistic kinetic theory developed by Thorne (1980)[12] and Ellis, Matravers
and Treciokas (1983).

Gauge freedom and frame fixing

In relativistic cosmology there is a freedom associated with the choice of threading frame; this frame
choice is distinct from the choice associated with coordinates. Picking this frame is equivalent to fixing the
choice of timelike world lines mapped into each other. This reduces the gauge freedom; it does not fix the
gauge, but the theory remains gauge invariant under the remaining gauge freedoms. In order to fix the
gauge, a specification of correspondences between the time surfaces in the real universe (perturbed) and
the background universe is required, along with the correspondences between points on the initial
spacelike surfaces in the background and in the real universe. This is the link between the gauge-invariant
perturbation theory and the gauge-invariant covariant perturbation theory. Gauge invariance is only
guaranteed if the choice of frame coincides exactly with that of the background; usually this is trivial to
ensure because physical frames have this property.

Newtonian-like equations

Newtonian-like equations emerge from perturbative general relativity with the choice of the Newtonian
gauge; the Newtonian gauge provides the direct link between the variables typically used in the gauge-
invariant perturbation theory and those arising from the more general gauge-invariant covariant
perturbation theory.

NEWTON'S LAW OF UNIVERSAL GRAVITATION.

Sir Isaac Newton PRS (/ˈnjuːtən/;[6] 25 December 1642 – 20 March 1726/27[1]) was an English mathematician, astronomer,
and physicist (described in his own day as a "natural philosopher") who is widely recognised as one of the most influential
scientists of all time and a key figure in the scientific revolution. His book Philosophiæ Naturalis Principia
Mathematica ("Mathematical Principles of Natural Philosophy"), first published in 1687, laid the foundations of classical
mechanics. Newton also made pathbreaking contributions to optics, and he shares credit with Gottfried Wilhelm Leibniz for
developing the infinitesimal calculus.
Newton's Principia formulated the laws of motion and universal gravitation that dominated scientists' view of the physical
universe for the next three centuries. By deriving Kepler's laws of planetary motion from his mathematical description
of gravity, and using the same principles to account for the trajectories of comets, the tides, the precession of the equinoxes,
and other phenomena, Newton removed the last doubts about the validity of the heliocentric model of the Solar System and
demonstrated that the motion of objects on Earth and of celestial bodies could be accounted for by the same principles.
Newton's theoretical prediction that the Earth is shaped as an oblate spheroid was later vindicated by the geodetic
measurements of Maupertuis, La Condamine, and others, thus convincing most Continental European scientists of the
superiority of Newtonian mechanics over the earlier system of Descartes.
Newton also built the first practical reflecting telescope and developed a sophisticated theory of colour based on the
observation that a prism decomposes white light into the colours of the visible spectrum. Newton's work on light was collected
in his highly influential book Opticks, first published in 1704. He also formulated an empirical law of cooling, made the first
theoretical calculation of the speed of sound, and introduced the notion of a Newtonian fluid. In addition to his work on

calculus, as a mathematician Newton contributed to the study of power series, generalised the binomial theorem to non-
integer exponents, developed a method for approximating the roots of a function, and classified most of the cubic plane
curves.
Newton was a fellow of Trinity College and the second Lucasian Professor of Mathematics at the University of Cambridge. He
was a devout but unorthodox Christian, who privately rejected the doctrine of the Trinity and who, unusually for a member of
the Cambridge faculty of the day, refused to take holy orders in the Church of England. Beyond his work on the mathematical
sciences, Newton dedicated much of his time to the study of alchemy and biblical chronology, but most of his work in those
areas remained unpublished until long after his death. Politically and personally tied to the Whig party, Newton served two
brief terms as Member of Parliament for the University of Cambridge, in 1689–90 and 1701–02. He was knighted by Queen
Anne in 1705, and he spent the last three decades of his life in London, serving as Warden (1696–1700) and Master (1700–
1727) of the Royal Mint, as well as president of the Royal Society (1703–1727).

Newton's law of universal gravitation states that every particle attracts every other particle in the universe
with a force that is directly proportional to the product of their masses and inversely proportional to the
square of the distance between their centers. This is a general physical law derived from empirical
observations by what Isaac Newton called inductive reasoning.[1] It is a part of classical mechanics and
was formulated in Newton's work Philosophiæ Naturalis Principia Mathematica ("the Principia"), first
published on 5 July 1687. (When Newton's book was presented in 1686 to the Royal Society, Robert
Hooke made a claim that Newton had obtained the inverse square law from him; see the History section
below.)

In modern language, the law states: Every point mass attracts every single other point mass by a force
pointing along the line intersecting both points. The force is proportional to the product of the two
masses and inversely proportional to the square of the distance between them. The first test of Newton's
theory of gravitation between masses in the laboratory was the Cavendish experiment conducted by the
British scientist Henry Cavendish in 1798. It took place 111 years after the publication of Newton's
Principia and approximately 71 years after his death.

Newton's law of gravitation resembles Coulomb's law of electrical forces, which is used to calculate the
magnitude of the electrical force arising between two charged bodies. Both are inverse-square laws,
where force is inversely proportional to the square of the distance between the bodies. Coulomb's law
has the product of two charges in place of the product of the masses, and the electrostatic constant in
place of the gravitational constant.
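In symbols the law reads F = G m1 m2 / r², with G the gravitational constant. The sketch below is an added illustration; the masses and mean separation are round textbook values, used here only to show the order of magnitude of the Earth–Moon attraction.

    # Newton's law of universal gravitation, F = G*m1*m2/r^2, for the Earth-Moon pair.
    from scipy.constants import G

    m_earth = 5.972e24      # kg (approximate)
    m_moon  = 7.342e22      # kg (approximate)
    r       = 3.844e8       # mean Earth-Moon distance in m (approximate)

    F = G * m_earth * m_moon / r**2
    print(f"F = {F:.3e} N")  # about 2.0e20 N, acting on each body along the line joining their centers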

Newton's law has since been superseded by Albert Einstein's theory of general relativity, but it continues
to be used as an excellent approximation of the effects of gravity in most applications. Relativity is
required only when there is a need for extreme precision, or when dealing with very strong gravitational
fields, such as those found near extremely massive and dense objects, or at very close distances (such as
Mercury's orbit around the Sun).

Early history

A recent assessment (by Ofer Gal) about the early history of the inverse square law is "by the late 1670s",
the assumption of an "inverse proportion between gravity and the square of distance was rather
common and had been advanced by a number of different people for different reasons".

The same author does credit Hooke with a significant and even seminal contribution, but he treats
Hooke's claim of priority on the inverse square point as uninteresting since several individuals besides
Newton and Hooke had at least suggested it, and he points instead to the idea of "compounding the
celestial motions" and the conversion of Newton's thinking away from "centrifugal" and towards
"centripetal" force as Hooke's significant contributions.

Newton himself gave credit in his Principia to two persons: Bullialdus (who wrote, without proof, that there
was a force on the Earth towards the Sun) and Borelli (who wrote that all planets were attracted towards the
Sun). Whiteside wrote that the main influence was Borelli, because Newton had a copy of his book.

Plagiarism dispute

In 1686, when the first book of Newton's Principia was presented to the Royal Society, Robert Hooke
accused Newton of plagiarism by claiming that he had taken from him the "notion" of "the rule of the
decrease of Gravity, being reciprocally as the squares of the distances from the Center". At the same time
(according to Edmond Halley's contemporary report) Hooke agreed that "the Demonstration of the
Curves generated thereby" was wholly Newton's.

In this way, the question arose as to what, if anything, Newton owed to Hooke. This is a subject
extensively discussed since that time and on which some points, outlined below, continue to excite
controversy.

Hooke's work and claims

Robert Hooke published his ideas about the "System of the World" in the 1660s, when he read to the
Royal Society on March 21, 1666, a paper "On gravity", "concerning the inflection of a direct motion into
a curve by a supervening attractive principle", and he published them again in somewhat developed form

in 1674, as an addition to "An Attempt to Prove the Motion of the Earth from Observations". Hooke
announced in 1674 that he planned to "explain a System of the World differing in many particulars from
any yet known", based on three "Suppositions": that "all Celestial Bodies whatsoever, have an attraction
or gravitating power towards their own Centers" [and] "they do also attract all the other Celestial Bodies
that are within the sphere of their activity"; that "all bodies whatsoever that are put into a direct and
simple motion, will so continue to move forward in a straight line, till they are by some other effectual
powers deflected and bent..."; and that "these attractive powers are so much the more powerful in
operating, by how much the nearer the body wrought upon is to their own Centers". Thus Hooke clearly
postulated mutual attractions between the Sun and planets, in a way that increased with nearness to the
attracting body, together with a principle of linear inertia.

Hooke's statements up to 1674 made no mention, however, that an inverse square law applies or might
apply to these attractions. Hooke's gravitation was also not yet universal, though it approached
universality more closely than previous hypotheses. He also did not provide accompanying evidence or
mathematical demonstration. On the latter two aspects, Hooke himself stated in 1674: "Now what these
several degrees [of attraction] are I have not yet experimentally verified"; and as to his whole proposal:
"This I only hint at present", "having my self many other things in hand which I would first compleat, and
therefore cannot so well attend it" (i.e. "prosecuting this Inquiry"). It was later on, in writing on 6 January
1679/80 to Newton, that Hooke communicated his "supposition ... that the Attraction always is in a
duplicate proportion to the Distance from the Center Reciprocall, and Consequently that the Velocity will
be in a subduplicate proportion to the Attraction and Consequently as Kepler Supposes Reciprocall to the
Distance." (The inference about the velocity was incorrect.)

Hooke's correspondence with Newton during 1679–1680 not only mentioned this inverse square
supposition for the decline of attraction with increasing distance, but also, in Hooke's opening letter to
Newton, of 24 November 1679, an approach of "compounding the celestial motions of the planets of a
direct motion by the tangent & an attractive motion towards the central body".

Newton's work and claims

Newton, faced in May 1686 with Hooke's claim on the inverse square law, denied that Hooke was to be
credited as author of the idea. Among the reasons, Newton recalled that the idea had been discussed
with Sir Christopher Wren previous to Hooke's 1679 letter. Newton also pointed out and acknowledged
prior work of others, including Bullialdus, (who suggested, but without demonstration, that there was an
attractive force from the Sun in the inverse square proportion to the distance), and Borelli[5] (who
suggested, also without demonstration, that there was a centrifugal tendency in counterbalance with a
gravitational attraction towards the Sun so as to make the planets move in ellipses). D T Whiteside has
described the contribution to Newton's thinking that came from Borelli's book, a copy of which was in
Newton's library at his death.

Newton further defended his work by saying that had he first heard of the inverse square proportion
from Hooke, he would still have some rights to it in view of his demonstrations of its accuracy. Hooke,
without evidence in favor of the supposition, could only guess that the inverse square law was

approximately valid at great distances from the center. According to Newton, while the 'Principia' was
still at pre-publication stage, there were so many a-priori reasons to doubt the accuracy of the inverse-
square law (especially close to an attracting sphere) that "without my (Newton's) Demonstrations, to
which Mr Hooke is yet a stranger, it cannot be believed by a judicious Philosopher to be any where
accurate."

This remark refers among other things to Newton's finding, supported by mathematical demonstration,
that if the inverse square law applies to tiny particles, then even a large spherically symmetrical mass also
attracts masses external to its surface, even close up, exactly as if all its own mass were concentrated at
its center. Thus Newton gave a justification, otherwise lacking, for applying the inverse square law to
large spherical planetary masses as if they were tiny particles. In addition, Newton had formulated in
Propositions 43-45 of Book 1, and associated sections of Book 3, a sensitive test of the accuracy of the
inverse square law, in which he showed that only where the law of force is accurately as the inverse
square of the distance will the directions of orientation of the planets' orbital ellipses stay constant as
they are observed to do apart from small effects attributable to inter-planetary perturbations.

In regard to evidence that still survives of the earlier history, manuscripts written by Newton in the 1660s
show that Newton himself had, by 1669, arrived at proofs that in a circular case of planetary motion,
"endeavour to recede" (what was later called centrifugal force) had an inverse-square relation with
distance from the center. After his 1679-1680 correspondence with Hooke, Newton adopted the
language of inward or centripetal force. According to Newton scholar J. Bruce Brackenridge, although
much has been made of the change in language and difference of point of view, as between centrifugal
or centripetal forces, the actual computations and proofs remained the same either way. They also
involved the combination of tangential and radial displacements, which Newton was making in the
1660s. The lesson offered by Hooke to Newton here, although significant, was one of perspective and did
not change the analysis. This background shows there was a basis for Newton to deny deriving the inverse
square law from Hooke.

Newton's acknowledgment

On the other hand, Newton did accept and acknowledge, in all editions of the 'Principia', that Hooke (but
not exclusively Hooke) had separately appreciated the inverse square law in the solar system. Newton
acknowledged Wren, Hooke and Halley in this connection in the Scholium to Proposition 4 in Book 1.
Newton also acknowledged to Halley that his correspondence with Hooke in 1679-80 had reawakened
his dormant interest in astronomical matters, but that did not mean, according to Newton, that Hooke
had told Newton anything new or original: "yet am I not beholden to him for any light into that business
but only for the diversion he gave me from my other studies to think on these things & for his
dogmaticalness in writing as if he had found the motion in the Ellipsis, which inclined me to try it ..."

Modern priority controversy

Since the time of Newton and Hooke, scholarly discussion has also touched on the question of whether
Hooke's 1679 mention of 'compounding the motions' provided Newton with something new and
valuable, even though that was not a claim actually voiced by Hooke at the time. As described above,

Newton's manuscripts of the 1660s do show him actually combining tangential motion with the effects of
radially directed force or endeavour, for example in his derivation of the inverse square relation for the
circular case. They also show Newton clearly expressing the concept of linear inertia, for which he was
indebted to Descartes' work, published in 1644 (as Hooke probably was). These matters do not appear to
have been learned by Newton from Hooke.

Nevertheless, a number of authors have had more to say about what Newton gained from Hooke and
some aspects remain controversial. The fact that most of Hooke's private papers had been destroyed or
have disappeared does not help to establish the truth.

Newton's role in relation to the inverse square law was not as it has sometimes been represented. He did
not claim to think it up as a bare idea. What Newton did was to show how the inverse-square law of
attraction had many necessary mathematical connections with observable features of the motions of
bodies in the solar system; and that they were related in such a way that the observational evidence and
the mathematical demonstrations, taken together, gave reason to believe that the inverse square law
was not just approximately true but exactly true (to the accuracy achievable in Newton's time and for
about two centuries afterwards and with some loose ends of points that could not yet be certainly
examined, where the implications of the theory had not yet been adequately identified or calculated).

About thirty years after Newton's death in 1727, Alexis Clairaut, a mathematical astronomer eminent in
his own right in the field of gravitational studies, wrote after reviewing what Hooke published, that "One
must not think that this idea ... of Hooke diminishes Newton's glory"; and that "the example of Hooke"
serves "to show what a distance there is between a truth that is glimpsed and a truth that is
demonstrated"

Modern form
In modern language, the law states the following:

Every point mass attracts every single other point mass by a force
pointing along the line intersecting both points. The force is
proportional to the product of the two masses and inversely
proportional to the square of the distance between them:[2]

    F = G × m1 × m2 / r²

where:

F is the force between the masses;
G is the gravitational constant (6.674 × 10⁻¹¹ N·(m/kg)²);
m1 is the first mass;
m2 is the second mass;
r is the distance between the centers of the masses.
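
As a quick illustration of the scalar form above, the following minimal Python sketch (ours, purely
illustrative; the Earth and Moon figures are assumed round numbers, not values from the text) evaluates
the force between two point masses:

    # Minimal sketch of Newton's law of universal gravitation (scalar form).
    G = 6.674e-11  # gravitational constant, N m^2 kg^-2

    def gravitational_force(m1, m2, r):
        """Magnitude of the attractive force between two point masses, in newtons."""
        return G * m1 * m2 / r**2

    # Assumed, approximate values for the Earth and the Moon.
    m_earth = 5.97e24      # kg
    m_moon = 7.35e22       # kg
    r_earth_moon = 3.84e8  # m, mean centre-to-centre distance

    print(gravitational_force(m_earth, m_moon, r_earth_moon))  # roughly 2e20 N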

Error plot showing experimental values for big G (figure). (Image source: Stephan Schlamminger)

Assuming SI units, F is measured in newtons (N), m1 and m2 in kilograms (kg), r in meters (m), and the
constant G is approximately equal to 6.674 × 10⁻¹¹ N m² kg⁻².[29] The value of the constant G was first
accurately determined from the results of the Cavendish experiment conducted by the British scientist
Henry Cavendish in 1798, although Cavendish did not himself calculate a numerical value for G.[3] This
experiment was also the first test of Newton's theory of gravitation between masses in the laboratory. It
took place 111 years after the publication of Newton's Principia and 71 years after Newton's death, so
none of Newton's calculations could use the value of G; instead he could only calculate a force relative to
another force.

Bodies with spatial extent

Gravitational field strength within the Earth (figure). View of the inner parts of the Earth: continental crust,
oceanic crust, upper mantle, lower mantle, outer core and inner core. A: crust-mantle boundary (Mohorovičić
discontinuity); B: core-mantle boundary (Gutenberg discontinuity); C: outer-inner core boundary (Lehmann
discontinuity). (Image source: Dake; derivative work: KronicTOOL)

Gravity field near the surface of the Earth (figure): an object is shown accelerating toward the surface. The
curvature of the Earth is negligible at this scale, and the gravity force lines can be approximated as being
parallel and pointing straight down to the center of the Earth. (Image source: Lookang)

If the bodies in question have spatial extent (rather than being theoretical point masses), then the
gravitational force between them is calculated by summing the contributions of the notional point
masses which constitute the bodies. In the limit, as the component point masses become "infinitely
small", this entails integrating the force (in vector form, see below) over the extents of the two bodies.

In this way, it can be shown that an object with a spherically-symmetric distribution of mass exerts the
same gravitational attraction on external bodies as if all the object's mass were concentrated at a point
at its centre.[2] (This is not generally true for non-spherically-symmetrical bodies.)

For points inside a spherically-symmetric distribution of matter, Newton's Shell theorem can be used to
find the gravitational force. The theorem tells us how different parts of the mass distribution affect the
gravitational force measured at a point located a distance r0 from the center of the mass distribution:[30]

The portion of the mass that is located at radii r < r0 causes the same force at r0 as if all of the mass
enclosed within a sphere of radius r0 was concentrated at the center of the mass distribution (as noted
above).

The portion of the mass that is located at radii r > r0 exerts no net gravitational force at the distance r0
from the center. That is, the individual gravitational forces exerted by the elements of the sphere out
there, on the point at r0, cancel each other out.

As a consequence, for example, within a shell of uniform thickness and density there is no net
gravitational acceleration anywhere within the hollow sphere.

Furthermore, inside a uniform sphere the gravity increases linearly with the distance from the center; the
increase due to the additional mass is 1.5 times the decrease due to the larger distance from the center.
Thus, if a spherically symmetric body has a uniform core and a uniform mantle with a density that is less
than 2/3 of that of the core, then the gravity initially decreases outwardly beyond the boundary, and if
the sphere is large enough, further outward the gravity increases again, and eventually it exceeds the
gravity at the core/mantle boundary. The gravity of the Earth may be highest at the core/mantle
boundary.
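
The behaviour just described can be seen numerically with a short sketch. The two-layer model below is
ours and uses rough, assumed Earth-like radii and densities rather than figures from the text; it simply
applies the shell theorem at each radius:

    # Hedged sketch: gravity inside a planet with a dense uniform core and a lighter uniform mantle.
    import math

    G = 6.674e-11        # N m^2 kg^-2
    R_CORE = 3.48e6      # m, assumed core radius
    R_SURFACE = 6.37e6   # m, assumed surface radius
    RHO_CORE = 11.0e3    # kg/m^3, assumed mean core density
    RHO_MANTLE = 4.5e3   # kg/m^3, assumed mean mantle density (less than 2/3 of the core density)

    def enclosed_mass(r):
        """Mass inside radius r; by the shell theorem only this mass matters."""
        if r <= R_CORE:
            return 4 / 3 * math.pi * RHO_CORE * r**3
        return 4 / 3 * math.pi * (RHO_CORE * R_CORE**3 + RHO_MANTLE * (r**3 - R_CORE**3))

    def g(r):
        """Gravitational acceleration at radius r, in m/s^2."""
        return G * enclosed_mass(r) / r**2

    print(g(R_CORE), g(R_SURFACE))  # about 10.7 versus 9.9, so gravity peaks near the core/mantle boundary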

Vector form

Gravity field surrounding Earth from a macroscopic perspective.

Newton's law of universal gravitation can be written as a vector equation to account for the direction of
the gravitational force as well as its magnitude. In this formula, quantities in bold represent vectors.

    F21 = −G × m1 × m2 / |r12|² × r̂12

where

F21 is the force applied on object 2 exerted by object 1,

G is the gravitational constant,

m1 and m2 are respectively the masses of objects 1 and 2,

|r12| = |r2 − r1| is the distance between objects 1 and 2, and

r̂12 = (r2 − r1) / |r2 − r1| is the unit vector from object 1 to 2.

It can be seen that the vector form of the equation is the same as the scalar form given earlier, except
that F is now a vector quantity, and the right hand side is multiplied by the appropriate unit vector. Also,
it can be seen that F12 = −F21.
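
A minimal Python sketch of the vector form (our own illustrative function, not taken from the text),
returning the force on object 2 due to object 1:

    # Hedged sketch of the vector form of Newton's law of universal gravitation.
    import numpy as np

    G = 6.674e-11  # N m^2 kg^-2

    def gravity_on_2_from_1(m1, m2, r1, r2):
        """Force vector on object 2 exerted by object 1, in newtons."""
        r12 = np.asarray(r2, dtype=float) - np.asarray(r1, dtype=float)
        distance = np.linalg.norm(r12)
        unit = r12 / distance
        return -G * m1 * m2 / distance**2 * unit  # attractive: points from object 2 back toward object 1

    # Assumed example: two 1000 kg masses placed 10 m apart along the x-axis.
    print(gravity_on_2_from_1(1000.0, 1000.0, [0, 0, 0], [10, 0, 0]))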

Gravitational field

The gravitational field is a vector field that describes the gravitational force which would be applied on
an object in any given point in space, per unit mass. It is actually equal to the gravitational acceleration at
that point.

It is a generalisation of the vector form, which becomes particularly useful if more than 2 objects are
involved (such as a rocket between the Earth and the Moon). For 2 objects (e.g. object 2 is a rocket,
object 1 the Earth), we simply write r instead of r12 and m instead of m2 and define the gravitational field
g(r) as:

    g(r) = −G × m1 / |r|² × r̂

so that we can write:

    F(r) = m g(r)

This formulation is dependent on the objects causing the field. The field has units of acceleration; in SI,
this is m/s2.

Gravitational fields are also conservative; that is, the work done by gravity from one position to another
is path-independent. This has the consequence that there exists a gravitational potential field V(r) such
that

If m1 is a point mass or the mass of a sphere with homogeneous mass distribution, the force field g(r)
outside the sphere is isotropic, i.e., depends only on the distance r from the center of the sphere. In that
case

    g(r) = −G × m1 / r² × r̂

The following results describe the gravitational field on, inside and outside of spherically symmetric masses.

As per Gauss's law, the field of a symmetric body can be found from the mathematical equation:

    ∮∂V g · dA = −4πG M_enc

where ∂V is a closed surface and M_enc is the mass enclosed by the surface.

Hence, for a hollow sphere of radius R and total mass M,

    g(r) = 0                       for r < R
    g(r) = −G M / r² × r̂           for r ≥ R

For a uniform solid sphere of radius R and total mass M,

    g(r) = −G M r / R³ × r̂         for r < R
    g(r) = −G M / r² × r̂           for r ≥ R
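
These piecewise results translate directly into a short sketch (our own illustrative functions; the radius and
mass in the example are assumed, Earth-like values):

    # Hedged sketch: field magnitude g(r) for a hollow shell and for a uniform solid sphere.
    G = 6.674e-11  # N m^2 kg^-2

    def g_hollow_sphere(r, R, M):
        """Field at radius r for a thin hollow sphere of radius R and mass M: zero inside, point-mass-like outside."""
        return 0.0 if r < R else G * M / r**2

    def g_solid_sphere(r, R, M):
        """Field at radius r for a uniform solid sphere: linear in r inside, point-mass-like outside."""
        return G * M * r / R**3 if r < R else G * M / r**2

    R, M = 6.37e6, 5.97e24  # assumed Earth-like radius (m) and mass (kg)
    print(g_solid_sphere(R / 2, R, M), g_solid_sphere(R, R, M))  # the inner value is half the surface value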

Problematic aspects

Newton's description of gravity is sufficiently accurate for many practical purposes and is therefore
widely used. Deviations from it are small when the dimensionless quantities Φ/c² and (v/c)² are both
much less than one, where Φ is the gravitational potential, v is the velocity of the objects being studied,
and c is the speed of light. For example, Newtonian gravity provides an accurate description of the
Earth/Sun system, since

    Φ/c² = (G M_sun / r_orbit) / c² ≈ 10⁻⁸    and    (v_Earth/c)² = (2π r_orbit / (1 yr × c))² ≈ 10⁻⁸

where r_orbit is the radius of the Earth's orbit around the Sun.
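
As a quick numerical check of those two ratios (a sketch using standard, assumed constants for the Sun
and the Earth's orbit):

    # Hedged sketch: the dimensionless parameters that justify Newtonian gravity for the Earth/Sun system.
    import math

    c = 2.998e8         # m/s, speed of light
    GM_sun = 1.327e20   # m^3/s^2, standard gravitational parameter of the Sun
    r_orbit = 1.496e11  # m, radius of the Earth's orbit
    year = 3.156e7      # s

    phi_over_c2 = (GM_sun / r_orbit) / c**2
    v_over_c_squared = (2 * math.pi * r_orbit / (year * c))**2

    print(phi_over_c2, v_over_c_squared)  # both come out near 1e-8, far below 1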

In situations where either dimensionless parameter is large, then general relativity must be used to
describe the system. General relativity reduces to Newtonian gravity in the limit of small potential and
low velocities, so Newton's law of gravitation is often said to be the low-gravity limit of general relativity.

Theoretical concerns with Newton's expression

There is no immediate prospect of identifying the mediator of gravity. Attempts by physicists to
identify the relationship between the gravitational force and other known fundamental forces are
not yet resolved, although considerable headway has been made over the last 50 years (See:
Theory of everything and Standard Model). Newton himself felt that the concept of an
inexplicable action at a distance was unsatisfactory (see "Newton's reservations" below), but that
there was nothing more that he could do at the time.

Newton's theory of gravitation requires that the gravitational force be transmitted
instantaneously. Given the classical assumptions of the nature of space and time before the
development of General Relativity, a significant propagation delay in gravity leads to unstable
planetary and stellar orbits.

Observations conflicting with Newton's formula

Newton's Theory does not fully explain the precession of the perihelion of the orbits of the
planets, especially of planet Mercury, which was detected long after the life of Newton.[32] There
is a 43 arcsecond per century discrepancy between the Newtonian calculation, which arises only
from the gravitational attractions from the other planets, and the observed precession, made
with advanced telescopes during the 19th century.
The predicted angular deflection of light rays by gravity that is calculated by using Newton's
Theory is only one-half of the deflection that is actually observed by astronomers. Calculations
using General Relativity are in much closer agreement with the astronomical observations.
In spiral galaxies, the orbiting of stars around their centers seems to strongly disobey Newton's
law of universal gravitation. Astrophysicists, however, explain this spectacular phenomenon in
the framework of Newton's laws, with the presence of large amounts of Dark matter.

Newton's reservations

While Newton was able to formulate his law of gravity in his monumental work, he was deeply
uncomfortable with the notion of "action at a distance" that his equations implied. In 1692, in his third
letter to Bentley, he wrote: "That one body may act upon another at a distance through a vacuum
without the mediation of anything else, by and through which their action and force may be conveyed
from one another, is to me so great an absurdity that, I believe, no man who has in philosophic matters a
competent faculty of thinking could ever fall into it."

He never, in his words, "assigned the cause of this power". In all other cases, he used the phenomenon of
motion to explain the origin of various forces acting on bodies, but in the case of gravity, he was unable
to experimentally identify the motion that produces the force of gravity (although he invented two
mechanical hypotheses in 1675 and 1717). Moreover, he refused to even offer a hypothesis as to the
cause of this force on grounds that to do so was contrary to sound science. He lamented that
"philosophers have hitherto attempted the search of nature in vain" for the source of the gravitational
force, as he was convinced "by many reasons" that there were "causes hitherto unknown" that were
fundamental to all the "phenomena of nature". These fundamental phenomena are still under
investigation and, though hypotheses abound, the definitive answer has yet to be found. And in

Newton's 1713 General Scholium in the second edition of Principia: "I have not yet been able to discover
the cause of these properties of gravity from phenomena and I feign no hypotheses.... It is enough that
gravity does really exist and acts according to the laws I have explained, and that it abundantly serves to
account for all the motions of celestial bodies."

Einstein's solution


These objections were explained by Einstein's theory of general relativity, in which gravitation is an
attribute of curved spacetime instead of being due to a force propagated between bodies. In Einstein's
theory, energy and momentum distort spacetime in their vicinity, and other particles move in trajectories
determined by the geometry of spacetime. This allowed a description of the motions of light and mass

that was consistent with all available observations. In general relativity, the gravitational force is a
fictitious force due to the curvature of spacetime, because the gravitational acceleration of a body in free
fall is due to its world line being a geodesic of spacetime.

Extensions

Newton was the first to consider, in his Principia, an extended expression of his law of gravity, including an
inverse-cube term, in an attempt to explain the Moon's apsidal motion. Other extensions were proposed by
Laplace (around 1790) and Decombes (1913).

In recent years, quests for non-inverse square terms in the law of gravity have been carried out by
neutron interferometry.

Solutions of Newton's law of universal gravitation

n-body problem

The n-body problem is an ancient, classical problem of predicting the individual motions of a group of
celestial objects interacting with each other gravitationally. Solving this problem from the time of the
Greeks and on has been motivated by the desire to understand the motions of the Sun, planets and
the visible stars. In the 20th century, understanding the dynamics of globular cluster star systems became
an important n-body problem too. The n-body problem in general relativity is considerably more difficult
to solve.

The classical physical problem can be informally stated as: given the quasi-steady orbital properties
(instantaneous position, velocity and time) of a group of celestial bodies, predict their interactive forces;
and consequently, predict their true orbital motions for all future times.

The two-body problem has been completely solved, as has the Restricted 3-Body Problem.

FERMI PROBLEM

In physics or engineering education, a Fermi problem, Fermi quiz, Fermi question, Fermi estimate,
or order estimation is an estimation problem designed to teach dimensional analysis and approximation;
such a problem is usually a back-of-the-envelope calculation. The estimation technique is named after
physicist Enrico Fermi as he was known for his ability to make good approximate calculations with little
or no actual data. Fermi problems typically involve making justified guesses about quantities and
their variance or lower and upper bounds.

Historical background

An example is Enrico Fermi's estimate of the strength of the atomic bomb that detonated at the Trinity test,
based on the distance traveled by pieces of paper he dropped from his hand during the blast. [1] Fermi's
estimate of 10 kilotons of TNT was remarkably close to the now-accepted value of around 20 kilotons.

Examples
An example problem, of a type generally attributed to Fermi,[2] is "How many piano tuners are there
in Chicago?" A typical solution to this problem involves multiplying a series of estimates that yield the
correct answer if the estimates are correct. For example, we might make the following assumptions:

1. There are approximately 9,000,000 people living in Chicago.


2. On average, there are two persons in each household in Chicago.
3. Roughly one household in twenty has a piano that is tuned regularly.
4. Pianos that are tuned regularly are tuned on average about once per year.
5. It takes a piano tuner about two hours to tune a piano, including travel time.
6. Each piano tuner works eight hours in a day, five days in a week, and 50 weeks in a year.
From these assumptions, we can compute that the number of piano tunings in a single year in Chicago is
(9,000,000 persons in Chicago) ÷ (2 persons/household) × (1 piano/20 households) × (1 piano
tuning per piano per year) = 225,000 piano tunings per year in Chicago.
We can similarly calculate that the average piano tuner performs
(50 weeks/year) × (5 days/week) × (8 hours/day) ÷ (2 hours to tune a piano) = 1000 piano tunings
per year.
Dividing gives
(225,000 piano tunings per year in Chicago) ÷ (1000 piano tunings per year per piano tuner) = 225
piano tuners in Chicago.
The actual number of piano tuners in Chicago is about 290.[3]
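
The same chain of assumptions can be written as a tiny script (a sketch of the worked example above;
the inputs are the listed assumptions, not measured data):

    # Hedged sketch: the piano-tuner Fermi estimate, step by step.
    population = 9_000_000           # assumed people in Chicago
    persons_per_household = 2
    pianos_per_household = 1 / 20    # one household in twenty has a regularly tuned piano
    tunings_per_piano_per_year = 1
    hours_per_tuning = 2
    hours_per_year = 8 * 5 * 50      # 8 h/day, 5 days/week, 50 weeks/year

    tunings_needed = population / persons_per_household * pianos_per_household * tunings_per_piano_per_year
    tunings_per_tuner = hours_per_year / hours_per_tuning

    print(tunings_needed / tunings_per_tuner)  # 225.0, versus roughly 290 actual tuners
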
A famous example of a Fermi-problem-like estimate is the Drake equation, which seeks to estimate the
number of intelligent civilizations in the galaxy. The basic question of why, if there were a significant
number of such civilizations, ours has never encountered any others is called the Fermi paradox.

Advantages and scope


Scientists often look for Fermi estimates of the answer to a problem before turning to more sophisticated
methods to calculate a precise answer. This provides a useful check on the results. While the estimate is
almost certainly incorrect, it is also a simple calculation that allows for easy error checking, and to find
faulty assumptions if the figure produced is far beyond what we might reasonably expect. By contrast,
precise calculations can be extremely complex but with the expectation that the answer they produce is
correct. The far larger number of factors and operations involved can obscure a very significant error,
either in mathematical process or in the assumptions the equation is based on, but the result may still be

assumed to be right because it has been derived from a precise formula that is expected to yield good
results. Without a reasonable frame of reference to work from it is seldom clear if a result is acceptably
precise or is many orders of magnitude (tens or hundreds of times) too big or too small. The Fermi
estimation gives a quick, simple way to obtain this frame of reference for what might reasonably be
expected to be the answer, giving context to the results.
As long as the initial assumptions in the estimate are reasonable quantities, the result obtained will give an
answer within the same scale as the correct result, and if not gives a base for understanding why this is
the case. For example, if the estimate tells you there should be a hundred or so tuners but the precise
answer tells you there are many thousands then you know you need to find out why there is this
divergence from the expected result. First looking for errors, then for factors the estimation didn't take
account of - Does Chicago have a number of music schools or other places with a disproportionately high
ratio of pianos to people? Whether close or very far from the observed results, the context the estimation
provides gives useful information both about the process of calculation and the assumptions that have
been used to look at problems.
Fermi estimates are also useful in approaching problems where the optimal choice of calculation method
depends on the expected size of the answer. For instance, a Fermi estimate might indicate whether the
internal stresses of a structure are low enough that it can be accurately described by linear elasticity; or if
the estimate already bears significant relationship in scale relative to some other value, for example, if a
structure will be over-engineered to withstand loads several times greater than the estimate.
Although Fermi calculations are often not accurate, as there may be many problems with their
assumptions, this sort of analysis does tell us what to look for to get a better answer. For the above
example, we might try to find a better estimate of the number of pianos tuned by a piano tuner in a typical
day, or look up an accurate number for the population of Chicago. It also gives us a rough estimate that
may be good enough for some purposes: if we want to start a store in Chicago that sells piano tuning
equipment, and we calculate that we need 10,000 potential customers to stay in business, we can
reasonably assume that the above estimate is far enough below 10,000 that we should consider a different
business plan (and, with a little more work, we could compute a rough upper bound on the number of
piano tuners by considering the most extreme reasonable values that could appear in each of our
assumptions).

Explanation
Fermi estimates generally work because the estimations of the individual terms are often close to correct,
and overestimates and underestimates help cancel each other out. That is, if there is no consistent bias, a
Fermi calculation that involves the multiplication of several estimated factors (such as the number of piano
tuners in Chicago) will probably be more accurate than might be first supposed.
In detail, multiplying estimates corresponds to adding their logarithms; thus one obtains a sort of Wiener
process or random walk on the logarithmic scale, which diffuses as √n (in the number of terms n). In discrete
terms, the number of overestimates minus underestimates will have a binomial distribution. In continuous
terms, if one makes a Fermi estimate of n steps, with standard deviation σ units on the log scale from the
actual value, then the overall estimate will have standard deviation σ√n, since the standard deviation of a
sum scales as √n in the number of summands.
For instance, if one makes a 9-step Fermi estimate, at each step overestimating or underestimating the
correct number by a factor of 2 (i.e. with a standard deviation of a factor of 2), then after 9 steps the standard
error will have grown by a logarithmic factor of √9 = 3, so 2³ = 8. Thus one will expect to be within 1/8 to 8 times
the correct value, i.e. within an order of magnitude, and much less than the worst case of erring by a factor of
2⁹ = 512 (about 2.71 orders of magnitude). If one has a shorter chain or estimates more accurately, the
overall estimate will be correspondingly better.
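
A hedged simulation of this error-accumulation argument (the step count and factor-of-2 error model are
the ones used in the example above; the trial count is arbitrary):

    # Hedged sketch: how multiplicative errors accumulate in an n-step Fermi estimate.
    import math
    import random

    def typical_error_factor(n_steps=9, step_factor=2.0, trials=100_000):
        """Median multiplicative error after n_steps, each off by step_factor up or down at random."""
        errors = []
        for _ in range(trials):
            log_error = sum(random.choice((-1, 1)) * math.log(step_factor) for _ in range(n_steps))
            errors.append(math.exp(abs(log_error)))
        errors.sort()
        return errors[trials // 2]

    print(typical_error_factor())  # typically a single-digit factor, far below the worst case of 2**9 = 512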

DRAKE EQUATION.

Dr. Frank Drake


The Drake equation is a probabilistic argument used to arrive at an estimate of the number of active,
communicative extraterrestrial civilizations in the Milky Way galaxy.[1][2] The number of such civilizations, N,
is assumed to be equal to the mathematical product of

i. R, the average rate of star formation in our galaxy,


ii. fp, the fraction of formed stars that have planets,
iii. ne for stars that have planets, the average number of planets that can potentially support life,
iv. fl, the fraction of those planets that actually develop life,
v. fi, the fraction of planets bearing life on which intelligent, civilized life, has developed,
vi. fc, the fraction of these civilizations that have developed communications, i.e., technologies that
release detectable signs into space, and
vii. L, the length of time over which such civilizations release detectable signals,
for a combined expression of:

    N = R × fp × ne × fl × fi × fc × L

The equation was written in 1961 by Frank Drake, not for purposes of quantifying the number of
civilizations, but as a way to stimulate scientific dialogue at the first scientific meeting on the search for
intelligent extraterrestrial life (SETI). [3][4] The equation summarizes the main concepts which scientists
must contemplate when considering the question of other radio-communicative life.[3]
Criticism related to the Drake equation focuses not on the equation itself, but on the fact that the
estimated values for several of its factors are highly conjectural, the combined effect being that the
uncertainty associated with any derived value is so large that the equation cannot be used to draw
firm conclusions.

History

In September 1959, physicists Giuseppe Cocconi and Philip Morrison published an article in the
journal Nature with the provocative title "Searching for Interstellar Communications."[5][6] Cocconi and
Morrison argued that radio telescopes had become sensitive enough to pick up transmissions that
might be broadcast into space by civilizations orbiting other stars. Such messages, they suggested,

might be transmitted at a wavelength of 21 cm (1,420.4 MHz). This is the wavelength of radio
emission by neutral hydrogen, the most common element in the universe, and they reasoned that
other intelligences might see this as a logical landmark in the radio spectrum.
Two months later, Harvard University astronomy professor Harlow Shapley speculated on the number
of inhabited planets in the universe, saying "The universe has 10 million, million, million suns (10
followed by 18 zeros) similar to our own. One in a million has planets around it. Only one in a million
million has the right combination of chemicals, temperature, water, days and nights to support
planetary life as we know it. This calculation arrives at the estimated figure of 100 million worlds where
life has been forged by evolution."[7]
Seven months after Cocconi and Morrison published their article, Drake made the first systematic
search for signals from extraterrestrial intelligent beings. Using the 25 m dish of the National Radio
Astronomy Observatory in Green Bank, West Virginia, Drake monitored two nearby Sun-like
stars: Epsilon Eridani and Tau Ceti. In this project, which he called Project Ozma, he slowly scanned
frequencies close to the 21 cm wavelength for six hours a day from April to July 1960.[6] The project
was well designed, inexpensive, and simple by today's standards. It was also unsuccessful.
Soon thereafter, Drake hosted a "search for extraterrestrial intelligence" meeting on detecting their
radio signals. The meeting was held at the Green Bank facility in 1961. The equation that bears
Drake's name arose out of his preparations for the meeting.[8]
As I planned the meeting, I realized a few day[s] ahead of time we needed an agenda. And so I wrote
down all the things you needed to know to predict how hard it's going to be to detect extraterrestrial
life. And looking at them it became pretty evident that if you multiplied all these together, you got a
number, N, which is the number of detectable civilizations in our galaxy. This was aimed at the radio
search, and not to search for primordial or primitive life forms. Frank Drake.
The ten attendees were conference organizer J. Peter Pearman, Frank Drake, Philip Morrison,
businessman and radio amateur Dana Atchley, chemist Melvin Calvin, astronomer Su-Shu Huang,
neuroscientist John C. Lilly, inventor Barney Oliver, astronomer Carl Sagan and radio-astronomer Otto
Struve.[9] These participants dubbed themselves "The Order of the Dolphin" (because of Lilly's work
on dolphin communication), and commemorated their first meeting with a plaque at the observatory
hall.[10][11]

Equation
The Drake equation is:

    N = R × fp × ne × fl × fi × fc × L

where:
N = the number of civilizations in our galaxy with which communication might be possible (i.e.
which are on our current past light cone);
and
R = the average rate of star formation in our galaxy
fp = the fraction of those stars that have planets
ne = the average number of planets that can potentially support life per star that has planets
fl = the fraction of planets that could support life that actually develop life at some point
fi = the fraction of planets with life that actually go on to develop intelligent life (civilizations)
fc = the fraction of civilizations that develop a technology that releases detectable signs of their
existence into space
L = the length of time for which such civilizations release detectable signals into space [12][13]
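
A minimal sketch of the equation as a function (ours, purely illustrative; the example call plugs in
Drake-style guesses of the kind discussed below, not definitive values):

    # Hedged sketch of the Drake equation as a plain product of its seven factors.
    def drake(R, fp, ne, fl, fi, fc, L):
        """Number of detectable civilizations N for one set of parameter guesses."""
        return R * fp * ne * fl * fi * fc * L

    # Assumed example: 1 star/yr, half with planets, 2 habitable planets each, life and
    # intelligence certain, 10% communicative, each lasting 10,000 years.
    print(drake(R=1, fp=0.5, ne=2, fl=1, fi=1, fc=0.1, L=10_000))  # 1000.0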

Usefulness

The Allen Telescope Array for SETI
The Drake equation amounts to a summary of the factors affecting the likelihood that we might detect
radio-communication from intelligent extraterrestrial life.[1][12][14] The last four parameters, fl, fi, fc, and L, are
not known and are very hard to estimate, with values ranging over many orders of magnitude (see
criticism). Therefore, the usefulness of the Drake equation is not in the solving, but rather in the
contemplation of all the various concepts which scientists must incorporate when considering the
question of life elsewhere,[1][3] and it gives the question of life elsewhere a basis for scientific analysis. The
Drake equation is a statement that stimulates intellectual curiosity about the universe around us, for
helping us to understand that life as we know it is the end product of a natural, cosmic evolution, and for
helping us realize how much we are a part of that universe.[13] What the equation and the search for life
have done is focus science on some of the other questions about life in the universe, specifically
abiogenesis, the development of multi-cellular life and the development of intelligence itself.[15]
Within the limits of our existing technology, any practical search for distant intelligent life must necessarily
be a search for some manifestation of a distant technology. After about 50 years, the Drake equation is still
of seminal importance because it is a 'road map' of what we need to learn in order to solve this
fundamental existential question.[1] It also formed the backbone of astrobiology as a science; although
speculation is entertained to give context, astrobiology concerns itself primarily with hypotheses that fit
firmly into existing scientific theories. Some 50 years of SETI have failed to find anything, even though
radio telescopes, receiver techniques, and computational abilities have improved enormously since the
early 1960s, but it has been discovered, at least, that our galaxy is not teeming with very powerful alien
transmitters continuously broadcasting near the 21 cm hydrogen frequency. No one could say this in
1961.[16]

Modifications
As many observers have pointed out, the Drake equation is a very simple model that does not include
potentially relevant parameters,[17] and many changes and modifications to the equation have been
proposed. One line of modification, for example, attempts to account for the uncertainty inherent in many
of the terms.[18] Others note that the Drake equation ignores many concepts that might be relevant to the
odds of contacting other civilizations. For example, David Brin states: "The Drake equation merely speaks
of the number of sites at which ETIs spontaneously arise. The equation says nothing directly about the
contact cross-section between an ETIS and contemporary human society".[19] Because it is the contact

cross-section that is of interest to the SETI community, many additional factors and modifications of the
Drake equation have been proposed.
Colonization
It has been proposed to generalize the Drake equation to include additional effects of alien civilizations
colonizing other star systems. Each original site expands with an expansion velocity v, and establishes
additional sites that survive for a lifetime L. The result is a more complex set of 3 equations.[19]
Reappearance factor
The Drake equation may furthermore be multiplied by how many times an intelligent civilization may occur
on planets where it has happened once. Even if an intelligent civilization reaches the end of its lifetime
after, for example, 10,000 years, life may still prevail on the planet for billions of years, permitting the
next civilization to evolve. Thus, several civilizations may come and go during the lifespan of one and the
same planet. Thus, if nr is the average number of times a new civilization reappears on the same planet
where a previous civilization once has appeared and ended, then the total number of civilizations on such
a planet would be 1 + nr, which is the actual reappearance factor added to the equation.
The reappearance factor depends on what generally is the cause of civilization extinction. If it is generally by temporary
uninhabitability, for example a nuclear winter, then nr may be relatively high. On the other hand, if it is
generally by permanent uninhabitability, such as stellar evolution, then nr may be almost zero. In the case
of total life extinction, a similar factor may be applicable for fl, that is, how many times life may appear on a
planet where it has appeared once.
METI factor
Alexander Zaitsev said that to be in a communicative phase and emit dedicated messages are not the
same. For example, humans, although being in a communicative phase, are not a communicative
civilization; we do not practise such activities as the purposeful and regular transmission of interstellar
messages. For this reason, he suggested introducing the METI factor (messaging to extraterrestrial
intelligence) to the classical Drake equation.[20] He defined the factor as "the fraction of communicative
civilizations with clear and non-paranoid planetary consciousness", or alternatively expressed, the fraction
of communicative civilizations that actually engage in deliberate interstellar transmission.
The METI factor is somewhat misleading since active, purposeful transmission of messages by a
civilization is not required for them to receive a broadcast sent by another that is seeking first contact. It is
merely required they have capable and compatible receiver systems operational; however, this is a
variable humans cannot accurately estimate.
Biogenic gases
Astronomer Sara Seager proposed a revised equation that focuses on the search for planets with
biosignature gases.[21] These gases are produced by living organisms and can accumulate in a planet's
atmosphere to levels that can be detected with remote space telescopes.[22]
The Seager equation looks like this:[22][a]

    N = N* × FQ × FHZ × FO × FL × FS

where:
N = the number of planets with detectable signs of life
N* = the number of stars observed
FQ = the fraction of stars that are quiet
FHZ = the fraction of stars with rocky planets in the habitable zone
FO = the fraction of those planets that can be observed
FL = the fraction that have life
FS = the fraction on which life produces a detectable signature gas
Seager stresses, "We're not throwing out the Drake Equation, which is really a different topic," explaining, "Since
Drake came up with the equation, we have discovered thousands of exoplanets. We as a community have had our
views revolutionized as to what could possibly be out there. And now we have a real question on our hands, one
that's not related to intelligent life: Can we detect any signs of life in any way in the very near future?"[23]
Estimates
Original estimates

There is considerable disagreement on the values of these parameters, but the 'educated guesses' used
by Drake and his colleagues in 1961 were:[24][25]

R = 1 yr⁻¹ (1 star formed per year, on the average over the life of the galaxy; this was regarded as
conservative)

fp = 0.2 to 0.5 (one fifth to one half of all stars formed will have planets)

ne = 1 to 5 (stars with planets will have between 1 and 5 planets capable of developing life)

fl = 1 (100% of these planets will develop life)

fi = 1 (100% of which will develop intelligent life)

fc = 0.1 to 0.2 (10–20% of which will be able to communicate)

L = 1000 to 100,000,000 years (which will last somewhere between 1000 and 100,000,000 years)

Inserting the above minimum numbers into the equation gives a minimum N of 20 (see: Range of results).
Inserting the maximum numbers gives a maximum of 50,000,000. Drake states that given the
uncertainties, the original meeting concluded that N ≈ L, and there were probably between 1000 and
100,000,000 civilizations in the Milky Way galaxy.

Current estimates

This section discusses and attempts to list the best current estimates for the parameters of the Drake
equation.

Rate of star creation in our galaxy, R

Latest calculations from NASA and the European Space Agency indicate that the current rate of star
formation in our galaxy is about 0.68–1.45 M☉ of material per year.[26][27] To get the number of stars per
year, this must account for the initial mass function (IMF) for stars, where the average new star mass is
about 0.5 M☉.[28] This gives a star formation rate of about 1.5–3 stars per year.
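
The conversion from a mass-formation rate to a star-formation rate is a one-line Fermi-style division
(a sketch; the bracketing values are the ones quoted above):

    # Hedged sketch: converting a stellar mass-formation rate into a star count per year.
    low_rate, high_rate = 0.68, 1.45  # solar masses of new stars per year (quoted range)
    average_star_mass = 0.5           # solar masses, from the initial mass function

    print(low_rate / average_star_mass, high_rate / average_star_mass)  # about 1.4 to 2.9 stars per year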

Fraction of those stars that have planets, fp

Recent analysis of microlensing surveys has found that fp may approach 1; that is, stars are orbited by
planets as a rule, rather than the exception; and that there are one or more bound planets per Milky Way
star.[29][30]

Average number of planets per star having planets that might support life, ne

In November 2013, astronomers reported, based on Kepler space mission data, that there could be as
many as 40 billion Earth-sized planets orbiting in the habitable zones of sun-like stars and red dwarf
stars within the Milky Way Galaxy.[31][32] 11 billion of these estimated planets may be orbiting sun-like

stars.[33] Since there are about 100 billion stars in the galaxy, this implies fp ne is roughly 0.4. The nearest
planet in the habitable zone may be as little as 12 light-years away, according to the scientists.[31][32]

The consensus at the Green Bank meeting was that ne had a minimum value between 3 and 5. Dutch
astronomer Govert Schilling has opined that this is optimistic.[34] Even if planets are in the habitable zone,
the number of planets with the right proportion of elements is difficult to estimate.[35] Brad Gibson,
Yeshe Fenner, and Charley Lineweaver determined that about 10% of star systems in the Milky Way
galaxy are hospitable to life, by having heavy elements, being far from supernovae and being stable for a
sufficient time.

The discovery of numerous gas giants in close orbit with their stars has introduced doubt that life-
supporting planets commonly survive the formation of their stellar systems. So-called hot Jupiters may
migrate from distant orbits to near orbits, in the process disrupting the orbits of habitable planets.

On the other hand, the variety of star systems that might have habitable zones is not just limited to solar-
type stars and Earth-sized planets. It is now estimated that even tidally locked planets close to red
dwarf stars might have habitable zones,[37]although the flaring behavior of these stars might argue
against this.[38] The possibility of life on moons of gas giants (such as Jupiter's moon Europa, or Saturn's
moon Titan) adds further uncertainty to this figure.

The authors of the rare Earth hypothesis propose a number of additional constraints on habitability for
planets, including being in galactic zones with suitably low radiation, high star metallicity, and low
enough density to avoid excessive asteroid bombardment. They also propose that it is necessary to have
a planetary system with large gas giants which provide bombardment protection without a hot Jupiter;
and a planet with plate tectonics, a large moon that creates tidal pools, and moderate axial tilt to
generate seasonal variation.

Fraction of the above that actually go on to develop life, fl

Geological evidence from the Earth suggests that fl may be high; life on Earth appears to have begun
around the same time as favorable conditions arose, suggesting that abiogenesis may be relatively
common once conditions are right. However, this evidence only looks at the Earth (a single model
planet), and contains anthropic bias, as the planet of study was not chosen randomly, but by the living
organisms that already inhabit it (ourselves). From a classical hypothesis testing standpoint, there are
zero degrees of freedom, permitting no valid estimates to be made. If life were to be found on Mars that
developed independently from life on Earth it would imply a value for fl close to 1. While this would raise
the degrees of freedom from zero to one, there would remain a great deal of uncertainty on any
estimate due to the small sample size, and the chance they are not really independent.

Countering this argument is that there is no evidence for abiogenesis occurring more than once on the
Earth that is, all terrestrial life stems from a common origin. If abiogenesis were more common it
would be speculated to have occurred more than once on the Earth. Scientists have searched for this by
looking for bacteria that are unrelated to other life on Earth, but none have been found yet.[41] It is also
possible that life arose more than once, but that other branches were out-competed, or died in mass

extinctions, or were lost in other ways. Biochemists Francis Crick and Leslie Orgel laid special emphasis on
this uncertainty: "At the moment we have no means at all of knowing" whether we are "likely to be alone
in the galaxy (Universe)" or whether "the galaxy may be pullulating with life of many different
forms."[42] As an alternative to abiogenesis on Earth, they proposed the hypothesis of directed
panspermia, which states that Earth life began with "microorganisms sent here deliberately by a
technological society on another planet, by means of a special long-range unmanned spaceship".

Fraction of the above that develops intelligent life, fi

This value remains particularly controversial. Those who favor a low value, such as the biologist Ernst
Mayr, point out that of the billions of species that have existed on Earth, only one has become intelligent
and from this, infer a tiny value for fi.[43] Likewise, proponents of the Rare Earth hypothesis, notwithstanding their
low value for ne above, also think a low value for fi dominates the analysis.[44] Those who favor higher values
note the generally increasing complexity of life over time, concluding that the appearance of intelligence
is almost inevitable,[45][46] implying an fi approaching 1. Skeptics point out that the large spread of values
in this factor and others make all estimates unreliable.

In addition, while it appears that life developed soon after the formation of Earth, the Cambrian
explosion, in which a large variety of multicellular life forms came into being, occurred a considerable
amount of time after the formation of Earth, which suggests the possibility that special conditions were
necessary. Some scenarios such as the snowball Earth or research into the extinction events have raised
the possibility that life on Earth is relatively fragile. Research on any past life on Mars is relevant since a
discovery that life did form on Mars but ceased to exist might raise our estimate of fl but would indicate
that in half the known cases, intelligent life did not develop.

Estimates of fi have been affected by discoveries that the Solar System's orbit is circular in the galaxy, at
such a distance that it remains out of the spiral arms for tens of millions of years (evading radiation
from novae). Also, Earth's large moon may aid the evolution of life by stabilizing the planet's axis of
rotation.

Fraction of the above revealing their existence via signal release into space, fc

For deliberate communication, the one example we have (the Earth) does not do much explicit
communication, though there are some efforts covering only a tiny fraction of the stars that might look
for our presence. (See Arecibo message, for example). There is considerable speculation why an
extraterrestrial civilization might exist but choose not to communicate. However, deliberate
communication is not required, and calculations indicate that current or near-future Earth-level
technology might well be detectable to civilizations not too much more advanced than our own.[47] By
this standard, the Earth is a communicating civilization.

Another question is what percentage of civilizations in the galaxy are close enough for us to detect,
assuming that they send out signals. For example, existing Earth radio telescopes could only detect Earth
radio transmissions from roughly a light year away.

Lifetime of such a civilization wherein it communicates its signals into space, L

Michael Shermer estimated L as 420 years, based on the duration of sixty historical Earthly
civilizations.[49] Using 28 civilizations more recent than the Roman Empire, he calculates a figure of 304
years for "modern" civilizations. It could also be argued from Michael Shermer's results that the fall of
most of these civilizations was followed by later civilizations that carried on the technologies, so it is
doubtful that they are separate civilizations in the context of the Drake equation. In the expanded
version, including reappearance number, this lack of specificity in defining single civilizations does not
matter for the end result, since such a civilization turnover could be described as an increase in
the reappearance number rather than increase in L, stating that a civilization reappears in the form of the
succeeding cultures. Furthermore, since none could communicate over interstellar space, the method of
comparing with historical civilizations could be regarded as invalid.

David Grinspoon has argued that once a civilization has developed enough, it might overcome all threats
to its survival. It will then last for an indefinite period of time, making the value for L potentially billions of
years. If this is the case, then he proposes that the Milky Way galaxy may have been steadily
accumulating advanced civilizations since it formed.[50] He proposes that the last factor L be replaced
with fIC × T, where fIC is the fraction of communicating civilizations that become "immortal" (in the sense that
they simply do not die out), and T representing the length of time during which this process has been
going on. This has the advantage that T would be a relatively easy to discover number, as it would simply
be some fraction of the age of the universe.

It has also been hypothesized that once a civilization has learned of a more advanced one, its longevity
could increase because it can learn from the experiences of the other.[51]

The astronomer Carl Sagan speculated that all of the terms, except for the lifetime of a civilization, are
relatively high and the determining factor in whether there are large or small numbers of civilizations in
the universe is the civilization lifetime, or in other words, the ability of technological civilizations to avoid
self-destruction. In Sagan's case, the Drake equation was a strong motivating factor for his interest in
environmental issues and his efforts to warn against the dangers of nuclear warfare.

Range of results

As many skeptics have pointed out, the Drake equation can give a very wide range of values, depending
on the assumptions, and the values used in portions of the Drake equation are not well-
established.[34][53][54][55] In particular, the result can be N ≪ 1, meaning we are likely alone in the galaxy,
or N ≫ 1, implying there are many civilizations we might contact. One of the few points of wide
agreement is that the presence of humanity implies a probability of intelligence arising of greater than
zero.

As an example of a low estimate, combining NASA's star formation rates, the rare Earth hypothesis value
of fp ne fl = 10⁻⁵,[57] Mayr's view on intelligence arising, Drake's view of communication, and Shermer's
estimate of lifetime:

R = 1.5–3 yr⁻¹,[26] fp ne fl = 10⁻⁵,[40] fi = 10⁻⁹,[43] fc = 0.2 [Drake, above], and L = 304 years

gives:

N = 1.5 × 10⁻⁵ × 10⁻⁹ × 0.2 × 304 = 9.1 × 10⁻¹³

i.e., suggesting that we are probably alone in this galaxy, and possibly the observable universe.

On the other hand, with larger values for each of the parameters above, values of N can be derived that
are greater than 1. The following higher values have been proposed for each of the parameters:

R = 1.5–3 yr⁻¹,[26] fp = 1,[29] ne = 0.2,[58][59] fl = 0.13,[60] fi = 1,[45] fc = 0.2 [Drake, above], and L = 10⁹ years[50]

Use of these parameters gives:

N = 3 × 1 × 0.2 × 0.13 × 1 × 0.2 × 10⁹ = 15,600,000

Monte Carlo simulations of estimates of the Drake equation factors based on a stellar and planetary
model of the Milky Way have resulted in the number of civilizations varying by a factor of 100.[61]
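
The Monte Carlo approach mentioned above can be sketched very simply by sampling each uncertain
factor log-uniformly between a low and a high guess (the ranges below are illustrative assumptions of
ours, not the ones used in the cited study):

    # Hedged sketch: Monte Carlo sampling of Drake-equation factors over assumed ranges.
    import math
    import random

    # (low, high) guesses for R, fp, ne, fl, fi, fc, L -- illustrative only.
    ranges = [(1, 3), (0.2, 1), (0.2, 5), (0.1, 1), (0.001, 1), (0.01, 0.2), (300, 1e9)]

    def sample_N():
        """One draw of N with every factor sampled log-uniformly within its assumed range."""
        factors = [math.exp(random.uniform(math.log(lo), math.log(hi))) for lo, hi in ranges]
        return math.prod(factors)

    draws = sorted(sample_N() for _ in range(100_000))
    print(draws[len(draws) // 10], draws[len(draws) // 2], draws[9 * len(draws) // 10])  # 10th, 50th and 90th percentiles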

Intelligent life ever existed

The Drake equation can be modified to determine just how unlikely intelligent life must be, to give the
result that Earth has the only intelligent life that has ever arisen, either in our galaxy or the universe as a
whole. This simplifies the calculation by removing the lifetime and communication constraints. Since star
and planets counts are known, this leaves the only unknown as the odds that a habitable
planet ever develops intelligent life. For Earth to have the only civilization that has ever occurred in the
universe, then the odds of any habitable planet ever developing such a civilization must be less
than 2.5 × 10⁻²⁴. Similarly, for Earth to host the only civilization in our galaxy for all time, the odds of a
habitable zone planet ever hosting intelligent life must be less than 1.7 × 10⁻¹¹ (about 1 in 60 billion). The
figure for the universe implies that it is highly unlikely that Earth hosts the only intelligent life that has
ever occurred. The figure for our galaxy suggests that other civilizations may have occurred or will likely
occur in our galaxy.

Criticism

Criticism of the Drake equation follows mostly from the observation that several terms in the equation
are largely or entirely based on conjecture. Star formation rates are well-known, and the incidence of
planets has a sound theoretical and observational basis, but the other terms in the equation become very
speculative. The uncertainties revolve around our understanding of the evolution of life, intelligence, and
civilization, not physics. No statistical estimates are possible for some of the parameters, where only one
example is known. The net result is that the equation cannot be used to draw firm conclusions of any
kind, and the resulting margin of error is huge, far beyond what some consider acceptable or
meaningful.[67]

One reply to such criticisms is that even though the Drake equation currently involves speculation about
unmeasured parameters, it was intended as a way to stimulate dialogue on these topics. Then the focus
becomes how to proceed experimentally. Indeed, Drake originally formulated the equation merely as an
agenda for discussion at the Green Bank conference.

FERMI PARADOX


The pessimists' most telling argument in the SETI debate stems not from theory or conjecture but from
an actual observation: the presumed lack of extraterrestrial contact.[6] A civilization lasting for tens of
millions of years might be able to travel anywhere in the galaxy, even at the slow speeds foreseeable
with our own kind of technology. Furthermore, no confirmed signs of intelligence elsewhere have been
recognized as such, either in our galaxy or in the observable universe of 2 trillion galaxies.[70][71] According
to this line of thinking, the tendency to fill up all available territory seems to be a universal trait of living
things, so the Earth should have already been colonized, or at least visited, but no evidence of this exists.
Hence Fermi's question "Where is everybody?".

A large number of explanations have been proposed to explain this lack of contact; a book published in
2015 elaborated on 75 different explanations.[74] In terms of the Drake Equation, the explanations can be
divided into three classes:

Few intelligent civilizations ever arise. This is an argument that at least one of the first few
terms, R · fp · ne · fl · fi, has a low value. The most common suspect is fi, but explanations such as the rare
Earth hypothesis argue that ne is the small term.

Intelligent civilizations exist, but we see no evidence, meaning fc is small. Typical arguments include that
civilizations are too far apart, it is too expensive to spread throughout the galaxy, civilizations broadcast
signals for only a brief period of time, it is dangerous to communicate, and many others.

The lifetime of intelligent, communicative civilizations is short, meaning the value of L is small. Drake
suggested that a large number of extraterrestrial civilizations would form, and he further speculated that
the lack of evidence of such civilizations may be because technological civilizations tend to disappear
rather quickly. Typical explanations include it is the nature of intelligent life to destroy itself, it is the
nature of intelligent life to destroy others, they tend to experience a technological singularity, and
others.

These lines of reasoning lead to the Great Filter hypothesis,[75] which states that since there are no
observed extraterrestrial civilizations, despite the vast number of stars, then some step in the process
must be acting as a filter to reduce the final value. According to this view, either it is very hard for
intelligent life to arise, or the lifetime of such civilizations, or the period of time they reveal their
existence, must be relatively short.

In fiction and popular culture

Frederik Pohl's Hugo Award-winning short story "Fermi and Frost" cites the paradox as evidence for the
short lifetime of technical civilizations: that is, the possibility that once a civilization develops the power
to destroy itself (perhaps by nuclear warfare), it does.

Optimistic results of the equation, along with unobserved extraterrestrials, also serve as the backdrop for
humorous suggestions, such as Terry Bisson's classic short story "They're Made Out of Meat", that there
are many extraterrestrial civilizations but that they are deliberately ignoring humanity.[76]

The equation was cited by Gene Roddenberry as supporting the multiplicity of inhabited planets shown
on Star Trek, the television series he created. However, Roddenberry did not have the equation with him,
and he was forced to "invent" it for his original proposal.[77] The invented equation created by
Roddenberry is:

However, a number raised to the first power is merely the number itself.

Eleanor Ann Arroway paraphrases the Drake equation several times in the film Contact (1997), using the
magnitude of N and its implications for the output value to justify the SETI program.

In an episode of The Big Bang Theory, Howard Wolowitz alludes to the Drake Equation on "Ladies Night"
for the number of potential women available to the quartet, after which his friend Sheldon
Cooper recites the equation from memory.

Ambient electronic music duo Carbon Based Lifeforms has the Drake Equation recited in their song
"Abiogenesis" from their album World Of Sleepers.

In the sci-fi TV series The Expanse, the equation is explained by Dr. Strickland to Colonel Janus.

FIRST EXAMPLE OF FERMI PROBLEM ESTIMATE APPLICATION.


TRINITROTOLUENE (TNT)

TNT

Names

Preferred IUPAC name

2-Methyl-1,3,5-trinitrobenzene

Other names

2,4,6-Trinitrotoluene
TNT
Trilite
Tolite
Trinol
Trotyl
Tritolo
Tritolol
Triton
Tritone
Trotol
Trinitrotoluol
2,4,6-Trinitromethylbenzene

Identifiers

CAS Number 118-96-7


Abbreviations TNT

ChemSpider 8073

DrugBank DB01676

ECHA InfoCard 100.003.900

EC Number 204-289-6

KEGG C16391
PubChem CID 11763

RTECS number XU0175000

UNII H43RF5TRM5

UN number 0209 (dry or wetted with < 30% water); 0388, 0389 (mixtures with trinitrobenzene, hexanitrostilbene)

Properties
Chemical formula C7H5N3O6

Molar mass 227.13 g/mol

Appearance Pale yellow solid. Loose "needles", flakes or prills before melt-casting; a solid block after being poured into a casing.

Density 1.654 g/cm3

Melting point 80.35 °C (176.63 °F; 353.50 K)

Boiling point 240.0 °C (464.0 °F; 513.1 K) (decomposes)[1]

Solubility in water 0.13 g/L (20 °C)

Solubility in ether, acetone, benzene, pyridine: soluble

Vapor pressure 0.0002 mmHg (20 °C)[2]

Explosive data

Shock sensitivity Insensitive

Friction sensitivity Insensitive to 353 N

Detonation velocity 6900 m/s

RE factor 1.00

Hazards

Safety data sheet ICSC 0967


EU classification (DSD) (outdated): E (Explosive), T (Toxic), N (Dangerous for the environment)

R-phrases (outdated): R2, R23/24/25, R33, R51/53

S-phrases (outdated): (S1/2), S35, S45, S61

NFPA 704: Health 2, Flammability 4, Instability 4

Lethal dose or concentration (LD, LC):

LD50 (median dose): 795 mg/kg (rat, oral); 660 mg/kg (mouse, oral)[3]

LDLo (lowest published): 500 mg/kg (rabbit, oral); 1850 mg/kg (cat, oral)[3]

US health exposure limits (NIOSH):

PEL (Permissible): TWA 1.5 mg/m3 [skin][2]

REL (Recommended): TWA 0.5 mg/m3 [skin][2]

IDLH (Immediate danger): 500 mg/m3[2]

Related compounds
Related compounds picric acid
hexanitrobenzene
2,4-Dinitrotoluene
Except where otherwise noted, data are given for materials in their standard state (at 25 °C [77 °F], 100 kPa).


Trinitrotoluene (TNT),[4][5] or more specifically 2,4,6-trinitrotoluene, is a chemical compound with the
formula C6H2(NO2)3CH3. This yellow solid is sometimes used as a reagent in chemical synthesis, but it is
best known as an explosive material with convenient handling properties. The explosive yield of TNT is
considered to be the standard measure of the strength of bombs and other explosives. In chemistry, TNT is
used to generate charge transfer salts.

History

Chunks of explosives-grade TNT

Trinitrotoluene melting at 81 °C

M107 artillery shells. All are labelled to indicate a filling of "Comp B" (mixture of TNT and RDX) and
have fuzes fitted

Analysis of TNT production by branch of the German army between 1941 and the first quarter of 1944
shown in thousands of tons per month

Detonation of the 500-ton TNT explosive charge as part of Operation Sailor Hat in 1965. The white blast-
wave is visible on the water surface and a shock condensation cloud is visible overhead.

World War I-era HE artillery shell for a 9.2 inch howitzer. The red copper band at the lower part of the
shell is called a driving band or girdle. The green band (marked "Trotyl") indicates that the shell is inert or
intended for exercise use. Colouring of live shells varies by country; a yellow ring is common for live rounds
and red for older projectiles. Blue = exercise, white = phosphorus, gray = smoke.
TNT was first prepared in 1863 by German chemist Julius Wilbrand[6] and originally used as a yellow dye.
Its potential as an explosive was not appreciated for several years, mainly because it was so difficult to
detonate and because it was less powerful than alternatives. Its explosive properties were first discovered
by another German chemist, Carl Häussermann, in 1891.[7] TNT can be safely poured when liquid into
shell cases, and is so insensitive that it was exempted from the UK's Explosives Act 1875 and was not
considered an explosive for the purposes of manufacture and storage.[8]
The German armed forces adopted it as a filling for artillery shells in 1902. TNT-filled armour-
piercing shells would explode after they had penetrated the armour of British capital ships, whereas the
British lyddite-filled shells tended to explode upon striking armour, thus expending much of their energy
outside the ship.[8] The British started replacing lyddite with TNT in 1907.
The United States Navy continued filling armor-piercing shells with explosive D after some other nations
had switched to TNT; but began filling naval mines, bombs, depth charges, and torpedo warheads with
burster charges of crude grade B TNT with the color of brown sugar and requiring an explosive
booster charge of granular crystallized grade A TNT for detonation. High-explosive shells were filled
with grade A TNT, which became preferred for other uses as industrial chemical capacity became
available for removing xylene and similar hydrocarbons from the toluene feedstock and
other nitrotoluene isomer byproducts from the nitrating reactions.[9]

Preparation
In industry, TNT is produced in a three-step process. First, toluene is nitrated with a mixture
of sulfuric and nitric acid to produce mononitrotoluene (MNT). The MNT is separated and then renitrated
to dinitrotoluene (DNT). In the final step, the DNT is nitrated to trinitrotoluene (TNT) using
an anhydrous mixture of nitric acid and oleum. Nitric acid is consumed by the manufacturing process, but

the diluted sulfuric acid can be reconcentrated and reused. After nitration, TNT is stabilized by a process
called sulfitation, where the crude TNT is treated with aqueous sodium sulfite solution to remove less
stable isomers of TNT and other undesired reaction products. The rinse water from sulphitation is known
as red water and is a significant pollutant and waste product of TNT manufacture.[10]
Control of nitrogen oxides in feed nitric acid is very important because free nitrogen dioxide can result in
oxidation of the methyl group of toluene. This reaction is highly exothermic and carries with it the risk of a
runaway reaction leading to an explosion.
In the laboratory, 2,4,6-trinitrotoluene is produced by a two-step process. A nitrating mixture of
concentrated nitric and sulfuric acids is used to nitrate toluene to a mixture of mono- and di-nitrotoluene
isomers, with careful cooling to maintain temperature. The nitrated toluenes are then separated, washed
with dilute sodium bicarbonate to remove oxides of nitrogen, and then carefully nitrated with a mixture
of fuming nitric acid and sulfuric acid. Towards the end of the nitration, the mixture is heated on a steam
bath. The trinitrotoluene is separated, washed with a dilute solution of sodium sulfite and
then recrystallized from alcohol.
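The three nitration stages described above can be summarized, in the same shorthand later used for the decomposition equations, as successive replacements of a ring hydrogen by a nitro group, each releasing one molecule of water (a simplified overall scheme that ignores isomer mixtures and the role of sulfuric acid as dehydrating agent):

C7H8 + HNO3 → C7H7NO2 + H2O (toluene to mononitrotoluene, MNT)
C7H7NO2 + HNO3 → C7H6N2O4 + H2O (MNT to dinitrotoluene, DNT)
C7H6N2O4 + HNO3 → C7H5N3O6 + H2O (DNT to trinitrotoluene, TNT)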

Applications
TNT is one of the most commonly used explosives for military, industrial, and mining applications. TNT has
been used in conjunction with hydraulic fracturing, a process used to recover oil and gas from shale
formations. The technique involves displacing and detonating nitroglycerin in hydraulically induced
fractures followed by wellbore shots using pelletized TNT.[11]
TNT is valued partly because of its insensitivity to shock and friction, with reduced risk of
accidental detonation compared to more sensitive explosives such as nitroglycerin. TNT melts at 80 C
(176 F), far below the temperature at which it will spontaneously detonate, allowing it to be poured or
safely combined with other explosives. TNT neither absorbs nor dissolves in water, which allows it to be
used effectively in wet environments. To detonate, TNT must be triggered by a pressure wave from a
starter explosive, called an explosive booster.
Although blocks of TNT are available in various sizes (e.g. 250 g, 500 g, 1,000 g), it is more commonly
encountered in synergistic explosive blends comprising a variable percentage of TNT plus other
ingredients. Examples of explosive blends containing TNT include:

Amatex (ammonium nitrate and RDX)[12]
Amatol (ammonium nitrate)[13]
Ammonal (ammonium nitrate and aluminium powder, plus sometimes charcoal)
Baratol (barium nitrate and wax)[14]
Composition B (RDX and paraffin wax)[15]
Composition H6
Cyclotol (RDX)[16]
Ednatol
Hexanite[17] (hexanitrodiphenylamine[18][19])
Minol
Octol
Pentolite
Picratol
Tetrytol
Torpex
Tritonal

Explosive character
Upon detonation, TNT decomposes as follows:

2 C7H5N3O6 → 3 N2 + 5 H2O + 7 CO + 7 C
2 C7H5N3O6 → 3 N2 + 5 H2 + 12 CO + 2 C
The reaction is exothermic but has a high activation energy in the gas phase (~62 kcal/mol). The
condensed phases (solid or liquid) show markedly lower activation energies of roughly 35
kcal/mol due to unique bimolecular decomposition routes at elevated densities.[20] Because of the
production of carbon, TNT explosions have a sooty appearance. Because TNT has an excess of
carbon, explosive mixtures with oxygen-rich compounds can yield more energy per kilogram than
TNT alone. During the 20th century, amatol, a mixture of TNT with ammonium nitrate was a
widely used military explosive.
TNT can be detonated with a high velocity initiator or by efficient concussion. [21]For many years,
TNT used to be the reference point for the Figure of Insensitivity. TNT had a rating of exactly 100
on the "F of I" scale. The reference has since been changed to a more sensitive explosive
called RDX, which has an F of I rating of 80.
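The carbon excess mentioned above can be quantified with the standard oxygen-balance formula OB% = (−1600/M) × (2x + y/2 − z) for an explosive CxHyNwOz of molar mass M. The sketch below is a routine textbook calculation, not a figure taken from the sources cited here; it gives roughly −74% for TNT, confirming that the molecule is strongly oxygen-deficient.

# Sketch: oxygen balance of TNT, C7H5N3O6, molar mass about 227.13 g/mol.
# OB% = -1600/M * (2*x + y/2 - z), with x carbon, y hydrogen and z oxygen atoms.
x, y, z, M = 7, 5, 6, 227.13
ob_percent = -1600.0 / M * (2 * x + y / 2 - z)
print(round(ob_percent, 1))   # about -74.0, hence the sooty, carbon-rich detonation products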

Energy content
See also: TNT equivalent

Cross-sectional view of Oerlikon 20 mm cannon shells (dating from circa 1945) showing color
codes for TNT and pentolite fillings
The heat of detonation utilized by NIST is 4184 J/g (4.184 MJ/kg).[22] The energy density of TNT is
used as a reference-point for many other explosives, including nuclear weapons, the energy
content of which is measured in equivalent kilotons (~4.184 terajoules) or megatons
(~4.184 petajoules) of TNT. The heat of combustion is 14.5 megajoules per kilogram, which
requires that some of the carbon in TNT react with atmospheric oxygen, which does not occur in
the initial event.[23]
For comparison, gunpowder contains 3 megajoules per kilogram, dynamite contains 7.5
megajoules per kilogram, and gasoline contains 47.2 megajoules per kilogram (though gasoline
requires an oxidant, so an optimized gasoline and O2 mixture contains 10.4 megajoules per
kilogram).
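Because all of the figures above hang off the 4.184 MJ/kg convention, converting between a mass of TNT and an energy release is a single multiplication. The sketch below uses only the conversion factor from this section; the 20-kiloton example matches the Trinity yield quoted later in this chapter.

# Sketch: TNT-equivalent energy using the 4.184 MJ/kg convention (1 kt = 1e6 kg).
JOULES_PER_KILOTON = 4.184e12   # 1e6 kg * 4.184e6 J/kg

def kilotons_to_terajoules(kilotons):
    return kilotons * JOULES_PER_KILOTON / 1e12

print(kilotons_to_terajoules(1))    # about 4.2 TJ per kiloton
print(kilotons_to_terajoules(20))   # about 84 TJ, the Trinity yield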

Detection
Various methods can be used to detect TNT including optical and electrochemical sensors
and explosive-sniffing dogs. In 2013, researchers from the Indian Institutes of
Technology using noble-metal quantum clusters could detect TNT at the sub-zeptomolar (10⁻¹⁸ mol/m3) level.[24]

Safety and toxicity


TNT is poisonous, and skin contact can cause skin irritation, causing the skin to turn a bright
yellow-orange color. During the First World War, munition workers who handled the chemical
found that their skin turned bright yellow, which resulted in their acquiring the nickname "canary
girls" or simply "canaries."
People exposed to TNT over a prolonged period tend to experience anemia and
abnormal liver functions. Blood and liver effects, spleen enlargement and other harmful effects on
the immune system have also been found in animals that ingested or breathed trinitrotoluene.
There is evidence that TNT adversely affects male fertility.[25] TNT is listed as a possible
human carcinogen, with carcinogenic effects demonstrated in animal experiments (rat), although

effects upon humans so far amount to none [according to IRIS of March 15, 2000].[26] Consumption
of TNT produces red urine through the presence of breakdown products
and not blood as sometimes believed.[27]
Some military testing grounds are contaminated with TNT. Wastewater from munitions programs,
including contamination of surface and subsurface waters, may be colored pink because of the
presence of TNT. Such contamination, called "pink water", may be difficult and expensive
to remedy.
TNT is prone to exudation of dinitrotoluenes and other isomers of trinitrotoluene. Even small
quantities of such impurities can cause this effect, which shows especially
in projectiles containing TNT stored at higher temperatures, e.g. during summer. Exudation of
impurities leads to formation of pores and cracks (which in turn cause increased shock
sensitivity). Migration of the exudated liquid into the fuze screw thread can form fire channels,
increasing the risk of accidental detonations; fuze malfunction can result from the liquids migrating
into its mechanism.[28] Calcium silicate is mixed with TNT to mitigate the tendency towards
exudation.[29]

Ecological impact
Because of its use in construction and demolition, TNT has become the most widely used
explosive, and thus its toxicity is the most characterized and reported. Residual TNT from
manufacture, storage, and use can pollute water, soil, atmosphere, and biosphere.
The concentration of TNT in contaminated soil can reach 50 g/kg of soil, where the highest
concentrations can be found on or near the surface. In the last decade, the United States
Environmental Protection Agency (USEPA) has declared TNT a pollutant whose removal is a
priority.[30] The USEPA maintains that TNT levels in soil should not exceed 17.2 gram per
kilogram of soil and 0.01 milligrams per liter of water.[31]
Aqueous solubility
Dissolution is a measure of the rate at which solid TNT in contact with water is dissolved. The
relatively low aqueous solubility of TNT causes solid particles to be dissolved and released to the
environment continuously over extended periods of time.[32] Studies have shown that TNT
dissolved more slowly in saline water than in freshwater; however, when salinity was altered, TNT
dissolved at the same speed.[33] Because TNT is moderately soluble in water, it can
migrate through subsurface soil, and cause groundwater contamination.[34]
Soil adsorption
Adsorption is a measure of the distribution between soluble and sediment adsorbed contaminants
following attainment of equilibrium. TNT and its transformation products are known to adsorb to
surface soils and sediments, where they undergo reactive transformation or remain
stored.[35] The movement of organic contaminants through soils is a function of their ability to
associate with the mobile phase (water) and a stationary phase (soil). Materials that associate
strongly with soils move slowly through soil. Materials that associate strongly with water move
through water with rates approaching that of ground water movement.
The association constant for TNT with a soil is 2.7 to 11 liters per kilogram of soil.[36] This means
that TNT has a one- to tenfold greater tendency to adhere to soil particulates than to remain in solution when introduced
into the soil.[32] Hydrogen bonding and ion exchange are two suggested mechanisms of
adsorption between the nitro functional groups and soil colloids.
The number of functional groups on TNT influences the ability to adsorb into soil. Adsorption
coefficient values have been shown to increase with an increase in the number of amino groups.
Thus, adsorption of the TNT decomposition product 2,4-diamino-6-nitrotoluene (2,4-DANT) was
greater than that for 4-amino-2,6-dinitrotoluene (4-ADNT), which was greater than that for
TNT.[32] Lower adsorption coefficients for 2,6-DNT compared to 2,4-DNT can be attributed to
the steric hindrance of the NO2 group in the ortho position.

Research has shown that in freshwater environments, with a high abundance of Ca2+, the
adsorption of TNT and its transformation products to soils and sediments may be lower than
observed in a saline environment dominated by K+ and Na+. Therefore, when considering the
adsorption of TNT, the type of soil or sediment and the ionic composition and strength of the
ground water are important factors.
The association constants for TNT and its degradation products with clays have been determined.
Clay minerals have a significant effect on the adsorption of energetic compounds. Soil properties
such as organic carbon content and cation exchange capacity have
significant impacts on the reported adsorption coefficients.
Additional studies have shown that the mobility of TNT degradation products is likely to be lower
than TNT in subsurface environments where specific adsorption to clay minerals dominates the
sorption process.[37] Thus, the mobility of TNT and its transformation products are dependent on
the characteristics of the sorbent.[37] The mobility of TNT in groundwater and soil has been
extrapolated from sorption and desorption isotherm models determined with humic acids, in
aquifer sediments, and soils.[37] From these models, it is predicted that TNT has a low retention
and transports readily in the environment.
Compared to other explosives, TNT has a higher association constant with soil, meaning it
adheres more with soil than with water. Conversely, other explosives, such as RDX and HMX with
low association constants (ranging from 0.06 to 7.3 L/kg and 0 to 1.6 L/kg respectively) can move
more rapidly in water.[32]
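One common way to turn a soil-water partition coefficient such as the 2.7 to 11 L/kg range above into a mobility estimate is the standard retardation-factor relation R = 1 + (ρb/θ) × Kd, where ρb is soil bulk density and θ is porosity. The sketch below uses generic assumed values for bulk density and porosity purely for illustration; only the Kd values come from this section.

# Sketch: retardation factor R = 1 + (bulk_density / porosity) * Kd.
# Bulk density and porosity are generic assumed values, not from the cited studies.
bulk_density = 1.6   # kg/L, assumed typical soil
porosity = 0.35      # dimensionless, assumed
for kd in (2.7, 11.0, 0.06, 7.3):   # the TNT range and part of the RDX range quoted above
    r = 1 + (bulk_density / porosity) * kd
    print(kd, round(r, 1))           # larger R means the compound moves more slowly than the water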
Chemical breakdown
TNT is a reactive molecule and is particularly prone to reaction with reduced components of
sediments or to photodegradation in the presence of sunlight. TNT is thermodynamically and
kinetically capable of reacting with a wide number of components of many environmental
systems. This includes wholly abiotic reactants, like photons, hydrogen sulfide, and Fe2+, as well as microbial
communities, both oxic and anoxic.
Soils with high clay contents or small particle sizes and high total organic carbon content have
been shown to promote TNT transformation. Possible TNT transformations include reduction of
one, two, or three nitro-moieties to amines and coupling of amino transformation products to
form dimers. Formation of the two monoamino transformation products, 2-ADNT and 4-ADNT are
energetically favored, and therefore are observed in contaminated soils and ground water. The
diamino products are energetically less favorable, and even less likely are the triamino products.
The transformation of TNT is significantly enhanced under anaerobic conditions as well as under
highly reducing conditions. TNT transformations in soils can occur both biologically and
abiotically.[37]
Photolysis is a major process that impacts the transformation of energetic compounds. The
alteration of a molecule in photolysis occurs either by direct absorption of light energy or by
the transfer of energy from a photosensitized compound. Phototransformation of TNT results in
the formation of nitrobenzenes, benzaldehydes, azodicarboxylic acids, and nitrophenols, as a
result of the oxidation of methyl groups, reduction of nitro groups, and dimer formation.
Evidence of the photolysis of TNT is the color change of wastewaters to pink when
exposed to sunlight. Photolysis was more rapid in river water than in distilled
water. Ultimately, photolysis affects the fate of TNT primarily in the aquatic environment but could
also affect the reaction when exposed to sunlight on the soil surface.
Biodegradation
The ligninolytic physiological phase and manganese peroxidase system of fungi can cause a very
limited amount of mineralization of TNT in a liquid culture, though not in soil. An organism capable
of the remediation of large amounts of TNT in soil has yet to be discovered.[38] Both wild and
transgenic plants can phytoremediate explosives from soil and water.

SECOND EXAMPLE OF FERMI PROBLEM ESTIMATE APPLICATION.

TRINITY (NUCLEAR TEST)



Trinity

The Trinity explosion, 16 ms after detonation. The viewed


hemisphere's highest point in this image is about 200 metres
(660 ft) high.

Information

Country United States

Test site Trinity Site, New Mexico

Date July 16, 1945

Test type Atmospheric

Device type Plutonium implosion fission

Yield 20 kilotons of TNT (84 TJ)

Test chronology

Operation Crossroads

Trinity Site

U.S. National Register of Historic Places

U.S. Historic district

U.S. National Historic Landmark District

N.M. State Register of Cultural Properties

Trinity Site Obelisk

Nearest city Bingham, New Mexico

Coordinates 33°40′38″N 106°28′31″W

Area 36,480 acres (14,760 ha)

Built 1945

NRHP Reference # 66000493[1]

NMSRCP # 30

Significant dates

Added to NRHP October 15, 1966

Designated NHLD December 21, 1965[2]

Designated NMSRCP December 20, 1968

Trinity was the code name of the first detonation of a nuclear weapon. It was conducted by the United
States Army at 5:29 a.m. on July 16, 1945, as part of the Manhattan Project. The test was conducted in
the Jornada del Muerto desert about 35 miles (56 km) southeast of Socorro, New Mexico, on what was
then the USAAF Alamogordo Bombing and Gunnery Range (now part of White Sands Missile Range). The
only structures originally in the vicinity were the McDonald Ranch House and its ancillary buildings, which
scientists used as a laboratory for testing bomb components. A base camp was constructed, and there
were 425 people present on the weekend of the test.
The code name "Trinity" was assigned by J. Robert Oppenheimer, the director of the Los Alamos
Laboratory, inspired by the poetry of John Donne. The test was of an implosion-design plutonium device,

informally nicknamed "The Gadget", of the same design as the Fat Man bomb later detonated over
Nagasaki, Japan, on August 9, 1945. The complexity of the design required a major effort from the Los
Alamos Laboratory, and concerns about whether it would work led to a decision to conduct the first nuclear
test. The test was planned and directed by Kenneth Bainbridge.
Fears of a fizzle led to the construction of a steel containment vessel called Jumbo that could contain the
plutonium, allowing it to be recovered, but Jumbo was not used. A rehearsal was held on May 7, 1945, in
which 108 short tons (96 long tons; 98 t) of high explosive spiked with radioactive isotopes were
detonated. The Gadget's detonation released the explosive energy of about 22 kilotons of TNT (92 TJ).
Observers included Vannevar Bush, James Chadwick, James Conant, Thomas Farrell, Enrico
Fermi, Richard Feynman, Leslie Groves, Robert Oppenheimer, Geoffrey Taylor, and Richard Tolman.
The test site was declared a National Historic Landmark district in 1965, and listed on the National
Register of Historic Places the following year.

Background
The creation of nuclear weapons arose from scientific and political developments of the 1930s. The
decade saw many new discoveries about the nature of atoms, including the existence of nuclear fission.
The concurrent rise of fascist governments in Europe led to a fear of a German nuclear weapon project,
especially among scientists who were refugees from Nazi Germany and other fascist countries. When their
calculations showed that nuclear weapons were theoretically feasible, the British and United States
governments supported an all-out effort to build them.[3]
These efforts were transferred to the authority of the U.S. Army in June 1942, and became the Manhattan
Project.[4] Brigadier General Leslie R. Groves, Jr., was appointed its director in September 1942. The
weapons development portion of this project was located at the Los Alamos Laboratory in northern New
Mexico, under the directorship of physicist J. Robert Oppenheimer. The University of Chicago, Columbia
University and the Radiation Laboratory at the University of California, Berkeley conducted other
development work.
Production of the fissile isotopes uranium-235 and plutonium-239 were enormous undertakings given the
technology of the 1940s, and accounted for 80% of the total costs of the project. Uranium enrichment was
carried out at the Clinton Engineer Works near Oak Ridge, Tennessee.[7] Theoretically, enriching uranium
was feasible through pre-existing techniques, but it proved difficult to scale to industrial levels and was
extremely costly. Only 0.71 percent of natural uranium was uranium-235, and it was estimated that it would
take 27,000 years to produce a gram of uranium with mass spectrometers, but kilogram amounts were
required.
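The 27,000-years-per-gram figure is itself a Fermi-style estimate, and turning it around shows why laboratory mass spectrometers could never supply a weapon. In the sketch below only the quoted rate comes from the text; the kilogram-scale core mass is an assumed illustrative figure.

# Sketch: scaling the quoted rate of 27,000 machine-years per gram of separated uranium-235.
years_per_gram = 27000
core_mass_grams = 50000   # assumed ~50 kg of uranium-235, for illustration only
machine_years = years_per_gram * core_mass_grams
print(machine_years)      # about 1.4e9 machine-years, hence the need for industrial-scale enrichment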
Plutonium is a synthetic element with complicated physical, chemical and metallurgical properties. It is not
found in nature in appreciable quantities. Until mid-1944, the only plutonium that had been isolated had
been produced in cyclotrons in microgram amounts, whereas weapons required kilograms.[9] In April 1944,
physicist Emilio Segrè, the head of the Los Alamos Laboratory's P-5 (Radioactivity) Group, received the
first sample of reactor-bred plutonium from the X-10 Graphite Reactor at Oak Ridge. He discovered that, in
addition to the plutonium-239 isotope, it also contained significant amounts of plutonium-240.[11] The
Manhattan Project produced plutonium in nuclear reactors at the Hanford Engineer Works near Hanford,
Washington.
The longer the plutonium remained irradiated inside a reactor (necessary for high yields of the metal), the
greater the content of the plutonium-240 isotope, which undergoes spontaneous fission at thousands of
times the rate of plutonium-239. The extra neutrons it released meant that there was an unacceptably high
probability that plutonium in a gun-type fission weapon would detonate too soon after a critical mass was
formed, producing a "fizzle" a nuclear explosion many times smaller than a full explosion.[11] This meant
that the Thin Man bomb design that the laboratory had developed would not work properly.
The Laboratory turned to an alternative, albeit more technically difficult, design, an implosion-type nuclear
weapon. In September 1943, mathematician John von Neumann had proposed a design in which a
fissile core would be surrounded by two different high explosives that produced shock waves of different

speeds. Alternating the faster- and slower-burning explosives in a carefully calculated configuration would
produce a compressive wave upon their simultaneous detonation. This so-called "explosive lens" focused
the shock waves inward with enough force to rapidly compress the plutonium core to several times its
original density. This reduced the size of a critical mass, making it supercritical. It also activated a
small neutron source at the center of the core, which assured that the chain reaction began in earnest at
the right moment. Such a complicated process required research and experimentation
in engineering and hydrodynamics before a practical design could be developed.[13] The entire Los Alamos
Laboratory was reorganized in August 1944 to focus on design of a workable implosion bomb. [14]

Preparation
Decision

Map of the Trinity Site
Source: United States Army, Jones [Link]. [Link]/wiki/file
The idea of testing the implosion device was brought up in discussions at Los Alamos in January 1944,
and attracted enough support for Oppenheimer to approach Groves. Groves gave approval, but he had
concerns. The Manhattan Project had spent a great deal of money and effort to produce the plutonium and
he wanted to know whether there would be a way to recover it. The Laboratory's Governing Board then
directed Norman Ramsey to investigate how this could be done. In February 1944 Ramsey proposed a
small-scale test in which the explosion was limited in size by reducing the number of generations of chain
reactions, and that it take place inside a sealed containment vessel from which the plutonium could be
recovered.[15]
The means of generating such a controlled reaction were uncertain, and the data obtained would not be as
useful as that from a full-scale explosion.[15] Oppenheimer argued that the "implosion gadget must be
tested in a range where the energy release is comparable with that contemplated for final use."[16] In March
1944, he obtained Groves's tentative approval for testing a full-scale explosion inside a containment
vessel, although Groves was still worried about how he would explain the loss of a billion dollars worth of
plutonium to a Senate Committee in the event of a failure.[15]
Code name
The exact origin of the code name "Trinity" for the test is unknown, but it is often attributed to
Oppenheimer as a reference to the poetry of John Donne, which in turn references the Christian notion of
the Trinity (three-fold nature of God). In 1962, Groves wrote to Oppenheimer about the origin of the name,

asking if he had chosen it because it was a name common to rivers and peaks in the West and would not
attract attention, and elicited this reply:
I did suggest it, but not on that ground ... Why I chose the name is not clear, but I know what thoughts
were in my mind. There is a poem of John Donne, written just before his death, which I know and love.
From it a quotation:
As West and East
In all flatt Maps (and I am one) are one,
So death doth touch the Resurrection.[a][17]
That still does not make a Trinity, but in another, better known devotional poem Donne
opens,
Batter my heart, three person'd God.[b][18][19]
Organization
In March 1944, planning for the test was assigned to Kenneth Bainbridge, a professor of physics
at Harvard University, working under explosives expert George Kistiakowsky. Bainbridge's group was
known as the E-9 (Explosives Development) Group.[20] Stanley Kershaw, formerly from the National Safety
Council, was made responsible for safety.[20] Captain Samuel P. Davalos, the assistant post engineer at
Los Alamos, was placed in charge of construction.[21] First Lieutenant Harold C. Bush became commander
of the Base Camp at Trinity.[22] Scientists William Penney, Victor Weisskopf and Philip Moon were
consultants. Eventually seven subgroups were formed:[23]

TR-1 (Services) under John H. Williams


TR-2 (Shock and Blast) under John H. Manley
TR-3 (Measurements) under Robert R. Wilson
TR-4 (Meteorology) under J. M. Hubbard
TR-5 (Spectrographic and Photographic) under Julian E. Mack
TR-6 (Airborne Measurements) under Bernard Waldman
TR-7 (Medical) under Louis H. Hempelmann
The E-9 group was renamed the X-2 (Development, Engineering and Tests) Group in the August 1944
reorganization.[20]
Test site

Trinity Site (red arrow) near Carrizozo Malpais


Safety and security required a remote, isolated and unpopulated area. The scientists also wanted a flat
area to minimize secondary effects of the blast, and with little wind to spread radioactive fallout. Eight
candidate sites were considered: the Tularosa Valley; the Jornada del Muerto Valley; the area southwest
of Cuba, New Mexico, and north of Thoreau; and the lava flats of the El Malpais National Monument, all in
New Mexico; the San Luis Valley near the Great Sand Dunes National Monument in Colorado; the Desert
Training Area and San Nicolas Island in Southern California; and the sand bars of Padre Island, Texas.[24]

The sites were surveyed by car and by air by Bainbridge, R. W. Henderson, Major W. A. Stevens and
Major Peer de Silva. The site finally chosen, after consulting with Major General Uzal Ent, the commander
of the Second Air Force, on September 7, 1944,[24] lay at the northern end of the Alamogordo Bombing
Range, in Socorro County near the towns of Carrizozo and San Antonio (33.6773°N 106.4754°W).[25]
The only structures in the vicinity were the McDonald Ranch House and its ancillary buildings, about 2
miles (3.2 km) to the southeast.[26] Like the rest of the Alamogordo Bombing Range, it had been acquired
by the government in 1942. The patented land had been condemned and grazing
rights suspended.[27][28] Scientists used this as a laboratory for testing bomb components. [26] Bainbridge
and Davalos drew up plans for a base camp with accommodation and facilities for 160 personnel, along
with the technical infrastructure to support the test. A construction firm from Lubbock, Texas, built the
barracks, officers' quarters, mess hall and other basic facilities.[21] The requirements expanded and, by
July 1945, 250 people worked at the Trinity test site. On the weekend of the test, there were 425
present.[29]

The Trinity test base camp


Lieutenant Bush's twelve-man military police unit arrived at the site from Los Alamos on December 30,
1944. This unit established initial security checkpoints and horse patrols. The distances around the site
proved too great for the horses, so they resorted to using jeeps and trucks for transportation. The horses
were used for playing polo.[24][30] Maintenance of morale among men working long hours under harsh
conditions along with dangerous reptiles and insects was a challenge. Bush strove to improve the food and
accommodation, and to provide organized games and nightly movies.[31]
Throughout 1945, other personnel arrived at the Trinity Site to help prepare for the bomb test. They tried to
use water out of the ranch wells, but found the water so alkaline they could not drink it. They were forced
to use U.S. Navysaltwater soap and hauled drinking water in from the firehouse in Socorro. Gasoline and
diesel were purchased from the Standard Oil plant there.[30] Military and civilian construction personnel
built warehouses, workshops, a magazine and commissary. The railroad siding at Pope, New Mexico, was
upgraded by adding an unloading platform. Roads were built, and 200 miles (320 km) of telephone wire
was strung. Electricity was supplied by portable generators.[32][33]
Due to its proximity to the bombing range, the base camp was accidentally bombed twice in May. When
the lead plane on a practice night raid accidentally knocked out the generator or otherwise doused the
lights illuminating the intended target, the crews went in search of lights; since they had not been informed of
the presence of the Trinity base camp, which was lit, they bombed it instead. The accidental bombing damaged
the stables and the carpentry shop, and a small fire resulted.[34]
Jumbo

Jumbo arrives at the site
Responsibility for the design of a containment vessel for an unsuccessful explosion, known as "Jumbo",
was assigned to Robert W. Henderson and Roy W. Carlson of the Los Alamos Laboratory's X-2A Section.
The bomb would be placed into the heart of Jumbo, and if the bomb's detonation was unsuccessful, the
outer walls of Jumbo would not be breached, making it possible to recover the bomb's plutonium. Hans
Bethe, Victor Weisskopf, and Joseph O. Hirschfelder made the initial calculations, followed by a more
detailed analysis by Henderson and Carlson.[22] They drew up specifications for a steel sphere 13 to 15
feet (3.96 to 4.57 m) in diameter, weighing 150 short tons (130 long tons; 140 t) and capable of handling a
pressure of 50,000 pounds per square inch (340,000 kPa). After consulting with the steel companies and
the railroads, Carlson produced a scaled-back cylindrical design that would be much easier to
manufacture, but still difficult to transport. Carlson identified a company that normally made boilers for the
Navy, Babcock & Wilcox, which had made something similar and was willing to attempt its manufacture.[35]
As delivered in May 1945,[36] Jumbo was 10 feet (3.05 m) in diameter and 25 feet (7.62 m) long with walls
14 inches (356 mm) thick, and weighed 214 short tons (191 long tons; 194 t).[37][38] A special train brought it
from Barberton, Ohio, to the siding at Pope, where it was loaded on a large trailer and towed 25 miles
(40 km) across the desert by crawler tractors.[39] At the time, it was the heaviest item ever shipped by
rail.[38]
For many of the Los Alamos scientists, Jumbo was "the physical manifestation of the lowest point in the
Laboratory's hopes for the success of an implosion bomb."[36] By the time it arrived, the reactors at Hanford
were producing plutonium in quantity, and Oppenheimer was confident that there would be enough for a second
test.[35] The use of Jumbo would interfere with the gathering of data on the explosion, the primary objective
of the test.[39] An explosion of more than 500 tons of TNT (2,100 GJ) would vaporize the steel and make it
hard to measure the thermal effects. Even 100 tons of TNT (420 GJ) would send fragments flying,
presenting a hazard to personnel and measuring equipment.[40] It was therefore decided not to use
it.[39] Instead, it was hoisted up a steel tower 800 yards (732 m) from the explosion, where it could be used
for a subsequent test.[35] In the end, Jumbo survived the explosion, although its tower did not. [37]
The development team also considered other methods of recovering active material in the event of a dud
explosion. One idea was to cover it with a cone of sand. Another was to suspend the bomb in a tank of
water. As with Jumbo, it was decided not to proceed with these means of containment either. The CM-10
(Chemistry and Metallurgy) group at Los Alamos also studied how the active material could be chemically
recovered after a contained or failed explosion.[40]
100-ton test
Because there would be only one chance to carry out the test correctly, Bainbridge decided that a
rehearsal should be carried out to allow the plans and procedures to be verified, and the instrumentation to
be tested and calibrated. Oppenheimer was initially skeptical, but gave permission, and later agreed that it
contributed to the success of the Trinity test.[33]

Men stack crates of high explosives for the 100-ton test
A 20-foot (6.1 m) high wooden platform was constructed 800 yards (732 m) to the south-east of
Trinity ground zero (33.67123°N 106.47229°W) and 108 long tons (110 t) of TNT were stacked on top of it.
Kistiakowsky assured Bainbridge that the explosives used were not susceptible to shock. This was proven
correct when some boxes fell off the elevator lifting them up to the platform. Flexible tubing was threaded
through the pile of boxes of explosives. A radioactive slug from Hanford with 1,000 curies (37 TBq) of beta
ray activity and 400 curies (15 TBq) of gamma ray activity was dissolved, and Hempelmann poured it into
the tubing.[33][41][42]
The test was scheduled for May 5, but was postponed for two days to allow for more equipment to be
installed. Requests for further postponements had to be refused because they would have impacted the
schedule for the main test. The detonation time was set for 04:00 Mountain War Time (MWT), on May 7,
but there was a 37-minute delay to allow the observation plane,[43] a Boeing B-29 Superfortress from
the 216th Army Air Forces Base Unit flown by Major Clyde "Stan" Shields,[44] to get into position.[43]
The fireball of the conventional explosion was visible from Alamogordo Army Air Field 60 miles (97 km)
away, but there was little shock at the base camp 10 miles (16 km) away.[43] Shields thought that the
explosion looked "beautiful", but it was hardly felt at 15,000 feet (4,572 m).[44] Herbert L.
Anderson practiced using a converted M4 Sherman tank lined with lead to approach the 5-foot (1.52 m)
deep and 30-foot (9.14 m) wide blast crater and take a sample of dirt, although the radioactivity was low
enough to allow several hours of unprotected exposure. An electrical signal of unknown origin caused the
explosion to go off 0.25 seconds early, ruining experiments that required split-second timing.
The piezoelectric gauges developed by Anderson's team correctly indicated an explosion of 108 tons of
TNT (450 GJ), but Luis Alvarez and Waldman's airborne condenser gauges were far less accurate.[41][45]
In addition to uncovering scientific and technological issues, the rehearsal test revealed practical concerns
as well. Over 100 vehicles were used for the rehearsal test but it was realized more would be required for
the main test, and they would need better roads and repair facilities. More radios were required, and more
telephone lines, as the telephone system had become overloaded. Lines needed to be buried to prevent
damage by vehicles. A teletype was installed to allow better communication with Los Alamos. A town hall
was built to allow for large conferences and briefings, and the mess hall had to be upgraded. Because dust
thrown up by vehicles interfered with some of the instrumentation, 20 miles (32 km) of road was sealed at
a cost of $5,000 per mile ($3,100/km).[45][33]

The Gadget

Norris Bradbury, group leader for bomb assembly, stands next to the assembled
Gadget atop the test tower. Later, he became the director of Los Alamos, after the
departure of Oppenheimer.
The term "Gadget" was a laboratory euphemism for a bomb, from which the laboratory's weapon physics
division, "G Division", took its name in August 1944. At that time it did not refer specifically to the Trinity
Test device as it had yet to be developed, but once it was, it became the laboratory code name. The Trinity
Gadget was officially a Y-1561 device, as was the Fat Man used a few weeks later in the bombing of
Nagasaki. The two were very similar, with only minor differences, the most obvious being the absence of
fuzing and the external ballistic casing. The bombs were still under development, and small changes
continued to be made to the Fat Man design.
To keep the design as simple as possible, a near solid spherical core was chosen rather than a hollow
one, although calculations showed that a hollow core would be more efficient in its use of
plutonium.[50][51] The core was compressed to prompt super-criticality by the implosion generated by the
high explosive lens. This design became known as a "Christy Core"[52] or "Christy pit" after physicist Robert
F. Christy, who made the solid pit design a reality after it was initially proposed by Edward Teller. Along
with the pit, the whole physics package was also informally nicknamed "Christy['s] Gadget".
Of the several allotropes of plutonium, the metallurgists preferred the malleable δ (delta) phase. This was
stabilized at room temperature by alloying it with gallium. Two equal hemispheres of plutonium-gallium
alloy were plated with silver, and designated by serial numbers HS-1 and HS-2.[55] The 6.19-kilogram
(13.6 lb) radioactive core generated 15 W of heat, which warmed it up to about 100 to 110 °F (38 to
43 °C), and the silver plating developed blisters that had to be filed down and covered with gold foil; later
cores were plated with nickel instead. The Trinity core consisted of just these two hemispheres. Later
cores also included a ring with a triangular cross-section to prevent jets forming in the gap between them.

Basic nuclear components of the Gadget. The uranium slug containing the plutonium sphere was
inserted late in the assembly process.

A trial assembly of the Gadget without the active components or explosive lenses was carried out by the
bomb assembly team headed by Norris Bradbury at Los Alamos on July 3. It was driven to Trinity and
back. A set of explosive lenses arrived on July 7, followed by a second set on July 10. Each was examined
by Bradbury and Kistiakowsky, and the best ones were selected for use. [57] The remainder were handed
over to Edward Creutz, who conducted a test detonation at Pajarito Canyon near Los Alamos without
nuclear material.[58] This test brought bad news: magnetic measurements of the simultaneity of the
implosion seemed to indicate that the Trinity test would fail. Bethe worked through the night to assess the
results, and reported that they were consistent with a perfect explosion.
Assembly of the nuclear capsule began on July 13 at the McDonald Ranch House, where the master
bedroom had been turned into a clean room. The polonium-beryllium "Urchin" initiator was assembled,
and Louis Slotin placed it inside the two hemispheres of the plutonium core. Cyril Smith then placed the
core in the uranium tamper plug, or "slug." Air gaps were filled with 0.5-mil (0.013 mm) gold foil, and the
two halves of the plug were held together with uranium washers and screws which fit smoothly into the
domed ends of the plug. The completed capsule was then driven to the base of the tower.

Louis Slotin and Herbert Lehr with the Gadget prior to insertion of the tamper plug
(visible in front of Lehr's left knee)
At the tower a temporary eyebolt was screwed into the 105-pound (48 kg) capsule, and a chain hoist was
used to lower the capsule into the gadget. As the capsule entered the hole in the uranium tamper, it
stuck. Robert Bacher realized that the heat from the plutonium core had caused the capsule to expand,
while the explosives assembly with the tamper had cooled during the night in the desert. By leaving the
capsule in contact with the tamper, the temperatures equalized and in a few minutes the capsule had
slipped completely into the tamper.[61] The eyebolt was then removed from the capsule and replaced with a
threaded uranium plug, a boron disk was placed on top of the capsule, an aluminum plug was screwed
into the hole in the pusher, and the two remaining high explosive lenses were installed. Finally, the
upper Dural polar cap was bolted into place. Assembly was completed at about 16:45 on July 13.[62]
The Gadget was hoisted to the top of a 100-foot (30 m) steel tower. The height would give a better
indication of how the weapon would behave when dropped from a bomber, as detonation in the air would
maximize the amount of energy applied directly to the target (as the explosion expanded in a spherical
shape) and would generate less nuclear fallout. The tower stood on four legs that went 20 feet (6.1 m) into
the ground, with concrete footings. Atop it was an oak platform, and a shack made of corrugated iron that
was open on the western side. The Gadget was hauled up with an electric winch. [63] A truckload of
mattresses was placed underneath in case the cable broke and the Gadget fell. [64] The seven man arming
party, consisting of Bainbridge, Kistiakowsky, Joseph McKibben and four soldiers including Lieutenant
Bush, drove out to the tower to perform the final arming shortly after 22:00 on July 15.[64]
Personnel

The 30-metre (100 ft) "shot tower" constructed for the test
In the final two weeks before the test, some 250 personnel from Los Alamos were at work at the Trinity
site,[65] and Lieutenant Bush's command had ballooned to 125 men guarding and maintaining the base
camp. Another 160 men under Major T.O. Palmer were stationed outside the area with vehicles to
evacuate the civilian population in the surrounding region should that prove necessary. They had enough
vehicles to move 450 people to safety, and had food and supplies to last them for two days. Arrangements
were made for Alamogordo Army Air Field to provide accommodation.[67] Groves had warned the Governor
of New Mexico, John J. Dempsey, that martial law might have to be declared in the southwestern part of
the state.
Shelters were established 10,000 yards (9,100 m) due north, west and south of the tower, known as N-
10,000, W-10,000 and S-10,000. Each had its own shelter chief: Robert Wilson at N-10,000, John Manley
at W-10,000 and Frank Oppenheimer at S-10,000.[69] Many other observers were around 20 miles (32 km)
away, and some others were scattered at different distances, some in more informal situations. Richard
Feynman claimed to be the only person to see the explosion without the goggles provided, relying on a
truck windshield to screen out harmful ultraviolet wavelengths.
Bainbridge asked Groves to keep his VIP list down to just ten. He chose himself, Oppenheimer, Richard
Tolman, Vannevar Bush, James Conant, Brigadier General Thomas F. Farrell, Charles Lauritsen, Isidor
Isaac Rabi, Sir Geoffrey Taylor and Sir James Chadwick. The VIPs viewed the test from Compania Hill,
about 20 miles (32 km) northwest of the tower.[71] The observers set up a betting pool on the results of the
test. Edward Teller was the most optimistic, predicting 45 kilotons of TNT (190 TJ).[72] He wore gloves to
protect his hands, and sunglasses underneath the welding goggles that the government had supplied
everyone with.[71] Teller was also one of the few scientists to actually watch the test (with eye protection),
instead of following orders to lie on the ground with his back turned.[73] He also brought suntan lotion,
which he shared with the others.

The Gadget is unloaded at the base of the tower for the final assembly
Others were less optimistic. Ramsey chose zero (a complete dud), Robert Oppenheimer chose 300 tons of
TNT (1,300 GJ), Kistiakowsky 1,400 tons of TNT (5,900 GJ), and Bethe chose 8,000 tons of TNT
(33,000 GJ).[72] Rabi, the last to arrive, took 18,000 tons of TNT (75,000 GJ) by default, which would win

him the pool. In a video interview, Bethe stated that his choice of 8 kt was exactly the value calculated by
Segrè, and he was swayed by Segrè's authority over that of a more junior [but unnamed] member of
Segrè's group who had calculated 20 kt. Enrico Fermi offered to take wagers among the top physicists and
military present on whether the atmosphere would ignite, and if so whether it would destroy just the state,
or incinerate the entire planet. This last result had been previously calculated by Bethe to be almost
impossible, although for a while it had caused some of the scientists some anxiety. Bainbridge was furious
with Fermi for scaring the guards who, unlike the physicists, did not have the advantage of their knowledge
about the scientific possibilities. His own biggest fear was that nothing would happen, in which case he
would have to head back to the tower to investigate.
Julian Mack and Berlyn Brixner were responsible for photography. The photography group employed some
fifty different cameras, taking motion and still photographs. Special Fastax cameras taking 10,000 frames
per second would record the minute details of the explosion. Spectrograph cameras would record the
wavelengths of light emitted by the explosion, and pinhole cameras would record gamma rays. A rotating
drum spectrograph at the 10,000-yard (9,100 m) station would obtain the spectrum over the first hundredth
of a second. Another, slow recording one would track the fireball. Cameras were placed in bunkers only
800 yards (730 m) from the tower, protected by steel and lead glass, and mounted on sleds so they could
be towed out by the lead-lined tank.[81] Some observers brought their own cameras despite the security.
Segrè brought in Jack Aeby's 35 mm Perfex 44. It would take the only known well-exposed color
photograph of the detonation explosion.

Explosion
Detonation
The scientists wanted good visibility, low humidity, light winds at low altitude and westerly winds at high
altitude for the test. The best weather was predicted between July 18 and 21, but the Potsdam
Conference was due to start on July 16 and President Harry S. Truman wanted the test to be conducted
before the conference began. It was therefore scheduled for July 16, the earliest date at which the bomb
components would be available.

Jack Aeby's still photo is the only known well-exposed color photograph of the
detonation
The detonation was initially planned for 04:00 MWT but was postponed because of rain and lightning from
early that morning. It was feared that the danger from radiation and fallout would be increased by rain, and
lightning had the scientists concerned about a premature detonation. A crucial favorable weather report
came in at 04:45, and the final twenty-minute countdown began at 05:10, read by Samuel Allison. By
05:30 the rain had gone. There were some communication problems. The shortwave radio frequency for
communicating with the B-29s was shared with the Voice of America, and the FM radios shared a
frequency with a railroad freight yard in San Antonio, Texas.
Two circling B-29s observed the test, with Shields again flying the lead plane. They carried members
of Project Alberta, who would carry out airborne measurements during the atomic missions. These
included Captain Deak Parsons, the Associate Director of the Los Alamos Laboratory and the head of
Project Alberta; Luis Alvarez, Harold Agnew, Bernard Waldman, Wolfgang Panofsky and William Penney.
The overcast obscured their view of the test site.

At 5:29 a.m. MWT (± 2 seconds), the device exploded with an energy equivalent to around 20 kilotons of
TNT (84 TJ). The desert sand, largely made of silica, melted and became a mildly radioactive light green
glass, which was named trinitite.[87] It left a crater in the desert 5 feet (1.5 m) deep and 30 feet (9.1 m)
wide.[42] At the time of detonation, the surrounding mountains were illuminated "brighter than daytime" for
one to two seconds, and the heat was reported as "being as hot as an oven" at the base camp. The
observed colors of the illumination changed from purple to green and eventually to white. The roar of the
shock wave took 40 seconds to reach the observers. It was felt over 100 miles (160 km) away, and
the mushroom cloud reached 7.5 miles (12.1 km) in height.

Trinitite
Ralph Carlisle Smith, watching from Compania Hill, wrote:
I was staring straight ahead with my open left eye covered by a welder's glass and my right eye remaining
open and uncovered. Suddenly, my right eye was blinded by a light which appeared instantaneously all
about without any build up of intensity. My left eye could see the ball of fire start up like a tremendous
bubble or nob-like mushroom. I dropped the glass from my left eye almost immediately and watched the
light climb upward. The light intensity fell rapidly, hence did not blind my left eye but it was still amazingly
bright. It turned yellow, then red, and then beautiful purple. At first it had a translucent character, but
shortly turned to a tinted or colored white smoke appearance. The ball of fire seemed to rise in something
of toadstool effect. Later the column proceeded as a cylinder of white smoke; it seemed to move
ponderously. A hole was punched through the clouds, but two fog rings appeared well above the white
smoke column. There was a spontaneous cheer from the observers. Dr. von Neumann said "that was at
least 5,000 tons and probably a lot more."
In his official report on the test, Farrell wrote:
The lighting effects beggared description. The whole country was lighted by a searing light with the
intensity many times that of the midday sun. It was golden, purple, violet, gray, and blue. It lighted every
peak, crevasse and ridge of the nearby mountain range with a clarity and beauty that cannot be described
but must be seen to be imagined.
William L. Laurence of The New York Times had been transferred temporarily to the Manhattan Project at
Groves's request in early 1945.[91] Groves had arranged for Laurence to view significant events, including
Trinity and the atomic bombing of Japan. Laurence wrote press releases with the help of the Manhattan
Project's public relations staff.[92] He later recalled that
A loud cry filled the air. The little groups that hitherto had stood rooted to the earth like desert plants broke
into dance, the rhythm of primitive man dancing at one of his fire festivals at the coming of Spring.
After the initial euphoria of witnessing the explosion had passed, Bainbridge told Oppenheimer, "Now we
are all sons of bitches."[33] Rabi noticed Oppenheimer's reaction: "I'll never forget his walk;" Rabi recalled,
"I'll never forget the way he stepped out of the car ... his walk was like High Noon ... this kind of strut. He
had done it."

Film of the Trinity test
Oppenheimer later recalled that, while witnessing the explosion, he thought of a verse from the Hindu holy
book, the Bhagavad Gita (XI,12):
If the radiance of a thousand suns were to burst at once into the sky, that would be like the splendor of the
mighty one.
Years later he would explain that another verse had also entered his head at that time:
We knew the world would not be the same. A few people laughed, a few people cried. Most people were
silent. I remembered the line from the Hindu scripture, the Bhagavad Gita; Vishnu is trying to persuade
the Prince that he should do his duty and, to impress him, takes on his multi-armed form and says, 'Now I
am become Death, the destroyer of worlds.' I suppose we all thought that, one way or another.
John R. Lugo was flying a U.S. Navy transport at 10,000 feet (3,000 m), 30 miles (48 km) east
of Albuquerque, en route to the west coast. "My first impression was, like, the sun was coming up in the
south. What a ball of fire! It was so bright it lit up the cockpit of the plane." Lugo radioed Albuquerque. He
got no explanation for the blast but was told, "Don't fly south."[101]

Ground zero after the test


An aerial photograph of the Trinity crater shortly after the test.[d]

The Jumbo container after the test


Energy measurements
Further information: Nuclear weapon yield § Calculating yields and controversy

Lead-lined Sherman tank used in Trinity test


The T (Theoretical) Division at Los Alamos had predicted a yield of between 5 and 10 kilotons of TNT (21
and 42 TJ). Immediately after the blast, the two lead-lined Sherman tanks made their way to the
crater. Radiochemical analysis of soil samples that they collected indicated that the total yield (or energy
release) had been around 18.6 kilotons of TNT (78 TJ).
Fifty beryllium-copper diaphragm microphones were also used to record the pressure of the blast wave.
These were supplemented by mechanical pressure gauges.[103] These indicated a blast energy of 9.9 ± 0.1 kilotons of TNT (41 ± 0.42 TJ); only one of the mechanical pressure gauges worked correctly, and it indicated 10 kilotons of TNT (42 TJ).
Fermi prepared his own experiment to measure the energy that was released as blast. He later recalled
that:
About 40 seconds after the explosion the air blast reached me. I tried to estimate its strength by dropping
from about six feet small pieces of paper before, during, and after the passage of the blast wave. Since, at
the time, there was no wind I could observe very distinctly and actually measure the displacement of the
pieces of paper that were in the process of falling while the blast was passing. The shift was about 2 1/2
meters, which, at the time, I estimated to correspond to the blast that would be produced by ten thousand
tons of T.N.T.
There were also several gamma ray and neutron detectors; few survived the blast, with all the gauges
within 200 feet (61 m) of ground zero being destroyed,[106] but sufficient data were recovered to measure
the gamma ray component of the ionizing radiation released.

Data from the Trinity test, and others, resulted in the following total energy distribution being observed for kiloton-range detonations near sea level:[108]

Blast: 50%
Thermal energy: 35%
Initial ionizing radiation: 5%
Residual fallout radiation: 10%

The official estimate for the total yield of the Trinity gadget, which includes the energy of
the blast component together with the contributions from the explosion's light output and
both forms of ionizing radiation, is 21 kilotons of TNT (88 TJ),[109] of which about 15
kilotons of TNT (63 TJ) was contributed by fission of the plutonium core, and about 6
kilotons of TNT (25 TJ) was from fission of the natural uranium
tamper.[110] A re-analysis of data published in 2016 put the yield at 22.1 kilotons of TNT (92 TJ), with a margin of error estimated at ± 2.7 kilotons of TNT (11 TJ).
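As a rough worked example, the generic distribution above can be applied to the official 21-kiloton figure. The sketch below (Python) does only that arithmetic; the percentages are nominal values for kiloton-range detonations near sea level rather than measurements specific to Trinity, and the conversion 1 kt of TNT = 4.184 TJ is the standard definition.

# Split an assumed 21 kt (about 88 TJ) total yield using the nominal
# energy-distribution percentages tabulated above. Illustrative only.
TOTAL_KT = 21.0
KT_TO_TJ = 4.184  # 1 kiloton of TNT = 4.184 terajoules by definition

fractions = {
    "Blast": 0.50,
    "Thermal energy": 0.35,
    "Initial ionizing radiation": 0.05,
    "Residual fallout radiation": 0.10,
}

for component, f in fractions.items():
    kt = TOTAL_KT * f
    print(f"{component}: {kt:.2f} kt ({kt * KT_TO_TJ:.0f} TJ)")
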
As a result of the data gathered on the size of the blast, the detonation height for the bombing of
Hiroshima was set at 1,885 feet (575 m) to take advantage of the Mach stem blast-reinforcing
effect.[112] The final Nagasaki burst height was 1,650 feet (500 m) so the Mach stem started
sooner.[113] The knowledge that implosion worked led Oppenheimer to recommend to Groves that the
uranium-235 used in a Little Boy gun-type weapon could be used more economically in a composite core
with plutonium. It was too late to do this with the first Little Boy, but the composite cores would soon enter
production.
Civilian detection
Civilians noticed the bright lights and huge explosion. Groves therefore had the Second Air Force issue a
press release with a cover story that he had prepared weeks before:
Alamogordo, N.M., July 16
The commanding officer of the Alamogordo Army Air Base made the following statement today: "Several
inquiries have been received concerning a heavy explosion which occurred on the Alamogordo Air base
reservation this morning. A remotely located ammunition magazine containing a considerable amount of
high explosives and pyrotechnics exploded. There was no loss of life or injury to anyone, and the property
damage outside of the explosives magazine was negligible. Weather conditions affecting the content of
gas shells exploded by the blast may make it desirable for the Army to evacuate temporarily a few civilians
from their homes."[115][116]

420
The press release was written by Laurence. He had prepared four releases, covering outcomes ranging
from an account of a successful test (the one which was used) to catastrophic scenarios involving serious
damage to surrounding communities, evacuation of nearby residents, and a placeholder for the names of
those killed.[117][118][119] As Laurence was a witness to the test, he knew that the last release, if used, might
be his own obituary.[117] A newspaper article published the same day stated that "the blast was seen and
felt throughout an area extending from El Paso to Silver City, Gallup, Socorro,
and Albuquerque."[120] An Associated Press article quoted a blind woman 150 miles (240 km) away who
asked "What's that brilliant light?" These articles appeared in New Mexico, but East Coast newspapers
ignored them.[117]
Information about the Trinity test was made public shortly after the bombing of Hiroshima. The Smyth
Report, released on August 12, 1945, gave some information on the blast, and the edition released
by Princeton University Press a few weeks later incorporated the War Department's press release on the
test as Appendix 6, and contained the famous pictures of a "bulbous" Trinity fireball.[121] Groves, Oppenheimer and other dignitaries visited the test site in September 1945, wearing white canvas overshoes to prevent fallout from sticking to the soles of their shoes.[122]
Official notifications
The results of the test were conveyed to the Secretary of War Henry L. Stimson at the Potsdam
Conference in Germany in a coded message from his assistant George L. Harrison:
Operated on this morning. Diagnosis not yet complete but results seem satisfactory and already exceed
expectations. Local press release necessary as interest extends great distance. Dr. Groves pleased. He
returns tomorrow. I will keep you posted.[123]
The message arrived at the "Little White House" in the Potsdam suburb of Babelsberg and was at once
taken to Truman and Secretary of State James F. Byrnes.[124] Harrison sent a follow-up message which
arrived on the morning of July 18:[124]
Doctor has just returned most enthusiastic and confident that the little boy is as husky as his big brother.
The light in his eyes discernible from here to High Hold and I could have heard his screams from here to
my farm.
Because Stimson's summer home at High Hold was on Long Island and Harrison's farm near Upperville,
Virginia, this indicated that the explosion could be seen 200 miles (320 km) away and heard 50 miles
(80 km) away.[125]
Fallout
Film badges used to measure exposure to radioactivity indicated that no observers at N-10,000 had been
exposed to more than 0.1 roentgens, but the shelter was evacuated before the radioactive cloud could
reach it. The explosion was more efficient than expected and the thermal updraft drew most of the cloud
high enough that little fallout fell on the test site. The crater was far more radioactive than expected due to
the formation of trinitite, and the crews of the two lead-lined Sherman tanks were subjected to
considerable exposure. Anderson's dosimeter and film badge recorded 7 to 10 roentgens, and one of the
tank drivers, who made three trips, recorded 13 to 15 roentgens.

Major General Leslie Groves and Robert Oppenheimer at the remains of the Trinity shot tower a few weeks later. The white overshoes were to prevent the trinitite fallout from sticking to the soles of their shoes.

The heaviest fallout contamination outside the restricted test area was 30 miles (48 km) from the
detonation point, on Chupadera Mesa. The fallout there was reported to have settled in a white mist onto
some of the livestock in the area, resulting in local beta burns and a temporary loss of dorsal or back hair.
Patches of hair grew back discolored white. The Army bought 75 cattle in all from ranchers; the 17 most
significantly marked were kept at Los Alamos, while the rest were shipped to Oak Ridge for long-term
observation.[127][128][129][130]
Unlike the 100 or so atmospheric nuclear explosions later conducted at the Nevada Test Site, fallout doses
to the local inhabitants have not been reconstructed for the Trinity event, due primarily to scarcity of
data.[131] In 2014, a National Cancer Institute study commenced that will attempt to close this gap in the
literature and complete a Trinity radiation dose reconstruction for the population of the state of New
Mexico.
In August 1945, shortly after the bombing of Hiroshima, the Kodak Company
observed spotting and fogging on their film, which was at that time usually packaged in cardboard
containers. Dr. J. H. Webb, a Kodak employee, studied the matter and concluded that the contamination
must have come from a nuclear explosion somewhere in the United States. He discounted the possibility
that the Hiroshima bomb was responsible, due to the timing of the events. A hot spot of fallout had contaminated the river water that an Indiana paper mill used to manufacture cardboard pulp from corn husks.[134] Aware of the gravity of his discovery, Dr. Webb kept this secret until 1949.[135]
This incident along with the next continental US tests in 1951 set a precedent. In subsequent atmospheric
nuclear tests at the Nevada test site, United States Atomic Energy Commission officials gave the
photographic industry maps and forecasts of potential contamination, as well as expected fallout
distributions, which enabled them to purchase uncontaminated materials and take other protective
measures.

Site today
In September 1953, about 650 people attended the first Trinity Site open house. Visitors to a Trinity Site
open house are allowed to see the ground zero and McDonald Ranch House areas.[136] More than seventy
years after the test, residual radiation at the site is about ten times higher than normal background
radiation in the area. The amount of radioactive exposure received during a one-hour visit to the site is
about half of the total radiation exposure which a U.S. adult receives on an average day from natural and
medical sources.
On December 21, 1965, the 51,500-acre (20,800 ha) Trinity Site was declared a National Historic
Landmark district,[138][2] and on October 15, 1966, was listed on the National Register of Historic
Places.[1] The landmark includes the base camp, where the scientists and support group lived; ground
zero, where the bomb was placed for the explosion; and the McDonald ranch house, where the plutonium
core to the bomb was assembled. One of the old instrumentation bunkers is visible beside the road just
west of ground zero.[139] An inner oblong fence was added in 1967, and the corridor barbed wire fence that
connects the outer fence to the inner one was completed in 1972. Jumbo was moved to the parking lot in
1979; it is missing its ends from an attempt to destroy it in 1946 using eight 500-pound (230 kg)
bombs.[140] The Trinity monument, a rough-sided, lava-rock obelisk about 12 feet (3.7 m) high, marks the
explosion's hypocenter.[136] It was erected in 1965 by Army personnel from the White Sands Missile Range
using local rocks taken from the western boundary of the range.[141] A simple metal plaque reads: "Trinity
Site Where the World's First Nuclear Device Was Exploded on July 16, 1945." A second memorial plaque
on the obelisk was prepared by the Army and the National Park Service, and was unveiled on the 30th
anniversary of the test in 1975.
A special tour of the site was conducted on July 16, 1995, to mark the 50th anniversary of the Trinity test.
About 5,000 visitors arrived to commemorate the occasion, the largest crowd for any open
house.[143] Since then, the open houses have usually averaged two to three thousand visitors. The site is
still a popular destination for those interested in atomic tourism, though it is only open to the public twice a
year during the Trinity Site Open House on the first Saturdays of April and October.[144][145] In 2014, the
White Sands Missile Range announced that due to budgetary constraints, the site would only be open
once a year, on the first Saturday in April. In 2015, this decision was reversed, and two events were
scheduled, in April and October. The base commander, Brigadier General Timothy R. Coffin, explained
that:
Trinity Site is a national historic testing landmark where the theories and engineering of some of the
nation's brightest minds were tested with the detonation of the first nuclear bomb, technologies which then
helped end World War II. It is important for us to share Trinity with the public even though the site is
located inside a very active military test range. We have travelers from as far away as Australia who travel
to visit this historic landmark. Facilitating access twice per year allows more people the chance to visit this
historic site.

FERMI PARADOX.

The Fermi paradox, or Fermi's paradox, named after physicist Enrico Fermi, is the apparent contradiction between the lack of evidence for extraterrestrial civilizations and the high probability estimates for their existence, e.g. those given by the Drake equation. The basic points of the argument, made by physicists Enrico Fermi (1901–1954) and Michael H. Hart (born 1932), are:

There are billions of stars in the galaxy that are similar to the Sun,[2][3] many of which are billions of
years older than Earth.
With high probability, some of these stars will have Earth-like planets, and if the Earth is typical, some
might develop intelligent life.
Some of these civilizations might develop interstellar travel, a step humans are investigating now.
Even at the slow pace of currently envisioned interstellar travel, the Milky Way galaxy could be
completely traversed in a few million years.
According to this line of reasoning, the Earth should have already been visited by extraterrestrial aliens. In
an informal conversation, Fermi noted no convincing evidence of this, leading him to ask, "Where is
everybody?"[9][10] There have been many attempts to explain the Fermi paradox,[11][12] primarily either
suggesting that intelligent extraterrestrial life is extremely rare or proposing reasons that such civilizations
have not contacted or visited Earth.

Enrico Fermi (1901–1954)

Basis of the Fermi paradox
The Fermi paradox is a conflict between arguments of scale and probability that seem to favor intelligent
life being common in the universe, and a total lack of evidence of intelligent life having ever arisen
anywhere other than on the Earth.
The first aspect of the Fermi paradox is a function of scale, or the large numbers involved: there are an estimated 200–400 billion stars in the Milky Way[13] (2–4 × 10^11) and 70 sextillion (7 × 10^22) stars in the observable universe.[14] Even if intelligent life occurs on only a minuscule percentage of planets around
these stars, there might still be a great number of extant civilizations, and if the percentage were high
enough it would produce a significant number of extant civilizations in the Milky Way. This assumes
the mediocrity principle, by which the Earth is a typical planet.
The second aspect of the Fermi paradox is the argument of probability: given intelligent life's ability to
overcome scarcity, and its tendency to colonize new habitats, it seems possible that at least some
civilizations would be technologically advanced, seek out new resources in space, and colonize their
own star system and, subsequently, surrounding star systems. Since there is no significant evidence on
Earth or elsewhere in the known universe of other intelligent life after 13.8 billion years of the universe's
history, there is a conflict requiring a resolution. Some examples of possible resolutions are that
intelligent life is rarer than we think, that our assumptions about the general development or behavior of
intelligent species are flawed, or, more radically, that our current scientific understanding of the nature of
the universe itself is quite incomplete.
The Fermi paradox can be asked in two ways.[15] The first is, "Why are no aliens or their artifacts found
here on Earth, or in the Solar System?" If interstellar travel is possible, even the "slow" kind nearly within
the reach of Earth technology, then it would only take from 5 million to 50 million years to colonize the
galaxy.[16] This is relatively brief on a geological scale, let alone a cosmological one. Since there are many
stars older than the Sun, and since intelligent life might have evolved earlier elsewhere, the question then
becomes why the galaxy has not been colonized already. Even if colonization is impractical or undesirable
to all alien civilizations, large-scale exploration of the galaxy could be possible by probes. These might
leave detectable artifacts in the Solar System, such as old probes or evidence of mining activity, but none
of these have been observed.
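A back-of-the-envelope sketch (Python) shows how estimates of a few million to a few tens of millions of years arise. The probe speed, hop distance between settled systems, and rebuilding pause assumed below are illustrative values only, not figures from the cited analysis.

# Rough colonization-time estimate: a settlement wavefront crossing the galactic disc.
# Every number here is an illustrative assumption for an order-of-magnitude argument.
GALAXY_DIAMETER_LY = 100_000   # approximate diameter of the Milky Way disc, light-years
PROBE_SPEED_C = 0.05           # assumed cruise speed as a fraction of light speed
HOP_LY = 10                    # assumed distance between successively settled systems
PAUSE_YEARS = 400              # assumed time to build the next generation of probes

travel_per_hop = HOP_LY / PROBE_SPEED_C          # years spent in transit per hop
hops = GALAXY_DIAMETER_LY / HOP_LY               # hops needed to cross the disc
total_years = hops * (travel_per_hop + PAUSE_YEARS)
print(f"Crossing time: roughly {total_years / 1e6:.0f} million years")
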

A graphical representation of the Arecibo message, humanity's first attempt to use radio waves to actively communicate its existence to alien civilizations.

The second form of the question is "Why do we see no signs of intelligence elsewhere in the universe?"
This version does not assume interstellar travel, but includes other galaxies as well. For distant galaxies,
travel times may well explain the lack of alien visits to Earth, but a sufficiently advanced civilization could
potentially be observable over a significant fraction of the size of the observable universe.[17] Even if such
civilizations are rare, the scale argument indicates they should exist somewhere at some point during the
history of the universe, and since they could be detected from far away over a considerable period of time,
many more potential sites for their origin are within range of our observation. It is unknown whether the
paradox is stronger for our galaxy or for the universe as a whole.
Criticism of logical basis
The Fermi paradox has been criticized as being based on an inappropriate use of propositional logic.
According to a 1985 paper by Robert Freitas, when recast as a statement in modal logic, the paradox no
longer exists, and carries no probative value.

History and name

Los Alamos National Laboratory
In 1950, while working at Los Alamos National Laboratory, Fermi had a casual conversation while walking
to lunch with colleagues Emil Konopinski, Edward Teller and Herbert York.[20] The men discussed a recent
spate of UFO reports and an Alan Dunn cartoon[21] facetiously blaming the disappearance of municipal
trashcans on marauding aliens. The conversation shifted to other subjects, until during lunch Fermi
suddenly exclaimed, "Where are they?" (alternatively, "Where is everybody?"). Teller remembers, "The
result of his question was general laughter because of the strange fact that in spite of Fermi's question
coming from the clear blue, everybody around the table seemed to understand at once that he was talking
about extraterrestrial life."[22] Herbert York recalls that Fermi followed up on his comment with a series of
calculations on the probability of Earth-like planets, the probability of life, the likely rise and duration of high
technology, etc., and concluded that we ought to have been visited long ago and many times over.
Although Fermi's name is most commonly associated with the paradox, he was not the first to ask the
question. An earlier implicit mention was by Konstantin Tsiolkovsky in an unpublished manuscript from
1933.[23] He noted "people deny the presence of intelligent beings on the planets of the universe" because
"(i) if such beings exist they would have visited Earth, and (ii) if such civilizations existed then they would
have given us some sign of their existence." This was not a paradox for others, who took this to imply the
absence of ETs, but it was for him, since he himself was a strong believer in extraterrestrial life and the
possibility of space travel. Therefore, he proposed what is now known as the zoo hypothesis and
speculated that mankind is not yet ready for higher beings to contact us. That Tsiolkovsky himself may not
have been the first to discover the paradox is suggested by his above-mentioned reference to other
people's reasons for denying the existence of extraterrestrial civilizations.
In 1975, Michael H. Hart published a detailed examination of the paradox, which has since become a theoretical reference point for much of the research into what is now sometimes known as the Fermi–Hart paradox. Geoffrey A. Landis prefers that name on the grounds that "while Fermi is credited with first
asking the question, Hart was the first to do a rigorous analysis showing that the problem is not trivial, and
also the first to publish his results". Robert H. Gray argues that the term Fermi paradox is a misnomer,
since in his view it is neither a paradox nor due to Fermi; he instead prefers the name Hart–Tipler
argument, acknowledging Michael Hart as its originator, but also the substantial contribution of Frank J.
Tipler in extending Hart's arguments.
Other names closely related to Fermi's question ("Where are they?") include the Great
Silence,[28][29][30][31] and silentium universi (Latin for "silence of the universe"), though these only refer to
one portion of the Fermi Paradox, that we see no evidence of other civilizations.

Drake equation
The theories and principles in the Drake equation are closely related to the Fermi paradox.[32] The equation
was formulated by Frank Drake in 1961 in an attempt to find a systematic means to evaluate the numerous
probabilities involved in the existence of alien life. The speculative equation considers the rate of star
formation in the galaxy; the fraction of stars with planets and the number per star that are habitable; the
fraction of those planets that develop life; the fraction that develop intelligent life; the fraction that have
detectable, technological intelligent life; and finally the length of time such communicable civilizations are
detectable. The fundamental problem is that the last four terms are completely unknown, rendering
statistical estimates impossible.
The Drake equation has been used by both optimists and pessimists, with wildly differing results. The
original meeting, including Frank Drake and Carl Sagan, speculated that the number of civilizations was roughly equal to their average lifetime in years, and that there were probably between 1,000 and 100,000,000 civilizations
in the Milky Way galaxy.[33] Conversely, Frank Tipler and John D. Barrow used pessimistic numbers and
speculated that the average number of civilizations in a galaxy is much less than one.
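The structure of the equation itself is simple to sketch in code. In the Python sketch below, every parameter value is an illustrative guess chosen to echo the optimistic and pessimistic readings just described; none is a measured quantity.

# Drake equation: N = R* x fp x ne x fl x fi x fc x L
# All parameter values below are illustrative assumptions, not measurements.
def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """Expected number of detectable, communicating civilizations in the galaxy."""
    return R_star * f_p * n_e * f_l * f_i * f_c * L

optimistic = drake(R_star=10,   # new stars formed per year
                   f_p=0.5,     # fraction of stars with planets
                   n_e=2,       # habitable planets per such system
                   f_l=1.0,     # fraction of those on which life appears
                   f_i=0.1,     # fraction that develop intelligence
                   f_c=0.1,     # fraction that become detectable
                   L=10_000)    # years a civilization remains detectable

pessimistic = drake(R_star=1, f_p=0.2, n_e=0.1, f_l=0.1, f_i=0.001, f_c=0.01, L=100)

print(f"Optimistic guesses:  N ~ {optimistic:,.0f}")
print(f"Pessimistic guesses: N ~ {pessimistic:.0e} (far less than one)")
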

Empirical projects
There are two parts of the Fermi paradox that rely on empirical evidence: that there are many potentially habitable planets, and that we see no evidence of life. The first point, that many suitable planets
exist, was an assumption in Fermi's time that is gaining ground with the discovery of many exoplanets, and
models predicting billions of habitable worlds in our galaxy.
The second part of the paradox, that we see no evidence of extraterrestrial life, is also an active field of
scientific research. This includes both efforts to find any indication of life, and efforts specifically directed
to finding intelligent life. These searches have been made since 1960, and several are ongoing.
Mainstream astronomy and SETI

Although astronomers do not usually search for extraterrestrials, they have observed phenomena that they
could not immediately explain without positing an intelligent civilization as the source. For
example, pulsars, when first discovered in 1967, were called little green men (LGM) because of the precise repetition of their pulses. In all cases, explanations with no need for intelligent life have been found for such observations,[39] but the possibility of discovery remains.[40] Proposed examples include asteroid
mining that would change the appearance of debris disks around stars, or spectral lines from nuclear
waste disposal in stars. An ongoing example is the unusual transit light curves of star KIC 8462852, where
natural interpretations are not fully convincing. Although most likely a natural explanation will emerge,
some scientists are investigating the remote possibility that it could be a sign of alien technology, such as
a Dyson swarm.
Electromagnetic emissions
Further information: SETI, Project Ozma, Project Cyclops, Project Phoenix (SETI), SERENDIP, and Allen
Telescope Array

Radio telescopes are often used by SETI projects


Radio technology and the ability to construct a radio telescope are presumed to be a natural advance for
technological species, theoretically creating effects that might be detected over interstellar distances. The
careful searching for non-natural radio emissions from space may lead to the detection of alien
civilizations. Sensitive alien observers of the Solar System, for example, would note unusually
intense radio waves for a G2 star due to Earth's television and telecommunication broadcasts. In the
absence of an apparent natural cause, alien observers might infer the existence of a terrestrial civilization.
However, even the most sensitive radio telescopes currently available on Earth would not be able to detect non-directional radio signals at a distance of even a fraction of a light-year, so it is questionable whether any such signals could be detected by an extraterrestrial civilization. Such signals could be either
"accidental" by-products of a civilization, or deliberate attempts to communicate, such as the Arecibo
message. A number of astronomers and observatories have attempted and are attempting to detect such
evidence, mostly through the SETI organization. Several decades of SETI analysis have not revealed any
unusually bright or meaningfully repetitive radio emissions.
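A minimal inverse-square sketch (Python) illustrates why such leakage is faint. The transmitter power, bandwidth, and detection threshold below are assumptions chosen only to show the scaling; they do not describe any specific telescope or broadcast.

import math

# Flux density of an isotropic ("non-directional") transmitter at various distances.
# Power, bandwidth, and the threshold are illustrative assumptions.
LY_IN_M = 9.461e15        # metres per light-year
P_WATTS = 1e6             # assumed isotropic transmitter power (1 MW)
BANDWIDTH_HZ = 6e6        # assumed bandwidth, similar to a television channel
THRESHOLD_JY = 1e-3       # assumed practical detection threshold, in janskys

for d_ly in (0.1, 1.0, 4.2):          # 4.2 ly is roughly the distance to Proxima Centauri
    d_m = d_ly * LY_IN_M
    flux = P_WATTS / (4 * math.pi * d_m**2 * BANDWIDTH_HZ)   # W m^-2 Hz^-1
    jy = flux / 1e-26                                        # 1 Jy = 1e-26 W m^-2 Hz^-1
    verdict = "above" if jy > THRESHOLD_JY else "below"
    print(f"{d_ly:>4} ly: {jy:.1e} Jy ({verdict} the assumed threshold)")
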
Direct planetary observation

A composite picture of Earth at night, created with data from the Defense Meteorological Satellite
Program (DMSP) Operational Linescan System (OLS). Large-scale artificial lighting produced by
the human civilization is detectable from space.
Exoplanet detection and classification is a very active sub-discipline in astronomy, and the first
possibly terrestrial planet discovered within a star's habitable zone was found in 2007.[48] New refinements
in exoplanet detection methods, and use of existing methods from space (such as the Kepler Mission,
launched in 2009) are starting to detect and characterize Earth-size planets, and determine if they are
within the habitable zones of their stars. Such observational refinements may allow us to better gauge how
common potentially habitable worlds are.[49]
Conjectures about interstellar probes
Further information: Von Neumann probe and Bracewell probe
Self-replicating probes could exhaustively explore a galaxy the size of the Milky Way in as little as a million
years.[8] If even a single civilization in the Milky Way attempted this, such probes could spread throughout
the entire galaxy. Another speculation for contact with an alien probe, one that would be trying to find human beings, is an alien Bracewell probe. Such a hypothetical device would be an autonomous space
probe whose purpose is to seek out and communicate with alien civilizations (as opposed to Von
Neumann probes, which are usually described as purely exploratory). These were proposed as an
alternative to carrying a slow speed-of-light dialogue between vastly distant neighbors. Rather than
contending with the long delays a radio dialogue would suffer, a probe housing an artificial
intelligence would seek out an alien civilization to carry on a close-range communication with the
discovered civilization. The findings of such a probe would still have to be transmitted to the home
civilization at light speed, but an information-gathering dialogue could be conducted in real time.[50]
Attempts to find alien probes
Direct exploration of the Solar System has yielded no evidence indicating a visit by aliens or their probes.
Detailed exploration of areas of the Solar System where resources would be plentiful may yet produce
evidence of alien exploration,[51][52] though the entirety of the Solar System is vast and difficult to
investigate. Attempts to signal, attract, or activate hypothetical Bracewell probes in Earth's vicinity have not
succeeded.

Conjectures about stellar-scale artifacts

Further information: Dyson sphere, Kardashev scale, Alderson disk, Matrioshka brain, and stellar engine

A variant of the speculative Dyson sphere. Such large scale artifacts would drastically alter the
spectrum of a star.
In 1959, Freeman Dyson observed that every developing human civilization constantly increases its
energy consumption, and, he conjectured, a civilization might try to harness a large part of the energy
produced by a star. He proposed that a Dyson sphere could be a possible means: a shell or cloud of
objects enclosing a star to absorb and utilize as much radiant energy as possible. Such a feat
of astroengineering would drastically alter the observed spectrum of the star involved, changing it at least
partly from the normal emission lines of a natural stellar atmosphere to those of black body radiation,
probably with a peak in the infrared. Dyson speculated that advanced alien civilizations might be detected
by examining the spectra of stars and searching for such an altered spectrum.[54][55][56]
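A minimal sketch of why the re-radiated energy would peak in the infrared, using Wien's displacement law; the shell temperature of roughly 300 K assumed below is a typical illustrative value for a shell near 1 AU, not a prediction from Dyson's paper.

# Wien's displacement law: lambda_peak = b / T, with b ~ 2.898e-3 m*K.
WIEN_B = 2.898e-3  # m*K

for label, T in (("Sun's photosphere (~5772 K)", 5772.0),
                 ("Dyson shell near 1 AU (assumed ~300 K)", 300.0)):
    peak_um = WIEN_B / T * 1e6   # peak wavelength in micrometres
    print(f"{label}: emission peaks near {peak_um:.1f} um")
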
There have been some attempts to find evidence of the existence of Dyson spheres that would alter the
spectra of their core stars.[57] Direct observation of thousands of galaxies has shown no explicit evidence of artificial construction or modifications.[55][56][58][59] In October 2015, there was some speculation that a
pattern of light from star KIC 8462852, observed by the Kepler Space Telescope, could have been a result
of Dyson sphere construction.[60][61]

Hypothetical explanations for the paradox
Extraterrestrial life is rare or non-existent
Rare Earth hypothesis


The Rare Earth Hypothesis argues that planets with complex life, like Earth, are exceptionally rare
In planetary astronomy and astrobiology, the Rare Earth Hypothesis argues that the origin of life and the evolution of biological complexity, such as sexually reproducing, multicellular organisms on Earth (and,
subsequently, human intelligence) required an improbable combination
of astrophysical and geological events and circumstances. According to the hypothesis,
complex extraterrestrial life is a very improbable phenomenon and likely to be extremely rare. The term
"Rare Earth" originates from Rare Earth: Why Complex Life Is Uncommon in the Universe (2000), a book
by Peter Ward, a geologist and paleontologist, and Donald E. Brownlee, an astronomer and astrobiologist,
both faculty members at the University of Washington.
An alternative viewpoint was argued in the 1970s and 1980s by Carl Sagan and Frank Drake, among
others. It holds that Earth is a typical rocky planet in a typical planetary system, located in a non-
exceptional region of a common barred-spiral galaxy. Given the principle of mediocrity (in the same vein
as the Copernican principle), it is probable that the universe teems with complex life. Ward and Brownlee
argue to the contrary: that planets, planetary systems, and galactic regions that are as friendly to complex
life as are the Earth, the Solar System, and our region of the Milky Way are very rare.

REQUIREMENTS OF THE RARE EARTH HYPOTHESIS.


The Rare Earth hypothesis argues that the evolution of biological complexity requires a host of fortuitous
circumstances, such as a galactic habitable zone, a central star and planetary system having the requisite
character, the circumstellar habitable zone, a right-sized terrestrial planet, the advantage of a gas giant guardian like Jupiter and of a large natural satellite, conditions needed to ensure the planet has a magnetosphere and plate tectonics, the chemistry of the lithosphere, atmosphere, and oceans, the role of "evolutionary pumps" such as massive glaciation and rare bolide impacts, and whatever led to the appearance of the eukaryotic cell, sexual reproduction and the Cambrian explosion of animal, plant,
and fungi phyla. The evolution of human intelligence may have required yet further events, which are extremely unlikely to have happened were it not for the Cretaceous–Paleogene extinction event 66 million
years ago which saw the decline of dinosaurs as the dominant terrestrial vertebrates.
In order for a small rocky planet to support complex life, Ward and Brownlee argue, the values of several
variables must fall within narrow ranges. The universe is so vast that it could contain many Earth-like
planets. But if such planets exist, they are likely to be separated from each other by many thousands
of light years. Such distances may preclude communication among any intelligent species evolving on
such planets, which would solve the Fermi paradox: "If extraterrestrial aliens are common, why aren't they
obvious?"

The right location in the right kind of galaxy


Rare Earth suggests that much of the known universe, including large parts of our galaxy, cannot support
complex life; Ward and Brownlee refer to such regions as "dead zones". Those parts of a galaxy where
complex life is possible make up the galactic habitable zone. This zone is primarily a function of distance
from the Galactic Center. As that distance increases:
1. Star metallicity declines. Metals (which in astronomy means all elements other than hydrogen and helium) are necessary to the formation of terrestrial planets.
2. The X-ray and gamma ray radiation from the black hole at the Galactic Center, and from nearby neutron stars, becomes less intense. Radiation of this nature is considered dangerous to complex life, hence the Rare Earth hypothesis predicts that the early universe, and galactic regions where stellar density is high and supernovae are common, will be unfit for the development of complex life.
3. Gravitational perturbation of planets and planetesimals by nearby stars becomes less likely as the density of stars decreases. Hence the further a planet lies from the Galactic Center or a spiral arm, the less likely it is to be struck by a large bolide. A sufficiently large impact may extinguish all complex life on a planet.

The dense centers of galaxies such as NGC 7331 (often referred to as a "twin" of the Milky Way[3]) have high radiation levels toxic to complex life.

According to Rare Earth, globular clusters are unlikely to support life.


Item #1 rules out the outer reaches of a galaxy; #2 and #3 rule out galactic inner regions. As one
moves from the center of a galaxy to its furthest extremity, the ability to support life rises then
falls. Hence the galactic habitable zone may be ring-shaped, sandwiched between its uninhabitable
center and outer reaches.

While a planetary system may enjoy a location favorable to complex life, it must also maintain that location
for a span of time sufficiently long for complex life to evolve. Hence a central star with a galactic orbit that
steers clear of galactic regions where radiation levels are high, such as the Galactic Center and the spiral
arms, would appear most favourable. If the central star's galactic orbit is eccentric (elliptic or hyperbolic), it
will pass through some spiral arms, but if the orbit is a near perfect circle and the orbital velocity equals the
"rotational" velocity of the spiral arms, the star will drift into a spiral arm region only graduallyif at all.
Therefore, Rare Earth proponents conclude that a life-bearing star must have a galactic orbit that is nearly
circular about the center of its galaxy. The required synchronization of the orbital velocity of a central star
with the wave velocity of the spiral arms can occur only within a fairly narrow range of distances from the
Galactic Center. This region is termed the "galactic habitable zone". Lineweaver et al.[4] calculate that the
galactic habitable zone is a ring 7 to 9 kiloparsecs in radius, that includes no more than 10% of the stars in
the Milky Way.[5] Based on conservative estimates of the total number of stars in the galaxy, this could represent something like 20 to 40 billion stars. Gonzalez et al.[6] would halve these numbers, estimating that at most 5% of stars in the Milky Way fall in the galactic habitable zone.
Approximately 77% of observed galaxies are spiral galaxies,[7] two-thirds of all spiral galaxies are barred,
and more than half, like the Milky Way, exhibit multiple arms.[8] What makes our galaxy different, according
to Rare Earth, is that it is unusually quiet and dim (see argument below), representing just 7% of its
kind.[9] Even so, this would still represent more than 200 billion galaxies in the known universe.
A reason that our galaxy is considered rare by Rare Earth is that it appears to have suffered fewer collisions with other galaxies over the last 10 billion years, and its peaceful history may have made it more
hospitable to complex life than galaxies which have suffered more collisions, and consequently more
supernovae and other disturbances.[10] The level of activity of the black hole at the centre of the Milky Way
may also be important: too much or too little and the conditions for life may be even rarer. The Milky Way
black hole appears to be just right.[11] The orbit of the Sun around the center of the Milky Way is indeed
almost perfectly circular, with a period of 226 Ma (1 Ma=1 million years), one closely matching the
rotational period of the galaxy. However, the majority of stars in barred spiral galaxies populate the spiral
arms rather than the halo and tend to move in gravitationally aligned orbits, so there is little that is unusual
about the Sun's orbit. While the Rare Earth hypothesis predicts that the Sun should rarely, if ever, have
passed through a spiral arm since its formation, astronomer Karen Masters has calculated that the orbit of
the Sun takes it through a major spiral arm approximately every 100 million years.[12] Some researchers
have suggested that several mass extinctions do correspond with previous crossings of the spiral arms. [13]

Orbiting at the right distance from the right type of star

Source;EvergreenerFish at English Wikipedia.


According to the hypothesis, Earth has an improbable orbit in the very narrow habitable zone (dark
green) around the Sun.

The terrestrial example suggests that complex life requires water in the liquid state, and a central star's
planet must therefore be at an appropriate distance. This is the core of the notion of the habitable
zone or Goldilocks Principle.[14] The habitable zone forms a ring around the central star. If a planet orbits
its sun too closely or too far away, the surface temperature is incompatible with water being in liquid form.
The habitable zone varies with the type and age of the central star. For advanced life the star must have a
high degree of stability. Stars roughly 4.6 billion years old, in the middle of their lives, are at their most stable. Proper metallicity and size are also very important to stability. The Sun has a low luminosity variation of about 0.1%; a solar twin would be a star with a similarly low luminosity variation. To date no exact twin of the Sun has been found, although some stars come close to being identical. The star must also lack close stellar companions, since nearby companion stars, as in binary systems, would disrupt the orbits of planets. Estimates suggest that 50% or more of all star systems are binary systems.[15][16][17][18] The habitable zone of a main sequence star very gradually moves outward over time until the star becomes a white dwarf, at which point the habitable zone vanishes. The habitable zone is closely connected to the greenhouse warming afforded by atmospheric water vapor (H2O), carbon dioxide (CO2), and/or other greenhouse gases. Even though the Earth's atmosphere contains a water vapor concentration ranging from 0% (in arid regions) to 4% (in rain forest and ocean regions) and, as of June 2013, only 400 parts per million of CO2, these small amounts suffice to raise the average surface temperature of the Earth by about 40 °C above what it would otherwise be,[19] with the dominant contribution due to water vapor, which together with clouds makes up between 66% and 85% of Earth's greenhouse effect; CO2 contributes between 9% and 26% of the effect.[20]
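A minimal sketch (Python) of the distance dependence behind the habitable-zone idea: the equilibrium temperature of a planet is T_eq = (L(1 - A) / (16 * pi * sigma * d^2))^(1/4), and the greenhouse warming described above can be folded in as a crude constant offset. The albedo and the offset used below are simplifying assumptions, not values argued for by Ward and Brownlee.

import math

# Equilibrium surface temperature versus orbital distance around a Sun-like star.
# Albedo and the constant greenhouse offset are simplifying assumptions.
SIGMA = 5.670e-8        # Stefan-Boltzmann constant, W m^-2 K^-4
L_SUN = 3.828e26        # solar luminosity, W
AU = 1.496e11           # astronomical unit, m
ALBEDO = 0.3            # assumed Earth-like Bond albedo
GREENHOUSE_K = 33       # assumed constant greenhouse offset, K

def surface_temp(d_au, luminosity=L_SUN):
    d = d_au * AU
    t_eq = (luminosity * (1 - ALBEDO) / (16 * math.pi * SIGMA * d**2)) ** 0.25
    return t_eq + GREENHOUSE_K

for d_au in (0.7, 1.0, 1.5):
    print(f"{d_au} AU: ~{surface_temp(d_au):.0f} K")   # liquid water needs roughly 273-373 K
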
Rocky planets must orbit within the habitable zone for life to form. Although the habitable zone of hot stars such as Sirius or Vega is wide:
Rocky planets that form too close to the star to lie within the habitable zone cannot sustain life, and hot stars also emit much more ultraviolet radiation, which ionizes any planetary atmosphere.
Hot stars, as mentioned above, may become red giants before advanced life evolves on their planets.
These considerations rule out the massive and powerful stars of type F6 to O (see stellar classification) as
homes to evolved metazoan life.
Small red dwarf stars, conversely, have small habitable zones wherein planets are in tidal lock (one side always faces the star and becomes very hot, while the other always faces away and becomes very cold) and are also at increased risk of solar flares (see Aurelia) that would tend to ionize the atmosphere and be otherwise inimical to complex life. Rare Earth proponents argue that life therefore cannot arise in such systems and that only central stars in the range from F7 to K1 are hospitable. Such stars are rare: G-type stars such as the Sun (between the hotter F and cooler K) comprise only 9% of the hydrogen-burning stars in the Milky Way.
Such aged stars as red giants and white dwarfs are also unlikely to support life. Red giants are common in
globular clusters and elliptical galaxies. White dwarfs are mostly dying stars that have already completed
their red giant phase. Stars that become red giants expand into or overheat the habitable zones of their
youth and middle age (though theoretically planets at a much greater distance may become habitable).
An energy output that varies over the lifetime of the star will very likely prevent life (as with Cepheid variables, for example). A sudden decrease, even a brief one, may freeze the water of orbiting planets, and a significant increase may evaporate the oceans and cause a greenhouse effect that prevents them from reforming.
Life without complex chemistry is unknown. Such chemistry requires metals, namely elements other than
hydrogen or helium, and thereby suggests that a planetary system rich in metals is a necessity for life.
The absorption spectrum of a star reveals the presence of metals within, and studies of stellar spectra
reveal that many, perhaps most, stars are poor in metals. Because heavy metals originate
in supernova explosions, metallicity increases in the universe over time. Low metallicity characterizes the
early universe: globular clusters and other stars that formed when the universe was young, stars in most
galaxies other than large spirals, and stars in the outer regions of all galaxies. Metal-rich central stars
capable of supporting complex life are therefore believed to be most common in the quiet suburbs of the
larger spiral galaxies, where radiation also happens to be weak.

With the right arrangement of planets

Depiction of the Sun and planets of the Solar System and the sequence of planets.

Rare Earth argues that without such an arrangement, in particular the presence of the massive gas giant
Jupiter (fifth planet from the Sun and the largest), complex life on Earth would not have arisen.
Rare Earth proponents argue that a planetary system capable of sustaining complex life must be
structured more or less like the Solar System, with small and rocky inner planets and outer gas
giants.[23] Without the protection of 'celestial vacuum cleaner' planets with strong gravitational pull, the
number of asteroid collisions may have been larger, and a greater number of mass extinction events may
have occurred.
Observations of exoplanets have shown that arrangements of planets similar to the Solar System's are rare. Most planetary systems have super-Earths, several times larger than Earth, close to their star, whereas the Solar System's inner region is depleted in mass, with only small rocky planets and none inside Mercury's
orbit. Only 10% of stars have giant planets similar to Jupiter and Saturn, and those few rarely have stable
nearly circular orbits distant from their star. Konstantin Batygin and colleagues argue that these features
can be explained if, early in the history of the Solar System, Jupiter and Saturn drifted towards the Sun,
sending showers of planetesimals towards the super-Earths which sent them spiralling into the Sun, and
ferrying icy building blocks into the terrestrial region of the Solar System which provided the building blocks
for the rocky planets. The two giant planets then drifted out again to their present position. However, in the
view of Batygin and his colleagues: "The concatenation of chance events required for this delicate
choreography suggest that small, Earth-like rocky planets and perhaps life itself could be rare
throughout the cosmos."
A continuously stable orbit
Rare Earth argues that a gas giant must not be too close to a body where life is developing. Close
placement of gas giant(s) could disrupt the orbit of a potential life-bearing planet, either directly or by
drifting into the habitable zone.
Newtonian dynamics can produce chaotic planetary orbits, especially in a system having large planets at
high orbital eccentricity.[25]
The need for stable orbits rules out stars with systems of planets that contain large planets with orbits
close to the host star (called "hot Jupiters"). It is believed that hot Jupiters formed much further from their
parent stars than they are now (see planetary migration), and have migrated inwards to their current orbits.
In the process, they would have catastrophically disrupted the orbits of any planets in the habitable
zone.[26] To exacerbate matters, hot Jupiters are much more common orbiting F- and G-class stars.[27]

A terrestrial planet of the right size

Planets of the Solar System to scale. Rare Earth argues that complex life cannot exist on large
gaseous planets like Jupiter and Saturn (top row) or Uranus and Neptune (top middle) or smaller
planets such as Mars and Mercury

It is argued that life requires terrestrial planets like Earth; since gas giants lack a solid surface, complex life cannot arise there.
A planet that is too small cannot hold much of an atmosphere. Hence the surface temperature becomes
more variable and the average temperature drops. Substantial and long-lasting oceans become
impossible. A small planet will also tend to have a rough surface, with large mountains and deep canyons.
The core will cool faster, and plate tectonics will either not last as long as they would on a larger planet or
may not occur at all. A planet that is too large will retain too much of its atmosphere and will be like Venus.
Venus is similar in size and mass to Earth, but has a surface atmospheric pressure 92 times that of Earth's. Venus's mean surface temperature is 735 K (462 °C; 863 °F), making it the hottest planet in the Solar System. Earth had an early atmosphere similar to that of Venus, but lost it in the giant impact event.[29]
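One common way to make the size argument quantitative is to compare a body's escape velocity with the typical thermal speed of atmospheric molecules. The sketch below (Python) uses the rule of thumb that a gas is retained over geologic time when the escape velocity is at least about six times the mean thermal speed; that factor and the temperatures are assumptions for illustration, not figures from Ward and Brownlee.

import math

# Escape velocity versus thermal speed of N2 for a few bodies.
# The factor-of-six retention rule and the temperatures are assumed values.
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
K_B = 1.381e-23        # Boltzmann constant, J/K
M_N2 = 28 * 1.661e-27  # mass of an N2 molecule, kg

def v_escape(mass_kg, radius_m):
    return math.sqrt(2 * G * mass_kg / radius_m)

def v_thermal(temp_k, molecule_kg):
    return math.sqrt(3 * K_B * temp_k / molecule_kg)

bodies = {  # mass (kg), radius (m), assumed upper-atmosphere temperature (K)
    "Earth": (5.972e24, 6.371e6, 1000),
    "Mars":  (6.417e23, 3.390e6, 300),
    "Moon":  (7.342e22, 1.737e6, 300),
}

for name, (m, r, t) in bodies.items():
    ratio = v_escape(m, r) / v_thermal(t, M_N2)
    verdict = "retains" if ratio > 6 else "loses"
    print(f"{name}: v_esc / v_thermal ~ {ratio:.1f} -> {verdict} N2 (rule of thumb)")
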

With plate tectonics

The Great American Interchange on Earth, around 3.5 to 3 Ma, an example of species competition resulting from continental plate interaction.

An artist's rendering of the structure of Earth's magnetic field-magnetosphere that protects Earth's
life from solar radiation. 1) Bow shock. 2) Magnetosheath. 3) Magnetopause. 4) Magnetosphere.
5) Northern tail lobe. 6) Southern tail lobe. 7) Plasmasphere.

Rare Earth proponents argue that plate tectonics and a large magnetic field are essential for the
emergence and sustenance of complex life.[30] Ward and Brownlee assert that biodiversity, global
temperature regulation, the carbon cycle, and the magnetic field of the Earth that make it habitable for
complex terrestrial life all depend on plate tectonics.
Ward and Brownlee contend that the lack of mountain chains elsewhere in the Solar System is direct
evidence that Earth is the only body with plate tectonics and as such the only body capable of supporting
life.
Plate tectonics is dependent on chemical composition and a long-lasting source of heat in the form
of radioactive decay occurring deep in the planet's interior. Continents must also be made up of less
dense felsic rocks that "float" on underlying denser mafic rock. Taylor emphasizes that subduction zones (an essential part of plate tectonics) require the lubricating action of ample water; on Earth, such zones exist only at the bottom of oceans. Ward and Brownlee, and others such as Tilman Spohn of the German Space Research Centre Institute of Planetary Research, argue that plate tectonics provides a means of biochemical cycling which promotes complex life on Earth, and that water is required to lubricate planetary tectonics. Plate tectonics, and as a result continental drift and the creation of separate land masses, would create diversified ecosystems, which is thought to have promoted the diversification of species; such diversity is one of the strongest defences against extinction.
An example of species diversification and later competition on Earth's continents is the Great American
Interchange. This was the result of the tectonically induced connection between North and Middle America
with the South American continent, at around 3.5 to 3 Ma. The previously undisturbed fauna of South America had been able to evolve in its own way for about 30 million years, since the separation from Antarctica. Many species, mainly in South America, were subsequently wiped out by competing North American animals.

A large moon

Tide pools resulting from tidal interaction of the Moon are said to have promoted the evolution of
complex life.

The Moon is unusual because the other rocky planets in the Solar System either have no satellites
(Mercury and Venus), or have tiny satellites that are probably captured asteroids (Mars).
The giant impact theory hypothesizes that the Moon resulted from the impact of a Mars-sized body, Theia,
with the very young Earth. This giant impact also gave the Earth its axial tilt and velocity of rotation. Rapid
rotation reduces the daily variation in temperature and makes photosynthesis viable. The Rare Earth hypothesis further argues that the axial tilt cannot be too large or too small (relative to the orbital
plane). A planet with a large tilt (inclination) will experience extreme seasonal variations in climate,
unfriendly to complex life. A planet with little or no tilt will lack the stimulus to evolution that climate
variation provides. In this view, the Earth's tilt is "just right". The gravity of a large satellite also stabilizes
the planet's tilt; without this effect the variation in tilt would be chaotic, probably making complex life forms
on land impossible.
If the Earth had no Moon, the ocean tides resulting solely from the Sun's gravity would be only about half as large as the lunar tides. A large satellite gives rise to tidal pools, which may be essential for the formation
of complex life, though this is far from certain.
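The factor of roughly one half follows from the fact that tidal acceleration scales as M / d^3. A minimal check (Python) with rounded standard masses and mean distances, treating both orbits as circular:

# Tidal acceleration from a body of mass M at distance d scales as M / d^3.
M_SUN = 1.989e30    # kg
M_MOON = 7.342e22   # kg
D_SUN = 1.496e11    # m, mean Earth-Sun distance
D_MOON = 3.844e8    # m, mean Earth-Moon distance

solar_tide = M_SUN / D_SUN**3
lunar_tide = M_MOON / D_MOON**3
print(f"Solar tide / lunar tide ~ {solar_tide / lunar_tide:.2f}")   # about 0.46
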
A large satellite also increases the likelihood of plate tectonics through the effect of tidal forces on the
planet's crust. The impact that formed the Moon may also have initiated plate tectonics, without which
the continental crust would cover the entire planet, leaving no room for oceanic crust. It is possible that the
large scale mantle convection needed to drive plate tectonics could not have emerged in the absence of
crustal inhomogeneity.
If a giant impact is the only way for a rocky inner planet to acquire a large satellite, any planet in the
circumstellar habitable zone will need to form as a double planet in order that there be an impacting object
sufficiently massive to give rise in due course to a large satellite.

Earth's atmosphere
Source;Kelvin song.

Atmosphere

A terrestrial planet of the right size is needed to retain an atmosphere, like Earth and Venus. On Earth,
once the giant impact of Theia thinned Earth's atmosphere, other events were needed to make the atmosphere capable of sustaining life over a long time span. The Late Heavy Bombardment reseeded Earth with water lost after the impact of Theia.[39] The development of an ozone layer provided protection from ultraviolet (UV) radiation from the Sun.[40][41] Nitrogen and carbon dioxide are needed in a correct ratio for life to form: nitrogen is needed for amino and nucleic acids,[42] and lightning is needed for nitrogen fixation to happen.[43] The carbon dioxide gas needed for life comes from sources such as volcanoes and geysers. Carbon dioxide is needed only at low levels; in Earth's atmosphere it makes up about 0.04 percent (400 ppm) by volume. At high levels carbon dioxide is
poisonous.[44][45] Precipitation is needed to have a stable water cycle.[46] A proper atmosphere must
reduce temperature extremes between day and night (the diurnal temperature variation).[47][48]

One or more evolutionary triggers for complex life

This diagram illustrates the twofold cost of sex. If each individual were to contribute the same
number of offspring (two), (a) the sexual population remains the same size each generation, whereas
(b) the asexual population doubles in size each generation.

Regardless of whether planets with similar physical attributes to the Earth are rare or not, some argue that
life usually remains simple bacteria. Biochemist Nick Lane argues that simple cells (prokaryotes) emerged
soon after Earth's formation, but since almost half the planet's life had passed before they evolved into
complex ones (eukaryotes), all of whom share a common ancestor, this event can only have happened
once. In some views, prokaryotes lack the cellular architecture to evolve into eukaryotes because a
bacterium expanded up to eukaryotic proportions would have tens of thousands of times less energy
available; two billion years ago, one simple cell incorporated itself into another, multiplied, and evolved
into the mitochondria that supplied the vast increase in available energy that enabled the evolution of complex
life. If this incorporation occurred only once in four billion years, or is otherwise unlikely, then life on most
planets remains simple.[49] An alternative view is that mitochondrial evolution was environmentally
triggered, and that mitochondria-containing organisms appeared very soon after the first traces of oxygen
appeared in Earth's atmosphere.

The evolution of sexual reproduction, as well as its maintenance, is another mystery in biology. The
purpose of sexual reproduction is unclear, as in many organisms it carries a 50% cost (fitness disadvantage)
relative to asexual reproduction.[51] Mating types (types of gametes, classified by their compatibility) may
have arisen as a result of anisogamy (gamete dimorphism), or the male and female genders may have
evolved before anisogamy.[52][53] It is also unknown why most sexual organisms use a binary mating
system,[54] and why some organisms have gamete dimorphism. Charles Darwin was the first to suggest
that sexual selection drives speciation (the formation of species); without sexual reproduction it is unlikely
that complex life would have evolved. A small simulation of this twofold cost is sketched below.
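A minimal simulation can make the 50% cost concrete. The sketch below is purely illustrative: the two-offspring rule and the starting population are idealized assumptions, not figures from the works cited above. It compares an asexual lineage, in which every individual reproduces, with a sexual lineage, in which only the female half does.

```python
# Illustrative sketch of the "twofold cost of sex" described above.
# Assumptions (idealized, not from the cited works): every reproducing female
# leaves exactly two offspring per generation; sexual offspring are half male,
# half female; asexual individuals are all reproducing females.

def asexual_size(start, generations):
    """Every individual reproduces, so the population doubles each generation."""
    pop = start
    for _ in range(generations):
        pop *= 2
    return pop

def sexual_size(start, generations):
    """Only the female half reproduces; two offspring per female keeps it constant."""
    pop = start
    for _ in range(generations):
        females = pop // 2
        pop = females * 2  # half of these offspring are male
    return pop

if __name__ == "__main__":
    for gen in range(5):
        print(f"generation {gen}: asexual {asexual_size(100, gen):>5}, "
              f"sexual {sexual_size(100, gen):>5}")
```

After only four generations the asexual lineage is sixteen times larger under these assumptions, which is the disadvantage that sexual reproduction must somehow offset.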

The right time in evolution

Timeline of evolution; human writing has existed for only about 0.000218% of Earth's history.
Source;Ladyofhats; own work.

While life on Earth is regarded as having arisen relatively early in the planet's history, the evolution from
multicellular to intelligent organisms took around 800 million years.[55] Civilizations on Earth have existed
for about 12,000 years and radio communication reaching space has existed for less than 100 years.
Relative to the age of the Solar System (~4.57 Ga) this is a tiny age span, an age span in which extreme
climatic variations, super volcanoes or large meteorite impacts were absent. These events would severely
harm intelligent life, as well as life in general. For example, the Permian-Triassic mass extinction, caused
by widespread and continuous volcanic eruptions in an area the size of Western Europe, led to the
extinction of 95% of known species around 251.2 Ma ago. About 65 million years
ago, the Chicxulub impact at the Cretaceous-Paleogene boundary (~65.5 Ma) on
the Yucatán Peninsula in Mexico led to a mass extinction of the most advanced species at that time. If
intelligent extraterrestrial civilizations did exist, and with such an intelligence level that they could make
contact with distant Earth, they would have to live in the same time span of evolution. The nearest Earth-like
planets are around 4.2 light years away; probable planets such as Proxima Centauri b orbit the
star Proxima Centauri, a star considered to be 4.65 Ga old, 0.15 billion years older than the Sun.
Under the assumption that both the explosion of life and the development of civilization scale with the
planet's age, they would have occurred there around 723 Ma and 12.691 ka ago, respectively. The time between
a hypothetical explosion of life on an exoplanet and the dawn of its civilizations is thus very large, and the
time between civilization and radio signals even larger. The risk of intelligent-life destruction is not a Drake
equation factor; in the 33 million years since the Eocene-Oligocene extinction event there have been no
major mass extinctions. The chance of larger impacts in the time span of evolution to intelligent life
depends on the amount of shielding by larger bodies, such as our system's Jupiter or the Moon. The
chance of a large impact and resulting mass extinction happening in a multi-planetary "protected" system
is, however, impossible to predict.

Rare Earth equation


The following discussion is adapted from Cramer. The Rare Earth equation is Ward and
Brownlee's riposte to the Drake equation. It calculates N, the number of Earth-like planets in the Milky
Way having complex life forms, as:

N = N* × n_e × f_g × f_p × f_pm × f_i × f_c × f_l × f_m × f_j × f_me

According to Rare Earth, the Cambrian explosion that saw extreme diversification of chordata from simple
forms like Pikaia (pictured) was an improbable event

where:

N* is the number of stars in the Milky Way. This number is not well estimated, because the Milky
Way's mass is not well estimated, and there is little information about the number of very small
stars. N* is at least 100 billion, and may be as high as 500 billion if there are many low-visibility stars.

n_e is the average number of planets in a star's habitable zone. This zone is fairly narrow, because it is
constrained by the requirement that the average planetary temperature be consistent with water remaining
liquid throughout the time required for complex life to evolve. Thus n_e = 1 is a likely upper bound.

We assume N* × n_e = 5 × 10^11. The Rare Earth hypothesis can then be viewed as asserting that the
product of the other nine Rare Earth equation factors listed below, which are all fractions, is no greater
than 10^-10 and could plausibly be as small as 10^-12. In the latter case, N could be as small as 0 or 1.

Ward and Brownlee do not actually calculate the value of N, because the numerical values of quite a
few of the factors below can only be conjectured. They cannot be estimated simply because we have but
one data point: the Earth, a rocky planet orbiting a G2 star in a quiet suburb of a large barred spiral galaxy,
and the home of the only intelligent species we know, namely ourselves.

f_g is the fraction of stars in the galactic habitable zone (Ward, Brownlee, and Gonzalez estimate this
factor as 0.1).

f_p is the fraction of stars in the Milky Way with planets.

f_pm is the fraction of planets that are rocky ("metallic") rather than gaseous.

f_i is the fraction of habitable planets where microbial life arises. Ward and Brownlee believe this fraction
is unlikely to be small.

f_c is the fraction of planets where complex life evolves. For 80% of the time since microbial life first
appeared on the Earth, there was only bacterial life. Hence Ward and Brownlee argue that this fraction
may be very small.

f_l is the fraction of the total lifespan of a planet during which complex life is present. Complex life cannot
endure indefinitely, because the energy put out by the sort of star that allows complex life to emerge
gradually rises, and the central star eventually becomes a red giant, engulfing all planets in the planetary
habitable zone. Also, given enough time, a catastrophic extinction of all complex life becomes ever more
likely.

f_m is the fraction of habitable planets with a large moon. If the giant impact theory of the Moon's origin is
correct, this fraction is small.

f_j is the fraction of planetary systems with large Jovian planets. This fraction could be large.

f_me is the fraction of planets with a sufficiently low number of extinction events. Ward and Brownlee
argue that the low number of such events the Earth has experienced since the Cambrian explosion may
be unusual, in which case this fraction would be small. An illustrative calculation of the product of these
factors is sketched below.
The Rare Earth equation, unlike the Drake equation, does not factor the probability that complex life
evolves into intelligent life that discovers technology (Ward and Brownlee are not evolutionary biologists).
Barrow and Tipler[58] review the consensus among such biologists that the evolutionary path from primitive
Cambrian chordates, e.g., Pikaia to Homo sapiens, was a highly improbable event. For example, the
large brains of humans have marked adaptive disadvantages, requiring as they do an
expensive metabolism, a long gestation period, and a childhood lasting more than 25% of the average
total life span. Other improbable features of humans include:
Being one of a handful of extant bipedal land (non-avian) vertebrates. Combined with an unusual eye-hand
coordination, this permits dexterous manipulation of the physical environment with the hands;
A vocal apparatus far more expressive than that of any other mammal, enabling speech. Speech makes it
possible for humans to interact cooperatively, to share knowledge, and to acquire a culture;
The capability of formulating abstractions to a degree permitting the invention of mathematics, and the
discovery of science and technology. Only recently did humans acquire anything like their current scientific
and technological sophistication.

Advocates

Authors who advocate the Rare Earth hypothesis include:


Stuart Ross Taylor,[33] a specialist on the Solar System, firmly believes in the hypothesis. Taylor
concludes that the Solar System is probably very unusual, because it resulted from so many
chance factors and events.
Stephen Webb,[1] a physicist, mainly presents and rejects candidate solutions for the Fermi
paradox. The Rare Earth hypothesis emerges as one of the few solutions left standing by the end
of the book.
Simon Conway Morris, a paleontologist, endorses the Rare Earth hypothesis in chapter 5 of
his Life's Solution: Inevitable Humans in a Lonely Universe,[59] and cites Ward and Brownlee's
book with approval.[60]
John D. Barrow and Frank J. Tipler (1986, sections 3.2, 8.7 and 9), cosmologists, vigorously defend the
hypothesis that humans are likely to be the only intelligent life in the Milky Way, and perhaps the
entire universe. But this hypothesis is not central to their book The Anthropic Cosmological
Principle, a thorough study of the anthropic principle and of how the laws of physics are peculiarly
suited to enable the emergence of complexity in nature.
Ray Kurzweil, a computer pioneer and self-proclaimed Singularitarian, argues in The Singularity Is
Near that the coming Singularity requires that Earth be the first planet on which sentient,
technology-using life evolved. Although other Earth-like planets could exist, Earth must be the
most evolutionarily advanced, because otherwise we would have seen evidence that another
culture had experienced the Singularity and expanded to harness the full computational capacity
of the physical universe.
John Gribbin, a prolific science writer, defends the hypothesis in a book devoted to it called Alone
in the Universe: Why our planet is unique.[61]
Guillermo Gonzalez, the astrophysicist who coined the term galactic habitable zone, uses the
hypothesis in his book The Privileged Planet to promote the concept of intelligent design.[62]
Michael H. Hart, astrophysicist who proposed a very narrow habitable zone based on climate
studies, edited the influential book "Extraterrestrials: Where are They" and authored one of its
chapters "Atmospheric Evolution, the Drake Equation and DNA: Sparse Life in an Infinite
Universe".[63]

Howard Alan Smith, astrophysicist and author of 'Let there be light: modern cosmology and
Kabbalah: a new conversation between science and religion'.

Criticism
Cases against the Rare Earth Hypothesis take various forms.

Anthropic reasoning

The anthropic principle is a philosophical consideration that observations of the Universe must be
compatible with the conscious and sapient life that observes it. Some proponents of the anthropic principle
reason that it explains why this universe has the age and the fundamental physical constants necessary to
accommodate conscious life. As a result, they believe it is unremarkable that this universe has
fundamental constants that happen to fall within the narrow range thought to be compatible with
life.[1][2] The strong anthropic principle (SAP) as explained by John D. Barrow and Frank Tipler states that
this is all the case because the universe is in some sense compelled to eventually have conscious and
sapient life emerge within it. Some critics of the SAP argue in favor of a weak anthropic principle (WAP)
similar to the one defined by Brandon Carter, which states that the universe's ostensible fine tuning is the
result of selection bias (specifically survivor bias): i.e., only in a universe capable of eventually supporting
life will there be living beings capable of observing and reflecting upon fine tuning. Most often such
arguments draw upon some notion of the multiverse for there to be a statistical population of universes to
select from and from which selection bias (our observance of only this universe, compatible with our life)
could occur.

The hypothesis concludes, more or less, that complex life is rare because it can evolve only on the surface
of an Earth-like planet or on a suitable satellite of a planet. Some biologists, such as Jack Cohen, believe
this assumption too restrictive and unimaginative; they see it as a form of circular reasoning.
According to David Darling, the Rare Earth hypothesis is neither hypothesis nor prediction, but merely a
description of how life arose on Earth. In his view, Ward and Brownlee have done nothing more than select
the factors that best suit their case.
What matters is not whether there's anything unusual about the Earth; there's going to be
something idiosyncratic about every planet in space. What matters is whether any of Earth's
circumstances are not only unusual but also essential for complex life. So far we've seen nothing to
suggest there is.
Critics also argue that there is a link between the Rare Earth Hypothesis and the creationist ideas
of intelligent design.

Exoplanets around main sequence stars are being discovered in large numbers
See also: Estimated frequency of Earth-like planets

An increasing number of extrasolar planet discoveries are being made, with 3,639 planets in 2,729
planetary systems known as of 1 August 2017. Rare Earth proponents argue that life cannot arise outside Sun-like
systems. However, some exobiologists have suggested that stars outside this range may give rise to
life under the right circumstances; this possibility is a central point of contention for the theory, because
these late-K and M category stars make up about 82% of all hydrogen-burning stars.
Current technology limits the testing of important Rare Earth criteria: surface water, tectonic plates, a large
moon and biosignatures are currently undetectable. Though planets the size of Earth are difficult to detect
and classify, scientists now think that rocky planets are common around Sun-like stars. The Earth
Similarity Index (ESI) of mass, radius and temperature provides a means of measurement, but falls short
of the full Rare Earth criteria.

Rocky planets orbiting within habitable zones may not be rare

Planets similar to Earth in size are being found in relatively large numbers in the habitable zones of
similar stars. The 2015 infographic depicts Kepler-62e, Kepler-62f, Kepler-186f, Kepler-296e,
Kepler-296f, Kepler-438b, Kepler-440b, Kepler-442b and Kepler-452b.

Source; NASA/Ames/JPL-Caltech - [Link]

Some argue that Rare Earth's estimates of rocky planets in habitable zones (n_e in the Rare Earth
equation) are too restrictive. James Kasting cites the Titius-Bode law to contend that it is a misnomer to
describe habitable zones as narrow when there is a 50% chance of at least one planet orbiting within
one.[73] In 2013 a study published in the journal Proceedings of the National Academy of Sciences
calculated that about "one in five" of all sun-like stars are expected to have earthlike planets "within
the habitable zones of their stars"; 8.8 billion of them therefore exist in the Milky Way galaxy alone. On 4
November 2013, astronomers reported, based on Kepler space mission data, that there could be as many
as 40 billion Earth-sized planets orbiting in the habitable zones of sun-like stars and red dwarf stars within
the Milky Way Galaxy.[75][76] 11 billion of these estimated planets may be orbiting sun-like stars.[77] A quick
consistency check of these figures is sketched below.
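The figures quoted in this paragraph can be cross-checked with a line or two of arithmetic. The snippet below only re-uses the numbers stated above; the implied star count is an inference for illustration, not a result of either study.

```python
# Back-of-envelope check of the Kepler-based estimates quoted above.
earthlike_around_sunlike = 8.8e9   # habitable-zone Earth-size planets, sun-like stars
fraction_with_planet = 1 / 5       # "one in five" sun-like stars

implied_sunlike_stars = earthlike_around_sunlike / fraction_with_planet
print(f"implied number of sun-like stars in the Milky Way: {implied_sunlike_stars:.1e}")

total_hz_planets = 40e9            # around sun-like stars and red dwarfs combined
around_sunlike_2013 = 11e9         # the later, slightly higher sun-like estimate
print(f"implied habitable-zone planets around red dwarfs: "
      f"{total_hz_planets - around_sunlike_2013:.1e}")
```

The two sun-like figures (8.8 and 11 billion) come from the different analyses quoted above, which is why they do not agree exactly; the arithmetic merely shows that the quoted estimates are mutually consistent in order of magnitude.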
Uncertainty over Jupiter's role
The requirement for a system to have a Jovian planet as protector (Rare Earth equation factor f_j) has
been challenged, and this has a bearing on the number of proposed extinction events (Rare Earth equation
factor f_me). Kasting's 2001 review of Rare Earth questions whether a Jupiter protector has any
bearing on the frequency of complex life. Computer modelling, including the 2005 Nice model and
2007 Nice 2 model, yields inconclusive results in relation to Jupiter's gravitational influence and impacts on
the inner planets. A study by Horner and Jones (2008) using computer simulation found that while the total
effect on all orbital bodies within the Solar System is unclear, Jupiter has caused more impacts on Earth
than it has prevented.[80] Lexell's Comet, a 1770 near miss that passed closer to Earth than any other
comet in recorded history, was known to be caused by the gravitational influence of Jupiter.[81] Grazier
(2017) claims that the idea of Jupiter as a shield is a misinterpretation of a 1996 study by George
Wetherill, and using computer models Grazier was able to demonstrate that Saturn protects Earth from
more asteroids and comets than does Jupiter.

Plate tectonics may not be unique to Earth or a requirement for complex life

Geological discoveries like the active features of Pluto's Tombaugh Regio appear to contradict the
argument that geologically active worlds like Earth are rare.

Ward and Brownlee argue that tectonics is necessary to support biogeochemical cycles required for
complex life to arise and predicted that such geological features would not be found outside of Earth,
pointing to a lack of observable orogenic evidence, specifically in the form of mountain ranges and
subduction zones. An orogeny is an event that leads to a large structural deformation of the
Earth's lithosphere (crust and uppermost mantle) due to the interaction between tectonic plates.
An orogen or orogenic belt develops when a continental plate crumples and is pushed upwards to form
one or more mountain ranges; this involves many geological processes collectively called orogenesis.[1][2]

Orogeny is the primary mechanism by which mountains are built on continents. The word "orogeny"
comes from Ancient Greek ὄρος (oros, "mountain") and γένεσις (genesis, "creation, origin").[3] Though it was
used before him, the term was employed by the American geologist G. K. Gilbert in 1890 to describe the
process of mountain building, as distinguished from epeirogeny.
There is, however, no scientific consensus on the evolution of plate tectonics on Earth. Though it is
believed that tectonic motion first began around three billion years ago, by this time photosynthesis and
oxygenation had already begun. Furthermore, recent studies point to plate tectonics as an episodic
planetary phenomenon, and that life may evolve during periods of "stagnant-lid" rather than plate tectonic
states.
Recent evidence also points to similar activity either having occurred or continuing to occur elsewhere.
The geology of Pluto, for example, described by Ward and Brownlee as "without mountains or volcanoes
... devoid of volcanic activity", has since been found to be quite the contrary, with a geologically active
surface possessing organic molecules and mountain ranges like Norgay Montes and Hillary
Montes comparable in relative size to those of Earth, and observations suggest the involvement of
endogenic processes. Plate tectonics has been suggested as a hypothesis for the Martian dichotomy, and
in 2012 geologist An Yin put forward evidence for active plate tectonics on Mars. Europa has long been
suspected to have plate tectonics, and in 2014 NASA announced evidence of active subduction. In 2017,
scientists studying the geology of Charon confirmed that icy plate tectonics also operated on Pluto's
largest moon.
Kasting suggests that there is nothing unusual about the occurrence of plate tectonics on large rocky
planets, or of liquid water on the surface, as most should generate internal heat even without the assistance
of radioactive elements.[78] Studies by Valencia and Cowan suggest that plate tectonics may be inevitable
for terrestrial planets Earth-sized or larger, that is, super-Earths, which are now known to be more
common in planetary systems.

Free oxygen may neither be rare nor a prerequisite for multicellular life

Animals like Spinoloricus nov. sp. appear to defy the premise that animal life would not exist
without oxygen

The hypothesis that molecular oxygen, necessary for animal life, is rare, and that a Great
Oxygenation Event (a condition assumed in the Rare Earth equation) could only have been triggered and
sustained by tectonics as occurred on Earth, appears to have been invalidated by more recent discoveries.

Ward and Brownlee ask "whether oxygenation, and hence the rise of animals, would ever have
occurred on a world where there were no continents to erode". Extraterrestrial free oxygen has recently
been detected around other solid objects, including Mercury, Venus, Mars, Jupiter's four Galilean
moons, Saturn's moons Enceladus, Dione and Rhea, and even the atmosphere of a comet. This has led
scientists to speculate whether processes other than photosynthesis could be capable of generating an
environment rich in free oxygen. Wordsworth (2014) concludes that oxygen generated other than
through photodissociation may be likely on Earth-like exoplanets, and could actually lead to false positive
detections of life. Narita (2015) suggests photocatalysis by titanium dioxide as a geochemical mechanism
for producing oxygen atmospheres.

Since Ward & Brownlee's assertion that "there is irrefutable evidence that oxygen is a necessary
ingredient for animal life", anaerobic metazoa have been found that indeed do metabolise without
oxygen. Spinoloricus nov. sp., for example, a species discovered in the hypersaline anoxic L'Atalante
basin at the bottom of the Mediterranean Sea in 2010, appears to metabolise with hydrogen,
lacking mitochondria and instead using hydrogenosomes. Stevenson (2015) has proposed other
membrane alternatives for complex life in worlds without oxygen. In 2017, scientists from the NASA
Astrobiology Institute discovered the necessary chemical preconditions for the formation of azotosomes on
Saturn's moon Titan, a world that lacks atmospheric oxygen. Independent studies by Schirrmeister and by
Mills concluded that Earth's multicellular life existed prior to the Great Oxygenation Event, not as a
consequence of it.

NASA scientists Hartman and McKay argue that plate tectonics may in fact slow the rise of oxygenation
(and thus stymie complex life rather than promote it). Computer modelling by Tilman Spohn in 2014 found
that plate tectonics on Earth may have arisen from the effects of complex life's emergence, rather than the
other way around as the Rare Earth might suggest. The action of lichens on rock may have contributed to
the formation of subduction zones in the presence of water. Kasting argues that if oxygenation caused the
Cambrian explosion then any planet with oxygen producing photosynthesis should have complex life.
A magnetic field may not be a requirement

The importance of Earth's magnetic field to the development of complex life has been disputed. Kasting
argues that the atmosphere provides sufficient protection against cosmic rays even during times of
magnetic pole reversal and atmosphere loss by sputtering. Kasting also dismisses the role of the magnetic
field in the evolution of eukaryotes citing the age of the oldest known magnetofossils.

A large moon may neither be rare nor necessary

The requirement of a large moon (Rare Earth equation factor f_m) has also been challenged. Even if a
large moon were required, such an occurrence may not be as unique as predicted by the Rare Earth
Hypothesis. Recent work by Edward Belbruno and J. Richard Gott of Princeton University suggests that
giant impactors such as those that may have formed the Moon can indeed form at planetary trojan
points (the L4 or L5 Lagrangian points), which means that similar circumstances may occur in other planetary
systems.[119]

Collision between two planetary bodies (artist concept).

Rare Earth's assertion that the Moon's stabilization of Earth's obliquity and spin is a requirement for
complex life has been questioned. Kasting argues that a moonless Earth would still possess habitats with
climates suitable for complex life and questions whether the spin rate of a moonless Earth can be
predicted. Although the giant impact theory posits that the impact forming the Moon increased Earth's
rotational speed to make a day about 5 hours long, the Moon has slowly "stolen" much of this speed to
reduce Earth's solar day since then to about 24 hours and continues to do so: in 100 million years Earth's
solar day will be roughly 24 hours 38 minutes (the same as Mars's solar day); in 1 billion years, 30 hours
23 minutes. Larger secondary bodies would exert proportionally larger tidal forces that would in turn
decelerate their primaries faster and could increase the solar day of a planet in all other respects like
Earth to over 120 hours within a few billion years. This long solar day would make effective heat dissipation
for organisms in the tropics and subtropics extremely difficult, in a similar manner to tidal locking to a red
dwarf star. Short days (high rotation speed) cause high wind speeds at ground level, while long days (slow
rotation speed) cause the day and night temperatures to be too extreme.

Many Rare Earth proponents argue that the Earth's plate tectonics would probably not exist if not for the
tidal forces of the Moon. The hypothesis that the Moon's tidal influence initiated or sustained Earth's plate
tectonics remains unproven, though at least one study implies a temporal correlation to the formation of
the Moon. Evidence for the past existence of plate tectonics on planets like Mars, which may never have
had a large moon, would counter this argument. Kasting argues that a large moon is not required to initiate
plate tectonics.

Complex life may arise in alternative habitats

Complex life may exist in environments similar to black smokers on Earth.


See also: Hypothetical types of biochemistry

Rare Earth proponents argue that simple life may be common, though complex life requires specific
environmental conditions to arise. Critics consider that life could arise on a moon of a gas giant, although
the requirements become more complex if volcanism is absolutely required for life: the moon must
experience enough tidal stress to induce heating, but not so much as is seen on Jupiter's Io. A further
problem is that such a moon may lie within the gas giant's intense radiation belts, sterilizing any biodiversity
before it can become established. Dirk Schulze-Makuch argues that there is no evidence to support this
conclusion, hypothesizing alternative biochemistries as a method for complex life to arise in completely alien
conditions. While Rare Earth proponents argue that only microbial extremophiles could exist in subsurface
habitats beyond Earth, some argue that complex life can also arise in these environments. Examples of
extremophile animals such as Hesiocaeca methanicola, an animal that inhabits ocean floor methane
clathrates (substances more commonly found in the outer Solar System), the tardigrade, which can survive
in the vacuum of space, or Halicephalobus mephisto, which exists in crushing pressure, scorching
temperatures and extremely low oxygen levels 3.6 kilometres deep in the Earth's crust, are sometimes
cited by critics as complex life capable of thriving in "alien" environments. Jill Tarter counters the classic
counterargument that these species adapted to these environments rather than arose in them, by
suggesting that we cannot assume conditions for life to emerge which are not actually known.[128] There
are suggestions that complex life could arise in sub-surface conditions which may be similar to those
where life may have arisen on Earth, such as the tidally heated subsurfaces of Europa or
Enceladus.[129][130] Ancient circumvental ecosystems such as these support complex life on Earth, such
as Riftia pachyptila, that exists completely independently of the surface biosphere.
Giant tube worms, Riftia pachyptila, are marine invertebrates in the phylum Annelida[1] (formerly grouped in
phylum Pogonophora and Vestimentifera) related to tube worms commonly found in
the intertidal and pelagic zones. Riftia pachyptila live over a mile deep, and up to several miles deep, on
the floor of the Pacific Ocean near black smokers, and can tolerate extremely high hydrogen sulfide levels.
These worms can reach a length of 2.4 m (7 ft 10 in) and their tubular bodies have a diameter of 4 cm
(1.6 in). Ambient temperature in their natural environment ranges from 2 to 30 degrees Celsius.
The common name "giant tube worm" is however also applied to the largest living species
of shipworm, Kuphus polythalamia, which despite the name "worm" is a bivalve mollusc, rather than
an annelid.

Giant tube worms

Scientific classification
Kingdom: Animalia
Phylum: Annelida
Class: Polychaeta
Order: Canalipalpata
Family: Siboglinidae
Genus: Riftia
Species: R. pachyptila
Binomial name
Riftia pachyptila
M. L. Jones, 1981

Development

Riftia develop from a free-swimming, pelagic, non-symbiotic trochophore larva, which enters juvenile
(metatrochophore) development, becoming sessile and subsequently acquiring symbiotic bacteria.[3][4] The
symbiotic bacteria, on which adult worms depend for sustenance, are not present in the gametes, but are
acquired from the environment via the digestive tract. The digestive tract transiently connects from a
mouth at the tip of the ventral medial process to a foregut, midgut, hindgut and anus. After symbionts are
established in the midgut, it undergoes substantial remodelling and enlargement to become the
trophosome, while the remainder of the digestive tract has not been detected in adult specimens. [5]

Body structure

Hydrothermal vent tubeworms get organic compounds from bacteria that live in their trophosome.

They have a highly vascularized, red "plume" at the tip of their free end which is an organ for exchanging
compounds with the environment (e.g., H2S, CO2, O2, etc.). The tube worm does not have many predators.
If threatened, the plume may be retracted into the worm's protective tube. The plume provides essential
nutrients to bacteria living inside the trophosome. Tube worms have no digestive tract, but the bacteria
(which may make up half of a worm's body weight) convert oxygen, hydrogen sulfide, carbon dioxide, etc.
into organic molecules on which their host worms feed. This process, known as chemosynthesis, was
recognized within the trophosome by Colleen Cavanaugh.
The bright red color of the plume structures results from several extraordinarily complex hemoglobins,
which contain up to 144 globin chains (each presumably including associated heme structures). These
tube worm hemoglobins are remarkable for carrying oxygen in the presence of sulfide, without being
inhibited by this molecule as hemoglobins in most other species are.

Nitrate and nitrite are toxic, but nitrogen is required for biosynthetic processes. The chemosynthetic
bacteria within the trophosome convert this nitrate to ammonium ions, which then are available for
production of amino acids in the bacteria, which are in turn released to the tube worm. To transport nitrate
to the bacteria, R. pachyptila concentrates nitrate in its blood, to a concentration 100 times higher than
that of the surrounding water. The exact mechanism of R. pachyptila's ability to withstand and
concentrate nitrate is still unknown.

Energy and nutrient source

With sunlight not available directly as a form of energy, the tube worms rely on bacteria in their habitat to
oxidize hydrogen sulfide,[10] using dissolved oxygen in the water as an electron acceptor. This reaction
provides the energy needed for chemosynthesis. For this reason, tube worms are partially dependent on
sunlight as an energy source, since they use free oxygen, which has been liberated by photosynthesis in
water layers far above, to obtain nutrients. In this way tube worms are similar to many forms of ocean life
which live at depths that sunlight cannot penetrate. However, tube worms are remarkable in being able to
use bacteria to indirectly obtain almost all the materials they need for growth from molecules dissolved in
water. Some nutrients have to be filtered out of the water. Tube worm growth resembles that of
hydroponically grown fungi more than it does that of typical animals which need to "eat". One other
species is known to have a very similar lifestyle: the giant shipworm (which is a mollusc, not a worm,
though it bears a superficial resemblance).
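The energy source described above can be summarized in one overall reaction. The equation below is a standard textbook simplification of aerobic sulfide oxidation, not a stoichiometry taken from the works cited here; the symbiotic bacteria couple part of the released energy to fixing dissolved CO2 into the organic carbon that the host worm assimilates.

```latex
% Aerobic oxidation of hydrogen sulfide by the symbiotic bacteria
% (a standard simplification; energy from this reaction drives CO2 fixation):
\[
  \mathrm{H_2S + 2\,O_2 \;\longrightarrow\; SO_4^{2-} + 2\,H^{+}}
\]
```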

Reproduction

To reproduce, Riftia pachyptila females release lipid-rich eggs into the surrounding water so that they start
to float upwards. The males then release sperm bundles that swim to meet the eggs. After the eggs have
hatched, the larvae swim down and attach themselves to the rock.

Growth rate and age

Riftia pachyptila has the fastest growth rate of any known marine invertebrate. These organisms have
been known to colonize a new site, grow to sexual maturity and increase in length to 4.9 feet (1.5 m) in
less than two years.[11] This is in sharp contrast to Lamellibrachia luymesi, the tube worms that live at deep
sea cold seeps and grow very slowly for most of their lives. It takes from 170 to 250 years
for Lamellibrachia luymesi to grow 2 meters in length, and even longer worms have been discovered.

Those who think that intelligent extraterrestrial life is (nearly) impossible argue that the conditions needed for the
evolution of life, or at least the evolution of biological complexity, are rare or even unique to Earth. Under this
assumption, called the rare Earth hypothesis, a rejection of the mediocrity principle, complex multicellular life is
regarded as exceedingly unusual.
The Rare Earth hypothesis argues that the evolution of biological complexity requires a host of fortuitous
circumstances, such as a galactic habitable zone, a central star and planetary system having the requisite
character, the circumstellar habitable zone, a right sized terrestrial planet, the advantage of a giant guardian like
Jupiter and a large natural satellite, conditions needed to ensure the planet has a magnetosphere and plate
tectonics, the chemistry of the lithosphere, atmosphere, and oceans, the role of "evolutionary pumps" such as
massive glaciation and rare bolide impacts, and whatever led to the appearance of the eukaryote cell, sexual
reproduction and the Cambrian explosion.

No other intelligent species have arisen


It is possible that even if complex life is common, intelligence (and consequently civilizations) is not. [63] While there
are remote sensing techniques that could perhaps detect life-bearing planets without relying on the signs of
technology,[64][65] none of them has any ability to tell if any detected life is intelligent. This is sometimes referred to
as the "algae vs. alumnae" problem

Intelligent alien species lack advanced technology


It may be that while alien species with intelligence exist, they are primitive or have not reached the level of
technological advancement necessary to communicate. Along with non-intelligent life, such civilizations would be
also very difficult for us to detect. To skeptics, the fact that in the history of life on the Earth only one species has
developed a civilization to the point of being capable of spaceflight and radio technology, lends more credence to
the idea that technologically advanced civilizations are rare in the universe.
It is the nature of intelligent life to destroy itself
See also: Great Filter

A 23-kiloton tower shot called BADGER, fired as part of the Operation Upshot-Knothole nuclear test
series.
This is the argument that technological civilizations may usually or invariably destroy themselves before or shortly
after developing radio or spaceflight technology. Possible means of annihilation are many, including war, accidental
environmental contamination or damage, resource depletion, climate change, or poorly designed artificial
intelligence. This general theme is explored both in fiction and in scientific hypothesizing. In 1966, Sagan
and Shklovskii speculated that technological civilizations will either tend to destroy themselves within a century of
developing interstellar communicative capability or master their self-destructive tendencies and survive for billion-
year timescales.[71] Self-annihilation may also be viewed in terms of thermodynamics: insofar as life is an ordered
system that can sustain itself against the tendency to disorder, the "external transmission" or interstellar
communicative phase may be the point at which the system becomes unstable and self-destructs.
It is the nature of intelligent life to destroy others

Technological singularity and Von Neumann probe


Another hypothesis is that an intelligent species beyond a certain point of technological capability will destroy other
intelligent species as they appear. The idea that something, or someone, might be destroying intelligent life in the
universe has been explored in the scientific literature.[28] A species might undertake such extermination out of
expansionist motives, paranoia, or aggression. In 1981, cosmologist Edward Harrison argued that such behavior
would be an act of prudence: an intelligent species that has overcome its own self-destructive tendencies might
view any other species bent on galactic expansion as a threat.[73] It has also been suggested that a successful alien
species would be a superpredator, as are humans.[74][75]
Periodic extinction by natural events

New life might commonly die out due to runaway heating or cooling on their fledgling planets. [76] On Earth, there
have been numerous major extinction events that destroyed the majority of complex species alive at the time;
the extinction of the dinosaurs is the best known example. These are thought to have been caused by events such
as impact from a large meteorite, massive volcanic eruptions, or astronomical events such as gamma-ray
bursts.[77] It may be the case that such extinction events are common throughout the universe and periodically
destroy intelligent life, or at least its civilizations, before the species is able to develop the technology to
communicate with other species.

Inflation hypothesis and the youngness argument


Cosmologist Alan Guth proposed a multi-verse solution to the Fermi paradox. This hypothesis uses
the synchronous gauge probability distribution, with the result that young universes exceedingly outnumber
older ones (by a factor of e^(10^37) for every second of age). Therefore, averaged over all universes, universes with
civilizations will almost always have just one, the first to develop. However, Guth notes "Perhaps this argument
explains why SETI has not found any signals from alien civilizations, but I find it more plausible that it is merely a
symptom that the synchronous gauge probability distribution is not the right one."
Intelligent civilizations are too far apart in space or time

NASA's conception of the Terrestrial Planet Finder

It may be that non-colonizing technologically capable alien civilizations exist, but that they are simply too far apart
for meaningful two-way communication. If two civilizations are separated by several thousand light-years, it is
possible that one or both cultures may become extinct before meaningful dialogue can be established. Human
searches may be able to detect their existence, but communication will remain impossible because of distance. It
has been suggested that this problem might be ameliorated somewhat if contact/communication is made through
a Bracewell probe. In this case at least one partner in the exchange may obtain meaningful information.
Alternatively, a civilization may simply broadcast its knowledge, and leave it to the receiver to make what they may
of it. This is similar to the transmission of information from ancient civilizations to the present, [81] and humanity has
undertaken similar activities like the Arecibo message, which could transfer information about Earth's intelligent
species, even if it never yields a response or does not yield a response in time for humanity to receive it. It is also
possible that archaeological evidence of past civilizations may be detected through deep space observations.
A related speculation by Sagan and Newman suggests that if other civilizations exist, and are transmitting and
exploring, their signals and probes simply have not arrived yet. However, critics have noted that this is unlikely,
since it requires that humanity's advancement has occurred at a very special point in time, while the Milky Way is in
transition from empty to full. This is a tiny fraction of the lifespan of a galaxy under ordinary assumptions and
calculations resulting from them, so the likelihood that we are in the midst of this transition is considered low in the
paradox.

It is too expensive to spread physically throughout the galaxy

Project Daedalus, Project Orion (nuclear propulsion), and Project Longshot


Many speculations about the ability of an alien culture to colonize other star systems are based on the idea that
interstellar travel is technologically feasible. While the current understanding of physics rules out the possibility
of faster-than-light travel, it appears that there are no major theoretical barriers to the construction of "slow"
interstellar ships, even though the engineering required is considerably beyond our present capabilities. This idea
underlies the concept of the Von Neumann probe and the Bracewell probe as a potential evidence of
extraterrestrial intelligence.
It is possible, however, that present scientific knowledge cannot properly gauge the feasibility and costs of such
interstellar colonization. Theoretical barriers may not yet be understood, and the cost of materials and energy for
such ventures may be so high as to make it unlikely that any civilization could afford to attempt it. Even if
interstellar travel and colonization are possible, they may be difficult, leading to a colonization model based
on percolation theory. Colonization efforts may not occur as an unstoppable rush, but rather as an uneven
tendency to "percolate" outwards, within an eventual slowing and termination of the effort given the enormous
costs involved and the expectation that colonies will inevitably develop a culture and civilization of their own.
Colonization may thus occur in "clusters," with large areas remaining uncolonized at any one time.
If exploration, or backup from a home system disaster, is the primary motive for expansion, then it is possible
that mind uploading and similar technologies may reduce the desire to colonize by replacing physical travel with
much less-expensive communication.[86] Therefore the first civilization may have physically explored or colonized
the galaxy, but subsequent civilizations find it cheaper, faster, and easier to travel by contacting existing civilizations
rather than physically exploring or traveling themselves. This leads to little or no physical travel at the current
epoch, and only directed communications, which are hard to see except by the intended receiver.

Human beings have not existed long enough


Humanity's ability to detect intelligent extraterrestrial life has existed for only a very brief period (from 1937
onwards, if the invention of the radio telescope is taken as the dividing line), and Homo sapiens is a geologically
recent species. The whole period of modern human existence to date is a very brief period on a cosmological scale,
and radio transmissions have only been propagated since 1895. Thus, it remains possible that human beings have
neither existed long enough nor made themselves sufficiently detectable to be found by extraterrestrial
intelligence.

We are not listening properly


There are some assumptions that underlie the SETI programs that may cause searchers to miss signals that are
present. Extraterrestrials might, for example, transmit signals that have a very high or low data rate, or employ
unconventional (in our terms) frequencies, which would make them hard to distinguish from background noise.
Signals might be sent from non-main sequence star systems that we search with lower priority; current programs
assume that most alien life will be orbiting Sun-like stars.
The greatest challenge is the sheer size of the radio search needed to look for signals (effectively spanning the
entire visible universe), the limited amount of resources committed to SETI, and the sensitivity of modern
instruments. SETI estimates, for instance, that with a radio telescope as sensitive as the Arecibo Observatory,
Earth's television and radio broadcasts would only be detectable at distances up to 0.3 light-years, less than 1/10
the distance to the nearest star. A signal is much easier to detect if the signal energy is limited to either
a narrow range of frequencies, or directed at a specific part of the sky. Such signals could be detected at ranges of
hundreds to tens of thousands of light-years distance. However, this means that detectors must be listening to an
appropriate range of frequencies, and be in that region of space to which the beam is being sent.
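The ranges quoted above follow from the inverse-square law: for a fixed receiver sensitivity, the maximum detection distance grows as the square root of the effective radiated power concentrated into the band the receiver is watching. The sketch below only illustrates that scaling; the 0.3 light-year baseline is the figure quoted above for broadcast leakage, while the concentration factors are hypothetical.

```python
import math

# Detection range scales as sqrt(effective radiated power) at fixed receiver
# sensitivity, because received flux falls off as 1/distance^2. The baseline
# is the ~0.3 light-year leakage figure quoted above; the gains are
# hypothetical factors for narrowband and/or beamed transmission.

BASELINE_RANGE_LY = 0.3

for gain in (1, 1e4, 1e8):
    detectable_range = BASELINE_RANGE_LY * math.sqrt(gain)
    print(f"concentration factor {gain:.0e}: detectable out to ~{detectable_range:,.1f} ly")
```

A hypothetical concentration factor of 10^8, of the order that a directed, narrowband beacon could plausibly achieve, stretches the 0.3 light-year leakage range into the thousands of light-years mentioned above.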
Many SETI searches assume that extraterrestrial civilizations will be broadcasting a deliberate signal, like the
Arecibo message, in order to be found.
Thus to detect alien civilizations through their radio emissions, Earth observers either need more sensitive
instruments or must hope for fortunate circumstances: that the broadband radio emissions of alien radio
technology are much stronger than our own; that one of SETI's programs is listening to the correct frequencies from
the right regions of space; or that aliens are deliberately sending focused transmissions in our general direction.
Civilizations broadcast detectable radio signals only for a brief period of time

It may be that alien civilizations are detectable through their radio emissions for only a short time, reducing the
likelihood of spotting them. The usual assumption is that civilizations outgrow radio through technological advance.
However, even if radio is not used for communication, it may be used for other purposes such as power
transmission from solar power satellites. Such uses may remain visible even after broadcast emissions are replaced
by less observable technology.
More hypothetically, advanced alien civilizations may evolve beyond broadcasting at all in the electromagnetic
spectrum and communicate by technologies not developed or used by mankind. Some scientists have hypothesized
that advanced civilizations may send neutrino signals. If such signals exist, they could be detectable by neutrino
detectors that are now under construction for other goals.

They tend to isolate themselves


It has been suggested that some advanced beings may divest themselves of physical form, create massive
artificial virtual environments, transfer themselves into these environments through mind uploading, and exist
totally within virtual worlds, ignoring the external physical universe.
It may also be that intelligent alien life develops an "increasing disinterest" in its outside world. Possibly any
sufficiently advanced society will develop highly engaging media and entertainment well before the capacity for
advanced space travel, and that the rate of appeal of these social contrivances is destined, because of their inherent
reduced complexity, to overtake any desire for complex, expensive endeavors such as space exploration and
communication. Once any sufficiently advanced civilization becomes able to master its environment, and most of its
physical needs are met through technology, various "social and entertainment technologies", including virtual
reality, are postulated to become the primary drivers and motivations of that civilization.
They are too alien

Microwave window as seen by a ground-based system. From NASA report SP-419: SETI the Search for
Extraterrestrial Intelligence
Source;Philip Morrison, John Billingham, John Wolfe - NASA report SP-419 SETI: The Search for
Extraterrestrial Intelligence

Another possibility is that human theoreticians have underestimated how much alien life might differ from that on
Earth. Aliens may be psychologically unwilling to attempt to communicate with human beings. Perhaps
human mathematics is parochial to Earth and not shared by other life, [97] though others argue this can only apply
to abstract math since the math associated with physics must be similar (in results, if not in methods). [98]
Physiology might also cause a communication barrier. Carl Sagan speculated that an alien species might have a
thought process orders of magnitude slower (or faster) than ours. A message broadcast by that species
might well seem like random background noise to us, and therefore go undetected.
Another thought is that technological civilizations invariably experience a technological singularity and attain
a post-biological character. Hypothetical civilizations of this sort may have advanced drastically enough to render
communication impossible.
Everyone is listening, no one is transmitting

Alien civilizations might be technically capable of contacting Earth, but are only listening instead of transmitting. If
all, or even most, civilizations act the same way, the galaxy could be full of civilizations eager for contact, but
everyone is listening and no one is transmitting. This is the so-called SETI Paradox.
The only civilization we know, our own, does not explicitly transmit, except for a few small efforts. Even these
efforts, and certainly any attempt to expand them, are controversial. It is not even clear we would respond to a
detected signal; the official policy within the SETI community is that "[no] response to a signal or other evidence of
extraterrestrial intelligence should be sent until appropriate international consultations have taken place."
However, given the possible impact of any reply it may be very difficult to obtain any consensus on "Who speaks for
Earth?" and "What should we say?"
Earth is deliberately not contacted

Zoo hypothesis

The zoo hypothesis speculates as to the assumed behavior and existence of technically
advanced extraterrestrial life and the reasons they refrain from contacting Earth and is one of many
theoretical explanations for the Fermi paradox. The hypothesis is that alien life intentionally avoids
communication with Earth, and one of its main interpretations is that it does so to allow for
natural evolution and sociocultural development. The hypothesis seeks to explain the apparent absence of
extraterrestrial life despite its generally accepted plausibility and hence the reasonable expectation of its
existence.[1]
Aliens might, for example, choose to allow contact once the human species has passed certain
technological, political, or ethical standards. They might withhold contact until humans force contact upon
them, possibly by sending a spacecraft to planets they inhabit. Alternatively, a reluctance to initiate contact
could reflect a sensible desire to minimize risk. An alien society with advanced remote-sensing
technologies may conclude that direct contact with neighbors confers added risks to oneself without an
added benefit.

Assumptions
The zoo hypothesis assumes first that a large number of alien cultures exist, and second that these aliens
have great reverence for independent, natural evolution and development. In particular, assuming that
intelligence is a physical process that acts to maximize the diversity of a system's accessible futures,[2] a
fundamental motivation for the zoo hypothesis would be that premature contact would "unintelligently"
reduce the overall diversity of paths the universe itself could take.
These ideas are perhaps most plausible if there is a relatively universal cultural or legal policy among a
plurality of extraterrestrial civilizations necessitating isolation with respect to civilizations at Earth-like
stages of development. In a universe without a hegemonic power, random single civilizations with
independent principles would make contact. This makes a crowded Universe with clearly defined rules
seem more plausible.
If there is a plurality of alien cultures, however, this theory may break down under the uniformity of motive
concept because it would take just a single extraterrestrial civilization to decide to act contrary to the
imperative within our range of detection for it to be abrogated, and the probability of such a violation
increases with the number of civilizations.[4] This idea, however, becomes more plausible if all civilizations
tend to evolve similar cultural standards and values with regard to contact, much like convergent
evolution on Earth has independently evolved eyes on numerous occasions, or if all civilizations follow the
lead of some particularly distinguished civilization, such as the first civilization among them.

Fermi paradox
With this in mind, a modified Zoo Hypothesis becomes a more appealing answer to the Fermi paradox.
The time between the emergence of the first civilization within the Milky Way and all subsequent
civilizations could be enormous. Monte Carlo simulation shows the first few inter-arrival times between
emergent civilizations would be similar in length to geologic epochs on Earth. Just what could a civilization
do with a ten-million, one-hundred-million, or half-billion-year head start?
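The Monte Carlo claim above is easy to reproduce in spirit. The sketch below draws hypothetical civilization emergence times uniformly over a multi-billion-year window and measures the head start of the earliest arrival over the second; the window length and number of civilizations are assumptions chosen for illustration, not parameters from the cited simulation.

```python
import random

# Illustrative Monte Carlo for the inter-arrival argument above.
# Assumptions (not from the cited study): N_CIV civilizations emerge at times
# drawn uniformly over a WINDOW_MYR-long span of galactic history.

random.seed(42)
WINDOW_MYR = 5000          # hypothetical 5-billion-year window
N_CIV = 100                # hypothetical number of civilizations in that window
N_TRIALS = 10_000

head_starts = []
for _ in range(N_TRIALS):
    times = sorted(random.uniform(0, WINDOW_MYR) for _ in range(N_CIV))
    head_starts.append(times[1] - times[0])   # lead of the first over the second

mean_head_start = sum(head_starts) / N_TRIALS
print(f"mean head start of the first civilization: ~{mean_head_start:.0f} Myr")
```

With these placeholder numbers the first civilization typically leads the second by a few tens of millions of years, the same order as a geologic epoch, which is the point the paragraph above is making.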
Even if this first grand civilization is long gone, their initial legacy could live on in the form of a passed-
down tradition, or perhaps an artificial life form dedicated to such a goal without the risk of death. Beyond
this, it does not even have to be the first civilization, but simply the first to spread its doctrine and control
over a large volume of the galaxy. If just one civilization gained this hegemony in the distant past, it could
form an unbroken chain of taboo against rapacious colonization in favour of non-interference in those
civilizations that follow. The uniformity of motive concept previously mentioned would become moot in
such a situation.
If the oldest civilization still present in the Milky Way has, for example, a 100-million-year time advantage
over the next oldest civilization, then it is conceivable that they could be in the singular position of being
able to control, monitor, influence or isolate the emergence of every civilization that follows within their
sphere of influence. This is analogous to what happens on Earth within our own civilization on a daily
basis, in that everyone born on this planet is born into a pre-existing system of familial associations,
customs, traditions and laws that were already long established before our birth and which we have little or
no control over.

Appearance in fiction

In Olaf Stapledon's 1937 novel Star Maker, great care is taken by the Symbiont race to keep its
existence hidden from "pre-utopian" primitives, "lest they should lose their independence of
mind". It is only when such worlds become utopian-level space travellers that the Symbionts make
contact and bring the young utopia to an equal footing.

Arthur C. Clarke's The Sentinel (first published in 1951) and its later novel adaptation 2001: A Space
Odyssey (1968) feature a beacon which is activated when the human race discovers it on the moon.
An alien race has apparently visited us in the distant past.
In Childhood's End, a novel by Arthur C. Clarke published in 1953, the alien cultures had been
observing and registering the Earth's evolution and human history for thousands (perhaps millions) of
years. At the beginning of the book, when mankind is about to achieve spaceflight, the aliens reveal
their existence and quickly end the arms race, colonialism, racial segregation and the Cold War.
In Star Trek, the Federation (including humans) has a strict Prime Directive policy of nonintervention
with less technologically advanced cultures which the Federation encounters. The threshold of
inclusion is the independent technological development of faster-than-light propulsion. In the show's
canon the Vulcan race limited their encounters to observation until Humans made their first warp flight,
after which they initiated first contact, indicating the practice predated the Human race's advance of
this threshold. Additionally, in The Next Generation episode "The Chase", a message from a first (or early)
civilization is discovered, hidden in the DNA of sentient species spread across many worlds,
something that could only have been fully discovered after a race had become sufficiently advanced.
In Julian May's 1987 novel Intervention, the five alien races of the Galactic Milieu keep the Earth
under surveillance, but do not intervene until humans demonstrate mental and ethical maturity through
a paranormal prayer of peace.
Bill Watterson's Calvin and Hobbes comic strip for November 8, 1989, alludes to the possibility of an
ethical threshold for first contact (or at least for the prudence of first contact) in Calvin's remark:
"Sometimes I think the surest sign that intelligent life exists elsewhere in the universe is that none of it
has tried to contact us."
In Robert J. Sawyer's SF novel Calculating God (2000), Hollus, a scientist from an advanced alien
civilization, denies that her government is operating under the prime directive.
In Hard to Be a God by Arkady and Boris Strugatsky, the (unnamed) medieval-esque planet where the
novel takes place is protected by the advanced civilization of Earth, and the observers from Earth
present on the planet are forbidden to intervene and make overt contact. One of the major themes of
the novel is the ethical dilemma that such a stance presents to the observers.
In Speaker for the Dead by Orson Scott Card, the human xenobiologists and xenologers, biologists
and anthropologists observing alien life, are forbidden from giving the native species, the Pequeninos,
any technology or information. When one of the xenobiologists is killed in an alien ceremony, they are
forbidden to mention it. This happens again until Ender Wiggin, the main character of Ender's Game,
explains to the Pequeninos that humans cannot partake in the ceremony because it kills them. While
this is not exactly an example of the zoo hypothesis, since humanity makes contact, it is very similar
in that the humans seek to keep the Pequeninos ignorant of technology.
In South Park's inaugural episode of season seven, "Cancelled", aliens refrain from contacting Earth
because the planet is the subject and setting of a reality television show. Unlike most variations of the
zoo hypothesis where contact is not initiated in order to allow organic socioeconomic, cultural, and
technological development, the aliens in this episode refrain from contact for the sole purpose of
entertainment. In essence, the aliens treat all of Earth like the titular character in The Truman Show in
order to maintain the show's integrity.
In the 2008 video game Spore, which simulates the evolution and life of species in a fictional galaxy,
intelligent species in the "Space Stage" cannot contact those in earlier stages, which have not yet
unified their planets or developed spaceflight. However, they are allowed to abduct their
citizens/members, to create crop circles in their terrain and to place on their planets a tool called
the "monolith", which accelerates their technological evolution.
Iain M. Banks' The State of the Art depicts the Culture secretly visiting Earth and then deciding to
leave it uncontacted, watching its development as a control group, to confirm whether their
manipulations of other civilizations are ultimately for the best.

Schematic representation of a planetarium simulating the universe to humans. The "real" universe is outside the
black sphere, the simulated one projected on/filtered through it.
The zoo hypothesis states that intelligent extraterrestrial life exists and does not contact life on Earth to allow for
its natural evolution and development. This hypothesis may break down under the uniformity of motive flaw: all it
takes is a single culture or civilization to decide to act contrary to the imperative within our range of detection for it
to be abrogated, and the probability of such a violation increases with the number of civilizations.
Analysis of the inter-arrival times between civilizations in the galaxy based on common astrobiological assumptions
suggests that the initial civilization would have a commanding lead over the later arrivals. As such, it may have
established what we call the zoo hypothesis through force or as a galactic/universal norm and the resultant
"paradox" by a cultural founder effectwith or without the continued activity of the founder.

Earth is purposely isolated (planetarium hypothesis)

Planetarium hypothesis
The planetarium hypothesis, conceived in 2001 by Stephen Baxter, attempts to provide a solution to
the Fermi paradox by holding that our astronomical observations represent an illusion, created by a Type III
civilization capable of manipulating matter and energy on galactic scales. He postulates that we do not see
evidence of extraterrestrial life because the universe has been engineered so that it appears empty of other
life.[1]

Criticism
Some authors have regarded the hypothesis as speculative, next to useless in any practical scientific
sense, and closer to a theological mode of thinking, a criticism that has also been directed at the zoo
hypothesis.

A related idea to the zoo hypothesis is that, beyond a certain distance, the perceived universe is a simulated
reality. The planetarium hypothesis speculates that beings may have created this simulation so that the universe
appears to be empty of other life.

It is dangerous to communicate

An alien civilization might feel it is too dangerous to communicate, either for us or for them. After all, when very
different civilizations have met on Earth, the results have often been disastrous for one side or the other, and the
same may well apply to interstellar contact. Even contact at a safe distance could lead to infection by computer
code or even ideas themselves. Perhaps prudent civilizations actively hide not only from Earth but from everyone,
out of fear of other civilizations.

Perhaps the Fermi paradox itself, or the alien equivalent of it, is the reason for any civilization to avoid contact with
other civilizations, even if no other obstacles existed. From any one civilization's point of view, it would be unlikely
for them to be the first ones to make first contact. Therefore, according to this reasoning, it is likely that previous
civilizations faced fatal problems with first contact and doing so should be avoided. So perhaps every civilization
keeps quiet because of the possibility that there is a real reason for others to do so.
Liu Cixin's novel The Dark Forest talks about such a situation.

The Simulation Theory


Simulated reality is the hypothesis that reality could be simulated, for example by computer simulation,
to a degree indistinguishable from "true" reality. It could contain conscious minds which may or may not be
fully aware that they are living inside a simulation. This is quite different from the current, technologically
achievable concept of virtual reality. Virtual reality is easily distinguished from the experience of actuality;
participants are never in doubt about the nature of what they experience. Simulated reality, by contrast,
would be hard or impossible to separate from "true" reality. There has been much debate over this topic,
ranging from philosophical discourse to practical applications in computing.

Simulation argument
The simulation hypothesis was first published by Hans Moravec.[1][2][3] Later, the philosopher Nick
Bostrom developed an expanded argument examining the probability of our reality being a
simulation.[4] His argument states that at least one of the following statements is very likely to be true:
1. Human civilization is unlikely to reach a level of technological maturity capable of producing simulated
realities, or such simulations are physically impossible to construct.

2. A comparable civilization reaching the aforementioned technological status will likely not produce a
significant number of simulated realities (one that might push the probable existence of digital entities
beyond the probable number of "real" entities in a Universe) for any of a number of reasons, such as
diversion of computational processing power for other tasks, ethical considerations about holding entities
captive in simulated realities, etc.
3. Any entities with our general set of experiences are almost certainly living in a simulation.
In greater detail, Bostrom is attempting to prove a tripartite disjunction, that at least one of these
propositions must be true. His argument rests on the premise that given sufficiently advanced technology,
it is possible to represent the populated surface of the Earth without recourse to digital physics; that
the qualia experienced by a simulated consciousness are comparable or equivalent to those of a naturally
occurring human consciousness; and that one or more levels of simulation within simulations would be
feasible given only a modest expenditure of computational resources in the real world.
If one assumes first that humans will not be destroyed nor destroy themselves before developing such a
technology, and, next, that human descendants will have no overriding legal restrictions or moral
compunctions against simulating biospheres or their own historical biosphere, then it would be
unreasonable to count ourselves among the small minority of genuine organisms who, sooner or later, will
be vastly outnumbered by artificial simulations.
Epistemologically, it is not impossible to tell whether we are living in a simulation. For example, Bostrom
suggests that a window could pop up saying: "You are living in a simulation. Click here for more
information." However, imperfections in a simulated environment might be difficult for the native
inhabitants to identify, and for purposes of authenticity, even the simulated memory of a blatant revelation
might be purged programmatically. Nonetheless, should any evidence come to light, either for or against
the skeptical hypothesis, it would radically alter the aforementioned probability.
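The probabilistic core of the argument can be written compactly. Roughly (a simplified form of the published formulation), the fraction of observers with human-type experiences who are simulated is

f_{\mathrm{sim}} \approx \frac{f_p \, \bar{N}}{f_p \, \bar{N} + 1},

where f_p is the fraction of civilizations that reach technological maturity and choose to run ancestor simulations, and \bar{N} is the average number of simulated populations each such civilization creates. If f_p is not negligible and \bar{N} is very large, f_sim approaches one, which is what drives the third proposition above; the first two propositions correspond to f_p or \bar{N} being effectively zero.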

Computationalism
Computationalism is a philosophy of mind theory stating that cognition is a form of computation. It is
relevant to the Simulation hypothesis in that it illustrates how a simulation could contain conscious
subjects, as required by a "virtual people" simulation. For example, it is well known that physical systems
can be simulated to some degree of accuracy. If computationalism is correct, and if there is no problem in
generating artificial consciousness or cognition, it would establish the theoretical possibility of a simulated
reality. However, the relationship between cognition and phenomenal qualia of consciousness is disputed.
It is possible that consciousness requires a vital substrate that a computer cannot provide, and that
simulated people, while behaving appropriately, would be philosophical zombies. This would
undermine Nick Bostrom's simulation argument; we cannot be a simulated consciousness, if
consciousness, as we know it, cannot be simulated. However, the skeptical hypothesis remains intact, we
could still be envatted brains, existing as conscious beings within a simulated environment, even if
consciousness cannot be simulated.
Some theorists have argued that if the "consciousness-is-computation" version
of computationalism and mathematical realism (or radical mathematical Platonism) are true, then
consciousness is a computation, which in principle is platform-independent and thus admits of simulation.
This argument states that a "Platonic realm" or ultimate ensemble would contain every algorithm, including
those which implement consciousness. Hans Moravec has explored the simulation hypothesis and has
argued for a kind of mathematical Platonism according to which every object (including e.g. a stone) can
be regarded as implementing every possible computation.

SPECIFIC CONCLUSION ON THE EXPERIMENT FOR THE FORMATION OF STARS.

As a thought experiment, with Lake Victoria observations done on 6th-7th November 2011: the study of galaxy evolution is
central to our understanding of the composition and evolution of the universe. However, linking observations to
theory is significantly impeded by many uncertainties, both observational and theoretical. Three issues have been
addressed in this thesis: the accuracy and interpretation of measurements of the sizes of high-redshift galaxies; the
more general determination of galaxy structure and the discrepancy between light distributions and stellar mass
distributions; and the interpretation of observed evolutionary trends in the context of galaxy formation models.
Ariny Amos's main conclusions are the following:
On average, the effective radii of quiescent galaxies at z ≈ 2 are only ~1 kpc (with a significant spread towards smaller and larger sizes). These small sizes are not the result of surface-brightness-dependent biases.
Quiescent galaxies at z ≈ 2 are structurally quite similar to present-day elliptical galaxies; their morphologies are smooth and follow n ≈ 4 Sérsic profiles (the profile itself is written out after this summary).
A comparison of the surface brightness profiles of high-redshift quiescent galaxies to those of low-redshift ellipticals suggests that quiescent galaxy growth occurs in an inside-out fashion.
The average size difference between quiescent galaxies at z = 2 and z = 0 is not a reflection of the growth of individual galaxies. The growth of high-redshift quiescent galaxies may be as low as half of this average size difference, with the remaining part driven by the addition of large, recently quenched galaxies to the quiescent population.
Galaxy structure correlates with star formation activity at all redshifts up to z = 2, such that star-forming galaxies are more disk-like and more extended than quiescent galaxies.
The overwhelming majority of galaxies have negative radial color gradients, such that the cores of galaxies are redder than the outskirts. These color gradients indicate the presence of mass-to-light-ratio gradients.
The mass distributions of galaxies are on average ~25% smaller than their rest-frame optical light distributions. The difference between mass-weighted structure and light-weighted structure is independent of redshift and galaxy properties.
Semi-analytic models robustly predict a rapid increase in the sizes of quiescent galaxies, at a rate that is close to observations. This evolution is largely driven by the growth and subsequent quenching of star-forming galaxies, which evolve in lockstep with their parent halos.
Galaxies continue to grow in mass and size after quenching. This growth is such that high-mass galaxies lie on a tight mass-size relation, due to repeated merger events. Fewer mergers occur at lower masses, as a result of which the scatter in the mass-size plane is higher.
Galaxy structure can currently be measured accurately, and at rest-frame optical wavelengths, up to z ≈ 2-3. Over the past years it has become clear that, although the z = 2 universe is different in many respects, many of the most important galaxy relations were already in place. In the coming years it will become possible to extend these studies to higher redshift, using K-band data from either space-based instruments such as the James Webb Space Telescope, or from adaptive optics-assisted ground-based telescopes. This will open up an interesting epoch to structural measurements, where star-forming galaxies still dominate the galaxy population at high masses.
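For reference, the n ≈ 4 Sérsic profiles mentioned in the summary above describe how surface brightness falls off with radius. In its standard textbook form (not something specific to this thesis) the profile is

I(r) = I_e \exp\left\{ -b_n \left[ (r/r_e)^{1/n} - 1 \right] \right\},

where r_e is the effective (half-light) radius, I_e the surface brightness at r_e, n the Sérsic index, and b_n a constant chosen so that half of the total light is enclosed within r_e (b_n ≈ 2n − 1/3 for moderate to large n). An index of n = 4 recovers the de Vaucouleurs profile characteristic of elliptical galaxies, while n = 1 gives the exponential profile of discs.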
Our theoretical understanding of the universe is rapidly improving. Both simulations and semi-analytic models are
becoming more sophisticated, with the inclusion of complicated gas-based physics and more realistic treatments of
star formation. Despite these improvements, many basic observables are still
poorly reproduced, especially at high redshift. It is clear that our understanding is still lacking on many basic levels,
partially due to the difficulty of comparing precise simulated quantities to more vaguely defined observed
properties.
Cross-pollination between observers and theorists is of key importance in order to progress in this respect.
Although trends such as size evolution can be measured with good precision and accuracy, selection of galaxy
samples for such measurements is not straightforward.
The ideal would be to follow the changes in individual galaxies over time. Unfortunately, making a link between
progenitor galaxies and their descendants is not trivial. Currently most observational studies are based on mass
limited galaxy samples, since stellar mass is relatively easy to measure and correlates
well with many other galaxy properties. However, since galaxies grow with time, redshift trends based on samples
selected at constant stellar mass are not equivalent to actual galaxy evolution. Some progress has been made using
galaxy samples selected at constant (cumulative) number density. This method is effective at very high stellar
masses, where the rank order of galaxies tends to change very little. Finding a reliable way to trace real galaxy
growth over a larger mass range is one of the key challenges still facing this field.

CONCLUSIONS FOR ASTRONOMER BOOK.

CONCLUSION 1

Astronomy is the study of the sun, moon, stars, planets, comets, gas, galaxies, dust and
other non-Earthly bodies and phenomena, and a person who studies them is an astronomer. In
its curriculum for K-4 students, NASA defines astronomy simply as the study of stars, planets and
space. Astronomy and astrology were historically associated, but astrology is not a science and
is no longer recognized as having anything to do with astronomy. Below we discuss the history of
astronomy and related fields of study, including cosmology. Historically, astronomy has focused on
observations of heavenly bodies. It is a close cousin to astrophysics. Succinctly put, astrophysics
involves the study of the physics of astronomy and concentrates on the behavior, properties, and
motion of objects out there. However, modern astronomy includes many elements of the motions
and characteristics of these bodies, and the two terms are often used interchangeably today.
Modern astronomers tend to fall into two fields: the theoretical and the observational.

Observational astronomers focus on the direct study of stars, planets, galaxies, and so forth.

Theoretical astronomers model and analyze how systems may have evolved.

Unlike most other fields of science, astronomers are unable to observe a system entirely from
birth to death; the lives of worlds, stars, and galaxies span millions to billions of years. As such,
astronomers must rely on snapshots of bodies in various stages of evolution to determine how
they formed, evolved, and died. Thus, theoretical and observational astronomy tend to blend
together, as theoretical scientists use the information actually collected to create simulations,
while the observations serve to confirm the models or to indicate the need for tweaking them.

Astronomy is broken down into a number of subfields, allowing scientists to specialize in particular objects and phenomena.

Planetary astronomers, for instance, focus on the growth, evolution, and death of planets, while
solar astronomers spend their time analyzing a single star, our sun. Stellar astronomers turn
their eyes to the stars, including the black holes, nebulae, white dwarfs, and supernovae that
result from stellar deaths.

Galactic astronomers study our galaxy, the Milky Way, while extragalactic astronomers peer
outside of it to determine how these collections of stars form, change, and die.

Cosmologists focus on the universe in its entirety, from its violent birth in the Big Bang to its
present evolution, all the way to its eventual death. Astronomy is often (not always) about very
concrete, observable things, whereas cosmology typically involves large-scale properties of the
universe and esoteric, invisible and sometimes purely theoretical things like string theory, dark
matter and dark energy, and the notion of multiple universes.

Astronomical observers rely on different wavelengths of the electromagnetic spectrum (from radio
waves to visible light and on up to X-rays and gamma rays) to study the wide span of objects in
the universe. The first telescopes focused on simple optical studies of what could be seen with
the naked eye, and many telescopes continue that today.
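The relation between wavelength and photon energy that underlies these different observing bands can be written (a standard physics relation, not something specific to this book) as

E = h\nu = \frac{hc}{\lambda},

so shorter wavelengths carry more energy per photon: a 500 nm visible-light photon carries about 2.5 eV, while a 0.1 nm X-ray photon carries roughly 12 keV.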

Light of every wavelength travels at the same speed; what changes with a photon's energy is its wavelength, and
different telescopes are necessary to study the various wavelengths. More energetic radiation, with
shorter wavelengths, appears in the form of ultraviolet, X-ray, and gamma-ray wavelengths, while
less energetic objects emit longer-wavelength infrared and radio waves.
Astrometry, the most ancient branch of astronomy, is the measurement of the positions and motions of the
sun, moon, and planets. The precise calculation of these motions allows astronomers in other fields to
model the birth and evolution of planets and stars, and to predict events such as eclipses, meteor
showers, and the appearance of comets.

Early astronomers noticed patterns in the sky and attempted to organize them in order to track and
predict their motion. Known as constellations, these patterns helped people of the past to measure
the seasons. The movement of the stars and other heavenly bodies was tracked around the
world, but such records were especially prevalent in China, Egypt, Greece, Mesopotamia, Central America, and India.

The image of an astronomer is a lone soul at a telescope during all hours of the night. In reality,
most hard-core astronomy today is done with observations made at remote telescopes on the
ground or in space that are controlled by computers, with astronomers studying computer-
generated data and images.

Since the advent of photography, and particularly digital photography, astronomers have
provided amazing pictures of space that not only inform science but enthrall the public.

Astronomers and spaceflight programs also contribute to the study of our own planet, when
missions aimed at looking outward (or travelling to the moon and beyond) look back and
snap great pictures of Earth from space.

After Galileo Galilei, Kepler's contribution to the study of astronomy and planetary orbits reached well beyond the
Renaissance period. His discoveries had an impact on other areas of science, including physics, optics and
crystallography. Kepler's interest in the refraction of light in different media helped explain how lenses
work and how the eye functions like a camera. Kepler also wrote a long essay, a treatise on the geometry of
snowflakes. This was an early contribution to crystallography, the study of crystals. Kepler could be classified as an
unsung hero. He was not as well known as Galileo or Newton, but his discoveries were equally important. Without
Kepler's laws of planetary motion, Isaac Newton might never have come to his conclusion about the law of gravity.
As an author, his knowledge of the solar system allowed his imagination to run wild and he wrote one of the first
works of science fiction, The Dream. Kepler would be proud to know that science fiction is a popular genre loved
by people of all ages. In astronomy, Kepler's laws of planetary motion are three scientific laws describing the
motion of planets around the Sun.
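The third of these laws can be stated compactly: the square of a planet's orbital period T is proportional to the cube of the semi-major axis a of its orbit. With the constant of proportionality supplied later by Newton's law of gravity, it reads

T^2 = \frac{4\pi^2}{G M_\odot}\, a^3,

where M_\odot is the mass of the Sun and G the gravitational constant; this is precisely the bridge between Kepler's empirical laws and Newtonian gravitation mentioned above.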

CONCLUSION 2
We presented predictions for the evolution of the atomic and molecular hydrogen content of galaxies from
z ≈ 6 to z = 0, based on a semi-analytic model of galaxy formation, including new modeling of the partitioning
of cold gas in galactic discs into atomic, molecular, and ionised phases. We present results for two
different H2 formation recipes: one a pressure-based recipe motivated by the empirical relation between
molecular fraction and gas midplane pressure from Blitz & Rosolowsky (2006), and one based on
numerical hydrodynamic simulations in which the molecular fraction is highly dependent on the cold gas
metallicity as well as the local UV background (Gnedin & Kravtsov 2011). We compared our predictions to
local and high-redshift observations and adopted an alternate approach in which we estimate the CO
content of galaxies and compare directly with CO observations. We summarize our main findings below.

Without any tuning, our models correctly predict the trends between gas fractions and gas-to-stellar-mass ratios of HI and H2 in local galaxies with mass and internal density. We furthermore reproduce the HI and H2 disc sizes of local and high-redshift galaxies.
Both H2 formation recipes reproduce the observed z = 0 HI mass function fairly well over the whole range probed by observations. Both models predict a small excess of low-HI-mass galaxies. The high-mass end of the HI mass function remains remarkably constant at redshifts of z ≲ 2.0 for both H2 formation recipes.
Both recipes correctly predict the H2 mass function over the entire mass range probed. The number density of H2-massive galaxies increases from z ≈ 6 to z ≈ 4.0, after which it remains fairly constant, whereas the number density of low-H2-mass galaxies decreases almost monotonically from z ≈ 4 to z ≈ 0.
Galaxy gas fractions remain relatively high (≳ 0.7) from z ≈ 6 to 3, then drop fairly rapidly. A similar trend holds for the H2 fraction of galaxies, but the drop occurs at an even higher rate.
The metallicity-based recipe yields a much higher cosmic density of cold gas over the entire redshift range probed. The cosmic H2 fraction as predicted by the metallicity-based recipe is much lower than the H2 fraction predicted by the pressure-based recipe.
The galaxies responsible for the high cosmic gas density and low cosmic H2 fraction all reside in low-mass halos (log(M_halo/M☉) < 10), and contain negligible amounts of stellar material. The build-up of atomic gas in these low-mass halos is driven by a lack of metals at high redshift, necessary to form molecular gas, stars, and produce more metals.
The conversion of H2 masses to CO luminosities provides valuable direct predictions for future surveys with ALMA at low redshifts or radio interferometers such as the VLA at higher redshifts. None of the presented methods for the CO-to-H2 conversion predicts perfect agreement with observations from the literature, although the physically motivated nature of the Narayanan et al. (2012) and Feldmann, Gnedin & Kravtsov (2012) approaches is favoured over a constant CO-to-H2 conversion factor.
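For orientation, the pressure-based recipe referred to above is usually written as a power law relating the molecular-to-atomic surface density ratio to the hydrostatic midplane pressure. In the Blitz & Rosolowsky (2006) form it is

R_{\mathrm{mol}} = \frac{\Sigma_{\mathrm{H_2}}}{\Sigma_{\mathrm{HI}}} = \left( \frac{P_{\mathrm{ext}}}{P_0} \right)^{\alpha},

with a fitted exponent α close to 0.9 and a normalisation pressure P_0/k_B of order 4 × 10^4 K cm^-3; the exact fitted values are those quoted in the published relation rather than anything derived in this book.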

The results presented in this book can serve as predictions for future surveys of the atomic and
molecular content of galaxies. We look forward to observations from new and upcoming facilities that will
be able to confront our predictions, further constraining the physics that drives the formation of molecules
and the evolution of gas in galaxies.
Astronomers recently used large radio telescopes in Germany and Australia to map the distribution of
hydrogen gas throughout our galaxy, the Milky Way. Meanwhile, another team of researchers is looking
back in time to track the so-called hydrogen epoch in early cosmic history, when the first celestial lights
in the universe turned on. Although all astronomers now recognize hydrogen as the most abundant
element in space and the main component of stars and galaxies, that fact was slow to dawn. In 1924,
when astronomy student Cecilia Payne discovered that stars consisted mostly of hydrogen, the idea
seemed ludicrous.

CONCLUSION 3

Star formation is the process by which dense regions within molecular clouds in interstellar space,
sometimes referred to as "stellar nurseries" or "star-forming regions", collapse to form stars.[1] As a branch
of astronomy, star formation includes the study of the interstellar medium (ISM) and giant molecular
clouds (GMC) as precursors to the star formation process, and the study of protostars and young stellar
objects as its immediate products. It is closely related to planet formation, another branch of astronomy.
Star formation theory, as well as accounting for the formation of a single star, must also account for the
statistics of binary stars and the initial mass function.
A spiral galaxy like the Milky Way contains stars, stellar remnants, and a diffuse interstellar medium (ISM)
of gas and dust. The interstellar medium consists of 10^-4 to 10^6 particles per cm^3 and is typically
composed of roughly 70% hydrogen by mass, with most of the remaining gas consisting of helium. This
medium has been chemically enriched by trace amounts of heavier elements that were ejected from stars
as they passed beyond the end of their main sequence lifetime. Higher density regions of the interstellar
medium form clouds, or diffuse nebulae,[2] where star formation takes place.[3] In contrast to spirals,
an elliptical galaxy loses the cold component of its interstellar medium within roughly a billion years, which
hinders the galaxy from forming diffuse nebulae except through mergers with other galaxies.[4]
In the dense nebulae where stars are produced, much of the hydrogen is in the molecular (H2) form, so
these nebulae are called molecular clouds.[3] Observations indicate that the coldest clouds tend to form
low-mass stars, observed first in the infrared inside the clouds, then in visible light at their surface when
the clouds dissipate, while giant molecular clouds, which are generally warmer, produce stars of all
masses.[5] These giant molecular clouds have typical densities of 100 particles per cm^3, diameters of
100 light-years (9.5×10^14 km), masses of up to 6 million solar masses (M☉),[6] and an average interior
temperature of 10 K. About half the total mass of the galactic ISM is found in molecular clouds[7] and in
the Milky Way there are an estimated 6,000 molecular clouds, each with more than 100,000 M☉.[8] The
nearest nebula to the Sun where massive stars are being formed is the Orion nebula, 1,300 ly
(1.2×10^16 km) away.[9] However, lower-mass star formation is occurring about 400-450 light years distant
in the ρ Ophiuchi cloud complex.[10]
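Why such cold, dense regions collapse can be checked with a rough Jeans-mass estimate. The short Python sketch below plugs in the figures quoted above (T ≈ 10 K, about 100 particles per cm^3); the mean molecular weight is an assumed value, so treat the result as order-of-magnitude only:

import math

k_B   = 1.380649e-23   # Boltzmann constant, J/K
G     = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
m_H   = 1.6735e-27     # mass of a hydrogen atom, kg
mu    = 2.3            # assumed mean molecular weight for cold molecular gas
M_sun = 1.989e30       # solar mass, kg

T   = 10.0             # interior temperature quoted above, K
n   = 100.0 * 1e6      # 100 particles per cm^3, converted to per m^3
rho = mu * m_H * n     # mass density, kg/m^3

# Standard Jeans mass: M_J = (5 k_B T / (G mu m_H))^(3/2) * (3 / (4 pi rho))^(1/2)
M_J = (5 * k_B * T / (G * mu * m_H)) ** 1.5 * math.sqrt(3.0 / (4.0 * math.pi * rho))
print("Jeans mass ~ %.0f solar masses" % (M_J / M_sun))

With these numbers the Jeans mass comes out at a few tens of solar masses, so clumps above that mass are unstable to collapse; a cloud of thousands to millions of solar masses therefore fragments into many collapsing cores rather than remaining in equilibrium.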
A more compact site of star formation is the opaque clouds of dense gas and dust known as Bok globules;
so named after the astronomer Bart Bok. These can form in association with collapsing molecular clouds
or possibly independently.[11] The Bok globules are typically up to a light year across and contain a
few solar masses.[12] They can be observed as dark clouds silhouetted against bright emission nebulae or
background stars. Over half the known Bok globules have been found to contain newly forming stars. Key
elements of star formation are only available by observing in wavelengths other than the optical. The
protostellar stage of stellar existence is almost invariably hidden away deep inside dense clouds of gas
and dust left over from the GMC. Often these star-forming cocoons, known as Bok globules, can be seen
in silhouette against bright emission from surrounding gas.[31] Early stages of a star's life can be seen
in infrared light, which penetrates the dust more easily than visible light.[32] Observations from the Wide-
field Infrared Survey Explorer (WISE) have thus been especially important for unveiling numerous Galactic
protostars and their parent star clusters.[33][34] Examples of such embedded star clusters are FSR 1184,
FSR 1190, Camargo 14, Camargo 74, Majaess 64, and Majaess 98. The structure of the molecular cloud
and the effects of the protostar can be observed in near-IR extinction maps (where the number of stars is
counted per unit area and compared to a nearby zero-extinction area of sky), in continuum dust emission
and in rotational transitions of CO and other molecules; these last two are observed in the millimeter
and submillimeter range. The radiation from the protostar and early star has to be observed at infrared
wavelengths, as the extinction caused by the rest of the cloud in which the star is forming is
usually too great to allow us to observe it in the visual part of the spectrum. This presents considerable
difficulties as the Earth's atmosphere is almost entirely opaque from 20 μm to 850 μm, with narrow windows
at 200 μm and 450 μm. Even outside this range, atmospheric subtraction techniques must be used.
X-ray observations have proven useful for studying young stars, since X-ray emission from these objects is
100-100,000 times stronger than X-ray emission from main-sequence stars.[37] The earliest detections of
X-rays from T Tauri stars were made by the Einstein X-ray Observatory.[38][39] For low-mass stars X-rays
are generated by the heating of the stellar corona through magnetic reconnection, while for high-
mass O and early B-type stars X-rays are generated through supersonic shocks in the stellar winds.
Photons in the soft X-ray energy range covered by the Chandra X-ray Observatory and XMM Newton may
penetrate the interstellar medium with only moderate absorption due to gas, making the X-ray a useful
wavelength for seeing the stellar populations within molecular clouds. X-ray emission as evidence of stellar
youth makes this band particularly useful for performing censuses of stars in star-forming regions, given
that not all young stars have infrared excesses.[40] X-ray observations have provided near-complete
censuses of all stellar-mass objects in the Orion Nebula Cluster and Taurus Molecular Cloud.[41][42]
The formation of individual stars can only be directly observed in the Milky Way Galaxy, but in distant
galaxies star formation has been detected through its unique spectral signature.

CONCLUSION 4
Spiral arms are regions of stars that extend from the center of spiral and barred spiral galaxies. These
long, thin regions resemble a spiral and thus give spiral galaxies their name. Naturally,
different classifications of spiral galaxies have distinct arm-structures. Sc and SBc galaxies, for instance,
have very "loose" arms, whereas Sa and SBa galaxies have tightly wrapped arms (with reference to the
Hubble sequence). Either way, spiral arms contain many young, blue stars (due to the high mass density
and the high rate of star formation), which make the arms so bright.
Spiral galaxies consist of five distinct components:

A flat, rotating disc of (mostly newly created) stars and interstellar matter
A central stellar bulge of mainly older stars, which resembles an elliptical galaxy
A near-spherical halo of stars, including many in globular clusters
A supermassive black hole at the very center of the central bulge
A near-spherical dark matter halo
The relative importance, in terms of mass, brightness and size, of the different components varies from
galaxy to galaxy.
A bulge is a huge, tightly packed group of stars. The term commonly refers to the central group of stars
found in most spiral galaxies.
Using the Hubble classification, the bulge of Sa galaxies is usually composed of Population II stars, that
are old, red stars with low metal content. Further, the bulge of Sa and SBa galaxies tends to be large. In
contrast, the bulges of Sc and SBc galaxies are much smaller and are composed of young,
blue Population I stars. Some bulges have similar properties to those of elliptical galaxies (scaled down to
lower mass and luminosity); others simply appear as higher density centers of disks, with properties similar
to disk galaxies.
Many bulges are thought to host a supermassive black hole at their centers. Such black holes have not
been directly imaged, but much indirect evidence exists. In our own galaxy, for instance, the object
called Sagittarius A* is believed to be a supermassive black hole. There is a tight correlation between the
mass of the black hole and the velocity dispersion of the stars in the bulge, the M-sigma relation.
The bulk of the stars in a spiral galaxy are located either close to a single plane (the galactic plane) in
more or less conventional circular orbits around the center of the galaxy (the Galactic Center), or in
a spheroidal galactic bulge around the galactic core.
However, some stars inhabit a spheroidal halo or galactic spheroid, a type of galactic halo. The orbital
behaviour of these stars is disputed, but they may describe retrograde and/or highly inclined orbits, or not
move in regular orbits at all. Halo stars may be acquired from small galaxies which fall into and merge with
the spiral galaxy; for example, the Sagittarius Dwarf Spheroidal Galaxy is in the process of merging with
the Milky Way, and observations show that some stars in the halo of the Milky Way have been acquired
from it.

Unlike the galactic disc, the halo seems to be free of dust, and in further contrast, stars in the galactic halo
are of Population II, much older and with much lower metallicity than their Population I cousins in the
galactic disc (but similar to those in the galactic bulge). The galactic halo also contains many globular
clusters.
The motion of halo stars does bring them through the disc on occasion, and a number of small red
dwarfs close to the Sun are thought to belong to the galactic halo, for example Kapteyn's
Star and Groombridge 1830. Due to their irregular movement around the center of the galaxy (if they do
so at all), these stars often display unusually high proper motion.
The pioneer of studies of the rotation of the Galaxy and the formation of the spiral arms was Bertil
Lindblad in 1925. He realized that the idea of stars arranged permanently in a spiral shape was untenable.
Since the angular speed of rotation of the galactic disk varies with distance from the centre of the galaxy
(via a standard solar system type of gravitational model), a radial arm (like a spoke) would quickly become
curved as the galaxy rotates. The arm would, after a few galactic rotations, become increasingly curved
and wind around the galaxy ever tighter. This is called the winding problem. Measurements in the late
1960s showed that the orbital velocity of stars in spiral galaxies with respect to their distance from the
galactic center is indeed higher than expected from Newtonian dynamics but still cannot explain the
stability of the spiral structure.
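The winding problem can be made concrete with a short sketch. Assuming a flat rotation curve with a constant circular speed (the speed and disc age below are illustrative assumptions), material closer to the centre completes many more turns, so an initially radial material arm shears into an ever tighter spiral:

import math

v_kms      = 220.0        # assumed constant circular speed, km/s
t_yr       = 10e9         # assumed age of the disc, years
KM_PER_KPC = 3.086e16     # kilometres in one kiloparsec
t_s        = t_yr * 3.156e7   # convert years to seconds

for R_kpc in (2.0, 4.0, 8.0, 16.0):
    # number of full rotations completed at radius R in time t
    turns = v_kms * t_s / (2 * math.pi * R_kpc * KM_PER_KPC)
    print("R = %4.1f kpc -> about %5.1f rotations" % (R_kpc, turns))

The inner disc completes several times more rotations than the outer disc over the same interval, which is exactly the differential shearing that would wrap a permanent material arm far more tightly than the open spirals actually observed.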
Since the 1960s, there have been two leading hypotheses or models for the spiral structures of
galaxies: the density wave model, in which star formation is caused by density waves in the galactic disk;
and the SSPSF (stochastic self-propagating star formation) model, in which star formation is caused by
shock waves in the interstellar medium. These different hypotheses do not have to be
mutually exclusive, as they may explain different types of spiral arms.
CONCLUSION 5

The Origins of Stars and Planets

Like the giant galaxies in which they appear, stars and their planets form when clumps of gas and dust contract to
much smaller sizes. During the first phases of star formation, each of these contracting clumps was too cool to
produce visible light. Within these clumps, the attraction of each part for all the other parts caused the clumps to
shrink steadily, squeezing their material into ever-smaller volumes. As the clumps continued to contract, the resulting
increase in density caused a corresponding rise in temperature at the clumps' center. Eventually, as this central
temperature rose above 10 million degrees, atomic nuclei began to fuse. The onset of nuclear fusion, which marks the
birth of a new star, occurred nearly 5 billion years ago in the case of our Sun. In the case of the oldest stars that shine,
this onset of nuclear fusion began 10 to 14 billion years ago.

During the later stages of the contraction process, a rotating disk of gas and dust formed around the central mass that
would become a star. Detecting these protoplanetary disks, the precursors of planetary systems around stars that are in
the process of formation, requires telescopes with improved angular resolution, sufficient to reveal more than the
disks' bare outlines. We now know that other stars have planets, as revealed by recent astronomical measurements
that detected the pull exerted on their stars by large, Jupiter-like planets.

Many of the initiatives recommended by the Astronomy and Astrophysics Survey Committee will address the origins
of stars and planets. NGST and GSMT will probe the dusty environments of star-forming regions with unprecedented
sensitivity and angular resolution. Existing ground-based telescopes will be made much more powerful through new
instruments provided by the Telescope System Instrumentation Program. Protoplanetary disks are much cooler than
stars and emit most of their radiation in the infrared region of the spectrum. To permit observations in the far
infrared, the committee recommends the development of the Single Aperture Far Infrared Observatory (SAFIR).
Observations at millimeter and different infrared wavelengths will enable astronomers to measure the concentrations
of different species of atoms and molecules in the disk. It will also be possible to determine the speeds at which these
particles are moving and the temperatures to which they have been heated.

The Telescope System Instrumentation Program (TSIP) will leverage non-federal investment in large new ground-
based telescopes.

The AASC's highest-priority recommendation in the moderate-cost category for both space- and ground-based
initiatives promotes not an instrument but a program, one that will fund instruments for the new generation of large
telescopes that are being constructed at university and independent observatories. The Telescope System
Instrumentation Program (TSIP) will leverage these investments by markedly improving the equipment that detects
and analyzes the radiation reaching these telescopes. In particular, TSIP will assist the development of systems for
adaptive optics, which continuously readjust the reflecting surface of a telescope, canceling the blurring effects of the
atmosphere. Adaptive optics will allow a manyfold increase in the angular resolving power of all large telescopes.
This improvement will give these telescopes an increased ability to study a host of phenomena. Among these are the
atmospheres of the other planets in the solar system, the structure of protoplanetary disks around other stars, the
behavior of matter in active galactic nuclei, the history of star formation in young galaxies, and the nature of the
objects that produce mysterious bursts of gamma rays.

The Single Aperture Far Infrared Observatory (SAFIR) will provide our most sensitive eye on the far-infrared frontier.

The Next Generation Space Telescope (NGST) will enable infrared observations with about three times the angular
resolution and 100 times the sensitivity of the HST. However, the NGST cannot observe infrared radiation with the
longest wavelengths, the far-infrared domain of the spectrum. This spectral region is rich in information about stars
and galaxies in the process of forming; brown dwarfs ("failed stars" that have too little mass to begin nuclear fusion);
and ultraluminous, infrared-radiating galaxies. Although significant improvements in observations of the far- infrared
domain will occur with the coming deployment of the Space Infrared Telescope Facility, the airborne Stratospheric
Observatory for Infrared Astronomy, and the European Space Agency's Herschel Space Observatory, longer-
wavelength observations with greater sensitivity are needed. The recommended next step for observing the cosmos at
far- infrared wavelengths is the space-borne Single Aperture Far Infrared (SAFIR) Observatory. SAFIR will include
both a telescope with a mirror at least as large as that of the NGST and a set of cooled instruments. Its size and
temperature will give it an angular precision and an ability to detect faint sources that will make it roughly a million
times superior to existing instruments that observe the far-infrared spectral domain. Because the NGST will pioneer
cost-effective development of space-borne telescopes with mirrors larger than the HST's, SAFIR can be designed
and built more cheaply than the NGST. Among other longer-wavelength tools, the Combined Array for Research in
Millimeter-wave Astronomy and the South Pole Submillimeter-wave Telescope will be powerful instruments for studying
star-forming molecular clouds and other dusty parts of the universe, as well as clusters of galaxies.

CONCLUSION 6

The first person we know of to suggest that the Sun is a star up close (or, conversely, that stars are Suns far away)
was Anaxagoras, around 450 BC. It was again suggested by Aristarchus of Samos, but this idea did not catch on.
About 1800 years later, around AD 1590, Giordano Bruno suggested the same thing, and was burnt at the stake for it.
Through the work of Galileo, Kepler, and Copernicus during the 16th and 17th centuries the nature of the solar
system and the Sun's place in it became clear, and finally in the 19th century the distances to stars and other things
about them could be measured by various people. Only then was it proved that the Sun is a star.

For most of human history, almost all people have thought that the Earth was in the center of a giant sphere (or ball,
called the "celestial sphere") with the stars stuck to the inside of the sphere. The planets, Sun, and Moon were thought
to move between the sphere of stars and the Earth, and to be different from both the Earth and the stars.
Anaxagoras, who lived in Athens, Greece, around 450 BC (about 2450 years ago), thought that the Sun and stars
were fiery stones, that the stars were too far away for their heat to be felt, and that the Sun was perhaps more than a
few hundred miles in size. With that Anaxagoras was, as far as we know, the first one to suggest that the Sun is a star.
His ideas were met with disapproval and he was finally imprisoned for impiety, because his ideas did not fit the
prejudices of the time.
Aristarchus of Samos (Samos is a Greek island in the Aegean Sea) lived from about 310 to 230 BC, about 2250 years
ago. He measured the size and distance of the Sun and, though his observations were inaccurate, found that the Sun is
much larger than the Earth. Aristarchus then suggested that the small Earth orbits around the big Sun rather than the
other way around, and he also suspected that stars were nothing but distant suns, but his ideas were rejected and later
forgotten, and he, too, was threatened for suggesting such things. Aristarchus and Anaxagoras had no way of actually
measuring the sizes of or distances to stars (except the Sun), so they had no proof for their ideas.

Claudius Ptolemaeus (commonly called Ptolemy by speakers of English) of Alexandria (a Greek city in what is now
Egypt) around AD 140 (about 1860 years ago) described a geocentric (= earth-centered) model of the universe, with
the Earth in the center of the Universe, the Sun as one of the wanderers ("planetes" in Greek) that move relative to the
stars, and the stars fixed to the outermost celestial sphere. In this model, the stars and the Sun were completely
different. The universe described in his book (which came to be known as the Almagest) was accepted as the
truth by practically everybody for the next 14 centuries, mostly because it was endorsed by the Roman Catholic
Church, which became very powerful during that time. This model described fairly accurately how planets move, but
not why they moved in just that way, and it lumped the Sun together with the planets rather than with the stars.
Mikolaj Kopernik (known as Nicholas Copernicus outside of his native Poland) lived from 1473 to 1543. In 1543,
just before he died, he published a book called "De revolutionibus orbium celestium" in which he proposed a
heliocentric (= sun-centered) solar system with the Sun in the center and the Earth merely one of the planets orbiting
the Sun, just like the other ones. This model was simpler than Ptolemy's geocentric model, though either one could be
used to predict planetary motion. The model of Copernicus set the Sun apart from the planets, but did not say
anything about the stars. Copernicus waited as long as possible before publishing this book because he was afraid the
Church would not approve of it. At first, most opposition to his ideas actually came from Protestants, not Catholics.
Martin Luther, one of the main early figures in Protestantism, declared loudly that Copernicus was a fool for "setting
the Earth in motion".
Giordano Bruno, an Italian philosopher, lived from 1548 to 1600. He decided that if the Earth is a planet just like the
others, then it does not make sense to divide the Universe into a sphere of fixed stars and a solar system. He said that
the Sun is a star, that the Universe is infinitely large, and that there are many worlds. He was condemned by both the
Roman Catholic and Reformed Churches for this as well as other things and was burnt alive in Rome in 1600 for
heresy (claiming something that does not fit the ideas accepted by the Church).
Galileo Galilei, an Italian scientist, lived from 1564 to 1642. In 1610, he was the first person we know of to use the
newly invented telescope to look at the stars and planets. He discovered the satellites of Jupiter, which showed that
Ptolemy's and the Church's idea that there was only one center of orbits in the Universe (namely, the Earth) was
incorrect. Based on his observations, Galilei argued for the heliocentric model of Copernicus. He noticed that stars
look like little points even when seen through a telescope, and concluded that stars must be very far away indeed.
In part because Bruno (a convicted heretic) supported them, the ideas of Copernicus were condemned by the Catholic
Church in 1616, and Galileo was tried and convicted of heresy in 1633. He was forced to publicly deny the ideas of
Copernicus, and was held under house arrest until he died in 1642. In 1979 a reinvestigation of this conviction was
started by the Church and finally the conviction was overturned, about 340 years after Galileo's death. A famous
story, but perhaps untrue, has Galileo mutter (of the Earth) "And yet she moves!" on his death-bed. Yet, Galileo, like
Bruno and Aristarchus before him, had no proof that the Sun and stars are alike.
Johannes Kepler of Germany lived from 1571 to 1630. He studied the positions of planets very carefully and from
that determined three Laws of planetary motion that firmly put the Sun in the center of the solar system with the
planets orbiting the Sun. It was now clear that the Sun is not a planet, though why these laws of planetary motion
should be the way they are was still unclear.
Christiaan Huygens of Holland lived from 1629 to 1695. He determined the distance to the star Sirius, assuming that
that star was as bright as the Sun and appeared faint only because of its great distance. He found that the distance to
Sirius must be very great. At this time, then, the idea that the Sun is a star was considered seriously by scientists.
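Huygens' reasoning can be written with the inverse-square law. If Sirius is assumed to have the same luminosity L as the Sun, then from the apparent flux F = L / (4\pi d^2) the ratio of distances follows from the ratio of apparent brightnesses,

\frac{d_{\mathrm{Sirius}}}{d_{\mathrm{Sun}}} = \sqrt{\frac{F_{\mathrm{Sun}}}{F_{\mathrm{Sirius}}}},

so a star appearing billions of times fainter than the Sun must lie tens of thousands of times farther away. The equal-luminosity assumption is the weak link, which is why this argument gave only a rough distance rather than a measurement.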
Isaac Newton, an English scientist, lived from 1642 to 1727. In 1665 he realized that it was gravity that held the solar
system together. Another famous story, probably untrue, has this thought pop into Newton's head when an apple falls
on his head while he sits under an apple tree, watching the Moon. Newton then determined the formula that describes
how gravity works and showed that this explains the orbits and motion of the planets around the Sun and of moons
around planets, and therefore also Kepler's three Laws of planetary motion. The motion of the planets and moons
were now explained by a single formula: Newton's Law of Gravity. People speculated that this same law might be
valid all through the universe.
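The formula referred to here is Newton's law of universal gravitation,

F = \frac{G\, m_1 m_2}{r^2},

the attractive force between two masses m_1 and m_2 separated by a distance r, with G the gravitational constant. Combined with Newton's laws of motion, it reproduces Kepler's three laws of planetary motion as consequences.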
Finally, in 1838, Friedrich Bessel for the first time measured the distance to a star without any assumptions about the
nature of stars and found it to be enormous. Distances to other stars followed soon, and then people could calculate
the true brightnesses of stars, corrected for their distance to us, and discovered them to be about as bright as the Sun.
When other things about the Sun were also found to be like those of stars, such as its surface temperature and
chemical composition, then the proof was finally here that the Sun is a star. The Sun is now classified as a G2V star: a
main-sequence dwarf star of moderate temperature.

CONCLUSION 7

A star is formed by processes of nuclear fission and nuclear fusion. In nuclear physics and nuclear
chemistry, nuclear fission is either a nuclear reaction or a radioactive decay process in which
the nucleus of an atom splits into smaller parts (lighter nuclei). The fission process often produces
free neutrons and gamma photons, and releases a very large amount of energy even by the energetic
standards of radioactive decay.
Nuclear fission of heavy elements was discovered on December 17, 1938 by the German chemist Otto Hahn and his
assistant Fritz Strassmann, and explained theoretically in January 1939 by Lise Meitner and her
nephew Otto Robert Frisch. Frisch named the process by analogy with biological fission of living cells. It is
an exothermic reaction which can release large amounts of energy both as electromagnetic radiation and
as kinetic energy of the fragments (heating the bulk material where fission takes place). In order for fission
to produce energy, the resulting elements must be more tightly bound, with a greater total binding energy
than that of the starting element.
Fission is a form of nuclear transmutation because the resulting fragments are not the same element as
the original atom. The two nuclei produced are most often of comparable but slightly different sizes,
typically with a mass ratio of products of about 3 to 2, for common fissile isotopes.[1][2] Most fissions are
binary fissions (producing two charged fragments), but occasionally (2 to 4 times per 1000
events), three positively charged fragments are produced, in a ternary fission. The smallest of these
fragments in ternary processes ranges in size from a proton to an argon nucleus.
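As a rough illustration of the energy scale (a standard textbook figure, not a result of this book), the energy released in a single fission event follows from the mass defect,

E = \Delta m\, c^2,

and for a heavy nucleus such as uranium-235 the fragments plus emitted neutrons are lighter than the original nucleus by roughly 0.2 atomic mass units, corresponding to about 200 MeV per fission, on the order of 0.1% of the rest-mass energy.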

In nuclear physics, nuclear fusion is a reaction in which two or more atomic nuclei come close enough to
form one or more different atomic nuclei and subatomic particles (neutrons or protons). The difference in
mass between the products and reactants is manifested as the release of large amounts of energy. This
difference in mass arises due to the difference in atomic "binding energy" between the atomic nuclei
before and after the reaction. Fusion is the process that powers active or "main sequence" stars, or
other high magnitude stars.
The fusion process that produces a nucleus lighter than iron-56 or nickel-62 will generally yield a net
energy release. These elements have the smallest mass per nucleon and the largest binding
energy per nucleon, respectively. Fusion of light elements toward these releases energy
(an exothermic process), while fusion producing nuclei heavier than these elements will result in energy
being retained by the resulting nucleons, and the resulting reaction is endothermic. The opposite is true for the
reverse process, nuclear fission. This means that the lighter elements, such as hydrogen and helium, are
in general more fusible, while the heavier elements, such as uranium and plutonium, are more fissionable.
The extreme astrophysical event of a supernova can produce enough energy to fuse nuclei into elements
heavier than iron.
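A minimal worked example of the fusion yield is the net conversion of four hydrogen nuclei into one helium-4 nucleus, the reaction chain that powers main-sequence stars. The atomic masses below are standard values, rounded, so treat the last digits as approximate:

M_H1     = 1.007825    # atomic mass of hydrogen-1, in atomic mass units (approximate)
M_HE4    = 4.002602    # atomic mass of helium-4, in atomic mass units (approximate)
U_TO_MEV = 931.494     # energy equivalent of one atomic mass unit, in MeV

dm = 4 * M_H1 - M_HE4                                   # mass defect of the net reaction, in u
print("mass defect: %.5f u" % dm)                       # about 0.029 u
print("energy released: %.1f MeV" % (dm * U_TO_MEV))    # about 26.7 MeV
print("fraction of rest mass converted: %.2f %%" % (100 * dm / (4 * M_H1)))   # about 0.7 %

About 0.7% of the hydrogen's rest mass is converted to energy, which is why a star can shine for billions of years on its hydrogen supply.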

Gamma-ray astronomy also contributes to the study of the formation of stars. Gamma-ray bursts (GRBs) are extremely
energetic explosions that have been observed in distant galaxies. They are the
brightest electromagnetic events known to occur in the universe.[1] Bursts can last from ten milliseconds to
several hours.[2][3][4]After an initial flash of gamma rays, a longer-lived "afterglow" is usually emitted at
longer wavelengths (X-ray, ultraviolet, optical, infrared, microwave and radio).[5]
The intense radiation of most observed GRBs is believed to be released during
a supernova or hypernova as a rapidly rotating, high-mass star collapses to form a neutron star, quark
star, or black hole. A subclass of GRBs (the "short" bursts) appear to originate from a different process:
the merger of binary neutron stars. The cause of the precursor burst observed in some of these short
events may be the development of a resonance between the crust and core of such stars as a result of the
massive tidal forces experienced in the seconds leading up to their collision, causing the entire crust of the
star to shatter.[6]
The sources of most GRBs are billions of light years away from Earth, implying that the explosions are
both extremely energetic (a typical burst releases as much energy in a few seconds as the Sun will in its

entire 10-billion-year lifetime)[7] and extremely rare (a few per galaxy per million years [8]). All observed
GRBs have originated from outside the Milky Way galaxy, although a related class of phenomena, soft
gamma repeater flares, are associated with magnetars within the Milky Way. It has been hypothesized that
a gamma-ray burst in the Milky Way, pointing directly towards the Earth, could cause a mass
extinction event.[9]
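The energy comparison quoted above can be checked with a few lines of arithmetic. The sketch below uses the standard solar luminosity and a 10-billion-year lifetime; the 10^44-joule figure adopted for a typical burst is an illustrative assumption consistent with the statement in the text, not a measured value.

    # Rough, illustrative check: the Sun's lifetime energy output versus the
    # energy a gamma-ray burst releases in seconds.
    L_SUN = 3.828e26                  # solar luminosity, watts
    LIFETIME_S = 10e9 * 3.156e7       # 10 billion years, in seconds

    e_sun_lifetime = L_SUN * LIFETIME_S
    print(f"Sun's lifetime energy output: {e_sun_lifetime:.1e} J")   # ~1.2e44 J

    grb_energy = 1e44                 # assumed energy of a typical burst, joules
    burst_duration = 10.0             # assumed duration, seconds
    print(f"Implied burst luminosity: {grb_energy / burst_duration:.0e} W")

A comparable amount of energy emerging in ten seconds rather than ten billion years is what makes these events the brightest electromagnetic sources known.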
GRBs were first detected in 1967 by the Vela satellites, which had been designed to detect covert nuclear
weapons tests. Following their discovery, hundreds of theoretical models were proposed to explain these
bursts, such as collisions between comets and neutron stars.[10] Little information was available to verify
these models until the 1997 detection of the first X-ray and optical afterglows and direct measurement of
their redshifts using optical spectroscopy, and thus their distances and energy outputs. These discoveries,
and subsequent studies of the galaxies and supernovae associated with the bursts, clarified the distance
and luminosity of GRBs, definitively placing them in distant galaxies.

CONCLUSION 8

In the twentieth century, Schrodinger's Cat Paradox and Heisenberg's Uncertainty Principle were two ideas that changed
the world. The beginning of the twentieth century marked the golden era of theoretical physics. This summit was
obviously achieved as a result of the epic contributions by scientists who dared to break sacred conventions.
However, credit must also be given to the editors of scientific journals who similarly had the audacity to publish such
unconventional theories. I highly doubt whether today a magazine would publish the ground-breaking work
of Schrodinger and Heisenberg, Richard Feynman, Albert Einstein, Paul Dirac, and others whose remarkable ideas
changed our outlook on the world.

To this very day, the ideas developed by Schrodinger and Heisenberg are difficult to comprehend. My own wave
theory is based on the idea that every single quant formation is composed of two perpendicular energetic swirls
(loops): a largely invisible electric loop (dark matter) and a magnetic (shining) loop that is visible and palpable. In
fact, research from all disciplines of the natural sciences inevitably leads us to this wave formation, as in an electric
shock. It is the logical conclusion of all empirical studies as well as astronomical and biological observations.
Although the theory also utilizes mathematical equations, it does not get bogged down in such calculations.

Einstein had deep reservations about his colleagues' works and sensed that physicists were overlooking some sort of
element that would synchronize all the theories into a coherent whole. He felt that theoretical physics had failed to
offer an adequate explanation for vast formations, but also admitted that his own research was far from flawless.
Wave theory's basic two-loop structure not only proves that Einstein, Planck, Schrodinger and Heisenberg were
correct, but provides the missing link, the electric loop (swirl) joined to the magnetic loop (swirl), that thus unites all their
theories. In other words, all these ideas are compatible with each other. Furthermore, wave theory covers the behavior
of both the largest and smallest formations.

Ariny Amos is always awestruck by how these revolutionary scientists reached their conclusions without any
empirical backing. Today, there is sound evidence of an inseparable bond between the magnetic and electric
components of every natural energetic formation. In effect, both components combine to form one entity (quant)
despite the fact that they possess intrinsically different behaviors. Consequently, the wave formation is constantly in a
state of both superposition and internal competition. This sophisticated structure essentially unites the ideas of all the
above-mentioned physicists. Although this is an observable phenomenon, I too still have great difficulties digesting
this baffling relationship.

Michael Faraday reasoned that space is not empty, but connected by lines of force. Charles-Augustin de Coulomb
discovered that two charged entities could either pull or push each other without being connected, depending on
their polarity in space. Hans Christian Oersted found that a charged wire creates magnetic fields that influence the
orientation of nearby compass needles. Furthermore, Coulomb showed that an equation of the same form as Isaac
Newton's law of gravitational force also governs electric forces.

Every experiment that has examined empty space suggests that it contains mysterious forces that link various
entities together. Given the ability of this mediating matter to cause magnetic needles to rotate, we may conclude that
it is not static. In fact, it transfers energy via existing paths or creates paths on its own.

As early as 1900, scientists were familiar with Faraday's experiments and Maxwell's equations, which confirmed the
former's work. Wave theory attempts to prove that these invisible forces are energetic (electro-magnetic) matter and
that they are indeed the building blocks of the entire universe. In other words, energetic matter creates the primary
formations that Murray Gell-Mann referred to as quarks.

In bubble chamber experiments, particles that are destroyed in collisions leave behind spiral formations. These
formations must be quarks, as rudimentary energetic matter cannot exist on its own for long and must join a stable
wave formation. All waves are composed of two loops, which provide them with their rigidity.

Objections will undoubtedly be raised over my contention that the universe and the wave formation are
VIRTUAL entities. However, when I use the term "virtual," Ariny Amos is not implying that it is a structure that can
easily be disposed of or sundered apart. Moreover, all of us are aware of how unpleasant and real the supposedly
virtual sensation of an electric shock can be.

Our very universe is also a wave formation with two energetic swirls that comprise a stable wave. On account of its
two loops, the wave formation takes up its own space (the Pauli Exclusion Principle) and is endowed with its own
energetic capacity and direction; it does not randomly turn over. It is both elastic and rigid! Moreover, it contains
genes that were formed over the course of its entire existence. Even photons, the simplest formations of energetic
matter known to science, provide us with access to the entire history of the universe and many other facts. The
wave formation is ostensibly simple, but it will take generations to fully comprehend, as the notion that nothing
created everything is beyond our wildest imaginations and contradicts our most basic instincts.

Dated 6th-7th November 2017: Astronomical observations have provided us with many examples of this dual structure, such
as double galactic and dual star formations (like Galaxy 2207). Moreover, there are photographs of formations in
which stars that are connected in a chain-like pattern by gaseous bridges depart from galactic clouds (see pictures
below). This duality can also be found in photon streams as well as in biological entities like DNA strands. Nature
strives for simplicity. While the double loop (helix) formation is very simple, it is nevertheless extremely
sophisticated.

Schrodinger and Heisenberg lacked the invisible energetic loop. However, they overcame this deficiency by virtue of
their equations. In a roundabout way, their ideas helped us formulate the concept of an invisible (or missing)
energetic presence (the additional loop), which all such formations must contain.

All wave formations consist of peaks and valleys, known as quantum vacuum energy, in which the magnetic swirl
constitutes the highpoint. This involves a constant superposition that moves in a popping manner within the loop of
the wave formation. Energetic matter behaves in a mercurial fashion, but its range is limited to closed formations
along its own path.

Both Schrodinger and Heisenberg came to the conclusion that there must be some invisible (missing) entity that is
linked to the visible magnetic formation. Heisenberg's uncertainty principle clearly shows that formations must
contain an additional energetic dimension in order to account for the behavior of subatomic particles. Wave theory
explains how the invisible loop connects to and complements the visible, quantifiable loop by forming a tunnel, as
seen in the primary formation of photon streams and DNA.
This same hunch led to Schrodinger's declaration of superposition, which he explained using his famous Cat
Paradox. Oersted (the magnetic needle) and Faraday's experiments also provide firm evidence that every energetic
behavior is the product of dual formations. I have devised the following equations in order to prove this hypothesis:
The Equation of Everything (Phase Transition Equations):
1. High phase transition: the superposition of energetic matter.
2. Wave-particle phase transition (the "Everything Equation").
3. Lower phase transition.

Wave theory similarly proves that the entanglement phenomenon is viable. In other words, it displays how non-
local (distant) subatomic particles with identical proportions and properties can instantly cooperate and
communicate, regardless of the distance between them: all universes respond to universes, galaxies react to other
galaxies, and stars to stars within the same galactic wave. Similarly, planets only interact with other planets within
their solar constellation (wave) and moons with other moons within the same planetary wave. Moreover, atoms only
communicate with atoms within the same wave constellation. Consequently, energetic matter must belong to the
same, all-encompassing wave formation (the universe) in which all formations immediately react to any changes that
occur to any particles of the same variety (size, space, and energy).
Dark matter (also referred to as an invisible loop or dark energy) facilitates the transmission of information between
magnetic loops. Communication is most efficient when it involves similar energetic formations, as seen in photons.
This also appears to occur in strands of DNA, in which energy simultaneously flows back and forth. These ideas,
introduced by the 20th century's preeminent scientists (Maxwell, Einstein, Planck, Born, Bohr, Heisenberg,
Schrodinger, de Broglie, and others), are the springboard and focal point of 21st-century physics. Wave theory, the
simple, logical process of uniting electric and magnetic behaviors as a superposition of identical matter (energetic
matter), is the X-factor that all these outstanding physicists laboriously searched for, and it will catapult physics onto
the next level.

RECOMMENDATIONS

A VISION FOR ASTRONOMY AND ASTROPHYSICS IN THE NEW CENTURY

A key recommendation is the inclusion of the African Astronomical Society and of university programs for the study
of astronomy and physics on the African continent. In the year 1000 AD there were astronomers in only a few places
on Earth: in Asia, particularly China, in the Middle East, and in Mesoamerica. These astronomers were aware of only
six of the nine planets that orbit the Sun. Although they studied the stars, they did not know that the stars
were like the Sun, nor did they have any concept of their distances from Earth. By the year 2000 AD,
humanity's horizons had expanded to include the entire universe. We now know that our Sun is but one
of 100 billion stars in the Milky Way Galaxy, which is but one of about 100 billion galaxies in the visible
universe. More remarkably, our telescopes have been able to peer billions of years into the past to see
the universe when it was young; in one case, when it was only a few hundred thousand years old. All
these observations can be interpreted in terms of the inflationary Big Bang theory, which describes how
the universe has evolved since the first 10^-36 seconds of cosmic time.

It is impossible to predict where astronomy will be in the year 3000 AD. But it is clear that for the
foreseeable future, the defining questions for astronomy and astrophysics will be these:

How did the universe begin, how did it evolve from the soup of elementary particles into the structures
seen today, and what is its destiny?

How do galaxies form and evolve?

How do stars form and evolve?

How do planets form and evolve?

Is there life elsewhere in the universe?

Researchers now have at least the beginnings of observational data that are relevant to all of these
questions. However, a relatively complete answer exists for only one of them: how stars evolve. The
development and observational validation of the theory of stellar evolution was one of the great
triumphs of 20th-century astrophysics. For the 21st century, the long-term goal is to develop a
comprehensive understanding of the formation, evolution, and destiny of the universe and its
constituent galaxies, stars, and planets, including the Milky Way, the Sun, and Earth.

In order to do this, the committee believes that astronomers must do the following:

Map the galaxies, gas, and dark matter in the universe, and survey the stars and planets in the
Galaxy. Such complete surveys will reveal, for example, the formation of galaxies in the early universe
and their evolution to the present, the evolution of primordial gas from the Big Bang into matter
enriched with all the elements by stars and supernovae, the formation of stars and planets from
collapsing gas clouds, the variety and abundance of planetary systems in the Galaxy, and the distribution
and nature of the dark matter that constitutes most of the matter in the universe.

Search for life beyond Earth, and, if it is found, determine its nature and its distribution in the
Galaxy. This goal is so challenging and of such importance that it could occupy astronomers for the
foreseeable future. The search for evidence of life beyond Earth through remote observation is a major
focus of the new interdisciplinary field of astrobiology.

Use the universe as a unique laboratory to test the known laws of physics in regimes that are not
accessible on Earth and to search for new [Link] is remarkable that the laws of physics developed
on Earth appear to be consistent with phenomena occurring billions of light-years away and under
conditions far more extreme than those for which the laws were derived and tested. However,
researchers have only begun to probe the conditions near the event horizons of black holes or in the very
early universe, where the tests of the laws of physics will be much more stringent and where new
physical processes may be revealed that shed light on the unification of the forces and particles of
nature.

Develop a conceptual framework that accounts for all that astronomers have observed. As with all
scientific theories, such a framework must be subject to continual checks by further observation.

For the new decade, astronomers are poised to make progress in five particular areas:

Determining the large-scale properties of the universe: its age, the nature (amount and distribution) of
the matter and energy that make it up, and the history of its expansion;

Studying the dawn of the modern universe, when the first stars and galaxies formed;

Understanding the formation and evolution of black holes of all sizes;

Studying the formation of stars and their planetary systems, and the birth and evolution of giant and
terrestrial planets; and

Understanding the effects of the astronomical environment on Earth.

Table 2.1 lists these science goals and the new initiatives that will address them.

TABLE 2.1 Science Goals for the New Initiatives

Science goal: Determining large-scale properties of the universe
Primary initiatives: NGST, GSMT, LSST (MAP, Planck, SIM)
Secondary initiatives: Con-X

Science goal: Studying the dawn of the modern universe
Primary initiatives: NGST, SKA, LOFAR (ALMA)
Secondary initiatives: Con-X, EVLA, SAFIR, GLAST, LISA, EXIST, SPST

Science goal: Understanding black holes
Primary initiatives: Con-X, GLAST, LISA, EXIST, ARISE
Secondary initiatives: EVLA, LSST, VERITAS, SAFIR

Science goal: Studying star formation and planets
Primary initiatives: NGST, GSMT, EVLA, LSST, TPF, SAFIR, AST, SDO, TSIP, CARMA, SPST (ALMA, SIM, SIRTF, SOFIA)
Secondary initiatives: Con-X, EXIST

Science goal: Understanding the effects of the astronomical environment on Earth
Primary initiatives: LSST, AST, SDO, FASR
Secondary initiatives: GLAST

NOTE: Acronyms are defined in the appendix. Missions and facilities listed in parentheses are those that were
recommended previously but have not yet begun operation. Projects or missions listed in the primary category
are expected to make major contributions toward addressing the stated goal, while secondary projects or
missions would have capabilities that address the goal to a lesser degree.

In addition, the time is ripe for using astronomy as a gateway to enhance the public's understanding of
science and as a catalyst to improve teachers' education in science and to advance interdisciplinary
training of the technical work force.

THE FORMATION AND EVOLUTION OF PLANETS

The discovery of extrasolar planets in the past decade was one of the most remarkable achievements of
the 20th century and represented the culmination of centuries of speculation about planets orbiting stars
other than our Sun. These observations confirmed for the first time that a significant fraction of the stars
in the Milky Way Galaxy have planetary systems; at the same time, the observations brought the
surprising news that a number of planetary systems are very different from our solar system. In fact, the
first extrasolar planetary system discovered is quite exotic: Although it involves terrestrial-mass planets,
the central star is not a normal star like the Sun, but a rapidly spinning neutron star. The first planet
detected around a Sun-like star is much more massive than Earth. Its mass is at least half that of Jupiter,
the largest planet in the solar system, but its orbit is only one-tenth as large as that of the innermost
planet, Mercury (Figure 2.1). Further discoveries indicate that such "hot Jupiters," gas giant planets
orbiting 100 times closer to the host star than their analogs in our own solar system, are surprisingly
common, being found around a few percent of all solar-type stars. It may even be that our own planetary
system is the exception and hot Jupiters the rule.

We are witnessing the birth of a new observational science of planetary systems. The new measurements
of masses and orbital distances of planets demand explanation. The first step is to carry out a census of
extrasolar planetary systems in order to answer the following questions: What fraction of stars have
planetary systems? How many planets are there in a typical system, and what are their masses and
distances from the central star? How do these characteristics depend on the mass of the star, its age, and
whether it has a binary companion?

Astronomers have a number of methods to detect extrasolar planets: astrometry, measurement of


Doppler shifts, photometry, observations of gravitational microlensing, and direct imaging. SIM will
utilize astrometry, a method that uses the back-and-forth motion of stars in the sky to infer the presence
of an orbiting planet, to increase the census of Jovian-mass planets orbiting at relatively large distances
from their central stars. GSMT and other ground-based telescopes will measure small shifts in the
wavelengths of the observed radiation, or the Doppler shifts, caused by the motion of stars toward and
away from us as the planets orbit the stars. The Doppler method has been used almost exclusively in the
past decade and favors small orbital separations and relatively large planets.

FIGURE 2.1 The discovery of the first planet orbiting a Sun-like star outside the solar system was made by
observing small oscillations in the radial velocity Vr of the star 51 Pegasi. These oscillations are caused by
the planet as it orbits the star every 4.2 days. The phase represents the time in units of the 4.2-day
cycle. Courtesy of M. Mayor, D. Queloz, and S. Udry (Universite de Geneve). Reprinted by permission
from Nature 378:355-359, copyright 1995 Macmillan Magazines Ltd.

Photometry measures the small decrease in the light from a star when a planet
orbits between the observer and the star, partially eclipsing the star. Because photometry depends on a
favorable inclination of the orbit, surveys of a large number of stars are required to find the frequency of
planetary systems. Space-based photometry is sufficiently precise that it could extend the census to
planets with masses as low as those of the terrestrial planets. Sensitive photometry of distant stars can
also reveal planets through gravitational microlensing: The

gravitational field of an intervening faint star close to the line of sight to a distant star acts as a lens that
amplifies the light of the distant star; planets orbiting the intervening star can change the amplification in
a detectable manner. However, these methods all detect planets indirectly by their small perturbations
of the light from the central star. The ultimate goal is to see and study the radiation from the planets
themselves. Direct imaging of giant planets can be done from the ground with adaptive optics, but TPF or
an enhanced NGST is needed for terrestrial planets.
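To give a feeling for the size of the Doppler signal involved, the sketch below evaluates the standard circular-orbit expression for the radial-velocity semi-amplitude of a star pulled around by a close-in giant planet. The specific numbers (a 1.05 solar-mass star, a half-Jupiter-mass planet, a 4.2-day period, values similar to those published for 51 Pegasi b) are used here purely as an illustration.

    import math

    # Illustrative radial-velocity semi-amplitude for a circular orbit:
    # K = (2*pi*G/P)**(1/3) * m_p*sin(i) / (m_star + m_p)**(2/3)
    G = 6.674e-11                        # gravitational constant, SI units
    M_SUN, M_JUP = 1.989e30, 1.898e27    # solar and Jovian masses, kg

    def rv_semi_amplitude(m_star, m_planet_sini, period_s):
        return ((2 * math.pi * G / period_s) ** (1 / 3)
                * m_planet_sini / (m_star + m_planet_sini) ** (2 / 3))

    k = rv_semi_amplitude(1.05 * M_SUN, 0.47 * M_JUP, 4.23 * 86400)
    print(f"Expected stellar wobble: about {k:.0f} m/s")   # a few tens of m/s

A wobble of a few tens of meters per second, roughly bicycle speed, is why very precise spectrographs were needed before such planets could be found.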

Once direct imaging is possible, radiation from extrasolar planets can be analyzed to characterize the
atmospheres of the planets: How do the atmospheres depend on the mass of the planet, its separation
from its host star, and the mass of the host star? Do any of the planets appear habitable? Are there any
biological marker materials such as methane, molecular oxygen, or ozone? Observation of the
atmospheres is extremely challenging, owing to confusion with the enormously brighter host star. TPF is
designed to address this problem by using interferometry to null out the radiation from the host star;
with the addition of an occulter NGST may contribute to this goal.

The planetary census, together with new observations of protoplanetary disks, will provide the data
needed to understand planet formation. Observations over the past two decades have established that
protostars are accompanied by disks of gas and dust. These disks are believed to feed the growth of the
stars and are regions where planets could form. Today's instruments do not have the resolution or the
sensitivity to find evidence for the existence of planets in protostellar disks, but ALMA, NGST, and TPF
will. Theory shows that gas giants should create gaps in the disks that will be readily observable by these
powerful instruments. Young giant planets (10 million years old) will emit enough radiation in the near
infrared to be detectable by both NGST and GSMT in the nearby molecular clouds where star formation is
occurring. These observations will reveal how protostellar disks evolve and the conditions under which
planets can form. The existing census of extrasolar planets already indicates a surprising number of
massive planets orbiting extremely close to the central star. Are these planets formed in the outer
regions of the disk and then pushed into tighter orbits by the gravitational interaction with the disk
material or with other planets? The Sun is in the minority in not having a stellar companion. How do
companion stars affect planet formation? Most stars form in large clusters containing massive stars, such
as the cluster associated with the Trapezium in Orion. What is the effect of such an environment on

planet formation? Hubble pictures showing the destruction of protostellar disks in the Orion Nebula
(Figure 2.2) suggest that such an environment is very hostile to planet formation.

Some recent discoveries within our own solar system point the way toward another approach to filling in
some details of the picture of planet formation and evolution. The Kuiper Belt consists of a ring or disk of
subplanetary bodies circling the Sun beyond Neptune. Some 200 Kuiper Belt objects (KBOs) are now
known, with diameters mostly in the 100- to 800-km range (Figure 2.3). Smaller KBOs are too faint to
have been detected in existing surveys; larger ones almost certainly exist but await detection by deep,
all-sky surveys such as will be conducted by LSST. It is thought that as many as 10 more objects of Pluto
size (with a diameter of 2,000 km) await discovery. These KBOs are but the tip of an iceberg. Probably
100,000 objects larger than 100 km exist at distances 30 to 50 times Earth's distance from the Sun. The
number of objects larger than 1 km lies in the range of 1 billion to 10 billion. These objects are fossil
remnants of the Sun's planetary accretion disk, and their motions provide direct evidence of the
protoplanetary disk's physical characteristics. Collisions between these objects provide a long-term
source for tiny dust particles in the solar system. Similar dust disks have been detected recently around
some other main-sequence stars. The Kuiper Belt is probably the source of most short-period comets.

Near-infrared spectra of the KBOs, capitalizing on the huge light-collecting capability of GSMT, will, for the
first time, reveal the composition of comets in their pristine state, prior to entry into the inner solar
system.

The atmospheres of planets can be studied primarily in our own solar system. Except for Uranus, the gas
giant planets emit more energy than they receive from the Sun. Their internal heat production drives
complex and poorly understood systems of convection. The main external manifestations include
differential rotation (as in the Sun) and energetic, weather-like, circulation patterns at the visible cloud
tops. Planetary convection also powers dynamo action, causing the gas giants to support huge radio-
bright magnetospheres. New adaptive optics systems on large-aperture telescopes will provide 10-
milliarcsec resolution in the near infrared (Figure 2.4), enabling the study of long-term changes in
planetary circulation (at Jupiter, 10 milliarcsec = 35 km; at Neptune, 200 km). Such studies will also
provide the context for in situ investigations by NASA spacecraft.

Near-Earth objects (NEOs) are asteroids with orbits that bring them close to Earth. The orbits of many
NEOs actually cross that of Earth,

FIGURE 2.2 Protoplanetary disks in the Orion Nebula. These dark silhouetted disks, sometimes
surrounded by bright ionized gas flows as seen in the cometary shape above, are being destroyed by
intense ultraviolet radiation from nearby massive stars. The rapidity of their destruction may interrupt
planet formation in these disks. Courtesy of C.R. O'Dell (Rice University) and NASA.

FIGURE 2.3 Plan view of the solar system, showing the orbits of the 200 Kuiper Belt objects (KBOs) known
as of October 1999. Red orbits denote KBOs in orbits that are in resonance with Neptune, including Pluto;
blue orbits show nonresonant or classical KBOs; and the large, eccentric orbits with labels denote KBOs
that have been scattered by the gravity of the giant planets. The orbit of Jupiter at 5 AU (AU =
astronomical unit, the distance from Earth to the Sun) is shown for scale. Observations with LSST should
increase the number of known KBOs to 10,000, permitting intensive investigation of the dynamical
structure imprinted on this fossil protoplanetary disk by the formation process. Courtesy of D. Jewitt
(University of Hawaii).

FIGURE 2.4 An image of Neptune taken by the Keck Adaptive Optics Facility in the methane absorption
band at 1.17 µm. The angular resolution of this image is approximately 0.04 arcsec, about an order of
magnitude better than the resolution obtained without adaptive optics. Courtesy of the W.M. Keck
Observatory Adaptive Optics Team. (This figure originally appeared in Publications of the Astronomical
Society of the Pacific [Wizinowich, P., et al., 2000, vol. 112, pp. 315-319], copyright 2000, Astronomical
Society of the Pacific; reproduced with permission of the Editors.)

making NEOs an impact threat to our planet. Extrapolations from existing data suggest that about 1,000
NEOs are larger than 1 km in diameter, and that between 100,000 and 1 million are larger than 100 m.
The effects of past NEO impacts on Earth range from the destruction of hundreds of square miles of
Siberian forest at Tunguska in 1908 by a relatively small NEO to substantial disruption of the biosphere at
the end of the Cretaceous period some 65 million years ago by a large (10-km) NEO. Interplanetary space
is vast, so the probability of a substantial NEO hitting Earth is small: For example, it is estimated that the
probability that an NEO larger than 300 m will strike Earth during this century is about 1 percent.
Nonetheless, it behooves us to learn much more about these objects. Over a decade, LSST will discover
90 percent of the NEOs larger than 300 m, providing information about the origin of these objects in the
process. However, comets also pose a substantial impact hazard, as was dramatically illustrated by the
impact of Comet Shoemaker-Levy on Jupiter (Figure 2.5). Although LSST will discover much about
comets, it will not provide long-term warning of potentially hazardous long-period comets.
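The quoted 1 percent per century can be turned into an average impact interval with a simple Poisson model; the sketch below is only illustrative and assumes impacts of objects larger than 300 m occur independently at a constant average rate.

    import math

    # Illustrative Poisson model for >300-m NEO impacts.
    p_per_century = 0.01
    rate_per_year = -math.log(1.0 - p_per_century) / 100.0
    print(f"Implied average rate: {rate_per_year:.1e} impacts per year")
    print(f"Mean interval between impacts: about {1.0 / rate_per_year:,.0f} years")

    # Probability of at least one impact over longer spans at the same rate.
    for years in (100, 1000, 10000):
        p = 1.0 - math.exp(-rate_per_year * years)
        print(f"P(impact within {years:>6} years) = {p:.1%}")

Under these assumptions an impact of this size is expected roughly once every ten thousand years, which is why surveys such as LSST aim to catalog the population well in advance.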

FIGURE 2.5 The impact of fragment G of Comet Shoemaker-Levy 9 onto Jupiter in July 1994 left dark
rings of substantially altered atmosphere (lower left section of the planet). The thick dark outermost
ring's inner edge has a diameter about the size of Earth's. The impact had an explosive energy equivalent
to roughly a million megatons of TNT. Courtesy of H. Hammel (Massachusetts Institute of Technology)
and NASA.

STARS AND STELLAR EVOLUTION

The development and confirmation of the theory of the structure and evolution of stars represent one of
the great achievements of 20th-century science. Stars are the building blocks of galaxies and are the
atoms of the universe. Essentially all the elements in our bodies except hydrogen were created in the
nuclear fires in stellar interiors. The discovery in the past decade of brown dwarfs, stars too small to
burn hydrogen, has extended the range of stellar masses over which the theory applies. Despite the great
success of this theory, it has a gaping hole: It neither predicts nor explains how stars form. Such
knowledge is critical for understanding not only how planets form, but also how systems of stars, such as
galaxies, must evolve.

STAR FORMATION

Star formation proceeds in the densest regions of opaque clouds of gas and dust that are scattered
throughout the interstellar medium of a galaxy (Figure 2.6). Most of the gas in these clouds is molecular,
and it is highly inhomogeneous. Stars form in the densest parts of molecular clouds when the mutual
gravitational attraction of the gas overcomes the thermal pressure, turbulent motions, and magnetic
fields that support the cloud. The ensuing collapse forms a single star, a binary, or less often, a multiple-
star system. Theory suggests, and observations confirm, that most stars are encircled by disks when they
first form. These disks are the birthplaces of planets. As stars grow by accretion of material from their
disks, powerful bipolar winds are created perpendicular to the disks. These winds interact strongly with
the infalling material and the natal molecular cloud. The mass of a star is the primary determinant of its
characteristics over most of its life, yet researchers do not know what determines the star's birth mass.
There are many other important unsolved problems in star formation as well, including understanding
how molecular clouds form in the interstellar medium, how these clouds evolve to form protostellar
cores, what tips the scales in favor of gravitational collapse, what determines when binaries form, how
stars form in clusters, and how protostellar winds affect star formation.

From a theoretical perspective, studying star formation is challenging because it requires following the
evolution of matter from the very tenuous gas in the interstellar medium, where densities are measured
in the number of particles per cubic centimeter, to stellar interiors, where the densities are measured in
grams per cubic centimeter, a trillion trillion times greater.

FIGURE 2.6 Pillars of interstellar gas being eroded by radiation from massive stars in the Eagle Nebula,
revealing low-mass stars in the process of formation. HST image courtesy of J. Hester and P. Scowen
(Arizona State University), and NASA.

Nevertheless,
considerable progress has been made toward developing a theory, particularly for isolated stars with
masses similar to that of the Sun. Numerical simulation on supercomputers is playing an important role
in this effort. Theories of massive star formation are less advanced because of the strong interaction of
the radiation from these luminous stars with the infalling gas and dust. The theory of star formation in
clusters is similarly primitive because of the complicated interaction of the cores and protostellar winds
in these regions.
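The "trillion trillion" density contrast mentioned above is easy to verify with a one-line unit conversion; the sketch assumes pure hydrogen and a very diffuse interstellar density of one atom per cubic centimeter, both round illustrative numbers.

    # Illustrative unit check: interstellar versus stellar-interior densities.
    M_H = 1.67e-24            # mass of a hydrogen atom, grams
    stellar_density_g = 1.0   # representative stellar-interior density, g/cm^3
    ism_density_n = 1.0       # very tenuous interstellar gas, atoms/cm^3

    stellar_density_n = stellar_density_g / M_H     # ~6e23 atoms/cm^3
    contrast = stellar_density_n / ism_density_n
    print(f"Stellar interior: {stellar_density_n:.1e} atoms/cm^3")
    print(f"Density contrast: about {contrast:.0e}")  # of order 10^24

A factor of order 10^24, a trillion trillion, is the dynamic range a complete theory of star formation has to bridge.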

From an observational perspective, star formation is challenging because dust obscures the regions of
star formation, rendering them largely invisible to optical telescopes. Observation of the formation of
massive stars is even more challenging since the sites of massive-star formation are rare and therefore
on average more distant; furthermore, recent observations show that they are obscured by even more
dust than are the regions of low-mass star formation. Infrared, submillimeter, millimeter, and hard x-ray
radiation penetrate the obscuring dust; in addition, the gas and dust that form stars, disks, and planets
radiate primarily at infrared and longer wavelengths. The substantial improvements in sensitivity and
spatial resolution at these wavelengths obtained with many of the recommended new initiatives,
together with facilities now under development, should lead to great advances in solving the important
problems in star formation (see Table 2.1).

THE SUN

As the nearest star, the Sun provides us with the opportunity to test with exquisite accuracy our
understanding of stellar structure. Using a powerful combination of theory and observation, solar
physicists have done just that over the past decade: By studying tiny oscillations in the Sun (a technique
termed helioseismology), they have shown that theoretical models for the internal structure of the Sun
are accurate to within about 0.1 percent. Solar models are sufficiently accurate that the Sun can now be
used as a well-calibrated source of neutrinos to carry out investigations of the basic physics of these
fundamental particles.

Although understanding of the equilibrium properties of the Sun has been validated by helioseismology,
understanding of the nonequilibrium properties, associated primarily with magnetic fields, remains
poor. Magnetic fields play a crucial role in astrophysical phenomena ranging

from the formation of stars to the extraction of energy from supermassive black holes in galactic nuclei.
The Sun provides a natural laboratory for the study of cosmic magnetism on scales not accessible on
Earth and not resolvable in distant astronomical objects (see Figure 2.7). Solar magnetic fields lead to
space weather, which can destroy satellite electronics and disrupt radio communications. These fields
are also believed to be responsible for the variations in the Sun's luminosity that lead to variations in
Earth's climate on a time scale of centuries. Such climate variations have undoubtedly influenced the
evolution of life on Earth. Other stars are observed to have larger variations in their luminosity, which
could have a correspondingly stronger effect on any life that might exist on planets in those systems.

The first scientific goal for advancing the current understanding of solar magnetism is to measure the
structure and dynamics of the magnetic field at the solar surface down to its fundamental length scale.
This length scale is believed to be determined by the pressure scale height, which is about 70 km, or 0.1
arcsec in angle from Earth; numerical simulations suggest that the size of magnetic flux tubes might be
about half this. AST is designed to achieve this angular resolution. With the collecting area of a 4-m
mirror, it will also have sufficient sensitivity to measure weak magnetic fields on this scale at the requisite
time resolution. AST will permit substantial progress in the understanding of the physical processes in
sunspots. At night, AST will obtain complementary information on the role of stellar magnetic fields by
observing other stars, which can behave quite differently from the Sun. Constellation-X will contribute to
these studies by providing accurate measurements of physical conditions in the coronae of other stars.
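The link between the 70-km length scale quoted above and the 0.1-arcsec angular resolution is a small-angle conversion; the sketch below uses only the standard Earth-Sun distance in addition to the 70-km figure from the text.

    # Illustrative small-angle conversion: a 70-km feature on the Sun seen from Earth.
    AU_KM = 1.496e8               # astronomical unit, kilometers
    ARCSEC_PER_RADIAN = 206265.0

    feature_km = 70.0             # pressure scale height quoted in the text
    angle = feature_km / AU_KM * ARCSEC_PER_RADIAN
    print(f"70 km on the Sun subtends about {angle:.2f} arcsec")   # ~0.1 arcsec

Resolving roughly 0.1 arcsec is exactly the angular resolution that AST is designed to reach.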

The second scientific goal is to measure the properties of the magnetic field throughout the entire solar
volume, extending from below the surface out to 18 solar radii. Below the visible surface of the Sun,
magnetic fields are trapped in the solar gas and move with it. The turbulent convection and the
apparently random emergence of magnetic fields cause surface magnetic fields to be mixed on a range of
scales. An important development of the past decade was the use of acoustic tomography to create
three-dimensional maps of these field structures. Above the surface, in the solar corona, the gas density
drops very rapidly and the situation is reversed: There, the highly conducting solar gases are forced to
move with the magnetic fields, so that the entire outer atmosphere responds continuously to the
motions of the footpoints of the magnetic field trapped in the surface. Extreme ultraviolet measurements

FIGURE 2.7 An image of the full disk of the Sun at x-ray wavelengths (0.0171 µm), which are sensitive
to the emission from a highly ionized iron atom (eight electrons removed). This emission arises from
gases with temperatures between 600,000 and 1 million K. The image is a photomosaic of 42
overlapping 8.5 by 8.5 arcmin images taken by the TRACE spacecraft. Courtesy of NASA and the
Stanford-Lockheed Institute for Space Research.

made by the TRACE spacecraft have shown that as a result, coronal structures are rapidly evolving and
highly inhomogeneous, with loops at 30,000 K adjacent to loops at 3 million K (see Figure 2.7). When
regions with opposite polarity collide, the overlying magnetic fields reconnect and restructure. These
processes release enormous amounts of energy that are responsible for the heating of the outer solar
atmosphere, flares, coronal mass ejections, and the acceleration of the solar wind toward Earth. SDO,
which combines observations of the subsurface, surface, and corona, is designed to collect data to
answer fundamental questions about the interaction of gas flows and magnetic fields, reconnection and
restructuring of magnetic fields, rapid energy release processes, and outward acceleration of solar
material.

Together, AST and SDO will provide a comprehensive view of the dynamics of the solar magnetic field
and lead to a much deeper understanding of cosmic magnetism. In addition, these projects will
revolutionize our understanding of space weather and global change, which are influenced by the Sun
because Earth and the space surrounding it are bathed by the Sun's outer atmosphere.

STELLAR METAMORPHOSIS

Most living things slow down as they age and eventually cease to be able to generate new life. Stars
behave in the opposite fashion: Evolution accelerates when they near the end of their lives as normal
stars, and during the final stages a significant fraction of their mass, enriched with heavy elements
generated in their interiors, is dispersed into surrounding space (Figure 2.8). The ejected gas, mixed with
the local interstellar medium, can then be recycled to form new stars and planetary systems. Left behind
is a compact stellar remnant: a white dwarf, with a radius 100 times smaller than that of the Sun; a
neutron star, with a radius 1,000 times smaller; or a black hole, with an effective radius that, for a mass
comparable to that of a neutron star, is several times smaller yet. Stellar death is thus a
metamorphosis in which stars that are powered by nuclear reactions, like the Sun, are reborn as compact
objects.
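To attach rough numbers to these remnant sizes, the sketch below evaluates the Schwarzschild radius R_s = 2GM/c^2 for a 1.4-solar-mass object and compares it with illustrative white-dwarf and neutron-star radii of about 7,000 km and 10 km; those two radii are typical textbook values assumed for the comparison, not figures from the text.

    # Illustrative comparison of compact-remnant sizes.
    G, C = 6.674e-11, 2.998e8      # SI units
    M_SUN = 1.989e30               # kg
    R_SUN_KM = 6.96e5              # solar radius, km

    def schwarzschild_radius_km(mass_kg):
        """Effective size of a black hole: R_s = 2*G*M / c**2, in kilometers."""
        return 2.0 * G * mass_kg / C ** 2 / 1000.0

    r_bh = schwarzschild_radius_km(1.4 * M_SUN)
    r_wd, r_ns = 7000.0, 10.0      # assumed typical radii, km
    print(f"Sun: {R_SUN_KM:.0f} km;  white dwarf: ~{r_wd:.0f} km;  "
          f"neutron star: ~{r_ns:.0f} km;  1.4 M_sun black hole: {r_bh:.1f} km")

Squeezing more than a solar mass into an object a few kilometers across is what produces the extreme gravitational fields discussed below.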

Most stars with a mass more than about eight times that of the Sun end their lives in a titanic explosion,
a supernova, leaving behind a neutron star or a black hole (Figure 2.9). Stars less massive than about
eight times the mass of the Sun evolve into red giants, so large that at the position of the Sun they would
envelop the orbit of Earth. Their distended envelopes are ejected soon afterward, leaving behind a white

FIGURE 2.8 Hubble Space Telescope image of the planetary nebula NGC 6543, commonly known as the
Cat's Eye Nebula. The inset shows the lower-magnification ground-based image made using the 2.1-m
telescope at Kitt Peak National Observatory under excellent atmospheric conditions. Stars with a mass
less than about eight times that of the Sun evolve to red giant stars, and the red giants end their lives by
ejecting their outer envelopes. The ejected envelopes glow in visible light and are called planetary
nebulae. This image shows the ejected gas, enriched in elements such as carbon by the nucleosynthesis
that occurred in the parent star, as it travels outward into the interstellar medium to be incorporated
eventually into new stars and planets. The Hubble image was obtained by J.P. Harrington and K.J.
Borkowski (University of Maryland), and NASA, and was recolored by B. Balick (University of Washington)
with permission. The ground-based image is courtesy of B. Balick.

FIGURE 2.9 Two supernova remnants observed by the Chandra X-ray Observatory. On the left is an x-
ray color image of Cassiopeia A, the remnant of a supernova that exploded about 300 years ago.

The red, green, and blue regions show where the intensity of low-, medium-, and high-energy x rays,
respectively, is greatest. The x rays from Cassiopeia A are produced by collisions between hot electrons
and ions. The point source near the center is believed to be the compact stellar remnant, a neutron star
or black hole, left behind by the explosion. On the right is an x-ray intensity image of the Crab Nebula,
the remnant of the supernova of 1054 AD. The x rays from the Crab Nebula are produced by electrons
that accelerate to nearly the speed of light and then spiral in the magnetic field of the nebula. Images
courtesy of NASA, the Chandra X-ray Observatory Center, the Smithsonian Astrophysical Observatory,
and J. Hughes (Rutgers University).

dwarf remnant (see Figure 2.8). For all these stars, some newly created elements are ejected from the
surface in stellar winds before the final collapse.

A major goal of stellar astrophysics is to understand the various mechanisms of mass loss and how they
contribute to the continually increasing abundance of heavier elements in the universe. Many of the
recommended new facilities will make strong contributions to the necessary investigations: ALMA and
CARMA by studying the chemistry of the outflows, GSMT by acquiring spatially resolved spectra, and
Constellation-X by observing the newly formed elements in supernova ejecta.

If a white dwarf has a closely orbiting companion star, it may accrete matter from the companion and
become a supernova itself. Such supernovae (called Type Ia) have luminosities that can be calibrated, so
that they can be used as standard candles. This means that their apparent brightness can be converted to
distance. By measuring the distances and redshifts of many supernovae, it is possible to probe the
geometry of the universe (is it flat or curved?) and determine how its expansion rate is changing with
time. One of the major goals of stellar research during this decade will be to understand Type Ia
supernovae both observationally and theoretically in order to calibrate their luminosities. LSST will aid in
discovering large numbers of supernovae, and both NGST and GSMT will enable detailed study of their
spectra even when they are at high redshifts.
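The standard-candle logic is the distance-modulus relation m - M = 5 log10(d / 10 pc). The sketch below inverts it for an assumed Type Ia peak absolute magnitude of about -19.3, a commonly used calibration introduced here for illustration rather than a value taken from the text.

    # Illustrative standard-candle distances from the distance modulus.
    M_PEAK = -19.3      # assumed peak absolute magnitude of a Type Ia supernova

    def distance_pc(apparent_mag, absolute_mag=M_PEAK):
        """Invert m - M = 5*log10(d / 10 pc) for the distance d in parsecs."""
        return 10.0 ** ((apparent_mag - absolute_mag + 5.0) / 5.0)

    for m in (14, 19, 24):
        d = distance_pc(m)
        print(f"apparent magnitude {m}: distance ~ {d:.2e} pc ({d / 1e6:.0f} Mpc)")

Because the calibration enters only as an additive constant in magnitude, the key observational task described above is pinning that constant down.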

Stars that are reborn as compact objects have such strong gravitational fields at their surfaces that they
radiate high-energy photons when material falls on them, thus making them observable in the x-ray
region of the spectrum. Neutron stars and white dwarfs also radiate the thermal energy stored in them
at birth, and if they are magnetized and spinning, they can accelerate particles that also radiate. These
objects provide laboratories in which matter can be studied under extreme conditions that cannot be
duplicated on Earth. For example, the past decade saw the discovery of the theoretically predicted
magnetars, which are neutron stars with magnetic fields 100 times that of normal neutron stars and a
billion times that of the largest static fields in the laboratory. One of the major goals of Constellation-X is
to image gas indirectly as it accretes onto a black hole, by studying how its spectrum evolves with time.
Another goal is to measure accurately how the radius of a neutron star depends on its mass, which will
tell researchers about the properties of matter at nuclear densities.

Gamma-ray bursts are mysterious phenomena discovered by satellites that were monitoring the
skies for possible thermonuclear test explosions. At its peak, the energy flux observed from a single burst
can be greater than that from all of the nighttime stars and galaxies in the universe! The apparent
brightness of the bursts led many astronomers to conclude that they had to be in our galaxy, but during
the 1990s the Compton Gamma Ray Observatory found them to be equally distributed over the whole
sky and therefore almost certainly extragalactic. The Italian-Dutch BeppoSAX satellite permitted more
accurate localization of a few of the bursts, leading to the discovery of theoretically predicted afterglows
at other wavelengths (Figure 2.10). Observational monitoring of these afterglows confirmed that the
bursts originate from the far reaches of the universe. While the precise origin of the bursts remains a
mystery, it is believed that they are most likely associated with the formation of compact stellar objects
such as neutron stars and black holes (Figure 2.11). With GLAST, EXIST, and MIDEX missions such as
Swift, it will be possible to find gamma-ray bursts that are fainter than those previously visible and to
locate them more quickly for prompt follow-up observations at other wavelengths. Because they are so
luminous, bursts associated with the first generation of star formation may be detectable.

GALAXIES

On very large scales, galaxies are the building blocks of the universe, as fundamental to astrophysics as
ecosystems are to the environment. They come in a variety of types, ranging from disk galaxies like the
Milky Way to elliptical and irregular systems. While visible primarily through the light from the stars they
contain, galaxies are actually far more complex than a simple grouping of stars. Most of their matter is
dark in that it is not visible at the sensitivity limits of today's telescopes. Many galaxies, including our
own, harbor supermassive black holes in their nuclei, and these will almost certainly have an important
role in galactic evolution. Finally, in most galaxies there is a significant amount of gas and dust between
the stars, out of which new stars continue to form.

FORMATION AND EVOLUTION OF GALAXIES

During the past decade astronomers were for the first time able to study galaxies so distant that their
light was emitted when the universe was only a small fraction of its present age. From the work of Edwin
Hubble in the 1920s, astronomers have learned that the universe is expanding in such a way that distant
galaxies are moving away from us at higher speeds than are nearby ones. The expansion of the universe
redshifts radiation to longer wavelengths, or from blue to red. Greater redshifts correspond to more
distant galaxies. Since it takes light longer to travel greater distances, greater redshifts also correspond to
earlier epochs in the universe (Figure 2.12). Galaxies have been discovered at redshifts up to about 5.

FIGURE 2.10 Observations at many wavelengths are needed to understand gamma-ray bursts. This
gamma-ray burst was discovered by the Compton Gamma Ray Observatory (CGRO) on January 23,
1999. The optical flash from the gamma-ray burst was observed by the Robotic Optical Transient Search
Experiment (ROTSE) 22 seconds later. Subsequently, the BeppoSAX satellite detected the x-ray emission
from the burst. Based on preliminary information from BeppoSAX, astronomers at the Palomar
Observatory identified the precise location. Astronomers at one of the Keck telescopes were then able to
obtain the spectrum and determine the distance. Within a day, radio astronomers used the Very Large
Array to observe the fading afterglow of the burst. After 17 days, the burst had faded enough so that
astronomers using the Hubble Space Telescope could observe the host galaxy. This is probably the most
energetic gamma-ray burst ever recorded. Images courtesy of NASA, CGRO BATSE Team, ROTSE Project,
J. Bloom (Caltech), BeppoSAX GRB Team, W.M. Keck Observatory, NSF/NRAO, A. Fruchter (STScI), and P.
Tyler (NASA GSFC).

Astronomers are also able to study galaxies at high redshifts by taking advantage of the sensitivity and
angular resolution available with the Hubble Space Telescope (HST). Deep observations of two patches of
sky, one in the north and one in the south, have revealed the morphology of these distant galaxies (the
northern deep field is shown on the cover of this report). The conclusion of these studies is that galaxies
have undergone enormous evolution since they were young, with large galaxies probably growing out of
mergers of smaller ones. Observations at submillimeter wavelengths have suggested that some galaxies
contain sufficient dust so that they reprocess a significant fraction of their starlight into far-infrared
emission. As a consequence, optical and near-infrared observations are blind to as much as one-half of
the star formation that has occurred in galaxies, a problem that observations with ALMA, SAFIR, FIRST,
and SIRTF will overcome. SPST will survey the sky at submillimeter wavelengths, finding many high-
redshift galaxies that these other telescopes can target.

Galaxies are often found in clusters, and these clusters are thought to grow in size by the merging of
smaller clusters. As gas falls into clusters, it is heated to very high temperatures and emits x rays.
Constellation-X will

492
FIGURE 2.11 Simulation of two neutron stars spiraling into each other, ejecting hot gas (orange
filamentary structures) and neutron-rich matter (blue/green, snail-shaped structure) in the process.

Green represents higher-density matter than does blue. Gamma-ray bursts could be produced by such
mergers. The white dots represent background stars added for visual effect. Simulation by P. Gressman
(Washington University in St. Louis), and visualization by W. Benger (Max-Planck-Institut fur Gravitations
Physik, Konrad-Zuse-Institut). Courtesy of the NASA Neutron Star Grand Chal-lenge Project.

be able to observe this emission from the first clusters of galaxies that form in the universe, revealing
how they formed. Complementary observations with NGST and GSMT will show the evolution of
clustering in cosmic time and how the cluster environment affects the evolution of galaxies.

As remarked above, present observations of galaxies do not extend much beyond a redshift of 5. The
time between the recombination epoch at a redshift of about 1,000, when the cosmic background
radiation was emitted, and that of redshift 5 remains completely unexplored. This period contains the
dark ages, when the visible light of the Big Bang faded and darkness descended. The dark ages ended
with the formation of the first stars and galaxies: the dawn of the modern universe. The new decade
brings the possibility of seeing the first generation of stars and galaxies that mark this dawn. NGST is
designed to have the sensitivity and wavelength coverage to detect light from the first generation of
galaxies, out to a redshift of about 20. With NGST it will be possible to address a number of fundamental
questions: When did the first galaxies and stars form? What is the history of galaxy formation in the
universe? What is the history of star formation and element production in galaxies? The ability of
ground-based optical and infrared telescopes to address these questions is severely compromised by the
opacity and the thermal emission from the atmosphere at wavelengths longer than 2 µm. NGST will
cover the spectrum out to wavelengths of at least 5 µm, so that, for example, it can observe the
hydrogen-alpha line produced in regions of massive star formation to a redshift of about 6 and the
0.4-µm stellar absorption feature to a redshift in excess of 10.

Extending the sensitivity of NGST farther into the thermal infrared would greatly increase its ability to
study galaxies at high redshifts.
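The wavelength bookkeeping behind these numbers is simply lambda_observed = lambda_rest x (1 + z). The sketch below, using standard rest wavelengths, checks that both features mentioned above still fall inside an assumed 5-µm cutoff at the quoted redshifts.

    # Illustrative redshift check: observed wavelength = rest wavelength * (1 + z).
    CUTOFF_UM = 5.0      # assumed long-wavelength limit, microns

    features = {
        "hydrogen-alpha (rest 0.6563 um)": (0.6563, 6),
        "0.4-um stellar absorption feature": (0.40, 10),
    }
    for name, (rest_um, z) in features.items():
        observed = rest_um * (1 + z)
        inside = "inside" if observed <= CUTOFF_UM else "outside"
        print(f"{name} at z = {z}: observed at {observed:.2f} um ({inside} the band)")

Pushing the cutoff even slightly redward therefore buys access to noticeably higher redshifts for these diagnostics.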

Most of the stars and most of the heavy elements in the universe were formed after the epoch
corresponding to redshift 5. As described above, the past decade has seen pioneering studies of galaxies
in this redshift range, but the sensitivity and resolution have not been adequate to determine how the
morphological and dynamical structure of galaxies has evolved over time. With adaptive optics and its
enormous light-gathering power, GSMT will be a powerful complement to NGST for addressing such
questions. Existing observations indicate very disturbed morphologies, possibly due to mergers, for
galaxies at redshifts beyond 1; GSMT and NGST will be able to distinguish the effects of mergers from
those of rapid star formation. By means of spatially resolved spectroscopy, GSMT will be able to measure
the masses of distant galaxies, thus providing crucial data for studying how galaxies evolve.

FIGURE 2.12 Relationship between the redshift and the time since the Big Bang (the age of the
universe) for two different cosmological models. Both models are flat, but the one represented by the
bottom curve has the critical density in matter, whereas that represented by the top curve (the currently
favored model) has a cosmological constant so that only 30 percent of the critical density is in matter. At
the time of the Big Bang, the age of the universe was zero, and the redshift (z) was extremely large.
Later, prior to the formation of the first stars, the universe went through an epoch in which there was
very little optical light: the dark ages. Estimates are shown of the redshifts at which the first stars
formed, quasars formed, and the Milky Way formed. Today, 1 + z = 1; the age of the universe is 14
billion years in the currently favored model. Courtesy of M. Turner (University of Chicago).

The history of galaxy evolution can also be inferred by studying the stellar populations of local galaxies at
the present epoch. To do this requires the determination of the ages and elemental abundances of stars
as a function of position in nearby galaxies. The high angular resolution available with GSMT means that
it will be able to obtain the spectra of individual stars close to the nuclei of the Milky Way's nearest large
companion galaxies, M31 and M32 (Figure 2.13).

EVOLUTION OF THE INTERSTELLAR MEDIUM IN GALAXIES

The interstellar medium in a galaxy controls the rate of star formation and thus the evolution of the
galaxy itself. It is the repository of the heavy elements produced in stars. If star formation becomes too
violent, interstellar gas may be ejected from a galaxy into the surrounding intergalactic medium. An
understanding of the interstellar medium is necessary if researchers are to address such key questions as
the following: What are the physical processes that determine the rate at which stars form in a galaxy?
What is the feedback between star formation and the interstellar medium? (See Figure 2.6 for an
example.) What is the effect of the extragalactic environment on star formation?

All these issues come into play when the formation of the first galaxies is considered. The first galaxies
formed out of enormous clouds of neutral atomic hydrogen. Once the galaxies had formed, the
interstellar media of these galaxies remained primarily atomic hydrogen, although with increasing
amounts of heavier elements as massive, short-lived stars ejected new elements into the medium. The
hydrogen gas should be observable at redshifts above 10 with LOFAR. When the SKA is built, it will be
able to map the atomic hydrogen up to redshifts of about 10. Within galaxies, some of the atomic gas will
be converted to molecular form on its way to being incorporated into stars. If the earliest stars have
ejected enough carbon and oxygen into the interstellar medium, the broad spectral capabilities of the
EVLA will enable observation of carbon monoxide, the most abundant molecule after molecular
hydrogen, out to redshifts beyond 10. Newly formed stars ionize some of the gas, producing emission
lines detectable by NGST. Supernovae heat large volumes

FIGURE 2.13 This optical wavelength picture shows the large spiral galaxy M31 (also known as the
Andromeda Galaxy) and its small companions M32, lower center, and M110, to the upper right.

Andromeda is the Milky Way's closest large neighbor at a distance of about 2.2 million light-years, and it
is very similar in appearance to, and slightly larger than, the Milky Way. In fact, M31 is visible to the
naked eye, although we can see only the bright inner bulge. This image comes from photographic plates
taken with the 0.6-m Burrell Schmidt telescope of the Warner and Swasey Observatory of Case Western
Reserve University. GSMT will be able to study individual stars near Andromeda's center, which is a very
tightly packed star cluster not visible in this saturated image. Courtesy of B. Schoening (National Optical
Astronomy Observatories) and V. Harvey (University of Nevada, Las Vegas, Research Experience for
Undergraduates program sponsored by AURA/NOAO/NSF).

of the interstellar gas to millions of degrees, and x rays from this hot gas will be measured by
Constellation-X to determine the temperature, pressure, and elemental abundances in this hot plasma.
These same instruments will also permit astronomers to trace the evolution of gas in galaxies through
cosmic time, as the universe synthesizes the elements needed to form planets and eventually to enable
life.

Structure in the interstellar medium of a galaxy spans a wide range of scales, from much less than 1 light-
year for the molecular cores that produce individual stars to 100,000 light-years for the galaxy as a
whole. The gaseous galactic halo extends farther; it comprises both gas blown out of the disk and gas
accreting from the intergalactic medium. Much of the mass of interstellar gas in disk galaxies is atomic
and molecular gas that is quite cold, with a temperature that is less than 100 degrees above absolute
zero. A substantial (but uncertain) fraction of the volume of such galaxies is filled by gas that has been
heated to more than a million degrees by supernova explosions. There is also a significant amount of gas
at intermediate temperatures that is heated by starlight. All this gas is permeated by cosmic rays,
particles moving almost at the speed of light, and by magnetic fields. The primary hindrance to a greater
understanding of how the interstellar medium mediates the evolution of galaxies is ignorance of the
spatial distribution of these various components of the interstellar medium and how they are
interrelated. Surveys of the interstellar medium in nearby galaxies with the recommended radio,
infrared, x-ray, and gamma-ray facilities will provide valuable data on these issues. Understanding the
complex structure of the interstellar medium and how it interacts with the process of star formation is a
daunting theoretical problem for this decade.

GALACTIC NUCLEI

The nucleus of a galaxy is like a deep well: It is easy to fall in, but hard to get out. As a result, gas and stars
accumulate there. In the 1960s, astronomers discovered that some galactic nuclei were truly remarkable:
They could outshine an entire galaxy from a volume not much larger than that of the solar system. These
objects, termed quasars, are the most luminous type of active galactic nucleus. Theorists immediately
conjectured that such prodigious power output could come only from the accretion of gas onto a
supermassive black hole; later it was realized that energy could be extracted from the spin of the black
hole as well. A consequence of these ideas is that many galaxies should harbor supermassive black holes in their nuclei. Three decades later, this conjecture has been amply verified.
Observations of both gas and stars have shown that even in our own backyard, the Milky Way Galaxy
harbors a black hole 3 million times more massive than the Sun (Figure 2.14), and that black hole masses in the nuclei of other galaxies can exceed a billion solar masses. Exquisitely precise
measurements of the positions and three-dimensional velocities of water masers made with the Very
Long Baseline Array (VLBA) toward the nucleus of the galaxy NGC 4258 provided incontrovertible
evidence for the presence of a supermassive black hole (Figure 2.15). ARISE has the power to study the
water emission in other galactic nuclei to search for black holes and determine their mass and the
characteristics of the accreting gas.

FIGURE 2.14 Evidence for a massive black hole at the galactic center, denoted Sgr A*. The data points
are estimates of the distribution of mass, as determined from the motions of stars close to Sgr A*.

The data (filled blue rectangles and light blue bars) are consistent with a 2.9 million solar-mass black hole (thick red curve) or a hypothetical very dense dark cluster (thin dashed green curve) that can be ruled out on theoretical grounds. The data rule out that the observed motions are caused by the gravitational field of the observed stellar cluster (short and long dashed green curve). Courtesy of R. Genzel (Max-Planck-Institut für extraterrestrische Physik).

FIGURE 2.15 Some of the data that provide strong evidence for the presence of a supermassive black
hole in the center of the nearby spiral galaxy NGC 4258.

The top panel is the actual image of the point-like maser clouds constructed from very long baseline
interferometry (VLBI) data having a resolution of 200 microarcsec (with a wire grid depicting unseen
parts of the disk). Also shown is the image of the continuum emission at 1.3-cm wavelength caused by
synchrotron radiation from relativistic electrons emanating from the position of the dynamical center
(black dot). The central mass required to gravitationally bind the system is 39 million solar masses. Since
all the mass must be within the inner boundary of the molecular disk of about 0.13 pc, this mass is
probably in the form of a supermassive black hole. The bottom panel shows on a larger scale the
synchrotron emission that arises from relativistic electrons ejected along the spin axis of the black hole.
Courtesy of J. Moran and L. Greenhill (Harvard-Smithsonian Center for Astrophysics), and J. Herrnstein
(National Radio Astronomy Observatory).

Observations with HST have confirmed that most nearby galaxies harbor supermassive black holes in
their nuclei.
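
The numbers quoted in the caption of Figure 2.15 can be checked with the Keplerian relation for gas orbiting a central point mass,

M = \frac{v^{2} r}{G} \quad\Longleftrightarrow\quad v = \sqrt{\frac{GM}{r}}.

Inserting M ≈ 3.9 × 10^7 solar masses and r ≈ 0.13 pc gives v ≈ 1,100 km/s, comparable to the roughly 1,000 km/s rotation speeds reported for the water masers (a rough, rounded estimate added for context, not a statement from the survey).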

How do these supermassive black holes form and evolve? Do they grow from stellar seeds or do they
originate at the very beginning of the formation of a galaxy? These key questions are ripe for a frontal
attack now. Addressing them will require the observation of active galactic nuclei (AGNs) when they first
turn on, over the entire electromagnetic spectrum. With its enormous sensitivity in the infrared, NGST
will be able to detect AGNs out to redshifts beyond 10. Radiation emitted in the thermal infrared will be
redshifted into the band detectable by SAFIR. The EVLA will detect much longer wavelength radio
emission from AGNs to redshifts beyond 5. Constellation-X will be able to observe the first quasars even
if they are heavily obscured by dust. EXIST will make a census of obscured, low-redshift AGNs over the
whole sky; this sample can be compared with younger AGNs, seen at high redshifts by Constellation-X, to
study how the AGNs evolve. In this case the energies of the most penetrating hard x rays will be
conveniently shifted by the expansion of the universe into the energy region of maximum sensitivity of
the telescope. Furthermore, by observing the spectrum of hot gas as it disappears into supermassive
black holes, Constellation-X will provide a laboratory for studying the physical processes occurring near
the event horizons of black holes under conditions that differ substantially from those near stellar-mass
black holes.

In a tremendously scaled-up version of the process of mass ejection from disks around protostars,
massive black holes not only accrete material but also eject from their vicinity powerful jets at nearly the
speed of light (Figure 2.16). This highly relativistic material is thought to generate extremely energetic
photons, with frequencies more than 100 billion times that of visible light. VERITAS has the power to
detect individual photons of this radiation interacting with Earth's atmosphere, and can therefore probe
the relativistic particle acceleration occurring near these massive black holes. Observing somewhat less
energetic photons, GLAST will help determine how jets are powered and confined. ARISE has the spatial
resolution to resolve the base of the jet and thereby provide a complementary probe of the acceleration
region.
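
To put a number on "more than 100 billion times that of visible light": a visible photon carries roughly 2 eV, so the photons in question have energies of order

E \gtrsim 2\ \mathrm{eV} \times 10^{11} \approx 2 \times 10^{11}\ \mathrm{eV} \approx 200\ \mathrm{GeV},

placing them in the very-high-energy gamma-ray band probed by atmospheric Cherenkov telescopes such as VERITAS (the 2 eV figure is a rounded, illustrative value).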

Galaxy mergers are inferred to be common, and it is quite possible that the massive black holes in their
nuclei would merge as well. Such a cataclysmic event would produce powerful gravity waves that could
be detected by LISA out to very large distances (redshifts up to at least 20). This gravitational radiation
would be detectable for up to a year before the actual merger, enabling accurate prediction of the final event so that it could be observed by
telescopes sensitive to the entire range of electromagnetic radiation. Observation of such a merger
would provide a unique test of Einstein's theory of general relativity in the case of strong gravitational
fields. Further discussion of what scientists can learn about black holes can be found in the physics survey
report Gravitational Physics: Exploring the Structure of Space and Time (NRC, 1999).

Galactic nuclei can become extremely luminous as a result of intense bursts of star formation or the
presence of a supermassive black hole.

FIGURE 2.16 The jet produced by the central black hole in the galaxy M87. The Very Large Array (VLA)
image at the upper left shows the radio emission powered by the jet.

The Hubble Space Telescope (HST) image at the upper right shows the narrow jet at similar resolution.
Finally, the Very Long Baseline Array (VLBA) image at the bottom, with more than 100 times the
resolution of the HST image, is the closest view of the origin of such a jet yet obtained. Courtesy of NRAO,
STScI, W. Junor (University of New Mexico), J.A. Biretta and M. Livio (STScI), and NASA. Reprinted by
permission from Nature 401:891-892, copyright 1999 Macmillan Magazines Ltd.

These starbursts may be associated with the initial formation of the galaxy, or they may be triggered by
an interaction with another galaxy. Starbursts are of great interest because they represent an extreme
form of star formation that is not understood; for example, it is not known whether they produce the
same distribution of stellar masses as that observed in our galaxy. Distinguishing starbursts from
supermassive black holes is complicated by the fact that AGNs are often shrouded in dust, so that much
of the direct emission is hidden from view. Long wavelengths penetrate the dust more readily, so the
EVLA, SAFIR, and NGST with an extension into the thermal infrared are all suitable for separating the two
phenomena. Very-high-energy photons can also penetrate the dust, so Constellation-X and EXIST will
provide relevant data as well.

Active galactic nuclei may be the source of ultrahigh-energy cosmic rays (gamma-ray bursts and
intergalactic shocks have also been suggested as the source of these enigmatic particles). These cosmic
rays are generally assumed to be protons that have been accelerated to very high energies. The energies are so large (equivalent to the energy of 1 billion to 100 billion protons at rest) that these cosmic rays can propagate only a limited distance before losing their energy through interactions with the cosmic microwave background radiation. Ongoing experiments with the Fly's Eye in Utah and proposed
experiments with the Southern Hemisphere Pierre Auger Observatory project will add greatly to our
knowledge of these cosmic rays, particularly if the experiments are able to identify their sources.
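
The quoted energy scale is easy to translate: the proton rest energy is about 938 MeV ≈ 10^9 eV, so

E \approx (10^{9}\ \text{to}\ 10^{11}) \times 10^{9}\ \mathrm{eV} \approx 10^{18}\ \text{to}\ 10^{20}\ \mathrm{eV},

the ultrahigh-energy regime targeted by the Fly's Eye and Auger experiments (an illustrative conversion, with the proton rest energy rounded).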

THE UNIVERSE

Observations by NGST should witness the first light from distant galaxies. Long before the stars that
emitted this light were formed, the matter making up the galaxies had to accumulate from the
intergalactic medium. This process of galaxy formation occurred within the background of an expanding
universe. How has the universe evolved through cosmic time? How did structures such as galaxies and
clusters of galaxies develop in the expanding universe? Finally, observations show that not all the matter
that makes up galaxies and clusters of galaxies is visible: What in fact is the composition of the universe?

THE EVOLUTION OF THE UNIVERSE

Evidence indicates that somewhat more than 10 billion years ago the universe was created in a titanic explosion, the Big Bang. What may have preceded this event is unknown. The Big Bang theory allows us to
trace the evolution of the universe back to a time when it was just a soup of elementary particles a few
microseconds after the beginning. Researchers have promising ideas that would enable extending
understanding back to a time before particles existed, when even the largest objects in the universe were
quantum fluctuations. How has the universe expanded since the Big Bang? Astronomers measure the
expansion of the universe through the redshift of the radiation observed. The greater the redshift of light
from an observed object, the more the universe has expanded since that radiation was emitted. The
relationship between the redshift and time (the calibration of the cosmic clock) determines how long ago the radiation was emitted (see Figure 2.12). Using the speed of light to convert time to distance, this relationship can also be used to determine the geometry of the universe (whether space is flat or
curved). The current time scale for the expansion is set by a parameter known as the Hubble constant,
which gives the relation between redshift and distance. Using HST and other telescopes, it has been
possible to establish the value of the Hubble constant with an accuracy approaching 10 percent.
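
In quantitative terms (standard relations, quoted here for orientation rather than taken from the survey), the redshift measures how much the universe has stretched since the light was emitted, and the Hubble constant sets the local rate of expansion:

1 + z = \frac{a(t_{0})}{a(t_{\mathrm{emit}})} = \frac{\lambda_{\mathrm{obs}}}{\lambda_{\mathrm{emit}}}, \qquad cz \approx H_{0}\, d \quad (z \ll 1),

where a(t) is the cosmic scale factor and d the distance to a nearby object.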

In order to derive the age of the universe from the measured value of the Hubble constant, it is necessary
to know how the expansion has accelerated or decelerated with time. The history of the expansion of the
universe depends on the total density of matter in the universe (both ordinary matter and dark matter)
and on the possibly non-zero cosmological constant, which might characterize a sort of dark energy
in the universe. These parameters determine the geometry of the universe and its ultimate fate, whether
it will expand forever or eventually recollapse. Theory suggests that the geometry of the universe is flat;
in this case, the total density of matter and energy is said to have its critical value. Observations of
distant clusters of galaxies indicate that the density of matter is about 30 percent of the critical value.

One of the most exciting developments of the past decade has been the discovery that the cosmological constant may not be zero: our universe appears to be filled with dark energy. This discovery is based on
two independent sets of observations. First, astronomers have found a way to determine the luminosity
of Type Ia supernovae from the rate at which their light declines. Knowledge of the luminosity enables
the determination (or calculation) of the distance to such a supernova by measuring its brightness. The
results show that distant supernovae appear fainter than expected, suggesting that the expansion of the
universe is accelerating. When combined with other data, the observations of supernovae lead to the
conclusion that dark energy makes up perhaps 70 percent of the total density of matter and energy.
Second, observations of fluctuations in the cosmic microwave background (discussed below) strongly
suggest that the universe is indeed flat, so that the total density of matter and energy is at the critical
value. Since estimates of the masses of clusters of galaxies show that the matter density of the universe
has only about 30 percent of the critical value, it follows that the dark energy must make up the
remaining 70 percent. Together with the value of the Hubble constant determined above, the estimated
values of the matter and energy densities yield an age for the universe of about 14 billion years.
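
The supernova argument rests on the inverse-square law for a "standard candle" of known luminosity L: the measured flux F gives the distance via

F = \frac{L}{4\pi d_{L}^{2}} \quad\Longrightarrow\quad d_{L} = \sqrt{\frac{L}{4\pi F}}.

Distant Type Ia supernovae are fainter (smaller F) than a decelerating universe would predict, so their inferred distances are larger, which is the signature of accelerated expansion (a schematic statement of the standard reasoning, ignoring details such as K-corrections and extinction).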

During this decade, observers and theorists will work to understand and extend these observations.
Confirmation that dark energy exists, with a density that rivals that of matter, would be a physical
discovery of the most fundamental significance. Planned observations of the cosmic microwave
background will provide more accurate values of the cosmological parameters, including the density of
ordinary matter. This value of the matter density, when compared to an equally precise determination
derived from a measurement of the primeval deuterium abundance, will allow a fundamental
consistency test of the standard cosmology. Recent measurements of the deuterium abundance in
distant galaxies indicate that this test is feasible; however, a definitive measurement of deuterium is still
needed. NGST will permit the observation of many supernovae at high redshifts, to confirm whether the
universe is actually accelerating. Discovery of a much larger number of supernovae with LSST, followed
up by more sensitive and precise measurements from ground- or space-based telescopes, will permit the
cosmic clock to be calibrated with much greater precision. It should then be possible to determine
whether the cosmological constant is really constant, as Einstein assumed, or evolving with time, as some
current theories suggest.

THE EVOLUTION OF STRUCTURE IN THE UNIVERSE

The seeds of the structure of the universe down to the scale of galaxies, and probably even smaller, were
planted by tiny quantum fluctuations in the first instants of the Big Bang. In order to study how the large-
scale structure in the universe grew from these seeds, it is necessary to study how galaxies are
distributed in space today. Surveys of galaxies carried out more than a decade ago revealed large voids
where few galaxies were visible, and other regions where the density of galaxies was enhanced on scales
up to 300 million light-years in extent. Surveys of galaxies during the past decade have shown that this
appears to be the limiting scale on which large fluctuations in density occur: On larger scales, the
universe appears to be smooth. Surveys under way now, particularly the Sloan Digital Sky Survey, will
provide a far more accurate map of the distribution of galaxies in the nearby universe.

Direct evidence for the early fluctuations that led to this structure is imprinted on the oldest radiation in
the universe, the cosmic microwave background (CMB). This radiation was emitted at a redshift of about
1,000, or a time only several hundred thousand years after the Big Bang, when the temperature of the radiation was somewhat less than that at the surface of the Sun. Today, the temperature of the
background radiation is 1,000 times lower, just 3 degrees above absolute zero, having been cooled by the
expansion of the universe. This radiation was observed with remarkable accuracy by the Cosmic
Background Explorer (COBE), launched in 1989. Data from this satellite showed that the radiation had the
theoretically predicted spectrum of a blackbody. COBE data also revealed tiny spatial ripples in the
intensity of the radiation (Figure 2.17), indicative of density fluctuations that could lead to the observed
large-scale structure of the universe. This set of satellite observations provided, for the first time, direct
experimental evidence for a basic paradigm of scientists' cosmological speculations and established the
quantitative basis for all subsequent work in this field.
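
The temperature history follows the simple scaling (a standard result, added here as a worked check)

T(z) = T_{0}\,(1+z),

so with T_0 ≈ 2.7 K today, the radiation at z ≈ 1,000 had a temperature of roughly 2.7 K × 1,000 ≈ 2,700 K, indeed somewhat below the Sun's surface temperature of about 5,800 K.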

By design, the COBE satellite had very low angular resolution, and therefore it was able to measure
structure in the background radiation only on the largest scales. The characteristics of the background
radiation on smaller scales depend on the matter and energy content of the universe; in concert with
studies at lower redshifts, such as the Sloan Digital Sky Survey and searches for supernovae, these data
can be used to determine all the fundamental properties of the universe, including its age and the
amount of matter and energy it contains. Recent observations imply that the total density of matter and energy is very close to what is needed to make the geometry of the universe flat (see Figure 2.18). NASA's MAP, the European Space Agency's Planck
Surveyor satellite, the ground-based Cosmic Background Imager, and future balloon observations will
dramatically increase the sensitivity of studies of the background radiation. In addition to measuring the
fundamental cosmological parameters with great precision, these missions will provide stringent tests of
current cosmological theories. Ground-based studies will measure the distortion of the spectrum of the
background radiation caused by the hot gas in intervening clusters of galaxies. Combined with
observations by Constellation-X of the properties of this hot gas, these observations will enable
researchers to determine the distances to these clusters, constrain the value of the Hubble constant, and
probe the large-scale geometry of the universe.

One aspect of the cosmic microwave background that these missions will only begin to investigate is its
polarization. Gravitational waves excited during the first instants after the Big Bang should have
produced effects that polarized the background radiation. More precise

FIGURE 2.17 The COBE satellite detected tiny variations in the intensity of the cosmic microwave
background.

The amplitude of the temperature fluctuations is only about 0.00001 K, which reflects the smoothness of
the universe at the time this radiation was emitted, and dramatically confirms the theoretical
expectation that the universe began from a dense, hot, highly uniform state. The COBE data sets were
developed by NASA's Goddard Space Flight Center under the guidance of the COBE Science Working
Group and were provided by the National Space Science Data Center.

FIGURE 2.18 The spectrum of the primordial sound produced by the Big Bang. The sound waves can be
observed through the fluctuations they produce in the temperature of the cosmic microwave
background. Plotted is the mean-square temperature difference between two points in the sky as a
function of their angular separation parameterized by the multipole number l (angular separation ~ 180
degrees/l). The observations were made with the BOOMERANG and MAXIMA balloon-borne telescopes;
data from the COBE differential microwave radiometers (DMR) are also included. The peak in the
spectrum at about 1 degree (l ~ 200) indicates that the universe is nearly spatially flat. The data can be fit
well by models (such as that shown by the solid blue curve) in which only a small fraction of the matter is
normal baryonic matter. Courtesy of the BOOMERANG and MAXIMA collaborations.
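
The multipole number used in Figure 2.18 maps onto an angular scale roughly as

\theta \approx \frac{180^{\circ}}{l},

so the observed peak at l ≈ 200 corresponds to θ ≈ 0.9 degree, the roughly 1-degree scale cited in the caption (an approximate conversion, adequate at this level of precision).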

measurements of the properties of this polarization, to be made by the generation of CMB missions beyond Planck, will enable a direct test of the current paradigm of inflationary cosmology, and at the same time
they will shed light on the physics of processes that occurred in the early universe at energies far above
those accessible to Earth-bound accelerators.

COMPOSITION OF THE UNIVERSE

Ordinary matter is made up of the same atoms as are known to us on Earth. The nucleus of an atom
consists of protons and neutrons. The electrons encircling the nucleus are equal in number to the
protons, although some of these electrons are stripped from the atom if the atom is ionized. Atoms can combine into molecules, which in turn combine to form all the matter we see on Earth.
Atoms can produce light, and by observing light from stars astronomers have concluded that the stars,
too, are made up of atoms. But when astronomers observe larger objects, such as the outer parts of
galaxies or entire clusters of galaxies, they have found that the amount of matter they see in glowing gas
and stars is not enough to hold these objects together by gravity. They therefore have postulated a form
of matter too faint to see through its radiation: dark matter.
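
The underlying argument is dynamical: for material orbiting at speed v at radius r, gravity can hold it only if the enclosed mass is at least

M(<r) \approx \frac{v^{2} r}{G}.

For illustration (these particular numbers are assumptions, not from the survey), an orbital speed of about 220 km/s at 50 kpc from the center of a galaxy like the Milky Way implies an enclosed mass of roughly 6 × 10^{11} solar masses, well beyond what is seen in stars and gas.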

The current state of knowledge of the composition of the universe is shown in Figure 2.19. As discussed
above, recent observations have suggested that the total density of matter and energy is the critical
value necessary for a flat universe. Of this total critical value, about two-thirds is dark energy, whose
nature is unknown, and one-third is matter. Ordinary matter is about 5 percent of the total, and luminous
stars make up only about 0.5 percent. Where is the ordinary matter that is not in luminous stars? A
leading contender for at least some of this missing ordinary matter is hot intergalactic gas, and
Constellation-X will test this hypothesis. An even greater mystery is the nature of the matter that is not
made up of atoms: the dark matter. Some of this matter is composed of neutrinos left over from the Big
Bang. Although the uncertainty in their mass makes it difficult to determine exactly how much,
astrophysical observations suggest that neutrinos do not account for the bulk of the dark matter. The
rest is believed to be in the form of dark matter particles or objects that move relatively slowly, and are
therefore called cold dark matter. Determination of the nature of this cold dark matter is one of the
great unsolved problems in modern astrophysics.

FIGURE 2.19 The makeup of our universe.

Two-thirds of the matter and energy in the universe is in the form of a mysterious dark energy
that is causing the expansion of the universe to speed up, rather than slow down. The other third is in the
form of matter, the bulk of which is dark and which scientists believe is composed of slowly moving
elementary particles (cold dark matter) remaining from the earliest moments after the birth of the
universe. All forms of ordinary matter account for only about 5 percent of the total, of which only about
one-tenth is in stars and a very tiny amount is in the periodic table's heavier elements (carbon, nitrogen,
oxygen, and so on). The idea of particle dark matter was reinforced by recent indications that neutrinos
have mass and thereby account for almost as much mass as do stars. Adapted from a drawing courtesy of
M. Turner (University of Chicago).

The large-scale distribution of the dark matter can be studied through observations of gravitational
lensing. Studies of gravitational lensing have given astronomers their best look at the distribution of dark
matter both in clusters of galaxies and around some individual galaxies. In this decade, surveys of galaxies
over vast areas of the sky with LSST and other telescopes will provide lensing data that describe the dark
matter distribution over supercluster scales, information crucial for understanding the growth of large-
scale structure.

Two leading possibilities for the makeup of dark matter are (1) elementary particles left over from the
earliest moments of creation and (2) objects of stellar mass (massive compact halo objects, or MACHOs).
It is a mark of the uncertainty in this field that these two candidates differ in mass by more than 57
orders of magnitude.
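
The 57-orders-of-magnitude figure follows from comparing a particle of roughly the proton mass with an object of roughly a solar mass:

\frac{M_{\odot}}{m_{p}} \approx \frac{2 \times 10^{30}\ \mathrm{kg}}{1.7 \times 10^{-27}\ \mathrm{kg}} \approx 10^{57}

(a rounded, illustrative comparison; candidate particles such as the neutralino or the axion shift the ratio by a few further orders of magnitude in either direction).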

Theorists predicted that MACHOs, though too faint to be detected by their own emission, could be
detected by gravitational lensing as well: The light of the background star would be amplified as the
MACHO passed in front of the star. During the past decade, several groups independently detected this
phenomenon, which is called microlensing because the mass of the lens is so small compared with that of
galaxies (Figure 2.20). The nature of the MACHOs is a significant mystery: Are they stars made up of
ordinary matter, or are they objects made up of an exotic form of matter? Accurate determination of
their masses would help resolve this question, but to date, definitive measurements have not been
possible; the best estimate is that the typical mass of a MACHO is somewhat less than a solar mass. By
resolving the apparent motion of the stars that are imaged by the MACHOs, SIM will measure the masses
of the MACHOs. Studies of microlensing have had several important spinoffs, including resolution of the
surface of the star being lensed, and demonstration that it should be possible to detect planets as small
as Earth through microlensing observations, as discussed in the "Formation and Evolution of Planets" section of this chapter.
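
For reference, the brightening of a point source lensed by a point mass follows the standard microlensing magnification law

A(u) = \frac{u^{2} + 2}{u\sqrt{u^{2} + 4}},

where u is the angular separation of lens and source in units of the Einstein radius. As the MACHO drifts across the line of sight, u shrinks and A rises, and because gravitational deflection is independent of wavelength the brightening is achromatic, which is the red/blue test illustrated in Figure 2.20 (a standard textbook relation, quoted here for context).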

As yet it is unclear how much MACHOs contribute to the dark matter in the Galaxy. If MACHOs are made
of ordinary matter, then they cannot account for the bulk of the dark matter known to exist in the
universe or even in our own galaxy. As a result, a number of efforts are under way in laboratories around
the world to discover the particle dark matter that may be holding our own Milky Way together. There
are two important

FIGURE 2.20 The first gravitational microlensing light curve, showing the amplification of the light of a
background star by the gravitational field of an intervening object.

These intervening objects, of unknown nature, may contribute to the dark matter in the Galaxy. The
similarity of the curves in red light and blue light helps confirm that the brightening is caused by
gravitational lensing. Courtesy of the MACHO collaboration. Reprinted by permission
from Nature 365:621-623, copyright 1993 Macmillan Magazines Ltd.

ongoing efforts in the United States: (1) the Cryogenic Dark Matter Search II, a search for a particle with
roughly atomic mass called the neutralino, and (2) the U.S. Axion Experiment, a search for an extremely
light dark matter particle called the axion. The existence of the neutralino is a prediction of superstring
theory, a bold and promising attempt to unify gravity with the other forces of nature. The discovery that
neutralinos or axions are the dark matter that binds our own galaxy would shed light not only on the
astrophysical dark matter problem, but also on the unification of the fundamental forces and particles of
nature.

510
NOTES
o Webb 2002
o Wikipedia encyclopedia/files
o Ward & Brownlee 2000, pp. 2729
o Morphology of Our Galaxy's 'Twin' Spitzer Space Telescope, Jet Propulsion Laboratory,
NASA.
o Lineweaver, Charles H.; Fenner, Yeshe; Gibson, Brad K. (2004). "The Galactic Habitable
Zone and the Age Distribution of Complex Life in the Milky
Way" (PDF). Science. 303 (5654): 59
62. Bibcode:2004Sci...303...59L. PMID 14704421. arXiv:astro-ph/0401024
. doi:10.1126/science.1092322.
o Ward & Brownlee 2000, p. 32
o ^ Jump up to:a b Gonzalez, Brownlee & Ward 2001
o Loveday, J. (February 1996). "The APM Bright Galaxy Catalogue". Monthly Notices of the
Royal Astronomical Society. 278 (4): 1025
1048. Bibcode:1996MNRAS.278.1025L. arXiv:astro-ph/9603040
. doi:10.1093/mnras/278.4.1025.
o D. Mihalas (1968). Galactic Astronomy. W. H. Freeman. ISBN 978-0-7167-0326-6.
o Hammer, F.; Puech, M.; Chemin, L.; Flores, H.; Lehnert, M. D. (2007). "The Milky Way,
an Exceptionally Quiet Galaxy: Implications for the Formation of Spiral Galaxies". The
Astrophysical Journal. 662 (1): 322334. Bibcode:2007ApJ...662..322H. arXiv:astro-
ph/0702585 . doi:10.1086/516727.
o "Sibling Rivalry". New Scientist. 31 March 2012.
o Scharf, 2012
o How often does the Sun pass through a spiral arm in the Milky Way?, Karen
Masters, Curious About Astronomy
o Dartnell 2007, p. 75
o Hart, M.H. (January 1979). "Habitable Zones Around Main Sequence
Stars". Icarus. 37 (1): 3517. Bibcode:1979Icar...37..351H. doi:10.1016/0019-
1035(79)90141-6.
o NASA, Science News, Solar Variability and Terrestrial Climate, 8 January 2013
o University of Nebraska-Lincoln astronomy education group, Stellar Luminosity Calculator
o National Center for Atmospheric Research, The Effects of Solar Variability on Earth's
Climate, 2012 Report
o Most of Earths twins arent identical, or even close!, by Ethan on 5 June 2013
o Ward & Brownlee 2000, p. 18
o Schmidt, Gavin (6 April 2005). "Water vapour: feedback or forcing?". RealClimate.
o The One Hundred Nearest Star Systems, Research Consortium on Nearby Stars.
o Ward & Brownlee 2000, pp. 1533
o Minard, Anne (27 August 2007). "Jupiter Both an Impact Source and Shield for Earth".
Retrieved 14 January 2014. without the long, peaceful periods offered by Jupiter's shield,
intelligent life on Earth would never have been able to take hold.
o Batygin et al, pp. 23-24
o Hinse, T.C. "Chaos and Planet-Particle Dynamics within the Habitable Zone of Extrasolar
Planetary Systems (A qualitative numerical stability study)" (PDF). Niels Bohr Institute.
Retrieved 31 October 2007. Main simulation results observed: [1] The presence of high-
order mean-motion resonances for large values of giant planet eccentricity [2] Chaos
dominated dynamics within the habitable zone(s) at large values of giant planet mass.
o "Once you realize that most of the known extrasolar planets have highly eccentric orbits
(like the planets in Upsilon Andromedae), you begin to wonder if there might be something
special about our solar system" (UCBerkeleyNews quoting Extra solar planetary

511
researcher Eric Ford.) Sanders, Robert (13 April 2005). "Wayward planet knocks
extrasolar planets for a loop". Retrieved 31 October 2007.
o Sol Company, Stars and Habitable Planets, 2012Archived 28 June 2011 at the Wayback
Machine.
o pg 220 Ward & Brownlee
o Lissauer 1999, as summarized by Conway Morris 2003, p. 92; also see Comins 1993
o Ward & Brownlee 2000, p. 191
o Ward & Brownlee 2000, p. 194
o Ward & Brownlee 2000, p. 200
o Taylor 1998
o [Link]
o W ARD, R. D. & BROWNLEE, D. 2000. Plate tectonics essential for complex evolution - Rare
Earth - Copernicus Books
o [Link], Fact or Fiction: The Days (and Nights) Are Getting Longer, By
Adam Hadhazy, 14 June 2010
o Dartnell 2007, pp. 6970
o A formal description of the hypothesis is given in: Lathe, Richard (March 2004). "Fast tidal
cycling and the origin of life". Icarus. 168 (1): 18
22. Bibcode:2004Icar..168...18L. doi:10.1016/[Link].2003.10.018. tidal cycling,
resembling the polymerase chain reaction (PCR) mechanism, could only replicate and
amplify DNA-like polymers. This mechanism suggests constraints on the evolution of
extra-terrestrial life. It is taught less formally here: Schombert, James. "Origin of Life".
University of Oregon. Retrieved 31 October 2007. with the vastness of the Earth's oceans
it is statistically very improbable that these early proteins would ever link up. The solution
is that the huge tides from the Moon produced inland tidal pools, which would fill and
evaporate on a regular basis to produce high concentrations of amino acids.
o [Link], Most of Earth's Water Came from Asteroids, Not Comets, By Charles Q.
Choi, 10 December 2014
o NASA, Formation of the Ozone Layer
o NASA, Ozone and the Atmosphere, Goddard Earth Sciences (GES) Data and Information
Services Center
o Emsley, p. 360
o ^ Jump up to:a b Rakov, Vladimir A.; Uman, Martin A. (2007). Lightning: Physics and
Effects. Cambridge University Press. p. 508. ISBN 978-0-521-03541-5.
o NASA, Effects of Changing the Carbon Cycle
o The International Volcanic Health Hazard Network, Carbon Dioxide (CO2)
o NASA, The Water Cycle, by Dr. Gail Skofronick-Jackson
o NASA, What's the Difference Between Weather and Climate?, 1 February 2005
o NASA, Earth's Atmospheric Layers, 21 January 2013
o Lane, 2012
o Origin of Mitochondria
o Ridley M (2004) Evolution, 3rd edition. Blackwell Publishing, p. 314.
o T. Togashi, P. Cox (Eds.) The Evolution of Anisogamy. Cambridge University Press,
Cambridge; 2011, p. 22-29.
o Beukeboom, L. & Perrin, N. (2014). The Evolution of Sex Determination. Oxford
University Press, p. 25 [2]. Online resources, [3].
o Czrn, T.L.; Hoekstra, R.F. (2006). "Evolution of sexual asymmetry". BMC Evolutionary
Biology. 4: 3446. doi:10.1186/1471-2148-4-34.
o ^ (in English) 800 million years for complex organ evolution - Heidelberg University
o Cramer 2000
o Ward & Brownlee 2000, pp. 2715
o Barrow, John D.; Tipler, Frank J. (1988). The Anthropic Cosmological Principle. Oxford
University Press. ISBN 978-0-19-282147-8. LCCN 87028148. Section 3.2

512
o Conway Morris 2003, Ch. 5
o Conway Morris, 2003, p. 344, n. 1
o Gribbin 2011
o Gonzalez, Guillermo (December 2005). "Habitable Zones in the Universe". Origins of Life
and Evolution of Biospheres. 35 (6): 555606. arXiv:astro-ph/0503298
. doi:10.1007/s11084-005-5010-8.
o Extraterrestrials: Where are They? 2nd ed., Eds. Ben Zuckerman and Michael H. Hart
(Cambridge: Press Syndicate of the University of Cambridge, 1995), 153.
o Harvard Astrophysicist Backs the Rare Earth Hypothesis
o Darling 2001
o Darling 2001, p. 103
o Frazier, Kendrick. 'Was the 'Rare Earth' Hypothesis Influenced by a Creationist?' The
Skeptical Inquirer. 1 November 2001
o Schneider, Jean. "Interactive Extra-solar Planets Catalog". The Extrasolar Planets
Encyclopaedia.
o Howard, Andrew W.; et al. (2013). "A rocky composition for an Earth-sized
exoplanet". Nature. 503 (7476): 381
384. Bibcode:2013Natur.503..381H. PMID 24172898. arXiv:1310.7988
. doi:10.1038/nature12767.
o [Link]
o Stuart Gary New approach in search for alien life ABC Online. 22 November 2011
o Clavin, Whitney; Chou, Felicia; Johnson, Michele (6 January 2015). "NASA's Kepler
Marks 1,000th Exoplanet Discovery, Uncovers More Small Worlds in Habitable
Zones". NASA. Retrieved 6 January 2015.
o Kasting 2001, pp. 123
o Borenstein, Seth (4 November 2013). "8.8 billion habitable Earth-size planets exist in
Milky Way alone". [Link]/. Retrieved 5 November 2013.
o Overbye, Dennis (4 November 2013). "Far-Off Planets Like the Earth Dot the
Galaxy". New York Times. Retrieved 5 November 2013.
o Petigura, Eric A.; Howard, Andrew W.; Marcy, Geoffrey W. (31 October
2013). "Prevalence of Earth-size planets orbiting Sun-like stars". Proceedings of the
National Academy of Sciences of the United States of America. 110: 19273
19278. Bibcode:2013PNAS..11019273P. PMC 3845182
. PMID 24191033. arXiv:1311.6806 . doi:10.1073/pnas.1319909110. Retrieved 5
November2013.
o Khan, Amina (4 November 2013). "Milky Way may host billions of Earth-size planets". Los
Angeles Times. Retrieved 5 November 2013.
o Kasting 2001, pp. 118120
o Brumfiel, Geoff (2007). "Jupiter's protective pull
questioned". news@nature. doi:10.1038/news070820-11.
o Horner, J.; Jones, B.W. (2008). "Jupiter friend or foe? I: the asteroids". International
Journal of Astrobiology. 7(3&4): 251261. Bibcode:2008IJAsB...7..251H. arXiv:0806.2795
. doi:10.1017/S1473550408004187.
o Cooper, Keith (12 March 2012). "Villain in disguise: Jupiters role in impacts on Earth".
Retrieved 2 September 2015.
o Howell, Elizabeth (8 February 2017). "Saturn Could Be Defending Earth From Massive
Asteroid Impacts". [Link]. Retrieved 9 February 2017.
o Gipson, Lillian (24 July 2015). "New Horizons Discovers Flowing Ices on Pluto". NASA.
Retrieved 24 July 2015.
o Ward & Brownlee 2000, pp. 191193
o Kranendonk, V.; Martin, J. (2011). "Onset of Plate Tectonics". Science. 333 (6041): 413
414. Bibcode:2011Sci...333..413V. PMID 21778389. doi:10.1126/science.1208766.

513
o ONeill, Craig; Lenardic, Adrian; Weller, Matthew; Moresi, Louis; Quenette, Steve; Zhang,
Siqi (2016). "A window for plate tectonics in terrestrial planet evolution?". Physics of the
Earth and Planetary Interiors. 255: 8092. doi:10.1016/[Link].2016.04.002.
o Stern, S. A.; Cunningham, N. J.; Hain, M. J.; Spencer, J. R.; Shinn, A. (2012). "FIRST
ULTRAVIOLET REFLECTANCE SPECTRA OF PLUTO AND CHARON BY THEHUBBLE
SPACE TELESCOPECOSMIC ORIGINS SPECTROGRAPH: DETECTION OF
ABSORPTION FEATURES AND EVIDENCE FOR TEMPORAL CHANGE". The
Astronomical Journal. 143 (1): 22. Bibcode:2012AJ....143...22S. doi:10.1088/0004-
6256/143/1/22.
o Hand, Eric (2015). "UPDATED: Pluto's icy face revealed, spacecraft 'phones
home'". Science. doi:10.1126/science.aac8847.
o Barr, Amy C.; Collins, Geoffrey C. (2015). "Tectonic activity on Pluto after the Charon-
forming impact". Icarus. 246: 146155. Bibcode:2015Icar..246..146B. arXiv:1403.6377
. doi:10.1016/[Link].2014.03.042.
o Yin, A. (2012). "Structural analysis of the Valles Marineris fault zone: Possible evidence
for large-scale strike-slip faulting on Mars". Lithosphere. 4 (4): 286
330. doi:10.1130/L192.1.
o Greenberg, Richard; Geissler, Paul; Tufts, B. Randall; Hoppa, Gregory V. (2000).
"Habitability of Europa's crust: The role of tidal-tectonic processes". Journal of
Geophysical Research. 105 (E7):
17551. Bibcode:2000JGR...10517551G. doi:10.1029/1999JE001147.
o "Scientists Find Evidence of 'Diving' Tectonic Plates on Europa". [Link].
NASA. 8 September 2014. Retrieved 30 August 2015.
o Emspak, Jesse (25 January 2017). "Pluto's Moon Charon Had Its Own, Icy Plate
Tectonics". [Link]. Retrieved 26 January 2017.
o Valencia, Diana; O'Connell, Richard J.; Sasselov, Dimitar D (November 2007).
"Inevitability of Plate Tectonics on Super-Earths". Astrophysical Journal Letters. 670 (1):
L45L48. Bibcode:2007ApJ...670L..45V. arXiv:0710.0699 . doi:10.1086/524012.
o Cowan, Nicolas B.; Abbot, Dorian S. (2014). "WATER CYCLING BETWEEN OCEAN
AND MANTLE: SUPER-EARTHS NEED NOT BE WATERWORLDS". The Astrophysical
Journal. 781 (1): 27. Bibcode:2014ApJ...781...27C. arXiv:1401.0720 . doi:10.1088/0004-
637X/781/1/27.
o Mayor, M.; Udry, S.; Pepe, F.; Lovis, C. (2011). "Exoplanets: the quest for Earth
twins". Philosophical Transactions of the Royal Society A: Mathematical, Physical and
Engineering Sciences. 369 (1936):
574. Bibcode:2011RSPTA.369..572M. doi:10.1098/rsta.2010.0245.
o ^ Jump up to:a b Ward & Brownlee 2000, p. 217
o Killen, Rosemary; Cremonese, Gabrielle; Lammer, Helmut; et al. (2007). "Processes that
Promote and Deplete the Exosphere of Mercury". Space Science Reviews. 132 (24):
433509. Bibcode:2007SSRv..132..433K. doi:10.1007/s11214-007-9232-0.
o Grller, H.; Shematovich, V. I.; Lichtenegger, H. I. M.; Lammer, H.; Pfleger, M.; Kulikov,
Yu. N.; Macher, W.; Amerstorfer, U. V.; Biernat, H. K. (2010). "Venus' atomic hot oxygen
environment". Journal of Geophysical
Research. 115 (E12). Bibcode:2010JGRE..11512017G. doi:10.1029/2010JE003697.
o Mahaffy, P. R.; et al. (2013). "Abundance and Isotopic Composition of Gases in the
Martian Atmosphere from the Curiosity Rover". Science. 341 (6143): 263
266. Bibcode:2013Sci...341..263M. PMID 23869014. doi:10.1126/science.1237966.
o Spencer, John R.; Calvin, Wendy M.; Person, Michael J. (1995). "Charge-coupled device
spectra of the Galilean satellites: Molecular oxygen on Ganymede". Journal of
Geophysical Research. 100 (E9):
19049. Bibcode:1995JGR...10019049S. doi:10.1029/95JE01503.
o Esposito, Larry W.; et al. (2004). "The Cassini Ultraviolet Imaging Spectrograph
Investigation". Space Science Reviews. 115 (14): 299
361. Bibcode:2004SSRv..115..299E. doi:10.1007/s11214-004-1455-8.

514
o Tokar, R. L.; Johnson, R. E.; Thomsen, M. F.; Sittler, E. C.; Coates, A. J.; Wilson, R. J.;
Crary, F. J.; Young, D. T.; Jones, G. H. (2012). "Detection of exospheric O2+at Saturn's
moon Dione". Geophysical Research Letters. 39(3): n/a
n/a. Bibcode:2012GeoRL..39.3105T. doi:10.1029/2011GL050452.
o Glein, Christopher R.; Baross, John A.; Waite, J. Hunter (2015). "The pH of Enceladus
ocean". Geochimica et Cosmochimica Acta. 162: 202
219. Bibcode:2015GeCoA.162..202G. arXiv:1502.01946
. doi:10.1016/[Link].2015.04.017.
o Teolis; et al. (2010). "Cassini Finds an Oxygen-Carbon Dioxide Atmosphere at Saturn's
Icy Moon Rhea". Science. 330 (6012): 1813
1815. Bibcode:2010Sci...330.1813T. PMID 21109635. doi:10.1126/science.1198366.
o [Link]
o Hall, D. T.; Strobel, D. F.; Feldman, P. D.; McGrath, M. A.; Weaver, H. A. (1995).
"Detection of an oxygen atmosphere on Jupiter's moon Europa". Nature. 373 (6516): 677
679. Bibcode:1995Natur.373..677H. PMID 7854447. doi:10.1038/373677a0.
o Narita, Norio; Enomoto, Takafumi; Masaoka, Shigeyuki; Kusakabe, Nobuhiko
(2015). "Titania may produce abiotic oxygen atmospheres on habitable
exoplanets". Scientific Reports. 5: 13977. Bibcode:2015NatSR...513977N. PMC 4564821
. PMID 26354078. arXiv:1509.03123 . doi:10.1038/srep13977.
o Oxygen-Free Animals Discovered-A First, National Geographic news
o Danovaro R; Dell'anno A; Pusceddu A; Gambi C; et al. (April 2010). "The first metazoa
living in permanently anoxic conditions". BMC Biology. 8 (1): 30. PMC 2907586
. PMID 20370908. doi:10.1186/1741-7007-8-30.
o Stevenson, J.; Lunine, J.; Clancy, P. (2015). "Membrane alternatives in worlds without
oxygen: Creation of an azotosome". Science Advances. 1 (1): e1400067
e1400067. Bibcode:2015SciA....1E0067S. PMC 4644080
. PMID 26601130. doi:10.1126/sciadv.1400067.
o [Link]
form-membranes/
o Schirrmeister, B. E.; de Vos, J. M.; Antonelli, A.; Bagheri, H. C. (2013). "Evolution of
multicellularity coincided with increased diversification of cyanobacteria and the Great
Oxidation Event". Proceedings of the National Academy of Sciences. 110 (5): 1791
1796. Bibcode:2013PNAS..110.1791S. PMC 3562814
. PMID 23319632. doi:10.1073/pnas.1209927110.
o Mills, D. B.; Ward, L. M.; Jones, C.; Sweeten, B.; Forth, M.; Treusch, A. H.; Canfield, D.
E. (2014). "Oxygen requirements of the earliest animals". Proceedings of the National
Academy of Sciences. 111 (11): 4168
4172. Bibcode:2014PNAS..111.4168M. PMC 3964089
. PMID 24550467. doi:10.1073/pnas.1400547111.
o Hartman H, McKay CP "Oxygenic photosynthesis and the oxidation state of Mars." Planet
Space Sci. 1995 Jan-Feb;43(1-2):123-8.
o Choi, Charles Q. (2014). "Does a Planet Need Life to Create Continents?". Astrobiology
Magazine. Retrieved 6 January 2014.
o Kasting 2001, p. 130
o Kasting 2001, pp. 128129
o Belbruno, E.; J. Richard Gott III (2005). "Where Did The Moon Come From?". The
Astronomical Journal. 129 (3): 172445. Bibcode:2005AJ....129.1724B. arXiv:astro-
ph/0405372 . doi:10.1086/427539.
o [Link] What If Earth Became Tidally Locked? 2 February 2013
o Ward & Brownlee 2000, p. 233
o Nick, Hoffman (11 June 2001). "The Moon And Plate Tectonics: Why We Are
Alone". Space Daily. Retrieved 8 August 2015.

515
o Turner, S.; Rushmer, T.; Reagan, M.; Moyen, J.-F. (2014). "Heading down early on? Start
of subduction on Earth". Geology. 42 (2): 139
142. Bibcode:2014Geo....42..139T. doi:10.1130/G34886.1.
o UCLA scientist discovers plate tectonics on Mars By Stuart Wolpert 9 August 2012.
o Dirk Schulze-Makuch; Louis Neal Irwin (2 October 2008). Life in the Universe:
Expectations and Constraints. Springer Science & Business Media. p. 162. ISBN 978-3-
540-76816-6.
o Dean, Cornelia (7 September 2015). "The Tardigrade: Practically Invisible, Indestructible
Water Bears". New York Times. Retrieved 7 September 2015.
o Mosher, Dave (2 June 2011). "New "Devil Worm" Is Deepest-Living Animal Species
evolved to withstand heat and crushing pressure". National Geographic News.
o Tarter, Jill. "Exoplanets, Extremophiles, and the Search for Extraterrestrial
Intelligence" (PDF). State University of New York Press. Retrieved 11 September 2015.
o Reynolds, R.T.; McKay, C.P.; Kasting, J.F. (1987). "Europa, Tidally Heated Oceans, and
Habitable Zones Around Giant Planets". Advances in Space Research. 7(5): 125
132. Bibcode:1987AdSpR...7..125R. doi:10.1016/0273-1177(87)90364-4.
o For a detailed critique of the Rare Earth hypothesis along these lines, see Cohen &
Stewart 2002.
o Vaclav Smil (2003). The Earth's Biosphere: Evolution, Dynamics, and Change. MIT
Press. p. 166. ISBN 978-0-262-69298-4.

o "Physicist Erwin Schrdinger's Google doodle marks quantum mechanics work". The
Guardian. 13 August 2013. Retrieved 25 August 2013.
o Schrdinger, E. (1926). "An Undulatory Theory of the Mechanics of Atoms and
Molecules" (PDF). Physical Review. 28 (6): 1049
1070. Bibcode:1926PhRv...28.1049S. doi:10.1103/PhysRev.28.1049. Archived from the
original (PDF) on 17 December 2008.
o Griffiths, David J. (2004), Introduction to Quantum Mechanics (2nd ed.), Prentice
Hall, ISBN 0-13-111892-7
o Laloe, Franck (2012), Do We Really Understand Quantum Mechanics, Cambridge
University Press, ISBN 978-1-107-02501-1
o Shankar, R. (1994). Principles of Quantum Mechanics (2nd ed.). Kluwer
Academic/Plenum Publishers. ISBN 978-0-306-44790-7.
o [Link]
o Sakurai, J. J. (1995). Modern Quantum Mechanics. Reading, Massachusetts: Addison-
Wesley. p. 68.
o Nouredine Zettili (17 February 2009). Quantum Mechanics: Concepts and Applications.
John Wiley & Sons. ISBN 978-0-470-02678-6.
o Ballentine, Leslie (1998), Quantum Mechanics: A Modern Development, World Scientific
Publishing Co., ISBN 9810241054
o David Deutsch, The Beginning of infinity, page 310
o de Broglie, L. (1925). "Recherches sur la thorie des quanta" [On the Theory of
Quanta] (PDF). Annales de Physique. 10 (3): 22128. Translated version at the Wayback
Machine (archived 9 May 2009).
o Weissman, M.B.; V. V. Iliev; I. Gutman (2008). "A pioneer remembered: biographical
notes about Arthur Constant Lunn". Communications in Mathematical and in Computer
Chemistry. 59 (3): 687708.
o Kamen, Martin D. (1985). Radiant Science, Dark Politics. Berkeley and Los Angeles, CA:
University of California Press. pp. 2932. ISBN 0-520-04929-2.
o ^ Schrodinger, E. (1984). Collected papers. Friedrich Vieweg und Sohn. ISBN 3-7001-
0573-8. See introduction to first 1926 paper.

516
o ^ Jump up to:a b Encyclopaedia of Physics (2nd Edition), R.G. Lerner, G.L. Trigg, VHC
publishers, 1991, (Verlagsgesellschaft) 3-527-26954-1, (VHC Inc.) ISBN 0-89573-752-3
o Sommerfeld, A. (1919). Atombau und Spektrallinien. Braunschweig: Friedrich Vieweg und
Sohn. ISBN 3-87144-484-7.
o For an English source, see Haar, T. "The Old Quantum Theory".
o Rhodes, R. (1986). Making of the Atomic Bomb. Touchstone. ISBN 0-671-44133-7.
o ^ Jump up to:a b Erwin Schrdinger (1982). Collected Papers on Wave Mechanics: Third
Edition. American Mathematical Soc. ISBN 978-0-8218-3524-1.
o Schrdinger, E. (1926). "Quantisierung als Eigenwertproblem; von Erwin
Schrdinger". Annalen der Physik. 384: 361
377. Bibcode:1926AnP...384..361S. doi:10.1002/andp.19263840404.
o Erwin Schrdinger, "The Present situation in Quantum Mechanics," p. 9 of 22. The
English version was translated by John D. Trimmer. The translation first appeared first
in Proceedings of the American Philosophical Society, 124, 32338. It later appeared as
Section I.11 of Part I of Quantum Theory and Measurement by J.A. Wheeler and W.H.
Zurek, eds., Princeton University Press, New Jersey 1983.
o Einstein, A.; et. al. "Letters on Wave Mechanics: SchrodingerPlanckEinsteinLorentz".
o ^ Jump up to:a b c Moore, W.J. (1992). Schrdinger: Life and Thought. Cambridge
University Press. ISBN 0-521-43767-9.
o It is clear that even in his last year of life, as shown in a letter to Max Born, that
Schrdinger never accepted the Copenhagen interpretation.[23]:220
o Takahisa Okino (2013). "Correlation between Diffusion Equation and Schrdinger
Equation". Journal of Modern Physics (4): 612615.
o ^ Jump up to:a b Molecular Quantum Mechanics Parts I and II: An Introduction to Quantum
Chemistry (Volume 1), P.W. Atkins, Oxford University Press, 1977, ISBN 0-19-855129-0
o The New Quantum Universe, [Link], [Link], Cambridge University Press,
2009, ISBN 978-0-521-56457-1
o Quanta: A handbook of concepts, P.W. Atkins, Oxford University Press, 1974, ISBN 0-19-
855493-1
o Physics of Atoms and Molecules, B.H. Bransden, [Link], Longman, 1983, ISBN 0-
582-44401-2
o Quantum Physics of Atoms, Molecules, Solids, Nuclei and Particles (2nd Edition), R.
Resnick, R. Eisberg, John Wiley & Sons, 1985, ISBN 978-0-471-87373-0
o c
Quantum Mechanics Demystified, D. McMahon, Mc Graw Hill (USA), 2006, ISBN 0-07-
145546-9
o Analytical Mechanics, L.N. Hand, J.D. Finch, Cambridge University Press,
2008, ISBN 978-0-521-57572-0
o N. Zettili. Quantum Mechanics: Concepts and Applications (2nd ed.). p. 458. ISBN 978-0-
470-02679-3.
o Physical chemistry, P.W. Atkins, Oxford University Press, 1978, ISBN 0-19-855148-7
o Solid State Physics (2nd Edition), J.R. Hook, H.E. Hall, Manchester Physics Series, John
Wiley & Sons, 2010, ISBN 978-0-471-92804-1
o Physics for Scientists and Engineers with Modern Physics (6th Edition), P. A. Tipler, G.
Mosca, Freeman, 2008, ISBN 0-7167-8964-7
o David Griffiths (2008). Introduction to elementary particles. Wiley-VCH. pp. 162
. ISBN 978-3-527-40601-2. Retrieved 27 June 2011.
o ^ c Quantum Mechanics, E. Abers, Pearson Ed., Addison Wesley, Prentice Hall Inc,
2004, ISBN 978-0-13-146100-0
o [Link]
o Takahisa Okino (2015). "Mathematical Physics in Diffusion Problems". Journal of Modern
Physics (6): 21092144.
o ^ Ajaib, Muhammad Adeel (2015). "A Fundamental Form of the Schrdinger
Equation". [Link]. 45 (2015) no.12, 1586-1598. doi:10.1007/s10701-015-9944-z.

517
o Ajaib, Muhammad Adeel (2016). "Non-Relativistic Limit of the Dirac
Equation". International Journal of Quantum Foundations.
o Lévy-Leblond, J.-M. (1967). "Nonrelativistic particles and wave equations". Comm. Math. Phys. 6 (

REFERENCES.

Stahler, S. W. & Palla, F. (2004). The Formation of Stars. Weinheim: Wiley-VCH. ISBN 3-527-40559-3.
o Hayashi, C. (1966). "The Evolution of Protostars". Annual Review of Astronomy and Astrophysics. 4: 171. doi:10.1146/[Link].04.090166.001131.
White (2008). pp. 63, 66.
o Alan Stern considers these all to be "planets", but that conception was rejected by the International Astronomical Union.
o "Astronomer Mike Brown". [Link]. 2013-11-01. Archived from the original on 2011-10-18. Retrieved 2014-06-15.
Zee, A. (2013). Einstein gravity in a nutshell. Princeton: Princeton University Press. pp. 451–454. ISBN 9780691145587.
"Gallery : The shape of Planet Earth". [Link]. Retrieved 2014-06-15.
Weinberg, Steven (2008). Cosmology. New York: Oxford University Press. pp. 70–71. ISBN 978-0-19-852682-7.
Savage, Don; Jones, Tammy; Villard, Ray (1995-04-19). "Asteroid or Mini-Planet? Hubble Maps the Ancient Surface of Vesta (Key Stages in the Evolution of the Asteroid Vesta)". Hubble Site News Release STScI-1995-20. Retrieved 2006-10-17.

Scholars Past. Willy Hartner, Otto Neugebauer, B. L. van der Waerden


Scholars Present. Stephen G. Brush, Stephen J. Dick, Owen Gingerich, Bruce Stephenson, Michael Hoskin,
Alexander R. Jones, Curtis A. Wilson
Astronomer-historians. J. B. J. Delambre, J. L. E. Dreyer, Donald Osterbrock, Carl Sagan, F. Richard
Stephenson
Aaboe, Asger. Episodes from the Early History of Astronomy. Springer-Verlag 2001 ISBN 0-387-95136-9
Aveni, Anthony F. Skywatchers of Ancient Mexico. University of Texas Press 1980 ISBN 0-292-77557-1
Dreyer, J. L. E. A History of Astronomy from Thales to Kepler, 2nd edition. Dover Publications 1953 (revised reprint of History of the Planetary Systems from Thales to Kepler, 1906)
Eastwood, Bruce. The Revival of Planetary Astronomy in Carolingian and Post-Carolingian Europe, Variorum Collected Studies Series CS 279. Ashgate 2002 ISBN 0-86078-868-7
Evans, James (1998), The History and Practice of Ancient Astronomy, Oxford University Press, ISBN 0-19-509539-1.
Antoine Gautier, L'âge d'or de l'astronomie ottomane, in L'Astronomie (Monthly magazine created by Camille Flammarion in 1882), December 2005, volume 119.
Hodson, F. R. (ed.). The Place of Astronomy in the Ancient World: A Joint Symposium of the Royal Society and the British Academy. Oxford University Press, 1974 ISBN 0-19-725944-8
Hoskin, Michael. The History of Astronomy: A Very Short Introduction. Oxford University Press. ISBN 0-19-280306-9
McCluskey, Stephen C. Astronomies and Cultures in Early Medieval Europe. Cambridge University Press 1998 ISBN 0-521-77852-2
Pannekoek, Anton. A History of Astronomy. Dover Publications 1989
Pedersen, Olaf. Early Physics and Astronomy: A Historical Introduction, revised edition. Cambridge University Press 1993 ISBN 0-521-40899-7

Pingree, David (1998), "Legacies in Astronomy and Celestial Omens", in Dalley, Stephanie, The Legacy of Mesopotamia, Oxford University Press, pp. 125–137, ISBN 0-19-814946-8.
Rochberg, Francesca (2004), The Heavenly Writing: Divination, Horoscopy, and Astronomy in
Mesopotamian Culture, Cambridge University Press.
Larson, R. B. (1969). "Numerical Calculations of the Dynamics of a Collapsing Protostar". Monthly Notices
of the Royal Astronomical Society. 145: 271. Bibcode:1969MNRAS.145..271L.
doi:10.1093/mnras/145.3.271.
Winkler, K.-H. A. & Newman, M. J. (1980). "Formation of Solar-Type Stars in Spherical Symmetry: I. The
Key Role of the Accretion Shock". Astrophysical Journal. 236: 201. Bibcode:1980ApJ...236..201W.
doi:10.1086/157734.
Stahler, S. W., Shu, F. H., and Taam, R. E. (1980). "The Evolution of Protostars: I. Global
Formulation and Results". Astrophysical Journal. 241: 637. Bibcode:1980ApJ...241..637S.
doi:10.1086/158377.
"Infant Stars First Steps". Retrieved 10 November 2015.

Guinness Book of Astronomical Records: sections on "the Sun" and "history of Astronomy".
The Cambridge Encyclopedia of Astronomy: section on "The history of Astronomy".
Cosmos: by Carl Sagan. This is an excellent video series and book about astronomy, with lots of background stories about the important people in the history of astronomy (especially in Chapter 7: "The Backbone of Night").

Isaac Newton: "In [experimental] philosophy particular propositions are inferred from the phenomena and
afterwards rendered general by induction": "Principia", Book 3, General Scholium, at p.392 in Volume 2 of
Andrew Motte's English translation published 1729.
- Proposition 75, Theorem 35: p.956 - [Link] Cohen and Anne Whitman, translators: Isaac Newton, The
Principia: Mathematical Principles of Natural Philosophy. Preceded by A Guide to Newton's Principia, by
[Link] Cohen. University of California Press 1999 ISBN 0-520-08816-6 ISBN 0-520-08817-4
The Michell-Cavendish Experiment, Laurent Hodges
Bullialdus (Ismael Bouillau) (1645), "Astronomia philolaica", Paris, 1645.
Borelli, G. A., "Theoricae Mediceorum Planetarum ex causis physicis deductae", Florence, 1666.
D T Whiteside, "Before the Principia: the maturing of Newton's thoughts on dynamical astronomy, 1664-
1684", Journal for the History of Astronomy, i (1970), pages 5-19; especially at page 13.
H W Turnbull (ed.), Correspondence of Isaac Newton, Vol 2 (1676-1687), (Cambridge University Press,
1960), giving the Halley-Newton correspondence of May to July 1686 about Hooke's claims at pp.431-448,
see particularly page 431.
Hooke's 1674 statement in "An Attempt to Prove the Motion of the Earth from Observations" is available
in online facsimile here.
Purrington, Robert D. (2009). The First Professional Scientist: Robert Hooke and the Royal Society of
London. Springer. p. 168. ISBN 3-0346-0036-4. Extract of page 168
See page 239 in Curtis Wilson (1989), "The Newtonian achievement in astronomy", ch.13 (pages 233-274)
in "Planetary astronomy from the Renaissance to the rise of astrophysics: 2A: Tycho Brahe to Newton",
CUP 1989.
Calendar (New Style) Act 1750
Page 309 in H W Turnbull (ed.), Correspondence of Isaac Newton, Vol 2 (1676-1687), (Cambridge
University Press, 1960), document #239.

Myers, P. C. & Benson, P. J. (1983). "Dense Cores in Dark Clouds: II. NH3 Observations
and Star Formation". Astrophysical Journal. 266: 309. Bibcode:1983ApJ...266..309M.
doi:10.1086/160780.
Shu, F. H. (1977). "Self-Similar Collapse of Isothermal Spheres and Star Formation".
Astrophysical Journal. 214: 488. Bibcode:1977ApJ...214..488S. doi:10.1086/155274.
Evans, N. J., Lee, J.-E., Rawlings, J. M. C., and Choi, M. (2005). "B335 - A Laboratory for Astrochemistry
in a Collapsing Cloud". Astrophysical Journal. 626: 919.
Bibcode:2005ApJ...626..919E. arXiv:astro-ph/0503459 . doi:10.1086/430295.
"A diamond in the dust". Retrieved 16 February 2016.
Stahler, S. W. (1988). "Deuterium and the Stellar Birthline". Astrophysical Journal. 332: 804.
Bibcode:1988ApJ...332..804S. doi:10.1086/166694.
Adams, F. C., Lada, C. J., and Shu, F. H. (1987). "The Spectral Evolution of Young Stellar Objects". Astrophysical Journal. 312: 788. Bibcode:1987ApJ...312..788A. doi:10.1086/164924.
Andre, P, Ward-Thompson, D. and Barsony, M. (1993). "Submillimeter Continuum Observations
of rho Ophiuchi A: The Candidate Protostar VLA 1623 and Prestellar Clumps". Astrophysical
Journal. 406: 122. Bibcode:1993ApJ...406..122A.
doi:10.1086/172425.
"IMPRS" (PDF). [Link].
Task Group on Astronomical Designations from IAU Commission 5 (April 2008). "Naming Astronomical Objects". International Astronomical Union (IAU). Archived from the original on 2 August 2010. Retrieved 4 July 2010.
Narlikar, Jayant V. (1996). Elements of Cosmology. Universities Press. ISBN 81-7371-043-0.
Smolin, Lee (1998). The life of the cosmos. Oxford University Press US. p. 35. ISBN 0-19-512664-5.
Buta, Ronald James; Corwin, Harold G.; Odewahn, Stephen C. (2007). The de Vaucouleurs
atlas of galaxies. Cambridge University Press. p. 301. ISBN 0-521-82048-0.
Astronomical Objects for Southern Telescopes. ISBN 0521318874. Retrieved 13 February
2017.
Elmegreen, Bruce G. (January 2010). "The nature and nurture of star clusters". Star clusters: basic galactic building blocks throughout time and space, Proceedings of the International Astronomical Union, IAU Symposium. 266. pp. 3–13.

Hansen, Carl J.; Kawaler, Steven D.; Trimble, Virginia (2004). Stellar interiors: physical principles,
structure, and evolution. Astronomy and astrophysics library (2nd ed.).

Harding E. Smith (1999-04-21). "The Hertzsprung-Russell Diagram". Gene Smith's Astronomy Tutorial.
Center for Astrophysics & Space Sciences, University of California,

Richard Powell (2006). "The Hertzsprung Russell Diagram". An Atlas of the Universe. Retrieved 2009-
10-29.

[1] Abt, Helmut A., and Levy, Saul G. Multiplicity Among Solar-Type Stars. The Astrophysical Journal Supplement Series 30 (March 1976): 273–306.
[2] White, R. J., and Ghez, A. M. Constraints on the Formation and Evolution of Binary Stars. The Astrophysical Journal 556 (20 July 2001): 265–295.
[3] Tohline, Joel E. The Origin of Binary Stars, in Annual Reviews of Astronomy and Astrophysics, vol. 40. Palo Alto: Annual Reviews, 2002: 349–385.
o Adams, Fred C.; Laughlin, Gregory (April 1997). "A Dying Universe: The Long Term Fate and Evolution of Astrophysical Objects". Reviews of Modern Physics. 69 (2): 337–372. Bibcode:1997RvMP...69..337A. arXiv:astro-ph/9701131. doi:10.1103/RevModPhys.69.337.
Gilmore, Gerry (2004). "The Short Spectacular Life of a Superstar". Science. 304 (5697): 1915–1916. PMID 15218132. doi:10.1126/science.1100370. Retrieved 2007-05-01.
"The Brightest Stars Don't Live Alone". ESO Press Release. Retrieved 27 July 2012.
o Longair, Malcolm S. (2006). The Cosmic Century: A History of Astrophysics and Cosmology. Cambridge University Press. pp. 25–26. ISBN 0-521-47436-1.
Brown, Laurie M.; Pais, Abraham; Pippard, A. B., eds. (1995). Twentieth Century Physics. Bristol; New York: Institute of Physics, American Institute of Physics. p. 1696. ISBN 0-7503-0310-7. OCLC 33102501.
Russell, H. N. (1913). ""Giant" and "dwarf" stars". The Observatory. 36: 324. Bibcode:1913Obs....36..324R.
Strömgren, Bengt (1933). "On the Interpretation of the Hertzsprung-Russell-Diagram". Zeitschrift für Astrophysik. 7: 222–248. Bibcode:1933ZA......7..222S.
Schatzman, Évry L.; Praderie, Françoise (1993). The Stars. Springer. pp. [Link]. ISBN 3-540-54196-9.
Morgan, W. W.; Keenan, P. C.; Kellman, E. (1943). An atlas of stellar spectra, with an outline of spectral classification. Chicago, Illinois: The University of Chicago Press. Retrieved 2008-08-12.
Unsöld, Albrecht (1969). The New Cosmos. Springer-Verlag New York Inc. p. [Link]. ISBN 0-387-90886-2.
Gloeckler, George; Geiss, Johannes (2004). "Composition of the local interstellar medium as diagnosed with pickup ions". Advances in Space Research. 34 (1): 53–60. Bibcode:2004AdSpR..34...53G. doi:10.1016/[Link].2003.02.054.
Kroupa, Pavel (2002). "The Initial Mass Function of Stars: Evidence for Uniformity in Variable Systems". Science. 295 (5552): 82–91. Bibcode:2002Sci...295...82K. PMID 11778039. arXiv:astro-ph/0201098. doi:10.1126/science.1067524. Retrieved 2007-12-03.
Schilling, Govert (2001). "New Model Shows Sun Was a Hot Young Star". Science. 293 (5538): 2188–2189. PMID 11567116. doi:10.1126/science.293.5538.2188. Retrieved 2007-02-04.
"Zero Age Main Sequence". The SAO Encyclopedia of Astronomy. Swinburne University. Retrieved 2007-12-09.

Clayton, Donald D. (1983). Principles of Stellar Evolution and Nucleosynthesis. University of Chicago Press.

"Main Sequence Stars". Australia Telescope Outreach and Education. Retrieved 2007-12-04.
Moore, Patrick (2006). The Amateur Astronomer. Springer. ISBN 1-85233-878-4.
"White Dwarf". COSMOSThe SAO Encyclopedia of Astronomy. Swinburne University. Retrieved 2007-
12-04.
"Origin of the Hertzsprung-Russell Diagram". University of Nebraska. Retrieved 2007-12-06.
"A course on stars' physical properties, formation and evolution" (PDF). University of St. Andrews.
Retrieved 2010-05-18.
Siess, Lionel (2000). "Computation of Isochrones". Institut d'Astronomie et d'Astrophysique, Université libre de Bruxelles. Retrieved [Link]. See, for example, the model isochrones generated for a ZAMS of 1.1 solar masses. This is listed in the table as 1.26 times the solar luminosity. At metallicity Z=0.01 the luminosity is 1.34 times the solar luminosity. At metallicity Z=0.04 the luminosity is 0.89 times the solar luminosity.
Zombeck, Martin V. (1990). Handbook of Space Astronomy and Astrophysics (2nd ed.). Cambridge
University Press. ISBN 0-521-34787-4. Retrieved 2007-12-06.
"SIMBAD Astronomical Database". Centre de Donnes astronomiques de Strasbourg.

Luck, R. Earle; Heiter, Ulrike (2005). "Stars within 15 Parsecs: Abundances for a Northern
Sample". The Astronomical Journal. 129 (2): 10631083. Bibcode:2005AJ....129.1063L.
doi:10.1086/427250.
"LTT 2151 High proper-motion Star". Centre de Donnes astronomiques de Strasbourg. Retrieved 2008-
08-12.
Staff (2008-01-01). "List of the Nearest Hundred Nearest Star Systems". Research Consortium on Nearby
Stars. Archived from the original on 2012-05-13. Retrieved 2008-08-12.
Brainerd, Jerome James (2005-02-16). "Main-Sequence Stars". The Astrophysics Spectator.
Retrieved 2007-12-04.
Karttunen, Hannu (2003). Fundamental Astronomy. Springer. ISBN 3-540-00179-4.
Bahcall, John N.; Pinsonneault, M. H.; Basu, Sarbani (2001-07-10). "Solar Models: Current Epoch and Time Dependences, Neutrinos, and Helioseismological Properties". The Astrophysical Journal. 555 (2): 990–1012. Bibcode:2003PhRvL..90m1301B. arXiv:astro-ph/0212331. doi:10.1086/321493.
Salaris, Maurizio; Cassisi, Santi (2005). Evolution of Stars and Stellar Populations. John Wiley and Sons.
p. 128. ISBN 0-470-09220-3.
Oey, M. S.; Clarke, C. J. (2005). "Statistical Confirmation of a Stellar Upper Mass Limit". The Astrophysical Journal. 620 (1): L43–L46. Bibcode:2005ApJ...620L..43O. arXiv:astro-ph/0501135. doi:10.1086/428396.

Ziebarth, Kenneth (1970). "On the Upper Mass Limit for Main-Sequence Stars". Astrophysical Journal. 162: 947–962. Bibcode:1970ApJ...162..947Z. doi:10.1086/150726.
Burrows, A.; Hubbard, W. B.; Saumon, D.; Lunine, J. I. (March 1993). "An expanded set of brown dwarf and very low mass star models". Astrophysical Journal, Part 1. 406 (1): 158–171. Bibcode:1993ApJ...406..158B. doi:10.1086/172427.
Aller, Lawrence H. (1991). Atoms, Stars, and Nebulae. Cambridge University Press.
A.C. Maury; E.C. Pickering (1897). "Spectra of bright stars photographed with the 11-inch Draper Telescope as part of the Henry Draper Memorial". Annals of Harvard College Observatory. 28: 1. Bibcode:1897AnHar..28....1M.
Hertzsprung, Ejnar (1908). "Über die Sterne der Unterabteilung c und ac nach der Spektralklassifikation von Antonia C. Maury". Astronomische Nachrichten. 179 (24): 373. Bibcode:1909AN....179..373H. doi:10.1002/asna.19081792402.
Rosenberg, Hans (1910). "Über den Zusammenhang von Helligkeit und Spektraltypus in den Plejaden". Astronomische Nachrichten. 186 (5): 71. Bibcode:1910AN....186...71R. doi:10.1002/asna.19101860503.
VandenBerg, D. A.; Brogaard, K.; Leaman, R.; Casagrande, L. (2013). "The Ages of 55 Globular Clusters as Determined Using an Improved $\Delta V^{\rm HB}_{\rm TO}$ Method along with Color-Magnitude Diagram Constraints, and Their Implications for Broader Issues". The Astrophysical Journal. 775 (2): 134. Bibcode:2013ApJ...775..134V. arXiv:1308.2257. doi:10.1088/0004-637X/775/2/134.
Hertzsprung, E., Publ. Astrophys. Observ. Potsdam, Vol. 22, 1, 1911
Russell, Henry Norris (1914). "Relations Between the Spectra and Other Characteristics of the Stars".
Kwok, Sun (2006). Physics and chemistry of the interstellar medium. University Science Books. pp. 435–437. ISBN 1-891389-46-7.
Prialnik, Dina (2000). An Introduction to the Theory of Stellar Structure and Evolution. Cambridge University Press. pp. 198–199. ISBN 0-521-65937-X.
And theoretically black dwarfs – but: "...no black dwarfs are expected to exist in the universe yet"
o Class. Quantum Grav. 13 (1996) p. 393
"Gravitational collapse and space-time singularities", R. Penrose, Phys. Rev. Lett. 14 (1965) p. 57
B. Carter, "Axisymmetric black hole has only two degrees of freedom", Phys. Rev. Lett. 26 (1971) p. 331
"Black Holes - Planck Unit? WIP". Physics Forums. Archived from the original on 2008-08-02.
o Brill, Dieter (19 January 2012). "Black Hole Horizons and How They Begin". Astronomical Review.
Bedran, M.L.; et al. (1996). "Model for nonspherical collapse and formation of black holes by emission of neutrinos, strings and gravitational waves". Phys. Rev. D 54 (6), 3826.
Bodifée et al., 1985
G. Bodifée, De Loore. Oscillations in star formation and contents of a molecular cloud complex. Astron. Astrophys., 142 (1985), p. 297
Bodifée, 1986
G. Bodifée. Star formation regions as galactic dissipative structures. Astrophys. Space Sci., 122 (1986), p. 41
Hellings, 1994
P. Hellings. Astrophysics with a PC: An Introduction to Computational Astrophysics. Willmann-Bell, Inc., Richmond, Virginia, USA (1994)

Deriving the Metallicity distribution function of galactic systems
The Metallicity Distribution Function in the disc of the Milky Way and near the Sun
The Metallicity Distribution Function of ω Centauri
The Metallicity Distribution Function of the Halo of the Milky Way
The Metallicity Distribution Function of Field Stars in M31's Bulge
The Metallicity Distribution Functions of SEGUE G and K dwarfs: Constraints for Disk Chemical
Evolution and Formation
Yong, David; Norris, John E.; Bessell, M. S.; Christlieb, N.; Asplund, M.; Beers, Timothy C.; Barklem, P. S.; Frebel, Anna; Ryan, S. G. (1 January 2013). "The Most Metal-Poor Stars. III. The Metallicity Distribution Function and CEMP Fraction". The Astrophysical Journal. 762 (1): 27. doi:10.1088/0004-637X/762/1/27. Retrieved 3 March 2017 via [Link].
Rix H.-W., Rieke M., 1993, ApJ, 418, 123
Weiner B., 1999, Talk at 'disk99' workshop, MPIA Heidelberg

Webb 2002
Ward & Brownlee 2000, pp. 27–29
"Morphology of Our Galaxy's 'Twin'". Spitzer Space Telescope, Jet Propulsion Laboratory, NASA.
Lineweaver, Charles H.; Fenner, Yeshe; Gibson, Brad K. (2004). "The Galactic Habitable Zone and the Age Distribution of Complex Life in the Milky Way" (PDF). Science. 303 (5654): 59–62. Bibcode:2004Sci...303...59L. PMID 14704421. arXiv:astro-ph/0401024. doi:10.1126/science.1092322.

Ward & Brownlee 2000, p. 32


o Gonzalez, Brownlee & Ward 2001
o Loveday, J. (February 1996). "The APM Bright Galaxy Catalogue". Monthly Notices of the Royal Astronomical Society. 278 (4): 1025–1048.
o Hart, M.H. (January 1979). "Habitable Zones Around Main Sequence Stars". Icarus. 37 (1): 351–357. Bibcode:1979Icar...37..351H. doi:10.1016/0019-1035(79)90141-6.
o NASA, Science News, Solar Variability and Terrestrial Climate, 8 January 2013
o University of Nebraska-Lincoln astronomy education group, Stellar Luminosity Calculator
o National Center for Atmospheric Research, The Effects of Solar Variability on Earth's
Climate, 2012 Report
o Most of Earth's twins aren't identical, or even close!, by Ethan on 5 June 2013
o Ward & Brownlee 2000, p. 18
o Schmidt, Gavin (6 April 2005). "Water vapour: feedback or forcing?". RealClimate.
Batygin, Konstantin; Laughlin, Gregory; Morbidelli, Alexandro (May 2016). "Born of
Chaos". Scientific American: 2229.
Barrow, John D.; Tipler, Frank J. (1988). The Anthropic Cosmological Principle. Oxford University
Press. ISBN 978-0-19-282147-8. LCCN 87028148.
Cirkovic, Milan M.; Bradbury, Robert J. (2006). "Galactic Gradients, Postbiological Evolution, and the Apparent Failure of SETI" (PDF). New Astronomy. 11 (8): 628–639. Bibcode:2006NewA...11..628C. doi:10.1016/[Link].2006.04.003.
Comins, Neil F. (1993). What If the Moon Didn't Exist? Voyages to Earths that might have been.
HarperCollins.
Conway Morris, Simon (2003). Life's Solution: Inevitable Humans in a Lonely Universe.
Cambridge University Press. ISBN 0 521 82704 3.

Cohen, Jack; Stewart, Ian (2002). What Does a Martian Look Like: The Science of Extraterrestrial
Life. Ebury Press. ISBN 0-09-187927-2.
Cramer, John G. (September 2000). "The 'Rare Earth' Hypothesis". Analog Science Fiction &
Fact Magazine.
Darling, David (2001). Life Everywhere: The Maverick Science of Astrobiology. Basic
Books/Perseus. ISBN 0-585-41822-5.
Dartnell, Lewis (2007). Life in the Universe, a Beginner's Guide. Oxford: One World.
Gonzalez, Guillermo; Brownlee, Donald; Ward, Peter (July 2001). "The Galactic Habitable Zone: Galactic Chemical Evolution". Icarus. 152 (1): 185–200. Bibcode:2001Icar..152..185G. arXiv:astro-ph/0103165. doi:10.1006/icar.2001.6617.
Gribbin, John (2011). Alone in the Universe: Why our planet is unique. Wiley.
Kasting, James; Whitmire, D. P.; Reynolds, R. T. (1993). "Habitable zones around main sequence stars". Icarus. 101 (1): 108–128. Bibcode:1993Icar..101..108K. PMID 11536936. doi:10.1006/icar.1993.1010.
Kasting, James (2001). "Peter Ward and Donald Brownlee's "Rare Earth"". Perspectives in Biology and Medicine. 44 (1): 117–131. doi:10.1353/pbm.2001.0008.
Meija, J.; et al. (2016). "Atomic weights of the elements 2013 (IUPAC Technical Report)". Pure Appl. Chem. 88 (3): 265–291. doi:10.1515/pac-2015-0305.
J. Emsley (2001). Nature's Building Blocks: An A–Z Guide to the Elements. Oxford University Press. p. 178. ISBN 0-19-850340-7.
G. N. Zastenker; et al. (2002). "Isotopic Composition and Abundance of Interstellar Neutral Helium Based on Direct Measurements". Astrophysics. 45 (2): 131–142. Bibcode:2002Ap.....45..131Z. doi:10.1023/A:1016057812964.
"Helium Fundamentals".
The Encyclopedia of the Chemical Elements. p. 264.
Bradford, R. A. W. (27 August 2009). "The effect of hypothetical diproton stability on the universe" (PDF). Journal of Astrophysics and Astronomy. 30 (2): 119–131. doi:10.1007/s12036-009-0005-x.
Nuclear Physics in a Nutshell, C. A. Bertulani, Princeton University Press, Princeton, N.J., 2007, Chapter 1, ISBN 978-0-691-12505-3.
Physicists discover new kind of radioactivity, in [Link] Oct 24, 2000.
P. Atkins and J. de Paula, Atkins' Physical Chemistry, 8th edition ([Link] 2006), pp. 451–2, ISBN 0-7167-8759-8
Matthews, M.J.; Petitpas, G.; Aceves, S.M. (2011). "A study of spin isomer conversion kinetics in
supercritical fluid hydrogen for cryogenic fuel storage technologies". Appl. Phys. Lett. 99:
081906. doi:10.1063/1.3628453.
Rock, Peter A., Chemical thermodynamics; principles and applications (Macmillan 1969) Table
p.478 shows (No/Np)H2 = 0.002 at 20K ISBN 1-891389-32-7
F. T. Wall (1974). Chemical Thermodynamics, 3rd Edition. W. H. Freeman and Company
"Thermophysical Properties of Fluid Systems". [Link]. Retrieved 2015-05-14.
Michael Polanyi and His Generation: Origins of the Social Construction of Science Mary Jo Nye,
University of Chicago Press (2011) p.119 ISBN 0-226-61065-9
Werner Heisenberg Facts [Link]
[Link]
"The Hydrogen 21-cm Line". Hyperphysics. Georgia State University. 2005-10-30.
Retrieved 2009-03-18.
Liboff, Richard L. (2002). Introductory Quantum Mechanics. Addison-Wesley. ISBN 0-8053-
8714-5.
Bohr, Niels (1985), "Rydberg's discovery of the spectral laws", in Kalckar, J., N. Bohr: Collected
Works, 10, Amsterdam: North-Holland Publ., pp. 3739

"CODATA Recommended Values of the Fundamental Physical Constants:
2006" (PDF). Committee on Data for Science and Technology (CODATA). NIST.
Lyman, Theodore (1906), "The Spectrum of Hydrogen in the Region of Extremely Short Wave-Length", Memoirs of the American Academy of Arts and Sciences, New Series, 13 (3): 125–146, ISSN 0096-6134, JSTOR 25058084
Lyman, Theodore (1914), "An Extension of the Spectrum in the Extreme Ultra-
Violet", Nature, 93: 241, Bibcode:1914Natur..93..241L, doi:10.1038/093241a0
Wiese, W. L.; Fuhr, J. R. (2009). "Accurate Atomic Transition Probabilities for Hydrogen, Helium,
and Lithium". Journal of Physical and Chemical Reference Data. 38 (3):
565. Bibcode:2009JPCRD..38..565W. doi:10.1063/1.3077727.
Balmer, J. J. (1885), "Notiz über die Spectrallinien des Wasserstoffs", Annalen der Physik, 261 (5): 80–87, Bibcode:1885AnP...261...80B, doi:10.1002/andp.18852610506
Paschen, Friedrich (1908), "Zur Kenntnis ultraroter Linienspektra. I. (Normalwellenlängen bis 27000 Å.-E.)", Annalen der Physik, 332 (13): 537–570, Bibcode:1908AnP...332..537P, doi:10.1002/andp.19083321303
Brackett, Frederick Sumner (1922), "Visible and Infra-Red Radiation of Hydrogen", Astrophysical
Journal, 56: 154, Bibcode:1922ApJ....56..154B, doi:10.1086/142697
Pfund, A. H. (1924), "The emission of nitrogen and hydrogen in infrared", J. Opt. Soc. Am., 9 (3): 193–196, doi:10.1364/JOSA.9.000193
Kramida, A. E.; et al. (November 2010). "A critical compilation of experimental data on spectral lines and energy levels of hydrogen, deuterium, and tritium". Atomic Data and Nuclear Data Tables. 96 (6): 586–644. Bibcode:2010ADNDT..96..586K. doi:10.1016/[Link].2010.05.001.
Humphreys, C.J. (1953), "The Sixth Series in the Spectrum of Atomic Hydrogen", J. Research
Natl. Bur. Standards, 50
P. A. M. Dirac (1958). The Principles of Quantum Mechanics (4th ed.). Oxford University Press.
B.H. Bransden & C.J. Joachain (2000). Quantum Mechanics (2nd ed.). Prentice Hall
PTR. ISBN 0-582-35691-1.
David J. Griffiths (2004). Introduction to Quantum Mechanics (2nd ed.). Benjamin
Cummings. ISBN 0-13-124405-1.
Richard Liboff (2002). Introductory Quantum Mechanics (4th ed.). Addison Wesley. ISBN 0-8053-
8714-5.
David Halliday (2007). Fundamentals of Physics (8th ed.). Wiley. ISBN 0-471-15950-6.
Serway, Moses, and Moyer (2004). Modern Physics (3rd ed.). Brooks Cole. ISBN 0-534-49340-8.
Schrödinger, Erwin (December 1926). "An Undulatory Theory of the Mechanics of Atoms and Molecules". Phys. Rev. 28 (6): 1049–1070. Bibcode:1926PhRv...28.1049S. doi:10.1103/PhysRev.28.1049.
Teschl, Gerald (2009). Mathematical Methods in Quantum Mechanics; With Applications to Schrödinger Operators. Providence: American Mathematical Society. ISBN 978-0-8218-4660-5.

Schrödinger, Erwin (November 1935). "Die gegenwärtige Situation in der Quantenmechanik (The present situation in quantum mechanics)". Naturwissenschaften. 23 (48): 807–812. Bibcode:1935NW.....23..807S. doi:10.1007/BF01491891.
Moring, Gary (2001). The Complete Idiot's Guide to Theories of the Universe. Penguin. pp. 192–193. ISBN 1440695725.
Gribbin, John (2011). In Search of Schrödinger's Cat: Quantum Physics And Reality. Random House Publishing Group. p. 234. ISBN 0307790444.
Greenstein, George; Zajonc, Arthur (2006). The Quantum Challenge: Modern Research on the Foundations of Quantum Mechanics. Jones & Bartlett Learning. p. 186. ISBN 076372470X.
Tetlow, Philip (2012). Understanding Information and Computation: From Einstein to Web Science. Gower Publishing, Ltd. p. 321. ISBN 1409440400.
Herbert, Nick (2011). Quantum Reality: Beyond the New Physics. Knopf Doubleday Publishing Group. p. 150. ISBN 030780674X.

Charap, John M. (2002). Explaining The Universe. Universities Press. p. 99. ISBN 8173714673.
Polkinghorne, J. C. (1985). The Quantum World. Princeton University Press. p. 67. ISBN 0691023883.
Debattista V. P., Sellwood J. A., 2000, astro-ph/0006275
Gerhard O., 1999, astro-ph/9902247
Kauffmann G., Colberg J., Diaferio A., White S. D. M., 1999, MNRAS, 303, 188
Maller A., et al., 2000, ApJ, 533, 194
Navarro J., Frenk C., White S. D., 1996, ApJ, 462, 563
Navarro J., Frenk C., White S. D., 1997, ApJ, 490, 493
The Penguin Dictionary of Physics, ed. Valerie Illingworth, 1991, Penguin Books, London
Lectures in Physics, Vol. 1, 1963, p. 30-1, Addison Wesley Publishing Company, Reading, Mass. [1]
N. K. Verma, Physics for Engineers, PHI Learning Pvt. Ltd., Oct 18, 2013, p. 361. [2]
Tim Freegard, Introduction to the Physics of Waves, Cambridge University Press, Nov 8, 2012. [3]
Quantum Mechanics, Kramers, H.A. publisher Dover, 1957, p. 62 ISBN 978-0-486-66772-0
Solem, J. C.; Biedenharn, L. C. (1993). "Understanding geometrical phases in quantum mechanics: An elementary example". Foundations of Physics. 23 (2): 185–195. Bibcode:1993FoPh...23..185S. doi:10.1007/BF01883623.
Dirac, P.A.M. (1958). The Principles of Quantum Mechanics, 4th edition, Oxford University Press, Oxford
UK, p. 14.
Mechanical Engineering Design, By Joseph Edward Shigley, Charles R. Mischke, Richard Gordon Budynas,
Published 2004 McGraw-Hill Professional, p. 192 ISBN 0-07-252036-1
Finite Element Procedures, Bathe, K. J., Prentice-Hall, Englewood Cliffs, 1996, p. 785 ISBN 0-13-301458-4
Brillouin, L. (1946). Wave propagation in Periodic Structures: Electric Filters and Crystal Lattices, McGraw
Hill, New York, p.
