Sunday, March 27, 2011

6. DISTANT WORLDS



So far, we have discussed the development of those parts of the science of physics which relate to the understanding of the universe around us. Theorization from a mass of data collected by observation and experimentation, and theoretical analysis resulting in the prediction of what will be discovered later, are the two main modes by which science grows. However, there is no dearth of anomalies in science, some of which have been occasionally highlighted in the preceding text.

One of the topics which has not been discussed at any considerable length, in spite of being significant to the aims and objects of this work, is Cosmology, which combines astronomy with the more recently developed techniques of radio telescopy and spectroscopy. With these techniques, it is now established that the Sun, like other stars, is a dense mass of plasma with a core temperature of about thirteen million degrees Centigrade, consisting mainly of hydrogen and small amounts of heavier elements formed by nuclear fusion, which constantly releases heat energy and produces the blinding illumination. It is also believed that planets have formed due to the separation of small quantities of spinning gaseous clouds containing heavier elements, which gradually cooled and condensed into bodies such as the Earth.

However, it is curious that while ninety-two elements and hundreds of isotopes are formed in the complex processes of nuclear fusion, transmutation and, probably, fission in stars, the dozen elements of the man-made transuranic series are not formed naturally.

Even more curious is the fact that the steady state theory of the formation of the universe assumes that stars were formed by the gravitational collapse of very large clouds of hydrogen gas, which created such high temperatures and pressures that nuclear fusion reactions took place in them without any external detonation. But regular hydrogen, found abundantly in the water of our oceans and having only one proton in its nucleus, does not take part in nuclear fusion no matter what you do with it. Only isotopes of hydrogen such as deuterium, whose nucleus has both a proton and a neutron, and tritium, whose nucleus has one proton and two neutrons, take part in nuclear fusion. So it seems that the original gas clouds would have consisted predominantly of deuterium and tritium, and so would our Sun. And since our Earth is believed to have originated from the Sun, the waters in our oceans should have been mostly heavy water in the beginning, which may have transformed into normal or light water over billions of years. But then the same would have happened in the stars, including our Sun, and the fuel for the fusion reactions should have run out ages ago.

The advances in the techniques of radio transmission and reception during the first half of the twentieth century, and the invention of radar and eventually radio telescopes, resulted, during the 1960s, in the discovery of some unique phenomena called quasars, pulsars and black holes. Quasars are very large star-like objects, though much smaller than galaxies, at great distances beyond the limits of our own galaxy (the Milky Way, which is known to be so large that light takes a hundred thousand years to travel across it, and which contains roughly a hundred thousand million stars); they are not only visible with optical telescopes but also emit radio waves and ultraviolet radiation. Scientists are unable to explain how such relatively small objects can generate such inconceivably large amounts of energy that they appear bright at such huge distances. Pulsars are celestial bodies, again at very great distances but estimated to have planet-sized (Earth-like) dimensions, characterized by the emission of regular pulses of radio waves every second or fraction thereof.

Our knowledge about the Earth, the Solar system and the Universe as a whole has been considerably augmented by the data obtained from satellites equipped with sophisticated electronic gadgetry sent into outer space, earth orbit and orbit around the Sun with the help of powerful rockets developed initially for military purposes. The paths of these satellites have, however, remained within the range of the orbital planes of the planets of our Solar system.

To explain the complex cosmological phenomena, a new science of astrophysics has come into existence which correlates the known facts of other disciplines, particularly physics, with astronomical observations. For instance, several mathematical models of the universe's evolution and the lives of stars have been based on Einstein's general relativity. The equations represent such fundamental canons of physics as conservation of the sum total of mass and energy in the universe, the conservation of momentum, the balance of forces etc., and are solved starting with the initial condition that all matter and energy was concentrated at a point in a boundless vacuum where it suddenly exploded, expanded and eventually diluted into what we see as the universe around us, which is still believed to be expanding.

Thus, although the void of the universe is assumed to be infinite, the expanded material part must always have a dynamic boundary within which all our observations are confined, enveloped by a larger expanding sphere of radiation energy which must have escaped in all directions at the speed of light; a sort of universe within universe within universe, the radius of the radiation envelope being about twenty thousand million light years if the current estimate of the age of the universe is correct.

As for what came out of the big bang initially, it is obvious that the vital fundamental particles were formed which then combined in their own ways to form subatomic particles and so on.

Oddly enough, all the estimated matter and energy in the universe fails to account for all the gravitational forces that are supposed to be acting in it; hence the existence of large quantities of some sort of speculative dark matter beyond perception is suspected, perhaps similar to the apparitions of some religious beliefs. However, there is a spectrum of solutions to the equations of general relativity, ranging from a continuous and indefinite expansion at the present rate to an eventual slowing and subsequent contraction to a dense state. Identical analyses can be carried out for either the entire universe or a single star or galactic system. The pulsars, thought to be the remnants of such collapsed stars, are also called neutron stars and are believed to consist largely of tightly packed neutrons with a density that is a few billion times that of the earth, such that a cubic centimeter of their matter would weigh a few million tons on earth.

Many astrophysicists believe on theoretical grounds that the gravitational collapse of a star could create a high-density object whose gravitational field would be too strong to allow anything -- including light waves -- ever to leave the body. Such hypothetical objects are called "black holes": since light and other signals cannot emerge from them, their matter has literally disappeared from view, although they continue to influence the motion of other stars in their vicinity. Based on irregularities in star movements and the emission of very short wavelength radiation, a number of black-hole locations have been conjectured by astrophysicists. What is more interesting is that scientists believe that the normal laws of physics would not be applicable in the vicinity of black holes, which are said to be capable of "devouring" any matter that approaches them and loses its physical structure.
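
The usual way this "no escape" condition is quantified is the Schwarzschild radius, r_s = 2GM/c^2, a standard formula that is not given in the text above; the minimal sketch below, using commonly quoted constants, merely shows the scale involved and is offered only as an illustration.

```python
# Rough sketch: Schwarzschild radius r_s = 2*G*M/c**2 (standard formula,
# not from the text above; constants are commonly quoted values).
G = 6.673e-11        # gravitational constant, N m^2 / kg^2
c = 3.0e8            # speed of light, m/s
M_sun = 1.99e30      # mass of the Sun, kg (assumed textbook value)

def schwarzschild_radius(mass_kg):
    """Radius below which light cannot escape a mass, in metres."""
    return 2 * G * mass_kg / c**2

print(schwarzschild_radius(M_sun))        # ~2.95e3 m for one solar mass
print(schwarzschild_radius(10 * M_sun))   # ~29.5 km for a ten-solar-mass star
```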

We thus find that an extension of our knowledge of physics and astronomy brings us to a point where we must start looking for a new set of physical laws, which would naturally bifurcate into two: one governing events in our galaxy, and the other governing events in the vicinity of black holes. For those who believe that the laws of physics are universal and everlasting, this must be a minor shock. On the lighter side, blackholism seems to have existed among people for some considerable time in the form of persons who are always ready to grab but never willing to give away anything, and somehow they seem impervious to all norms and laws.







Wednesday, March 23, 2011

Democracy



After years of correlation of information I have come to the conclusion that democracy can and should be defined as follows:

Democracy is that form of government in which all lands of the country, along with its assets, state treasures and the right to make laws other than religious tenets, belong to the people; no individual, including the heads of state and government, has the right to spend state or public funds at his or her discretion; and all citizens, including holders of high office, public representatives, civil servants and the military, are bound by the rules and instructions collectively issued by a body of elected representatives of the people on an equal basis, while having the right to exercise basic human rights and a universal obligation to accept and obey the judgment of courts of law.

Sunday, March 06, 2011

Conversation in Heaven – 2



 Is the Earth maintaining its course and orientation?


Yes. All the volcanoes fired at the right time, and course and attitude corrections have been taking place automatically.


How do volcanic eruptions affect life on Earth?


In the old days animals used to panic and some would be killed or injured by the lava, debris and smoke. Now human beings too suffer losses of life and property. Sometimes entire towns are flattened.


Can they find a way to protect themselves?


Human beings have been trying to unveil the secret causes of natural disasters by scientific investigation. They have set up warning stations so that areas likely to suffer may be evacuated.


Has it affected the evolution process?


Yes, we upgraded the genetic code to ensure that human beings sympathize with and help each other in the case of natural disasters, overriding their tribal differences.














Sunday, February 27, 2011

5. THE QUANTUM LEAP IN SCIENCE





The dawn of the twentieth century heralded a new era in intellectual and experimental pursuits of understanding the nature of the universe around us. Not only did man achieve the long-cherished dream of flying in the air, but nature was seen from viewpoints previously unimagined. In 1901, Max Planck, in connection with "black body" radiation, proposed the quantum theory of radiation, which was given definitive form by Einstein in 1905. According to the quantum theory, radiation occurs not continuously, permitting all possible values as assumed by the wave theory, but in a discrete quantized form, as integral multiples of an elementary quantum of energy. This means that energy should be considered atomic in nature, like matter. The concept was verified by Compton in connection with the scattering of X-rays, and Planck's hypothesis was extended to the point that radiation is considered corpuscular in nature, made up of discrete quanta which are shot out in space with the velocity of light. This, however, did not nullify the previous proofs that light and other radiations are waves. The quantum is now known as the photon, and its energy is given by hf or, more accurately, (1/2+n)hf, where n is an integer, h is Planck's constant, with an accurately determined value of 6.624x10^-34 joule-seconds, and f is the frequency of radiation. (Looking at the construction of the energy expression and the unit of Planck's constant, this author is tempted to suggest that it could be an integral of energy with respect to time. It is dimensional problems like this that have resulted in the introduction of dimensionless expressions in thermodynamics and fluid mechanics.) The photon is said to have no mass of its own, for reasons to be discussed later. The quantum theory has been instrumental in explaining the photoelectric effect, commonly observed in solar cells, which would have worked just as well even without the quantum theory, though their development may not have been as rapid.
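
A quick numerical check of the Planck relation E = hf discussed above; the chosen frequency (visible green light, about 5.5x10^14 Hz) is an illustrative assumption, and the value of h is the one quoted in the text.

```python
# Energy of a single quantum (photon): E = h * f, using the Planck constant
# quoted above. The example frequency (visible green light) is an assumption.
h = 6.624e-34          # Planck's constant, joule-seconds (value quoted in the text)
f = 5.5e14             # frequency of green light in hertz (illustrative)

E = h * f              # energy of one photon in joules
print(E)               # ~3.6e-19 J
print(E / 1.602e-19)   # same energy expressed in electron-volts (~2.3 eV)
```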

It may be stated here that zeros are basically of two kinds -- the subtractional zero which can be obtained by repeated subtractions and represents nothingness, and the divisional zero which is obtained by repeated divisions and represents a negligible fraction. The quantum theory, in a way, defined the ultimate fraction or arithmetical atom that could be treated as a basic unity; thus doing away with the divisional zero that could have diverse negligible values. Mathematical propriety dictates that if a=0 and also b=0 then you cannot write a=b since it would lead to a/b=1 or 0/0=1 which amounts to making something out of nothing. Indeed, many such situations arise due to taking x^0, whose value is unity, as a function or part of a function of x. In fact, a constant numerical term in a function definition indicates origin relocation.

In 1905, Einstein theorized the constancy of the speed of light irrespective of the motion of the source or the observer by putting forward his famous theory of Special Relativity. The two fundamental postulates used in the theory are:

a) Measurement of absolute motion is impossible to an observer stationed on a moving system.

Comment: The measurement of absolute motion requires reference to a permanently fixed or unmoving point within the field of observation or perhaps the universe which we do not seem to have discovered yet because we have not looked for it. Equally it is impossible to conceive relative motion without assigning absolute motion to the observer or the observed or both.

b) The velocity of light is constant, independent of the relative motion of the source and observer.

Comment: It would apply perfectly to waves generated in a vast medium such as water (the ether is undone) in which two persons may be rowing their boats, but when you consider a travelling source throwing corpuscular projectiles observed by another moving observer, it becomes complicated, to say the least. Waves possess a velocity of propagation which cannot be compared with the velocity of transportation possessed by particles.

He then went on to set up the constitutive equations of motion for two reference frames having a uniform relative translatory motion with respect to an event visible from both, using independent time and space variables for each system with the speed of light as a constant. He used a unique technique of implied division by masked zeros, developing equations of the form a×0 = b×0 which, depending on progressive manipulations, can give various solutions of the type a = b×k, where the sign and value of the constant k depend on other constants and intermediate mathematical operations. By dexterous manipulation of these equations he deduced the interrelationships of the time and space variables in the two systems, which turned out to be identical to the Lorentz transformation equations. Later scientists have used different mathematical tools to arrive at the same conclusions. In a nutshell, these equations mean that simultaneous time and distance measurements made from two vantage points moving relative to each other will have different values, and events that appear simultaneous to one observer may not seem so to the other. The two should agree to disagree. However, they do not contradict the fact that a particular event should instantaneously appear exactly the same to any observer, irrespective of his speed and direction, if he occupies a particular location in space at a particular time, e.g. pictures taken with a high speed camera.
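
A minimal sketch of the Lorentz transformation referred to above, showing that two observers in relative motion assign different coordinates to the same event while the combination x^2 - (ct)^2 stays the same; the relative speed and the event coordinates are arbitrary illustrative choices, not figures from the text.

```python
import math

# Sketch of the Lorentz transformation referred to above. The relative speed
# and the event coordinates are arbitrary illustrative choices.
c = 3.0e8                      # speed of light, m/s
v = 0.6 * c                    # relative speed of the two frames (assumed)
gamma = 1 / math.sqrt(1 - v**2 / c**2)

def lorentz(x, t):
    """Coordinates (x', t') of the same event as seen from the moving frame."""
    x_prime = gamma * (x - v * t)
    t_prime = gamma * (t - v * x / c**2)
    return x_prime, t_prime

x, t = 9.0e8, 4.0              # an event: 9e8 m away, 4 s after the origin
x_p, t_p = lorentz(x, t)
print(x_p, t_p)                # different numbers for the moving observer
# The combination x^2 - (c*t)^2 comes out the same for both observers:
print(x**2 - (c * t)**2, x_p**2 - (c * t_p)**2)
```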

Since the variables are related to each other by a numerical quantity given by (1 - v^2/c^2)^(1/2), where v is the relative speed of the systems and c the speed of light, and since it is inconceivable for the term to be an imaginary quantity, v cannot be faster than c; or, in other words, nothing can move faster than light. A corollary of the theory also gives an equation for the addition of velocities which ensures that the resultant will not exceed the velocity of light, so that two photons linearly approaching one another, each travelling at the speed of light, have a relative velocity equal to the velocity of light, not twice it. Such a postulate can only be satisfied by adjusting or redefining the unit of time as half of the normal second with reference to the specific situation. This compression of the second would, of course, be only a mathematical manipulation and would not affect any natural phenomena. The basic flaw in the time dilation concept seems to be that time lag or observation delay due to the time taken by light to reach an observer, which is an aberration and changes with distance irrespective of speed of travel, has been confused with real time.
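
The velocity-addition corollary mentioned above can be checked numerically; the standard formula w = (u + v)/(1 + uv/c^2) is assumed here as the form of that corollary.

```python
# Relativistic addition of velocities, w = (u + v) / (1 + u*v/c**2),
# the corollary referred to above (standard form assumed).
c = 3.0e8

def add_velocities(u, v):
    return (u + v) / (1 + u * v / c**2)

print(add_velocities(0.5 * c, 0.5 * c) / c)  # 0.8, not 1.0
print(add_velocities(c, c) / c)              # exactly 1.0: two photons still close at c
print(add_velocities(0.9 * c, 0.9 * c) / c)  # ~0.9945, still below c
```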

Another expression derived from the principles of relativity is of the form m = m0/(1 - v^2/c^2)^(1/2), where m0 is the rest mass, i.e. the mass of a body measured when at rest relative to the observer, m the "effective mass", i.e. the mass of the body measured when it is moving with a velocity v relative to the same observer, and c is the velocity of light. Since v is always less than c, m will always be greater than m0, except when v itself has an imaginary value, which is regarded as impossible. It can be shown that the increase in mass is approximately equal to the kinetic energy divided by c squared.
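
A numerical check of the statement that the increase in mass is approximately the kinetic energy divided by c squared; the electron rest mass used below is a standard value assumed for illustration.

```python
import math

# Effective mass m = m0 / sqrt(1 - v^2/c^2) and comparison of the mass
# increase with (kinetic energy)/c^2, as stated above. The electron rest
# mass is a standard value assumed for the illustration.
c = 3.0e8
m0 = 9.11e-31                  # electron rest mass, kg (assumed)
v = 0.1 * c                    # a modest speed where the approximation holds well

m = m0 / math.sqrt(1 - v**2 / c**2)
delta_m = m - m0               # relativistic mass increase
ke = 0.5 * m0 * v**2           # classical kinetic energy
print(delta_m, ke / c**2)      # the two agree to within about one percent here
```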

This brings us to the most famous of Einstein's deductions -- the mass-energy relationship given by E = mc^2, where E is the energy equivalent of mass m. In other words, one kilogram of any substance is equivalent to 9x10^16 joules or 2.5x10^10 kilowatt-hours of energy, and vice versa. This means that even if all the earth's known resources of energy production were utilized, no more than a few tons of matter would be produced in a year. However, it is this conversion of matter into energy that produces all the heat in nuclear fission reactors. But of course, the reactors would have worked the same even if the equation was not known. The mass-energy relationship was verified by Kaufmann and Bucherer using the electrons of beta rays from radium with widely different velocities, ranging as high as 0.99c. The mass-energy relationship carries interesting implications for the elastician. If the velocity of electromagnetic waves or light, c, is taken as the equivalent of distortional waves in a solid elastic medium, and the energy equivalent of mass as strain energy, then mass becomes a function of strain in the medium.
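
The arithmetic behind the figures quoted above (one kilogram corresponding to about 9x10^16 joules, or 2.5x10^10 kilowatt-hours) can be checked directly; the conversion factor 1 kWh = 3.6x10^6 J is standard.

```python
# Energy equivalent of one kilogram of matter, E = m*c**2, and its value
# in kilowatt-hours (1 kWh = 3.6e6 J), checking the figures quoted above.
c = 3.0e8                      # speed of light, m/s
m = 1.0                        # kilograms

E_joules = m * c**2
E_kwh = E_joules / 3.6e6
print(E_joules)                # 9.0e16 J
print(E_kwh)                   # 2.5e10 kWh
```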

The confirmation of special relativity and its corollaries ushered in a new era of natural philosophy. The classical theory of mechanics, in which the sums of forces and moments at a point had to be separately zero and matter and energy had to be conserved individually, was superseded. It was obvious that accurate results could only be obtained by balancing the accounts of the energy equivalent of mass as well as the other forms of energy such as kinetic, potential, inertial, deformational etc., or the mass equivalents of the energies, as the case might be.

Rutherford carried out experiments on the scattering of alpha particles, which are positively charged bare helium nuclei, by thin foils of matter and found that although most of the alpha particles suffered only small deflections due to multiple scattering, there were a certain number that were scattered through much larger angles. To accommodate this phenomenon he proposed, in 1911, the nuclear atom model in which most of the mass and the positive charge is concentrated at the center, forming the nucleus, while the electrons revolve in circles around the nucleus in a manner similar to the planets revolving around the Sun. Although this model solved Rutherford's immediate problem, it was found to be in conflict with the electromagnetic theory, as revolving electrons must emit radiation at all times and constantly lose energy, which would compromise the stability of the atomic structure. Two years later, in 1913, Niels Bohr applied the quantum theory to the Rutherford atom model and developed his theory of atomic structure, which is now widely accepted in a further modified form as it resolves many of the dilemmas faced by earlier physicists. The theory is based on two postulates, reproduced below as stated by Rajam:

i) The first postulate, referring to the electronic structure, states that electrons cannot revolve in all possible orbits as suggested by the classical theory, but only in certain definite orbits satisfying quantum conditions. These orbits may, therefore, be considered as privileged orbits, non-radiating paths of the electron.

ii) The second postulate, referring to the origin of spectral lines, states that radiation of energy takes place only when an electron (instantaneously) jumps from one permitted orbit to another (without existing in any intermediate location -- hence the phrase quantum leap). The energy thus radiated, which is equal to the difference in the energies of the two orbits involved, must be a quantum of energy hf.

The mathematical deduction based on the above postulates produced quantitative results which agreed with available experimental data, and thus the assumptions were accepted as physical laws.
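
A minimal numerical sketch of the second postulate: the emitted photon carries away the energy difference of the two permitted orbits, E = hf. The hydrogen level formula E_n = -13.6 eV/n^2 used below is a standard textbook result, not stated in the text, and is assumed here purely for illustration.

```python
# Sketch of Bohr's second postulate: the photon carries the energy difference
# of the two permitted orbits, E = h*f. The hydrogen level formula
# E_n = -13.6 eV / n^2 is a standard textbook result assumed here.
h = 6.624e-34                  # Planck's constant, J s (value quoted earlier)
c = 3.0e8                      # speed of light, m/s
eV = 1.602e-19                 # one electron-volt in joules

def level(n):
    return -13.6 * eV / n**2   # energy of the n-th permitted orbit

E_photon = level(3) - level(2)              # jump from n=3 down to n=2
f = E_photon / h                            # frequency of the emitted quantum
print(E_photon / eV)                        # ~1.89 eV
print(c / f * 1e9)                          # wavelength ~656 nm: the red H-alpha line
```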

Bohr's simple theory of circular orbits, in spite of its many successes, was unable to explain certain fine details of the hydrogen spectrum, such as the fine structure of the Balmer series lines, suggesting that for each quantum number there might be several orbits of slightly different energies. Sommerfeld, in 1915, modified Bohr's theory by introducing the ideas of motion of electrons in elliptical orbits and of the consequent relativistic variation of the mass of the electron. By the application of the relativistic mass-energy relationship, he found the path of the electron to be a complicated curve known as a rosette -- a precessing ellipse, doubly periodic. In interpreting the observed fine structure of spectral lines, he was forced to introduce a selection rule to preclude some of the orbits permitted by his mathematics. The Rutherford-Bohr-Sommerfeld atom model was only a partial success, as it could predict only three out of the five components of the H-alpha line. Nevertheless, Sommerfeld was able to underscore the concept of "spatial quantization", i.e. not only distances and forms, but also directions, must have discrete permissible values. Nature's limitations were getting more and more exposed. The void was beginning to develop holes. In 1925, Uhlenbeck and Goudsmit, in order to explain satisfactorily the intricate spectral phenomena such as the fine structure and the Zeeman effect, i.e. the splitting of spectral lines under the influence of an applied magnetic field, put forward the hypothesis of the spinning electron, with a spin quantum number which is always 1/2 for an electron although most other quantum numbers are integral. Planck's joule-seconds had to be multiplied by something having time in the denominator to give straightforward energy.

Thus came into being the Quantized vector atom model which was further developed by Pauli, Stern and Gerlach with the help of a host of others. Although this is not the last word in atom models, we will stop here and take a look at some of the parallel developments.

Minkowski, using the principles of special relativity and the four-dimensional geometry of Riemann, was able, in 1908, to present to the world a new and unique concept of a four-dimensional space-time continuum, represented mathematically by the quadratic differential form ds^2 = dx^2 + dy^2 + dz^2 - c^2dt^2. The square of time has no practical significance by itself, but multiplied by the square of a velocity it represents an area like the other terms of the equation. Similarly, in the Cartesian coordinate system x*y, y*z and z*x represent areas but x*x means nothing. In terms of 20th century physical science this means that the minute displacement represented by ds is not completely described by the three coordinate dimensions dx, dy and dz, as in classical Euclidean geometry, but in addition by the time dimension dt, forming the fourth coordinate of relativistic geometry. The quantity ds is renamed a "point event" and is not exactly a distance in space, but an element in the four-dimensional Minkowski space-time continuum which is `simultaneously finite and boundless.' However, in this continuum, matter is able to assert itself, as "the pressure of matter distorts the curvature of the four dimensional space-time continuum which is the physical universe." Einstein correlated the idea with the balancing of the centripetal and centrifugal forces in a string and a stone, respectively, which are attached and whirled, giving the stone a curvilinear motion. In 1915, he extended his special theory of relativity, which was limited to systems in uniform linear motion, to encompass systems moving in any way, even with accelerated velocity, and in particular to the special case of accelerated motion that is involved in the most common phenomenon of gravitation. Using, in turn, Minkowski's model, Einstein in his General Theory of Relativity worked out the law governing the motion of a body in a distorted and curved space-time continuum using the advanced mathematical tool of tensor calculus. He was able to show that Newton's inverse square law of gravitation is a first approximation of his relativistic law, which is claimed to have been successfully tested in several astrophysical phenomena, such as the advance of the perihelion of the planet Mercury, the shift of spectral lines in the light received from the companion of Sirius, etc. Now, if space can be pulverized (quantized) and curved under pressure, it can hardly be a void; it seems more like a personality.

With the advent of the quantum theory, physicists were obliged to admit a dual nature, wave and particle, for radiant energy, for the simple reason that the Planck energy equation contains a frequency term for the photon, which is assumed to be a massless particle. In 1924, Louis de Broglie took a bold step forward and suggested that matter, like radiation, has a dual nature, i.e. particles believed to possess discrete rest mass, such as molecules, atoms, protons, electrons and the like, might exhibit wave-like properties under appropriate circumstances. There was a certain amount of initial pessimism about the theory of matter waves, which later came to be known as de Broglie waves, because of their difference from electromagnetic waves: their velocity of propagation is given by u = c^2/v, where v is the velocity of the moving matter and c the velocity of light. The de Broglie waves have a wavelength L = h/mv, where h is Planck's constant and m the mass of the matter particle. However, the discovery of the diffraction of electrons in 1927 by Davisson and Germer provided experimental confirmation of the theory. Dempster (1927) and Estermann (1930) obtained diffraction effects with hydrogen and helium atoms, thus lending support to the matter wave theory. The only snag is that u can be greater than c, since v is less than c, which, oddly enough, Einstein did not mind. So, in view of the rule that wave velocity is the product of wavelength and frequency, it may be said that every particle of matter has a characteristic frequency given by f = mc^2/h, for the notations defined earlier, or a characteristic time period of 1/f.
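
A numerical sketch of the de Broglie relations quoted above (L = h/mv, u = c^2/v and f = mc^2/h), applied to an electron moving at a tenth of the speed of light; the electron mass is a standard value assumed for the illustration.

```python
# de Broglie relations quoted above: wavelength L = h/(m*v), phase velocity
# u = c^2/v and characteristic frequency f = m*c^2/h. The electron mass is
# a standard value assumed for the illustration.
h = 6.624e-34
c = 3.0e8
m = 9.11e-31                   # electron rest mass, kg (assumed)
v = 0.1 * c                    # electron speed

wavelength = h / (m * v)       # ~2.4e-11 m, comparable with atomic spacings
u = c**2 / v                   # phase velocity: ten times c, as noted in the text
f = m * c**2 / h               # characteristic frequency of the particle
print(wavelength, u, f)
```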



Sunday, February 20, 2011

4. ELEMENTARY PARTICLES



Only a few months after the discovery of X-rays by Roentgen in 1895, Henri Becquerel accidentally discovered the phenomenon of radioactivity in uranium sulphate in 1896. The invisible Becquerel rays, which possessed the ability to affect wrapped photographic paper, were soon found to be emitted by a number of other heavy elements such as thorium, actinium etc. But the most famous was the discovery of the radioactive elements radium and polonium by the Curies in 1898. The radiation emitted by radioactive substances was soon identified to consist of three different kinds of emissions, named alpha, beta and gamma rays after the Greek letters. The alpha radiation was found to consist of positively charged particles having a positive electric charge of magnitude twice that of the electron and a mass four times that of the hydrogen atom. It was immediately concluded, and is still believed, that the alpha particle is the same as the bare nucleus of a helium atom which has lost both its electrons. It is emitted at a velocity one fifteenth that of light and is stopped by a 0.1 millimeter thick foil of aluminum.

Beta rays were found to consist of negatively charged particles of very light mass, emitted with velocities that varied between one third and 99.8 percent of the velocity of light and with a penetrative power nearly one hundred times that of the alpha particles. The variations in their masses were later accounted for by the special theory of relativity, and these particles were identified as electrons. Gamma rays have been found to be electromagnetic radiations similar to X-rays, but of extremely short wavelength, of the order of one angstrom or 10^-10 meters, and with an ability to penetrate several centimeters of lead.

In 1911, Rutherford came forth with his own physical model for subatomic structure, as an interpretation of some unexpected experimental results. In it, the atom is made up of a central charge, now called the nucleus, surrounded by a cloud of orbiting electrons. In subsequent years, rules were established for the numbers of electrons in various orbits, and it was explained that electromagnetic radiation was released when electrons shifted from an orbit of higher energy to one of lower energy.

The discovery of the neutron by Chadwick in 1932 completed the well-known picture of nuclear structure: the nucleus consists of protons and neutrons; the protons, each carrying a positive charge equal and opposite to that of the electron, are equal in number to the electrons in the atom; and the various isotopes result from differing numbers of electrically neutral or chargeless neutrons, whose mass is very nearly the same as that of the proton, about 1836 times that of the electron.

The good thing about the electron is that anybody can see it in the electrical arc formed between two separated ends of a conductor if an electrical circuit is broken. It is this seeing and believing that lies at the heart of willing acceptance of later discoveries of subatomic particles none of which can be actually seen but whose existence is proven by inference. The good thing about atomic models is that they work in predicting properties of materials and have resulted in the development of science and technology of electronics which has revolutionized our lives.

Democritus's atoms were no longer the fundamental particles, but instead were composed of varying numbers of truly fundamental particles, which were then established as being the electron, the proton and the neutron. The measurement of the rate of decay in radioactive materials, commonly denoted by the half-life, made it possible to estimate the age of the earth, which is now believed to be about 4.6x10^9 (four thousand six hundred million) years.
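
A minimal sketch of how a measured half-life translates into an age estimate; the decay law N(t) = N0 x (1/2)^(t/T) and the uranium-238 half-life used below are standard textbook items assumed for illustration, not figures taken from the text.

```python
import math

# Radioactive decay law N(t) = N0 * (1/2)**(t / half_life) and the age that
# follows from the fraction of parent atoms still present. The U-238
# half-life and the surviving fraction below are illustrative values.
half_life = 4.47e9             # years, uranium-238 (standard textbook value)

def age_from_fraction(surviving_fraction):
    """Age in years implied by the fraction of parent atoms still present."""
    return half_life * math.log(1 / surviving_fraction, 2)

print(age_from_fraction(0.5))   # one half-life: ~4.47e9 years
print(age_from_fraction(0.49))  # slightly less than half left: ~4.6e9 years
```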

In the year 1900, Wilson, Elster and Geitel discovered that charged electroscopes exhibited a small residual leak in spite of the best insulation. The surrounding air was eliminated as a possible cause since its conductivity was found to be constant. In 1903, Rutherford and Cooke demonstrated, by the use of absorbing screens of iron and lead, that the radiation responsible for the discharge of the electroscope came from outside the instrument. Initially, it was thought that the radiation was the result of the contamination of the Earth's surface and the surrounding air by radioactive materials, which had also been recently discovered. However, when Gockel, Hess and Kolhorster, during 1909-1914, sent sealed ionization chambers up in balloons to heights of 9,000 meters, it was found that the intensity of radiation increased continuously with height, to as much as five to ten times the value at ground level. Hence it was concluded that an extremely penetrative type of radiation, whose origin was entirely beyond our atmosphere, was falling upon the earth from above, and it was called cosmic radiation. The methods developed and used in the study of cosmic radiation include ionization chambers, photographic emulsions and the bubble chamber, developed in 1952 by D.A. Glaser.

One of the greatest achievements of cosmic ray studies was the discovery in 1932 by C.D. Anderson of the "positron", the antiparticle of the electron, which is identical to the electron except that it carries a positive charge of the same magnitude. The positron had, however, been predicted theoretically by P.A.M. Dirac in 1928. Similarly, two years after Yukawa's theoretical prediction, mesotrons or mesons were discovered in 1937 by Neddermeyer, Anderson, Street and Stevenson through a study of cloud chamber tracks of cosmic radiation.

The development of the cyclotron by Prof. E.O. Lawrence in 1932 provided a controllable means of producing beams of high energy accelerated particles that could be used to bombard other particles and nuclei and break them into pieces to render different elements and new fundamental particles. The technique is called transmutation, and has resulted in a proliferation of newly identified fundamental particles of diverse sizes and qualities. The second half of the twentieth century has seen the construction of a number of particle accelerators in various parts of the world; and could well be called the era of the elementary particles, as far as physics is concerned.

The first `atomic pile' -- the forerunner of nuclear reactors -- was activated by Enrico Fermi in December 1942 in a squash court at the University of Chicago. Among other things, it produced a significant quantity of the radioactive element plutonium, which had not existed naturally on earth before then and was hailed as the first synthesized or artificial element beyond the natural ninety-two. This great achievement, unfortunately, created the delusion among many that man had after all succeeded in overtaking nature and was now more powerful than God, if One existed. The conceit showed itself in the pathetic development and tragic use of atomic bombs long before the nuclear reactor could be used to produce useful energy and medicinal radioisotopes, apart from research in nuclear physics.

These studies have resulted in the identification of five basic types of forces that exist in nature: 1) gravity, 2) the electric force, 3) the strong nuclear force, which holds neutrons and protons together in the nucleus, 4) the weak nuclear force, which controls the change in charge states of nucleons, and 5) the color force, which is postulated to operate only on quark particles, about which we shall learn later. Of course, all other forces that we experience in our daily lives are supposed to be the complex consequences of these five. The method by which atomic, nuclear and subatomic particles interact is believed to be the exchange of energy, which can alternatively be considered as particle exchange. The interaction between electric charges is known to proceed by the exchange of photons, which are massless particles that can carry force or energy over an infinite distance at the speed of light. Any particle that travels at the speed of light has to be massless (i.e. have zero rest mass) for the theory of relativity to hold true, as a particle with any initial mass would assume infinite mass at the velocity of light. The hypothetical particle called the graviton carries the gravitational force effect between particles of matter. The neutrino and antineutrino are also chargeless and massless particles which can only be distinguished from photons by having a different spin. Then, of course, there are muons, pions, kaons, lambda particles, sigma particles, omega particles, psi particles and a whole host of antiparticles, some stable and some unstable. The unstable ones last only between 10^-17 and 10^-21 seconds, and some of them are called resonances. "Strange" particles have lifetimes of the order of 10^-9 seconds. There are quark particles, which were initially regarded as the fundamental building blocks of all particles and which possess the property of charm. Two varieties of the quarks have been named truth and beauty. Then, of course, there are monopoles which, as the very name suggests, should be a sort of half-magnet that would be attracted by the north pole of a magnet and repelled by the south pole in any orientation, or vice versa. In fact, it is curious that so far radions (particles of radio frequency radiation) and thermions (particles of heat radiation) do not seem to have made their mark. However, the growing diversity of fundamental particles is by no means a cause for discord. Some physicists are optimistic that it may be possible to identify a group or family of particles which could form the basis of the synthesis of all other particles; and it may even be possible to integrate these particles with the quantum theory and, in turn, with the theory of relativity, thus producing the grand unification and the final answers to all physical questions.



Tuesday, February 15, 2011

Conversation in Heaven - 1

How are things on earth?

Quite good! Our indoor animal project has succeeded beyond our expectations. The human beings as they are called now are getting ready for the next phase of evolution.

What genetic parameters were changed in the previous upgrades?

Five major steps were accomplished:

1. The neck was shortened and both eyes were placed in front, making them vulnerable from behind and creating a need for building shelters. The apes, of course, did not build homes. They just started living in trees.

2. The ability to stand erect and move on two legs in a balanced manner. It freed the arms to hold, carry and manipulate things. Experiments with Lemurs helped in establishing the balancing algorithms.

3. A set of pilot teeth to sample available food and a second, permanent set to suit the available food, e.g. plant, vegetable, meat etc. Designing the inception of a new set of bones in flesh a few years old took quite some time.

4. Delinking of cognitive process from ancestral info-base and cell allocation for new data from experience. This allows dumb parents to have brilliant offspring who innovate and develop new ideas. A partial linkage has been maintained to allow the development of tribal instincts useful for the safety of the young ones. The system needs review to reduce excessive divergence in personalities.

5. Association and correlation of sounds and images, allocation of memory for audio perception, and control of the vocal cords to develop the ability of meaningful speech and literacy.

What have human beings achieved with these new faculties?

(To be continued.)

Thursday, February 10, 2011

Rambo in Lahore

The triple murder in Lahore on 27th January, 2011 is unique because it is being owned by the US Embassy in Pakistan. A video clip released by Dunya TV clearly shows that Raymond Davis, the man accused of two of the murders, did not claim diplomatic immunity before the police on arrest. It seems very likely that he is some sort of secret agent who was on a mission to eliminate two local agents, Faizan Haider and Faheem Ahmed, who may have lost cover or got out of line. If that is true, perhaps it would be safer for Raymond Davis, who has now lost his own cover, to live out the rest of his life in a Pakistani jail.
The interesting thing is that there are at least a dozen assassinations involving Americans in my memory, from Dallas in 1963 to Lahore in 2011, which carry the same signature: "In broad daylight on a busy street." For quite some time I have been wondering if there is a heinous murder squad of influential Americans still living in the cowboy era that orchestrates these murders.
The high-level US delegations that have visited Pakistan in the last few days to secure the release of RD may also have committed an indiscretion and exposed their membership of the murder squad. Time will tell.

Friday, January 21, 2011

Points of Conflict between Islam and other Belief Systems



The prohibition of alcohol and pork in Islam creates a major conflict with other civilizations in which these items are part of daily food and nourishment. The orthodox Muslims regard alcoholic drinks, pork and lard as absolute filth and if they come in contact with any of these, they must wash and cleanse themselves. The reason is simple. There is no reason given in the Qur’an for the prohibition of pork and wine. The mullahs not finding a plausible reason declared these as filthy. At the same time, people who are engaged in a business related to pork and wine such as pig-farming or breweries feel threatened by the growth of Islam as it would make them jobless. The pros and cons of alcoholic drinks are not a matter of personal preferences. A careful examination of the Biblical text indicates that on the night of the Last Supper, Jesus Christ was forced to renounce his opposition to drinking wine and eating bread made with lard (the consumption of which he likened to the consumption of his blood and flesh respectively), and then he was crucified.

Trinity and calling Jesus the son of God is totally unacceptable in Islam. The Qur’an is quite specific in verse 4.171:

O people of the Book! Commit no excesses in your religion: nor say of Allah aught but truth. Christ Jesus the son of Mary was (no more than) an Apostle of Allah and His Word which He bestowed on Mary and a Spirit proceeding from Him: so believe in Allah and His Apostles. Say not "Trinity": desist: it will be better for you: for Allah is One Allah: glory be to him: (for Exalted is He) above having a son. To Him belong all things in the heavens and on earth. And enough is Allah as a Disposer of affairs.

There is an interesting historical conflict between Muslims and Jews. Verses 37:101-107 of the Qur’an read as follows:

101. So We gave him (Abraham) the good news of a boy ready to suffer and forbear.
102. Then when (the son) reached (the age of) (serious) work with him he said: "O my son! I see in vision that I offer thee in sacrifice: now see what is thy view!" (The son) said: "O my father! do as thou art commanded: thou will find me if Allah so wills one practicing Patience and Constancy!"
103. So when they had both submitted their wills (to Allah) and He had laid Him prostrate on his forehead (for sacrifice)
104. We called out to him "O Abraham!
105. "Thou hast already fulfilled the vision!" thus indeed do We reward those who do right.
106. For this was obviously a trial
107. And We ransomed him with a momentous sacrifice:

The Muslims believe that the blessed son was Ismail, the elder of the two sons of Abraham and the event took place near Makkah. On the other hand Jewish and Christian traditions identify the son as Isaac who they believe was the elder and also believe that the event took place near Jerusalem.

The conflict between Islam and Hinduism is also two-fold. Muslims, like people of most other civilizations relish beef, but Hindus worship the cow as a sacred animal and killing or eating the cow is a sacrilege and cardinal sin for them. The fanatic Hindus would not hesitate to kill a man whom they find killing a cow. This happens very often in India and was perhaps one of the considerations of dividing the Subcontinent into two countries. Secondly, idolatry is the very basis of the Hindu religion. Hindu temples are full of deities of all descriptions. Islam strictly prohibits the worship of anything but the Idea of God; there can be no model of God. To a Muslim, a deity is an insult to God and according to the Qur’an and hadith all biblical prophets from Abraham to Muhammad have condemned it.

There are also other conceptual differences between Islamic and secular ideologies. The Muslim temperament regards the selling of rights as a breach of honor. Unfortunately, it is a basic tool of western commerce as developed by Jews and Christians. In the Islamic civilization, any law or agreement or decision that leaves any party helpless or without recourse is anathema. Prophet Muhammad strongly resented debt, and constant indebtedness is just not acceptable in Islam.

The cosmologists, the evolutionists and the geneticists also seem to be at loggerheads with Islam. Whereas Islam teaches that God alone conceived, designed, engineered and created everything that exists, these scientists insist otherwise. The big-bangers would have us believe that at the beginning of time all matter and energy that now forms the universe was confined into an extremely dense sphere only a few miles across, which exploded with such an intensity that all its mass turned into billions of gigantic balls of fiery gases flung billions of miles across in the limitless universe. They don’t give a damn about where that awesome sphere came from or how the electrons and protons were formed and got arranged into such fascinating orbits, or how the value of the Universal Gravitational Constant got established. On the other hand, there are others who insist that from the beginning there were a lot of clouds of chilly hydrogen gas floating around in the universe which condensed and collapsed by gravitational pull with such intensity that they exploded into very hot stars in which other elements were formed by nuclear fusion. Both groups believe that the energy being generated in stars comes from the nuclear fusion of hydrogen, but they forget that deuterium and not hydrogen participates in fusion reactions. If the stars are formed of deuterium then most of the water in the seas on earth should have been heavy water and hydrogen should have been rare. In reality, the opposite is true.

Darwin’s theory of evolution might have found great intellectual appeal a century ago, but in the computer age the earth seems to have slipped from underneath it. If random mutations can work to create such complex living organisms, there is no reason why they would not work on minerals such as quartz and produce elementary logic circuits in grains of sand that exist in literally infinite numbers on earth. Nobody has yet found a grain of sand with a logic circuit in it formed by random mutations.

Geneticists have tried to replace God with the Genome, which they claim contains the complete code for the development and growth of each living being, without being able to explain the mechanism and dynamics of gene replication.

A broad and open conflict between Islam and Christianity will definitely go in favor of Islam. The better-educated Christians would start studying Islam initially to know the enemy. But as they discover that Islam is actually reformed Christianity, the new Crusaders will find very few followers or supporters. As large numbers of Christians embrace Islam in the more affluent western countries, the leadership of the Muslim world will also shift from the oriental mullahs to the new leaders who would emerge there.

However, the worst enemies of Islam at present are Al-Qaida and the Taliban. They are the biggest killers of Muslims and destroyers of their property. They have made every effort to defame and discredit Islam. They have tried to present Islam as a crude and dreadful religion. In fact, they appear to be a reincarnation of the pre-Islamic Arab clans who tried to destroy it in its infancy.

Saturday, January 08, 2011

My Audience

The following is the distribution of my audience as recorded by the statistics service of Blogger:
United States 184
Russia 24
Brazil 16
Ukraine 15
Germany 14
China 10
Pakistan 10
Canada 9
Denmark 7
Slovenia 6
It is interesting that no one in India or the UK is interested in my blog. Or could it be that it is being blocked in some countries?

Saturday, December 18, 2010

3. THE SCIENTIFIC ERA



Unfortunately, the great debate started by Democritus and Aristotle was halted by political upheavals in the region for nearly a millennium, until the scholars of the great Islamic civilization of the Middle East and Central Asia, including Arabia and Persia, revived and translated the great Greek philosophical works. Even today, in the Muslim countries there are schools of thought devoted to the teachings and techniques of Pythagoras and Aristotle, with inevitable adaptations and variations. Nawab Mohammad Yamin Khan in his relatively recent book "God, Soul and Universe in Science and Islam" describes God as a Universal Intelligence that controls all phenomena. Averroes (Ibn Rushd, 1126-1198 C.E) is said to have believed in the eternity of the world (not as a single act of creation but as a continuous process) and in the eternity of a universal intelligence, indivisible but shared in by all. Similar ideas are attributed to Ibnul Arabi. The pioneering work in physics, chemistry, astronomy, mathematics, optics, medicine etc. by such scientists of those days as Ali Ibn Sina (Avicenna, 980-1037 C.E), Al-Khwarizmi (780-880 C.E), Al-Kindi (800-873 C.E), Jaber bin Hayyan (Geber, 721-815 C.E), Omar Khayyam (1050-1123 C.E), Ibnul Haytham (965-1039 C.E) and Al-Beiruni (973-1048 C.E) paved the way for the European scientists and mathematicians who developed the current scientific vocabulary and techniques. The separation and identification of generic chemicals, the discovery of the principles of optics, the development of algebra, geometry, exponents, polynomials, logarithms and the concepts of calculus, the cataloging of botanical species, the indexing of astronomical observations etc. provided a useful information base for scientists in areas where metals and other resources were easily available and working close to fire was relatively comfortable. The initiation of the use of mechanical implements, the exploitation of wind power and the techniques of evaporative cooling were, indeed, pointers to things that were to come in the future. Alchemy, or the attempt to produce gold by combining or altering other substances, was perhaps the embryonic form of the industrial movement whose object is to add value to insignificant materials. Today, apparently worthless materials are processed to the extent that the final products may exceed the worth of their weight in gold. Many Muslim philosophers of the era, commonly known as Sufis, put forward the notion that the universe consists of an interplay of light and darkness in a continuum of geometric space and time. They chose mostly poetry rather than mathematics as their medium of expression, and it takes a complex analytical approach to decode their ideas in contemporary parlance.

From the sixteenth to the nineteenth centuries, Europe saw some very interesting philosophical discussions cutting across the borders of science and religion, on questions such as: whether or not God exists, and if so how He interacts with the material universe; whether or not the soul exists, and if so how it interacts with the body; and whether or not man is free to shape his own destiny. Some of the well-known participants in the debate were Bacon, Hobbes, Descartes, Spinoza, Pascal, Calvin, Luther, Locke, Hume, Leibniz, Voltaire, Rousseau, Kant, Fichte, Schopenhauer, Schelling and Hegel who, together with such others as Nietzsche, Dante and Goethe, produced the bulk of what could be called the modern European philosophical heritage. By the end of the nineteenth century, Marx and Engels had developed their concept of dialectical materialism, which contradicted everything that had made sense until then in the social, religious, political and economic fields. Science was the only undisputed branch of knowledge. The Marxist reaction does not seem altogether incongruous when one considers the fact that many serious intellectuals of those days were convinced that mankind had finally evolved into two distinct species, discernible as the ruling and working classes. Some other examples of this dichotomous illusion are Christian/non-Christian, Muslim/non-Muslim, Jewish/non-Jewish and Aryan/non-Aryan -- both the Indian and German varieties.

During the 16th and early 17th centuries, the works of Copernicus, Brahe, Kepler and Galileo established that the earth and other planets rotate and revolve around the Sun in elliptical orbits, and telescopes revealed the distances and motions of the stars. The principle of gravitational acceleration was deduced, and the invention of the telescope paved the way for the calculation of the distances of various stars from the earth. Simply by measuring the change in the angular bearing of a star over a six-month interval and dividing the diameter of the earth's orbit around the Sun by this (very small) angle, it became possible to calculate the star's distance from the earth or the sun. Apparently fixed stars were used for reference. Today we believe that the universe consists of thousands of galaxies, millions of light years apart, which in turn comprise thousands of star systems, and that Aristotle's crystal spheres do not exist.
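
A rough numerical sketch of the parallax method described above: the star's distance follows from dividing the baseline by the small angular shift measured six months apart. The two-arc-second shift and the value of the astronomical unit used below are illustrative assumptions.

```python
import math

# Parallax sketch as described above: distance ~ baseline / angular shift
# (in radians), valid for very small angles. The two-arc-second shift is an
# illustrative assumption; the astronomical unit is a standard value.
AU = 1.496e11                        # metres, Earth-Sun distance (assumed)
baseline = 2 * AU                    # diameter of Earth's orbit
shift_arcsec = 2.0                   # apparent shift measured six months apart
shift_rad = shift_arcsec * math.pi / (180 * 3600)

distance_m = baseline / shift_rad
print(distance_m)                    # ~3.1e16 m
print(distance_m / 9.46e15)          # ~3.3 light-years (one light-year = 9.46e15 m)
```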

In the period spanning the 17th and early 18th centuries, a group of brilliant scientists and mathematicians, notably Newton, Leibniz, Huygens, Hooke and Boyle, produced calculus, the laws of mechanics, optics and thermodynamics, and observations on magnetism and gravitation, with experimental verification that transformed the face of the earth. Things would no longer happen by the will of the gods or kings, but by the laws of physics and other sciences to be developed in subsequent years; in spite of the fact that Newton himself is stated to have "considered that God had made the universe from `small indivisible grains of matter'." Newton perceived the formula for the gravitational force between objects, also known as `Newton's Law of Gravitation', F = G.m1.m2/d^2, where F is the force between two bodies of masses m1 and m2 a distance d apart, and G is the universal gravitational constant, later measured by Cavendish and his successors from experiments with brass balls and a torsion balance and now believed to be 6.673x10^-11 N m^2/kg^2. From this and the value of the acceleration due to gravity at the earth's surface, 9.81 meters per second per second (9.81 m/s^2), it is possible to calculate the earth's mass as 5.97x10^24 kg. By equating the gravitational force between the Earth and the Sun with the centripetal force (F = m.r.ω^2, where m is the mass of the Earth, r the distance between the Earth and the Sun and ω the angular velocity of the Earth around the Sun), it was possible to determine the mass of the Sun and, similarly, of all other heavenly bodies.
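
The two calculations sketched above, the Earth's mass from g and G and the Sun's mass from the centripetal balance, can be reproduced numerically; the Earth's radius and the Earth-Sun distance used below are standard values assumed for the illustration.

```python
import math

# Mass of the Earth from g = G*M/R^2 and mass of the Sun from the balance
# G*M_sun*m/r^2 = m*r*w^2, as described above. Earth's radius and the
# Earth-Sun distance are standard values assumed for the illustration.
G = 6.673e-11                  # N m^2 / kg^2
g = 9.81                       # m/s^2 at Earth's surface
R_earth = 6.371e6              # m (assumed)
r_orbit = 1.496e11             # m, mean Earth-Sun distance (assumed)
omega = 2 * math.pi / (365.25 * 24 * 3600)   # Earth's angular velocity around the Sun

M_earth = g * R_earth**2 / G
M_sun = r_orbit**3 * omega**2 / G
print(M_earth)                 # ~5.97e24 kg, the figure quoted above
print(M_sun)                   # ~2.0e30 kg
```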

It is common observation that cohesive forces among the molecules of a body weaken as its temperature rises and become neutralized when it turns into liquid. The force further weakens with increase in temperature until it becomes repulsive when the substance turns into a gas, and the repulsion continues to grow with temperature. It is not clear whether gravitation is modified by temperature or thermal repulsion is a different phenomenon.

Although not commonly highlighted, perhaps the greatest contribution of Newton and his contemporaries was the development of the consciousness that momentum and energy -- kinetic and potential -- are essential components in the existence of matter. Momentum is given by the product of mass and velocity, or the integration of force over time; whereas energy is produced by the integration of force over distance. Conversely, the differentiation of momentum with respect to time produces force, and so should the differentiation of energy with respect to length or distance: an idea not yet fully developed, although it is quite apparent in pulling and pushing and in the effects of receiving or emitting radiation at one surface. In simple terms, it is this author's assertion that applying a force at a point is nothing but creating an energy differential across it, and that motion from a higher energy state or location towards a lower energy state or location, in a dynamic or kinematic sense, is similar to the thermodynamic process of convection. Heat itself does not have a physical existence, but is a measure of energy level with a specific transfer mechanism. The same could be said of electricity, except for its sign, i.e. positive and negative. Both momentum and energy are considered indestructible. While momentum can only be transferred from one body to another, energy can manifest itself in many forms, i.e. kinetic, potential, thermal, light, sound, electrical charge, magnetism etc. Although momentum and energy are both derived from velocity, they do not reconcile if all the energy and momentum of one body is transferred to another of different mass (v2/v1 = m1/m2 from momentum, but v2/v1 = (m1/m2)^0.5 from energy).
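
The mismatch noted in the parenthesis above can be shown with numbers: if all the momentum of body 1 is given to body 2 the velocity ratio is m1/m2, but if all the kinetic energy is given the ratio is the square root of that, so both cannot hold at once unless the masses are equal. The masses and velocity below are arbitrary illustrative values.

```python
import math

# Numerical form of the remark above: transferring all momentum gives one
# velocity for body 2, transferring all kinetic energy gives another.
m1, m2 = 2.0, 8.0              # arbitrary masses, kg
v1 = 3.0                       # initial velocity of body 1, m/s

v2_momentum = m1 * v1 / m2                     # from m1*v1 = m2*v2
v2_energy = v1 * math.sqrt(m1 / m2)            # from 0.5*m1*v1**2 = 0.5*m2*v2**2
print(v2_momentum, v2_energy)                  # 0.75 vs 1.5: they disagree unless m1 == m2
```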

Integration over time is like collecting water flowing from a hose into a bucket. The water flow rate is the differential and the bucketful the integral. Now it is not possible to reverse the conditions in the hose and bucket, except on paper by differentiation or in reality by applying a new set of appliances; although a new differential can be created by tilting the bucket. However, in elastic systems a certain amount of reversal in distance is possible by virtue of the stored elastic energy which is often dissipated by oscillation; but reversal in time is not possible. It should be noted that if the flow-rate in the aforementioned hose is not constant, and differs with the time of the day then the amount of water collected in the bucket in say five minutes will be different depending on whether it was collected in the morning, noon or evening of a particular day. It is, therefore, important that the symbol representing the result of integration must indicate the relevant limits of integration. It is interesting that scientists have been so preoccupied with Pythagoras' theorem that they are convinced that velocities can be added vectorially because they can be represented by straight lines and not as the consequence of the summation of kinetic energies related to the velocity components in two perpendicular directions (0.5mV^2 = 0.5mVx^2 + 0.5mVy^2). Similarly, the resultant of numerous forces acting on a body is not a game of arrows but the identification of the sum of components in a unique direction in whose normal plane the projections of all the forces add up to nothing.

Newton's laws of motion are regarded as a breakthrough in science, and paved the way to the eventual landing of man on the moon. However, some very interesting situations arise if one analyses the motion of things as one observes them on earth, and as they would appear from outside the earth. Let us take a seemingly stationary brick of one kilogram mass and apply a force of one newton to it so that it is accelerated at one meter per second per second for a period of two seconds, at about noon on a clear sunny day close to the equator. During these two seconds, the brick will attain a linear velocity of two meters per second and a kinetic energy of two joules. The work done by one newton over a distance of two meters would also be two joules. So everything works out fine. But the earth, due to its daily rotation about its axis and its annual orbital motion around the sun, has a surface linear velocity of nearly 29,336 meters per second, which also applies to any person or object on the earth. Hence, to an observer outside the earth, the brick had an initial velocity of 29,336 meters per second which was increased to 29,338 meters per second. This should result in a kinetic energy increase of (1/2)x29,338^2 - (1/2)x29,336^2 = 58,674 joules. Similarly, the extraterrestrial observer would also notice that the brick moved a total of 58,674 meters during the two seconds, 58,672 meters with the earth and 2 meters on it under the force of one newton, so that the work done should be force times distance, i.e. 58,674 joules. By the same token, an observer beyond the solar system would get even larger figures for the same phenomenon, as he would also add the drift of the solar system. The question, therefore, is: who is right? Obviously, the earth-based observer can support his calculation with the muscular or mechanical energy spent in the exercise, which compares with the two-joule figure. Similarly, the extraterrestrial observer might notice a slowing down of the earth or a change in the entropy of the overall solar system comparable with his own observation of the changes in velocity and kinetic energy of the brick. One could satisfy oneself by saying that whereas the earth-based observer is accounting for events relative to the time when the brick started accelerating, the observer out in space is doing the same with reference to the time when the parent system, the earth, started moving. But such a statement would be neither mathematically nor philosophically convincing. It can only be said with certainty that, in a moving reference frame, if the primary and secondary motions are parallel then the apparent work done equals the apparent change in kinetic energy, whether one considers the relative or the absolute motions. The primary motion defines the positive direction. If the secondary motion is at an angle to the primary motion, then its components parallel to the primary motion and at right angles to it should be treated separately for the application of the conservation principle. Now, could it be that for a very modest expenditure of energy by us, nature sometimes has to pay significantly more in overheads? The problem can be looked at from two different perspectives by analogies. The first analogy is that of delivering a parcel to someone in another city. One can do it oneself by taking a taxi to the airport, then a plane to the other city, then again a taxi to the address where the parcel is to be delivered. This would involve a considerable expenditure, not counting the return journey.
The same result can be achieved much more cheaply by walking to the nearest courier service or post office and making a relatively small payment. The difference in cost seems intriguing until one finds out that the courier who travels to deliver the parcel carries many other parcels with him, or that there is a complex but well organized postal system which makes the job so cheap.
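Returning to the brick example above, the figures quoted by the two observers can be checked with a few lines of Python; the numbers are taken directly from the text.

m = 1.0          # mass of the brick, kg
F = 1.0          # applied force, newtons
t = 2.0          # duration of the push, seconds
a = F / m        # acceleration, m/s^2

# Earth-based observer
v = a * t                              # final velocity: 2 m/s
d_local = 0.5 * a * t**2               # distance moved on the earth: 2 m
ke_local = 0.5 * m * v**2              # change in kinetic energy: 2 J
work_local = F * d_local               # work done: 2 J

# Observer outside the earth, for whom the brick already moves at u
u = 29336.0                            # surface velocity quoted in the text, m/s
d_external = u * t + d_local           # total distance covered: 58,674 m
ke_external = 0.5 * m * (u + v)**2 - 0.5 * m * u**2   # change in kinetic energy: 58,674 J
work_external = F * d_external         # work done by the same one-newton force: 58,674 J

# In each frame the work done matches the change in kinetic energy, but the two
# frames disagree on the size of both, since work and kinetic energy are
# frame-dependent quantities.
print(work_local, ke_local, work_external, ke_external)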



The second analogy is that of driving a car. To a non-technical person it involves filling the fuel tank with petrol, turning the ignition key, moving the steering wheel, manipulating the pedals, changing gears by moving a lever and giving signals by operating the indicators. It all involves such an insignificant amount of effort and energy. Most of us have observed a child who jumps onto the driving seat and enthusiastically swings the steering wheel, hoping that it will move the car, because that is all his observation has been limited to. Similarly, a naive researcher may spend a lifetime studying the behavior of cars and succeed in drawing only a few conclusions about when the blinking lights occur and so on; but everything may seem quite simple if he talks to drivers and mechanics or reads the highway code and the car manual. However, the automobile engineer sees the motion of the car in terms of the flow of currents, the flow of fuel and air into the engine, the movement of the butterfly valve in the carburetor and other valves, the ignition of the mixture in the cylinders, the thrust on the pistons, the torque on the crankshaft, the transmission of forces through the gearbox, connecting rods and differential, the multiplication of forces and movements in the steering mechanism, and so on. The designer has his own considerations in terms of thermodynamics, mass flow, kinematics, aerodynamics and strength of materials, to name a few topics. Could it be that our perception of the universe, in spite of all our scientific knowledge, is of the order of the comfortable driver rather than the automobile engineer who sees many a horsepower at work in the engine? In any case, the above observations are enough to caution us that an immodest manipulation of forces or energies on earth could create a considerably magnified effect somewhere and cause instability in one of the systems essential for our survival.



Another divergence that one comes across in the same era of scientific investigation is in the theories of light as put forward by Newton and Huygens. Newton, on the basis of the observations of the reflection and refraction of light by Kepler, Descartes, Snell and others, suggested that light consists of minute corpuscles given out by a luminous body, different colors of light being assigned to different sizes of the corpuscles. Huygens, on the other hand, studied the diffraction of light and was convinced that light is a sort of wave generated by a luminous body in the ether, an all-pervading elastic substance which filled the entire universe, including the intermolecular spaces of solids. A number of optical phenomena that could not be explained by the corpuscular theory were found to yield to analysis based on the wave theory. Maxwell's development of the theory of electromagnetic radiation in the nineteenth century confirmed the wave theory of light and brought the concept of the ether to the forefront of scientific investigation.

The eighteenth and nineteenth centuries saw prolific developments in physics and chemistry combined with a very rapid rate of innovation and invention. The progress in the basic sciences led to the development of better and more precise equipment and machines, which in turn enhanced the capabilities of scientists to conduct experiments and make new discoveries. Some of the notable scientists of the era include Pascal, Benjamin Franklin, Galvani, d'Alembert, Volta, Faraday, Joule, Charles, Gauss, Carnot, Ampere, Kelvin, Hertz, Mendeleyev, Coulomb, Rankine, Roentgen, and the Curies. Although their work did not directly influence the theories about the universe, the increase in knowledge and understanding of the properties of matter, and the scope of their application, indirectly contributed to the developments that took place in the twentieth century. John Dalton of this period is credited with the formulation of the modern chemical atomic theory. J.J. Thomson, in 1897, discovered the electron and postulated that atoms consisted of two parts: a sphere that contained most of the mass and a uniformly distributed positive charge, with the negatively charged electrons embedded in it. Experiments with Thomson's cathode ray discharge tube and Millikan's oil drop apparatus helped to determine the vital statistics of the electron: a mass of 9.1x10^-31 kilogram, a radius of 2.8178x10^-15 meter and a charge of 1.6x10^-19 coulombs; the energy gained by an electron accelerated through a potential difference of one volt, 1.6x10^-19 joule, is known as one electron volt. The electron thus has a mass density of about 9.721x10^12 kilograms per cubic meter, i.e. a liter of electrons would weigh roughly ten million tons if packed closely. This, of course, could never happen, as like charges repel one another; even the excess charge placed on a conductor gathers in a thin layer at its surface because of this repulsion.
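The density figure can be verified from the mass and radius quoted above; the following sketch simply treats the electron as a uniform sphere, which is the assumption implicit in the text.

import math

m_e = 9.1e-31        # electron mass, kg
r_e = 2.8178e-15     # classical electron radius, m

volume = (4.0 / 3.0) * math.pi * r_e**3      # volume of the sphere, m^3
density = m_e / volume                       # about 9.7x10^12 kg per cubic meter

litre = 1.0e-3                               # one liter expressed in cubic meters
mass_of_litre = density * litre              # about 9.7x10^9 kg, i.e. roughly ten million tons

print(density, mass_of_litre)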

Roemer, in 1676, demonstrated from observations of the eclipses of the satellites of Jupiter that light travels with a finite velocity of the order of 3x10^8 meters per second. Bradley was able to calculate the velocity of light in 1726 from the aberration of the fixed stars, an apparent displacement which he had discovered and which is due to the earth's orbital velocity of about 18.5 miles per second; this yielded a value of 185,000 miles per second. Fizeau, using a toothed wheel to interrupt at very fast intervals a light beam which was reflected back along the same path, measured the velocity of light in air as 3.13x10^8 meters per second. Foucault, using a rotating mirror to produce an image shift, measured the velocity of light to be 298,000 kilometers per second. Using laboratory-sized equipment and a tube filled with water, he found that light traveled more slowly in water than in air. Later, Michelson with similar apparatus showed that the ratio of the velocities of light in air and water was equal to 1.33, in good agreement with the value of the refractive index; the velocity of light in water is thus 2.26x10^8 meters per second. However, Michelson found that the velocity of yellow light was 1.76 times greater in air than in carbon bisulphide, whereas the corresponding refractive index of carbon bisulphide is 1.64. This led to the recognition of dispersion and of the distinction between wave (phase) velocity and group velocity in dispersive media. As a result of these experiments the wave nature of light was regarded as established.
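A brief numerical restatement of the figures in this paragraph (the value of 3x10^8 meters per second for light in air is the approximation used throughout the text):

c_air = 3.0e8                 # approximate velocity of light in air, m/s

n_water = 1.33                # refractive index of water
v_water = c_air / n_water     # about 2.26e8 m/s, the figure quoted above

ratio_cs2 = 1.76              # Michelson's measured air-to-carbon-bisulphide velocity ratio
n_cs2 = 1.64                  # refractive index of carbon bisulphide
# The measured ratio exceeds the refractive index because methods that time light
# pulses follow the group velocity, whereas the refractive index reflects the phase
# velocity; in a dispersive medium such as carbon bisulphide the two differ.

print(v_water, ratio_cs2 / n_cs2)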



However, neither the corpuscular nor the wave theory explained all the phenomena related to light, although each explained some. Similarly, the assumption of the presence of the ether created its own problems. Michelson and Morley (1881-1887) carried out an ingenious experiment to verify the existence or otherwise of the ether by splitting a beam of light into two beams at right angles and then making the two component beams converge at one point to produce interference patterns. But rotating the apparatus to align one or the other beam with the orbital movement of the earth (about 30,000 meters per second), in concurrent and opposite directions, did not cause any change in the interference fringe pattern. It was thus concluded that the ether did not really exist and that the velocity of light is invariant in a given medium or in empty space, where it is maximum and remains constant in all directions. It was also argued that Maxwell's equations did not necessarily require the existence of a hypothetical medium, but could be interpreted in terms of geometrical space. The void was back in place. Until then, time had been regarded as universally invariant and distances were considered absolute. However, Lorentz, starting with Maxwell's line of reasoning and introducing a moving observer, came up with a set of equations that could only be true if time and space were to dilate, or become non-absolute, having different values for different observers.
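Lorentz's equations introduce the factor gamma = 1/sqrt(1 - v^2/c^2) by which moving clocks and lengths are rescaled. As a rough illustration of its size at the earth's orbital speed quoted above:

import math

c = 3.0e8            # velocity of light, m/s (approximate)
v = 30000.0          # earth's orbital velocity, m/s, as used in the experiment above

gamma = 1.0 / math.sqrt(1.0 - (v / c)**2)

# gamma comes out at about 1 + 5x10^-9, so the dilation of time and contraction
# of length at the earth's orbital speed amount to only a few parts in a
# thousand million.
print(gamma - 1.0)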

The realization of the finite velocity of light brought with it the recognition of an obvious element of uncertainty in observed phenomena. A physical change would not be regarded as existing until light from it reached an observer, and even then he would only know the state of affairs that existed when the light reaching him left the subject of observation. Light from the Sun, for example, takes about eight minutes to reach the earth, so we always see the Sun as it was eight minutes earlier. There is, thus, a time lag between the occurrence and the observation of any phenomenon, no matter how small that time lag may be.