Sunday, February 27, 2011

5. THE QUANTUM LEAP IN SCIENCE





The dawn of the twentieth century heralded a new era in intellectual and experimental pursuits of understanding the nature of the universe around us. Not only did man achieve the long cherished dream of flying in the air, but nature was seen from viewpoints previously unimagined. In 1900, Max Planck, in connection with "black body" radiation, proposed the quantum theory of radiation, which was given definitive form by Einstein in 1905. According to the quantum theory, radiation occurs not continuously, permitting all possible values as assumed by the wave theory, but in a discrete quantized form, as integral multiples of an elementary quantum of energy. This means that energy should be considered atomic in nature, like matter. The concept was verified by Compton in connection with the scattering of X-rays, and Planck's hypothesis was extended to the point that radiation is considered corpuscular in nature, made up of discrete quanta which are shot out in space with the velocity of light. This, however, did not nullify the previous proofs that light and other radiations are waves. The quantum is now known as the photon, and its energy is given by hf, where h is Planck's constant, having an accurately determined value of 6.626x10^-34 joule-seconds, and f is the frequency of the radiation. (The quantized harmonic oscillator, by contrast, has energy (n + 1/2)hf, where n is an integer.) (Looking at the construction of the energy expression and the unit of Planck's constant, this author is tempted to suggest that it could be an integral of energy with respect to time. It is dimensional problems like this that have resulted in the introduction of dimensionless expressions in thermodynamics and fluid mechanics.) The photon is said to have no mass of its own, for reasons to be discussed later. The quantum theory has been instrumental in explaining the photoelectric effect, commonly observed in solar cells, which would have worked just as well even without the quantum theory, though their development may not have been as rapid.
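As a small illustration of the energy relation E = hf, here is a sketch in Python; the frequency of green light is chosen purely as an example and is not from the text above:

```python
# Energy of one quantum (photon) of radiation: E = h * f.
PLANCK_H = 6.626e-34  # Planck's constant, joule-seconds

def photon_energy(frequency_hz):
    """Return the energy, in joules, of a single photon of the given frequency."""
    return PLANCK_H * frequency_hz

# A photon of green light (f ~ 5.5e14 Hz) carries only ~3.6e-19 joules,
# which is why the granularity of radiation escaped notice for so long.
print(photon_energy(5.5e14))
```

The tiny size of this number explains why radiation appears continuous at everyday scales, just as matter appears continuous despite being atomic.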

It may be stated here that zeros are basically of two kinds -- the subtractional zero, which can be obtained by repeated subtractions and represents nothingness, and the divisional zero, which is obtained by repeated divisions and represents a negligible fraction. The quantum theory, in a way, defined the ultimate fraction or arithmetical atom that could be treated as a basic unit, thus doing away with the divisional zero that could have diverse negligible values. Mathematical propriety dictates that if a=0 and also b=0, then you cannot write a=b, since it would lead to a/b=1 or 0/0=1, which amounts to making something out of nothing. Indeed, many such situations arise from taking x^0, whose value is unity, as a function or part of a function of x. In fact, a constant numerical term in a function definition indicates a relocation of the origin.

In 1905, Einstein put forward his famous theory of Special Relativity, which asserts the constancy of the speed of light irrespective of the motion of the source or the observer. The two fundamental postulates used in the theory are:

a) Measurement of absolute motion is impossible for an observer stationed on a moving system.

Comment: The measurement of absolute motion requires reference to a permanently fixed or unmoving point within the field of observation, or perhaps the universe, which we do not seem to have discovered yet because we have not looked for it. Equally, it is impossible to conceive of relative motion without assigning absolute motion to the observer, or the observed, or both.

b) The velocity of light is constant, independent of the relative motion of the source and observer.

Comment: It would apply perfectly to waves generated in a vast medium such as water (the ether having been done away with) in which two persons may be rowing their boats; but when you consider a travelling source throwing corpuscular projectiles, observed by another moving observer, it becomes complicated, to say the least. Waves possess a velocity of propagation, which cannot be compared with the velocity of transportation possessed by particles.

He then went on to set up the equations of motion for two reference frames in uniform relative translatory motion with respect to an event visible from both, using independent time and space variables for each system, with the speed of light as a constant. He used a unique technique of implied division by masked zeros, developing equations of the form a×0 = b×0 which, depending on progressive manipulations, can give various solutions of the type a = b×c, where the sign and value of the constant c depend on other constants and intermediate mathematical operations. By dexterous manipulation of these equations he deduced the interrelationships of the time and space variables in the two systems, which turned out to be identical to the Lorentz transformation equations. Later scientists have used different mathematical tools to arrive at the same conclusions. In a nutshell, these equations mean that simultaneous time and distance measurements made from two vantage points moving relative to each other will have different values, and events that appear simultaneous to one observer may not seem so to the other. The two should agree to disagree. However, they do not contradict the fact that a particular event should instantaneously appear exactly the same to any observer, irrespective of his speed and direction, if he occupies a particular location in space at a particular time, e.g. in pictures taken with a high-speed camera.

Since the variables are related to each other by the numerical factor (1 - v^2/c^2)^(1/2), where v is the relative speed of the systems and c the speed of light, and since it is inconceivable for the term to be an imaginary quantity, v cannot be faster than c; in other words, nothing can move faster than light. A corollary of the theory also gives an equation for the addition of velocities which ensures that the resultant will not exceed the velocity of light, so that two photons linearly approaching one another, each travelling with the speed of light, have a relative velocity equal to the velocity of light, not twice it. Such a postulate can only be satisfied by adjusting or redefining the unit of time as half of the normal second with reference to the specific situation. This compression of the second would, of course, be only a mathematical manipulation and would not affect any natural phenomena. The basic flaw in the time dilation concept seems to be that time lag, or observation delay due to the time taken by light to reach an observer (an aberration which changes with distance irrespective of speed of travel), has been confused with real time.
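The addition rule referred to above is w = (u + v)/(1 + uv/c^2). A minimal sketch in Python (the 0.8c figures are illustrative, not from the text):

```python
C = 299_792_458.0  # speed of light, metres per second

def add_velocities(u, v):
    """Relativistic velocity addition: w = (u + v) / (1 + u*v/c^2)."""
    return (u + v) / (1 + u * v / (C * C))

# Two photons approaching head-on: the closing speed is c, not 2c.
print(add_velocities(C, C) == C)
# Two bodies each moving at 0.8c close at about 0.976c, never exceeding c.
print(add_velocities(0.8 * C, 0.8 * C) / C)
```

For small u and v the denominator is essentially 1, recovering the everyday rule w = u + v.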

Another expression derived from the principles of relativity is of the form m = m0/(1 - v^2/c^2)^(1/2), where m0 is the rest mass, i.e. the mass of a body measured when at rest relative to the observer; m is the "effective mass", i.e. the mass of the body measured when it is moving with a velocity v relative to the same observer; and c is the velocity of light. Since v is always less than c, m will always be greater than m0, except when v itself has an imaginary value, which is regarded as impossible. It can be shown that the increase in mass is approximately equal to the kinetic energy divided by c squared.
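The formula is easy to evaluate numerically. A sketch in Python (the electron mass and the 0.99c speed are illustrative values, the latter echoing the beta-ray velocities mentioned below):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def effective_mass(rest_mass, v):
    """m = m0 / (1 - v^2/c^2)^(1/2): mass as measured at relative speed v."""
    return rest_mass / math.sqrt(1.0 - (v / C) ** 2)

M0_ELECTRON = 9.109e-31  # electron rest mass in kg (approximate)

# At everyday speeds the correction is imperceptible; at 0.99c the
# effective mass is roughly seven times the rest mass.
print(effective_mass(M0_ELECTRON, 0.99 * C) / M0_ELECTRON)
```

Note how the factor stays within a fraction of a percent of unity until v approaches a substantial fraction of c, which is why classical mechanics served so well for so long.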

This brings us to the most famous of Einstein's deductions -- the mass-energy relationship given by E = mc^2, where E is the energy equivalent of mass m. In other words, one kilogram of any substance is equivalent to 9x10^16 joules or 2.5x10^10 kilowatt-hours of energy, and vice versa. This means that even if all the earth's known resources of energy production were utilized, no more than a few tons of matter would be produced in a year. However, it is this conversion of matter into energy that produces all the heat in nuclear fission reactors. But of course, the reactors would have worked the same even if the equation was not known. The mass-energy relationship was verified by Kaufmann and Bucherer using the electrons of beta rays from radium with widely different velocities, ranging as high as 0.99c. The mass-energy relationship carries interesting implications for the elastician. If the velocity of electromagnetic waves or light, c, is taken as the equivalent of the velocity of distortional waves in a solid elastic medium, and the energy equivalent of mass as strain energy, then mass becomes a function of strain in the medium.
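The kilowatt-hour figure quoted above follows from straightforward arithmetic, sketched here in Python:

```python
C = 299_792_458.0        # speed of light, m/s
JOULES_PER_KWH = 3.6e6   # 1 kilowatt-hour = 3.6e6 joules

def mass_to_energy(mass_kg):
    """E = m c^2: energy equivalent, in joules, of a given mass."""
    return mass_kg * C * C

e = mass_to_energy(1.0)
print(e)                   # about 9e16 joules per kilogram
print(e / JOULES_PER_KWH)  # about 2.5e10 kilowatt-hours
```

One kilogram of matter is thus worth roughly the annual output of several large power stations, which is why fission reactors need consume only grams of mass.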

The confirmation of special relativity and its corollaries ushered in a new era of natural philosophy. The classical theory of mechanics, in which the sums of forces and moments at a point had to be separately zero and matter and energy had to be conserved individually, was superseded. It was obvious that accurate results could only be obtained by balancing the accounts of the energy equivalent of mass along with the other forms of energy such as kinetic, potential, inertial, deformational etc., or the mass equivalents of the energies, as the case might be.

Rutherford carried out experiments on the scattering of alpha particles, which are the positively charged bare helium nuclei, by thin foils of matter and found that although most of the alpha particles suffered only small deflections due to multiple scattering, there were a certain number that were scattered through much larger angles. To accommodate this phenomenon he proposed, in 1911, the nuclear atom model, in which most of the mass and all the positive charge is concentrated at the center, forming the nucleus, and the electrons revolve in circles around the nucleus in a manner similar to the planets revolving around the Sun. Although this model solved Rutherford's immediate problem, it was found to be in conflict with the electromagnetic theory, as revolving electrons must emit radiation at all times and constantly consume energy, which would compromise the stability of the atomic structure. Two years later, in 1913, Niels Bohr applied the quantum theory to the Rutherford atom model and developed his theory of atomic structure, which is now widely accepted in a further modified form as it resolves many of the dilemmas faced by earlier physicists. The theory is based on two postulates, reproduced below as stated by Rajam:

i) The first postulate, referring to the electronic structure, states that electrons cannot revolve in all possible orbits as suggested by the classical theory, but only in certain definite orbits satisfying quantum conditions. These orbits may, therefore, be considered as privileged orbits, non-radiating paths of the electron.

ii) The second postulate, referring to the origin of spectral lines, states that radiation of energy takes place only when an electron (instantaneously) jumps from one permitted orbit to another (without existing in any intermediate location - hence the phrase quantum leap). The energy thus radiated, which is equal to the difference in the energies of the two orbits involved, must be a quantum of energy hf.

The mathematical deduction based on the above postulates produced quantitative results which agreed with available experimental data, and thus the assumptions were accepted as physical laws.

Bohr's simple theory of circular orbits, in spite of its many successes, was unable to explain certain fine details of the hydrogen spectrum, such as the fine structure of the lines of the Balmer series, which suggested that for each quantum number there might be several orbits of slightly different energies. Sommerfeld, in 1915, modified Bohr's theory by introducing the ideas of motion of electrons in elliptical orbits and of the consequent relativistic variation of the mass of the electron. By applying the relativistic mass variation, he found the path of the electron to be a complicated curve known as a rosette -- a precessing ellipse, doubly periodic. In interpreting the observed fine structure of spectral lines, he was forced to introduce a selection rule to preclude some of the orbits permitted by his mathematics. The Rutherford-Bohr-Sommerfeld atom model was only a partial success, as it could predict only three out of the five components of the H-alpha line. Nevertheless, Sommerfeld was able to underscore the concept of "spatial quantization", i.e. not only distances and forms, but also directions, must take discrete values under permissible conditions. Nature's limitations were getting more and more exposed. The void was beginning to develop holes. In 1925, Uhlenbeck and Goudsmit, in order to explain satisfactorily the intricate spectral phenomena such as the fine structure and the Zeeman effect, i.e. the splitting of spectral lines under the influence of an applied magnetic field, put forward the hypothesis of the spinning electron, with a spin quantum number which is always 1/2 for the electron although most other quantum numbers are integral. Planck's joule-seconds had to be multiplied by something having time in the denominator to give straightforward energy.

Thus came into being the quantized vector atom model, which was further developed by Pauli, Stern and Gerlach with the help of a host of others. Although this is not the last word in atom models, we will stop here and take a look at some of the parallel developments.

Minkowski, using the principles of special relativity and the four-dimensional geometry of Riemann, was able, in 1908, to present to the world a new and unique concept of a four-dimensional space-time continuum, represented mathematically by the quadratic differential form ds^2 = dx^2 + dy^2 + dz^2 - c^2dt^2. The square of time has no practical significance by itself, but multiplied by the square of a velocity it represents an area like the other terms of the equation. Similarly, in the Cartesian coordinate system x*y, y*z and z*x represent areas but x*x means nothing. In terms of 20th century physical science this means that the minute displacement represented by ds is not completely described by the three coordinate dimensions dx, dy and dz, as in classical Euclidean geometry, but in addition by the time dimension dt, forming the fourth coordinate of relativistic geometry. The quantity ds is renamed a "point event" and is not exactly a distance in space, but an element in the four-dimensional Minkowski space-time continuum which is `simultaneously finite and boundless.' However, in this continuum, matter is able to assert itself, as "the pressure of matter distorts the curvature of the four dimensional space-time continuum which is the physical universe." Einstein correlated the idea with the balancing of centripetal and centrifugal forces in a string and stone, respectively, which are attached and whirled, giving the stone a curvilinear motion. In 1915, he extended his special theory of relativity, which was limited to systems in uniform linear motion, to encompass systems moving in any way, even with accelerated velocity, and in particular to the special case of accelerated motion that is involved in the most common phenomenon of gravitation.
Using, in turn, Minkowski's model, Einstein in his General Theory of Relativity worked out the law governing the motion of a body in a distorted and curved space-time continuum, using the advanced mathematical tool of tensor calculus. He was able to show that Newton's inverse square law of gravitation is a first approximation of his relativistic law, which is claimed to have been successfully tested in several astrophysical phenomena, such as the advance of the perihelion of the planet Mercury, the shift of spectral lines in the light received from the companion of Sirius, etc. Now, if space can be pulverized (quantized) and curved under pressure, it can hardly be a void; it seems more like a personality.
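What gives the space-time interval its physical meaning is that all observers agree on it, however they move. This can be checked numerically; here is a sketch in Python restricted to one space dimension, with an arbitrary event separation and boost speed chosen purely for illustration:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def interval_squared(dx, dt):
    """Minkowski interval in one space dimension: ds^2 = dx^2 - c^2 dt^2."""
    cdt = C * dt
    return dx * dx - cdt * cdt

def lorentz_boost(dx, dt, v):
    """Transform an event separation into a frame moving at speed v."""
    g = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
    return g * (dx - v * dt), g * (dt - v * dx / (C * C))

dx, dt = 5.0e8, 1.0                        # an arbitrary separation
dxp, dtp = lorentz_boost(dx, dt, 0.6 * C)
# Both observers measure different dx and dt, yet compute the same interval:
print(interval_squared(dx, dt))
print(interval_squared(dxp, dtp))
```

The two printed values agree to within floating-point rounding, illustrating why ds, and not dx or dt separately, is the geometrically meaningful quantity.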

With the advent of the quantum theory, physicists were obliged to admit a dual nature, wave and particle, for radiant energy, for the simple reason that the Planck energy equation contains a frequency term for the photon, which is assumed to be a massless particle. In 1924, Louis de Broglie took a bold step forward and suggested that matter, like radiation, has a dual nature, i.e. particles believed to possess discrete rest mass, such as molecules, atoms, protons, electrons and the like, might exhibit wave-like properties under appropriate circumstances. There was a certain amount of initial pessimism about the theory of matter waves, which later came to be known as de Broglie waves, because of their difference from electromagnetic waves: their velocity of propagation is given by u = c^2/v, where v is the velocity of the moving matter and c the velocity of light. The de Broglie waves have a wavelength L = h/mv, where h is Planck's constant and m the mass of the matter particle. However, the discovery of the diffraction of electrons in 1927 by Davisson and Germer provided experimental confirmation of the theory. Dempster (1927) and Estermann (1930) obtained diffraction effects with hydrogen and helium atoms, thus lending support to the matter wave theory. The only snag is that u can be greater than c, since v is less than c, which, oddly enough, Einstein did not mind. So, in view of the rule that wave velocity is the product of wavelength and frequency, it may be said that every particle of matter has a characteristic frequency given by f = mc^2/h, for the notations defined earlier, or a characteristic time period of 1/f.
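The two formulas above are simple to evaluate. A sketch in Python (the electron mass and the speed of a few million metres per second are illustrative figures, chosen to show why crystals diffract electrons):

```python
PLANCK_H = 6.626e-34  # Planck's constant, joule-seconds
C = 299_792_458.0     # speed of light, m/s

def de_broglie_wavelength(mass_kg, v):
    """Matter-wave wavelength: L = h / (m v)."""
    return PLANCK_H / (mass_kg * v)

def characteristic_frequency(mass_kg):
    """The frequency f = m c^2 / h associated with a particle's rest mass."""
    return mass_kg * C * C / PLANCK_H

M_ELECTRON = 9.109e-31  # electron rest mass, kg (approximate)

# An electron at ~4e6 m/s has a wavelength of roughly 1.8e-10 m --
# comparable to atomic spacings, which is why crystal lattices act as
# diffraction gratings for electron beams.
print(de_broglie_wavelength(M_ELECTRON, 4.0e6))
```

The same arithmetic shows why matter waves went unnoticed for everyday objects: a thrown ball's wavelength is dozens of orders of magnitude below anything measurable.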



Sunday, February 20, 2011

4. ELEMENTARY PARTICLES



Only a few months after the discovery of X-rays by Roentgen in 1895, Henri Becquerel accidentally discovered the phenomenon of radioactivity in uranium sulphate. The invisible Becquerel rays, which possessed the ability to affect photographic plates through their wrapping, were soon found to be emitted by a number of other heavy elements such as thorium, actinium etc. But the most famous was the discovery of the radioactive elements radium and polonium by the Curies in 1898. The radiation emitted by radioactive substances was soon identified to consist of three different kinds of emissions, named alpha, beta and gamma rays after the Greek letters. The alpha radiation was found to consist of positively charged particles carrying a positive electric charge of magnitude twice that of the electron and having a mass four times that of the hydrogen atom. It was immediately concluded, and is still believed, that the alpha particle is the same as the bare nucleus of a helium atom which has lost both its electrons. It is emitted at a velocity of about one fifteenth that of light and is stopped by a 0.1 millimeter thick foil of aluminum.

Beta rays were found to consist of negatively charged particles of very light mass, emitted with velocities varying between one third and 99.8 percent of the velocity of light, and with a penetrative power nearly one hundred times that of the alpha particles. The variation of their mass with velocity was later accounted for by the special theory of relativity, and the particles were identified as electrons. Gamma rays have been found to be electromagnetic radiations similar to X-rays, but of extremely short wavelength, of the order of one angstrom or 10^-10 meters, with an ability to penetrate several centimeters of lead.

In 1911, Rutherford came forth with his own physical model for subatomic structure, as an interpretation of some unexpected experimental results. In it, the atom is made up of a central charge, now called the nucleus, surrounded by a cloud of orbiting electrons. In subsequent years, rules were established for the numbers of electrons in various orbits, and it was explained that electromagnetic radiation was released by electrons shifting between orbits, releasing energy when they moved from an orbit of higher energy to one of lower energy.

The discovery of the neutron by Chadwick in 1932 completed the well known picture of the nuclear structure consisting of protons and neutrons. The protons, carrying a positive charge equal and opposite to that of the electron, are the same in number as the electrons in the atom; and the various isotopes are the result of differing numbers of electrically neutral or chargeless neutrons, whose mass is very nearly the same as that of the proton, which is about 1836 times that of the electron.

The good thing about the electron is that anybody can see it in the electrical arc formed between two separated ends of a conductor when an electrical circuit is broken. It is this seeing and believing that lies at the heart of the willing acceptance of later discoveries of subatomic particles, none of which can actually be seen but whose existence is proven by inference. The good thing about atomic models is that they work in predicting properties of materials, and they have resulted in the development of the science and technology of electronics, which has revolutionized our lives.

Democritus's atoms were no longer the fundamental particles, but instead were composed of varying numbers of fundamental particles, which were then established as being the electron, the proton and the neutron. The measurement of the rate of decay in radioactive materials, commonly denoted by half-life, made it possible to estimate the age of the earth, which is now believed to be about 4.6x10^9 (four thousand six hundred million) years.
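The dating argument rests on simple exponential decay: after each half-life, half of whatever remains has decayed. A sketch in Python, with the uranium-238 half-life quoted as an illustrative example (it is not mentioned in the text above):

```python
def remaining_fraction(elapsed_years, half_life_years):
    """Fraction of a radioactive sample left after the elapsed time:
    N/N0 = (1/2)^(t / T_half)."""
    return 0.5 ** (elapsed_years / half_life_years)

# Uranium-238 has a half-life of about 4.47e9 years -- of the same order
# as the age of the earth, so roughly half of the primordial U-238 survives.
print(remaining_fraction(4.6e9, 4.47e9))
```

Comparing the measured ratio of a parent isotope to its decay products against this curve yields the elapsed time, which is how ages of billions of years are inferred from rocks.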

In the year 1900, Wilson, Elster and Geitel discovered that charged electroscopes exhibited a small residual leak in spite of the best insulation. The surrounding air was eliminated as a possible cause, since its conductivity was found to be constant. In 1903, Rutherford and Cooke demonstrated, by the use of absorbing screens of iron and lead, that the radiation responsible for the discharge of the electroscope came from outside the instrument. Initially, it was thought that the radiation was the result of the contamination of the earth's surface and the surrounding air by radioactive materials, which had also been recently discovered. However, when Gockel, Hess and Kolhorster, during 1909-1914, sent sealed ionization chambers up in balloons to heights of 9,000 meters, it was found that the intensity of radiation increased continuously with height, to as much as five to ten times the value at ground level. Hence it was concluded that an extremely penetrative type of radiation, whose origin was entirely beyond our atmosphere, was falling upon the earth from above, and it was called cosmic radiation. The methods developed and used in the study of cosmic radiation include ionization chambers, cloud chambers, photographic emulsions and bubble chambers, the last developed in 1952 by D.A. Glaser.

One of the greatest achievements of cosmic ray studies was the discovery in 1932 by C.D. Anderson of the positron, the antiparticle of the electron, which is identical to the electron except that it carries a positive charge of the same magnitude. The positron had, however, been predicted theoretically by P.A.M. Dirac in 1928. Similarly, two years after Yukawa's theoretical prediction, mesotrons or mesons were discovered in 1937 by Neddermeyer, Anderson, Street and Stevenson through a study of cloud chamber tracks of cosmic radiation.

The development of the cyclotron by Prof. E.O. Lawrence in 1932 provided a controllable means of producing beams of high energy accelerated particles that could be used to bombard other particles and nuclei and break them into pieces, yielding different elements and new fundamental particles. The technique is called transmutation, and it has resulted in a proliferation of newly identified fundamental particles of diverse sizes and qualities. The second half of the twentieth century saw the construction of a number of particle accelerators in various parts of the world, and could well be called the era of the elementary particles, as far as physics is concerned.

The first `atomic pile' -- the forerunner of nuclear reactors -- was activated by Enrico Fermi in December 1942 in a squash court at the University of Chicago. Among other things, it produced a significant quantity of the radioactive element plutonium, which had not been found in nature before then and was hailed as the first synthesized or artificial element beyond the natural ninety-two. This great achievement, unfortunately, created the delusion among many that man had after all succeeded in overtaking nature and was now more powerful than God, if One existed. The conceit showed itself in the pathetic development and tragic use of atomic bombs well before the nuclear reactor could be used to produce useful energy and medicinal radioisotopes, apart from research in nuclear physics.

These studies have resulted in the identification of five basic types of forces that exist in nature: 1) gravity, 2) the electric force, 3) the strong nuclear force, which holds neutrons and protons together in the nucleus, 4) the weak nuclear force, which controls the change in charge states of nucleons, and 5) the color force, which is postulated to operate on quark particles only, about which we shall learn later. Of course, all other forces that we experience in our daily lives are supposed to be the complex consequences of these five. The method by which atomic, nuclear and subatomic particles interact is believed to be the exchange of energy, which can alternatively be considered as particle exchange. The interaction between electric charges is known to proceed by the exchange of photons, which are massless particles that can carry force or energy over an infinite distance at the speed of light. Any particle that travels at the speed of light has to be massless (i.e. have zero rest mass) in order for the theory of relativity to hold true, as a particle with any initial mass would assume infinite mass at the velocity of light. The hypothetical particle called the graviton carries the gravitational force effect between particles of matter. The neutrino and antineutrino are also chargeless and massless particles, which can only be distinguished from photons by having a different spin. Then, of course, there are muons, pions, kaons, lambda particles, sigma particles, omega particles, psi particles and a whole host of antiparticles, some stable and some unstable. The unstable ones last only between 10^-17 and 10^-21 seconds, and some of them are called resonances. "Strange" particles have lifetimes of the order of 10^-9 seconds. There are quark particles, which were initially regarded as the fundamental building blocks of all particles and which possess the property of charm. Two varieties of the quarks have been named truth and beauty.
Then of course, there are monopoles, which as the very name suggests should be a sort of half-magnet that would be attracted by the north pole of a magnet and repelled by the south pole in any orientation, or vice versa. In fact, it is curious that so far radions (particles of radio frequency radiation) and thermions (particles of heat radiation) do not seem to have made their mark. However, the growing diversity of fundamental particles is by no means a cause for discord. Some physicists are optimistic that it may be possible to identify a group or family of particles which could form the basis of the synthesis of all other particles; and it may even be possible to integrate these particles with the quantum theory and, in turn, with the theory of relativity, thus producing the grand unification and the final answers to all physical questions.



Tuesday, February 15, 2011

Conversation in Heaven - 1

How are things on earth?

Quite good! Our indoor animal project has succeeded beyond our expectations. The human beings as they are called now are getting ready for the next phase of evolution.

What genetic parameters were changed in the previous upgrades?

Five major steps were accomplished:

1. The neck was shortened and both eyes were placed in front, making them vulnerable from behind and creating a need for building shelters. The apes, of course, did not build homes. They just started living in trees.

2. The ability to stand erect and move on two legs in a balanced manner. It freed the arms to hold, carry and manipulate things. Experiments with Lemurs helped in establishing the balancing algorithms.

3. A set of pilot teeth to sample available food, and a second permanent set to suit the available food, e.g. plant, vegetable, meat etc. Designing the inception of a new set of bones in flesh only a few years old took quite some time.

4. Delinking of cognitive process from ancestral info-base and cell allocation for new data from experience. This allows dumb parents to have brilliant offspring who innovate and develop new ideas. A partial linkage has been maintained to allow the development of tribal instincts useful for the safety of the young ones. The system needs review to reduce excessive divergence in personalities.

5. Association and correlation of sounds and images, allocation of memory for audio perception, and control of the vocal cords to develop the ability of meaningful speech and literacy.

What have human beings achieved with these new faculties?

(To be continued.)

Thursday, February 10, 2011

Rambo in Lahore

The triple murder in Lahore on 27th January, 2011 is unique because it is being owned by the US Embassy in Pakistan. A video clip released by Dunya TV clearly shows that Raymond Davis, the accused in two of the murders, did not claim diplomatic immunity before the police on arrest. It seems very likely that he is some sort of secret agent and was on a mission to eliminate two local agents, Faizan Haider and Faheem Ahmed, who may have lost cover or got out of line. If that is true, perhaps it would be safer for Raymond Davis, who has now lost his own cover, to live out the rest of his life in a Pakistani jail.
The interesting thing is that there are at least a dozen assassinations involving Americans in my memory, from Dallas in 1963 to Lahore in 2011, which carry the same signature: "In broad daylight on a busy street." For quite some time I have been wondering if there is a heinous murder squad of influential Americans, still living in the cowboy era, that orchestrates these murders.
The high level US delegations that have visited Pakistan in the last few days to secure the release of RD may also have committed an indiscretion and exposed their membership of the murder squad. Time will tell.