Professor Chen Ning Yang: The Conceptual Origins of Maxwell's Equations and Gauge Theory

At the very start of 2018, Professor Chen Ning Yang, a towering figure in physics, opened the year with a lecture!

At 4 pm on January 3, 2018, the Academic Exchange Committee of the National Astronomical Observatories hosted a special New Year report:

Chen Ning Yang: The Conceptual Origins of Maxwell's Equations and Gauge Theory

http://view.inews.qq.com/a/KEP2017121802038600

Tencent Video broadcast the lecture live, and the atmosphere in the hall was enthusiastic. Professor Yang reviewed the historical development of electromagnetic field theory, explained how gauge field theory took shape, and traced it through to the great achievements of the Standard Model. He noted that the Higgs boson discovered at the LHC completed the last experimental verification of the SM, seemingly bringing the SM to a satisfying close.

This work is highly instructive for young people in China. His answers to the questions raised by young attendees were especially pointed and insightful, and the lecture offers considerable guidance and inspiration for the Chinese physics community.

Modern physics is one of humanity's greatest achievements, and Professor Yang is among the greatest physicists of our time. His contributions are no less significant than those of other Nobel laureates such as Einstein, Pauli, and Weinberg, because he took part in the very process by which modern physics was formed and made outstanding personal contributions to it. He therefore has an excellent understanding of what modern physics is.

Modern physics has two branches: the modern classical theories and the contemporary Gong theory. Professor Yang is the representative figure who brought the former branch to its peak. His account of the historical origins shows precisely what physics should be. Of course, most young physicists who are still learning and growing will not fully grasp all of his points.

The epistemology of traditional physics starts from experimental data obtained phenomenologically and abstracts them, with the help of mathematics, into fundamental equations. In other words, mathematics is merely a tool or a language for traditional physics, whereas Professor Yang ranks mathematics alongside physics as a fundamental way of revealing the universe. His brilliance also lies in his effort to probe the abstract physical and mathematical principles underlying the phenomena. This work makes full use of his mathematical strengths, though it seems to lack a certain philosophical elevation. Of course, every abstract principle in physics faces one bottom line: it must accept the final judgment of the natural world. If a theory does not agree with objective nature, then no amount of beautiful mathematics or profound philosophy can play any role in physics.

Gong theory is the highest achievement of modern physics. Its epistemology starts from a first principle of philosophical stature and, using pre-designed axioms and definitions, constructs a fictitious universe (FU). The FU is then compared with the actual results observed in the objective natural world.
Likewise, the FU must accept the final verdict of the natural world. Any theory that fails this final verdict is nothing but rubbish and will be discarded by history.

The key point is that Gong theory matches all the important results of modern physical observation, and as more observational data arrive in the future, they will further confirm that Gong theory agrees with the observed results of the objective world.

For example: 1) In the coming years, further study of the Higgs particle at the LHC will show that this boson is the vacuum boson of Gong theory. 2) In the coming years, LIGO and quantum-gravity research will confirm the unified-force equation of Gong theory. 3) Future work on fiber-bundle structure in mathematics, astrophysical observations, and precision CKM measurements will reveal the exquisite structure of the Gong-string universe, whose Gong-theory calculations have long been on the table.

We predict that these items will be fully verified within ten years. (This prediction was made two years ago; strictly speaking, the above will therefore be fully verified within the next eight years.) Gong theory will thus lead the development of modern physics.

Appendix:

In June 2014, Professor Yang gave a similar talk at the 8th Global Chinese Physicists Conference, held in Singapore:

——————————————————–

The conceptual origins of Maxwell’s equations and gauge theory

Chen Ning Yang

 Already in Faraday’s electrotonic state and Maxwell’s vector potential, gauge freedom was an unavoidable presence. Converting that presence to the symmetry principle that underpins our successful standard model is a story worth telling.

It is often said that after Charles Augustin de Coulomb, Carl Friedrich Gauss, André-Marie Ampère, and Michael Faraday discovered the four experimental laws concerning electricity and magnetism, James Clerk Maxwell added the displacement current and thereby created the great set of Maxwell's equations. That view is not entirely wrong, but it obscures the subtle interplay between sophisticated geometrical and physical intuitions that led not only to the replacement of "action at a distance" by field theory in the 19th century but also, in the 20th century, to the very successful standard model of particle physics.

In 1820 Hans Christian Oersted (1777–1851) discovered that an electric current would always cause magnetic needles in its neighborhood to move. The discovery electrified the whole of Europe and led to the successful mathematical theory of "action at a distance" by Ampère (1775–1836). In England, Faraday (1791–1867) was also greatly excited by Oersted's discovery. But he lacked the mathematical training needed to understand Ampère. In a letter to Ampère dated 3 September 1822, Faraday lamented, "I am unfortunate in a want of mathematical knowledge and the power of entering with facility into abstract reasoning. I am obliged to feel my way by facts closely placed together." 1

Faraday’s “facts” were his experiments, both published and unpublished. During a period of 23 years, 1831–54, he compiled the results of those experiments into three volumes, called Experimental Researches in Electricity, which we shall refer to as ER (figure 1). A most remarkable thing is that there was not a single formula in this monumental compilation, which showed that Faraday was feeling his way, guided only by geometric intuition without any precise algebraic formulation.

Figure 1. The three volumes of Faraday's Experimental Researches in Electricity, published separately in 1839, 1844, and 1855. On the right is the first page of the first volume. (Stefan Kaben, Niels Bohr Library and ESVA)

 

Figure 2 shows a diagram from Faraday's diary, dated 17 October 1831, the day he found that moving a bar magnet either into or out of a solenoid would generate electric currents in the solenoid. Thus he had discovered electric induction, which, as we know, eventually led to making big and small generators of electricity and thereby changed the technological history of mankind.

Figure 2. Michael Faraday in an etched portrait. The inset shows the diagram Faraday drew in his diary on 17 October 1831, the day he discovered induction. (AIP Emilio Segrè Visual Archives, E. Scott Barr Collection)


Throughout the volumes of ER, Faraday explored variations of his induction experiment: He changed the metal used for winding the solenoid, immersed the solenoid in various media, created induction between two coils, and so on. He was especially impressed by two facts—namely, that the magnet must be moved to produce induction, and that induction seemed to produce effects perpendicular to the cause.

Feeling his way toward an understanding of induction, he introduced two geometric concepts: magnetic lines of force and the electrotonic state. The former was easily visualized by sprinkling iron filings around magnets and solenoids. Those lines of force are today designated by the symbol H, the magnetic field. The latter, the electrotonic state, remained indefinite and elusive throughout the entire ER. It first appeared early in volume 1, section 60, without any precise definition. Later it was variously called the peculiar state, state of tension, peculiar condition, and other things. For example, in section 66 he wrote, "All metals take on the peculiar state," and in section 68, "The state appears to be instantly assumed." More extensively, we read in section 1114,

If we endeavour to consider electricity and magnetism as the results of two faces of a physical agent, or a peculiar condition of matter, exerted in determinate directions perpendicular to each other, then, it appears to me, that we must consider these two states or forces as convertible into each other in a greater or smaller degree.

When Faraday ceased his compilation of ER in 1854 at age 63, his geometric intuition, the electrotonic state, remained undefined and elusive.

Maxwell

Also in 1854, Maxwell (1831–79) graduated from Trinity College. He was 23 years old and full of youthful enthusiasm. On February 20 he wrote to William Thomson,

Suppose a man to have a popular knowledge of electric show experiments and a little antipathy to Murphy's Electricity, how ought he to proceed in reading & working so as to get a little insight into the subject wh[sic] may be of use in further reading?

If he wished to read Ampere Faraday &c how should they be arranged, and at what stage & in what order might he read your articles in the Cambridge Journal? 2

Thomson (later Lord Kelvin, 1824–1907) was a prodigy. At the time, he had already occupied the Chair of Natural Philosophy at Glasgow University for eight years. Maxwell had chosen well: Earlier in 1851 Thomson had introduced what we now call the vector potential A to express the magnetic field H through

H = ∇ × A,          (1)

an equation that would be of crucial importance for Maxwell, as we shall see.

We do not know how Thomson responded to Maxwell's inquiry. What we do know is that, amazingly, only a little more than one year later, Maxwell was able to use equation (1) to give meaning to Faraday's elusive electrotonic state and publish the first of his three great papers, which revolutionized physics and forever changed human history. Those three papers together with others by Maxwell had been edited by William Davidson Niven in 1890 into a two-volume collection, Scientific Papers of James Clerk Maxwell, which we shall refer to as JM.

Maxwell's first paper, published in 1856, is full of formulas and therefore easier to read than Faraday's ER. Its central ideas are contained in part 2, which has as its title "Faraday's Electro-tonic State." On page 204 in this part 2 we find an equation that in today's vector notation is

E = −∂A/∂t,          (2)

where A is Faraday’s electrotonic intensity.

Three pages later, on page 207 of JM, the result is restated in words:

Law VI. The electro-motive force on any element of a conductor is measured by the instantaneous rate of change of the electro-tonic intensity on that element, whether in magnitude or direction.

The identification of Faraday’s elusive idea of the electrotonic state (or electrotonic intensity, or electrotonic function) with Thomson’s vector potential A defined in equation (1) above is, in my opinion, the first great conceptual breakthrough in Maxwell’s scientific research: Taking the curl of both sides of equation (2) we obtain

∇ × E = −∂H/∂t,          (3)

which is the modern form of Faraday’s law. Another modern form of it is

∮ E · dl = −(d/dt) ∫ H · dσ,          (4)

where dl is a line element and dσ is an element of area. Maxwell did not write Faraday's law in either of the two forms in equations (3) and (4) because his aim was to give precise definition to Faraday's elusive concept of the electrotonic state. Indeed, the concept of the vector potential A remained central in Maxwell's thinking throughout his life.
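As a quick check of that step, the curl can be taken componentwise; the following short SymPy sketch (an illustration added here, not part of the original article) confirms that equations (1) and (2) together imply equation (3):

import sympy as sp

x, y, z, t = sp.symbols('x y z t')
Ax, Ay, Az = [sp.Function(n)(x, y, z, t) for n in ('Ax', 'Ay', 'Az')]

def curl(F):
    Fx, Fy, Fz = F
    return (sp.diff(Fz, y) - sp.diff(Fy, z),
            sp.diff(Fx, z) - sp.diff(Fz, x),
            sp.diff(Fy, x) - sp.diff(Fx, y))

A = (Ax, Ay, Az)
E = tuple(-sp.diff(c, t) for c in A)   # equation (2): E = -dA/dt
H = curl(A)                            # equation (1): H = curl A

# equation (3): curl E = -dH/dt, so every component of curl(E) + dH/dt vanishes
print([sp.simplify(ce + sp.diff(h, t)) for ce, h in zip(curl(E), H)])   # [0, 0, 0]

Equation (4) then follows by integrating (3) over a surface and applying Stokes' theorem.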

Maxwell was aware of what we now call the gauge freedom in equations 1–3, namely, that the gradient of an arbitrary scalar function can be added to A without changing the result. He discussed that freedom explicitly in theorem 5 on page 198 of JM. So which gauge did he choose for A in equations 1–3? He did not touch on that question, but left it completely indeterminate. My conclusion: Maxwell implied that there exists a gauge for A in which equations 1–3 are satisfied.
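That freedom is easy to verify symbolically, since the curl of a gradient vanishes identically; a minimal SymPy sketch (again only an illustration) is:

import sympy as sp

x, y, z = sp.symbols('x y z')
chi = sp.Function('chi')(x, y, z)          # an arbitrary smooth scalar function
gx, gy, gz = sp.diff(chi, x), sp.diff(chi, y), sp.diff(chi, z)

# the three components of curl(grad chi) vanish identically,
# so A and A + grad(chi) give the same H = curl A in equation (1)
print(sp.diff(gz, y) - sp.diff(gy, z),
      sp.diff(gx, z) - sp.diff(gz, x),
      sp.diff(gy, x) - sp.diff(gx, y))     # 0 0 0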

Maxwell was also fully aware of the importance of his identification of Faraday's electrotonic intensity with Thomson's A. He was afraid that Thomson might take offense concerning the priority question. He therefore concluded part 2 of his first paper with the following remark:

With respect to the history of the present theory, I may state that the recognition of certain mathematical functions as expressing the "electro-tonic state" of Faraday, and the use of them in determining electro-dynamic potentials and electro-motive forces is, as far as I am aware, original; but the distinct conception of the possibility of the mathematical expressions arose in my mind from the perusal of Prof. W. Thomson's papers. (JM, page 209)

Maxwell’s vortices

Five years after completing his first paper, Maxwell began publishing his second, which appeared in four parts during 1861 and 1862. In contrast to the earlier paper, the second is very difficult to read. The main idea of the paper was to account for electromagnetic phenomena “on the hypothesis of the magnetic field being occupied with innumerable vortices of revolving matter, their axes coinciding with the direction of the magnetic force at every point of the field,” as we read in JM on page 489.

Maxwell gave an explicit example of such an intricate group of vortices in a diagram reproduced here as figure 3, which he explained in detail on page 477 of JM in the following passage:

Figure 3. Maxwell's network of vortices, from a plate in the 1890 collection JM, facing page 488. The directions of the arrows in two of the hexagonal vortices in the second row from the bottom are incorrect, presumably mistakes of Maxwell's draftsman.

Let AB, Plate VIII., p. 488, fig. 2, represent a current of electricity in the direction from A to B. Let the large spaces above and below AB represent the vortices, and let the small circles separating the vortices represent the layers of particles placed between them, which in our hypothesis represent electricity.

Now let an electric current from left to right commence in AB. The row of vortices gh above AB will be set in motion in the opposite direction to that of a watch. (We shall call this direction +, and that of a watch –.) We shall suppose the row of vortices kl still at rest, then the layer of particles between these rows will be acted on by the row gh on their lower sides, and will be at rest above. If they are free to move, they will rotate in the negative direction, and will at the same time move from right to left, or in the opposite direction from the current, and so form an induced electric current.

That detailed explanation of Maxwell's model appeared in part 2 of his second paper and was published in Philosophical Magazine, volume 21, April–May 1861. Maxwell evidently took his intricate network of vortices very seriously and devoted the remaining 11 pages of part 2 to detailed studies of the model.

Then in January and February 1862, Maxwell published part 3 of his second paper with the title, "The Theory of Molecular Vortices Applied to Statical Electricity." Seven pages of analysis led to his proposition 14: "To correct the equations of electric currents for the effect due to the elasticity of the medium" (JM, page 496). The correction was to add a "displacement current" Ė to Ampère's law, which in modern notation then reads ∇ × H = 4πj + Ė.
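A standard modern way to see why the added term is needed, given here as an aside rather than as Maxwell's own vortex reasoning: the divergence of a curl vanishes, so Ampère's law without Ė would force ∇ · j = 0, while the corrected law is compatible with charge conservation. A short SymPy sketch of the identity, with the rest of the argument in the comments:

import sympy as sp

x, y, z = sp.symbols('x y z')
Hx, Hy, Hz = [sp.Function(n)(x, y, z) for n in ('Hx', 'Hy', 'Hz')]

curl_H = (sp.diff(Hz, y) - sp.diff(Hy, z),
          sp.diff(Hx, z) - sp.diff(Hz, x),
          sp.diff(Hy, x) - sp.diff(Hx, y))

# div(curl H) is identically zero ...
print(sp.simplify(sp.diff(curl_H[0], x) + sp.diff(curl_H[1], y) + sp.diff(curl_H[2], z)))   # 0
# ... so taking the divergence of  curl H = 4*pi*j + dE/dt  gives
# 4*pi*div(j) + d(div E)/dt = 0, which with Gauss's law div E = 4*pi*rho is the
# continuity equation d(rho)/dt + div(j) = 0.  Ampere's law without the dE/dt
# term would instead force div(j) = 0, which fails, e.g., for a charging capacitor.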

I had made several attempts to read the last 11 pages of part 2 and the first 7 pages of part 3, trying to see how Maxwell was led to his correction. In particular, I wanted to learn what he meant by “the effect due to the elasticity of the medium.” All my attempts failed. It is noteworthy that in the last 11 pages ofpart 2, the word “displacement” occurs only once, on page 479, in an unimportant sentence, whereas in the beginning 7 pages of part 3 that word becomes the center of Maxwell’s focus. Thus it seems that in the eight months between the publication of the two parts, Maxwell had found new features of his network of vortices to explore, leading to the displacement current.

After proposition 14 Maxwell quickly concluded that there should be electromagnetic waves. He calculated their velocity, compared it with the known velocity of light, and reached the momentous conclusion that “We can scarcely avoid the inference that light consists in the transverse undulations of the same medium which is the cause of electric and magnetic phenomena” (page 500; the italics are Maxwell’s).
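In modern SI numbers the comparison can be redone in two lines; the constants below are standard textbook values, quoted only for illustration:

import math

# The wave speed predicted by Maxwell's equations, 1/sqrt(mu0*eps0),
# comes out equal to the measured speed of light.
mu0  = 4e-7 * math.pi         # vacuum permeability, H/m (pre-2019 defined value)
eps0 = 8.8541878128e-12       # vacuum permittivity, F/m
print(1.0 / math.sqrt(mu0 * eps0))    # ~2.998e8 m/s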

Maxwell was a religious person. I wonder whether after this momentous discovery he had in his prayers asked for God’s forgiveness for revealing one of His greatest secrets.

The birth of field theory

Maxwell’s third paper, published in 1865, gave rise to what today we call Maxwell’s equations, of which there are four in vector notation. Maxwell used 20 equations: He wrote them in component form and also included equations for dielectrics and electric currents.
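For reference, and in the same schematic notation used above for the displacement-current equation (H for the magnetic field, factors of c and material constants suppressed), the four equations in modern vector form are:

\[
\nabla\cdot\mathbf{E} = 4\pi\rho, \qquad
\nabla\cdot\mathbf{H} = 0, \qquad
\nabla\times\mathbf{E} = -\frac{\partial \mathbf{H}}{\partial t}, \qquad
\nabla\times\mathbf{H} = 4\pi\mathbf{j} + \dot{\mathbf{E}} .
\]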

That third paper is historically the first to give a clear enunciation of the conceptual basis of field theory—that energy resides in the field:

In speaking of the Energy of the field, however, I wish to be understood literally. All energy is the same as mechanical energy, whether it exists in the form of motion or in that of elasticity, or in any other form. The energy in electromagnetic phenomena is mechanical energy. The only question is, Where does it reside? On the old theories it resides in the electrified bodies, conducting circuits, and magnets, in the form of an unknown quality called potential energy, or the power of producing certain effects at a distance. On our theory it resides in the electromagnetic field, in the space surrounding the electrified and magnetic bodies, as well as in those bodies themselves, and is in two different forms, which may be described without hypothesis as magnetic polarization and electric polarization, or, according to a very probable hypothesis, as the motion and the strain of one and the same medium. (JM, page 564)

But, in conformity with the prevalent ideas of the time, Maxwell also wrote,

We have therefore some reason to believe, from the phenomena of light and heat, that there is an aethereal medium filling space and permeating bodies, capable of being set in motion and of transmitting that motion from one part to another, and of communicating that motion to gross matter so as to heat it and affect it in various ways. (JM, page 528)

Maxwell realized the great importance of his discovery of the displacement current and his conclusion that light is electromagnetic waves. In his third paper, he collected the formulas of the two earlier papers and listed them together. In the process he must have reviewed the arguments that had led to those formulas. So after his review, how did he feel about the intricate network of vortices that had led to the displacement current three years earlier? Maxwell did not discuss that point. But we notice that the word "vortex" did not appear in any of the 71 pages of his third paper. It thus seems reasonable to assume that in 1865 Maxwell no longer considered as relevant the network of vortices of his second paper. But he still saw the necessity of "an aethereal medium filling space and permeating bodies."

In 1886 Heinrich Hertz (1857–94) experimentally verified an important consequence of Maxwell’s equations: that electromagnetic waves can be generated by one set of electric circuits and detected by another.

Starting in the mid 1880s Oliver Heaviside (1850–1925) and Hertz independently discovered that one can eliminate from Maxwell’s equations the vector potential A. The simplified equations then have the additional attractive feature of exhibiting a high degree of symmetry between the electric and magnetic fields. We now know that in quantum mechanics, the vector potential cannot be eliminated. It has observable effects as in the Aharonov–Bohm effect.
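A compact way to state that observable role, written here with ħ and with factors of c suppressed (an illustration, not a quotation from the article): the Aharonov–Bohm interference shift depends on A only through the gauge-invariant magnetic flux Φ enclosed between the two paths,

\[
\Delta\varphi \;=\; \frac{e}{\hbar}\oint \mathbf{A}\cdot d\mathbf{l} \;=\; \frac{e}{\hbar}\,\Phi .
\]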

Into the 20th century

A conceptual revolution in field theory came early in the 20th century following Albert Einstein’s 1905 special theory of relativity, which asserted that there is no other medium at all: The electromagnetic field is the medium. The vacuum is then the state of a region of spacetime where there is no electromagnetic radiation and no material particles. That solved the puzzle posed by the 1887 Michelson–Morley experiment, which looked for the aethereal medium but failed to find it. Most physicists today believe Einstein’s motivation in formulating the theory of special relativity was not to solve the puzzle posed by the Michelson–Morley experiment but rather to recognize the correct meaning of the concept of simultaneity.

In the years 1930–32, with the experimental discovery of the positron, it became necessary to drastically modify one’s view of the vacuum and to adopt instead Paul Dirac’s theory of the infinite sea of negative-energy particles. That was another conceptual revolution in field theory, and it culminated in the theory of quantum electrodynamics. QED proved successful for low-order calculations in the 1930s but was beset with infinity-related difficulties in calculations carried out to higher orders.

In a series of brilliant and dramatic experimental and theoretical breakthroughs in the 1947–50 period, QED became quantitatively successful through the method of renormalization—a recipe for calculating high-order corrections. The latest report of the calculated value of the anomalous magnetic moment of the electron,3 a = (g − 2)/2, is in agreement with its experimental value to an incredible accuracy of one part in 10^9 (see the Quick Study by Gerald Gabrielse, Physics Today, December 2013, page 64).
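For a sense of scale, the leading QED term alone, Schwinger's α/2π, already lies within about 0.15% of the measured anomaly; the quoted 1-part-in-10^9 agreement comes from carrying the calculation to much higher order. A two-line illustration with approximate constants:

import math

# Leading-order QED value of the electron anomaly a = (g-2)/2
alpha = 1.0 / 137.035999              # fine-structure constant (approximate)
print(alpha / (2 * math.pi))          # ~0.0011614, vs. measured a ~ 0.0011597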

With the success of the renormalization program of QED and with the experimental discovery of many mesons and strange particles, efforts were made to extend field theory to describe the interactions between all of the new particles. Papers and books appeared on scalar meson theory with vector interaction, pseudoscalar meson theory with pseudoscalar interaction, and other esoteric topics. None of those efforts produced fundamental advances in our conceptual understanding of interactions. There were also enthusiastic supporters of efforts to find alternatives to field theory, but again no real breakthroughs.

Return to field theory

In the 1970s physicists returned to field theory, specifically to non-abelian gauge theory, which was an elegant generalization of Maxwell's theory. The term non-abelian means that the order in which rotations or other operations take place matters. (I discussed gauge theories and other things in "Einstein's impact on theoretical physics," Physics Today, June 1980, page 42. For a more technical discussion, see the article by Isidore Singer, Physics Today, March 1982, page 41.) Gauge theory is today recognized as of fundamental conceptual importance in the structure of interactions in the physical universe. It started with three papers published in 1918–19 by mathematician Hermann Weyl, who was influenced by Einstein's call for the geometrization of electromagnetism. 4

Weyl was motivated by the importance of parallel displacement. He argued that "the fundamental conception on which the development of Riemann's geometry must be based if it is to be in agreement with nature, is that of the infinitesimal parallel displacement of a vector." Weyl then said that if, in the infinitesimal displacement of a vector, its direction keeps changing, then "Warum nicht auch seine Länge?" (Why not also its length?) Thus Weyl proposed a nonintegrable "Streckenfaktor," or "Proportionalitätsfaktor," which he related to the electromagnetic field through

exp(γ ∫ eA_μ dx^μ),          (5)

in which A_μ is the four-dimensional vector potential and the coefficient γ is real. Weyl attached such a stretch factor to every charged object moving through spacetime. To the second of Weyl's three papers, Einstein appended a postscript that criticized Weyl's idea of length changes in displacements. Weyl was unable to effectively respond to this devastating criticism.

After the development of quantum mechanics in 1925–26, Vladimir Fock and Fritz London independently pointed out that in the new quantum framework, it was necessary to replace (p − eA) by

−iħ(∇ − (ie/ħ)A),          (6)

which suggested the replacement in equation (5) of γ eA_μ dx^μ by (i/ħ) eA_μ dx^μ, that is, of γ by i/ħ.

Evidently, Weyl accepted the idea that γ should be imaginary, and in 1929 he published an important paper in which he explicitly defined the concept of gauge transformation in QED and showed that under such a transformation, Maxwell's theory in quantum mechanics is invariant.

Under a gauge transformation, Weyl’s length-change factor is replaced by

exp((i/ħ) ∫ eA_μ dx^μ),          (7)

which evidently should have been called a phase-change factor. The replacement also renders inoperative Einstein’s criticism of Weyl’s idea mentioned above.
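The point can be put in one line: for a real exponent the factor in formula (7) has unit modulus,

\[
\left|\exp\!\left(\frac{i}{\hbar}\int e A_\mu\,dx^\mu\right)\right| \;=\; 1 ,
\]

so it rotates only the phase of the wavefunction and rescales no length, which is why Einstein's objection to Weyl's original real stretch factor no longer applies.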

That Maxwell's equations have a high degree of symmetry was first shown in 1905–07 by Einstein and Hermann Minkowski, who separately discovered the equations' Lorentz invariance. Weyl's 1929 discovery that Maxwell's equations are invariant under gauge transformations revealed another symmetry property of the equations. Today we realize that these symmetry properties make Maxwell's equations a fundamental pillar in the structure of the physical universe.

Weyl’s gauge transformation involves, at every spacetime point, a so-called U(1) rotation—essentially a simple rotation in the complex plane. There is thus a striking similarity between Weyl’s gauge transformation and Maxwell’s network of rotating vortices. The similarity is, of course, fortuitous.

Mathematically, the phase factors of formula 7 form a Lie group U(1), and one of Weyl’s favorite research fields was Lie groups. Going one step further for the more technical reader, had fiber-bundle theory been developed before 1929, Weyl could certainly have realized that electromagnetism was a U(1) bundle theory and would likely have generalized it to non-abelian gauge theory as a natural mathematical extension in 1929.

In the event, the extension was made in 1954, motivated not by mathematical considerations but by the need to find a principle for interactions in the new field of particle physics in which there were found many new “strange” particles. The physical motivation was concisely stated in a short 1954 abstract:

The electric charge serves as a source of electromagnetic field; an important concept in this case is gauge invariance which is closely connected with (1) the equation of motion of the electromagnetic field, (2) the existence of a current density, and (3) the possible interactions between a charged field and the electromagnetic field. We have tried to generalize this concept of gauge invariance to apply to isotopic spin conservation. 5

That extension led to a non-abelian field theory that was very beautiful but was not embraced by the physics community for many years because it seemed to require the existence of massless charged particles.

To give mass to the massless particles in a non-abelian field theory, the concept of spontaneous symmetry breaking was introduced in the 1960s. That concept in turn led to a series of major advances, and finally to a U(1) × SU(2) × SU(3) gauge theory of electroweak interactions and strong interactions that we now call the standard model. In the fifty-some years since 1960, the international theoretical and experimental research community working in "particles and fields" combined their individual and collective efforts to develop and verify the standard model. Those efforts met with spectacular success, climaxing in the discovery of the Higgs boson in 2012 by two large experimental groups at CERN, each consisting of several thousand physicists (see Physics Today, September 2012, page 12).

Despite its impressive success, the standard model is not the final story. To start with, dozens of constants need to enter the model. More important, one of its chief ingredients, the symmetry-breaking mechanism, is a phenomenological construct that in many respects is similar to Fermi’s proposed “four- ψ interaction” to explain beta decay. 6 That 1934 theory was also successful for almost 40 years. But it was finally replaced by the deeper U(1) × SU(2) electroweak theory.

Gauge freedom was explicitly known to Thomson and Maxwell in the 1850s. It probably had also been vaguely sensed by Faraday in his elusive formulation of the electrotonic state. The gauge freedom was converted by Weyl in 1929 to a symmetry (or invariant) property of Maxwell’s equations in quantum mechanics.That symmetry property, now called gauge symmetry, forms the structural backbone of the standard model today.

Maxwell's equations are linear. In non-abelian gauge theory, the equations are nonlinear. The nonlinearity arises conceptually from the same origin as the nonlinearity of the equations of general relativity. About the latter nonlinearity Einstein had written,

We shall speak only of the equations of the pure gravitational field.

The peculiarity of these equations lies, on the one hand, in their complicated construction, especially their non-linear character as regards the field-variables and their derivatives, and, on the other hand, in the almost compelling necessity with which the transformation-group determines this complicated field-law. (reference 7 , page 75)

The true laws can not be linear nor can they be derived from such. (page 89)

Entirely independent of developments in physics, there emerged during the first half of the 20th century a mathematical theory called fiber-bundle theory, which had diverse conceptual origins, including differential forms (mainly due to Élie Cartan), statistics (Harold Hotelling), topology (Hassler Whitney), global differential geometry (Shiing-Shen Chern), and connection theory (Charles Ehresmann). The great diversity of its conceptual origin indicates that the fiber bundle is a central mathematical construct.

It came as a great shock to both physicists and mathematicians in the 1970s that the mathematics of gauge theory, both abelian and non-abelian, turned out to be exactly the same as that of fiber-bundle theory. 8 But it was a welcome shock because it served to bring back the close relationship between the two disciplines, a relationship that had been interrupted by the increasingly abstract nature of mathematics since the middle of the 20th century.

In 1975, after learning the rudiments of fiber-bundle theory from my mathematician colleague James Simons, I showed him the 1931 paper by Dirac on the magnetic monopole. He exclaimed, "Dirac had discovered trivial and nontrivial bundles before mathematicians."

It is perhaps not inappropriate to conclude this brief sketch of the conceptual origin of gauge theory by quoting a few paragraphs from Maxwell's tribute upon Faraday's death in 1867:

The way in which Faraday made use of his idea of lines of force in co-ordinating the phenomena of magneto-electric induction shows him to have been in reality a mathematician of a very high order—one from whom the mathematicians of the future may derive valuable and fertile methods… .

From the straight line of Euclid to the lines of force of Faraday this has been the character of the ideas by which science has been advanced, and by the free use of dynamical as well as geometrical ideas we may hope for further advance… .

We are probably ignorant even of the name of the science which will be developed out of the materials we are now collecting, when the great philosopher next after Faraday makes his appearance.

REFERENCES

1. F. A. J. L. James, ed., The Correspondence of Michael Faraday, Vol. 1, Institution of Electrical Engineers (1991), p. 287.

2. J. Larmor, Proc. Cambridge Philos. Soc. 32, 695 (1937), p. 697. http://dx.doi.org/10.1017/S0305004100019472

3. T. Kinoshita, in Proceedings of the Conference in Honour of the 90th Birthday of Freeman Dyson, K. K. Phua et al., eds., World Scientific (2014), p. 148.

4. For this and the ensuing history, see C. N. Yang, in Hermann Weyl, 1885–1985: Centenary Lectures, K. Chandrasekharan, ed., Springer (1986), p. 7; and A. C. T. Wu, C. N. Yang, Int. J. Mod. Phys. A 21, 3235 (2006). http://dx.doi.org/10.1142/S0217751X06033143

5. C. N. Yang, R. Mills, Phys. Rev. 95, 631 (1954).

6. An English translation of Fermi's original paper is available in F. L. Wilson, Am. J. Phys. 36, 1150 (1968). http://dx.doi.org/10.1119/1.1974382

7. P. A. Schilpp, ed., Albert Einstein: Philosopher-Scientist, Open Court (1949). The two passages are English translations of Einstein's autobiographical notes written in 1946 when Einstein was 67 years old.

8. T. T. Wu, C. N. Yang, Phys. Rev. D 12, 3845 (1975). http://dx.doi.org/10.1103/PhysRevD.12.3845

Chen Ning Yang is the honorary director of the Institute for Advanced Study at Tsinghua University in Beijing and a Distinguished Professor-at-Large in the physics department at the Chinese University of Hong Kong.

Professor Yang's talk was published in the November 2014 issue of Physics Today; Dr. Shi Yu in China has also introduced it in a blog post.

The Current State of Mainstream Dark Matter Research

李小坚  龚天任

This article briefly reviews the current state of dark matter research and understanding in mainstream physics.

I. Introduction

The recent good news about data from the Wukong (DAMPE) satellite set off a celebration in the Chinese scientific community. The media reports share two points of consensus:

First, the existence of dark matter is beyond doubt, and the ratio of dark matter to visible matter is roughly 5 to 1.

Second, so far nobody knows the answer to the question: what is dark matter?

Yes, both of these points are essentially correct.
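A rough consistency check of that 5-to-1 figure, using the approximate, often-quoted Planck 2015 composition (the numbers in this sketch are assumptions for illustration, not DAMPE results):

# Rough check of the "about 5 to 1" ratio
dark_energy, dark_matter, ordinary = 0.685, 0.265, 0.049
print(dark_matter / ordinary)                  # ~5.4
print(dark_energy + dark_matter + ordinary)    # ~1.0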

What is dark matter? Nobody in mainstream physics knows the answer!

Bai Chunli, President of the Chinese Academy of Sciences, introduces dark matter in this video:
"What is dark matter? The CAS president tells you" (NetEase News): http://news.163.com/17/1130/11/D4G3RP0I00018AOQ.html

As Academician Bai Chunli explains, this is a big question that baffles the world's mainstream physicists and, even more, the general public everywhere.

Eventually, the global scientific community will use scientific methods to eliminate the suspected dark matter candidates one by one and establish what dark matter is not.

Indeed, mainstream science has already achieved a great deal on "what dark matter is not." But the key question is "what is dark matter?", and that question demands a theoretical breakthrough.

So one can see online that the worldwide scientific community is working hard to uncover the answer. "What dark matter is not" and "what is dark matter?" have become hot questions for everyone.

Let us now look at how the international mainstream physics community is exploring the dark matter problem.

II. Mainstream physics' understanding of dark matter

1. Known dark matter

The international mainstream recognizes two known kinds of dark matter: 1) neutrinos, and 2) black holes.

However, these two known kinds cannot account for all of the dark matter; that is, something else must also be playing the dark matter role. In fact, in many astrophysical surveys these two known components make up only a very small percentage of the total dark matter (less than 1%); see the 2017 Dark Energy Survey results.

2. Black holes first

Last year and this year, LIGO repeatedly detected the mergers of binary black holes, suggesting that the density of black holes in the universe is high; the "black holes as dark matter" hypothesis therefore seemed to come back to life.
There are two ways the universe produces black holes.

First, a black hole can be the remnant of a star. We now understand this stellar evolution very clearly, and we can estimate the number of such black holes in each galaxy. The Milky Way has roughly 3 million of them, with an average mass of about 10 solar masses. The dark matter in these 3 million black holes amounts to only about 0.001% of the Milky Way's total mass. Clearly, this kind of black hole cannot be the dark matter candidate for the whole universe.
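An order-of-magnitude check of that percentage (the total Milky Way mass used below is an assumed round number, not a figure from the text):

# Stellar black hole contribution to the Milky Way's mass budget
n_bh, m_bh, m_galaxy = 3e6, 10.0, 1e12     # counts and solar masses (assumed total)
print(100 * n_bh * m_bh / m_galaxy)        # ~0.003 percent, i.e. the 0.001% level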

Second, there may be primordial black holes produced during the Big Bang, carrying masses anywhere from about 0.1 to a billion solar masses. We do not know their abundance, so in principle they could account for all of the universe's dark matter. Analysis of the LIGO data, however, gives a negative answer: they do not.
Primordial black holes cannot explain all of the universe's dark matter either; something else must be playing the dark matter role.

See: primordial black holes, dark matter, and gravitational lensing of Type Ia supernovae (https://arxiv.org/abs/1712.02240).

3. Other dark matter candidates, including neutrinos
In mainstream physics there are essentially two remaining classes of dark matter candidates: 1) candidates not based on particles, such as modified gravity (MOND); and 2) particle-based candidates: some unknown particle such as a weakly interacting massive particle (WIMP), a sterile neutrino, an axion, a dark photon, and so on.

The LIGO binary neutron star merger announced on October 16, 2017 all but eliminates the evidence for MOND. See "GW170817 and dark matter emulators" (https://arxiv.org/abs/1710.06168).

In addition, the data from China's Wukong (DAMPE) satellite have not found any evidence connected with MOND. So there is at present no result supporting MOND.

The focus of mainstream searches is therefore on particle-based dark matter candidates.

III. Particle-based dark matter searches

Here we review the particle-based dark matter searches carried out by mainstream physics and the routes they take.

1. The LHC, up to about 2 TeV, has excluded all SUSY particles. WIMPs have also been excluded by direct-detection experiments (e.g., LUX and PandaX, 2017), whose searches are now very close to the neutrino floor.

2. The latest astronomical data have all but ruled out sterile neutrinos.

Moreover, the latest Big Bang nucleosynthesis (BBN) analyses essentially exclude this as dark matter: the BBN fits indicate that neutrinos are Dirac fermions (with no massive partner). If neutrinos were Majorana particles (which would require a hidden massive partner such as a sterile neutrino), BBN would not fit the observations.
See: https://arxiv.org/pdf/1709.01211.pdf.
The MINOS and MINOS+ plus reactor experiments exclude sterile neutrinos (https://arxiv.org/abs/1710.06488);
and the recent LIGO neutron star merger constrains the neutrino parameter space (https://arxiv.org/abs/1710.06370).

3. The axion hypothesis is excluded.

4. Searches have covered all the places an unknown particle could hide, excluding weakly interacting massive particles (WIMPs), including asymmetric dark matter; see the PICO-60 data.

5. No dark photon has been found:
http://newscenter.lbl.gov/2017/11/08/scientists-narrow-search-dark-photon-dark-matter/

"The signature of a dark photon in the detector is extremely simple: one high-energy photon and no other activity."

Dark photons have also been invoked to explain the difference between the observed spin property (g−2) of the muon and its Standard Model prediction.

The latest result: "these dark photon theories, as an explanation of the g−2 anomaly, are effectively closed off by the BaBar constraint."

A Japanese experiment similar to an upgraded BaBar, called Belle II, will begin running next year. "Ultimately, Belle II will produce 100 times the statistics of BaBar."

Furthermore, the previously hypothesized cold dark matter (ΛCDM, CDM+baryons), warm dark matter (WDM), and self-interacting dark matter (SIDM) had essentially been ruled out by 2014; these are discarded, outdated dark matter candidates.

More detailed references and data on particle-based dark matter searches are listed in the appendix.

IV. Similar experiments and theoretical considerations

The Wukong (DAMPE) experiment is similar to Samuel Ting's Alpha Magnetic Spectrometer (AMS-02), but Wukong has higher sensitivity and detection capability. Even so, the AMS-02 experience can offer some guidance for analyzing the Wukong data.

Two things can be seen from AMS-02 (2013 and 2015):

1. An excess of positrons and antiprotons.

2. A sharp downturn in the data (especially for positrons).

However, the possibility that this positron excess and sharp downturn are produced by dark matter (DM) decay has been excluded.
Likewise, the AMS-02 antiproton excess can be explained by known cosmic processes
(see https://home.cern/about/updates/2017/03/cosmic-collisions-lhcb-experiment),
so the AMS-02 antiproton data also rule out a dark matter origin.

There are many reasons to discount the downturn in the AMS-02 data. The most important is that the theoretical basis on which AMS-02 hoped to find a dark matter candidate was SUSY, and all SUSY particles up to 2 TeV have now been excluded. AMS-02's chance of discovering dark matter is therefore vanishingly small; one can say it was bound to fail.

So although Wukong has found a feature at higher energy (1.4 TeV) than the AMS-02 data, it will not be able to go beyond and shake off the known SUSY constraints unless it is interpreted with a theory based on a new, non-SUSY candidate.

In other words, even if Wukong's newly reported bump survives statistical analysis and confirmation, we will still need a new theory to explain such an anomaly. One example is Fermi's mysterious gamma-ray signal: a dark matter annihilation origin has essentially been ruled out, and millisecond pulsars were found to be the source. See "Evidence for Unresolved Gamma-Ray Point Sources in the Inner Galaxy" (February 3, 2016, https://arxiv.org/abs/1506.05124).

V. The final theoretical test

As we congratulate Wukong on its achievement, we must urge Chinese theoretical physicists to keep working, overtime if necessary, to find a new theoretical foundation to explain this new discovery, rather than explaining it with SUSY.

The composition of the universe has now been pinned down by the Dark Energy Survey and by the Planck CMB data (2013 and 2015); see the figures above and below.

That is, a new dark matter theory must reproduce these observed conclusions; this is the final test for any new theory of dark matter.

VI. Concluding remarks

Whatever the dark matter theory, it must match the objective observational data of this universe. That is the touchstone for testing such a scientific theory.

The objective data on this universe's dark matter, dark energy, cosmological constant, fine-structure constant, and more, gathered by the Wukong satellite, AMS-02, and future scientific instruments, will push humanity's understanding of the universe toward something deeper and more thorough, perhaps even a complete, revolutionary renewal.

The two dark clouds hanging over the physics of the twenty-first century are bound to dissipate.

References:

1. Cai Rong-Gen and Zhou Yu-Feng, "New progress in dark matter and dark energy research," Institute of Theoretical Physics, Chinese Academy of Sciences, Beijing 100190; China Basic Science (review), 2010, No. 3. DOI: 10.3969/j.issn.1009-2412.2010.03.001

Note 1: Recently, a talk by Academician Cai Rong-Gen indicated that there are over a hundred dark matter models!

But the question is: which of them was proposed by a Chinese scientist, and what verification does it have?

Appendix 1:

On August 6, 2016, in the Great Hall of the People, I had a brief conversation with Academician Bai Chunli, President and Party Secretary of the Chinese Academy of Sciences and an old friend from the All-China Youth Federation. I told him we had obtained important results, wrote down our website, www.pptv1.com, and asked him to follow it. We had attended many meetings together in the science group of the All-China Youth Federation.

August 6: exchanging views with Academician Bai Chunli, President and Party Secretary of CAS and an old friend from the All-China Youth Federation
Appendix 2:

* Exclusions from the LHC. https://arxiv.org/abs/1709.02304 and https://arxiv.org/abs/1510.01516

* Exclusions from Xenon-100 https://arxiv.org/abs/1709.02222

* Exclusions of Charming Dark Matter theories. https://arxiv.org/abs/1709.01930

* Theodorus Maria Nieuwenhuizen “Subjecting dark matter candidates to the cluster test” (October 3, 2017, see https://arxiv.org/abs/1710.01375 ):

Galaxy clusters, employed by Zwicky to demonstrate the existence of dark matter, pose new stringent tests. If merging clusters demonstrate that dark matter is self-interacting with cross section σ/m∼2 cm2/gr, MACHOs, primordial black holes and light axions that build MACHOs are ruled out as cluster dark matter. Recent strong lensing and X-ray gas data of the quite relaxed and quite spherical cluster A1835 allow to test the cases of dark matter with Maxwell-Boltzmann, Bose-Einstein and Fermi-Dirac distribution, next to Navarro-Frenck-White profiles. Fits to all these profiles are formally rejected at over 5σ, except in the fermionic situation. The interpretation in terms of (nearly) Dirac neutrinos with mass of 1.61+0.19−0.30 eV/c2 is consistent with results on the cluster A1689, with the WMAP, Planck and DES dark matter fractions and with the nondetection of neutrinoless double β-decay. The case will be tested in the 2018 KATRIN experiment.

A variety of searches for sterile neutrinos have also ruled out this possibility in the relevant mass range. See, e.g., https://arxiv.org/abs/1710.06488 and http://iopscience.iop.org/article/10.1088/1742-6596/718/3/032008/pdf

* Exclusions for Axion Dark Matter: Renée Hlozek, David J. E. Marsh, Daniel Grin “Using the Full Power of the Cosmic Microwave Background to Probe Axion Dark Matter” (August 18, 2017, see https://arxiv.org/abs/1708.05681 ).

* Combined direct dark matter detection exclusions. https://arxiv.org/abs/1708.04630 and https://arxiv.org/abs/1707.01632

* Exclusions based on non-detection of annihilations in dwarf galaxies. https://arxiv.org/abs/1708.04858

* Primordial black hole exclusions. https://arxiv.org/abs/1301.4984

* Daniele Gaggero, et al., “Searching for Primordial Black Holes in the radio and X-ray sky” (see https://arxiv.org/abs/1612.00457 ). Abstract:

We model the accretion of gas on to a population of massive primordial black holes in the Milky Way, and compare the predicted radio and X-ray emission with observational data. We show that under conservative assumptions on the accretion process, the possibility that O(10) M⊙ primordial black holes can account for all of the dark matter in the Milky Way is excluded at 4σ by a comparison with the VLA radio catalog at 1.4 GHz, and at more than 5σ by a comparison with the NuSTAR X-ray catalog (10 – 40 keV). We also propose a new strategy to identify such a population of primordial black holes with more sensitive future radio and X-ray surveys.

* Tight Warm Dark Matter parameter exclusions, https://arxiv.org/pdf/1704.01832.pdf

* More Warm Dark Matter parameter exclusions: Simon Birrer, Adam Amara, and Alexandre Refregier, "Lensing substructure quantification in RXJ1131-1231: A 2 keV lower bound on dark matter thermal relict mass" (January 31, 2017, see https://arxiv.org/abs/1702.00009 ).

We study the substructure content of the strong gravitational lens RXJ1131-1231through a forward modelling approach that relies on generating an extensive suite of realistic simulations. The statistics of the substructure population of halos depends on the properties of dark matter. We use a merger tree prescription that allows us to stochastically generate substructure populations whose properties depend on the dark matter particle mass. These synthetic halos are then used as lenses to produce realistic mock images that have the same features, e.g. luminous arcs, quasar positions, instrumental noise and PSF, as the data. By analyzing the data and the simulations in the same way, we are able to constrain models of dark matter statistically using Approximate Bayesian Computing (ABC) techniques. This method relies on constructing summary statistics and distance measures that are sensitive to the signal being targeted. We find that using the HST data for \RXJ we are able to rule out a warm dark matter thermal relict mass below 2 keV at the 2 sigma confidence level.

* Paolo Salucci and Nicola Turini, “Evidences for Collisional Dark Matter In Galaxies?” (July 4, 2017, see https://arxiv.org/abs/1707.01059 ). Abstract:

The more we go deep into the knowledge of the dark component which embeds the stellar component of galaxies, the more we realize the profound interconnection between them. We show that the scaling laws among the structural properties of the dark and luminous matter in galaxies are too complex to derive from two inert components that just share the same gravitational field. In this paper we review the 30 years old paradigm of collisionless dark matter in galaxies. We found that their dynamical properties show strong indications that the dark and luminous components have interacted in a more direct way over a Hubble Time. The proofs for this are the presence of central cored regions with constant DM density in which their size is related with the disk length scales. Moreover we find that the quantity ρDM(r,L,RD)ρ⋆(r,L,RD) shows, in all objects, peculiarities very hardly explained in a collisionless DM scenario.

* Dark matter distributions have to closely track baryon distributions, even though there is no viable mechanism to do so: Edo van Uitert, et al., "Halo ellipticity of GAMA galaxy groups from KiDS weak lensing" (October 13, 2016, see https://arxiv.org/abs/1610.04226 ).

* One of the more successful recent efforts to reproduce the baryonic Tully-Fischer relation with CDM models is L.V. Sales, et al., "The low-mass end of the baryonic Tully-Fisher relation" (February 5, 2016, see https://arxiv.org/abs/1602.02155 ). It explains:

[T]he literature is littered with failed attempts to reproduce the Tully-Fisher relation in a cold dark matter-dominated universe. Direct galaxy formation simulations, for example, have for many years consistently produced galaxies so massive and compact that their rotation curves were steeply declining and, generally, a poor match to observation. Even semi-analytic models, where galaxy masses and sizes can be adjusted to match observation, have had difficulty reproducing the Tully-Fisher relation, typically predicting velocities at given mass that are significantly higher than observed unless somewhat arbitrary adjustments are made to the response of the dark halo.

The paper manages to simulate the Tully-Fisher relation only with a model that has sixteen parameters carefully “calibrated to match the observed galaxy stellar mass function and the sizes of galaxies at z = 0” and “chosen to resemble the surroundings of the Local Group of Galaxies”, however, and still struggles to reproduce the one parameter fits of the MOND toy-model from three decades ago. Any data set can be described by almost any model so long as it has enough adjustable parameters.

* Dark matter can’t explain bulge formation in galaxies: Alyson M. Brooks, Charlotte R. Christensen, “Bulge Formation via Mergers in Cosmological Simulations” (12 Nov 2015, see https://arxiv.org/abs/1511.04095 ).

We also demonstrate that it is very difficult for current stellar feedback models to reproduce the small bulges observed in more massive disk galaxies like the Milky Way. We argue that feedback models need to be improved, or an additional source of feedback such as AGN is necessary to generate the required outflows.

* Baryon effects can't save cold dark matter models. https://arxiv.org/abs/1706.03324

* Cold dark matter models don't explain the astronomy data. https://arxiv.org/pdf/1305.7452v2.pdf

Evidence that Cold Dark Matter (ΛCDM), CDM+ baryons and its proposed tailored cures do not work in galaxies is staggering, and the CDM wimps (DM particles heavier than 1 GeV) are strongly disfavoured combining theory with galaxy astronomical observations.

* As of 2014, a review article ruled out pretty much all cold dark matter models except “warm dark matter” (WDM) (at a keV scale mass that is at the bottom of the range permitted by the lamdaCDM model) and “self-interacting dark matter” (SIDM) (which escapes problems that otherwise plague cold dark matter models with a fifth force that only acts between dark matter particles requiring at least a beyond the Standard Model fermion and a beyond the Standard Model force carried by a new massive boson with a mass on the order of 1-100 MeV). Alyson Brooks, “Re-Examining Astrophysical Constraints on the Dark Matter Model” (July 28, 2014, see https://arxiv.org/abs/1407.7544 ). As other more recent links cited here note, collisionless WDM and pretty much all SIDM models have since been ruled out.

* Proposed warm dark matter annihilation signals also turned out to be false alarms. https://arxiv.org/abs/1408.1699  and https://arxiv.org/abs/1408.4115 .

* The bounds on the minimum dark matter mean lifetime of 3.57×10^24 seconds. This is roughly 10^17 years. By comparison the age of the universe is roughly 1.38 x 10^10 years. This means that dark matter (if it exists) is at least as stable as anything other than a proton, which has an experimentally determined mean lifetime of at least 10^33 years. https://arxiv.org/abs/1504.01195 . This means that all dark matter candidates that are not perfectly stable or at least metastable are ruled out. Decaying dark matter and dark matter with any significant annihilation cross section are inconsistent with observation.
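Converting the quoted bound to years, as a simple unit-conversion check:

# Lower bound on the dark matter mean lifetime, in years
seconds_per_year = 365.25 * 24 * 3600      # ~3.16e7 s
print(3.57e24 / seconds_per_year)          # ~1.1e17 years, vs. a cosmic age of ~1.38e10 years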

* Torsten Bringmann, et al., “Strong constraints on self-interacting dark matter with light mediators” (December 2, 2016, see https://arxiv.org/abs/1612.00845). Abstract:

Coupling dark matter to light new particles is an attractive way to combine thermal production with strong velocity-dependent self-interactions. Here we point out that in such models the dark matter annihilation rate is generically enhanced by the Sommerfeld effect, and we derive the resulting constraints from the Cosmic Microwave Background and other indirect detection probes. For the frequently studied case of s-wave annihilation these constraints exclude the entire parameter space where the self-interactions are large enough to address the small-scale problems of structure formation.

The conclusion of the paper notes that:

Models of DM with velocity-dependent self-interactions have recently received a great deal of attention for their potential to produce a number of interesting effects on astrophysical scales. We have shown in this Letter that these models face very strong constraints from the CMB and DM indirect detection. In the most natural realization of this scenario with a light vector mediator with kinetic mixing, these constraints rule out the entire parameter space where the self-scattering cross section can be relevant for astrophysical systems. These bounds remain highly relevant for a number of generalizations of the scenario, such as a different dark sector temperature and different mediator branching ratios. Clearly, future efforts to develop particle physics models for SIDM need to address these issues in order to arrive at models that provide a picture consistent with all observations in cosmology, astrophysics and particle physics.

* Dark photon parameter space (the carrier boson of the SIDM models) is also tightly constrained and all but ruled out. Yet, the properties a dark photon has to have, if there is one, are tightly experimentally established based upon cluster dynamics. https://arxiv.org/abs/1504.06576 .

* The Bullet Cluster is a huge problem for DM. Jounghun Lee, Eiichiro Komatsu, "Bullet Cluster: A Challenge to LCDM Cosmology" (May 22, 2010, see https://arxiv.org/abs/1003.0939 ). Later published in Astrophysical Journal 718 (2010) 60-65. Abstract:

To quantify how rare the bullet-cluster-like high-velocity merging systems are in the standard LCDM cosmology, we use a large-volume 27 (Gpc/h)^3 MICE simulation to calculate the distribution of infall velocities of subclusters around massive main clusters. The infall-velocity distribution is given at (1-3)R_{200} of the main cluster (where R_{200} is similar to the virial radius), and thus it gives the distribution of realistic initial velocities of subclusters just before collision. These velocities can be compared with the initial velocities used by the non-cosmological hydrodynamical simulations of 1E0657-56 in the literature. The latest parameter search carried out recently by Mastropietro and Burkert showed that the initial velocity of 3000 km/s at about 2R_{200} is required to explain the observed shock velocity, X-ray brightness ratio of the main and subcluster, and displacement of the X-ray peaks from the mass peaks. We show that such a high infall velocity at 2R_{200} is incompatible with the prediction of a LCDM model: the probability of finding 3000 km/s in (2-3)R_{200} is between 3.3X10^{-11} and 3.6X10^{-9}. It is concluded that the existence of 1E0657-56 is incompatible with the prediction of a LCDM model, unless a lower infall velocity solution for 1E0657-56 with < 1800 km/s at 2R_{200} is found.

* Garry W. Angus and Stacy S. McGaugh, "The collision velocity of the bullet cluster in conventional and modified dynamics" (September 2, 2007, see https://arxiv.org/abs/0704.0381 ), published at MNRAS.

We consider the orbit of the bullet cluster 1E 0657-56 in both CDM and MOND using accurate mass models appropriate to each case in order to ascertain the maximum plausible collision velocity. Impact velocities consistent with the shock velocity (~ 4700km/s) occur naturally in MOND. CDM can generate collision velocities of at most ~ 3800km/s, and is only consistent with the data provided that the shock velocity has been substantially enhanced by hydrodynamical effects.

* El Gordo poses similar problems for dark matter models. Sandor M. Molnar, Tom Broadhurst. “A HYDRODYNAMICAL SOLUTION FOR THE “TWIN-TAILED” COLLIDING GALAXY CLUSTER “EL GORDO”, see https://arxiv.org/abs/1405.2617. The Astrophysical Journal, 2015; 800 (1): 37 DOI: 10.1088/0004-637X/800/1/37

* Axion fuzzy dark matter ruled out: Vid Iršič, Matteo Viel, Martin G. Haehnelt, James S. Bolton, George D. Becker, "First Constraints on Fuzzy Dark Matter from Lyman-α Forest Data and Hydrodynamical Simulations," see https://arxiv.org/abs/1703.04683 . Physical Review Letters, 2017; 119 (3) DOI: 10.1103/PhysRevLett.119.031302

 

What is science? Must science be verified? Must a theory be verifiable before it can be called science?

李小坚

This is the question posed at https://www.wukong.com/answer/6494858328659722510/?isRedirect=1

 

Science is the activity of discovering correct scientific theories; it is a process of pursuing truth. The most authoritative Chinese definition reads: "Science is the system of knowledge concerning nature, society, and thought" (see the Chinese encyclopedia Cihai). Obviously, this knowledge system still contains many unverified things, and pseudoscience may well be hiding inside it.

A somewhat fuller definition: science is a system of theory and knowledge that correctly reflects the objective world; it is the theoretical generalization and systematic summary of humanity's practical experience in understanding and transforming the world.

In our view, a scientific theory must pass the most rigorous verification humanly possible and the most careful mathematical and logical scrutiny before it can be established as a correct scientific theory. Then pseudoscience will have nowhere to hide.

Karl Popper (Sir Karl Raimund Popper, 1902-1994) was one of the most influential Western philosophers of the modern era, but the falsifiability paradigm he defined for science is no longer adequate.

Popper's definition of science has been broken and abandoned by the new scientific systems of the twenty-first century.

A scientific theory can begin with axiomatic propositions, proceed by strict and precise logical deduction, and then be checked by rigorous and precise experimental verification, thereby becoming a new scientific paradigm. Different theories of the same problem are compared and compete; the wrong ones are eliminated, and the construction of a perfect scientific theory is thereby established.

Summarizing the experience of our predecessors, we introduce for the first time a new standard for testing the correctness of a scientific theory: perfection. See "A Model of Theoretical Innovation in the Physical World," part one of Innovative Physics, http://www.pptv1.com/?p=9 (partially copied below).

Perfection is embodied in the following:

1. Simplicity: the great way is simple and clear; the simpler, the more beautiful.

2. Precision: accurate and exact, with calculations that can in principle be carried to any number of decimal places.

3. Explanatory power: not only logically sound, but revealing the essence of things.

4. Predictive power: valid not only under present conditions but also in the future, able to predict and foresee.

5. Unification: able to unify different, even contradictory, theories and descriptions into one coherent whole.

6. Inclusiveness: accommodating the existing theories and respecting the reasons and historical background from which they arose, while standing higher, seeing farther, and covering more.

7. Advancement: surpassing and renewing the old theories, keeping pace with the times.

8. Independence: not relying on any school or theory, but establishing a completely independent new theoretical system.

9. Indispensability: the power of indispensability.

Without innovative physics, the perplexing problems could not be solved; that is its indispensable, irreplaceable, unprecedented power.

10. Consistency and completeness: the highest state of a theory, with consistent logic and reasoning and no contradictions in its results, and at the same time complete, with no loopholes and no paradoxes: comprehensive, whole, ultimate.

If a newly created scientific theory possesses the ten strengths and properties above, then it bears the marks of a perfect theory and comes closer to objective truth.

Such a theory must then face even harsher tests in the scientific world, such as further ruling out Occam-style coincidences, passing theoretical and axiomatic checks, and conforming to the law of theoretical innovation. Only then can it become genuine human knowledge and a widely accepted account of truth. See [14]: "On the correctness of scientific methods," http://www.pptv1.com/?p=324 (partially copied below).

On the correctness of scientific theories and scientific methods

What is the correct scientific method? Is there a correct scientific methodology? Is the standard for testing the correctness of a scientific theory unique? These questions trouble the scientific community and have not been well resolved.

From the Renaissance (the fourteenth century) to the present day, science has been defined as a three-step process: an interplay of hypothesis, theory, and verification (H-T-V).

S1: Hypothesis (H), a hypothesis or guess.

S2: Theory (T), a model whose construction yields predictions.

S3: Verification (V), testing those predictions through experiments.

In the course of modern science, the H-T-V process has weeded out many unscientific theories and laid a solid foundation for science. Yet it has also let some wrong theories through, such as the Thomson atomic model (http://www.britannica.com/science/thomson-atomic-model). Still, by iterating the H-T-V process, science acquires the ability to correct itself. It looks like a perfect methodology.

Of course, it cannot be a perfect methodology. When a prediction is verified, the corresponding hypothesis is taken to be correct, and so the hypothesis is confirmed. But the failure of a prediction does not logically establish that its hypothesis is wrong, because the test may well have gone beyond the intended test space or the range of the experimental setup. Moreover, if a theory has a very large parameter space, it may be impossible to test in practice. The best example is a "God theory," whose predictions cover every possibility, that is, an infinitely large parameter space. The nature of some theories therefore makes it impossible for any criticism or experiment to verify them right away, or even in principle. For such "god-like theories," H-T-V is completely useless.

To overcome this difficulty, Karl Popper insisted that a theory which cannot be falsified is not, in principle, a scientific theory. The strength of a scientific theory lies in its being readily falsifiable and yet becoming established under continual criticism and questioning. This is the so-called demarcation problem in the philosophy of science, and this is Popperian philosophy.

Under the Popperian principle, a theory must first be determined to be falsifiable by some calibrating procedure before it can be submitted to the H-T-V testing process. Many great theories, nearly unfalsifiable under the technology of their day, were consequently set aside for years, often for as long as half a century.

Today the problem has become worse. Experimental technology can no longer keep up with the development of theoretical thinking; many great physical theories can only be falsified with technology beyond what exists today. Some theories are inherently unfalsifiable, for example axiomatic postulates and principles held as absolute truths.

Should we throw away all unfalsifiable theories? Should we stop all theoretical physics research that goes beyond today's technological conditions? Or should we re-evaluate Popperian philosophy?

Western science has long since moved beyond Popperian philosophy and has reflected on it deeply, for example at the conference "Why Trust a Theory? Reconsidering Scientific Methodology in Light of Modern Physics," held in Munich, Germany, on December 7-9, 2015. Many distinguished physicists and philosophers attended and discussed this very important question. See: http://www.whytrustatheory2015.philosophie.uni-muenchen.de/index.html

The Chinese scientific community, by contrast, still embraces only Popperian philosophy, strongly doubts theoretical innovation, and holds that scientific theory cannot be built on conjectural hypotheses, accepting only a seeing-is-believing methodology. More than thirty years ago China had a great debate on "practice as the sole criterion for testing truth," which shook and influenced every part of society. The problems facing Chinese science today are even more complex and severe than they were then, with more restrictions on and prejudice against theoretical innovation. Without freeing our minds, opening our hearts, and daring to try and explore, there can be no innovation in the true sense; resorting to pre-Renaissance silencing, suppression, and persecution runs directly against the development of science.

Beyond the limits of present-day experimental capability and technology, there is now a huge gulf between theoretical innovation, logical reasoning, and experimental verification. Another fatal flaw of current science is this: a theory that passes H-T-V verification 100% and passes the Popperian test 100% can still be a wrong theory. The most important example is the Standard Model of elementary particles: it has passed every step of H-T-V and every Popperian requirement, yet it remains a local theory rather than a globally correct one. General relativity has the same problem: it now looks like a great and correct theory, but as deeper questioning arrives it will be found to be a very incomplete, local theory.

The epistemology of G-string physics, on the other hand, stands at a greater height and with a wider view, beyond the constraints of the Popperian school and beyond the level of scientific theory called for by the Munich conference.

Starting from a first principle, it develops an axiomatic theoretical system. The theory can describe the creation of the universe, the particle structure of the Standard Model, the unified force equation, the uncertainty principle, and more, and it can calculate the cosmological constant, the fractions of dark matter, dark energy, and visible matter, the Cabibbo angle, the Weinberg angle, the fine-structure constant, the precise mass of the Higgs-like particle, and so on. In recent years, global observational data have been converging toward the values calculated by Gong physics.

For example, the theory predicts:

a new vacuum boson of mass VB = (1/2) vev + 1% of vev = 125.46 GeV.
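The arithmetic behind the quoted formula, assuming the electroweak vacuum expectation value vev ≈ 246 GeV (that input value is an assumption of this note, not stated in the text):

# Arithmetic check of VB = (1/2) vev + 1% of vev
vev = 246.0
print(0.5 * vev + 0.01 * vev)   # 125.46 (GeV)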

The 125.4 GeV particle produced at CERN's Large Hadron Collider in 2012 is exactly this. Unfortunately, it was identified as the Higgs boson, for no good reason!

The 2013 Nobel Prize in Physics was awarded for the Higgs boson.

Yet five years after the discovery, the Higgs boson still has no satisfactory theoretical explanation. Its main decay channels, about 60% of the H decays (H => b-bbar, H => u-ubar, etc.), still could not be pinned down.

The LHC has run a huge number of collisions and collected a vast amount of data, but even after November 2017 the mainstream still cannot explain the Higgs mechanism; on the contrary, there are indications that this is not the Higgs particle they expected.

Gong theory, by contrast, predicted 33 years ago a mechanism that produces a 125.46 GeV vacuum boson, and it explains and predicts that proton collisions would produce such a vacuum boson. To this day the vacuum boson is the only theory that correctly explains this result.

Within two years, further experimental verification around the world will demonstrate the failure of the Higgs mechanism. Even a theory that won the Nobel Prize can be proven wrong.

In the 38 years since Gong theory was proposed, as mainstream experimental data have kept coming in, it has not been falsified; on the contrary, it has been confirmed by all the data. Gong theory should be a correct theory.

A correct theory may gather dust for a time, but as the other, wrong theories fail, it is bound to prevail. Truth will be revealed to the world in the end.

Congratulations to China's Wukong Satellite on Its New Data

Today, People's Daily Online, Toutiao, and other media reported that China's space detector Wukong (DAMPE) has obtained new data. This is new progress for China in detecting dark matter and probing its physical nature, and it helps address dark matter and dark energy, major open questions in particle physics and astrophysics worldwide. Congratulations are in order!

"Major result! China's Wukong takes the lead on the unsolved dark matter puzzle" (Sohu Tech): http://www.sohu.com/a/207492823_313745?_f=index_chan30news_4

Zhang Xinmin, a researcher and doctoral supervisor at the CAS Institute of High Energy Physics, said: "Like ordinary matter, dark matter gravitates. This gravitational effect led astronomers to find that dark matter makes up 23% of the universe and another 73% is dark energy, while the ordinary matter that makes up the world around us accounts for only 4%."

The media are quoting pre-2013 numbers; the data published by the authoritative Planck mission give updated values (after Planck CMB data, 2013 and 2015):

See https://darkmatterdarkenergy.com/tag/planck/

And the 2017 Dark Energy Survey reports the present-day composition of the universe:

The international physics community still has no idea what dark matter and dark energy actually are!

We pointed out long ago what dark energy, dark matter, and visible matter are, and we can calculate the Planck CMB composition data precisely from theory.

Here we share that calculation, hoping to encourage further experimental verification in China, and we wish the Wukong dark matter search even greater success.


We hope the Chinese physics community will work hard to dig out all this dark matter and lay dark energy bare as soon as possible!

Go, China!

————————————-

Note 1: Two years ago there was a nationwide call for names for this dark matter satellite, and I took part. "Wukong" is indeed a resounding and meaningful name: like Sun Wukong with his fiery golden eyes seeing the finest detail, and, for cosmic exploration, "wu kong" also suggests comprehending the essential meaning of space and energy.

Note 2: On the afternoon of December 1, 2017, I visited the CAS history museum and saw a full-scale model of Wukong, a model of the FAST radio telescope, a model of the electron collider, and the Shenyang Institute of Automation's intelligent-manufacturing exhibit. Thanks to Associate Professor Liu Yiyang of the Shenyang Institute of Automation for carefully explaining the structure and operation of the intelligent manufacturing system, and to SAP's university relations department for organizing the visit.

 

A Final Word on the Debate over China's Great Collider

"How should we view Yang Chen-Ning's opposition to China building a collider?" (Wukong Q&A): https://www.wukong.com/question/6418830450436866306/

We have a reliable answer: we know Dr. Yang Chen-Ning is right to oppose building the great collider. In the Toutiao Q&A, people discussed the public video statement Dr. Yang released on May 11; before that, we had already made a series of arguments publicly supporting his view, adding our own perspective on the correctness of scientific theory and on scientific rationality. On this question Mr. Gong wrote four essays in a row arguing that a great collider is a project with no future. Here, then, we also give our final word.

The final decision, of course, rests with China's top leadership, and this is not something one or two Politburo meetings can settle; it requires the top leadership to take responsibility.

When Chinese high-energy physics introduced electron-positron collisions for the first time in the 1980s as a learning toy, that was at least politically sound. When it introduced the second machine, BEPCII, in 2008, inheriting CLEO-c as a charm factory rather than a b-quark facility, that was a technical mistake. If this time it picks up the SSC rubbish abandoned by the United States and builds a great proton collider, history will record it as the height of folly. Fortunately, our country's top leadership is wise.

Our analysis is that China must not go on to a great proton collider; that is a dead end. The possible roads are the four listed below:

1. A muon collider

2. A deep-sea neutrino observation platform

3. A long-baseline neutrino experiment

4. Coordinated multi-satellite detection in space.

These four directions are far more likely to move China to the front rank of high-energy physics worldwide.

There is always a higher sky, and always someone greater.

Man acts, and Heaven watches.

Stay true to the original aspiration; the people's interests are higher than heaven.

The original document can be downloaded here: CSCDebate

Western Mainstream Physics Is Gradually Moving toward Gong Theory

Over the past two years, experimental data from the Western mainstream have come ever closer to the results of Gong theory.

For example, on November 14, 2017, the CMS experiment at CERN's Large Hadron Collider (LHC) reported its latest precision electroweak measurement, the effective leptonic mixing parameter sin²θ_eff^lept = 0.23101 ± 0.00052; see https://arxiv.org/abs/1711.05288. The precisely measured Weinberg angle is getting closer and closer to the Gong-theory value of 28.75 degrees.
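A quick numerical comparison of the two quoted numbers (illustration only):

import math

# Compare the quoted angle with the CMS measurement of the mixing parameter
theta = math.radians(28.75)
print(math.sin(theta) ** 2)     # ~0.2313, vs. 0.23101 +/- 0.00052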

A comparison of the results recently obtained by the Western mainstream with the values calculated in Gong theory:

http://accelconf.web.cern.ch/AccelConf/linac2016/papers/mop106012.pdf
https://arxiv.org/pdf/1705.04764.pdf

https://arxiv.org/pdf/1711.05288v1.pdf

I. The experimental data keep supporting Gong theory:

Since the LHC discovered a new boson in 2012, experimental physics in the mainstream has moved very quickly.

This particle, which is called the Higgs boson, had long been predicted and precisely calculated by Gong theory; it should be called the vacuum boson.

Planck CMB data (2013/2015);

2016: the discovery of the dark flow W;

2015/2016: neutrino CP violation, indicating that the neutrino is not its own antiparticle;

2017: LIGO and the Dark Energy Survey;

2017: the LIGO binary neutron star merger, whose data exclude MOND;

2017: matter/antimatter comparisons showing that positive and negative charges differ in nothing except the sign of the charge;

2017: precision measurement of the Weinberg angle (previously known only to lie between about 28 and 30 degrees), now pinned down near 28.75 degrees;

2017: exclusion of SUSY (supersymmetry) as the solution to the Higgs naturalness problem;

2017: exclusion of WIMPs;

2017: precision CKM measurements indicating that no new particles exist beyond the SM all the way up to 50,000 TeV (not 100, not 1,000, not 10,000, but 50,000 TeV).

II. Since 2012, Western mainstream theorists have begun moving into the arms of Gong theory:

In 2016, Nobel laureate Gerard 't Hooft fully embraced the cellular automaton interpretation of quantum mechanics.

In 2015, Paul J. Steinhardt and others turned to a cyclic universe.

In 2016, mainstream string physics admitted failure; Nobel laureate David Gross recognized the mainstream's mistake.

In 2017, Weinberg disclosed that Witten, the old general of M-theory, and his lieutenants have now given up on that theory.

Note: recently Professor Witten, the father of M-theory, abandoned a useless and wrong theory; to recognize one's error and correct it is a great virtue!

https://www.quantamagazine.org/edward-witten-ponders-the-nature-of-reality-20171128/

Mr. Gong wrote an article in praise of him:

Edward Witten, a physics hero

Witten's statements clearly convey the following four points:

1. QFT is a failed description of the character of nature.

2. {M-theory + AdS/CFT + holography} likewise cannot describe the nature of reality.

3. Beyond those two frameworks there should be an additional layer of abstract description.

4. He has no idea what that layer of description looks like.

These four points pronounce a death sentence not only on M-theory but also on all the other cornerstones of string theory (QFT, AdS/CFT, and holography), the theories that have served as the mainstream paradigm of fundamental physics for the past 50 years.

So Western mainstream physics is quietly returning to the embrace of Gong theory; it has no other choice.

If the Western mainstream is wrong, it loses its authority. And it is indeed wrong now; faced with the correct theory and the objective scientific facts, the Western mainstream has no choice but to move toward Gong theory and adopt it.

Japan, too, has wised up: the ILC has just announced a major downsizing of its collider project, cutting the energy from 500 GeV to 250 GeV and saving roughly five billion dollars, because they too believe that the extra energy would not turn up any new particles.

The Western mainstream is making ever more precise CKM measurements; once those results are confirmed, there will be no case for building any kind of post-HL-LHC collider.

In China we may not yet be able to see this progress in mainstream physics; the Chinese mainstream has neither the capability nor the opportunity to take part in experiencing and verifying Gong theory, and the voice of Chinese science is still held by certain mainstream factions.

Some people in China are still trying to talk the government into a great collider; they will not turn back until they run into the southern wall!

Within two years, the mainstream will conclusively rule out the possibility that neutrinos are Majorana neutrinos, and the Higgs mechanism will then be formally abandoned.

III. Conclusion

At that point, Gong theory will take its place as the theory beyond the Standard Model and become the mainstream theory of fundamental physics.

This will be the inevitable victory of Gong theory!

A Sage's Work, for the World to Study and Experience

Mr. Gong's recent work systematically summarizes the important contributions of Gong theory and may be called the work of a sage. The world may ignore it, stand in awe of it, oppose and obstruct it, or, better, study it carefully and take part in experiencing and verifying it. The Western mainstream, in particular, has begun to change and is quietly returning to the embrace of Gong theory.

HEAVENLY FATHER AND HIS ARTISTIC BABY

Heavenly Father created THIS universe with His ‘First Principle’.


Figure 1

A: The consequences

The consequences of this first principle (Equation zero, G-theory) are the following:

 

One, time moves forward as a time-hose to create a space-time cone, and space expands at EVERY point with constant speed 'C', and it consists of 11 dimensions (see https://tienzengong.wordpress.com/2016/11/06/quantum-gravity-from-here-to-eternity/ ).


Figure 2

Two, it produces intrinsic spin (1/2 ħ) via bouncing between the real and ghost worlds.

eggcarton313

Figure 3

Three, for any set of concentric circles, the outer circle always moves with acceleration.

Four, the universe structure of G-theory produces matter (24 fermions), antimatter (24 anti-fermions) and vacuum/space; see http://prebabel.blogspot.com/2012/04/48-exact-number-for-number-of.html . This rules out any additional fermions (such as SUSY partners) or sterile particles (such as WIMPs or sterile neutrinos).

Five, all 48 fermions emerge from each time quantum. That is, matter and antimatter co-exist at every moment of time, and there was no matter/antimatter annihilation at the Big Bang.

Six, it locks the two measuring rulers {C (light speed) and ħ (Planck constant)} with two locks:

First lock: the electric charge, e = F(√(ħ × C)).

Second lock: alpha (the electromagnetic fine-structure constant, a dimensionless pure number, unchanged by any choice of units) = 1/137.0359… (a standard-physics cross-check of this number is sketched just after this list).

Seven, the universe pie is thus divided into three pieces via an intrinsic Angle (A (0)): energy (space), energy (time) and matter (visible and not visible).

Eight, it produces the gene-colors, which rule out a 4th generation and sterile neutrinos, and it produces the neutrino oscillation.

Nine, the matter/vacuum interaction will produce a ‘vacuum boson’.

Ten, all 48 fermions share an ‘equal right’ (the mass-land-charge), while their apparent masses are different. That is, all those 48 fermions are the SAME kind, and Majorana neutrino is ruled out.

Eleven, it moves the ENTIRE universe from 'NOW' to 'NEXT', which produces gravity, 'quantum-ness' and the 'unified force'.

Figure 4

 

Twelve, it creates a 'bookkeeping': entropy and the CC (Cosmological Constant).

Figure 5

 

Thirteen, it produces a 'bio-computer' (a Turing machine).

Figure 6

 

Fourteen, it demands a dark flow (W, running from 100% down to 0%) for the evolution of this universe; W is 9% now.

Figure 7
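Returning to the second lock in item Six above: as a standard-physics cross-check (this uses the conventional SI relation α = e²/(4πε₀ħc), not a G-theory derivation), the dimensionless value 1/137.0359… can be reproduced from tabulated constants:

```python
import math

e    = 1.602176634e-19    # C,   elementary charge
eps0 = 8.8541878128e-12   # F/m, vacuum permittivity
hbar = 1.054571817e-34    # J*s, reduced Planck constant
c    = 2.99792458e8       # m/s, speed of light

alpha = e**2 / (4 * math.pi * eps0 * hbar * c)
print(1 / alpha)          # ~137.036, i.e. alpha ~ 1/137.0359..., the pure number quoted in item Six
```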

 

The above are explained below.

Figure 8

 

eggcarton590

Figure 9

 

eggcarton582

Figure 10

 

eggcarton572a

Figure 11

 

Locking the measuring rulers with intrinsic angles:

eggcarton584

Figure 12

 

Figure 13

 

Energy/mass distribution:

Figure 14

 

Producing ‘quantum-ness’, ‘unified force’ and accelerating the universe expansion:

eggcarton466

Figure 15

 

eggcarton467

Figure 16

 

Producing a vacuum boson.

eggcarton570b

Figure 17

 

Figure 18

 

This is an eleven-dimensional universe.

Figure 19

 

Here is the Physics-TOE.

Figure 20

 

 

B: The verifications

The above Heavenly laws are slowly but surely verified by the artistic baby (the mainstream physics).

One, the accelerating expansion of this universe was verified in 1997.

Two, the vacuum boson (with 125.26 GeV) was discovered in 2012.

Three, Neff = 3 is verified by Planck (2013, 2015) data.

Four, energy/mass distribution was verified by Planck CMB data (2013) and by Dark Energy Survey (2017).

eggcarton581

Figure 21

 

Five, WIMP is ruled out in 2017, see http://www.nature.com/news/dark-matter-hunt-fails-to-find-the-elusive-particles-1.22970

Six, MOND is ruled out in 2017 by LIGO data.

Seven, Big Bang matter/anti-matter annihilation is ruled out in 2017, see https://cosmosmagazine.com/physics/universe-shouldn-t-exist-cern-physicists-conclude .

Eight, the Weinberg angle is now measured precisely = 28.75 degrees, see https://arxiv.org/abs/1711.05288 .

Nine, the dark flow (W = 9%) was discovered in 2016 by Adam Riess, see https://tienzengong.wordpress.com/2017/05/15/comment-on-adam-riess-talk/ .

eggcarton473

Figure 22

 

eggcarton502

Figure 23

 

C: Possessed baby soul rescued

While the growth of this artistic baby (mainstream physics) is progressing slowly but nicely, its soul is nonetheless possessed by three demons: {Copenhagen doctrine (the measurement mystery and Schrödinger's Cat), GR (General Relativity), and the Higgs mechanism}; see https://tienzengong.wordpress.com/2017/11/12/the-angel-and-demons-in-the-100-years-of-physics-nightmare/ .

 

Fortunately, the ‘Cellular Automaton Quantum Mechanics’ is now casting out the ‘Copenhagen demon’, see https://tienzengong.wordpress.com/2017/10/21/the-mickey-mouse-principle/ .

eggcarton561d

Figure 24

 

eggcarton561

Figure 25

 

Fortunately, the Higgs demon is about to be exorcised.

One, Higgs naturalness has now failed, even if SUSY existed at the GUT scale.

Two, the Majorana neutrino is almost completely ruled out.

First, a very strong hint shows that the neutrino is different from its antiparticle.

Second, the observations of Big Bang Nucleosynthesis very much rule out the Majorana neutrino.

eggcarton515

Figure 26

 

D: The lingering hallucinations are cured

Two physics hallucinations arose at about the same time, and they converge on the same delusive wonderland, the Multiverse.

The M-string theory gets zillions of 'string vacua', which leads to the multiverse.

The 'inflation scenario' (without a guiding principle for the initial conditions) leads to 'eternal inflation', which in turn leads to the multiverse.

 

The wonder drug for these hallucinations is to show, in two steps, that the Multiverse is a delusion.

One, the soul of the multiverse is that the structure constants of THIS universe are just happenstances (the result of a Boltzmann brain); that is, even nature (or God) does not know how to calculate the structure constants of this universe. So, by showing ways of calculating them, that hallucination is cured.

Two, by showing that those calculations are not bubble-dependent, the delusion bubble is burst further.

 

Now, many prominent physicists (such as Paul J. Steinhardt) are joining in to eradicate these physics hallucinations; see https://tienzengong.wordpress.com/2017/05/13/the-end-of-the-inflation-war/ .

 

F: the remaining living dead

SM (the Standard Model of particle physics) has passed every test we can throw at it, but no one believes that the SM is the correct final theory.

On the other hand, everyone still sees GR (General Relativity) as the Gospel for gravity, especially after the LIGO announcement of October 16, 2017.

Indeed, GR has also passed every test we can throw at it, and LIGO could be a great tool for viewing the cosmos in a different way. But these facts do not change the fundamental FACT that GR is a totally wrong description of gravity.

The most important damning FACT on GR is that GR plays no role at ALL in the Heavenly Father’s description (HFD) of THIS universe.

In HFD, this universe is ruled by a Structure Function which consists of {G (energy, dark energy) + G (mass; dark and visible)}.

The G (energy) leads to the acceleration of the expansion of this universe. But, most importantly, it also leads to ‘quantum-ness’.

The G (mass) is of course leading to Newtonian gravity, while GR is just an attribute of this G (mass).

 

There is no issue about GR being an excellent effective theory for gravity, but seeing it as the Gospel has become the major hindrance to getting a correct gravity theory. The recent overhyped LIGO story makes the situation even worse. At this moment, this GR demon is not yet exorcised in terms of sociology. The KEY mission of this article is to cast this GR demon out once and for all. For more details, see https://tienzengong.wordpress.com/2017/11/12/the-angel-and-demons-in-the-100-years-of-physics-nightmare/ .

 

Heavenly Father and his artistic baby

 

 

The Angel and demons in the 100 years of physics nightmare

Nature has been moving along nicely, minute by minute, for the past 14 billion years, playing its predetermined dance toward its predetermined destiny with grace and joy.

By contrast, human mainstream physics has been in a hellfire nightmare since the discovery of a new boson in 2012. Did it fall into this hellfire nightmare suddenly and unexpectedly? Or had many hellfire demons been plaguing mainstream physics since its beginning 100 years ago? Logically, the latter must be the case; that is, the cause of today's nightmare can be traced through its history.

The brief history

One, in 1925-1927, the Copenhagen doctrine DECLARED that 'quantum uncertainty' is an intrinsic attribute of nature which cannot, even in principle, be removed by improving the measurement; this led to the 'measurement mystery'.

Soon after, Schrödinger came up with the Cat-riddle, which CREATED the 'superposition mystery', the omnipresence of the 'Quantum God'.

Two, in early 1954, a general gauge-symmetry theory was developed by Chen Ning Yang and Robert Mills. Then, in the early 1960s, Murray Gell-Mann discovered the "Eightfold Way" representation from the experimental data. Yang-Mills theory is a mathematically beautiful tool for describing symmetries, and the Eightfold Way obviously encompasses a beautiful symmetry. However, the Yang-Mills field must be massless in order to maintain gauge invariance.

eggcarton580

Three, in order for the Yang-Mills gauge to make contact with the real world (the Eightfold Way), its symmetry must be spontaneously broken. In 1964, Higgs and others came up with a 'tar-lake-like field' (the Higgs mechanism) to break the SU gauge symmetry spontaneously.

Four, in 1967, Steven Weinberg and others combined an SU(2) gauge theory (a special Yang-Mills gauge) with the Higgs mechanism to construct the EWT (Electroweak Theory). This EWT works beautifully for a two-quark model (with up and down quarks).

Five, in the November Revolution of 1974, Samuel Ting discovered the charm quark via the J/ψ meson; the original two-quark model was thus expanded into a four-quark model.

Six, in 1973, Kobayashi and Maskawa introduced "CP Violation in the Renormalizable Theory of Weak Interaction". Together with the Cabibbo angle (θc), introduced by Nicola Cabibbo in 1963, the Cabibbo-Kobayashi-Maskawa (CKM) matrix was constructed. Since this CKM matrix demands AT LEAST three generations of quarks, a six-quark model, the SM (Standard Model), was constructed. The SM further predicts the charged weak currents (the W bosons) and the neutral current (the Z boson). The tau (τ) lepton was discovered in 1975.

Seven, in 1983 the W bosons were discovered, and the Z soon after. The top quark was finally discovered in 1995.

At this point the SM was basically confirmed. However, the Higgs mechanism also predicts a field boson, and since the Higgs mechanism is the KEY cornerstone of the SM, the SM is not complete until that Higgs field boson is discovered.

The brief history of BSMs

With the great success of SM, a few BSMs (beyond standard model) quickly emerged.

One, the GUT (Grand Unified Theory), with a higher symmetry, SU(5) containing SU(3) × SU(2) × U(1), at an energy scale of about 10^16 GeV. This work was mainly done by Georgi and Glashow in 1974. The key prediction of GUTs is proton decay, and from the early 1980s a major effort was launched to detect it. But the proton's half-life is now firmly set at over 10^33 years, far longer than the age of this universe. To date, all attempts to observe new phenomena predicted by GUTs (such as proton decay or the existence of magnetic monopoles) have failed. With these results, Glashow basically went into hibernation, hoping that a 'sterile neutrino' would come to his rescue.

Two, the Preon model (due to Abdus Salam), which was expanded into the Rishon model (mainly by Haim Harari). It has sub-quarks (T, V): T (Tohu, meaning "unformed" in the Hebrew of Genesis) and V (Vohu, meaning "void" in the Hebrew of Genesis).

Rishons (T or V) carry hypercolor to reproduce the quark colors, but this setup renders the model non-renormalizable, so it was abandoned almost on day one.

Three, M-string theory began as a bosonic string theory. In order to produce fermions, it must incorporate the idea of SUSY; that is, M-string theory and SUSY are dicephalic parapagus twins.

eggcarton320a

In the 1960s-1970s, Vera Rubin and Kent Ford confirmed the existence of dark mass (not dark matter). SUSY was claimed to be the best candidate to provide this dark mass, and M-string theory has thus dominated BSM physics for the past 40 years.

The awakening of the demons

In 2012, a Higgs-boson-like particle was discovered, with a measured mass of 125.26 GeV, trillions upon trillions of times smaller than the naively expected value.

eggcarton430

The only way out of this predicament is to have a hidden massive partner cancel (balance) out the huge mass. This massive partner could be a SUSY particle or a twin Higgs. By March 2017, neither a twin Higgs nor any SUSY particle had been discovered below the 2 TeV range. Even if SUSY existed at some higher energy, it would no longer be a solution to this Higgs-naturalness issue.

Furthermore, the b/b-bar channel should account for over 60% of Higgs boson decays. But as of now (November 2017), this channel is still not confirmed; the best number was 4.5 sigma, from a report a year ago, which is not enough for a confirmation. Most importantly, even if the channel were confirmed, it could not meet this 60% mark.

eggcarton311a

Thus, many physicists are now open to the possibility that this 2012 boson might not be the Higgs boson per se.

eggcarton512

Yet, this Higgs demon does not stop its dance with the above issues.

The neutrino's mass cannot, by definition, be accounted for by the Higgs mechanism, a tar-lake-like field that slows a massless particle down so that it gains an apparent mass, because neutrinos do not slow down in the Higgs field at all. Thus, on that account, neutrinos must be Majorana fermions.

Yet, the Majorana angel has never been observed.

One, by definition, a Majorana particle must be its own antiparticle. But many data now show that the neutrino is different from its antiparticle.

Two, a Majorana neutrino should induce neutrinoless double beta decay, but that decay's half-life is now set at over 10^25 years, far longer than the age of this universe.

Three, again by definition, a Majorana particle's mass must come from the seesaw mechanism, that is, be balanced by a massive partner such as a sterile neutrino or something else (SUSY or whatnot). But the sterile neutrino is now almost completely ruled out by many data sets (IceCube, etc.).

Four, the most recent analysis of Big Bang Nucleosynthesis fits well if the neutrino is a Dirac fermion (without a massive partner). If the neutrino is viewed as a Majorana particle (with a hidden massive partner), Big Bang Nucleosynthesis can no longer fit the observational data.

Without a Majorana neutrino, the Higgs mechanism is DEAD. With a dead Higgs mechanism, the SM is fundamentally wrong as the correct model, although it remains an effective theory.

This Higgs demon is now killing the SM, pushing the mainstream physics into the hellfire dungeon.

Of course, Weinberg and many prominent physicists still hope for a rescue from one of the BSMs, especially from M-string theory. But SUSY (a major component of M-string theory) is now totally ruled out as an EFFECTIVE rescue, and many of the most prominent string theorists are now abandoning M-string theory; see Steven Weinberg's video presentation for the International Centre for Theoretical Physics on Oct 17, 2017, at the 1:32 (one hour and 32 minutes) mark. The video is available at https://www.youtube.com/watch?v=mX2R8-nJhLQ . A brief quote of his remarks is available at http://www.math.columbia.edu/~woit/wordpress/?p=9657

eggcarton569

The rescuing angels

While theoretical physics falls step by step into the hellfire dungeon, the experimental-physics angels are descending on Earth with sincerity and kindness.

One, dark mass (not dark matter) was firmly confirmed by the 1970s.

Two, the acceleration of the expansion of the universe was discovered in 1997.

Three, a good estimate of the CC (Cosmological Constant), ~3×10^−122 in Planck units, was reached in the 2000s.

Four, a new boson with a mass of 125.26 GeV was discovered in 2012.

Five, the Planck CMB data (2013 and 2015) provided the following:

dark energy = 69.2%, dark matter = 25.8%, visible matter = 4.82%;

Neff = 3.04 +/- …;

Hubble constant: H0 (early universe) = 66.93 ± 0.62 km s^−1 Mpc^−1 (using ΛCDM with Neff = 3).

These were further supported by the Dark Energy Survey.

eggcarton581

Six, the local value of the Hubble constant: H0 (now, the later universe) = 73.24 ± 1.74 km s^−1 Mpc^−1. The difference between this measurement and the Planck CMB value indicates a dark-flow rate of W = 9% (a back-of-the-envelope reading of this number is sketched just after this list).

Seven, the LIGO binary-neutron-star coalescence ruled out most MOND models in October 2017.

Eight, there is no difference between matter and its antimatter apart from their opposite electric charges.
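Picking up the 9% figure from item Six: the text does not spell out how it is obtained, but one simple reading (an assumption on our part) is the fractional gap between the local and early-universe H0 values:

```python
h0_local, h0_early = 73.24, 66.93          # km/s/Mpc, the two values quoted above
gap = h0_local - h0_early
print(round(100 * gap / h0_early, 1))      # ~9.4% relative to the Planck (early) value
print(round(100 * gap / h0_local, 1))      # ~8.6% relative to the local value
# Either way the gap is roughly the quoted dark-flow rate of W = 9%.
```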

 

The failed Inter-Universes Escape

Under total siege by the data angels, the Higgs-mechanism-led army planned an 'inter-universe escape'. Its war plan was very simple, with two tactics.

One, blind its own eyes and yell super loud: "We are the only game in town." For this, they organized a Munich conference, "Why Trust a Theory? Reconsidering Scientific Methodology in Light of Modern Physics", on 7-9 December 2015; see http://www.whytrustatheory2015.philosophie.uni-muenchen.de/index.html .

Two, INVENT almost unlimited ghost universes by using the dominant cosmological theory, 'inflation cosmology'.

"Inflation" was a piece of reverse engineering aimed at resolving certain cosmological observations, such as the flatness, horizon and homogeneity facts. As reverse engineering, inflation of course fits almost all the old data and many NEW observations. But almost all reverse engineering is constrained only by the data observed at the time, without any 'guiding principle'.

That is, the 'initial condition' of inflation cannot be specified or determined. This lack of guidance allows unlimited 'inflation models' to be invented and, of course, leads to 'eternal inflation', with unlimited bubble universes.

At the same time, M-string theory also reached its final destination, the 'string landscape', which likewise has unlimited string vacua, again giving unlimited bubble universes (the Multiverse). That is,

“Eternal inflation” = ‘string landscape’ = multiverse

Now, there is a CONVERGENCE from two independent pathways, and this could be claimed as a great justification for its validity.

With the super-weapon of the Multiverse, the Higgs-mechanism-led army is no longer besieged by the angel of facts: the facts (nature constants, etc.) of this universe become mere random happenstances, and even Nature does not know how to calculate them.

The only way to kill this Multiverse escape is by showing:

One, ALL the angel facts of THIS universe can be calculated.

Two, ALL the angel facts of THIS universe are bubble-independent; see http://prebabel.blogspot.com/2013/10/multiverse-bubbles-are-now-all-burst-by.html .

eggcarton204

See, https://tienzengong.wordpress.com/2015/04/22/dark-energydark-mass-the-silent-truth/

eggcarton229

See https://tienzengong.wordpress.com/2016/04/24/entropy-quantum-gravity-cosmology-constant/

eggcarton235

About the Higgs: see, https://www.linkedin.com/pulse/before-lhc-run-2-begins-enough-jeh-tween-gong

More discussions on M-string theory is available at https://tienzengong.wordpress.com/2016/09/11/the-era-of-hope-or-total-bullcrap/ .

 

The Arch-Demons

In addition to ruling out the Multiverse nonsense, there are some other major issues:

One, baryongenesis

Two, the dark energy/dark mass

Three, the gravity/spacetime

Four, is ‘Quantum-ness’ fundamental? (Including its measurement and superposition issues).

 

In G-theory, the ‘quantum-ness’ is not fundamental but emerges from the dark energy, see http://prebabel.blogspot.com/2013/11/why-does-dark-energy-make-universe.html .

eggcarton466

 

eggcarton467

 

Furthermore, the G-theory universe is all about 'computation'; that is, there must be a computing device in the laws of physics. And of course there is: in G-theory, both the proton and the neutron are the basis of a Turing computer; see http://www.prequark.org/Biolife.htm .

These two points show that 'quantum-ness' is not about 'uncertainty' but is all about 'Cosmo-certainty'; see https://tienzengong.wordpress.com/2014/12/27/the-certainty-principle/ . That is, the Copenhagen doctrine is in fact one of the Arch-Demons.

In addition to ‘computation’, THIS (not other-verse) universe is all about energy and mass. So, the Structure Function of THIS universe can be defined as:

S (universe) = S (energy, mass)

= S (dark energy, dark mass, visible relativistic mass/energy)

As both Newtonian gravity and GR are related to the structure of this universe, gravity can be defined by the S-function as:

Gravity = G (S) = G (dark energy, dark mass, visible mass)

= G (dark energy) @ G (mass)

G (mass) has only one parameter, mass. This FACT shows that every 'mass' must interact with ALL other masses in THIS universe; thus a Simultaneity Function can be defined by G (mass):

G (mass) = Si (mass); G (mass) is a simultaneity function.

This Si function can be renormalized only if the gravitational interaction transmits instantaneously. In fact, if the gravity of the Sun reached Earth at light speed, it would not fit reality. The Sun/Earth gravitational interaction is precisely described by Newton's law of gravity, which encompasses instantaneity.
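For concreteness, the Sun/Earth force referred to here is the ordinary Newtonian value; a minimal sketch with standard constants (nothing G-theory-specific is assumed):

```python
G       = 6.67430e-11   # m^3 kg^-1 s^-2, gravitational constant
m_sun   = 1.989e30      # kg
m_earth = 5.972e24      # kg
r       = 1.496e11      # m, mean Sun-Earth distance

F = G * m_sun * m_earth / r**2
print(F)                # ~3.5e22 N from Newton's inverse-square law
```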

So, for Sun/Earth gravity at least (if not for other cases), G (mass) should be the function of both {simultaneity and instantaneity}. Thus, we can define:

G (Sun/Earth) = G (mass, simultaneity, instantaneity)

For Newtonian gravity, the masses are wrapped into two points (the centers of mass), while simultaneity and instantaneity are an innate part of the equation.

For GR, the simultaneity and instantaneity are wrapped into the ‘spacetime sheet’. When mass interacts with the GR spacetime sheet, it transmits both simultaneously and instantaneously.

This kind of wrapping makes both gravity theories automatically incomplete, effective theories at best. Newtonian gravity is now viewed as wrong in terms of Occam's razor, and thus it does modern physics no harm. GR, on the other hand, is still viewed as the Gospel on gravity, and it has become the greatest hindrance to getting a correct gravity theory.

If GR provided us some insights before, that is long in the past. The recent promotion of the greatness of the LIGO discovery will drag us further down into the hellfire dungeon. LIGO might indeed provide additional data to confirm what we already know, but it cannot rescue GR from its fate as total trash. The following is just a short list of GR's shortcomings.

One, GR plays zero role in the construction of quark/lepton.

Two, GR plays zero role in calculating the nature constants, such as Alpha or Cabibbo/Weinberg angles, etc.

Three, GR fails to account for dark mass and dark energy and is unable to derive the Planck CMB data:

dark energy = 69.2%, dark matter = 25.8%, visible matter = 4.82%;

Neff = 3.04 +/- …;

Hubble constant: H0 (early universe) = 66.93 ± 0.62 km s^−1 Mpc^−1 (using ΛCDM with Neff = 3).

Four, GR provides no hint of any kind for the BaryonGenesis, which is definitely a cosmology issue, and this alone should give GR the death sentence.

Five, the last but not the least, GR is not compatible with QM (quantum mechanics).

More details on this, see https://medium.com/@Tienzen/yes-gr-is-very-successful-as-gravitational-lens-ff65efb63889 .

Yes, GR is of course a very EFFECTIVE gravity theory (a great piece of reverse engineering), but it is definitely not the correct one. The GR wrapping, which hides the essences of gravity (simultaneity and instantaneity), renders it unsalvageable and unamendable; it is in fact the greatest hindrance to getting a correct gravity theory. So GR is the other Arch-Demon of modern physics.

Here is the ArchAngel

All the calculations for those angel facts (of section D) are done in G-theory (Prequark Chromodynamics).

Superficially, the Prequark model is similar to the Preon (Rishon) model, but there are at least four major differences between them.

One, the Rishon model has sub-quarks (T, V): T (Tohu, meaning "unformed" in the Hebrew of Genesis) and V (Vohu, meaning "void" in the Hebrew of Genesis). But Harari did not know what T is (it is just 'unformed'). In the Prequark model, on the other hand, the A (Angultron) is an innate angle, a basis for calculating the Weinberg angle and Alpha; see http://prebabel.blogspot.com/2012/04/alpha-fine-structure-constant-mystery.html .

Two, choosing (T, V) as the bottom in the Rishon model was ad hoc, a result of reverse engineering. In G-theory, by contrast, there is a very strong theoretical reason for where the BOTTOM is.

In G-theory, the universe is ALL about computation, computable or non-computable. For the computable there is a two-code theorem; for the non-computable there are the 4-color and 7-color theorems.

That is, the BOTTOM must consist of two codes. Any level below two codes becomes TAUTOLOGY, just repeating itself.

Anything with more than two codes (such as 6 quarks + 6 leptons) cannot be the BOTTOM.

Three, Rishons (T or V) carry hypercolor to reproduce the quark colors, but this setup renders the model non-renormalizable and quickly turns it into a big mess; so it was abandoned almost on day one. Prequarks (V or A), on the other hand, carry no color, and the quark colors arise from the "prequark SEATs". In short, the Rishon model cannot work out a neutron-decay process different from the SM process.

 

eggcarton570b

eggcarton582

This is one of the key differences between the Prequark model on one side and the Rishon model and the SM on the other.

Four, the Preon/Rishon model has no gene-colors, which are the key drivers of the neutrino oscillations.

eggcarton572a

More details on those differences, see http://prebabel.blogspot.com/2011/11/technicolor-simply-wrong.html .

In addition to being a theory of particles, G-theory also resolves ALL cosmological issues, of which there are only three:

One, the initial condition of THIS universe

Two, the final fate of THIS universe

Three, the BaryonGenesis mystery

BaryonGenesis determines the STRUCTURE of THIS universe, that is,

G (S) = G (dark energy, dark mass, visible mass)

= G (dark energy) @ G (mass)

So, BaryonGenesis must be a function of G (S), which is described by:

dark energy = 69.2%, dark matter = 25.8%, visible matter = 4.82%.

The calculation of these Planck CMB data in G-theory uses the 'mass-LAND-charge': all 48 fermions (24 matter and 24 antimatter) carry the same mass-land-charge while their apparent masses differ, and the MASS-pie of THIS universe is divided evenly among those 48 fermions. That is, antimatter has in fact not disappeared (has not been annihilated); it is merely invisible. See the calculation below. For more details, see https://tienzengong.wordpress.com/2017/10/26/science-is-not-some-eye-catching-headlines/ .

This BaryonGenesis of G-theory rules out the entire sterile dark sector (WIMPs, SUSY, sterile neutrino, axion, MOND, etc.) completely.

On November 8, 2017, Nature (Magazine) announced the death of WIMP, see http://www.nature.com/news/dark-matter-hunt-fails-to-find-the-elusive-particles-1.22970 .

eggcarton583

This BaryonGenesis calculation must also link to the issues of {initial condition and the final fate}. And indeed, it does.

BaryonGenesis in fact has two issues.

One, where is the antimatter in THIS universe?

Two, why is THIS universe dominated by matter while not by antimatter?

The ‘One’ was answered with the above calculation.

The ‘Two’ can only be answered by ‘Cyclic Multiverse’.

However, for THIS universe to go into a 'big crunch' state, omega (Ω) must be larger than 1, while it is currently smaller than 1. That is, there must be a mechanism to move (evolve) Ω from less than 1 to greater than 1.

Again, only G-theory has such a mechanism, and it is not separately invented but is part of the BaryonGenesis calculation: the dark flow, W.

This dark flow (W) prediction of the G-theory was confirmed in 2016, see https://tienzengong.wordpress.com/2017/05/15/comment-on-adam-riess-talk/ .

eggcarton502

G-theory of course accounts for the ‘initial condition’, see https://tienzengong.wordpress.com/2016/12/10/natures-manifesto-on-physics-2/ .

 

Army of the Archangel

Weinberg has complained about the Arch-Demon (the Copenhagen doctrine) many times, but without making any new proposal; see http://prebabel.blogspot.com/2013/01/welcome-to-camp-of-truth-nobel-laureate.html .

 

On the other hand, 't Hooft (Nobel laureate) did embrace G-theory from the standpoint of cellular-automaton quantum-ness; see http://prebabel.blogspot.com/2012/08/quantum-behavior-vs-cellular-automaton.html . In 2016, he even published a book on it.

eggcarton579

More details, see https://tienzengong.wordpress.com/2017/10/21/the-mickey-mouse-principle/ .

Sabine Hossenfelder just issued a death sentence for Naturalness (see http://backreaction.blogspot.com/2017/11/naturalness-is-dead-long-live.html ).

eggcarton575

 

The death of Naturalness is a precursor for the death of Higgs Mechanism.

eggcarton576

 

Steven Weinberg just revealed the death of M-string theory in his October 2017 video lecture.

eggcarton577

 

Paul J. Steinhardt announced the death of ‘inflation cosmology’ in 2016.

eggcarton578

 

Conclusion

The current hellfire nightmare of mainstream physics did not start in 2012; it is the result of three demons {the Copenhagen doctrine, GR, and the Higgs mechanism} that took hold 100 years ago. Fortunately, many angel facts (experimental data) have revealed their demon faces. Finally, the ArchAngel (G-theory) has come to the rescue. With the ArchAngel's growing army, the salvation of human physics is now secured.

 

 

 

The Angel's Song

Through the fourteen billion years since the universe was created, the natural world has been moving forward minute by minute, dancing gracefully and joyfully to its predetermined rhythm toward its predetermined destiny.

And this dance tune is, without doubt, the Angel's Song!

The Super Unified Theory (SUT) was founded by the Chinese-American scholar Mr. Tienzen Gong. After 33 years it resolves, accurately and without error, the major open problems that today's fundamental physics cannot explain. The new edition is called Innovative Physics, The Unified Universe: www.pptv1.com

The entire theory rests on a first principle: the essence of this universe is nothingness; it comes from nothingness, returns to nothingness, and always preserves the invariance of nothingness.

Creation Equation 0

DS = (i^n1, i^n2, i^n3) * C * DT = N * C * DT …………… (Equation 0)

DS is a unit of space and DT a unit of time; C is the speed of light. Each ni takes a value in {0, 1, 2, 3}.

N is an imaginary-real number domain, and the inner product of N has four possible values:

N^2 = {+/−1, +/−3} …………… (Equation 1)

Equation 0 links time and space in a precise way. The imaginary-real domain N generates 64 subspaces, and Equation 1 is a selection rule. When a subspace has N^2 = +3, it is a genuinely real space; these subspaces form an ordinary 3-dimensional Euclidean space accompanied by real time t. When N^2 = −3, the subspace is an imaginary space, equivalent to a 3-dimensional imaginary space accompanied by imaginary time; moreover, real spacetime and imaginary spacetime always accompany each other. And N^2 = +/−1 produces 24 matter quark particles and 24 antimatter quark particles, 48 fundamental particles in all.
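As a quick consistency check of the counting just described (64 subspaces: 8 with N^2 = +3, 8 with N^2 = −3, and 48 with N^2 = +/−1), the sketch below enumerates all (n1, n2, n3) choices; it assumes N^2 is read as the sum (i^n1)² + (i^n2)² + (i^n3)², which is our reading of the notation, not something stated explicitly above.

```python
from collections import Counter
from itertools import product

SQUARE = {0: +1, 1: -1, 2: +1, 3: -1}   # (i^n)^2 is +1 for even n, -1 for odd n

counts = Counter(
    SQUARE[n1] + SQUARE[n2] + SQUARE[n3]
    for n1, n2, n3 in product(range(4), repeat=3)
)
print(counts)
# Counter({1: 24, -1: 24, 3: 8, -3: 8}): 64 subspaces in total,
# 8 real-space (+3), 8 imaginary-space (-3), and 24 + 24 = 48 with N^2 = +/-1,
# matching the 48 fundamental particles quoted above.
```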

It also explains quark flavor-colors and generation-colors and their chromodynamics:

theoretical calculation of the electron fine-structure constant alpha, the Cabibbo angle and the Weinberg angle;

the accelerating expansion of the universe;

the calculation of the cosmological constant;

a calculational model for dark energy, dark matter and the component fractions in the Planck data: dark energy (69.2%), dark matter (25.8%), visible matter (4.82%);

the prediction and testing of the cosmic dark flow W;

and the endless future oscillations of the universe.

In 2012 the LHC discovered a new particle, called the Higgs boson; five years on, mainstream physics still cannot pin down the mechanism that produces it. Yet Gong theory had already predicted it 33 years earlier:

(Super Unified Theory, US copyright TX 1-323-231, issued on April 18, 1984)

We call it the vacuum boson: {246/2} + {246 × 0.01} = 123 + 2.46 = 125.46 GeV.
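Written out as a tiny script (v = 246 GeV is the electroweak vacuum expectation value; the 1% term is taken from the formula above as given):

```python
v = 246.0                        # GeV, electroweak vacuum expectation value
m_vacuum_boson = v / 2 + v * 0.01
print(m_vacuum_boson)            # 125.46 GeV, to be compared with the measured ~125.26 GeV
```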

Innovative Physics has also given the unified equation for all forces, the unified force:

F (unified) = K * ħ / (ΔS * ΔT) …………… (Equation 2)

where S is space, T is time, ħ is the Planck constant, and K is the force-coupling coefficient.

This equation is the strongest candidate for a grand unified theory of forces.
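A quick dimensional check of Equation 2 (taking K dimensionless, which is an assumption): ħ/(ΔS × ΔT) carries units of (J·s)/(m·s) = N, so the right-hand side is indeed a force. Evaluating it at the Planck length and Planck time reproduces the familiar Planck force c⁴/G:

```python
hbar = 1.054571817e-34   # J*s
c    = 2.99792458e8      # m/s
G    = 6.67430e-11       # m^3 kg^-1 s^-2
l_p  = 1.616255e-35      # m, Planck length
t_p  = 5.391247e-44      # s, Planck time

F_eq2    = hbar / (l_p * t_p)   # Equation 2 with K = 1, Delta S = l_p, Delta T = t_p
F_planck = c**4 / G
print(F_eq2, F_planck)          # both ~1.2e44 N
```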

In addition, Gong theory is about the universe as 'computation'; that is, the laws of physics must contain a computing device. And indeed, Gong theory finds that both the proton and the neutron are the basis of a Turing computer and carry the seeds of life. See: http://www.prequark.org/Biolife.htm .

eggcarton188

The ultimate theory: Innovative Physics can resolve all of the following problems:

1. Explain the origin of the natural constants, such as the electron fine-structure constant Alpha, the electron charge e, the speed of light C, the Planck constant ħ, etc.

2. Give the subatomic particle families of the Standard Model (SM) and accurately explain the SM.

3. Explain the origin of the principles of quantum mechanics and derive the uncertainty principle from the theory.

4. Give the grand unified equation of force, describing the electromagnetic, weak, strong and gravitational forces in a unified way.

5. Explain and calculate dark energy, dark matter and the component fractions in the Planck data, with a calculational model for the Planck CMB data: dark energy (69.2%), dark matter (25.8%), visible matter (4.82%).

6. Explain the origin of the cosmological constant (Λ) and calculate it theoretically.

7. Explain the origin of quantum spin.

8. Explain the process of baryogenesis.

9. Explain the physical basis of the emergence of life, including consciousness and intelligence.

10. Provide a unified system of formal and natural languages.

11. Demonstrate the unification of mathematics and physics.

12. Explain the origin of everything in this universe of ours.

 

If readers still have questions about any of this, please consult Mr. Gong's original works.

The Angel and demons in the 100 years of physics nightmare

The lament of mainstream string physics

A few days ago, something happened that matters for string physics and for fundamental physics and deserves attention:

On October 17, 2017, Weinberg gave an in-depth video interview for the International Centre for Theoretical Physics: https://www.youtube.com/watch?v=mX2R8-nJhLQ&lc=z23zy1k40xabtbbeoacdp435hrm2pyeeybf5beqjdf5w03c010c

He remarked that people such as Witten and Nima (Arkani-Hamed) are no longer working on M-string theory. In his words: "although I notice that recently smart people such as Witten seem to have turned their attention to solid-state physics. Perhaps that is a sign that they are giving up, but I hope not."

If you cannot watch the video, a short quote of these remarks can be found at:

http://www.math.columbia.edu/~woit/wordpress/?p=9657 .

witten222

String theory has been worked on for more than 40 years by top physicists around the world, numbering in the hundreds of thousands, and it became the mainstream backbone of fundamental physics; yet its topmost leaders are few.

In the 1970s, the theoretical physicist John H. Schwarz launched the first generation of string theory.

In the 1980s, Edward Witten and Brian Greene carried out in-depth research (the first revolution: S-duality and the development of 11-dimensional string theory).

In the 1990s came string physics' second revolution: Joseph Polchinski (branes) and Juan Maldacena developed AdS/CFT and M-theory.

Then Nima Arkani-Hamed, the young star string theorist, pushed SUSY (an important component of string theory) to its highest stage and tried to persuade the Chinese government to invest 100 billion in building a 100 TeV super collider.

David Gross, a founder of QCD, did little work on string theory himself, yet he became the godfather of string physics' cheerleaders.

Last year's Strings 2016 conference was held in Beijing; all these heavyweights attended and were still hyping it as the golden age of fundamental physics. But David Gross had already begun to recognize string physics' error, and on September 15, 2016 he took the lead, together with more than ten senior string physicists, in a Quanta Magazine piece declaring that string physics had failed as a physical theory.

This year's news shows that the king of string physics, Witten, and his senior general Nima have given up string-physics research; they appear to have surrendered completely.

Who, then, is left in string-physics research? Presumably China's string-physics community is still clutching this straw, unwilling to let go. So this news (from two weeks ago) should carry some lessons for those in the Chinese physics community who are unaware of this important turn.

For China's large collider, this situation is an even more pessimistic piece of news.


-----------------------------

The Unified Universe, The Unified Theory!