mercredi 1 mars 2023

THE DECEPTION TEMPTATION


The End of Scientific Enlightenment?

 


The notion of ‘good and bad’ was part of a moral code inspired by the Ten Commandments of the Bible. For instance, “you shall not lie” was one of them, and since you would constantly be tempted to lie anyway (to take advantage of others, or to brag about yourself, etc.), you were told to learn how to control the temptation to lie. The temptation was driven by the Devil himself, whispering in your ear: do it..., no one will ever know…, it’s gonna be all right! But every Sunday God himself would step down to preach that you had the choice to be a good person and say NO to the deception temptation.

 

WHY SHOULD YOU? whispered the Devil.

 

And then came the learning of scientific principles, the QED SCIENTIFIC PROOF, more powerful than the Ten Commandments: you don’t need to lie to convince, you just need to prove it. This was demonstrated by Descartes and Newton, and became the Age of Enlightenment; then by Einstein and Feynman and millions of others, to become the era of REASON.

 

REALLY... REALLY? whispered the Devil, who hastily prepared new tools to mess with all this good will.

 

Fast forward to the TRUMP and PUTIN century with its panoply of fine-tuned instruments of deception: mirrors, illusions, fake video montages, propaganda, disinformation, fake news, deep fakes, self-paid publications, etc., with, as primary result, the disorientation of people’s minds: who tells the TRUTH?

 

All these deception tools were rapidly endorsed and mastered by industry and by governments, legitimized under the disguise of sales-promotion information, publicity, marketing, information mixed with entertainment, etc.

 

Disinformation practices have nowadays infiltrated all aspects of our lives, even the Justice system, which has finally surrendered to the practice of overpaid lawyers helping their clients lie under oath. No wonder the courts have become the playground of the rich and famous, at the expense of Justice for all!

 

THANK YOU… whispered the Devil!... GLAD TO BE OF SERVICE!

 

Someone argues back:

 

But deception is not in science… scientists… you know… those are the incorruptible pillars of the search for the TRUTH!

 

OH YEAH?… jubilated the Devil… CHECK MY LATEST ACHIEVEMENT BELOW:

 

For the last decade I have suggested to the polymer science community that it abandon the current paradigm of polymer physics used to describe the molecular dynamics of macromolecules. Essentially, this paradigm consists of two models: the Rouse model and the reptation model; there are, of course, ramifications and improvements that have been applied to modify these models. Yet I have exposed the problems and deficiencies of the current paradigm in explaining old and new experimental evidence that has been validated by other independent researchers, in particular the “sustained orientation” property of Rheo-Fluidified melts, which totally contradicts the current molecular dynamics paradigm [1].

 

The fundamental question that I address in my books [1-5] relates to the statistical treatment of the interactions between macromolecules that should be applied to polymers: should the classical Boltzmann statistics of systems made up of macromolecules be used (the classical approach that led to the current paradigm), or should another statistical model and another statistical system be considered?

 

In my research I define and explore the use of a dissipative statistical description of the interactions, the Grain-Field Statistics, to quantitatively describe the viscoelastic behavior of polymers. This is the objective of the new paradigm taught and disseminated by the New School Polymer Physics [1-5].

 

In his famous book “The Structure of Scientific Revolutions” [6], Thomas Kuhn identified three criteria that define the strategy of a current paradigm’s gatekeepers in opposing the dissemination of a new paradigm challenging their established one:

 

 

“- 1. rejection, by inability to leave behind the previous paradigm;

- 2. acceptance of the disruptive necessity for a change, while finding it very hard to adapt to the new paradigm;

- 3. attempts to sabotage the new paradigm by deception, to prevent its dissemination.”

 

Amazingly, the review process of my first book [1] confirmed these three statements “by the book”.

 

One reviewer, a famous rheologist, candidly said: ”...my mind is not able to bend around your ideas. My upbringing was influenced by the classical concepts of Rouse, Graessley, de Gennes, etc., so it is difficult for me to understand and agree with some of your ideas...”. Kuhn’s criterion #1.

 

A second reviewer, PhD, MBA, advisor to corporate research companies, approved of the necessity of a change of paradigm in polymers and enthusiastically expressed the industrial prospects of the sustained-orientation technology. He wrote:

 

“...with this new cross-dual-phase model, Dr Ibar is able to explain the physics of polymer molecules, and particularly the rheology, and entanglement and disentanglement behaviours.”

 

But he also added: ”This book is not for the faint-hearted. It is both a complex, difficult read, and yet at the same time exciting, brimming over with new concepts and ideas, almost mind-bending!”. Kuhn’s criterion #2. The full review is available in Ref. 7.

 

A third reviewer, the former Dean of a well-established German university’s plastics processing department, might not have been aware that his response would transparently illustrate Kuhn’s third way of fighting a new paradigm: sabotage by deception. Obviously, the Devil had whispered in his ear and could be satisfied with his pupil! Kuhn’s criterion #3.

 

I invite you to read my rebuttal of the paper published by this third reviewer in the Journal of Rheology (JOR) by clicking on the link below:

 

Rebuttal to H. Münstedt’s JOR paper “Mechanical Pretreatment of Polymer Melts. Critical Aspects and New Rheological Investigations on a Linear and a Long-Chain Branched Polypropylene”, J. Rheol. 65(5) (2021): 871-885, DOI: 10.1122/8.0000237


Not only do I rebut, one by one, the various misrepresentations left uncorrected in this Journal, but I explain the difference in interpretation of shear-refinement of polymer melts between the classical views (orchestrated by the third reviewer) and the New School Polymer Physics’ approach to viscoelasticity.


Tell me:

 

-        Why would a scientist of great notoriety, publishing in a reputable science journal, need to deliberately publish false information to undermine the credibility of new research contradicting his convictions? Why would he, the former Dean of his Department, denigrate the results by misquoting them, even, a few times, by deliberately distorting the facts? What was his real intention in using deception to disqualify the credibility of the results presented in a book [1] he had accepted to review (and had agreed to discuss with the author)?

 

-        Why would the prestigious Journal not invite the author of the book under attack to be one of the reviewers of Münstedt’s paper when it was submitted? Is it not customary, and fair, to offer an author attacked in a paper the chance to become a reviewer of that paper?

 

-        Why would the Editorial Board members of the Journal in which the notorious Professor published the deception stay mute when they were made aware of a rebuttal request by the author? Even if it is true that the deception was disguised, subtle, and not apparent at first glance, it was revealed explicitly in the rebuttal article (now published as GJSFR A22.5 (2022): 21-31), documenting that this was not a divergence of opinion: the author of the deception knew the facts but deliberately chose to misquote or distort them. The peer-review system can only be fair if it remains uncompromised.

 

Is it not obvious that the real reasons for disparaging my work on the shear-induced melt instability of linear polymers (resulting in “disentanglement”, see Ch. 4 of my book [1]) are to avoid the discussion of the failure of the molecular models to comprehend it (Chs. 6, 7) and of the need for a change of paradigm in polymer physics to understand entanglements (Chs. 1, 2)?

 

And is it not true that the Editorial Board members of the Journal all sided with the notorious author to defend the current paradigm that my book claimed was deficient?

 

QED: KUHN’S CRITERION #3, “SABOTAGE BY DECEPTION TO PREVENT ITS DISSEMINATION”, IS THE REASON FOR H. MÜNSTEDT’S FALSE INSERTIONS.

 

Here is how I concluded my letter sent to the Board members of the journal (the Journal of Rheology), who found futile reasons to refuse to print my rebuttal:

 

“It is crucial that scientists doubt the results of others, and it is of course acceptable to disagree with someone else’s conclusions, but it is unacceptable to use deception to refute what one disagrees with”.

 

President Biden said recently (paraphrasing):

 

“It is the end of Democracy, yes the end of Democracy… if the contestants in an election only accept to win and accuse the winner of fraud if they lose.”

 

Likewise, it is the end of Science if the people in charge of disseminating and protecting the scientific method of enlightenment only accept to protect the existing paradigms, and accuse the contradicting data of being artifacts, or even fraud.

 

It is the end of the power of Science if the people in charge of disseminating and protecting the scientific method of enlightenment are systematically and deliberately censoring the publication of results that prove the failures and shortcomings of the established paradigms, to cover up the possibility that they are incomplete or simply wrong!

 

Yes, it is the end of the credibility of Science if the people in charge of disseminating and protecting the scientific method of enlightenment accept the insertion of unanswered false information in their publications and the use of deception tactics in their peer-review administration.

 

SHOULD SCIENCE STAY OUTSIDE MY REACH, THEN? ....pleaded the Devil...

 

1. Ibar J.P., “The Physics of Polymer Interactions. A Novel Approach. Application to Rheology and Processing”, Hanser (2019).

2. Ibar J.P., “Dual-Phase Depolarization Analysis: Interactive Coupling in the Amorphous State of Polymers”, reference book, Dual-Phase Polymer Science and Technology, De Gruyter (2022).

3. Ibar J.P., “Dual-Phase Rheology. A New Understanding of Viscoelasticity in Polymers”, De Gruyter (2023).

4. Ibar J.P., “Dual-Phase Crystallization. Dissipative Interactive Coupling between the Amorphous and Crystalline States”, De Gruyter (2024).

5. Ibar J.P., “Grain-Field Statistics of Dissipative Interactions”, De Gruyter (2025).

6. Kuhn T.S., “The Structure of Scientific Revolutions”, University of Chicago Press, 3rd Edition (1996).

7. Hutley T., review of Ibar’s book “The Physics of Polymer Interactions. A Novel Approach. Application to Rheology and Processing”, Hanser (2019).

8. Ibar J.P., Rebuttal to H. Münstedt’s JOR paper “Mechanical Pretreatment of Polymer Melts” (2021), published as GJSFR A22.5 (2022): 21-31.


vendredi 27 août 2021

IS THE LIQUID-LIQUID TRANSITION (TLL) FUNDAMENTAL?


YES, I do suggest that this transition, TLL, is fundamental in physics!

I just published (Vol. 60, issue 10, Oct. 2021) a couple of papers on this subject of TLL, its existence and its impact (Part I and Part II); Taylor and Francis decided to publish them as a single volume of J. Macromol. Sci. Physics:


        Part I of the paper on TLL


         Part II of the paper on TLL


Here are the titles of the papers and their abstracts.



ABSTRACT (Part I)


R.F. Boyer recognized the manifestations of a T > Tg transition-relaxation as early as 1963 and named it TLL, the liquid-liquid transition. He suggested that it was due to the melting of “local order”, a controversial claim conflicting with the dominant theories of the time, led by P. Flory, which asserted a structureless liquid state for melts. While this controversy unrolled, de Gennes published his reptation model of polymer physics which, after some modifications and ramifications, quickly became the new paradigm describing the dynamic properties of polymer flow. The reptation model has no theoretical arguments to account for a T > Tg transition occurring in the melt; hence, the current consensus about the existence of TLL is still what it was in 1979: that it is probably an artifact existing only in the imagination of Boyer. In Part I of this paper on the TLL transition, we mathematically derive the existence and the characteristics of TLL from a dual-phase description of the free volume, using a modification of the Vogel-Fulcher (VF) equation, a well-known formulation of the temperature dependence of the viscosity of polymer melts. This new expression of the VF formula, which we call the TVF equation, permits one to determine that TLL is an iso-free-volume and iso-enthalpic state as M varies. The data analyzed by the TVF equation are the dynamic rheological results for a series of monodisperse, unentangled polystyrene samples taken from the work of Majeste. The new analysis also reveals the existence of a new transition, which we call Mmc, approximately located at Mmc ~ Mc/10, where Mc is the molecular weight for entanglement. A Dual-Phase interpretation of Mmc is proposed.
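For readers unfamiliar with it, the classical Vogel-Fulcher (VF) equation referred to in this abstract is the standard three-parameter description of melt viscosity; the TVF modification introduced in the paper itself is not reproduced here.

```latex
% Classical Vogel-Fulcher (VF) equation for the viscosity of a polymer melt:
% A and B are fit parameters and T_0 is the Vogel temperature (below T_g).
\eta(T) \;=\; A \,\exp\!\left(\frac{B}{T - T_0}\right)
```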



ABSTRACT (Part II)

In Part I of this two-part paper we derived mathematically the existence of a unique state for polymeric melts, occurring at a specific temperature above Tg, which we recognized to be the liquid-liquid transition, TLL, observed and described by Boyer and others. TLL is the temperature at which the melt is in an iso-free-volume and iso-enthalpic state independent of the molecular weight. It is a fundamental property of the material. The purpose of Part II is to examine and explain the following: 1. the elusive character of TLL (at the origin of the past controversy about the existence of TLL), 2. the increase of free volume at TLL, and 3. the endothermal change of heat capacity on heating across TLL. Finally, our objective is to provide an explanation of TLL and emphasize its importance as an example of a self-dissipative dynamic process that converts, at TLL, into a classical thermally activated process. In this paper the experimental evidence found in the literature for TLL is critically examined to point out the often biased reviews offered by the antagonistic authors of a controversy, here the pros and cons of TLL. We propose a Dual-Phase origin of the interactions in polymers to explain the weak and elusive manifestations of TLL and show, by DSC, that the TLL manifestations become much more visible and prominent when the sample’s state has been brought out of equilibrium. We analyze, in detail, the thermally activated depolarization of samples which have been submitted to a polarization stage by a voltage field. The experimental technique of thermally stimulated depolarization (TSD), and its sister derivative the thermal-windowing deconvolution (TWD), are unique and powerful analytical tools that can experimentally characterize “interactive coupling”, the factor that we have assumed is quantitatively responsible for the behavior of polymers and in particular for TLL.
The existence and the characteristics of TLL were understood and predicted in Part I from rheological results by the use of the Thermo-Vogel-Fulcher equation, whose thermo-kinetic terms, H, S and T, could be interpreted by the interactive coupling of the local free volume and the rotational isomeric conformational state of dual-conformers belonging to the macromolecules, themselves embedded in a collective dissipative system of interactions. The statistics controlling the interactive coupling parameters was described by the Dual-Phase and Cross-Dual-Phase models. In Part II, the same models are used to explain the interactive coupling manifestations specific to the TSD and TWD results. We show that certain characteristics of the TSD and TWD results are directly related to specific parameters of the Dual-Phase model. This is the case for the transitions visible by TSD, such as Tg, related to space charges and local free volume (F-conformers), and TLL, marking the end of the specific impact of the Dual-Phase statistics on the properties. It is also the case when interactive coupling is analyzed by TWD: the compensation of the enthalpy and entropy of activation of the relaxations taking place at various polarization temperatures only occurs below TLL, permitting its specific determination.


We conclude by pointing out the perhaps crucial importance of TLL in establishing the distinct role of thermal energy in structuring or modulating the dynamics of the interactions. The Dual-Phase view of the interactions in polymers suggests that the local density difference between the b-grains and the F-conformers is “time-averaged” by the constant wiping (above Tg) of an “elastic dissipative wave” whose frequency is a function of temperature and molecular weight, and which is thus different from the Brownian dissipation, i.e. the thermal fluctuation characteristic of the Boltzmann mean field (the classical kT/h term). The elastic dissipative wave kinetically loses its collective modulation role and becomes the thermal wave at TLL.


CONCLUSION

Is TLL the manifestation of a fundamental transition between the applicability of the Boltzmann statistics description of the interactions (mean average homogeneous field) and the Grain-Field Statistics description of the interactions (open dissipative systems generated by field fluctuations, leading to dual and cross-dual phases), the Grain-Field statistics being the more general?

According to this interpretation of TLL and of its impact on the flow properties, the reptation and other polymer-chain dynamic models can only provide valid predictions of the properties when T > TLL, with TLL varying with molecular weight, pressure, shear rate, etc. Unfortunately, often without even knowing it, experimenters use parameters such that T < TLL, invalidating the applicability of the current polymer dynamics models of rheology. Therefore, knowing the value of TLL during processing should play a major role in our ability to predict and control the properties of processed polymers.

 




jeudi 12 août 2021

IT'S ABOUT TIME!

 




…about time for what?

 

No, no, no…this post is really about TIME, you know, the concept of time, the tick-tock of the watch, the flow that carries us along in our existence.

In physics, we measure time with respect to a reference, the second, and we express the flow of time as a multiple of that reference. For instance, the Big Bang occurred about 4.23 × 10^17 seconds ago (13.4 billion years), and the life span of a human being, if everything goes right, may reach 3.16 billion seconds (100 years), enough to be part of a lot of good things and bad things…
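As a quick check of these orders of magnitude, here is a minimal sketch (the constant and function name are mine, for illustration only):

```python
# Order-of-magnitude check of the timescales quoted in the text.

SECONDS_PER_YEAR = 365.25 * 24 * 3600  # Julian year, ~3.156e7 s

def years_to_seconds(years: float) -> float:
    """Convert a duration in years to seconds."""
    return years * SECONDS_PER_YEAR

# Age of the universe quoted in the text: 13.4 billion years
print(f"{years_to_seconds(13.4e9):.3e} s")  # ~4.2e17 s

# A 100-year human life span
print(f"{years_to_seconds(100):.3e} s")     # ~3.16e9 s
```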

So, time is usually perceived as an index of the flow of something passing by that we can measure. The consciousness of the reality of the flow of time is due to our perception of the differences between the past, the present and the future. The events which occurred in the past appear frozen like the objects in a picture or in a recorded movie that can only play the same sequences of events, unalterably. There is a great certitude that it happened, one way and no other, even if it is difficult to reconstruct which way it was.

The present is the living reality of time: it is more difficult to apprehend because it requires defining a scale to gauge the dynamic changes occurring simultaneously and the magnitude of their interactions: what is the present for a rock, for a vibrating cesium atom, or for a living cell? The present is our reality, but it is multiple: it could be years, days or an hour, a second, a femtosecond (10^-15 s) or even the Planck time, ~10^-43 seconds, the theoretical smallest duration between the beginning and the end of any subatomic event.

The future is imaginary, or at least imaginable to physicists and engineers, because it is bound to the present: starting at the end of the present, generated by it (the famous initial conditions of a physical system), and perhaps “merging with the present” (uncertainty principle) at the Planck scale.

Time appears to be linked to the description of changes occurring to events, such as in the diffusion equation of heat exchange or in the propagation of sound or light in space. This is a passive role, the index role of a stopwatch.

No changes, no need for the concept of time, no difference between past, present and future. Understood! But how do we measure “no changes”? The measurement of something implies a duration, thus a change of time: an apparent catch-22! Yet we know how to solve this problem: by the use of the concept of the derivative in infinitesimal calculus. The variation of the changes occurring to a variable, dv, is found by extrapolation to zero of an increment of time, dt:

                v’ = limit of dv/dt as dt → 0.

Each reality change is defined by v’, the ratio of 0 to 0 as dt becomes 0. It is an extraordinarily weird thought that our description of reality, of our present, is made up of a series of 0/0! But the magic is that this works: we can describe natural events at no change, neither of time nor of anything else: this is a timeless solution; time is just an index in such a mathematical construction.
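The “0/0 that works” can be made concrete numerically: shrink dt and watch the difference quotient converge to a finite limit. A minimal sketch (function names are mine):

```python
# Difference quotient dv/dt for v(t) = t**2 at t = 1; exact derivative v'(1) = 2.
# Both dv and dt shrink toward 0, yet their ratio converges to a finite limit.

def v(t: float) -> float:
    return t * t

def difference_quotient(t: float, dt: float) -> float:
    """Approximate v'(t) by the forward difference (v(t+dt) - v(t)) / dt."""
    return (v(t + dt) - v(t)) / dt

for dt in (1.0, 0.1, 0.01, 0.001):
    print(dt, difference_quotient(1.0, dt))
# The printed quotients approach the exact value 2.0 as dt shrinks.
```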

The question, though, of the physical legitimacy of extrapolating to 0 remains vivid: are we introducing a fundamental error of comprehension of physical phenomena by extrapolating to 0? Is this why quantum reality, described by the physics of derivatives, has such a hard time making sense? I like to call the analysis surrounding this question the structures of zero. It involves describing the vacuum, of course: both the vacuum separating clusters of matter in the galaxies and other skies, and the vacuum inside the atoms, inside all subatomic particles. Vacuum versus vacuum: the structures of vacuum.

Many important equations of physics are expressed as ordinary or partial differential equations to calculate the changes occurring to a parameter as a function of time. Time’s reality has disappeared in such expressions because of the use of derivatives (1st order, 2nd order, etc.): time is totally passive in these expressions. In other equations, such as Kepler’s 2nd law of planetary orbital motion, time is directly embedded in the description of the event, not through its derivative: it is proportional to the surface area swept by the radius between the planet and the Sun, located at a focus of the ellipse. Time is still a passive variable in all these equations; it is there to measure changes occurring to active variables.
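Kepler’s second law, cited here as an example where time enters directly, can be written as the constancy of the areal velocity (L is the orbital angular momentum and m the planet’s mass):

```latex
% Kepler's second law: the radius sweeps equal areas in equal times,
% so elapsed time is proportional to the area swept.
\frac{dA}{dt} \;=\; \frac{L}{2m} \;=\; \text{constant}
\qquad\Longrightarrow\qquad
\Delta t \;\propto\; \Delta A
```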

In 1905, Einstein’s relativity made time and space correlate dynamically, as if time were an active variable, not a passive one. Einstein made clear that time, space and mass (of the matter filling space) could not be considered independently of each other when motion was observed. Einstein led the way to a forward vision, making time stretchable or shrinkable as matter moved at speeds near the speed of light: this was, indeed, a revolution in thinking!

However, is time a true active variable in Einstein’s physics when time (multiplied by the velocity of light) is simply used to create a four-dimensional timespace where the unit referencing time and the unit referencing space become linearly dependent? In any case, Einstein’s new vision certainly bookmarked in our consciousness the end of a well-established, simple understanding of the concept of time.

What now? Could time be still misunderstood and play a REAL active role?  What does “an active role” mean?

If time comes as discrete Lagrangian durations, packets of “chronons”, say defined by a discrete set of Fourier frequencies and amplitudes, which we call “the vertical structure” of this time duration, one can imagine that this structure of correlated chronons evolves as the event itself exhibits changes in time (for instance due to the acceleration of matter in timespace). As the vertical structure and the parameter describing the event are correlated, one can now talk about “an interaction” between the time structure of the chronon packet (representing the index characterizing the event’s evolution in time) and the parameter describing the event, say its electrical or magnetic properties, its position in space, its velocity or acceleration. This “interaction” makes time part of the event: the structure of the chronon packet and the total Lagrangian duration of the packet become an integral part of the event, interfering with it to optimize some aspect of the changes (minimizing the dissipative energy). This type of interaction, where the size of the unit and its structure are coupled with a minimum principle, describes what I have called a “dissipative interaction”. I have described such interactions in other disciplines of physics: the dissipative interaction of temperature and voltage in the thermally activated relaxation of dipoles, and the dissipative interaction between the conformational energy of conformers and their free volume, used to understand the viscoelastic behavior of polymeric melts in a novel way.

In other words, using the terminology above: is time dissipative? Is time visco-elastic?

If it is, all of our mathematical description of physics that now includes time as a passive variable in differential equations, for instance, should be revisited.

And if time is itself interactively part of what we measure, shall we continue to consider the velocity of light the ultimate reference to understand our past, by back extrapolation, not knowing the history of the structures of time that came along during the travel of the photons?

Our consciousness of time and space, realities elevated to the status of collective knowledge when they started to be defined and measured with respect to their reference unit (a concept generalized to all parameters considered in physics), is deeply engraved in all of us. This is ground zero of our collective elementary knowledge of physics. The measurement of these two fundamental realities, and of velocity as a corollary of their duality, marked a major step forward in our evolution as humans.

Yes, perhaps, IT’S ABOUT TIME to evolve from this 2,000-year-old idea of a fixed reference unit to describe the fundamental parameters and constants of natural phenomena. Einstein led the way to such a forward vision, making time stretchable or shrinkable, but his was an “elastic” vision, also affecting mass and space elastically. We may have to think of time as a visco-elastic entity, with an elastic component and a viscous (dissipative) component.

The direct impact of this new vision of time on our knowledge of the universe at the scale of our daily life might be insignificant, because time is an index under such conditions, and our mathematical description of the laws of nature with such a passive time is clearly powerful and well adapted (infinitesimal calculus). I propose, however, to go beyond a passive concept of time and articulate our comprehension of the interactions at the microscopic scale of quantum physics, or at the scale of general relativity, with the help of a new mathematical description of an elasto-dissipative time in the equations.

Will the concepts of dark matter and dark energy survive such a dissipative description of the interactions? Will gravitation naturally rally the other fundamental forces?

 

Jean Pierre Ibar

jpibar@alum.mit.edu

University of the Basque Country

 

Arrosa, July 28, 2021


dimanche 20 septembre 2020

WHY THE NEED OF A NEW RENAISSANCE IN (POLYMER) PHYSICS?

 




Prof. Ilya Prigogine


The French word “renaissance” means “re-birth” in English, but, actually, it does not need to be translated: everyone uses the original French word to designate a new paradigm, a new era, a fresh start.

I used that word, Renaissance, yet added “New” in front of it.  Is it not a pleonasm to say: “a new renaissance”, like “a re-re-birth”?

Not if you consider that the 1st renaissance in polymer physics occurred with de Gennes, circa 1971-1979, introducing the reptation model of polymer dynamics.

The pre-de Gennes era, in polymer physics, was dominated by physical chemists: Bueche and Flory are typical examples; Treloar also pops into my mind, but there are so many others. I would need Boyer’s legendary memory to be able to turn up a long list of names (see my previous blog #35).

The pre-de Gennes era established the concept of macromolecules (Staudinger), and the de Gennes school developed the scaling concepts to adapt the statistical mechanics of small molecules to the dynamics of long macromolecular chains. This field has indeed become a very sophisticated mathematical edifice after 40 years of fruitful advances due to ramifications and improvements.

Yet I maintain that we need to turn the page and start a new renaissance in polymer physics.  Do we need a clean slate?

Let’s roll back to the time when de Gennes started: how can we consider, differently, the statistics of interactions of the macromolecular chains? After all, if the concept of the macromolecule is not challenged, do we not need to select a single chain as our statistical system? As we always do in physics, once the properties of the system (here the single chain) are properly described, after accounting for the presence of the other chains that perturb the properties of a single isolated chain, we can extrapolate to the whole set of chains. This is the approach that the reptation model proposed and elaborated for 40 years.

The clean slate that we actually need, in order to reformulate the interactions between the macromolecules, is not a small matter; in my opinion, it also bears general consequences for the way we should view interactions in physics, statistically speaking. Boltzmann’s kinetic theory of gases inspired all current statistical models of the steady state of the interactions between a large set of units (molecules, in the case of Boltzmann). This involves the description of a mean field calculated from the energy distribution function, and it also involves a closed statistics, where the canonical ensemble is well defined and constant.
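The Boltzmann mean-field description in question rests on the canonical distribution over a fixed, closed ensemble of states; for reference:

```latex
% Canonical (Boltzmann) probability of a state i of energy E_i at temperature T;
% Z is the partition function over the fixed, closed ensemble of states.
p_i \;=\; \frac{e^{-E_i/k_B T}}{Z},
\qquad
Z \;=\; \sum_j e^{-E_j/k_B T}
```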

Should Boltzmann’s assumptions be put into question, challenged, and perhaps even considered as THE problem to solve?

If it is the case, not only do we need a new renaissance in polymer physics, but perhaps even, more generally, in physics.

You may not have recognized the man in the picture at the top of this blog. He is not as popular as Feynman, but perhaps he should be. He is Professor Ilya Prigogine (1917-2003), Nobel Laureate in 1977 and author of several books on the statistical dynamics of “dissipative structures” [1], [2].

I happen to use almost the same words: elastic dissipative wave [3], dissipation energy, vertical structuring due to the minimization of the dissipative term, etc., in the description of the Grain-Field Statistical model of the interactions applied to polymers [4]. There is no doubt in my mind that the essence of my work on dissipation, started independently from Prigogine, has great resonance with what Prigogine most brilliantly elaborated. Perhaps luckily, though, I was not inspired by that work (at the time) and developed a different mathematical formulation of dissipation, possibly more adapted to the case of interactions between macromolecules, and moreover, it seems, mathematically simpler to apply.

I am not going to elaborate on these statements in a blog post, but since I have been working on those issues for the last 5 years and am ready to send to publishers a couple of books developing these new ideas on the subject[5],[6], I propose to “avant-premiere” a selection of general paragraphs from the books, to illustrate what I have concluded and the questions that remain open.

 

A.  Excerpts from Prigogine [1], [2]:

 

“We believe that we are only at the beginning of a new development of theoretical chemistry and physics in which thermodynamics concepts will play an ever increasing role.”

“In the classical theory of integrable systems, which has been so important in the formulation of quantum mechanics, all interactions can be eliminated by an appropriate canonical transformation.  Is this really the correct prototype of dynamic systems to consider, especially when situations involving elementary particles and their interactions are considered? Do we not have first to go to a non-canonical representation which permits us to disentangle reversible and irreversible processes on the microscopic level and then only to eliminate the reversible part to obtain well defined but still interacting units?”

“A general feature of interest is that dissipative structures are very sensitive to global features which characterize the environment of chemical systems, such as their size and form…, “

“For example, the occurrence of dissipative structures generally requires that the system’s size exceeds some critical value…”

“…It is precisely because of inequalities (3.2) and (3.4) that d2S is a Lyapounov function.  Its existence ensures the damping of all fluctuations. That is the reason why near equilibrium a macroscopic description for large systems is sufficient. Fluctuations can only play a subordinate role, appearing as corrections to the macroscopic laws which can be neglected for large systems.”

 

 

B.  Excerpts from my books in footnotes 3, 5 and 6 (polymer interactions):

 

“…In our view, “conformers”, the constituents of the macromolecules, gather into statistical systems which go beyond belonging to individual macromolecules. A conformer is shown in Figure 7-1, duplicated from Ref. 276.  The macromolecules themselves represent a chain of "covalent conformers" put together as an entity.  The problem is to determine whether the chain properties, derived from its statistics, control entirely the dynamics of the collection of chains making up a polymer. This is what has been assumed by all the other theories, and this is what the Dual-Split kinetics and the Grain-Field statistics challenge…” 

 

“…to simplify, one could view the difference between our statistical model and the classical model to describe the properties of polymers as follows: according to the classical views, the statistical systems are the macromolecules, i.e. a network of chains; the properties of the chains are disturbed by the presence of other chains and by the external conditions (temperature, stress tensor, electrical field, etc.).  This classical definition of the statistical system contrasts with our approach where the statistical systems are the “dual-conformers”, not the macromolecules, assembled as a network of dual-conformers. The interactive coupling between the dual-conformers is defined by a new statistics, the Grain-Field Statistics, which explores the correlation between the local conformational property of the dual-conformers and their collective behavior as a dissipative network…”

 

“… the statistics used by the classical models and by our model to describe the RIS (rotational isomeric states) of the conformers are fundamentally different: the classical molecular dynamics statistics is the Boltzmann statistics, famous for its kinetic formulation of the properties of gases. The Dual-Split or Dual-Phase statistics, leading to the Grain-Field Statistics, is inspired by the classical Boltzmann concept but departs from it by defining a dissipative term in the equations and by assuming that the Free Energy always remains equal to its minimum value, that of the equilibrium state, even for transient states. The kinetics created by such changes in the fundamental equations results in the formation of Free Energy structures, which we once called “the Energetic Kinetic Dissipative Network of conformers (EKNET)” ([265] to [270]) and more recently, when dealing with rheology, “the Elastic Dissipative Network” ([276], [283a])…”

 

“…In our analytical formulation of the dynamics of these “open dissipative systems of interactions” generated by our two modifications of the classical formula, we realized that essentially two mechanisms of structuring of the Free Energy prevail and compete: a “vertical” structuring and a “horizontal” structuring, each applying its own version of the basic equations. This distinction increases the complexity of the analytical solution but is, in our opinion, a fundamental aspect of the way interactions work. The vertical structuring refers to a split of the units (collectively interacting in the system) into 2 compensating sub-systems, each with a different statistical partition. The horizontal structuring offers a different split of the collective set, via the generation of Ns identical sub-systems, each with the same statistical partition. Each split mechanism generates a dissipative function. The total dissipative function must be minimized (it is 0 at equilibrium), a condition that creates their compensation, i.e. determines whether they work independently, in sequence or together…”

 

“…the details of the simulations performed using this model of polymer interactions show that there is a temperature, which we associate with TLL, that is the dynamic transition temperature beyond which the classical molecular models based on the Boltzmann statistics and our open dissipative network model are compatible and coherent. This implies that classical molecular theories of polymeric materials will provide the same results as our model for T > TLL, the temperature at which the dissipative function kinetically collapses.  Below TLL, the behavior that results from the interactions is dominated by the statistics of the Dual-Phase and Cross-Dual-Phases; thus, below TLL, the macromolecular aspect does not statistically dictate the properties. The projections of these macromolecular statistical models (reptation, for instance) are physically unfounded below TLL, in our opinion, which explains their failure to describe the experimental results under those conditions (Ch. 7 of [276], [283b])...”

 

“…If one tests the predictions of the classical approach under conditions that bring the state of the polymer above TLL, one may conclude that those data validate the classical views, since they provide correct answers in the range tested. Avoiding this pitfall is not an easy task, because TLL is rate dependent, pressure and shear dependent, and molecular weight dependent. Thus, although one will find in the literature convincing experimental evidence of success for the classical models, which is the reason for their acceptance, we claim that these successes are due to the use of conditions that bring the state of the polymer above its TLL transition…”

 

“… in most experimental set-ups used by the industry, TLL is raised to such high values, due to the high rates and pressures, that the classical macromolecular dynamics falls outside its range of validity: the use of such classical models under such conditions provides the wrong answers….”

“…we have advocated elsewhere (Ch. 7 of [276]) abandoning the classical interpretation of polymer physics by single-chain macromolecular dynamics, because of its inability to describe the full range of polymer behavior and other essential properties of polymers, such as the dielectric TSD/TWD responses analyzed in [283f]…”

 

“…we propose that the mechanism of relaxation, in polymers, is due to the dynamic coupling of two types of splitting processes of the total statistical population of conformers in interactions: the creation of Ns(t) Energetic Kinetic systems (horizontal splitting) and the modulation of the conformational structure of these systems by the dissipative function (vertical splitting)...”

 

“…It might be more appropriate to categorize TLL as one of the kinetic manifestations resulting from the cooperative kinetic process already giving rise to Tb, Tg and Tg,ρ.  Beyond TLL, the organization of the inter- and intra-molecular interactions between the various dipoles as a dissipative network is kinetically inefficient, hence has ended.  As we said earlier, a description of the properties of the polymer by invoking the properties of the individual macromolecules embedded in a mean field is acceptable from this point on…”

 

C. Excerpts from the book in footnote 4 and in Vol. I of the book in footnote 6 (Grain-Field Statistics):

 

“…The study of kinetics is a discipline that describes the evolution of the units of a population of, say, chemical molecules that participate in chemical reactions. Another example would be to describe the evolution of units of a population which could occupy different “states”. Many other terms have been used to describe the same objective: “statistics”, or “dynamics”, for instance, as shown in the following definitions: the population partition that evolves with time can be studied with the tools of “statistics”, a transient statistics in fact, a field also regarded as “dynamics”.  All these definitions are used in our presentation. The important thing here is to define the terms quantitatively...”
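To make this vocabulary concrete, here is a generic illustration of a population partition evolving in time (a standard two-state rate model, not the Dual-Split equations themselves; all rates and populations are made-up numbers):

```python
def two_state_kinetics(n_a0, n_b0, k_ab, k_ba, dt, steps):
    """Evolve the populations of two states A <-> B with forward rate
    k_ab and backward rate k_ba (simple explicit Euler sketch).
    The transient partition (n_a, n_b) is the 'statistics' of the text;
    its evolution in time is the 'kinetics' or 'dynamics'."""
    n_a, n_b = n_a0, n_b0
    for _ in range(steps):
        flow = (k_ab * n_a - k_ba * n_b) * dt   # net A -> B flux
        n_a -= flow
        n_b += flow
    return n_a, n_b

# closed-system case: total population is conserved and the partition
# relaxes toward the ratio n_a/n_b = k_ba/k_ab
na, nb = two_state_kinetics(1000.0, 0.0, k_ab=0.2, k_ba=0.1, dt=0.01, steps=10000)
```

Note that in this classical sketch the total population stays constant; the open, self-dissipative systems discussed below relax precisely that assumption.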

 

“…Can we modify the set of equations driving the kinetics so the system Free Energy stays at its minimum value at all times?  The Dual Split Kinetics model describes new sets of kinetic equations fulfilling these conditions. There are two types of solutions that we have studied, vertical and horizontal splitting, and several possible hybrid combinations of the two…”

 

“…In this section we present the assumptions driving the new non-equilibrium statistics and study the difference between its results and the results obtained classically. The new equations converge to the traditional kinetic equations at long times or under "true" equilibrium conditions. Under non-isothermal conditions the system becomes self-dissipative, and the duality is responsible for a structure of the Free Energy…”

 

“…Note the presence of an additional term, Ln (Nb/Nf), in the expression of the Free Energy. This function is what we designate the "dissipative term"… Its introduction is fundamental in our work on interactions; it is the source of the originality of the new statistics and results in the study of a new generation of dynamic open self-dissipative systems…”

 

“…In summary, simple relationships between Ln ux, Δx and Δe exist, which are revealed by varying Δe in Eqs. (5) to (7). The vertical splitting kinetics is, on its own, powerful enough to simulate the effect of activating the dipoles (permanent and/or induced) at the polarizing temperature Tp, and of observing their thermally activated depolarization as a Debye current...”
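For readers unfamiliar with such experiments, here is a generic numerical sketch of a thermally stimulated depolarization current (the standard single-relaxation-time Debye/Arrhenius picture, not the vertical-splitting equations; all parameter values are hypothetical):

```python
import math

def tsd_current(T0=200.0, Tend=300.0, beta=0.1, tau0=1e-12, Ea=0.7):
    """Generic TSD sketch: a frozen-in polarization P, created at the
    polarizing temperature, relaxes as a Debye current I = P/tau while
    the sample is heated at rate beta (K/s); tau follows an Arrhenius
    law with activation energy Ea (eV). kB is in eV/K."""
    kB = 8.617e-5
    P = 1.0                                    # frozen-in polarization (a.u.)
    dT = 0.01
    temps, currents = [], []
    T = T0
    while T < Tend:
        tau = tau0 * math.exp(Ea / (kB * T))   # Arrhenius relaxation time
        I = P / tau                            # Debye current, I = -dP/dt
        P -= I * (dT / beta)                   # Euler step over dt = dT/beta
        if P < 0.0:
            P = 0.0
        temps.append(T)
        currents.append(I)
        T += dT
    return temps, currents

temps, currents = tsd_current()
peak_T = temps[currents.index(max(currents))]  # temperature of the current peak
```

The current rises, peaks where the relaxation time catches up with the heating rate, then collapses as the polarization is exhausted, which is the Debye-like depolarization peak the excerpt refers to.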

 

“…We now imagine solutions that combine the Vertical and Horizontal Dual Split Kinetics to simulate the dynamics of statistical units in interactions… we just want to illustrate one of the solutions of the Grain-Field Statistics that we have explored extensively to simulate the thermal properties and the rheology of polymers. More generally, the description of the several combinations possible and their simulation constitutes a vast and fascinating program of investigation. Additionally, among the various solutions, the challenge is to recognize what combination could possibly correctly simulate the specific interactions in a given field of the physics of interactions, not just polymer physics…”

 

“…In each of the combinations of Vertical and Horizontal structuring mentioned above, we are dealing with auto-generated open dissipative systems driven by the energetic kinetics assumptions, i.e. by solutions of the structure of the Free Energy such that the total Free Energy, minimized for the collective set, always remains equal to the equilibrium value at that temperature. An open system occurs when its total population is not kept constant. Such a system compensates its openness by modulating the structures of its Free Energy at various levels (scales)...”

 

“…Figure B-21 (below) provides another interesting perspective on the dynamics of open systems when their total population, Bo, is submitted to a fluctuation around a mean value.  The system is again defined by Eq. (12), but we now suppose that the value of Bo varies with time like a sine wave as the system is cooled at a rate q = -1 K/s:

Bo = A + B sin(Ct) and T = To − qt.   A = 1000, B = 100 are used in Fig. B-21.

The problem is more complex to resolve numerically, but it is still quite tractable. The solution for Ns(T) is plotted on the same graph as Bo(T) for comparison. As the temperature decreases, from right to left on the figure, Bo(t) is represented by the squares and Ns(t) by the open dots.

We see in Fig. B-21 that Ns varies periodically, but not at all like a sine wave.

What is interesting, however, is to follow the period of the oscillation and the maximum and minimum of the amplitude of Ns(T), from right to left. The amplitude of Ns struggles to rise beyond its initial value at high temperature: the peak maximum increases slowly, and the oscillation is strongly non-linear, with many harmonics. Beyond the 4th oscillation of Bo(t), however, the peak maximum increases rapidly and levels off to a plateau value, and the oscillation becomes cleaner, indicating the muting of many harmonics. The other remarkable feature is the period doubling of the oscillation at lower temperature. At high temperature, the periods of Ns(t) and Bo(t) are the same, but as we cross the transition temperature on cooling, the period of Ns(t) becomes twice the period of Bo(t), a phenomenon also observed for “time crystals”, for instance.

This Horizontal dynamic system is capable of producing a transition separating two temperature regions with very distinctive characteristics in material physics [283d]...”
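The driving signals of this simulation can be written down directly from the formulas quoted above; here is a minimal sketch of the forcing only (the response Ns(t) requires Eq. (12) of the book and is not reproduced; the angular frequency C and the starting temperature To are not given in the post and are hypothetical, and the sign of q is taken so that T decreases with time, i.e. cooling):

```python
import math

def forcing(t, A=1000.0, B=100.0, C=0.5, To=400.0, q=1.0):
    """Driving signals of the open-system simulation of Fig. B-21:
    an oscillating total population Bo(t) = A + B sin(C t) and a
    linear cooling ramp T(t) = To - q t.
    C and To are assumed (hypothetical) values; A, B from the text."""
    return A + B * math.sin(C * t), To - q * t

times = [i * 0.1 for i in range(2000)]
bo, temp = zip(*(forcing(t) for t in times))
```

Plotting Ns(T) against this Bo(T) forcing, as in Fig. B-21, is what reveals the harmonic distortion and the period doubling described in the excerpt.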




Figure B-21

 

 

“…This challenge of finding the correct combination of horizontal and vertical splitting to describe interactions requires a dedicated book [283d]… An important chapter of that book is titled: “The Dynamics of Open-Dissipative Systems of Interactions: the Question of their Finitude and Stability”.  This chapter, whose title obviously resonates with the work of Prigogine in the seventies [500a], although its mathematical treatment totally differs, should lay down the basic map of what needs to be resolved by the next generation of research scientists (interested in these solutions) to better understand how the local and the global interactions structure one another to generate transient and steady-state events in time-space, i.e. to describe what we call reality (Time vs Space, Matter vs Vacuum). This would offer new perspectives on the description of the different types of interactions, and on their unification as solutions of the network of open dissipative cooperative systems.  The 1st Renaissance in physics, from Newton to Einstein for gravitational interactions, to the Standard Model of interactions to describe the infinitely small, may have exhausted its resources to complete a grand unification. Should the 2nd Renaissance, inspired by Prigogine [500b], now take the baton?...”

 

Jean Pierre Ibar

New School Polymer Physics

jpibar@eknetcampus.com

 

 

September 14, 2020

Blog Post #36


[1] Prigogine I., “Time, Structure and Fluctuations”, Nobel Lecture (1977).

[2] Prigogine I., Nicolis G., “Self-Organization in Non-Equilibrium Systems”, Wiley (1977) ISBN 0-471-02401-5. Also:

Glansdorff P., Prigogine I., “Thermodynamics of Structure, Stability and Fluctuations”, London, Wiley-Interscience (1971).

[3] Ibar J.P., “Physics of Polymer Interactions. A Novel Approach.  Application to Rheology and Processing”, Hanser, (2019). 

[4] Ibar J.P., “Grain Field Statistics Applied to Polymer Physics”, book in preparation.

[5] Ibar J.P., “Application of the Dual-Phase and Cross-Dual-Phase model of Polymer Interactions to the Rheology of Polymer Melts. Temperature and Molecular Weight Dependence: A New Approach.”, book, submitted for publication.

[6] Ibar J.P., “Interactive Coupling in the Amorphous State of Polymers”, Vols. I and II, accepted for publication.