
Wednesday, January 16, 2019

CPT-INVARIANT GRAVITATION - A LETTER TO SABINE HOSSENFELDER

AN EXPLANATORY NOTE

This is a letter outlining a radical, modified theory of gravity, intended to solve a suite of known problems in galactic astrophysics, galactic astronomy and theoretical physics. It was written and sent to theoretical physicist Dr. Sabine Hossenfelder in mid-late November 2018. I received no reply. 

This letter summarises a vast amount of work done since 2001, and it is as clear an account of that work as I have ever managed to produce. I then wrote a follow-up email to her on Jan 2, 2019, offering a fortnight to raise any objections to publication of this letter online. Again I received no reply.

Since I received no objection from Dr. Hossenfelder, the outline I sent her is reproduced below.


DEAR SABINE

Lost in Math dissects a very grave problem in theoretical physics: theory strives to be beautiful, not falsifiable. You also described the very real dangers which arise from accepting ‘faith-based articles of science’ as a normal part of doing theory. These same concerns have been a topic of private correspondence between my good friend Martin Darke and me for almost eight years now, and there are some truly striking parallels between sections of our emails and sections of Lost in Math. I know that Martin intends to send you a separate letter. He strongly encouraged me to write to you about a few topics which may be of interest, and he also proofed and edited this letter.

You interviewed Garrett Lisi, who claimed, ‘if you want to find a theory of everything, your aesthetic sense is pretty much all you have to work with.’[1] At the risk of some impoliteness, and meaning no insult to him, I think he is committed to a terrible method. I have also given considerable thought to a more practical approach for inching theory closer to unification, and I have a plausibility argument to describe such an approach. I suspect it will interest you, so that plausibility argument now follows.




[1] Lost in Math, p165.


AN HISTORICAL METHOD

There might be ten ‘beautiful’ models someone could invent which all fail to solve one problem. Conversely, there are likely very few ways for anyone at all to solve ten problems using just one model. That is both a summary of all major, successful and convincing modifications ever made to theory in physics and a restatement of what it means to assume a more unified form of theory is possible today.

That same history of progress in physics also suggests the very peculiar possibility that solving multiple problems in one stroke might actually be easier than solving any one problem in isolation. Possible solutions to problems always constrain possible solutions to other problems. This interconnectedness of problems and how they work together to constrain modelling contains valuable ‘metadata’ about unification physics. Collecting problems into an ensemble for a metadata-type of analysis is a strategy which also suggests the need for a method of gathering problems sensibly.

Some reasonable selection criteria seemed likely to prove helpful in choosing especially suggestive or fertile problems which also stood to tightly constrain each other, assuming further unification is possible. There were several criteria I used but only two are needed to make a fair case for the effectiveness of this approach.

  • Criterion one: Inconsistency. Any assumption or conclusion found in an orthodox theory which produces contradictions when transported into another orthodox theory is clearly a very good problem. The vacuum energy problem is one such thing, a very dramatic inconsistency between QFT and cosmology. The baryon-symmetry problem is another problem of fundamental inconsistency between the same two parts of modern physics. Because it looked exactly like a pressure point which strongly constrained any physics nudging closer to unification, and because it was easy to understand why the problem mattered so much, the baryon-symmetry problem went onto my list very quickly.


Collecting big-picture problems from theory cannot possibly be sufficient, though; an empirical criterion is also needed for easy consistency checking against real-world data. The more data, the faster any unhelpful assumptions can be ditched. The more data to hand, the more any useful new assumptions will tend to stand out as making progress on multiple problems. Having strange and unexplained data to aim at is certainly a precondition for progress in modelling.

  • Criterion two: Falsifiability. For the sake of efficiency, I wanted it to be as easy as possible to prove unhelpful assumptions wrong. Using history predictively would suggest the need to locate a region of physics where the data are outsmarting some theorists and where the rest of the theorists are outsmarting themselves.


Physics history is of tremendous help here. There are obvious patterns in work done prior to, during and even some time after all previous revolutions. These patterns very clearly suggest what we should be looking for. There should be at least one entire branch of physics in which an unusually large number of empirical rules have been unearthed but without any known derivation from simpler principles. It would appear as a region of physics in which any orthodoxy underwent routine and periodic reinvention by adding poorly defined parameters in order to avert every new data-driven crisis, a region in which there are one or more key assumptions for which there is little to no independent, confirming evidence. We are looking for a region of physics in which a language of apologetics, vagary and confusion has been invented to explain data in a hand-waving or even hand-wringing fashion. That is also going to be a language describing unknown physical processes or unknown causes for real-world effects which anyone can see. That’s what you would get by treating physics history as useful information and using just some of the historical trends in it predictively.

This description just given of theory going wrong is also nothing like the present state of the standard model. Lost in Math paints a timeless portrait of exactly the opposite kinds of problems showing up there instead: theory showing little strain to explain the data at all and theorists wanting to modify it anyway. As you pointed out this leads to theory which stagnates precisely because it can only modify an orthodoxy using blind methods. The language of vagary underwriting plausibility arguments for susy doesn’t describe unknown physical processes. Naturalness and beauty instead describe undefinable theoretical methods.

Application of this second criterion demands turning away from the standard model and looking elsewhere for a region of physics going haywire in the manner just described. This is the exact opposite of Garrett Lisi’s view and that is why it might be fair to say that he’s not arguing from any historically defensible position. The standard model simply won’t get through this second, empirical and historical sieve and precisely because it is too robust. To locate a lot of data-driven problems you have to look elsewhere. You also don’t have to look very far.

COSMOLOGY IN CHAOS

Looking for experimental foundations for any of the key cosmological assumptions, one comes up short: not even one key assumption in cosmology is based on direct experiment.

  • Baryon-symmetry violation as needed for baryogenesis has, of course, never been observed. A universe-load of violation has long had to be assumed as an article of faith which leans heavily on human intuition but not on any experimental evidence.
  • Inflation cannot possibly be experimentally checked in a lab and it was only ever assumed as an element of causal structure because, when tuned just right, inflation appeared and then disappeared so as to rescue the older class of failing Big Bang theory. Now, of course, known problems with the cut-off of inflation are used predictively instead as the foundation for a class of multiverse theory having an infinite energy budget.
  • The dark matter also has to be assumed as an article of faith still without any direct experimental vindication. It also entails another, new class of cosmology.
  • The dark energy/cosmological constant is the latest rescue measure of course, only added into the theory when a prediction of an accelerated form of expanding universe was suddenly needed and the old cosmology could not possibly produce one.


Every assumption from inflation onwards has created a new class of ever less constrained cosmology by introducing new parameters, each of which needs to be tuned. That would not be so bad except that each of these assumptions has been taken as correct without any one of them having been frisked at even one experimental checkpoint.

Just suppose cosmology is somehow badly wrong and then scratch around a little to use that assumption predictively. If it was wrong where else would that wrongness show up? What branches of physics already assume cosmology as a formal foundation for problem solving? If cosmology is wrong then galactic astronomy would have to be in disarray. It would have to look exactly like a good match for the second criterion, if history can be used predictively. The case for galactic physics being in a state of glorious disarray and ripe for theoretical plunder is not hard to make.



GLORIOUS DISARRAY IN GALACTIC ASTRONOMY 

The lack of proper theory describing observables in galactic structures or galactic dynamics appears in the journals as a preponderance of empirical rules without any widely accepted derivation. You already mentioned the Tully-Fisher relation for spirals in your book[1]. There is also the Faber-Jackson relation for ellipticals, the M–σ relation and the Sérsic index law.

  • For another instance, consider the colour-density or morphology-density relation: spirals are known to be predominantly loners, they tend to have bluer, heavier, more recently formed stars. Spirals strongly dominate the field-galaxy population. That is also where ellipticals are the smallest fraction of the population statistics. Ellipticals have almost entirely red stars and seem more sociable in the sense that they strongly dominate cluster population statistics. Nobody knows why galaxy shape and star formation histories should correlate so strongly to environment. Any theory capable of solving a group of problems in galactic physics should definitely be able to explain the very curious fact that there are two inverted sets of population statistics, one for clusters and another for field galaxies.
  • For another, very curious relation, nobody can explain why so many lone spirals should ever possess such an extreme form of parity symmetry. Allowing for some coarse graining, the star formation rates are very nearly symmetrically distributed in space. So too are the star rotation speeds, distribution of any large, radio-loud sources, the population statistics of stars sampled from a given region, locations of voids or densities of the interstellar medium, thicknesses of any so-called thin and thick disks… all these statistical observables appear symmetrically distributed in space as though spirals all form with an innate parity symmetry. That is suggested by a rather coarsely-grained form of statistical data.


Thermodynamics can be used to state the almost unbelievable scope of this last problem: the coarse-grained entropy, S, can be treated as a scalar field which is distributed symmetrically in space about the centre of mass of the system with respect to position r[2]. That leads to the symmetry S(r) = S(-r). Any credible theory of galactic dynamics would have to reproduce this time-independent parity symmetry.
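
To make the claim operational, here is a minimal sketch, purely my own illustration rather than anything taken from the modelling itself, of what testing S(r) = S(-r) against a projected star catalogue might look like. The entropy proxy, grid and sample are all hypothetical placeholders.

```python
# Minimal sketch (illustrative only): coarse-grain a projected star field and
# compare the entropy proxy with its point reflection about the centre of mass.
import numpy as np

rng = np.random.default_rng(0)
stars = rng.normal(size=(20000, 2))        # toy projected star positions
stars -= stars.mean(axis=0)                # re-centre on the centre of mass

bins = np.linspace(-4, 4, 33)
counts, _, _ = np.histogram2d(stars[:, 0], stars[:, 1], bins=[bins, bins])

# Coarse-grained entropy proxy per cell: s_i = -p_i log p_i over occupation fraction.
p = counts / counts.sum()
S = np.where(p > 0, -p * np.log(p), 0.0)

# Parity test: compare the field with its reflection through the origin.
S_reflected = S[::-1, ::-1]
asymmetry = np.abs(S - S_reflected).sum() / S.sum()
print(f"fractional parity asymmetry: {asymmetry:.3f}")   # ~0 means S(r) = S(-r) holds
```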

Astronomy also offers up instances of data not predicted by cosmological-type modelling and it even offers data which cosmology had, rather awkwardly, excluded. Some of these problems in which data is non-compliant with cosmological modelling remain unsolved. 

  • For what is surely the best example of this, we assume today that supermassive black holes are found at the centres of all large, regular galaxies. The formation time required for supermassive black holes was once computed to be longer than the estimated age of the universe. Today the exact opposite result has simply been digested as an astronomical fact and so a very big unsolved problem of theory is how the early universe can be so smooth and homogenous and yet very rapidly become clumpy enough to form primordial black holes.
  • Observations of black hole masses are curious and also stand in clear defiance of hierarchical mass building scenarios. There are a lot of black hole candidates up to about 100 solar masses. There are also a lot of extremely massive black hole candidates, located in small and large galaxies, having between roughly 10^4 and 10^10 solar masses. Between about 10^2 and 10^4 solar masses, however, there are almost no known intermediate-mass black hole candidates. This persistent gap in black hole candidates’ masses is sufficiently well established by now that it can be taken as an astronomical observable which theory has to aim at and somehow not miss.
  • There is also a chicken-and-egg-type confusion now about black holes seeding galaxies or galaxies seeding black holes or both happening at the same time and quite how that might work. Nobody knows how to deal with black holes in this astrophysical setting. We don’t know how they formed so quickly, we don’t know how they influence galaxy growth, we don’t know how they might influence galaxy evolution, we don’t know why there are two distinct black hole populations and not one continuous population. All the important things you would want to see constrained by a good theory of galaxy formation or galactic dynamics are conspicuously absent.


The claim that nobody truly understands spirals leads directly to noticing that nobody understands how so many supermassive black holes formed so long ago in the first place, which in turn leads to nobody understanding too much about the formation of any galaxies at all. Ellipticals are just as poorly understood as spirals and this is equally clear from an inspection of the journals.

  • It is not obvious to anyone, for instance, why elliptical galaxies should generally be so depleted of dust, nor why they should comprise exclusively older and so also smaller star populations. Why is star formation so slow when spirals are invariably undergoing far more rapid star formation in the same epoch?
  • It is not clear why ellipticals do indeed strongly dominate cluster populations and why the intracluster medium (ICM) should be so intense in x-ray. It is not clear why ellipticals should have a different initial mass function (IMF) to spirals, meaning they have characteristically different star populations[3]. Spirals and ellipticals were widely expected to have the same IMF but this is not observed. Now galaxy formation is instead assumed to help trigger further star formation, but why this happens differently in differently-shaped galaxies is not known.
  • There is no definitive and clear modelling to account for why the interstellar medium (ISM) interior to ellipticals should also be so intense in x-ray instead of what might be expected, which would be a cold near-perfect vacuum which is extremely quiet at high energies. Nor is it clear why there should be any connection at all between luminosity in x-ray and luminosity in optical, yet a strong correlation of intensity in two different energy bands in ellipticals is a firm conclusion of multiwavelength astronomy.
  • Nobody understands why it is that some ellipticals are associated with high-energy pulses or burst emissions in a strongly coherent dipole jet formation. These galaxies were first noted as being curious objects because they were a class of radio-loud galaxy. High-energy quasars and radio-loud galaxies bearing relativistic dipole jets were both unexpected results completely unanticipated by any orthodox cosmological-type modelling of galaxy formation. Jet production modelling is an active field of theory with nobody too sure of any exact rules, with a few competing models and with ever improving data.
  • There is certainly a language of vagary and confusion in galactic physics; aside from the entire dark sector there are things like black hole feedback. Feedback is a common term found in some modelling of galaxy growth. It has been proposed as a player in galactic jets, as a cause of barred spirals, as a cause of suppressing star formation in ellipticals and as a cause of triggering star formation in spirals. Feedback is a generic cause ‘justifying’ a lot of parameters in a lot of modelling, but there is no definitive model for feedback itself.


Galactic physics ticked every last box as the place to go to look for nice unsolved problems which also stood to constrain each other. There was a whole lot of data from a golden age of astronomy and problems were so clearly begging for a common solution. Having set out to locate a branch of physics just like this, I decided in 2001 that galactic astronomy was very likely where the next revolution in physics was brewing.

As was suggested before, each of these problems in orthodox galactic physics is invariably treated in isolation from all of the other problems also just mentioned. That is happening partly because of institutional pragmatism: people are trained to handle problems as though they are all mutually exclusive units. It is also only possible to justify such a strategy in galactic astrophysics because each problem is widely expected to have a reductive explanation founded in cosmology. Treating every troubling data point in galactic astronomy as a problem for ‘applied cosmology’ to explain is, of course, completely crazy.

If I asked a theorist such as yourself to explain how the standard model of particle physics reduces to a branch of ‘applied cosmology’, you’d very likely wonder if I was asking you to describe or maybe even advocate for some kind of a multiverse theory. Or perhaps I might be asking you to advocate instead for susy, based on the apparent need for dark matter particles. The multiverse and susy are exactly the kinds of modelling you end up with when particle physics is treated as a branch of ‘applied cosmology’. Lost in Math details how ineffective and how dangerous this approach has been.

I decided that the dynamics of galaxies would likely make better progress if it were taken up as a study in its own right. I also decided that the vast piles of astronomical data and unsolved problems all had to be described using Heisenberg’s old rule about paying close attention to what the observables really are in the construction of theory. In this particular case that means not letting any cosmological assumptions drift into the analysis. For instance, redshift data is not definitive proof of an expansion of space; the observable is a trend of redshifted spectra. Gravitational anomalies are not evidence of dark matter halos; they are just anomalies in need of a physical or dynamical cause, because that is the information to hand when starting your modelling from scratch.



[1] This can of course be obtained from MOND but as you point out in your recent Modified Gravity upload on YouTube, it’s well established that MOND cannot predict other key relations. Obtaining the Tully-Fisher relation from MOND cannot possibly be a proper derivation.

[2] Specifically, its position in a Euclidean projection which is a good approximation for how we actually observe real galaxies.

[3] Ellipticals appear to have a so-called bottom-heavy population having a larger proportion of generally smaller stars than are observed in population samples from disk galaxies.


GALACTIC STRUCTURES AND THE KERR SOLUTION

My earliest work on modelling galaxies led to a mess of new ideas and an effort to solve problems using assumptions that didn’t quite work. Not all of these assumptions were failing though. It was a real struggle lasting a decade to locate and get rid of all the stubborn ideas that had failed and to keep only those assumptions which were getting more and more problems under some kind of control. 

To my great frustration the modelling only ever got more complicated and harder to describe and didn’t seem to want to simplify until March of 2011, when I was very lucky to find a freshly-posted preprint [1] by the Italian physicist Massimo Villata on the arXiv. His was a rather strange-looking theory paper describing a 𝐢𝑃𝑇-type symmetry of the Kerr solution. His own cosmological inferences in that paper still seem odd to me, but his approach derives a very specific gravitational coupling structure for matter and antimatter. It was the same very specific coupling structure I had long assumed when modelling galaxies.[2]

It turned out that the maximally extended Kerr solution already has or implies every single one of the analytic properties I had previously needed to assume for black holes when modelling galaxies. I hope to make a fair plausibility case for this modelling shortly. First though, there’s the problem to consider.

In order to use the Kerr solution in any orthodox analysis you have to enforce a positive energy constraint on it and this ultimately means removing the ring singularity. The ring singularity is, however, a non-removable part of a unique family of exact solutions to the Einstein equation. The ring also inherits the key properties of mass and angular momentum, with 𝑀 and 𝐿 being the only two values needed to pick out a particular 𝑄=0 Kerr solution. The ring behaves as though it is the sink/source of the strange geometry into which it is embedded, since the ring and the ring alone inherits the two and only two parameters needed to single out one specific geometry from the infinitely large family of Kerr solutions.
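
For reference, the standard Boyer-Lindquist form makes this two-parameter structure and the location of the ring explicit. This is textbook Kerr material rather than anything new, written in geometrized units with a = L/M the spin per unit mass.

```latex
% Kerr metric in Boyer-Lindquist coordinates, geometrized units (G = c = 1).
ds^{2} = -\Big(1 - \frac{2Mr}{\Sigma}\Big)\,dt^{2}
         - \frac{4 M a r \sin^{2}\theta}{\Sigma}\, dt\, d\phi
         + \frac{\Sigma}{\Delta}\, dr^{2}
         + \Sigma\, d\theta^{2}
         + \Big(r^{2} + a^{2} + \frac{2 M a^{2} r \sin^{2}\theta}{\Sigma}\Big)\sin^{2}\theta\, d\phi^{2},
\qquad
\Sigma = r^{2} + a^{2}\cos^{2}\theta, \quad \Delta = r^{2} - 2Mr + a^{2}.
% The curvature singularity sits where \Sigma = 0, i.e. at r = 0, \theta = \pi/2:
% a ring of coordinate radius a in the equatorial plane, fixed entirely by M and L.
```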

The ring is behaving as a sink/source of the geometry and it is also a non-removable feature of it and these two properties are not only self-consistent (a removable source or sink would not be a physically self-consistent proposition for any exact solution) they are also both, so to speak, generally covariant properties of the solution: everyone must agree that a non-removable thing really is non-removable. What could not be true is that some observers can remove it and others cannot, since that would be inconsistent with general covariance. Non-removability is a property of the solution proved for generalised coordinates so it is, in a sense, ‘protected’ by general covariance.

The same form of concern appears for the ring behaving as the only possible sink/source. There is no coordinate system in which the ring isn’t behaving like a sink/source. Everyone should be able to agree that it behaves exactly like an inner property of that geometry. For a stationary solution, the ring is a geometric invariant. What cannot happen is for the ring to behave like an inner property of the geometry for some observers but not others. This property is also protected by general covariance and if life were easy then everyone would just agree the ring singularity has to stay. But we don’t agree. We can’t agree. Demanding only positive energy solutions in all physical law demands that a gravitational sink/source must be excluded, because it’s supposed to be a sink, always a sink and only a sink.

The problem I have with the orthodox procedure of removing the ring is the obvious: observers should not be able to agree on how to remove something from an exact and unique solution which all observers already agreed was non-removable. If observers could find any way to agree on how to remove it, then it would, by definition, not be non-removable. There is a grave inconsistency in methods and assumptions: geometric invariants are treated as pathological not physical, spacetime geometries exist without any sink/source and general covariance is troubled by both of these inconsistencies. These concerns all appeared the moment a positive energy condition was applied to the Kerr solution. I took this to be a decisive problem, seriously troubling to the existing gravity law.

As you say, mathematics can be wrong but it cannot lie. In that exact spirit I started to treat the maximally extended Kerr solution as predictive instead of pathological. This means, rather drastically, rejecting any positive energy constraint on masses in all physical law. That was okay with me though because gravitational repulsion between matter and antimatter was one of the assumptions I had already found was needed for my modelling of galaxies.

I would suggest that even though it allows negative energies into physical law, using the maximally extended Kerr solution predictively while working on galactic dynamics is definitely worthwhile. I cannot recommend this small backwater of research strongly enough. Now I have to make some kind of a case for this rather bold claim. Making that case will take up most of the rest of this letter.



[1] Villata, M., CPT symmetry and antimatter gravity in general relativity, EPL 94 (2011) 20001.
[2] Villata’s 2011 paper modifies prior modelling done by Chardin & Rax in 1992. My work further modifies Villata’s approach. 



THE MAXIMALLY-EXTENDED KERR SOLUTION

Consider first the distinct classes of event horizon structure of the Kerr solution: slow Kerr or fast Kerr and the critical point between them, extreme Kerr. This reconciles pleasantly with galactic physics, since astronomy has located two very stable forms of regular galaxy whose star populations also differ sharply in the distribution of their angular momentum: elliptical and spiral. Transitional, lenticular forms would appear consistent with galaxies having to hit some critical point in their time-evolution.
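
For the reader’s convenience, the three classes follow from the roots of Δ = r² − 2Mr + a²; again this is standard Kerr material, not something new to the argument.

```latex
\Delta = r^{2} - 2Mr + a^{2} = 0
  \;\Longrightarrow\;
  r_{\pm} = M \pm \sqrt{M^{2} - a^{2}} :
% two horizons for a < M (slow Kerr), a single degenerate horizon at a = M
% (extreme Kerr), and no horizons at all for a > M (fast Kerr).
```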

The ring singularity should also spontaneously decay into particle-antiparticle pairs and another ring singularity with a slightly reduced mass once a vacuum process[1] is introduced into the modelling. The decay products of any ring singularity must include another, fractionally less massive ring singularity and that is the sense in which a ring singularity remains stable with time-dependence. It is topologically stable even in decay, which is assured because the Kerr geometry is a unique family of solutions and conservation laws are assumed to apply. For any and every possible spinning mass dipole there’s not some other form of mass dipole for it to decay into. It must instead evolve into another member of the Kerr solution family with slightly different values of 𝑀 and /or 𝐿.

Also, since Kerr is a stationary solution with fixed angular momentum, and since it possesses a 𝐢𝑃𝑇 type symmetry, time-dependent solutions are also expected to be 𝐢𝑃𝑇-symmetric. Any decay from Hawking radiation should start out being entangled and so also distributed in space with a parity symmetry. 

That is also to say if the baryonic content of a lone galaxy really is half matter and half antimatter from decay of a mass dipole into entangled pairs, then all dynamical processes must inherit a strong bias to continue to obey the same parity symmetry constraint. Consistently antisymmetric changes in the geometry taking place in an already antisymmetric geometry will tend to reinforce the existing symmetries in all valid dynamical descriptions of isolated systems. This result has to be produced in a more extreme form if 𝐢=𝑃𝑇 is just assumed to constrain dynamical structure when also modelling isolated, many-body systems. That recovers an unrealistic, ideal, fine-grained form of the previously described coarse-grained relation, 𝑆(𝐫) = 𝑆(−𝐫).

For a stationary Kerr solution to have a Lorentz invariant entropy, which is of course required, the ring would have to be a geometric invariant, which it is. That also entails non-removability and this constraint would mean that any removable event horizons cannot possibly be used as a self-consistent measure of entropy since they cannot be expected to produce an invariant. For the ring to describe the entropy of the sink/source of the geometry then the ring would also have to behave like the only possible sink/source as well, and it does.

Using the Hawking-Bekenstein entropy theorem on the only obvious area left after all other horizons have been excluded means that the circular region bounded by and in the same plane as the ring singularity must continue to increase in area over time. The ring must grow in time. It is a two-parameter system of course, so if the angular momentum of the ring is conserved, increases in area also entail mass loss. Evaporation of the ring’s mass by a vacuum process, required for consistency with QFT, also achieves consistency with thermodynamic modelling. This also quantises all allowed mass and spin transitions of the ring singularity and with no chance of any information paradox.
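
Stated compactly, with the relevant area taken to be the flat disc bounded by the ring of coordinate radius a = L/(Mc):

```latex
A = \pi a^{2} = \pi \left(\frac{L}{Mc}\right)^{2}
  \;\Longrightarrow\;
  \frac{dA}{dt} > 0 \ \text{with} \ \dot{L} = 0
  \;\Longrightarrow\;
  \frac{dM}{dt} < 0 .
% An ever-increasing bounded area with conserved angular momentum forces the
% ring's mass to fall, which is the evaporation described in the text.
```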



[1] The decay mechanism is of course Hawking radiation. In this case it is due to the presence of a strong gravitational dipole capable of splitting and accelerating any massive + and – virtual particles in opposite directions.





AN EVOLUTIONARY ARROW IN GALACTIC PHYSICS

Thermodynamic-style arguments applied to the source of the geometry have no choice but to locate an arrow of evolution in the Kerr solution and that implies the same arrow of evolution appears in the Hubble diagram. The ring must continually increase in area and so also the radius, which is a measure of spin per unit mass. Spin will necessarily come to dominate the dynamical structure of galaxies if the ring has an entropy. Thermodynamical modelling expects an inexorable and unstoppable evolutionary arrow in the spin structure as a prediction resting on long-known properties of the Kerr solution. Explicitly, there should be a thermodynamic arrow which runs from low-spin and low-entropy ellipticals, evolving into lenticulars and onwards to high-entropy, high-spin states, or spirals. Since this is an argument involving isolated and lossy systems, it expects that high-spin galaxies, spirals, should come to dominate any field population. As already noted, they certainly do.

This evolutionary arrow also implies the Milky Way was necessarily once an elliptical galaxy. This speculation would have to be fully consistent with the galactic archaeology of the Milky Way. If the Milky Way once had a fairly typical elliptical star population then the smallest mass fraction of that ancient population should still be around. Galactic archaeology of spirals is one place where evidence of an evolutionary arrow in galactic physics is expected to show up systematically.

Bulk annihilation would make the central cores of galaxies lossy over tens of thousands of years, an effect also spreading outward over spacelike intervals such that a star 10^5 ly from the centre observes the central mass as it was 10^5 years ago, when the ring was heavier by 10^5 years’ worth of mass loss. That’s an expected observable in extragalactic physics. Mass loss in time would present as a spacelike defect, increasingly influential with increased distance from the core. Mass loss in time could potentially account for all gravitational defects found in lone ellipticals without the need for any dark matter.
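
As a purely illustrative toy, not the full modelling, the sketch below shows the shape of that spacelike defect for a point mass evaporating at a constant rate; the central mass, the loss rate and the radii are all hypothetical placeholders.

```python
# Toy illustration: a star at radius r responds to the central mass as it was a
# light-travel time r/c ago, i.e. to a slightly heavier mass. All numbers are
# hypothetical placeholders, not fitted values.
import numpy as np

G     = 6.674e-11           # m^3 kg^-1 s^-2
c     = 2.998e8             # m s^-1
M_SUN = 1.989e30            # kg
LY    = 9.461e15            # m
YEAR  = 3.156e7             # s

M_now     = 4e6 * M_SUN     # present central mass (hypothetical)
mdot_loss = 1.0 * M_SUN / YEAR   # hypothetical loss rate: one solar mass per year

r = np.linspace(1e3, 1e5, 200) * LY     # galactocentric radii
M_seen = M_now + mdot_loss * (r / c)    # retarded (heavier) mass seen at radius r

v_static = np.sqrt(G * M_now  / r)      # circular speed, no mass loss
v_retard = np.sqrt(G * M_seen / r)      # circular speed with the spacelike defect

print(f"extra circular speed at 1e5 ly: {(v_retard[-1] - v_static[-1]):.3e} m/s")
```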

Very large lenticulars approaching, living on, or still very near to the extremal value of angular momentum per unit mass should then have morphologies especially sensitive to the appearance of spacelike pseudoforces. In lenticulars the battle for dominance of mass over spin or spin over mass in the dynamical structure should be observed playing out over spacelike intervals. For unusually large and isolated lenticular forms that means defects spread out over sufficiently long distances for it to manifest in long-lasting transient forms exhibiting a symmetric but peculiar structure.

Bulk annihilation would also span millions of years. This would result in cluster populations having anomalous looking masses, an anomaly which scales with size of the cluster population. Dark matter would, once again, not be needed.

Over billions of years ring singularities stand to have lost a significant fraction of their original mass. A large sample of galaxies at a roughly fixed and very large distance should be significantly (in statistical terms) more massive than are comparable galaxies sampled from, say, the local group.

Suppose that at some epoch we care to observe, say a billion years ago, the Milky Way has a twin: a galaxy which appears to us tonight exactly as the Milky Way looked one billion years ago. Photon exchange with a twin galaxy at any epoch means that a photon created in some twin galaxy in the past and annihilated in the Milky Way tonight will gain less energy by falling into our galaxy than it lost by leaving a more massive galaxy in the past. A gravitational redshift-distance relation is expected if galaxies are all always losing energy density and if the Milky Way is a fairly typical field galaxy.
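
In the weak-field limit the expected trend can be put in one line, as a rough back-of-envelope statement only, with R the characteristic radius at which the photon is emitted and later absorbed and d the distance to the twin:

```latex
z \;\approx\; \frac{\Phi_{\mathrm{obs}} - \Phi_{\mathrm{emit}}}{c^{2}}
  \;=\; \frac{G\,(M_{\mathrm{emit}} - M_{\mathrm{obs}})}{R\,c^{2}},
\qquad
M_{\mathrm{emit}} \approx M_{\mathrm{obs}} + |\dot{M}|\,\frac{d}{c}
\;\Longrightarrow\;
z \;\approx\; \frac{G\,|\dot{M}|\,d}{R\,c^{3}} .
% A redshift growing with distance follows from steady mass loss alone,
% with no expansion of space assumed.
```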

The timeline of the Milky Way’s own galactic archaeology (star formation rate history) should correspond with the redshift data once that redshift data is converted into a redshift timeline instead of a distance relation. It was noted that at some stage the Milky Way is required to have changed from an elliptical into a spiral. This change in lossiness should leave an impossible-to-miss imprint in the redshift timeline. If this transition were interpreted as due to Doppler shift then it might be attributed to an expansion of space or some kind of repulsion force acting quite late in cosmic history. The two forms of timekeeping, the redshift timeline and the star-bursting activity timeline produced by galactic archaeology, would have to agree or the modelling is falsified. If a close correspondence of timelines is observed however then this modelling could explain trends in redshifted spectra in terms of the Milky Way’s mass loss and its time-evolution, while also assuming Ξ›=0.

Ellipticals can only be self-consistently modelled as a low-spin, homogeneous form of solution with a rather hostile star-antistar population in which stars and antistars are (very nearly) homogeneously distributed. That is in stark contrast to the highly heterogeneous distribution of stars and antistars forced into a smaller, disk-shaped volume, which is to say spirals. The far more homogenous star-antistar distribution in ellipticals (and spiral bulges) would absolutely and unquestionably result in a dust depleted ISM since annihilations will remove any dust and antidust, reaching an equilibrium between production of the ISM from stellar winds and steady attenuation of that same ISM from annihilations.

Where stellar winds interact in ellipticals we should anticipate a high-energy signature in gamma rays, largely scattered after collisions into the hard and soft x-ray range, an emission which should be strongly non-polarised. This high-energy signature would show up wherever star-antistar populations are homogenous. It would show up as a form of non-polarised x-ray ‘background’ interior to ellipticals but also in the bulges of spiral galaxies. That ‘background’ of x-ray emission would also be expected to have an intensity exactly proportional to the density of sources of the ISM in homogeneously mixed star-antistar populations. This would also scale to clusters of ellipticals, the ICM should also be dust depleted and x-ray emission from the ICM should be correlated to the density of galaxies in a given region of the cluster.



CHARGE-CONJUGATED OBSERVERS

The need for a second set of coordinates with negatively-valued parameters for any complete mapping of the maximally extended Kerr geometry had long bothered me. The reason is that it is another non-negotiable property of the Kerr solution, in the sense that all observers agree that one set of coordinates is never sufficient to recover the entire geometry. In late 2017 I finally worked out that I could use this feature of the Kerr solution as a bludgeon against any non-compliant theory. This offers a way to get baryon symmetry into the gravity laws with even tighter mathematical constraints.

The modification I believe is needed in the gravity laws amounts to introducing something very much like what you once called, ‘funny kinds of observers’: let there be ‘antiobservers’ and let us try to exchange our laws with them for comparison and let us do this while extending the idea of general covariance to apply also to this second class of (charge-conjugated) observer. This enforces an egalitarian principle into life.

A charge-conjugated physicist has equal rights in the construction of all physical law and an expectation to see the same laws as we do, using a completely disconnected set of generalised coordinates. Defining things in this way also places baryon symmetry into the heart of the definition of general covariance. Note that it does not define what charge conjugation is, it just requires such a thing enters the laws purely to create an antiobserver. So, the definition of the 𝐢-operation stays hungry, so to speak, and there must then be an additional assumption which feeds it a definition.

We can define that 𝐢-symmetry in terms of both observers needing to share in the same ideas of causal structure while mapping that using two completely disconnected covers. Why are there two possible conventions, [+,-,-,-] and [-,+,+,+], for the Minkowski metric? You also found the need for a second metric obtained from the first by a transformation in your work on antigravity. Here the symmetry rule connecting two 𝐢-conjugated observers already appears as two disconnected ways to write the exact same laws and that is only possible because both metrics also inherit a well-behaved spacetime signature. The 𝐢 transformation has then been defined for 4-vectors in flat spacetime by using the two disconnected Minkowski metrics and this definition recovers 𝐢=𝑃𝑇.

In the case of spinors there’s a cryptic kind of metric sign convention involving the squares of the gamma matrices. Take the conventional Dirac representation and multiply each gamma matrix by 𝑖 and you now have a conjugated set of equally viable gamma matrices which must also inherit the other metric convention. The same would be true for the Weyl or Majorana basis. The 𝐢 operation can also be defined as a 𝐢=−𝐼 operation on the generators of the associated Lie group. This generalises the original definition of the 𝐢 operation to any spin and always constructs a pair of 𝐢-conjugated representations for any spin.
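
The sign flip is easy to check numerically. The short script below is only a sanity check of the standard Clifford-algebra relation {γ^μ, γ^ν} = 2η^{μν}, not a piece of the modelling itself.

```python
# Check: the Dirac gammas close the Clifford algebra on eta = diag(+,-,-,-);
# multiplying each by i closes the same algebra on the opposite convention diag(-,+,+,+).
import numpy as np

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def dirac_gammas():
    g0 = np.block([[I2, np.zeros((2, 2))], [np.zeros((2, 2)), -I2]])
    gs = [np.block([[np.zeros((2, 2)), s], [-s, np.zeros((2, 2))]]) for s in (sx, sy, sz)]
    return [g0] + gs

def metric_from(gammas):
    # Read off eta^{mu nu} from {gamma^mu, gamma^nu} = 2 eta^{mu nu} I.
    eta = np.zeros((4, 4), dtype=complex)
    for m in range(4):
        for n in range(4):
            anti = gammas[m] @ gammas[n] + gammas[n] @ gammas[m]
            eta[m, n] = anti[0, 0] / 2
    return eta.real

g = dirac_gammas()
print(np.diag(metric_from(g)))                    # [ 1. -1. -1. -1.]
print(np.diag(metric_from([1j * x for x in g])))  # [-1.  1.  1.  1.]
```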

This reappearance of duplicated metric conventions for any spin-half basis at all is regarded as an unimportant, redundant feature on the landscape of the mathematical theory of Lie Groups, having no physical meaning. Conjugated, completely disconnected metrics are, by contrast, demanded in this modelling because antiobservers are presumed to exist even in absentia. Had this dualism not already appeared in existing formulations this would have created a terrible inconsistency problem. But this inconsistency problem just isn’t a problem because the duplicate metrics are guaranteed to always be there.

The Kerr geometry is of course retained by this extended form of general covariance. That, combined with data from astronomy, conspires to predict the scale at which baryon symmetry should appear in the skies. That is also to suggest that the maximally extended Kerr solution is, in and of itself, a time-independent, baryon-symmetric solution of the gravity laws. It is a mass dipole and it is the unique way for any sink/source dipole solution in GR to also have inertia and spin. You even need charge-conjugated observers using two covers to span the entire geometry.

Because it is an innately baryon-symmetric object, because it is the unique description of a dipole type of mass, and because it should decay into matter and antimatter in equal amounts, adding baryon symmetry into the gravity laws predicts that wherever the need for black hole candidates shows up in existing data, that is also the scale at which we should expect to see baryon symmetry assert itself.

The supermassive black hole candidates are an extreme case in which black holes should dominate the system and that is the most reasonable place to start looking for a stable, cosmic unit of baryon symmetry. That will also be the scale at which the existing gravity law will appear to fail and it should be failing precisely because that is the scale at which baryon symmetry cannot be ignored. That is where dominant, 𝐢𝑃𝑇-symmetric gravitational dipoles and mass loss over deep time would be needed to account for observations.

Any other gravity theory of this class, any modelling which seeks out baryon symmetry at some very large scale, is either looking at the galactic scale or looking at a scale not consistent with baryon symmetry entering the gravity laws. If galaxies are not consistent with this modelling then baryon symmetry is empirically ruled out altogether, precisely because baryon symmetry was taken completely seriously and then used predictively.

It was suggested earlier that even though it forces negative energies into physical law, using the Kerr solution predictively is still worthwhile and that this was a good area of research to get into. This is the final point in the argument supporting both claims. Allowing ring singularities to enter the gravity laws constrains the scale at which we should look for baryon symmetry in the skies. It is as easy to falsify baryon symmetry as it is to falsify this particular modified gravity theory. For this reason alone, I believe my modelling falls into the ‘terrifying if true and interesting if wrong’ basket.



CONCERNS WITH THE MODELLING

A serious concern of the modelling, a place where my work seems to struggle or even fail completely is obviously worth mentioning. The exotic ring galaxies such as Hoag’s Object or the Cartwheel Galaxy are of real concern. I freely admit that I have never found any satisfactory account for how these ring galaxies form without adding in new and unwanted assumptions. Ring galaxies are welcomed and adored and also very troubling.

Another concern cited in the literature and put to me in private correspondence by Prof. Michael Doser comes from experiments involving Penning traps. Specifically, protons and antiprotons should sit on two different gravitational potentials and exhibit energy-splitting in cyclotron frequency. 

Because of the symmetry of the electric field in the Penning trap itself and because of the added symmetry imposed by CPT, it seems to me that only gravitational tidal forces acting over the height of the trap’s cavity could stand to act as a perturbation. With tidal forces giving up the free energy for splitting, experiment still falls a few orders of magnitude short of the sensitivity needed to probe this effect, at least that is so if my toy model has produced a fair first approximation.

I wanted to thank you for your time before rudely asking if I might impose on it further at a future date. Having presented my case for a method while summarising where it led, I also hoped to send you two research documents outlining this work more formally. I should very much like to get my research work peer reviewed and professionally published if at all possible and am very keen to receive any advice or criticisms or assistance which stands to advance that effort.

With tremendous gratitude once again for your wonderful book, wishing you good fortune in future research and writing projects, with warm regards and best wishes besides,



Bart Alder


*This document has had some references removed, and links, bullet points and section headings added for easier reading online, but it is otherwise published verbatim. It is part one in my 'Being Ignored by Scientists' (BIbS) series.