The Missing Operation
When I was a child, my father taught me basic arithmetic. Then, I learned that I could swap the two terms of an addition without affecting its result: $a + b = b + a$. I also learned that I could do the same with multiplication: $a \times b = b \times a$. My father called these operations commutative, and I loved their elegant symmetry. He went on to explain that doing a multiplication was a bit like doing multiple additions, and he made sure to bring this concept home by having me subdivide sets of various pictorial items by circling subsets of equal counts. In so doing, he planted the seed for division. My father was a great teacher.
Later on, I learned about exponentiation, and was told that it was to multiplication what multiplication was to addition. I immediately loved the idea, but soon realized that exponentiation did not share the beautiful symmetry of its cousins: in general, $a^b \neq b^a$. Exponentiation is not a commutative operation. Oh, the disappointment!
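For the code-minded, the asymmetry is easy to verify with the smallest pair of distinct integers greater than $1$ (a purely illustrative sketch):

```python
a, b = 2, 3

# Addition and multiplication are commutative...
assert a + b == b + a   # 5 == 5
assert a * b == b * a   # 6 == 6

# ...but exponentiation is not.
assert a ** b != b ** a  # 8 != 9
```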
Ever since, I have looked for a commutative operation that would follow addition and multiplication. In order to find it, my intuition was to look at what mathematicians call the identity term for a binary operation, which leaves a term unchanged by the operation. For example, the identity term for addition is $0$: $x + 0 = x$. And the identity term for multiplication is $1$: $x \times 1 = x$. Then, I asked myself the following question: what would be the identity term for my new operation? In other words, what comes after $0$ and $1$? The naive answer is $2$, but I knew that it could not be that simple. In mathematics, $2$ is not a very special number, and I did not expect that it would help me in my quest. But I also knew that I was looking for an operation that would do for multiplication what multiplication does for addition, and I knew that the exponential and logarithm functions were created to perform just that kind of trick. Therefore, I assumed that [Euler's number $e$](https://en.m.wikipedia.org/wiki/E_(mathematical_constant)) would be the identity term for my operation.
From there, and with a little bit of help from a friend, I found my operator: $a \odot b = e^{\ln a \cdot \ln b}$. It is quite similar to the traditional exponentiation operator, because $a^b = e^{\ln a \cdot b}$. And just as anticipated, $e$ was its identity term, simply because $\ln e = 1$: $a \odot e = e^{\ln a \cdot \ln e} = e^{\ln a} = a$. While I was at it, I also defined the inverse operation, which does for my new commutative operation what subtraction does for addition and what division does for multiplication: $a \oslash b = e^{\ln a / \ln b}$. Obviously, much like subtraction and division, this operation is not commutative, as it certainly should not be. In reference to a beating heart, I called these operations expansion and contraction.
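For readers who prefer code, here is a small Python sketch of the two operations, using $e^{\ln a \cdot \ln b}$ for expansion and $e^{\ln a / \ln b}$ for contraction (the function names are illustrative, and both are defined here only for positive arguments):

```python
import math

def expand(a, b):
    """Commutative expansion: e^(ln a * ln b), for a, b > 0."""
    return math.exp(math.log(a) * math.log(b))

def contract(a, b):
    """Inverse contraction: e^(ln a / ln b); not commutative."""
    return math.exp(math.log(a) / math.log(b))

# Expansion is commutative.
assert math.isclose(expand(3.0, 5.0), expand(5.0, 3.0))

# Euler's number e is its identity term, because ln(e) = 1.
assert math.isclose(expand(7.0, math.e), 7.0)

# Contraction undoes expansion, as division undoes multiplication.
assert math.isclose(contract(expand(7.0, 4.0), 4.0), 7.0)

# Contraction is not commutative.
assert not math.isclose(contract(8.0, 2.0), contract(2.0, 8.0))
```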
As I was having fun with my cute little operations, I wondered if other people had found themselves on a similar quest. Within a few minutes, Google came back with an answer: Albert Bennett had developed an infinite sequence of such operations back in 1914 and called them commutative hyperoperators. Being the nerd that I am, I immediately fell in love with them. All this was taking place during the 2018-2019 holiday break, which meant that I had a little bit of time available to fool around. So I decided to take advantage of it, and I put together a little theory for all this called Hyperlogarithmic Arithmetic. Nothing groundbreaking really, just a nice workout for a rusty old college graduate. But this convinced me that what I was looking at was a very elegant generalization of the concepts of addition, multiplication, and expansion, ad infinitum. These are essentially the same operations applied at different exponential scales. And while it might seem obvious to professionally-trained mathematicians, for the computer engineer that I am, it was a revelation.
Now that I had a commutative successor to addition and multiplication, I had to find a real-world application for it. This is important because, for an engineer, applied mathematics is a lot more valuable than its purely abstract counterpart, and the sooner your shiny new mathematical object can be grounded in the real world, the more adoption it is likely to receive, not only from the community of mathematicians, but also from practitioners like physicists, biologists, statisticians, or economists. This proved quite challenging though, because nobody seemed to care. When I questioned the mathematicians, they viewed my little construction as meaningless, or cute yet useless at best. And when I turned to the practitioners, they told me that it was the mathematicians' job to find applications for a new mathematical object. Clearly, this was not going to work. So, I decided to look for applications myself. And for a while, I could not find any. No matter how hard I looked, there was no match to be found, be it in the fields of theoretical mathematics, Newtonian or relativistic mechanics, chemistry, biology, or electrical engineering. I even asked some friends working in quantitative finance, and they could not find any relevant example either, cleverly pointing out that if I could not find any in the hard sciences, I was rather unlikely to find one in their field of choice. The future looked pretty grim for my cute little operation...
Not finding anything with my relatively random search, I decided to become a bit smarter about it. What I was looking for must have some logarithm in it, so I decided to focus on equations that make use of this transcendental function, and I quickly stumbled upon Boltzmann's entropy formula: $S = k_B \ln W$. This looked rather encouraging, but I would need the logarithm to be wrapped into an exponentiation. So I looked for exponential entropy, and I found a theme for a videogame and some arcane theories that seemed to exhibit my operator. Eureka! Or so I thought... After a euphoric lunch break, I quickly realized that I had jumped the gun, and that my operation was definitely not there, at least not directly. There might be a derivative way to find it in some of the equations, but it would certainly not come naturally.
At that point, I could have given up, and this was pretty much the advice that I got from anyone knowledgeable about the topic at hand. After all, there are many beautiful mathematical objects that have absolutely no grounding in the real world, and for a mathematician, that is perfectly acceptable. Furthermore, as an engineer, you are constantly told that you should fall in love with a problem, not a solution. Clearly, I had fallen in love with a solution, and there might be no problem for it. Unfortunately, I am more an artist than an engineer, and the beauty of the solution meant that there had to be a problem for it. Such an elegant object must be an elegant solution to an important problem. There had to be something. So I kept looking, and I finally turned my attention to the one place where I had found a ton of exponentiations: statistics. There, I stumbled upon the log-normal distribution, and the real eureka moment happened.
The Probability Density Function (PDF) of the log-normal distribution is defined by: $$f(x) = \frac{1}{x \sigma \sqrt{2\pi}} \, e^{-\frac{(\ln x - \mu)^2}{2\sigma^2}}$$ If you are not entirely familiar with this beautiful multiplicative cousin of the normal distribution, I strongly recommend reading this excellent article written by a team of Swiss statisticians. The really interesting part in its definition is the exponentiation containing the square of a logarithm. This is interesting, because a unary version of the binary expansion introduced earlier can be defined as: $$\odot x = x \odot x = e^{(\ln x)^2}$$ With it, if we call $\mu^* = e^{\mu}$ the geometric mean and $\tau = \frac{1}{\sigma^2}$ the precision, we can simplify the PDF of the log-normal distribution in the following fashion: $$f(x) = \frac{1}{x \sigma \sqrt{2\pi}} \left( \odot \frac{x}{\mu^*} \right)^{-\frac{1}{2\sigma^2}}$$ Why do I find this expression simpler? Well, not just because it uses 17 symbols instead of 21, but because the symbols and functions it uses are simpler, with the exception of the expansion symbol $\odot$, which won't be familiar to anyone a priori. The traditional definition for the log-normal distribution has a lot going on, and cannot be understood without a deep a priori understanding of the normal distribution. Instead, the alternative formulation proposed here for its multiplicative cousin is beautifully centered around the expansion operator, and all parameters and factors find their normal place: the factor $\frac{1}{x \sigma \sqrt{2\pi}}$ ensures that the total area under the curve is equal to $1$; the exponent $-\frac{1}{2\sigma^2}$ ensures that the variance is equal to $\sigma^2$; and the factor $\mu^*$ ensures that the geometric mean is equal to $\mu^*$. And to avoid any temptation of associating the $x$ and $\sigma$ factors, which have positively nothing to do with each other, we could replace $\frac{1}{x \sigma \sqrt{2\pi}}$ by $\frac{1}{x} \sqrt{\frac{\tau}{2\pi}}$, thereby saving an extra symbol: $$f(x) = \frac{1}{x} \sqrt{\frac{\tau}{2\pi}} \left( \odot \frac{x}{\mu^*} \right)^{-\frac{\tau}{2}}$$
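The equivalence between the textbook PDF and the expansion-based rewriting can be checked numerically. A small Python sketch (function names are illustrative), using $e^{(\ln x)^2}$ for the unary expansion and $e^{\mu}$ for the geometric mean:

```python
import math

def lognormal_pdf(x, mu, sigma):
    """Textbook log-normal PDF."""
    return (1.0 / (x * sigma * math.sqrt(2 * math.pi))
            * math.exp(-(math.log(x) - mu) ** 2 / (2 * sigma ** 2)))

def unary_expand(x):
    """Unary expansion of x with itself: e^((ln x)^2), for x > 0."""
    return math.exp(math.log(x) ** 2)

def lognormal_pdf_expansion(x, mu, sigma):
    """Same PDF, rewritten around the unary expansion of x over the
    geometric mean g = e^mu."""
    g = math.exp(mu)
    return (1.0 / (x * sigma * math.sqrt(2 * math.pi))
            * unary_expand(x / g) ** (-1.0 / (2 * sigma ** 2)))

# Both forms agree across a range of inputs.
for x in (0.5, 1.0, 2.0, 5.0):
    assert math.isclose(lognormal_pdf(x, 0.3, 0.8),
                        lognormal_pdf_expansion(x, 0.3, 0.8))
```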
How important is the log-normal distribution for statistics and sciences? Well, if you believe our Swiss friends, really important, and possibly more so than the traditional normal distribution. In fact, the more I learn about the subject, the more I tend to believe that the normal distribution should be viewed as a particular case of the log-normal distribution: the former should be called the additive normal distribution, while the latter is called the multiplicative normal distribution. Because most natural processes are multiplicative rather than additive, we tend to find the multiplicative normal distribution everywhere, from geology and mining to human medicine, aerobiology, plant physiology, and food technology. As mentioned earlier, my operation is actually built as a recursive sequence of operations, starting with addition (level $0$), going to multiplication (level $1$), continuing with expansion (level $2$), and carrying on ad infinitum. Interestingly, its level $3$ expression dramatically simplifies the Probability Density Function of the log-log-normal distribution, which I call the expansive normal distribution. There are dozens of documented applications for this distribution in the real world. Unfortunately, I have yet to find any example of a level $4$ normal distribution, but I cannot think of any reason why we could not find some. And finding applications at levels $2$ and $3$ is enough justification for the definition and adoption of the proposed recursive family of operations.
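The recursive construction can be sketched in a few lines of Python, numbering addition as level 0 here (the function name is illustrative, and the recursion is only defined where the nested logarithms keep positive arguments):

```python
import math

def hyper(a, b, level):
    """Commutative hyperoperation of a given level:
    level 0 is addition, level 1 multiplication, level 2 expansion, ...
    built recursively as H_n(a, b) = exp(H_{n-1}(ln a, ln b))."""
    if level == 0:
        return a + b
    return math.exp(hyper(math.log(a), math.log(b), level - 1))

assert math.isclose(hyper(2.0, 3.0, 0), 5.0)  # addition
assert math.isclose(hyper(2.0, 3.0, 1), 6.0)  # multiplication: e^(ln 2 + ln 3)
assert math.isclose(hyper(2.0, 3.0, 2),       # expansion: e^(ln 2 * ln 3)
                    math.exp(math.log(2.0) * math.log(3.0)))
```

Each level really is "the same operation at a different exponential scale": multiplication falls out of addition through one exp/log sandwich, and expansion falls out of multiplication through another.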
Finally, the circuitous path that took me where we are now is cause for celebration: initially, I thought that I had found my operation in the formula for entropy, which is a measure of disorder. If one defines life as an auto-reproductive process of order creation, one can view entropy as an indirect measure of the absence of life. In contrast, the probability density function for the multiplicative normal distribution is quite possibly the best equation we have to describe multiplicative processes. In other words, while I thought that I had found my operation in the equation of life's absence, I actually found it in the equation of life itself. And if given the choice between the two, I chose life.
Sequel
This musing led to the creation of Hyperlogarithmic Arithmetic.
Credits
Many thanks to the following people who helped with this work: Baptiste, Henry, Jean, Jérémie, Mark, May, Reynald.
Contact
ishi at ishi dot io