k and Kahneman
Navigating Big and Small Decisions Via Probability

Many years ago I introduced a number, k, which has peaceably been sitting in libraries undisturbed. The number k serves to vastly simplify statistical and probability concepts that are intimidating for most of us.

In many instances formerly complex problems become childishly simple when k is applied. Even the profoundly math averse can readily use k to understand and calculate probability.

Books concerned with probability have increasingly come to public attention (Black Swan, Outliers, Thinking Fast and Slow, etc.). Daniel Kahneman’s Thinking Fast and Slow is one of the most interesting. When I was traveling overland this year, camping from Santiago, Chile, down to Tierra del Fuego, and then up to Buenos Aires, a friend had Thinking Fast and Slow on the truck. We discussed it once in a while. It was my intention to send an email with the subject line “k and Kahneman” to round out the discussion but, once started, the proposed email grew into the words that are in front of you now.

With probability in the public eye and the topic of many best-selling books, it also seemed appropriate to mention k again.

So, prompted by these incentives, here goes.

Much of the following is not intuitive. It will surprise you. It may even help you to make decisions in a wide variety of otherwise difficult-to-comprehend problems that you will face.

Table of Contents

k and Kahneman

Part I
The Kablona Quest
Probability of success
The essence of k
Surprise
Including information

Understanding Sample Size, k, & Waiting Time Problems
Getting it right
The bell-shaped fib & k
Noise, too small a sample
k at the Max


Part II
The Worm Turns
Stated simply
Variability and getting base rates right
Download
Base rates in history
2008-2033
Proliferation
Drilling deeper into base rates
All of the components
Stimulatory vs. Inhibitory Components
Fixing it. The bundle of sticks. The three-legged stool.
Probability accumulates
Proliferation again
Your lifetime

Monte Carlo
Potentially a wide variety of base rates in 2013
What a surprise. Base rates fall.
Relative base rates
The teaching example. How far base rates can fall.
When we talk about it
Rare events
Tidbits for Kahneman aficionados

Human Lifespan Explainable as an Ailment Proliferation Phenomenon
Pills
Potential Afflictions
Managing a risk
The multiplier
Reconsidering proliferation

Severity Unfolding.
The Passage, Rights and Wrongs – Teddy & Friends
WWI
Lodge, a net negative contributor to mankind

The Severity
The breakeven point
Probability & Severity

Part III
The Prize

 

Part I
The Kablona Quest.

For a thought experiment, a great quest is proposed. A group of people is striving to accomplish one of the most difficult tasks known: achieving the Grand Kablona (Ka-blow-na; the great cure, the great discovery, the great invention, the new development, the new way, or another similarly difficult, yet worthy endeavor). From the outset, it is believed that on average, it would take 360 years for a highly able person to accomplish Kablona. In each year, for each participant, there is only a one-in-360 chance that he or she will achieve it. The participants are all of equal ability. Some have proposed that although the project is difficult, the base rate (the one-in-360-year chance) is wrongly estimated. It might be as low as once in 180 or 90 years, instead of the proposed 360.

The Kablona Quest began in 1946 when Tim was a precocious child and over the years, eight others have independently also undertaken the Quest, all continuing to dedicate themselves to it. Over the years Tim, Barbara, Giles, Karen, Peter, Wendy, Jacob, Kate, and Matt have become dedicated Kablona seekers. Jimmy also was an active participant, until his death in 1989 in a tragically unnecessary traffic accident in San Pedro Sula, Honduras. By 2013 the commitment had produced 417 man-years of work. Yet, Kablona had not been achieved. Given that the effort is in its 69th year, some have wondered whether the attempt should be abandoned.

[Image: Kablona section]

Probability of success.
Now, using k (which will be explained later), the table that follows was developed to assess the probability of success. It examines probability of success at base rates of 360, 180, and 90. Here we focus on an edited portion of it. As the time and man-years dedicated to the task increase, the probability of success increases.

A base rate of one-chance-in-360 years also means that for a single participant the average interval is 360 years. Conveniently, chance and average interval are expressed identically (in this instance, 360).

In the year 2013, we see that a 360 base rate would have delivered a 68.6% probability that Kablona would have been achieved sometime within the 1946-2013 period. Here we see the influence of chance operating at a designated, constant 360 base rate. The median case for an occurrence is at the 50% point (a 50-50 chance of success).

Given that the median is 50%, we know that, at a 68.6% probability in 2013, it would have been likely that Kablona would have already been reached, and that it would be considerably more likely that it will be solved by the year 2032. The properties of chance (a 360 base rate) do not change as we approach the point of success, but march along just as they did in prior years.

The heartening point is that even when faced with exceedingly difficult tasks, they can be accomplished by persistence and determination, as is reflected in the often-repeated observation, “chance favors the prepared.” We see that by the year 2032 there would be an 81.5% chance that Kablona will have been solved. There would be two options to more quickly reach resolution: bringing on additional participants of equal ability (thus increasing man-years more quickly); and reducing the base rate by improving participant chance of success – in short, their getting better at what they do. The complete downloadable spreadsheet is shown here:

[Spreadsheet: Kablona Quest]

The essence of k.
The Kablona Quest represents a broad class of problems, sometimes called waiting time problems, in which all possible outcomes are equally likely. In any year, any one of the participants would have been equally likely to solve the problem. Despite such uncertainty, we know a great deal about the likelihood of success. The number k illustrates the degree to which outcome deviates from the average time to occurrence.

[Table: Short k probability list]

In the discussion here, we are looking at the interval between events and how long those intervals are in relation to the “base rate” [the average annual chance, which refers to one chance in blank years]. The k-values in the table show the relationship between the average time to solve a problem and the probability of actually solving it. Just multiply k times the average.

For example, if it takes an average of ten years for a person to achieve success in this class of problems, then we could be confident that in 95% of instances, the problem would be solved before thirty years had elapsed [30= 3*10 when k=3]; and we would be 99.9% sure that it would be solved within seventy years [70= 7*10 when k=7]. Similarly, k-values apply throughout the range of probabilities. k= ABS(LN(1-probability)) [ABS indicates absolute value and LN indicates natural log in spreadsheets and on hand calculators. In computer program languages, the natural log is often represented by LOG(), making the formula there: k= ABS(LOG(1-probability)). Additionally, note that in instances when it is necessary to compute a primary base rate from many subordinate probabilities, it can be done via the geometric average of the probabilities, as illustrated in the downloadable spreadsheet, green tab “k-Process DATA”]
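For readers who would like to see the formula run, here is a minimal sketch in Python. The 10-year average and the 95% and 99.9% probabilities come from the example above; the function name is ours.

    import math

    def k_from_probability(p):
        # k = ABS(LN(1 - probability)), as described in the text
        return abs(math.log(1 - p))

    average_years = 10  # average time to success in the example above

    for p in (0.95, 0.999):
        k = k_from_probability(p)
        # interval = k * average; the text rounds k to 3 and 7, giving 30 and 70 years
        print(f"probability {p:.3f}: k = {k:.2f}, interval = {k * average_years:.0f} years")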

The chief advantage of k lies not in the particular calculations, but in being able to see that in many problems that we face in daily life, the expected outcome varies greatly from the average case. And, those outcomes can often be easily explained by k alone. There is too great a temptation to refer to unusual random events as unexpected outliers when, in fact, they often result from an entirely normal k process. For example, in one-in-20 cases, it will take three times longer than average for an event to occur; in one-in-150 cases, five times longer; in one-in-3,000 cases, eight times longer – all implicit in the table [outlier chance= 1/(1-probability); 95%= 1-1/20 at k=3; 99.33%= 1-1/150 at k=5; 99.97%= 1-1/3,000 at k=8; see downloadable spreadsheet green tab “k-Process DATA”].

[Table: k probabilities and intervals]

Referencing the above table, we can also see how probability, outliers, k, and intervals are interrelated. Assume a constant 360 base rate and ten participants. They would tend to solve the problem within an average 36 years [36=360/10]. As shown in the table, in 10% of cases, they will succeed within 3.8 years; in 50% of cases, within 25 years; in 75% of cases, within 49.9 years; in 95% of cases, within 107.8 years; in 99.5% of cases, within 190.7 years; and in 99.999% of cases, within 414.5 years. Multiplying k by the 36-year average created these intervals [interval=k*average].

In the event that you might be curious about calculations in the spreadsheet, this paragraph is for you. Taking the year 2013, we see 417 man-years of work had been expended at a 360 base rate. In the spreadsheet, k is solved by a simple formula where k indicates the extent to which elapsed time exceeds average time: k= ManYears/BaseRate; so to solve for k, 1.158=417/360. And, probability= 1-1/EXP(k); so to solve for probability, 68.6%= 1-1/EXP(1.158). As they do in all the years, these two tiny spreadsheet formulas combine to give the answer [probability= 1-1/EXP(ManYears/BaseRate)]. For example, the 68.6% at base rate 360 in the year 2013.
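In code, those two spreadsheet formulas amount to a one-line function. This is only a sketch mirroring the calculation just described; the 417 man-years and 360 base rate are the 2013 figures from the Kablona table.

    import math

    def probability_of_success(man_years, base_rate):
        # k = ManYears / BaseRate; probability = 1 - 1/EXP(k)
        k = man_years / base_rate
        return 1 - math.exp(-k)

    # 417 man-years at a 360 base rate, as in the year 2013
    print(round(probability_of_success(417, 360), 3))   # about 0.686, i.e., 68.6%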

Though this may strike you as unduly mechanistic, please note that the method applies surprisingly well even in computer simulations that provide maximum random variability. And in the case of an unvarying base rate, the relationship is quite tight.

Surprise.
Even when completely unfamiliar with probability, we can sense at the outset that waiting time problems will differ from randomly pulling numbers from a hat.

For example, assume that one hundred percentage numbers, ranging from 1% to 100%, were placed in a hat. From the hat you might draw a sequence similar to: 5%, 67%, 3%, 85%, and so forth.

On the other hand, imagine that you were tasked with predicting the probability that an event would occur within a similarly sized span of 100 years, from 1945 to 2045. You would be predicting the probability of occurrence for periods of different lengths, all beginning with the same start year: 1945-1956 ... 1945-1962 ... 1945-1973 ... 1945-1991 ... 1945-2013 ... 1945-2045.

In those instances you would have drawn continuously increasing numbers such as: 7%, 12%, 24%, 47%, 69%, and 87%, which are vastly different from the hat-drawn values “5%, 67%, 3%, and 85%” that increased and decreased randomly.

Waiting time problems differ from hat-drawn values, because probability numbers increase as the periods lengthen. For example, the probability for the 1945-1991 period can never be less than the 1945-1956 period, because the 1945-1956 probability is included in the longer period. As periods lengthen, probability increases. Values cannot decrease because each new value depends on the values that preceded it.

This “dependence on the values that preceded it” is nicely and exactly described by what is called the natural logarithm. The LOG(blank) in a computer program takes numbers, substantially from 0 to 1, in such a way that the k-process can convert probabilities to the intervals that we are interested in (even in some instances when the blank numbers are picked randomly, as from a hat). The k-process recruits the natural log to give us the pragmatic, real-world numbers that we are interested in.

Waiting time problems are intimately connected to the properties of the natural log. That connection and its relationship to events in our own lives is often a surprise. But it is a sufficiently regular one that we can use it to our advantage.
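A short sketch makes the contrast concrete. Assuming, purely for illustration, a process with a 150-year average interval, the probability of occurrence computed for ever-longer periods beginning in 1945 can only climb; numbers pulled from a hat are under no such obligation.

    import math

    average_interval = 150  # assumed average interval in years, purely illustrative
    start_year = 1945

    for end_year in (1956, 1962, 1973, 1991, 2013, 2045):
        elapsed = end_year - start_year
        # cumulative probability of at least one occurrence by end_year
        p = 1 - math.exp(-elapsed / average_interval)
        print(f"{start_year}-{end_year}: {p:.0%}")

    # Unlike draws from a hat, each value must be at least as large as the one before it.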

Including information.
The number k is humbling. One of the greatest errors in thinking is to fail to include sufficient information before blurting out an opinion. More often than not, unless the purely random events observed over time include at least one event occurring at an interval at least five times longer than the average interval (the base rate), the scope of experience will be insufficient to deliver an opinion that is 99.3% reliable. The short and extended k tables are again shown below. The short k table can serve as a preliminary method for assessing the reliability of the range of intervals that you are drawing on to form your own opinions.

[Tables: Short and long k tables combined]

You will often hear of an expert who asserted that, based on 18 or so years of stock market data, conditions have changed so significantly that a large drop in stock value is vanishingly unlikely. And then, sure enough, the market tanks. The market had probably been cruising along with the most extreme interval between large drops being less than two times the base rate (k less than 2), in which case the knowledge he was relying on was less than 86% reliable. He had failed to include a sufficient range of outcomes in his thinking, as hinted in the short k table.

In waiting time problems, persistent low-level probabilities (e.g., one chance in 360 for each participant in the Kablona Quest) over sufficient time brought the participants ever closer to success. To make a fair estimate of the chance of success, it is well to appreciate the influence of outliers. If the 75% probability point had been reached without success having occurred, it would mean that the outlier chance, shown in the above table, was one-in-four. Within this one-out-of-four realm of events not yet seen, there still lurked the chance for success [outlier chance= 1/(1-probability)].

As will soon be shown, if the outcome of an endeavor is either extremely advantageous or extremely adverse to the extent that any chance of error is absolutely unacceptable, it may even be necessary to have the outlier chance at one-in-100,000 or higher. A huge sample indeed. This concept is not especially difficult to grasp. Sometimes even a one-in-100,000 chance of being wrong is unacceptable. Thus we minimally need to include a correspondingly large number of possibilities in our thinking. All of this relates to small probabilities – sometimes infinitesimally small – having a larger impact than we intuitively expect. Since k is linked to all the components in the table, most calculations can be made with k and base rate alone.

In a sample of random events, the appearance of at least one very long interval between events is usually a prerequisite to having a representative, high-reliability, random sample. Kahneman frequently and correctly points out our proclivity to overvalue recent experiences when making predictions and to be famously wrong for doing so.

We humans are inclined to use predominantly recent information to form opinions. This has the dual disadvantage of using too small a sample size and also biasing our thinking to favor recent experience. In this regard the k tables can provide a valuable corrective: by showing k and its corresponding probability, they offer an ever-ready illustration of when the range of intervals is apt to contain sufficient information on which to develop a reliable opinion.

Understanding Sample Size, k, & Waiting Time Problems.

Getting it right.
By using k, it is possible to easily understand the degree to which the occurrence of an event tends to be influenced by random chance alone; k= ABS(LN(1-probability)). A surprisingly large proportion of events (and their anomalies) are largely explainable by this process, even when causal components, including human activity, are involved.

As we will do in the following, the k-process provides a handy standard against which you can graph the difference between real-world intervals and k-predicted intervals. The difference between the two in a graph shows deviations that cannot be explained by k alone.

To make the k curve, just multiply k by the base rate (the average of the observed intervals) and use rank order divided by sample size to indicate probability. If you were of the opinion that special workings of “the hand of fate,” “distinguishing aspects of human influence,” or “small sample size” would cause a deviation from the normal k-process, you would be able to see the influence, if any, in your graph.
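As a minimal sketch of that recipe, the lines below build a k curve from a handful of observed intervals; the interval list is invented purely for illustration, and rank/(n+1) is used instead of rank/n only to avoid a probability of exactly 1 at the top rank.

    import math

    observed = [12, 30, 45, 70, 95, 140, 210, 330, 520, 800]  # illustrative intervals
    base_rate = sum(observed) / len(observed)                  # average of the observed intervals
    n = len(observed)

    for rank, interval in enumerate(sorted(observed), start=1):
        p = rank / (n + 1)             # rank order standing in for probability
        k = abs(math.log(1 - p))       # k = ABS(LN(1 - probability))
        predicted = k * base_rate      # k-predicted interval
        print(f"p={p:.2f}  observed={interval:5.0f}  k-predicted={predicted:7.1f}")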

Interestingly, if you were given 30,000 numbers and were challenged to predict all of them with a high degree of accuracy, you can do it with k. That is, if the numbers fall into the class known as “waiting time problems,” such as those being considered here.

Grinstead and Snell in Introduction to Probability (p.53-54, 186) point out that, “A large number of waiting time problems have an exponential distribution of outcomes. ... For example, the times spent waiting for a car to pass on a highway, or the times between emissions of particles from a radioactive source [can be] simulated by a sequence of random numbers …,” which can be produced using LOG(RND) [LOG refers to the natural log, such as LN on a calculator or in a spreadsheet; RND refers to random numbers substantially from 0 to 1].

To illustrate how well k can predict intervals based on this principle, 30,000 numbers were generated by a computer program using the “LOG(RND) multiplied by the average” principle, the average equaling 200. The numbers then were separately pasted into the Excel spreadsheet [green tab “Log(RND) DATA”].

[Graph: k accuracy in predicting 30,000 LOG(RND) numbers]

Sorting the numbers in the spreadsheet produced their sort rank, which in this instance translated into the probability that the event would occur [probability= sort_rank/ 30,000]. These probabilities were then used to determine corresponding k-values using the previously mentioned formula: k= ABS(LN(1-probability)). The k-values multiplied by the average interval length produced the estimates.
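If you would rather regenerate the experiment than open the spreadsheet, a sketch along these lines reproduces the procedure (though not the spreadsheet's exact numbers, since the random draws will differ): generate 30,000 intervals on the LOG(RND)-times-average principle, sort them, convert sort rank to probability, and compare each interval with its k-estimate.

    import math, random

    N, average = 30_000, 200
    random.seed(1)  # any seed; this will not reproduce the spreadsheet's exact numbers

    # 30,000 intervals generated on the LOG(RND)-multiplied-by-the-average principle
    intervals = sorted(-math.log(random.random()) * average for _ in range(N))

    # compare a few observed intervals with their k-estimates
    for rank in (3_000, 15_000, 24_000, 29_000):
        p = rank / N                               # probability = sort_rank / 30,000
        k = abs(math.log(1 - p))                   # k = ABS(LN(1 - probability))
        estimate = k * average                     # k-value multiplied by the average
        observed = intervals[rank - 1]
        print(f"rank {rank:6d}: observed {observed:8.1f}  k-estimate {estimate:8.1f}")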

The predictions were quite good. In 50% of cases, the difference between the target value and the k-estimated interval was less than a tenth of a percent; in 70% of cases, less than 0.5%; in 81% of cases, less than 1.0%, and in 98% of cases, less than 2% – all shown in the graph, standard deviation 0.93%.

To predict interval size with such accuracy in 30,000 instances might well have surprised an unsuspecting viewer. The details can be seen in the spreadsheet. Although these numbers were machine generated, they are a consequence of the same engine that drives many real-world waiting time problems: an exponential process, a geometric distribution. That is why k is so potent. The real world and the 30,000 numbers share a common characteristic for the investigator. At the outset, the investigator often has only a series of numbers placed at his or her disposal and then must make sense of them, regardless of where they originated – whether they be from the real world or from a machine.

The method illustrates that probability is not a vague conversational concept. On the contrary, when probability is precisely stated (as it is in this example by using rank order for substantially all possible cases under consideration), vagueness is removed and probability (with the helping hand of k) can serve to deliver rational, reasonably accurate expected intervals between events. As will also be shown, even when events do not conform to a designated distribution, outcomes can be explained by slightly altering technique.

The bell-shaped fib & k.
The normal bell-shaped curve implies that the median (the midpoint) and the average are equal or almost equal. In statistics, the normal bell-shaped curve is so iconic that the majority of us tend to believe that this condition universally applies. Not so. Not so in spades.

The median occurs significantly earlier than the average in waiting time problems. More directly stated, this means that events tend to occur long before the average interval. Driving this point home, remember that the event tends to occur at k multiplied by the average. The true k at the median interval is 69.3%. You multiply 69.3% by the average to get the median time to occurrence [at 50% probability, k= ABS(LN(1-50%)), k=0.693147181]. By contrast, the average occurs when k=1.0. You will be widely wrong if you apply the normal bell-shaped construct (median = average) in this class of problem.

The tendency is to act with befuddlement or wonderment when a rare event, say purportedly tending to occur once every 100 years on average, occurs far earlier than expected. Plainly there is no reason for surprise. A waiting time event that tends to occur on average once every 100 years will have a 50-50 chance of occurring before or during the 69th year [69 years=100*.6931].
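If you would like to verify this for yourself, a few lines of simulation do it. Assuming a simple random waiting-time process with a 100-year average interval, the mean of the simulated waits comes out near 100, while the median comes out near 69, just as k = 0.693 at the median predicts.

    import random, statistics

    random.seed(7)
    average = 100  # average interval of 100 years, as in the example above

    waits = [random.expovariate(1 / average) for _ in range(100_000)]

    print(statistics.mean(waits))    # close to 100, the average
    print(statistics.median(waits))  # close to 69.3 = 100 * 0.6931, the median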

[Graph: k range according to sample size]

Since primary interest when forecasting focuses on when an event tends to occur, it is not the average that is of the most immediate concern. Instead, it is the median, the 50-50-chance point, the 50% probability point (whatever you want to call it) that is of most immediate interest for forecasting. In a waiting time problem, to say that an event can be expected to occur in an average blank years is tantamount to saying there is a 63.2% probability of the event occurring within blank years [k=1 at the average; 63.2%= 1-1/EXP(k)]. Perhaps true. But speaking of the average years to the event is pointless and often misleading since the event will tend to occur far earlier.

That the median interval is considerably shorter than the average does not come as a complete surprise. When we look back on the k-process, we see that intervals beneath the 50% point are disproportionately short relative to long intervals above the 50% point. Even before doing the math, we can sense that the average of the intervals together will exceed the 50% interval (the median).

Given that the median years to occurrence is so crucial, its k-value is especially important (0.6931). As shown in this graph, k at the median in a sample (similar to the average in the earlier graph) is also influenced by sample size. From the graph, two points are abundantly clear: Small sample sizes tend to distort distributions (k at the median varying widely). And, even in small sample sizes, the preponderance of events tend to occur long before the average (the k=1 point).

Lest we become excessively confident, let's take a look at the extent to which smaller sample sizes can deliver misleading average values and similarly distort the shape of distributions.

Noise, too small a sample.
In some instances it may be impossibly difficult to collect adequate data. The rub is that when events have high base rates, we often can never have access to an adequately large sample size.

[Table: Sample size illustration]

[Graph: Base rate range produced by sample size]

Take the prior 30,000-number case that had a 200-year base rate. As illustrated in the table, if you had drawn a small sequence of numbers from these 30,000 numbers for your sample (say 10 to 80 numbers, as in the above table), there would have been no certainty that the sample base rate (the average of your sample data) would be acceptably close to the true average of 200.

We know this because the samples used in the above table (and the graph made from it) were systematically picked from the 30,000 random numbers which indeed did average 200. As an investigator, you might well have been misled, if you had used a small sample size because it would have had a tendency to produce an average far different from the true base rate. You can see in the graph the range of base rates that your sample (depending upon its size) would have been predisposed to deliver.

You may be tempted to wrongly believe that a small sample of yours is representative because you gathered data that was indeed correctly produced. Or, you may be tempted to wrongly assert that some causal component caused a deviation from a known true average. The difficulty is that small sample sizes, by virtue of their size alone, produce widely varying averages. Even when the true average from the complete data set (the 200 average in this instance) was known in advance, we see in the graph how widely the averages from the different samples varied.

Based on chance alone, the base rate derived from a sample would have varied between the Min Base Rate and the Max Base Rate, as shown in the table and graph. We would not have known whether the base rate from the sample were true. The variability of the average in a small sample size simply has enormous influence.
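A quick simulation conveys how unruly small samples are. The sketch below repeatedly draws samples of the sizes discussed above from a random process whose true average is 200 and reports the range of sample averages it happened to see; the number of trials and the seed are arbitrary.

    import random

    random.seed(3)
    true_average = 200
    trials = 2_000

    for size in (10, 20, 40, 80, 320):
        sample_means = []
        for _ in range(trials):
            sample = [random.expovariate(1 / true_average) for _ in range(size)]
            sample_means.append(sum(sample) / size)
        print(f"n={size:4d}: sample base rates ranged from "
              f"{min(sample_means):6.1f} to {max(sample_means):6.1f}")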

Since the expected time to occurrence is derived by multiplying k-values by the base rate, it is important to state the base rate accurately. Any initial failure to correctly employ the true base rate (the average) will have a proportional influence on the resulting estimates, roughly reflected at a given sample size by the percentage error section shown in the table.

Seventy-five percent of events fall within the solid-black-line boundary curves shown in the graph. And 95% of events fall within the dotted-black-line boundary curves. Similarly 75%, 90%, and 95% of events fall within the respective boundaries shown in the table. As you see, the reliability increases with sample size.

For example, when you are told an average interval between events in a sample size of 20 is 36 years, the best you can say, based on the information in front of you, is you are 95% confident that there is a 50-50 chance of the event occurring within 15.6 to 37.0 years, 25 years being your current estimate [25.0=0.6931*36; at 37.5% and 48.1% error at the 95% boundary]. The percentage errors shown in the table illustrate the extent to which small sample size corrupts base rates. Only by increasing sample size would you have been able to reduce the range of potential error and have a more reliable estimate of the average.

This is not exclusively a math problem. If you are looking at events in history, you will often be misled when using a small sample to draw broad conclusions. On the other hand, events may look so muddled and varied that you abandon all hope of personally understanding them, writing it all off to the “hand of fate” or some other concept. This is understandable. It is only with enough data that an understanding becomes possible. Prior to that, the evidence is insufficient.

A key obstacle is that with high base rates, we tend to wildly underestimate the difficulty of accumulating enough real-world data to deliver reasonably accurate base-rate estimates. The thorny impediment is that in many instances, it is insurmountably difficult to get an adequate amount of data when measuring events along a timeline.

The rule is: the required timeline equals “sample size” times “base rate.” When the average interval is 200 years and we need 320 samples, those intervals stretch along a long timeline – 64,000 years long to be exact [64,000=200*320]. Even if we base our data on man-years using nine participants, it would still take 7,111 years to gather enough data [7,111=64,000/9]. The sample size of 320 corresponds to a base rate with a 5.7% standard deviation (Stdev%) level of accuracy, as indicated in the 320-column above.

A small sample size not only corrupts the base rate. It corrupts the shape of the distribution itself as can be seen by k at the median [where k=median()/average()] in the graph that follows. Since we know the true k at the median, we can see the extent to which the distribution is flawed within the median-to-average range of values.

k at the Max.
Previously, I indicated that the appearance of at least one very long interval between events is usually a prerequisite to having a representative sample. The long interval is best measured by k-at-the-Max, the longest interval divided by the average (the base rate), in a spreadsheet: Max()/Average().

[Graph: k at the Max at various sample sizes]

On the same 30,000 numbers, k-at-the-Max was tested at a variety of sample sizes. On greater investigation, we see that k-at-the-Max as a sole determiner of sample quality is insufficient. Within the 75% boundaries, we see that k-at-the-Max did move upward, indicating an increasingly representative sample. But there was a wide range that you might have drawn in your own sample. You still would not have been sure that you had a sufficiently representative sample.
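The same style of simulation shows k-at-the-Max drifting upward with sample size while remaining widely variable. As before, this is only a sketch with arbitrary sample sizes and trial counts, not the spreadsheet's own calculation.

    import random, statistics

    random.seed(5)
    true_average = 200

    for size in (10, 40, 160, 640):
        k_max_values = []
        for _ in range(1_000):
            sample = [random.expovariate(1 / true_average) for _ in range(size)]
            k_max_values.append(max(sample) / (sum(sample) / size))   # Max() / Average()
        print(f"n={size:4d}: median k-at-the-Max = {statistics.median(k_max_values):.2f}")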

Although k-at-the-Max is a useful cautionary indicator because it shows that you have at least one interval in your sample confirming that long intervals are possible, you cannot rely on it as a reliable indicator of quality of the complete sample.

In the k-at-the-Median and k-at-the-Max graphs, we were looking at a small portion of the total expected distribution: 13.2% of the distribution in k-at-the-Median and 36.8% of the distribution at k-at-the-Max [13.2%= 63.2%-50%; 36.8%= 100%-63.2%]. From these small sections we derive some information, but not enough to have confidence that our sample is representative.

Intuitively we can sense additional reasons why the small sections would tend to be insufficient. As already seen, the average is often misstated in a small sample size. This will influence k-at-the-Median and k-at-the-Max values, because both of these depend on a correctly-stated average. Further, this exercise is based on an observation of a true group of 30,000 numbers (not a theoretically idealized distribution). Because of this, there are modest anomalies, one of which is a single, slightly larger maximum interval than would have normally been expected. This high value is largely responsible for the high-range distortions seen above the 75% upper boundary in the graph.

Small sample sizes are always a problem. These illustrations convey a central point. The information we rely on when it is derived from small sample sizes often tends to be misleading, whether it be in the mathematical sense or in the logic we employ in conversation. Having looked at these features of waiting time problems, we can move ahead using k to solve problems of a completely different character.

 

 

Part II
The Worm Turns.

In the foregoing we looked at Tim, Barbara, Giles, Karen, Peter, Wendy, Jacob, Jimmy, Kate, and Matt engaged in a beneficial endeavor. Using the identical calculations – but substituting the names United States, Russia, United Kingdom, France, China, India, Israel, South Africa, North Korea, and Pakistan – an entirely different case is shown in the following, having a decidedly unpleasant outcome: the possibility of a nuclear weapon being used somewhere in the world.

Chance operates identically whether the outcome is beneficial or adverse. So while in the previous case we sought to quickly reach the goal, with nuclear weapons, the objective is to employ sufficient preventative measures to reduce the chance of occurrence.

Stated simply.
Lest you be inclined to believe the exercise here is misleading, consider an exceptionally plain and practical illustration. If you were given a very long period of 3600 years, and during that period one nation operated at a 360 base rate (one nuclear first strike in each 360-year interval), then you would have 10 nuclear first strikes by that nation during the 3600 years (10 nuclear first strikes = 3600 years divided by a 360 base rate). This would occur whether a random or any other distribution applied.

If you had ten nuclear nations, each similarly so ostensibly peaceful that it would initiate a nuclear use only once in 360 years, then during the same 3600 years there would have been 100 nuclear first strikes somewhere in the world (100 first strikes = 10 nations times 10 nuclear strikes initiated by each nation).

Even with all nations being ostensibly peaceful, there would have been a nuclear attack on average once every 36 years (36-year average = 3600 years divided by 100 first strikes). This is why nuclear proliferation is so dangerous. It is a lottery of disaster even among ostensibly peaceful nations. The scenario is more unfavorable when nations are frankly warlike and have base rates significantly lower than 360. The only means to reduce the likelihood of nuclear attack in this example is to increase base rate or to reduce the number of nuclear-weapon nations.

Variability and getting base rates right.
The variability that we observe in the real world comes from a variety of base rates applied during different periods. This variability is unrelated to the method of probability calculation – whether it is based on standard product series computations (commonly used in statistics), or based on the k-process dealt with here. Both of these methods yield identical results and are suitable for indicating exact probability, even with base rates widely departing from any known distribution curves.

Since base rates may change in an unpredictable manner, the curves and data that you see here are based on explicit average base rates. As long as a base rate reflects the geometric average probability of all preceding events in the period under consideration, whether they be constant or variable, then in that year, the probability will be accurately stated.

Later, in a Monte Carlo simulation, you will be able to see a process with varying base rates and different sample sizes. Despite any such variability, once given a correctly stated base rate for a period, you can then also know the probability that applied during the same period.

These are the conditions that would exist when we look at a designated historical sample. The k-process gives us a view of how probability acts over time and the requirements for engineering a more peaceful outcome.

[Graph: Conventional war base rates]

Estimating a base rate for a nuclear use may initially appear to be unmanageably daunting, but we have more information than is commonly appreciated. It is easy to calculate the base rate for wars of various sizes. To get nation-years, multiply the total number of combatant nations by the number of years in the period under consideration. Divide the number of nation-years by the number of wars to get the average base rate. From 1942 to 1992, for wars of 56,000+, 177,800+, and 1,000,000+ deaths, the base rates were respectively 72, 105, and 163 (Jeanes, Forecast and Solution, p. 61; referred to as F&S in later citations). Twenty-three wars are represented in the graph, which on the x-axis illustrates the intervals between wars in this sample.
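Since this is just arithmetic, a two-line helper shows the recipe; the numbers fed to it below are placeholders for illustration, not the data behind the published 72, 105, and 163 values.

    def war_base_rate(combatant_nations, years, number_of_wars):
        # nation-years = nations * years; base rate = nation-years / wars
        return (combatant_nations * years) / number_of_wars

    # Illustrative only: 30 combatant nations observed over 50 years, with 10 wars
    print(war_base_rate(30, 50, 10))   # 150.0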

Because a nuclear use is so much more horrific than conventional war, and because it is believed that deterrence would tend to forestall a nuclear use, the 163 base rate for large scale conventional war (or the 180 base rate frequently shown here) is presumably an approximate low-end bracket for a nuclear weapon use, roughly in the range of a conventional-war probability.

Keep in mind that a high probability speaks only to the question of luck. It does not indicate certainty. Even a 90% probability means that there is a one-in-ten chance (10%) that the event would not occur within the designated period [10%=100%-90%]. Probability provides a method for decision-making in the face of uncertainty. It seldom indicates certainty, but it frequently indicates a path to prudence.

Download.
Downloading the original data (10+ MB) is not necessary unless you would like to study the data and the formulas more deeply. You can download the spreadsheet by clicking here. An arrow will usually appear near the bottom of your browser showing that it is being downloaded. And a box will also appear near the bottom of your screen to indicate that the Excel spreadsheet has finished downloading. Click on the box to view the spreadsheet. It contains data supporting the discussion here.

 

Base rates in history.
Throughout the history of nuclear weapons, including “duck and cover,” the Cuban Missile Crisis, various U.S. presidents and Russian leaders, and proliferation of nuclear weapons, your emotional response could readily go up or down depending on your confidence in the players. All such interpretations tend to be heavily biased to recent events and often fail to contain a sufficiently large sample size on which to make a sound judgment.

In the section of the downloadable spreadsheet below, you will find the previously discussed base rates approximating 360, 180, and 90 highlighted in yellow, purple, and red. At the end of the 1945-2013 period, these values correspond respectively to a 68.6%, 90.1%, and 99.0% probability when interpolating among the shown probabilities at the top of the spreadsheet.

[Spreadsheet excerpt: Estimating nuclear weapon base rates using k-determined probability]

Even though as of this writing, there has not been a nuclear use since 1945, the above section of the spreadsheet can be employed to make a competent analysis of the prevailing base rate during the 1945 to 2013 period. If it were your opinion that at any point in the period, or in the period taken as a whole, the chance of a nuclear use was 90% or higher, then the base rate would have been 180 or lower (the 2013 spreadsheet row, the purple highlight at the 90% column) and the deterrent effect of nuclear weapons would have been about the same as for other large scale conventional wars.

The difficulty when engineering nuclear peace is that it requires extraordinarily high base rates. To have made it through the 1945-2013 period without a nuclear use at a 50% probability (the median, a level of safety no higher than a flip of a coin) would have required a 602 base rate (at the year 2013 row and the 50% column). Even under ostensibly peaceful conditions throughout the 1945-2013 period, it would have been entirely normal at a base rate of 602 not to have had a nuclear attack prior to 2013. That the 1945-2013 period contained numerous points of high risk suggests that the base rate was considerably lower than 602. By these means and working in reverse, rather than collecting base rate data for each year and for each nation, it is possible to make a rational estimate of the base rate to date. It also becomes possible to overcome the more general problem that collection of proven base rates along a timeline is substantially impossible.

More ambitiously, to have held the risk to a one-in-four chance would have required a base rate of 1,450 (at the year 2013 row and the 25% column). However, the Cuban Missile Crisis alone would seem to have exceeded a one-in-four chance.

To get a reliable consensus estimate from professionals, you could query a large number of experts in the field asking them: 1) What was the probability of a nuclear use during the 1946-2013 period? And, 2) What was the highest probability of a nuclear use during any single crisis? Take the higher of the two [it would have been a false understanding of probability to believe #2 could exceed #1 since probability accumulates over time]. Take the median of the estimates to get a consensus probability [not the average; the average is more prone to bias].

Then, using the spreadsheet above, determine the corresponding (row and column) base rate. If, for example, the expert consensus had exceeded 75%, then the base rate for the period would have been 301 or lower as indicated in the above spreadsheet (at the year 2013 row and the 75% column). If, by throw of luck, the consensus expert estimate of probability were exactly correct, then in the year 2013, with a sample size of 417 nation-years, the base rate during that period would have been accurate at roughly the 5.7% standard deviation level of accuracy suggested in the earlier sample size table.

This capacity to work in reverse, estimating first probability and then working from that via a spreadsheet, such as the above, to determine base rate is invaluable. Even when uncertainty exists regarding probability values, it is often possible to bracket between lower and upper range plausible probabilities in order to get a bracketed range for base rate estimates.

In many fields, there have been huge improvements in safety and reliability. Airplane safety, for example, was 2.8 times better in 1990 than it was in 1969 (Jeanes, F&S, p. 73). Similarly car tires, memory chips, and hard drives are all vastly more reliable than when first introduced. But evidence of a sea change in a nuclear weapon likelihood of use has yet to appear. As the time period under consideration enlarges, probability accumulates.

[Spreadsheet excerpt: Focus on the nuclear weapons world from 2009 to 2033]

2008-2033.
Perhaps you may be inclined to say that the 1946-2013 period contains an excessive amount of Cold War history. Should that be your feeling, as was done here, at year 2008, you can edit the number “0” into the columns “Nuclear Weapon Nation-years” and “Years With Nuclear Weapons” in the spreadsheet and extend the spreadsheet one year.

Doing so restarts the calculations and focuses on a 25 year period (recent and future) that contains up to ten nuclear-weapon nations: United States, Russia, United Kingdom, France, China, India, Israel, North Korea, Pakistan, and perhaps one other yet unknown nation.

In this scenario even if one were to make the highly optimistic assumption that all nations were exceptionally responsible, tending to use a nuclear weapon but once in a 360-year period, then by year 2033, there would be a 49.4% chance of a nuclear use (substantially a median chance of a nuclear use).

And if they were no more responsible than in the approximate conventional-war scenario (180 base rate), there would be a 74.4% chance of a nuclear use sometime during the 2009-2033 period. If there were numerous irresponsible nations and the base rate were 90, one could expect a 93.4% chance by 2033. If by way of experimentation, you had been inclined to edit the previously discussed 301 expert base rate into the downloadable spreadsheet, you would have found the probability of a nuclear use by 2033 to have been 55.7%. In any event, in all of the scenarios, the chance of a nuclear use within the 2009-2033 period appears to approach or exceed the chance of a coin flip (50%).

As for an eight-year U.S. presidential term of office, we also see that 360, 180, and 90 base rates respectively would produce probabilities of 18.8%, 34.1%, and 56.5% in the 2009-2016 period. Under such conditions, on their watch, all forthcoming US presidents (Republican or Democrat) serving for eight years would face probabilities of a nuclear use occurring somewhere in the world that would be similar. To reduce a nuclear use risk, the options are to increase base rate or to reduce the number of nuclear-weapon nations.
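These figures follow from the same probability = 1 - 1/EXP(nation-years/base rate) relationship. In the sketch below, the nation-year counts (about 245 for 2009-2033 and about 75 for 2009-2016, reflecting nine to ten nuclear-weapon nations over the respective periods) are back-of-the-envelope assumptions chosen because they reproduce the percentages quoted above; the exact counts are in the downloadable spreadsheet.

    import math

    def probability_of_use(nation_years, base_rate):
        # probability = 1 - 1/EXP(nation-years / base rate)
        return 1 - math.exp(-nation_years / base_rate)

    periods = {"2009-2033 (~245 nation-years)": 245, "2009-2016 (~75 nation-years)": 75}
    for label, nation_years in periods.items():
        line = ", ".join(f"{probability_of_use(nation_years, b):.1%} at base rate {b}"
                         for b in (360, 180, 90))
        print(f"{label}: {line}")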

By way of a cautionary comment, editing the spreadsheet into a shorter interval, as was done above, does not reflect the full probability of a nuclear use since inception. It only serves to focus on risk produced within a small section of a nuclear weapon timeline.

Proliferation.
There are about 22 nations that have the capacity to go nuclear (Jeanes, F&S, p. 26, 84-85). In some nations, pressure to gain nuclear weapons may now be intensifying. As recently as 2007, Israel bombed a suspected nuclear facility that North Korean technicians were constructing in Syria. That most nations have not sought nuclear weapons has been a boon to nuclear peace. Notwithstanding, the present world is distinguished by its nuclear proliferation. At the time of the Cuban Missile Crisis in 1962, there were only four nuclear-weapon nations. Now there are more than twice as many – nine. Though not on the public radar, more than 2,000 nuclear tests (above and below ground) have occurred since 1945. Even today bones, teeth, and hair of mammals contain traces of radioactive carbon-14 created in the above ground explosions. Concurrently there have been accidents. As k-driven math teaches, the once ostensible danger of the Cold War has been replaced by stealthily silent, dangerous proliferation.

Whether Israel, India, Pakistan, North Korea, or others were to rise to a 360 base rate is open to conjecture. If not, then at the lower base rates the proliferated nuclear world may well pose a significantly greater risk of a nuclear use than did the Cold War:

At 360, 180, and 90 base rates respectively,
25 years Cold War risk, 1945-1969: 19.9%, 35.9%, and 58.9%
25 years proliferation, 2009-2033: 49.4%, 74.4%, and 93.4%

The greatest and most neglected story in nuclear weapon history is the enormous boost to nuclear peace that was achieved when four nations (South Africa, Ukraine, Kazakhstan, and Belarus) gave up their nuclear weapons in the 1990s. The gains produced by negotiation and sensibility in these instances were simply enormously advantageous and should be loudly celebrated.

The process was considerably aided by Republican Senator Lugar and Democratic Senator Nunn’s collaboration. If those nations had not given up nuclear weapons, we would have had 13 to 14 nuclear-weapon nations now. At 360, 180, and 90 base rates respectively, we would have had the following probabilities in each of these periods (as can be seen by editing the downloadable spreadsheet):

2008-2032: 61.5%, 85.2%, and 97.8%
1945-2013: 74.7%, 93.6%, and 99.6%
1945-2032: 87.9%, 98.5%, and 100.0%

Drilling deeper into base rates.

[Spreadsheet excerpt: Nuclear probability base rates repeated]

The above section of the spreadsheet is shown again to drill deeper into base rates. Your assessment of the risk of a nuclear use will depend on how peaceful you believe nations can be. One nuclear weapon use by a nation during an average 360 nation-years would reflect an optimistic assessment.

[Graph: Violent crime base rates by sex and age]

One use in 180 nation-years would be a midpoint assessment, which incidentally is approximately the same probability that a 15-24 year-old male in the USA would commit a violent crime in year 2011 [174=100,000/573.9]. One use in 90 nation-years would be a pessimistic assessment.

Judith Selvidge, in writing about procedures for assigning probabilities to rare events, points out the value of “external calibration,” which is to say, referencing a rare event to some totally different system “whose only or main connection to the basic event is that they, too, have very small probabilities of occurrence” (Jeanes, F&S, p. 72-73).

The graph, based on United States data, serves as “external calibration.” It also illustrates differences in proclivity to employ violence according to age and sex. The differences produced by sex, age, state, and nation are so great that it is no wonder that we humans approach the possibility of employing violence with vastly different viewpoints.

Coincidentally, the 180 and 360 base rates, which we have talked about so frequently, respectively roughly correspond to male age 23 and age 35-39 propensities to illegally commit a violent crime. Similarly, the statewide averages for Arkansas and Oregon also respectively correspond to the 180 and 360 base rates. None of the groups were as low as the pessimistic 90 base rate.

In the nuclear case, keep in mind that the chance of an accidental use can be as great or greater than the chance of a willful use. If the chance of an accidental use were once in 720 nation-years and the chance of a willful use were also once in 720 years, together the two would produce the 360 base rate (360=720/2).

[Timeline: Sample timeline for estimating base rates]

To put reliability timelines into perspective, consider historical dates. Once in 360 years would exceed the 329 years that have elapsed since the year 1684 when Isaac Newton “discovered” gravity. Once in 720 years would considerably exceed the time that has elapsed since the year 1455 when Gutenberg invented the printing press.

And, if one had the objective of preserving nuclear peace at a one-in-four chance of failure in the period from 1945-2013, the requirements would be breathtaking: a 1,450 base rate, with accidental and willful base rates hovering around 2,900, the latter reaching back almost to the time of the Phoenician invention of the alphabet in 1,000 BC. All of these nuclear reliability requirements may exceed human capabilities.

The temptation when looking at war is to attribute cause to the aggressor and to employ a detailed explanation of causes. But this temptation is completely misleading when assessing long-term probability of a nuclear use, which can come in any year and which can be influenced by numerous causes. Even if each of the causes could be modeled with precise distribution curves, all of the different causes melded together tend to become substantially random. The problem reduces itself to assessing the likelihood of a nuclear use for any reason, whether the aggrieved party is justified or not.

As Lewis Fry Richardson, in Statistics of Deadly Quarrels, long ago observed, “Because of the fickleness of alliances and the flitting of distraction and opportunities, different pairs of states behaved in a manner statistically independent of one another when the facts are collected from an interval of at least a century” (Jeanes, F&S, p. 64). Given that the base rate reliability levels required in each nation to preserve nuclear peace are usually centuries long, we can be confident that components can be treated as largely independent and random, especially well-suited for k-process analysis.

Further, using failed-states nomenclature, only three of the nine current nuclear-weapon nations are ranked as stable. The instability in the remainder likewise suggests that “when the facts are collected from an interval of at least a century,” the data can be treated as largely independent and random, well-suited for k-process analysis.

Tversky and Kahneman have pointed out that we can easily be led astray when intuitively evaluating complex systems, especially when we anchor our opinions to a recently preceding frame of reference:

“A complex system (e.g., a nuclear reactor or a human body) will malfunction if any of its essential components fail. Even when the likelihood of failure in each component is slight, the probability of an overall failure can be high if many components are involved. Because of anchoring, people will tend to underestimate the probabilities of failure in complex systems” (Jeanes, F&S, p. 43).

Nuclear peace is such a system. It depends on numerous components not failing.

All of the components.
This snapshot serves to illustrate how components are mathematically interconnected. Here the composite base rate (Reluctance Level) is shown at 295, based on the following individual national base rates: 875 for Britain, 760 for France, 580 for the United States, 450 for China, 270 for Russia, 240 for Israel, 220 for India, 199 for Pakistan, and 150 for North Korea [295 is the geometric average, see spreadsheet green tab “Formulas”].

[Image: Nukefix screen]

Under these circumstances, the median years to a nuclear use is 22.7 years. The curve shows probability of a first use increasing over a 100-year period.

Now to explain international components that can produce the 295 base rate.

RLWillful, the Willful Use component, is shown with a base rate of 882. RLWillful is subdivided into Deterrence and Civility with base rates of 50.1 and 17.6 respectively.

Deterrence (DT) is subdivided into Pure Deterrence, Decision Error, and Irrationality, shown at base rates of 155, 105, and 249. A Deterrence failure can include irrationality (as in a psychiatric event), a decision error (faulty reasoning), or a deliberate intention to be the aggressor (a pure deterrence failure). A failure of any of these subcomponents causes Deterrence (DT) to fail.

On its own, Deterrence failing is not enough to cause a nuclear attack. Civility must fail along with it. In the snapshot, in the Deterrence-Civility mix, primary reliance (74%) was placed on deterrence.

Civility is subdivided into Morality, Friendship, War-Weariness, and Absolute Controls, shown at base rates of 4.8, 2.6, 1.3, and NA (not available). Absolute Controls in the form of international agreements do not now prevent the use of nuclear weapons. Thus it is shown with a NA base rate. The most potent agreement has been the Nonproliferation Treaty, which prevents the acquisition of nuclear weapons in the first place.

In contrast to many of the components in the snapshot, the Civility components are inhibitory. Any of the Civility subcomponents can prevent a nuclear use. Only when they all simultaneously fail is there a nuclear use. The inhibitory components are characterized by being able to be multiplied together to produce their combined degree of protection:

CV = ML * FR * WW * AB
RLWillful = DT * CV
RLWillful = DT * ML * WW * FR * AB

For example, when Civility has a base rate of 17.6, all that would be required in the above formulas would be for Morality, Friendship, War-Weariness (ML, FR, WW) to have each been equal to 2.6 [2.6=17.6^(1/3)]. In the snapshot primary emphasis (74% of inhibition) was placed on deterrence (DT) at a 50.1 base rate and Civility at 17.6, which produced RLWillful via the simple calculation: 882=50.1*17.6 (Jeanes, F&S, p. 657).

The military position is that Morality, Friendship, and War-Weariness have been insufficient to provide adequate protection. That position is correct. Their base rates are typically low. In one moment, morality will preach “love thy neighbor” and soon thereafter, urge vanquishing a perceived adversary.

Notwithstanding, nuclear peace does not require that primary reliance be placed on nuclear deterrence. Some of the values can be relatively low. It only requires that when all the inhibitory components (DT, ML, WW, FR, AB) are multiplied together, they produce a required RLWillful base rate. Nuclear peace relies on that simple multiplication.

In military lexicon, C3I stands for command, control, communications, and intelligence, shown here in the snapshot with base rates of 1,769, 1,574, 1,946, and 1,822. A fatal failure in any of the four C3I components results in an accidental nuclear use. Even an infinitesimal probability of any C3I component failing presents enormous risk. To illustrate the great risk of accident, with the base rate for accident at 442 and the base rate for a willful use at 882, the composite base rate for a nuclear use (willful or accidental) is the 295 base rate shown in the snapshot.

To maintain the 442 accidental base rate requires that on average, the base rate for each of the four components be about four times larger, averaging 1,768 (stretching back to the beginning of the Middle Ages). The burdens to maintain reliability are so enormous that even an optimistic assessment suggests that the chance of an accidental use is often greater than the chance of a willful use. That such a large number of accidental components exists tends to also increase randomness in the short term.

Sophisticated nations, in the main, seek to tightly manage accident components and have well-developed methods for modeling chance of failure. Not all nuclear nations are so able. The misfortune with a strong nuclear deterrent is that it can foster an accidental nuclear attack from an adversary who wrongly interprets its opponent’s intentions. It is the opponent’s command, control, communications, and intelligence structure that determines whether the sophisticated nation is “accidentally” attacked. No matter how elaborate a particular nation’s anti-accident systems are, it is at the mercy of its adversary’s anti-accident capabilities. Flight times to target from neighboring nations, from submarines, and from other sources often compress decision-making into nine minutes or so. Under such conditions, the chance of accident is heightened (Jeanes, F&S, p. 221). Even under less stressful conventional-war conditions, in 1988 the United States, misidentifying it as an F-14 Tomcat fighter, shot down an Iranian passenger plane, killing its 290 occupants while the plane was on its customary and unthreatening flight path within a commercial air corridor.

[Figure: Pulse train of nation-years]

In sum, a nuclear use can be caused by a willful decision or an accident. In the illustration here, the pulse going low indicates a failure. RLWillful refers to a willful nuclear use; RA, an accidental use. RL refers to reluctance level, which is the same as the overall base rate (Jeanes, F&S, p. 173-3, 268-398).

Accidental use (RA) is a stimulatory component. If command is delegated to numerous military officers, then the chance of any one of them initiating a nuclear use increases. If anyone within the chain of command has the capacity to initiate a nuclear use and if he does so contrary to the wishes of leadership, then a command failure occurs.

A command failure also occurs when the command structure is subverted and an unauthorized person initiates an attack.

A control failure refers to a weapon being inadvertently detonated or falling into the hands of people (e.g., terrorists) who detonate it. Bombs, for example, may be stored in foreign countries, such as the Netherlands, that may be beyond the sphere of tight control in the owning nation. Or a control failure can result on your own territory. A recently revealed secret report indicates that in January 1961 a 4-megaton nuclear bomb, far larger than any now in service and "packing 260 times the explosive power of the weapon that decimated Hiroshima," nearly detonated when a US bomber on a routine flight crashed near Goldsboro, N.C. Three safety mechanisms designed to prevent accidental detonation failed. "Only a low-voltage switch kept it from exploding upon impact. ... Had the warhead exploded, radioactive fallout could have spread over the Eastern Seaboard, hitting Washington, Baltimore, Philadelphia and New York."

A communications failure can occur when an electronic signal malfunctions and is wrongly interpreted as a reason to launch a nuclear attack, as when radar electronics detect what is believed to be an incoming weapon requiring a retaliatory response. An intelligence failure refers to an event such as a report from a spy or other source that wrongly indicates consequences so serious that a nuclear use is initiated.

A recent example of an intelligence failure that started a conventional war is the false weapons-of-mass-destruction justification for the 2003 Iraq war – an intelligence failure only a decade old as of this writing, in a domain where nuclear weapon decision-making would need to hold failures to roughly one in thousands of years. Each of these accident subcomponents also presents additional opportunities for failure. A complete failure in any of them results in a nuclear use (Jeanes, F&S, p. 203-268).

Stimulatory vs. Inhibitory Components.
There is a vast difference between stimulatory subcomponents (working separately to increase the chance of a nuclear use) and inhibitory subcomponents (working together to preserve nuclear peace).

In short: with stimulatory subcomponents, any one of them failing can cause a nuclear use; with inhibitory subcomponents, any one of them holding can prevent it.

With stimulatory subcomponents, the average for all subcomponents is a multiple of the central component. For example, with four subcomponents in Accident (RA), the average subcomponent would require a base rate four times greater than RA. Thus with an RA of 442, multiply 442 by 4 to get a 1,768 average base rate for the four subcomponents [1,768=442*4; 442=1,768/4]. This occurs because if any of the subcomponents fail, then the primary component RA also fails, and in this instance, there is a nuclear attack.

Inhibitory subcomponents, in sharp contrast, are the root of the central component. All the inhibitory subcomponents must fail together for the primary component to fail. Take the case of RLWillful. It is made up of DT, ML, WW, and FR together. With RLWillful at 882, each of these four subcomponents (including deterrence) could average 5.45 and still deliver the 882 base rate [5.45=882^(1/4); 882=5.45^4].

With Accident (RA), average subcomponents are “*4” (stimulatory); with RLWillful average subcomponents are “^(1/4)” (inhibitory).
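
For readers who prefer to see the arithmetic spelled out, here is a minimal sketch in Python of the two combination rules just described. It uses the 442 and 882 snapshot figures from the discussion above; the function names are mine and the code is illustrative, not the spreadsheet's own formulas.

    # Stimulatory subcomponents: any one of them failing causes the central component to fail.
    # With n subcomponents each at base rate b, the central base rate is roughly b/n,
    # so the average subcomponent must be about n times the central component.
    def required_stimulatory_average(central_base_rate, n):
        return central_base_rate * n            # e.g., 442 * 4 = 1,768

    # Inhibitory subcomponents: all must fail together for the central component to fail.
    # Their base rates multiply, so each need only average the n-th root of the central component.
    def required_inhibitory_average(central_base_rate, n):
        return central_base_rate ** (1.0 / n)   # e.g., 882 ** (1/4) is about 5.45

    print(required_stimulatory_average(442, 4))   # 1768
    print(required_inhibitory_average(882, 4))    # about 5.45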

We will again return to this concept. Once you have nailed it down, you are in a good position to engineer nuclear peace. Now to make sense of it with additional examples.

Fixing it. The bundle of sticks. The three-legged stool.
We have looked at numerous components and have seen many which require astronomically high base rates to preserve nuclear peace. In this class of components, the high values are necessary because if any one of them fails, then nuclear peace fails. There is a nuclear strike. Each of these functions rather like a leg on a three-legged stool. If any leg breaks, the stool tumbles.

On the other hand, there is another class of components – inhibitory subcomponents (DT, ML, WW, FR, AB) – in which all of them must fail together before a nuclear use occurs. These have much lower base rates. Like a bundle of sticks, if one of these momentarily fails, it still remains difficult to break the bundle. And the larger the bundle, the more difficult it is to break.

Mathematically, these inhibitory subcomponents (DT, ML, WW, FR, AB) are handled differently from the no-component-can-fail subcomponents. They are literally multiplied together, as shown in the following formulas that calculate the base rate (RL) and RLWillful. Notice that deterrence (DT) is just one of the inhibitory subcomponents. It has no sacred standing. Increases in any of the other inhibitory subcomponents can just as readily extend nuclear peace and increase RLWillful:

RL = 1 / (1 - (1 - (1 / (DT * ML * WW * FR * AB))) * (1 - 1 / CM) * (1 - 1 / CO) * (1 - 1 / CU) * (1 - 1 / IN))

Like the sticks in a bundle, morality, war weariness, and friendship (ML, WW, FR) are weak, often in the neighborhood of 2.6. Yet when their product is multiplied by deterrence (DT) at 50.1, the willful-use base rate skyrockets to 882:

RLWillful = DT * ML * WW * FR * AB
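
As a check, here is a minimal sketch that plugs the snapshot values quoted above into the two formulas: DT at 50.1; ML, WW, and FR at the approximate 2.6 mentioned in the text (so the result lands near, not exactly at, 882); AB set to 1 on the assumption that no additional absolute control is present in the snapshot; and the four C3I base rates of 1,769, 1,574, 1,946, and 1,822.

    DT, ML, WW, FR, AB = 50.1, 2.6, 2.6, 2.6, 1.0   # inhibitory subcomponents; AB assumed absent (1.0)
    CM, CO, CU, IN = 1769, 1574, 1946, 1822         # stimulatory C3I accident subcomponents

    RLWillful = DT * ML * WW * FR * AB              # about 882, the willful-use base rate
    RA = 1 / (1 - (1 - 1/CM) * (1 - 1/CO) * (1 - 1/CU) * (1 - 1/IN))   # about 442, the accident base rate
    RL = 1 / (1 - (1 - 1/RLWillful) * (1 - 1/CM) * (1 - 1/CO) * (1 - 1/CU) * (1 - 1/IN))   # about 295

    print(RLWillful, RA, RL)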

[Table: To Double Years of Nuclear Peace]
Now think deeply. If you were to introduce any additional independent inhibitor, even a weak one, say with a base rate of 2, RLWillful doubles. You go from an RLWillful of 882 to an RLWillful of 1,764. This may not be intuitive, but it takes hold in the math, and more significantly, in reality [1,764=882*2]. Keep this in mind.

But, as shown in the table here, the new RLWillful at 1,764 would have added only 4.5 years of nuclear peace when accidental use is at 442.

Accidental use, in this representation, is such a potent adverse influence that we would have derived 11.3 years of nuclear peace by doubling it alone, leaving RLWillful untouched.

As the table tells us, doubling both RA and RLWillful brings, at the median, an additional 22.7 years of nuclear peace, effectively doubling the years of nuclear peace [formulas and calculations can be seen in the spreadsheet, green tab “Formulas”].

All of these assume a nine-nation nuclear proliferation.
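
The table's year figures can be approximated with a short sketch. It assumes nine nuclear-weapon nations, the composite base rate computed from RLWillful and RA as above, and median years to a first use taken as ln(2) times the composite base rate divided by nine; those assumptions reproduce the 4.5, 11.3, and 22.7 year differences quoted, but the spreadsheet's own formulas are authoritative.

    import math

    NATIONS = 9

    def composite(rl_willful, ra):
        """Base rate for a use of any kind (willful or accidental)."""
        return 1 / (1 - (1 - 1/rl_willful) * (1 - 1/ra))

    def median_years(rl_willful, ra):
        """Approximate median waiting time to a first use among NATIONS nations."""
        return math.log(2) * composite(rl_willful, ra) / NATIONS

    base = median_years(882, 442)                   # about 22.7 years
    print(median_years(1764, 442) - base)           # about  4.5 years gained by doubling RLWillful
    print(median_years(882, 884) - base)            # about 11.3 years gained by doubling RA
    print(median_years(1764, 884) - base)           # about 22.7 years gained by doubling both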

There may be a broad antipathy toward nuclear weapons, but it is not a deep antipathy. Deep changes depend as much on the populace as governments. In the main, it is rare when a government moves significantly ahead of its people.

If there were a deep taboo in the public and governmental psyche, then we would find a TB in the formulas to signify a taboo. It does not exist. A taboo, if it were an additional independent inhibitor, say with a value of two or higher, would have been sufficient to have brought a doubling of RLWillful. And it is possible to introduce other independent inhibitors, including Absolute Controls (AB), that can increase RLWillful even more.

The interesting point is that it is well within the capacity of nations to engineer a nuclear weapon system that is considerably less dangerous. When the size of nuclear weapon systems is hugely reduced, there are two primary consequences: the chance of accidents can greatly fall, and the severity of impact, even if a nuclear weapon is used, is reduced.

If proliferation is also reversed, that, too, reduces the chance of a nuclear use, as has been repeatedly shown here and in the formulas.

As long as weapons are embraced as a means of strength, rather than deplored as a taboo, progress towards reducing nuclear weapons risk will be stymied. The numbers and techniques you see on these pages and in the downloadable spreadsheet make it possible for you to see how a safer system can be engineered.

It is by no means an easy act to bring about in the real world, which too often has an enthusiasm for nuclear weapons, rather than a repugnance for them. If the public, diplomats, negotiators, and military do not get the math right, how can they get negotiation right?

In a similar context, I cannot help but think of a remark Stimson made regarding the massive deaths of World War I:

“War as a desperate and horribly destructive test of the whole fabric of civilization was war unthinkable in 1914.

“If younger Americans found it hard in later years to reconstruct a proper image of life before the first war, many of their elders faced the same problem in reverse” (Jeanes, F&S, p. 636).

The chance of a nuclear catastrophe can be considerably reduced by carefully engineering reliance on a variety of protective components and rolling back proliferation. A deterrence posture can be reduced, chance of accident reduced, and the various inhibitory components structured to create a far safer world.

The math is plain. The options are plain.

Probability accumulates.
Probability accumulates over time. When nations fail to take fundamental corrective action to preserve nuclear peace for the long term, but instead wait indefinitely before taking appropriate action, a nuclear use, given sufficient time, approaches certainty. Even highly optimistic base rates do not nullify this, as the math amply illustrates. The following table reviews how nuclear weapon history and the near term future appear at the discussed base rates.

[Table: Nuclear Weapon Spreadsheet]
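
Reading such a table is simply a matter of turning elapsed nation-years into k. Here is a minimal sketch, assuming the 417 nation-years of history mentioned earlier, a constant nine nations going forward, and the frequently discussed 360 base rate; the spreadsheet's own table is authoritative.

    import math

    def chance_of_use(nation_years, base_rate):
        """k and the cumulative chance of at least one use: 1 - 1/EXP(k)."""
        k = nation_years / base_rate
        return 1 - math.exp(-k)

    print(round(chance_of_use(417, 360), 3))              # history through 2013: about 0.69
    for horizon in (50, 100, 200):                         # additional years at nine nations
        print(horizon, round(chance_of_use(417 + 9 * horizon, 360), 3))   # climbs toward certainty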

Proliferation again.
Proliferation is key. It takes exceptionally high base rates to preserve nuclear peace in a world with many nuclear-weapon nations. The above table illustrates the danger of a nuclear use even among ostensibly peaceful nations (having up to a 360-year base rate). Nine to ten nuclear-weapon nations ultimately constitute a danger. If in the real world the upper limit of nuclear peacefulness is bounded by a 360 base rate, then the probabilities of a nuclear use are greater than those shown in the 360 column above. Similarly, if the base rate falls between any of the three levels of nuclear peacefulness shown above, probability can be found by bracketing between them.

[Table: Proliferation compared – 10, 8, and 2 nations]
For example, decade by decade, the table here uses a constant 270 base rate (the midpoint between 360 and 180) for all nations to illustrate the probability of a nuclear use, whether it be in a ten or eight-nation proliferated world, or in a classic two-nation Cold War scenario [values shown at the end of each decade; spreadsheet green tab “k-Process DATA”]. A ten-nation proliferation would include North Korea and Iran. Eight-nation proliferation would not.

A dictatorship and aggressive militaristic dynasty such as North Korea presents extreme risk. Unlike Iran, North Korea already has nuclear weapons. Its government presents deeper and more vile problems than customarily encountered in the long history of nations.

In addition to the math, it seems prudent to appreciate the circumstances of the North Korean people and the extreme conditions of their prison system (virtual gulag/death camps), and perhaps components which over time may ameliorate difficulties. … Very difficult problem in any case … Hopefully the leaderships and people in the relevant nations can come to a satisfactory outcome.

Even if tolerable adjustments can be made now, they would not seem to alter the central concern that even eight-nation proliferation (without North Korea or Iran) presents high risk, which over the long term only modestly differs from ten-nation proliferation. The reduction of two nations (Iran and North Korea), albeit beneficial (especially since they appear to have low base rates), is not a cure.

[Graph: Probability of Occurrence in Remaining Lifetime]
Your lifetime.
If you should want to consider the possibility of a nuclear use in your lifetime at a current nine-nation proliferation level, you can do so using actuarial tables. The difference between your present age and the expected length of your life would serve as the period under consideration. At nine participants and at stated base rates, the graph here illustrates the likelihood of occurrence during your lifetime [spreadsheet green tab “k-Process DATA”].

This same graph can also illustrate the chance of attaining an important desirable goal, as in the Kablona Quest, when a lifetime of persistence is applied for beneficial reasons. Again, it is well to keep in mind that probability can be employed in both instances (positive or adverse outcome) to work to your advantage.

When the objective has a positive outcome, you can apply a sensible dose of persistence: lowering base rates and increasing number of participants to boost the likelihood of achieving the goal. When the event is highly adverse, as in a nuclear use, you can work to increase the level of protection by boosting accidental and willful base rates. And, you can work to reduce the number of nuclear-weapon nations.
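
The same arithmetic applies to a remaining lifetime. A minimal sketch, assuming nine nations and an illustrative 40 remaining years (the actuarial figure itself would come from a life table):

    import math

    def lifetime_chance(remaining_years, base_rate, nations=9):
        k = nations * remaining_years / base_rate
        return 1 - math.exp(-k)

    # e.g., someone with 40 years of expected life remaining
    for base_rate in (360, 180, 90):
        print(base_rate, round(lifetime_chance(40, base_rate), 2))   # about 0.63, 0.86, 0.98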

 

Monte Carlo.

Until this point, the discussion has involved working primarily with fixed base rates such as 360, 180, and 90. A Monte Carlo Simulation can accommodate changing base rates and illustrates why, even when many nations are exceptionally peaceful in the vast majority of years, the resulting base rates are too frequently disappointing.

A Monte Carlo Simulation provides a way to illustrate and test k under conditions that are entirely and unpredictably variable. It also has the advantage of reaching far beyond years that we would be exposed to in our lifetimes. I have included both the data from the simulation and its computer code in the spreadsheet [green tab “Monte Carlo Simulation DATA”].

As previously mentioned, computers can readily generate random numbers (using the RND function). The Monte Carlo simulation used here generated random base rates between 1.05 and 2,945 in each year; each nation in each year drew a completely random base rate within this range. The process was analogous to nations standing before an immense roulette wheel and annually drawing randomly placed numbers ranging from about 1 to 2,945, the draw representing the nation’s nuclear-peacefulness base rate for that year.

A first inclination might be to say that a roulette wheel would present a mean and unfair interpretation of the ability of people to sustain nuclear peacefulness. Not so. Consider that there are 2,945 base rate slots on this wheel representing a wide variety of levels of peacefulness nations could choose from. The full range extends up to the upper limit 2,945-year base rate. Half of the base rates are greater than 1,473. Only one-in-eight is less than the 360-year base rate that is so frequently discussed here. In all of these instances, nations would appear nuclearly pacifistic. Only one-in-43 draws produced a base rate less than the number of years in which we have already had nuclear weapons [1,473=2,945/2; 8.18=2,945/360; 42.68=2,945/69]. Only once in 2,945 instances is a nation so warlike that it tends to initiate a nuclear use within a single year. In sum, the roulette wheel presents an overwhelming preponderance of probabilities that each nation in each year will be ostensibly highly peaceful.

The simulation, at nine nations per year, created 100 cases, each with 200 years of data. This method produced 180,000 nation-years of data (far more than we would ever have historical access to) and under conditions that may exceed real world variability [180,000=9*100*200].
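
The spreadsheet's own code is not reproduced here, but the process can be sketched along the following lines: each nation in each year draws a random base rate from the 1.05 to 2,945 range, and a second random draw decides whether a use occurs that year at the drawn rate. The running "base rate" printed at the end is simply elapsed nation-years divided by uses observed; the spreadsheet may compute its ending averages differently, so treat this as an illustration of the mechanics rather than a reproduction of its results.

    import random

    NATIONS, CASES, YEARS = 9, 100, 200
    LOW, HIGH = 1.05, 2945                       # the "roulette wheel" of peacefulness base rates

    def run_case(rng):
        """One simulated history: count uses and return nation-years per use."""
        uses = 0
        for year in range(YEARS):
            for nation in range(NATIONS):
                base_rate = rng.uniform(LOW, HIGH)    # this nation's draw for this year
                if rng.random() < 1.0 / base_rate:
                    uses += 1
        nation_years = NATIONS * YEARS
        return nation_years / uses if uses else float("inf")

    rng = random.Random(2013)
    ending_rates = sorted(run_case(rng) for _ in range(CASES))
    print("median ending base rate:", round(ending_rates[CASES // 2]))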

Here we have a Monte Carlo Stress Test of pure variability. It gives an opportunity to test k against conventional statistical product series calculations under a wide variety of conditions. The results delivered by k and conventional calculations were identical. More importantly, the simulation gave an opportunity to visualize how the ultimate base rate tends to vary over time in the face of pure randomness.

The pens of historians and commentators might be predisposed to allot among nations the outcomes of nine turns of the roulette wheel in each year, giving the lowest values to nations they judge to be least peaceful. But when the objective is to determine the probability of a first occurrence, it makes no difference which nation receives a draw in the simulation. They are all mathematically treated the same.

[Graph: 2013 Random Distribution of Base Rate]
Potentially a wide variety of base rates in 2013.
As shown in the graph, in the year 2013, the simulated base rates varied from 54 to 659. This variability was caused by randomness within the 100 samples generated by the program. About one in four cases (25%) had a base rate below 275. The average was a 360 base rate. The median, the midpoint, is 368 at 50% on the x-axis. As you can well imagine, the histories that can be delivered under such a wide variety of base rates are apt to be vastly different – all of which can easily result from the hand of chance alone.

But even the most optimistic of all the 100 simulations did not produce cause for cheer. The best outcome was a 659 base rate. By 2013, a 46.9% probability would have resulted from it – dangerously near the median expected years to a nuclear use [k=417/659, where 417 is elapsed nation-years; 46.9%=1-1/EXP(k)]. The low 54 base rate would have been catastrophic at 99.96% probability [k=7.72=417/54; 99.96%=1-1/EXP(7.72222)]. Either of these extremes could have been explained by chance alone, albeit as outliers with a chance of about 1%. Both dangerous.

[Graph: Insufficient Sample Size]
What a surprise. Base rates fall.
Another trend appears, which we likely neither anticipated nor imagined. In the early years when there was little data, the results were hugely variable and early base rates were far higher than in subsequent years. Even at the 360-year average in 2013, we see base rates continuing to descend, albeit more slowly as years progress. Under this randomness, the system did not become more peaceful. To the contrary, it became increasingly dangerous before leveling off.

Kenneth Waltz has written that “deterrence has worked 100 percent of the time. After all, we have deterred big nuclear powers like the Soviet Union and China. So sleep well.” This is the first and most serious error: failing to consider sufficient sample size.

Kahneman warned us about being biased and failing to consider sufficiently large sample sizes. When high reliabilities are required, these sample sizes must be huge. Prior to 1980, the graph was showing base rates in excess of 400. By the year 2013, we still would not have had enough empirical evidence; the 360 average in 2013 is just a point on a downward curve. A hundred years later, the numbers would be in the neighborhood of 315 or less.

Ever-eager political theoreticians and commentators would have had ample opportunity to fashion stories around whatever causes and reasoning they might deem sufficiently plausible to fit the curve. In any such story, the author could easily and amply pontificate on explanations that would produce the number of years of nuclear peace to date. In 1962, at the time of the Cuban Missile Crisis, a base rate of 500 or higher might have looked like an example of deterrence working. But as base rates continue to fall to an apparent steady state of 256, such enthusiasm can be expected to dwindle.

From the outset, there has been too much confidence in feeble reasoning. If I may offer a suggestion in that regard, I again call to your attention Kenneth Waltz, a past president of the American Political Science Association: “Nuclear weapons make wars hard to start. These statements hold for small as for big nuclear powers. Because they do, the gradual spread of nuclear weapons is more to be welcomed than feared” (Jeanes, F&S, p. 4). [See counter arguments here and in spreadsheet, green tab “k-Process DATA”]

[Graph: Relative change of base rate over the long term]
Relative base rates.
Normally when we discuss sample size, it is as if we could simply create a larger sample and then develop rational opinions based on real-world evidence. Nuclear weapons do not afford this opportunity. As shown in the simulation, they require an immense sample size. In the graph, base rates continually fall until reaching a presumed steady state hundreds of years from now. Nuclear peace, acting in a random process, cannot now be reliably judged empirically because the evidence before us would suggest higher base rates than the ultimate base rates that later characterize the system.

In the graph, the year 2013 (with the empirical evidence of 417 nation-years in a random system) would have suggested a base rate about 1.4 times higher than an underlying extended-term 256. Our history to date would have been insufficient to prepare us.

The teaching example. How far base rates can fall.
As dreadful as the context of this discussion is (the likelihood of a nuclear use), it offers a fine teaching example that is applicable to waiting time problems. When we talk about “steady state” conditions which appear to exist after a long period of time, we find that the language is vague and, in the main, a term of convenience. Let’s think about the upper range of choice that was the first step in the simulation: the 2,945 base rate. The third row in the following table shows the k-value that defines the relationship between the upper limit base rate and the average base rate in the last year [UpperLimit k = UpperLimitBaseRate / BaseRateInLastYear]. With these k-values, the reliability and the outlier rows (expressed as one chance in so many) are estimated.

[Table: Upper limit base rate, probability, and outliers]

At the end of 200, 400, and 1000 years of sample data, the Monte Carlo simulation delivered what appeared to be approximately steady state base rates of about 281, 255, and 249 as shown in the table. But which of these, if any, is the true steady state? If we take the above table to be accurate, there is no precise steady state. The base rate would seem to continue to decline to about 150, or lower at the one-in-300+ million outlier point.
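
The table's reliability and outlier rows appear to follow directly from k. A minimal sketch, assuming (consistently with figures quoted in the text, such as one-in-135 at a 600 base rate and one-in-98,716 at k = 11.5) that the outlier column is EXP(k) and reliability is 1 - 1/EXP(k):

    import math

    UPPER_LIMIT = 2945

    def table_row(last_year_base_rate):
        k = UPPER_LIMIT / last_year_base_rate
        outlier = math.exp(k)            # "one chance in ..." cases beyond the model
        reliability = 1 - 1 / outlier
        return k, outlier, reliability

    for base_rate in (600, 281, 255, 249, 150):
        k, outlier, reliability = table_row(base_rate)
        print(base_rate, round(k, 2), round(outlier), round(100 * reliability, 4))
    # 600 gives roughly one-in-135; 281 about one-in-35,600; 255 about one-in-104,000;
    # 150 pushes the outlier past 300 million.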

That the rate could decline to a level as low as 150, when it was otherwise far higher in the majority of cases, should not come as a complete surprise. Consider that there is no logical impediment that would prevent the roulette wheel, when spun through its 2,945 slots, from delivering numbers consistently less than (or close to) 150. Nor would anything prevent a sufficient number of very low draws from lowering the average, given a sufficiently large number of nation-years [for example, always drawing numbers that approximately averaged the lower twentieth of the scale; 19.6=2,945/150]. Such occurrences may be inordinately rare, but at the already suggested one-in-300+ million outlier level, they may begin to manifest themselves. That we already observed a substantial drop in ending base rates as nation-years approached 9,000 in the simulation, with outlier values concomitantly increasing greatly, suggests that within the range of outliers there always lurks a yet more unusual distribution ready to drive the average base rate lower.

So it may well be that there is no definitive average base rate, but merely ending averages that exist for different sample sizes. Even the notion of an “adequately” large sample size is itself a continuously moving target.

At some point, a decision must be made to draw the line at a definitive outlier value beyond which we no longer consider the data. In many statistical problems, the numbers 95% or 99% impute a quality decision. However, at such reliability levels, we would effectively be saying that one-in-20 or one-in-one-hundred cases are outliers beyond the scope of the model [95%=100%-(1/20); 99%=100%-(1/100)].

Arguing for such lax standards would imply that average base rates in this model are in excess of 600 – a case which frankly appears difficult to support, either mathematically or ethically, given the horrendous consequences if wrong. Deadly and catastrophic consequences require high standards, as is implied by the table, often at reliability levels as high as 99.999%.

Applied statistics requires that a judgment call be made. There is no magic precise point where we can say: “consider only this sample size and ignore the rest.” Choosing the outlier point at which to no longer consider the data is a call that each person must make according to his or her own thinking and objectives. As shown in the table, the last year average base rate is a moving target because the outliers (here meaning cases not considered) rapidly change. To consider all but one-in-135 cases (as nuclear weapon proponents are apt to urge) is to be disproportionately risky. Considering all but one-in-35-thousand cases, or all but one-in-103-thousand cases implies a vastly different approach to reliability standards.

In a nine-nation proliferation world, let’s focus on two columns in the above table: the columns with outliers at one-in-35,612 and at one-in-103,675, at base rates of 281 and 255, and at 21.6 and 19.6 median years to first use. A cutoff point near one of these two columns would seem to be appropriate. A substantial leveling in base rates and median years to first use was occurring within this range.

When we talk about it.
In our daily lives, we are apt to give far greater consideration to the clothes we wear, the car we drive, and the choice of our favorite beverage than we do to nuclear weapons. The likelihood of their being used or the extent of resulting catastrophe that might come from them is apt to be put on a back burner or altogether expunged from our minds.

This stance is probably evidenced in the reactions, ranging from mild forbearance to outright disdain, that we may direct toward unceasing protest and activist nuns, and toward those who unswervingly oppose nuclear weapons [an aside: observe that the nun’s activity should have been a C3I-accident-vulnerability teaching experience, and that no malice was intended or carried out; instead, under the expanded definition of terrorism since the Patriot Act, it was treated as terrorism]. Peace Pilgrim, who walked alone and penniless more than 25,000 miles across America to inspire others, now seems lost in time, as are her words:

“We seem always ready to pay the price for war. Almost gladly we give our time and our treasure — our limbs and even our lives — for war. But we expect to get peace for nothing. We expect to be able to flagrantly disobey God’s laws and get peace as a result. Well, we won’t get peace for nothing … We’ll get peace only when we are willing to pay the price of peace. And to a world drunk with power, corrupted by greed, deluded by false prophets, the price of peace may seem high indeed. For the price of peace is obedience to the higher laws: evil can only be overcome by good and hatred by love; only a good means can attain a good end.”

On one hand, we have game theorists and a professional class who may argue that being right in all but one in 135 instances is good enough (corresponding to the one-in-135 outlier at the 600 base rate column in the table). On the other, the voices from the fringe do not couch their case in reasoning that is now likely to win arguments. But, as outliers themselves, Peace Pilgrim and those like her may have a special insight into how probable outlying events, such as those considered here, tend to be. At least they seem closer to accuracy in their caution than those who have unbridled confidence in nuclear weapons.

Rare events.
Judith Selvidge, writing about rare events, observed:

“Under what circumstances should a person worry about the probability of rare events? Generally, events having very unlikely outcomes can simply be ignored by decision makers. However, when the consequences of the event are sufficiently important, it becomes worthwhile to try to determine how unlikely the event is. By assigning a value to such probability, the decision maker can then incorporate the possible occurrence of the rare event, along with the more common events, into a formal analysis of the situation and choose among alternative actions in a rational way. Appropriate actions may include developing contingency plans for dealing with the rare event, should it occur (rather than saying ‘it couldn’t happen’), or working to reduce the probability of the event if the outcome is undesirable.” …

“A decision maker responsible for setting safety standards for nuclear power plants, for instance, cannot avoid considering the possible occurrence of some extremely unlikely events whose outcomes entail the release of radioactive material from the power plant into the surrounding countryside. . . . In the specialty insurance business, an important rare event would be the total destruction of a $100 million bridge” (Jeanes, F&S, p. 72).

As is already abundantly evident, the base rates needed to preserve nuclear peace for extended periods must be breathtakingly high. Consider the requirement in the model that at least one nation must be able to deliver an upper limit base rate of 2,945. Such a base rate is apt to require an upper limit willful-use base rate of 5,890 and a corresponding upper limit accident base rate of 5,890. Drilling down further yet would require an average upper limit for all four C3I accident components of an astonishing 23,560 years [with accidental and willful use being equal, 5,890=2,945*2; 23,560=5,890*4; see spreadsheet green tab “Monte Carlo Simulation DATA” for calculations in table].

With regard to the base rates required to support nuclear peace, the upper limit in the Monte Carlo model must be roughly 11.5 times the presumed steady state value. To preserve nuclear peace at a 360 base rate or higher into the distant future would have required a 4,140-slot Monte Carlo roulette wheel at the outset [4,140=360*11.5; k= 11.5; reliability= 99.9990%; outlier= 98,716; see k in table].

The vital point is that safety is driven by the most peaceful nations. If, in their best efforts, their systems are incapable in their best years of reaching approximately a 4,140 base rate reliability level, then it follows that they cannot indefinitely support a steady state nuclear peace at a mere 360 base rate. To drive base rates to adequately high levels requires keen focus on best performance. And best performance can often be improved in unanticipated ways, for example a Canadian farmer can have a huge influence.

When this discussion began, we saw 360 base rates delivering unhappy outcomes. Our inclination then may have been to regard the presentation as being highly artificial and mechanistic. Part of that argument was correct. The discussion was too mechanistic. It centered on a fixed 360 base rate. Common randomness in the simulation shows that as sample size increases, average base rates fall. Statisticians refer to this as “regression to the mean,” an unfortunate double entendre in this instance.

Tidbits for Kahneman aficionados.

Daniel Kahneman, History and Rationality Video.
English begins at 02:25; note presentation at 10, 19, 30, 34 (WWI), and 38 minutes.
Written remarks and sections of the above.

Actors who are susceptible to hawkish biases are not only more likely to see threats as more dire than an objective observer would perceive, but are also likely to act in a way that will produce unnecessary conflict. ... For a more concrete image, suppose that a national leader is exposed to conflicting advice from a hawk and a dove. Our contention is that cognitive biases will tend to make the hawk’s arguments more persuasive than they deserve to be.

 

Human Lifespan Explainable as an Ailment Proliferation Phenomenon.

Using a Monte Carlo simulation, human life can be modeled to produce death rates identical to those appearing in USA actuarial tables. However, a few changes are required.

At age 60 for men and age 69 for women, human lifespan can be explained by a base rate of 375. At those ages alone, the 375 base rate applies. Coincidentally, it is quite close to the 360 base rate frequently discussed in the nuclear weapons case. But to use a fixed base rate, or even one that declines as expected according to sample size, will not work.

The reason: the human body, in its capacity to sustain life, is astoundingly, abundantly, and beautifully remarkable. Before humans reach old age, on average, a reasonably well cared for human is extraordinarily unlikely to die. At about age 10 they achieve their lowest chance of dying. Thereafter the chance very slowly increases. It isn’t until reaching roughly age 60 or higher that probability of death rapidly increases.

Perhaps in day-to-day life, when caring for a person of advanced age, you may have found a hint as to why the human chance of death can be modeled to produce the same outcomes found in actuarial tables. Say you know an elderly man of about age 86, or a woman of about age 90, and you saw the cluster of 10 pills that they took daily.

Pills.
The number of pills may be the clue. Although people face diseases that increase in severity as they age, they also face a proliferation of ailments – some of which may be identified by the number of medications they receive or the medical procedures they undergo. In addition, other afflictions may remain untreated, regarded as the normal consequence of lifestyle or aging.

The proliferation of ailments model that follows does not fully or accurately account for true real world conditions, but it does produce identical death rates. Thus it can illustrate how multiple afflictions – all of which have ostensibly low probabilities of causing death – can indeed combine to produce customary death rates.

In the context of the theme presented here (proliferation of components all having the same base rates), the model just substitutes “afflictions” for the words “nuclear-weapon nations.” Because humans are so resistant to death, the Monte Carlo upper limit base rate is adjusted upwards about one and a half times higher than in the nuclear weapon case: 4,395= 1.49*2,945.

[Graph: Proliferation of afflictions]
As you can see in the spreadsheet, from age 60 to age 90, each affliction would have base rates that fell between 694 and 767. Such annual chances of death would seem to be extraordinarily small. Yet, via a proliferation of afflictions, these base rates combined to produce actual base rates experienced from age 60 to age 90 [in the model, these actual overall base rates ranged from 49 to 652 for this age group].

From age 10 onward, the continuous application of probability in a roulette-wheel manner ate away at starting probabilities, but it was only with the additional force of proliferation of afflictions that real world USA death rates were produced.
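
To see how ostensibly small per-affliction chances combine, here is a minimal sketch. It assumes each affliction acts independently at its own base rate; the 700 figure and the affliction counts are round illustrative stand-ins for the model's random draws between 694 and 767, not its actual data.

    def combined_base_rate(affliction_base_rates):
        """Overall annual base rate of death from a set of independent afflictions."""
        p_survive = 1.0
        for b in affliction_base_rates:
            p_survive *= (1 - 1.0 / b)
        return 1 / (1 - p_survive)

    print(round(combined_base_rate([700] * 3)))    # about 234: three mild afflictions
    print(round(combined_base_rate([700] * 10)))   # about 70: ten mild afflictions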

In the present model we see that the first potentially life-threatening afflictions begin at about age 24 for a man, but that in the actuarial tables a woman would not experience an equal chance of death until age 57. In terms of this model, she does not even reach one life-threatening affliction until then.

Throughout her life she would show fewer afflictions than her male counterpart. By the time she reached 90 she would have reached ten afflictions (similar in concept to a ten-nation nuclear proliferation). Her male counterpart would have reached ten afflictions at age 86, perhaps going through a sequence of ailments similar to those shown in the following table with cascading afflictions increasing chance of death.

Potential Afflictions.
Top causes of death in the U.S. include heart disease, cancer, stroke, chronic lung disease, accident, Alzheimer’s, diabetes, influenza and pneumonia, kidney disease, blood poisoning, suicide, liver disease, hypertension, Parkinson’s disease, and homicide. More stealthily than common identifiable causes of death, afflictions can come as the performance of the body and its organs decline.

[Table: Male afflictions in old age]

Organs, such as the following can be involved: brain, colon, gall bladder, heart, large intestines, liver, lungs, pancreas, rectum, small intestines, spleen, stomach, adrenal glands, anus, appendix, bladder, bones, bronchi, ears, esophagus, eyes, genitals, hypothalamus, kidneys, larynx, lymph nodes, meninges, mouth, nose, parathyroid glands, pituitary gland, salivary glands, skeletal muscles, skin, spinal cord, thymus gland, thyroid, tongue, trachea, ureters, urethra, and so forth.

Opportunities for failure literally fill volumes, as attested to by the existence of medical libraries. In short, there is ample opportunity for a proliferation of low-grade risks, many of which (e.g., grey hair) are not even in the top causes of death listed here.

None of the afflictions individually in the model would have presented greater than a one-in-694 annual chance of causing death. All of this might have led a person to believe that death coming from any of them was highly unlikely. On the contrary, as indicated, such afflictions together present a risk of death equal to that commonly experienced in real life.

[Table: Aged Base Rate Death by Disease]
Just as we can look at the life of an individual and the afflictions that he or she may face, we can look at the death process that we together face, as is done in the table here.

Usually disease death rates are reported as deaths per 100,000. For purposes of this discussion, that method is a bit convoluted. Instead, it is more practical here to use base rates. Full lifetime data, too, is a bit pointless, since only 6.2% of deaths occur prior to age 43 for males, and age 57 for females.

For this reason, the focus is on the base rates for the aging group in the last column. Those base rates were derived by dividing the adjusted aged population by the number of people who died in each year from each of the afflictions. A case in point, heart disease: 212 = 126,374,182/596,339.

From the table (Source), we see that, for the aging group, numerous afflictions, many with vanishingly low probabilities of occurrence, indeed do combine to produce high risk: a one-in-50 chance of death annually. We also see that the group of Non-listed Causes is one of the top three killers, hinting that other unnamed afflictions together are also potent.

You may ask, how do the above base rates compare to the 694 base rate in the proliferation of afflictions model? The top three categories present about 3.1 times greater risk. After that, the following categories presented far lower risk: lung disease – Alzheimer’s, 66%; diabetes – blood poisoning, 27%; and liver disease – Parkinson’s, 15%.

The base rates in the table would have produced 2,512,873 deaths, just as occurred in the United States in 2011. You can calculate this for yourself using the method shown in the heart disease example above. Reminiscent of the nuclear-weapons case, the 16 disease groups shown together also illustrate the influence of proliferation traveling hand-in-hand with ostensibly low probabilities. Also by way of comparison, heart disease at a base rate of about 212 would present a slightly lower rate of occurrence than the frequently discussed 180 base rate in the nuclear case.
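
The method from the heart disease example can be written out in a few lines. Only the heart disease figures quoted above are used; the remaining rows would come from the table's source data, so this sketch does not reproduce the full 2,512,873 total.

    AGED_POPULATION = 126_374_182          # adjusted aged population from the table

    def base_rate(deaths_per_year):
        """Base rate = people per one annual death from this affliction."""
        return AGED_POPULATION / deaths_per_year

    def deaths(base_rate_value):
        """Invert the same relationship to recover annual deaths."""
        return AGED_POPULATION / base_rate_value

    print(round(base_rate(596_339)))       # heart disease: about 212
    print(round(deaths(212)))              # backing out the deaths: about 596,100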

As for disease, some may choose to be inattentive to potential risks and greet afflictions with pills and medical procedures when they arrive. But as in the nuclear case, success (to the extent that it is possible) can come from increasing base rates for each component and reducing the degree of proliferation. In the human case this can be carried out by surveying the potential for afflictions to arise in the first place – the object being to eliminate them or stave them off in order to have a long life and, failing that, at least to undertake the challenge with the objective of increasing disability-free years near end of life.

That genetic components have influence is beyond dispute. That epigenetic components influence how genes express themselves is also beyond dispute. The history of disease is an ongoing march of how humans have altered lifespans in populations with substantially unchanged genetic makeup.

Looking at preventable causes of death worldwide we find: smoking, hypertension, poor diet and physical inactivity, alcohol consumption, high cholesterol, infectious diseases, malnutrition, toxicants, traffic collisions, firearm incidents, sexually transmitted infections, drug abuse, indoor air pollution from solid fuels, and unsafe water and poor sanitation.

A diligent, life-long, unswerving attention to counter each of these is warranted. But if truth be told, there is probably another item that ought to be added to the list of preventable causes. One of the top killers in the U.S. is pneumonia. If one were to be mindful about proliferation of causes of death, he or she might have gotten a pneumonia vaccine (often relatively inexpensive and perhaps lasting for a lifetime). By largely removing pneumonia, proliferation may be reduced by one for some people.

One of the greatest afflictions that we can face is the “affliction of inattention and inaction.” A large number of preventable causes stem from being insufficiently attentive to adverse events, which may not express themselves as serious causes of death or disease as we incur them, but which in time do produce or contribute to death.

Managing a risk.
As a preventative measure, the frequent cry is for diet and exercise. But like many probability events, there is considerable ambiguity about what is happening and how the problem might be managed. The standing feature of the human body is that it is remarkably efficient. It takes very few calories to fuel it. It is nearly impossible to measure caloric intake each day within 50 calories, yet over time a daily fifty-calorie misjudgment can be sufficient to noticeably push one’s weight upwards. It is similarly difficult to measure exercise (informal and formal) with precision.

Rather than measuring either, perhaps it might be best to adjust both based on a single starting principle: the average number of calories you burn per day divided by your body weight. For example, if your naked body weight is 177 pounds and you consume an average of 2,300 calories a day at a stable weight, you burn about 13 calories per pound. Thus, you would now be in column 13 in the following table [13=2300/177]. Click here for the metric table.

The operative portion of the following table is the row showing listed calories per pound. The more general terms, from “Completely Inactive” to “Gorilla Active,” will vary from person to person and according to sex. To reach the extreme case of “Gorilla Active” usually requires a level of caloric expenditure that is at least equivalent to a daily 56 minutes of vigorous walking uphill, vigorous swimming, shoveling, or playing basketball or football (based on calculations derived from data in the CMDT 2005, p. 1215). At the other extreme, “Completely Inactive” is approximately equivalent to sleeping or reclining.

[Table: Body weight, caloric intake, exercise relationship]
If your objective were to work down to about 146 pounds, by increased exercise and lower caloric consumption you could move your weight down somewhere within the yellow highlighted region. It would be an exercise in nudging body weight down toward 146 pounds, employing a nutritionally plentiful, but not calorically dense diet, and moving exercise levels upwards.

This is an illustration of decision-making under uncertainty. Once you reach the sought-after, medically appropriate weight, you would at least know the degree of exercise and caloric consumption necessary for you to sustain yourself at the desired weight over the long term.

Since most processed foods contain salts, fats, sugars, and inducements to consume more of them, the path to a suitable diet and body weight can be expected to be difficult. Appropriate body weight changes are usually gradual and require nutritionally effective natural foods. To seek to shortcut the process by using methods other than exercise to drive the “calories per pound of body weight” upwards is to make your body less efficient and is usually decidedly deleterious – smoking, alcoholism, and quack medications, for example. Short of vigorous exercise, it is necessary to eat 2,000 calories or less to meet the 146-pound objective in most of the cases shown.
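
The table's arithmetic can be sketched in a couple of lines. The 177-pound, 2,300-calorie example and the 146-pound target are the ones used above, and the only assumption added is the one already made in the text: that weight is stable when intake matches expenditure.

    def calories_per_pound(daily_calories, body_weight_lb):
        """Your column in the table: average calories burned per pound of body weight."""
        return daily_calories / body_weight_lb

    def maintenance_calories(target_weight_lb, cal_per_lb):
        """Daily intake needed to hold a target weight at the same activity level."""
        return target_weight_lb * cal_per_lb

    col = calories_per_pound(2300, 177)            # about 13
    print(round(col))
    print(round(maintenance_calories(146, col)))   # about 1,897 calories to hold 146 pounds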

There is a nice Okinawan expression “Hara hachi bu,” which means, “eat until you are 80 percent full.” The default condition for humans is to be hungry. For this reason getting down to an appropriate weight (and more importantly sustaining it) requires a considerable dose of realism and discipline.

[Graph: Proliferation of afflictions in old age]
The multiplier.
The prior graph for number of afflictions and human age (duplicated here) can be read in another manner. It can be used to illustrate how rapidly the probability of human death increases as we age. The affliction proliferation level roughly indicates how much greater your chance of death is in old age relative to the risk at age 43 for a male, or age 57 for a female (both of these ages are represented at a proliferation level of one). When the multiple is 10, your risk of death is about 10 times greater than it would be at the base-year starting point at age 43 or 57. Proliferation tends to be a multiplier of risk.

This brings us around to the essence of proliferation. It is frequently a multiplier of the original chance of occurrence. When the objective is beneficial, the chance of success can be increased by the multiple. When the objective is deleterious, the chance of adversity similarly increases.

If, as proposed at the outset, the number of pills taken at an advanced age implies the number of afflictions, then the proliferation model might offer a plausible explanation for death rates experienced. More encouragingly, if you observe people at similar ages with few ailments, it may suggest that they are predisposed to healthier long lives. No real world ailment intensity data or its relation to human life span was used here, but in a proliferation context, it is certainly a rich opportunity for study.

Reconsidering proliferation.
Looking at human longevity exclusively through the prism of proliferation departs from reality at least to some degree because a portion of the risk comes from proliferation and a portion comes from the intensity of affliction. With detailed real world data, it would be easy to apportion risk between the two because whenever the proliferation estimate exceeded real world proliferation, the fractional difference between the two could be attributed to increased intensity of affliction. In industrialized countries, about 75% of deaths in old age come from heart disease, cancer, and stroke. The options would be to weight the intensity of these three more heavily or to subdivide them into subcomponents more in line with the proliferation approach.

This narrative has sought to illustrate similarities between proliferation of ailments and nuclear weapon proliferation. Each affliction was modeled with a base rate almost twice as high as in the nuclear weapon simulation [1.93=694/360]. This would seem to illustrate that longevity of nuclear peace at a ten-nation proliferation level is precarious – hardly preferable to the condition faced by a 90-year-old lady with ten ailments.

 

Severity Unfolding.

With probability, you can also assess magnitude of effect, whether it be positive or negative. I will make a few remarks about war and nuclear weapons for illustration and then happily move on to a more positive domain without returning to that topic.

The Passage, Rights and Wrongs – Teddy & Friends.

For a change of pace, let’s look at an immense change that occurred more than a century ago.

Vast and long-lasting changes can occur in a heartbeat – such as war replacing what seemed to be everlasting peace. Long ago, the lifelong Republican and U.S. Secretary of War during World War II, Henry Stimson, wisely remarked (as previously indicated in part):

“In 1898 came the Spanish war; it caught me napping. Until then, the United States had passed through a period of profound peace ever since the end of the Civil War. Not only had we been free from strife ourselves, barring occasional small affrays with Indians on our western frontier, but during that time there had been no wars in the outside world of enough importance to attract much popular attention. The century was apparently closing with a growing extension of democracy, freedom, and peace throughout the world, I can remember that that was my feeling. The thought of preparing oneself for possible military service hardly entered my head.” …

“War as a desperate and horribly destructive test of the whole fabric of civilization was war unthinkable in 1914.

“If younger Americans found it hard in later years to reconstruct a proper image of life before the first war, many of their elders faced the same problem in reverse” (Jeanes, F&S, p. 636).

It is not difficult to see why Stimson expressed such feelings. In 1893, 27 million people attended Chicago’s Columbian Exposition during its six-month run – this in a nation which at the time had only 66 million inhabitants. About one-in-three were there. Imagine it. It was the to-do event of the era.

Its scale and grandeur symbolized a newly emerging nation. To a population that had seen practically none of these things, nor imagined their scope, electricity and heretofore unimaginable products were introduced that would transform the nation (p. 42-45, 51-58). The euphoria of progress was palpable and much welcomed. It far outdistanced London’s 1851 Crystal Palace Exhibition, the prior standard bearer (p. 6-13, 21-26). This might well have set the tone for the new age. But restlessness and recklessness did in such peaceful hopes.

The story is well told in Evan Thomas’ The War Lovers: Roosevelt, Lodge, Hearst, and the Rush to Empire, 1898.
At first blush, the title might seem to be an exaggeration. It certainly is not. Times then were less politically correct. Theodore Roosevelt clearly spoke his mind and wrote his thoughts.

As Thomas points out, “Roosevelt ... worried about ... the race becoming ‘over-civilized’ ... .The solution ... would come from tapping into more primitive instincts, the kind brought out by sport, especially by hunting, and most of all by war.” (p. 43) “In his more bellicose moods it sometimes seemed that just about any war would do.” In 1889 Roosevelt would talk of a “bit of a spar with Germany” and comment, “While we would have to take some awful blows at first, I think in the end we would worry the Kaiser a little.”(p. 59)

[Image: The New York Times, Dec. 18, 1895]
In 1894 Roosevelt wrote that he longed for “a general buccaneering expedition to drive the Spanish out of Cuba, the English out of Canada.” (p. 60) In 1895 he wanted to go to war with England over a border dispute she was having with Venezuela and looked forward to “the conquest of Canada.” (p. 68-69, 99) By March 1895 he was lamenting, “there won’t be a war with Spain” (p. 61). He spoke plainly of newspapers having “made long and vicious attacks on me for my jingo speech the other night.” (p.65) [Jingo meaning: advocating use of threats or war against other nations.] And in private correspondence he stated, “Being a Jingo, ... I would give anything if President McKinley would order the fleet to Havana tomorrow.” (p. 213)

Brands in The Reckless Decade similarly observed:

“No triumph of peace,” Roosevelt proclaimed, “is quite so great as the supreme triumphs of war.”

It dismayed Roosevelt that not all of his contemporaries shared his high opinion of the martial virtues. In fact, many even of Roosevelt’s friends thought him a little loony on the subject. A Harvard classmate remarked, “He would like above all to go to war with someone.” This college chum added that Roosevelt “wants to be killing something all the time.” William James, who had taught Roosevelt at Harvard … went on to say that Roosevelt praised war “as the ideal condition of human society, for the manly strenuousness which it involves,” and that he treated peace “as a condition of blubber-like and swollen ignobility, fit only for huckstering weaklings, dwelling in the gray twilight and heedless of the higher life.” … George Smalley, a British journalist who followed Roosevelt’s career closely, remarked … “Roosevelt is as sincere as he is intemperate.” ...

A member of the naval affairs committee of the House of Representatives remarked, following a visit by Roosevelt, then assistant navy secretary: “Roosevelt came down here looking for war. He did not care whom we fought as long as there was a scrap.” (p. 292-293)

In Evan Thomas’ telling, Czar Reed, Speaker of the House, was the interesting counterpoint with high integrity and deep concern for the nation. He would have struck one as an unlikely hero – 300 pounds of smooth flesh that deplored exercise and was acerbic to a fault. He was completely unromantic about war. He had lost his best friend in the Civil War.(p. 113) As Thomas relates, Reed “argued that heavy defense spending invited conflict”(p.188) ... “He had a keen understanding of the darker side of human nature; he deplored war fever ... he did not wish to see the United States playing the role of the world’s policeman ... There was plenty of human suffering to attend to at home, Reed figured.” (p. 120)

If you were a common worker who sweated through a 10+ hour day building the railroads and the grand edifices at the turn of the century, or were simply seeking to get through your life – still suffering from the 1893 financial panic and depression – you may have felt that your life was already sufficiently strenuous and not especially in need of a war to bring vitality to it.

But Theodore Roosevelt was of a different fiber. He was of an extremely wealthy family. Even after having lost much of his money in his North Dakota ranch (p. 48) that failed in the horribly bitter winter of 1886, he was one of the ten most wealthy American Presidents – an estimated wealth of about $125 million in today’s dollars. He could have sat on his buns his whole life and lived comfortably. But that was decidedly contrary to his disposition and his parents’ ethic.

Further, he had some conquering to do, physically and emotionally – both of which he did by overcompensating. He was nearsighted and had a frail and asthmatic body to overcome. He did so through bounding, everlasting strenuousness, which frankly was usually harmful or dangerous to himself, at least the way he did it. Take boxing, for example. From childhood on, he would do it, boasting of his bleeding and the various injuries inflicted upon himself and his opponent.

If Roosevelt did it, it was done to the extreme. In one of his boxing matches at the White House, he suffered a retinal detachment. It blinded him for life in his left eye. It was one of the few events he kept secret. His extended masculine adolescence and big game hunting might initially strike one as Hemingwayesque, except that in Roosevelt’s case, it was done his way – with one blind eye and killing (with the help of friends) a thousand large animals. His vigor seems related to how many of us men seek to define ourselves. It is not clear-cut. We somehow must link some portion of our mind – our self-concept – with how we physically act and behave, especially how we perform under danger. Roosevelt’s rite of passage was continuous.

He took it at a blind run. Even today, our culture is influenced by his picture of the world. Take the cowboy story. The myth of the West was enhanced, if not largely created, by three upper-class friends, Ivy-League Easterners: Roosevelt in his book(s) The Winning of the West (he was a prolific author); Owen Wister, the writer of Western novels, including The Virginian; and Frederic Remington, the painter and sculptor of iconic Western images.(p.53) So successful was this threesome that the cowboy hat we so frequently see in films may not have been the true hat of the West at all. According to some, it was the Derby hat, not the cowboy hat, that won the West. But, if truth be told, it seems that there was a vast variety of hats worn in the West.

Roosevelt’s writing, sporting, and professional life retained a void that still needed to be filled: “One of the commonest taunts directed at men like myself is that we are armchair and parlor jingoes, who wish to see others do what we only advocate doing.” (p. 245) He wanted to go to war. The Spanish American War (Cuba) gave him the opportunity to set off to hunt, as his letters state, “the most dangerous game” – man.(p. 300)

When Roosevelt went to war he did it in his own particular way. He was not a soldier. He was Assistant Secretary of the Navy, a post from which he resigned to form The Rough Riders. It was a brief but serious expedition. He went with Marshall, his servant; Bradshar, his orderly; and Davis, a key newspaper reporter. When he charged the hill, he rode wearing a gaudy blue and white scarf that marked him as a ready target. As Davis wrote, “No one who saw Roosevelt take that ride expected him to finish it alive.” (p. 324) He continuously exposed himself to extreme danger, shouting “Holy Godfrey, what fun,” (p. 326) reveling in his fortunately minor bullet wounds, and urging his troops on. His assault was on Kettle Hill, not San Juan Hill (which was captured by General Hawkins). Both are in fact relatively small hills that, without guns firing at you, you could easily walk up in a few minutes. Davis conflated the two.(p. 349) Neither history nor newspapers bothered much about the distinction.

Those who saw Roosevelt’s charge did not see another battle taking place in his mind. His much admired father had paid a soldier to fight in his place in the Civil War.(p. 22) Roosevelt needed to expunge that blot. It had been an emotional struggle which, like his other “defects,” he overcompensated for.

A more thorough reading of his father’s role in the Civil War paints a picture far from shirking duty. Though his father did not serve as a soldier, he was responsible for creating and advancing the Allotment Commission, which enabled soldiers to designate a portion of their pay to go to their families, so that the families did not become destitute from the absence of a soldiering husband. He was away from home for years at a time, often at the front, signing up soldiers. By taking these self-sacrificing – even noble – actions, he preserved his own family (his wife being of delicate health and a staunch Southerner), and he served the nation and its soldiers – hardly a blot on one’s character.(p. 56-64) Roosevelt’s father died of cancer at age 46 when Theodore was 19. Father and son, as adults, never had an opportunity to resolve the matter.

Roosevelt’s battle was not benign. It was part of an overall view espousing imperialism, and it broke from a U.S. worldview that had focused almost entirely on the continental United States. Roosevelt and his best friend, Henry Cabot Lodge, were the authors of this new view. Within the span of a few years, it involved the taking of Hawaii,(p. 204-228) the war in Cuba, and the war in the Philippines (234,000 deaths). U.S. President Cleveland opposed this approach, as did Czar Reed.

Lodge and Roosevelt championed Mahan’s 1890 book, The Influence of Sea Power upon History, 1660-1783, as a means to extend American influence. Their excited interest in war and in an extended scope of U.S. influence alerted Asia and Europe. The book was placed in the wardrooms of German ships and made a text at Japan’s naval academy.(p. 71) That spread of the doctrine did not seem to help much. Nor did it help that Roosevelt’s like-minded grandson, CIA operative Kermit Roosevelt, removed Iran's prime minister, Mohammed Mossadegh (who remained imprisoned and then under house arrest until his death), and put the Shah in full power. The consequences in the Middle East have been profoundly adverse to this date.

Theodore Roosevelt’s war-making bellicosity largely evaporated when he became president. As president, he is known for the Panama Canal, Trust Busting, National Parks and forests, and a legion of other positive acts. He is less well known for his progressive New Nationalism, which advocated a minimum wage for women, the 40-hour work week, and Social Insurance to provide for the elderly, the unemployed, and the disabled. It was not until some 20-plus years later, during the Great Depression, that many of these measures were enacted. As to the human side, he had a family of truly devoted, interesting people.

Roosevelt’s position was, “Peace comes not to the coward or to the timid, but to him who will do no wrong and is too strong to allow others to wrong him.” (p. 40) Such expressions stretch back as long as history has been written. They have a ring of sincerity and reason. But as preventative measures, they are too frequently feeble.

WWI.
Within 16 years of Roosevelt’s charge up Kettle Hill, the largest war ever known commenced with impressive firepower to back it up. It came with what we now call “Deterrence,” which they then called the “Armed Peace.”

Few of us children of the post-WWII era are aware of it, but Europe in 1914 was amply armed. France had also taken extensive defensive measures to impede an aggressor, building a chain of frontier fortifications decades before the better-known Maginot Line.

After roughly 50 years of relative worldwide peace, World War I commenced unimpeded. From Barbara Tuchman’s The Guns of August and the introductory chapters of John Maynard Keynes’ The Economic Consequences of the Peace, one gets the picture of an advanced pre-WWI world of cordiality and technological progress that offered a high degree of comfort. One could leisurely sit at one’s desk, make telephone calls, easily engage in financial transactions such as stock purchases, read papers bringing prompt news from distant places, travel vast distances by railroad or ocean liner, and watch the monarchs of all Europe gather for state funerals. In sum, it was a world exhibiting great organization, trade, and opportunity, where summer vacationers were suddenly obliged to return from abroad because their respective countries had unexpectedly been transformed into warring enemies.

World War I was less a war to decide compelling vital interests than a drift into war driven by diverse secondary conditions which, in the absence of any means to inhibit them, ran a course of unparalleled destruction. Leaders of the aggressor nations displayed ambivalence and even a desire to stop the war at its outset but were unable to do so. The major powers of Europe were drawn into a general conflict that none had desired and few had foreseen, at least consciously. The world stood horrified as a moral society evaporated into savagery when Germany invaded Belgium.

As Barbara Tuchman related in The Guns of August, “on the first day of the invasion, the Germans began the shooting not only of ordinary civilians but of Belgian priests, a more deliberate affair” (Jeanes, F&S, p. 549-50).

Belgium was a neutral country with such close ties to Germany that its Queen was of a German family. The Germans vented their fury with mass executions of 150 civilians in Aerschot and 664 in Dinant. Tuchman’s recounting included the following:

“At Tamines some 400 citizens were herded together under guard in front of the church in the main square and a firing squad began systematically shooting into the group. Those not dead when the firing ended were bayoneted. . . .” At Dinant: “They were kept in the main square til evening, then lined up, women on one side, men opposite in two rows ... . Two firing squads marched to the center of the square, faced either way and fired till no more of the targets stood upright. Six hundred and twelve bodies were identified and buried including Felix Fivet, aged three weeks. . . .” (Jeanes, F&S, p. 565).

The recitation of these facts is not to impugn Germans. Now 100 years later, I believe you will find Germany to be an exceedingly admirable nation, and its citizens, at least within the realm of my personal experience, to be kind, interesting, and able. All of this together may help us to appreciate that, with base rates for major wars being almost uniformly in excess of 180 years, vast changes within nations do occur. The changes are unpredictable. Few, if any, nations have untarnished hands.

In the 1890-1900 decade, as Roosevelt had done, the United States went through a rite of passage. The nation was presented with a choice between two options. It could take the Czar Reed, Grover Cleveland, Chicago Exposition approach and place its focus on growing a new, technologically constructive nation. Or, it could take a path that reveled in war, historical tales, and myths such as those based on Sir Walter Scott’s Ivanhoe,(p. 16, 321) which were so appealing to Senator Lodge and the younger Theodore Roosevelt. Instead of making a clean choice, the nation blended the options, with the former and more peaceful option suffering. Rather messy. To this day there is a temptation "to do a Roosevelt."

Lodge, a net negative contributor to mankind.
Senator Henry Cabot Lodge was a vain, educated man of limited scope and high self-opinion – an armchair jingo. Margulies, in The Mild Reservationists and the League of Nations, described Lodge as a man “with his thumbs in his armpits and with a copy of Shakespeare in his pocket” who spoke in rasping tones words which were “often bitter.”(p. 11) Happily, Lodge was cuckolded with the delicate assistance of some of his close associates.(p. 129) Both Roosevelt and Lodge had disproportionate faith in the capacity of a well-armed military to prevent war. Lodge’s view was especially damaging because it preempted any potential for an organization like the League of Nations to forestall war.

It was specifically through Lodge’s hand that the United States never became a member of the League of Nations. That action was especially inappropriate because World War I had begun with nations drifting into war without any vital interests at stake. It was also inappropriate because, as we were soon to find out, the causes of World War II gradually grew in the years following the Treaty of Versailles.

The League would have been an ideal means to address such issues before they boiled over. Speaking indirectly of Lodge and writing in the third person, Stimson wrote,

“The rejection of the League was ... the greatest error made by the United States in the twentieth century ... For the men who hated the very notion of a League, Stimson would not speak so kindly. They must have been sincere, but it was a sincerity of purblind and admitted nationalistic selfishness, a sincerity of ignorant refusal to admit that the world changes, a sincerity embittered in almost every case by a hatred of the foreigner. It was the sort of sincerity, in short, from which wars are bred. And it bred one.” (Jeanes, F&S, p. 575-6)

 

The Severity.

There is a fine line in the novel Moby-Dick that warrants attention:

“‘I will have no man in my boat,’ said Starbuck, ‘who is not afraid of a whale.’ By this, he seemed to mean, not only that the most reliable and useful courage was that which arises from the fair estimation of the encountered peril, but that an utterly fearless man is a far more dangerous comrade than a coward” (Jeanes, F&S, p. 511).

Potential onset of war requires a fair estimation of the peril.

[Graph: 1882-2012 Average War Deaths Worldwide]

The degree to which Roosevelt and Lodge were wrong about the capacity of the military to forestall war can be seen in the graph. Death rates trebled [3= 450,000/150,000]. As Stimson pointed out, war deaths worldwide were very low prior to 1914. From 1853 to 1896 war deaths worldwide averaged about 150,000 per year. The great change, which Czar Reed so much wanted to prevent, came later.

In conventional war, about ten civilians die for each soldier that does. War is an expensive and self-indulgent way to achieve a Theodore Rooseveltian glory. Roosevelt lamented over-civilization. In the alternative, you have war. The consequences of the martial virtues can be quite precisely measured [spreadsheet green tab “WarDeaths”].

World War I and II are the huge spikes in the graph. From 1939 to 2000, excluding World War II, average annual worldwide war deaths ran to about 450,000 per year. In a world of seven billion people, that averages out to about a one-in-15,500 chance of death each year [15,556= 7,000,000,000/450,000]. This may look low, but over a lifespan of 80 years it amounts to roughly a worldwide average of a one-in-195 chance of dying from war [194.4= 15,556/80].

War is hugely serious, especially in the regions where it occurs. The conventional-war death rate, about 6.4 per 100,000 per year [6.428= 450,000/(7,000,000,000/100,000); a worldwide average subject to wide variation, as seen in the graph], can be compared with the death rates of common diseases. But war is not a smooth purveyor of death, as common diseases are. Instead, it strikes specific regions and takes with it much of the social fabric and infrastructure. We have long since passed the noble tournament and storming-of-castle stories that Sir Walter Scott’s Ivanhoe conjures up. We can step away from fiction and get precise.
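For readers who prefer to see the arithmetic laid out, here is a minimal sketch in Python of the calculations in the two preceding paragraphs. The inputs (450,000 deaths per year, a population of seven billion, an 80-year lifespan) are the round figures used above; the variable names are mine and are for illustration only.

    # Worked check of the war-death arithmetic above (round figures from the text).
    annual_war_deaths = 450_000        # average annual worldwide war deaths, 1939-2000 excl. WWII
    world_population = 7_000_000_000
    lifespan_years = 80

    # Chance of dying from war in any single year: roughly one in 15,556.
    one_in_per_year = world_population / annual_war_deaths

    # Spread over an 80-year lifespan: roughly one in 194.
    one_in_per_lifetime = one_in_per_year / lifespan_years

    # Expressed as an annual rate per 100,000 people: roughly 6.4.
    rate_per_100k = annual_war_deaths / (world_population / 100_000)

    print(round(one_in_per_year), round(one_in_per_lifetime, 1), round(rate_per_100k, 2))
    # 15556 194.4 6.43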

Deterrence requires increasing the scale of war deaths in order to thwart a potential adversary; in short, to increase the intervals between attacks (the hope being to stretch the interval toward infinity). The proposition that the interval can be vastly increased by nuclear weapons is doubtful.

It may well be that a 360 base rate, and hence roughly 36 years between nuclear uses in a ten nuclear-nation world [36= 360/10], is possible. But even if base rates could be pushed far above this by increasing the sheer size of death-dealing capacity, it seems likely that the result would be no better than the conventional-war system – 450,000 deaths per year.

The breakeven point.
The math is simple. Multiply the 36-year interval by the conventional-war annual death rate of 450,000. This equals 16.2 million deaths to preserve nuclear peace for an average of 36 years [16,200,000= 36*450,000]. Similarly, forty-five million deaths would be traded for 100 years of nuclear peace [45,000,000= 450,000*100]. These are the breakeven points: the implicit tradeoff of a large number of deaths for a long interval of nuclear peace. Only if fewer deaths were delivered could the average death rate from a nuclear attack be preferable to conventional-war death rates.
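The same breakeven arithmetic can be sketched in a few lines of Python. It assumes, as above, a 360 base rate shared among ten nuclear nations and the 450,000 conventional-war deaths per year as the standard of comparison; the function name is my own and is purely illustrative.

    # Breakeven deaths: deaths implicitly traded for an interval of nuclear peace.
    base_rate = 360                        # assumed one-in-360 annual chance of use, per nation
    nations = 10
    annual_conventional_deaths = 450_000

    # With ten nations each facing a one-in-360 annual chance, a use is
    # expected about every 360/10 = 36 years.
    expected_interval = base_rate / nations

    def breakeven_deaths(years_of_peace, annual_deaths=annual_conventional_deaths):
        # Deaths a nuclear attack could deliver and still merely match conventional war.
        return years_of_peace * annual_deaths

    print(breakeven_deaths(expected_interval))   # 16,200,000 for 36 years of peace
    print(breakeven_deaths(100))                 # 45,000,000 for 100 years of peace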

But even that shred of hope tends to evaporate. Since deterrence operates by postponing deaths, it concentrates them in a single year. In short, an acute spike of destruction – a deep shock that is far more devastating than slowly delivered adverse consequences.

[Graph: Nuclear Deterrence Breakeven Point]

The graph here shows the breakeven point for nuclear deterrence, where nuclear weapons produce a death rate equal to conventional war. In this sense, a nuclear war is not a preventive. It is an added risk over and above conventional-war deaths.

The disconnect in nuclear deterrence is that it proposes massive deaths to prevent war, yet it cannot push the base rates for willful and accidental use high enough to deliver hugely long intervals between nuclear attacks.

We have already seen that it is quite difficult to preserve nuclear peace for 36 years among ten nuclear nations. It will be all the more difficult to extend nuclear peace for 100 to 200 years. But if nuclear weapons are to be considered preferable to conventional war, this can occur only if the total deaths from a first strike and the retaliatory strike do not exceed the totals indicated by the graph line. That deaths far in excess of those numbers were anticipated during the Cold War suggests that nuclear weapons cannot produce lower rates of war deaths than conventional war.

A nuclear first strike is likely to be massive, and retaliatory strikes are apt to match it, if the victim is capable of it. Any nuclear first strike that produced a 16.2-million-death war can expect a response in kind [16.2 million= 36*450,000 annual conventional-war deaths]. The presumption is that a nuclear first strike must be sufficiently overwhelming that the adversary is unable to retaliate. This logic tends to drive deaths to very high levels. In the 1980s, most nuclear war scenarios anticipated 1,000 to 16,000 detonations (Jeanes, F&S, p. 321). As late as the 1990s, the USA was targeting 40 nuclear weapons at the modestly sized Ukrainian city, Kiev (Jeanes, F&S, p. 305). In 2009 the US had 5,113 warheads. In 2001, the U.S. had 2,500 targets in its nuclear war plans. To the extent of their capacity, it is unlikely that other nations would be less severe in their targeting.

The average population size for capital cities in nuclear nations is 7.6 million. Bernard Lown correctly observed that, “Few societies are more susceptible to the malevolent consequences of a nuclear detonation than rich, urbanized, highly developed, industrial countries.” Only a few bombs are sufficient to create a colossal number of deaths. In China alone there are more than 160 cities with populations in excess of one million; in India, more than 40; in Russia, more than 12; in the United States, at least 9; in Pakistan, 8. The large city model adopted by industrialized nations has concentrated huge populations into small areas. The electric grids, transportation networks, and infrastructure in them are vital to sustain life. Their loss spells additional death-dealing catastrophe. Population and infrastructure are a conjoined target.

Without much notice, the largest conventional bombs, vastly more powerful than ordinary munitions though still far short of the explosive yield of the atomic bomb dropped on Hiroshima, have already been dropped on Afghanistan. And there are even larger conventional bombs. Nuclear weapons, however, even at a matching blast pressure, present greater destruction because of vast heat, fire, and radiation. They are also obviously capable of immense blast pressure and destruction never before used in war, whether by increasing the size of the weapon or by using many weapons in conjunction. And in another context, now largely abandoned, the neutron bomb was designed so that casualties, rather than infrastructure damage, would predominate. All of this is unacceptable.

[Graph: Risk of Death from Nuclear Weapons]

Probability & Severity.
Probability is intimately related to severity, so some comment should be made on the matter. If, for example, an attack were presumed to produce 7.6 million deaths (the average population of nuclear-nation capitals) and there was a 68.6% probability of an attack (as initially discussed for the year 2013 with a 360 base rate), then some nation in the nuclear club has probably already risked a minimum of 5.2 million deaths [5.2136= 68.60%*7.6; risk= probability*severity].

But if you were to argue that nuclear weapons present death rates at least as great as conventional war, then, as shown in the graph, nations as of year 2013 may have already risked 21.3 million deaths [21,300,300= 68.60%*69 years*450,000 deaths per year; at 360 base rate]. Even with casual observation, it seems obvious that the graph considerably understates risk. At the time of the Cuban Missile Crisis in 1962, the graph, in the most unfavorable case, would have delivered a risk of 3.2 million deaths. And again in year 1982 when nuclear weapons were being seriously protested, it would have delivered a maximum 14 million deaths.

In both instances, the risk of death appeared to be much higher, especially during the Cuban Missile Crisis: "In hindsight, historians put the odds for war at between one-in-three and even odds, with fatalities estimated at 100 million." Using probability-driven math, the risk for the Cuban Missile Crisis alone would have varied between 33.3 and 50 million deaths [33.3= .333*100; 50= .5*100].

To put the current proposed 21.3 million death risk into historical perspective, the current death risk is about 43% to 64% of the risk faced during the Cuban Missile Crisis [64%= 21.3/33.3; 43%= 21.3/50]. The current risk, however, is not a risk faced during a single week of crisis, but a minimal continuous risk into the indefinite future.
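The risk-equals-probability-times-severity arithmetic of the last few paragraphs can be restated in a short Python sketch. The 68.6% cumulative probability, the 7.6-million capital-city figure, the 450,000 annual conventional-war deaths, and the Cuban Missile Crisis estimates are all taken from the text above; the variable names are mine and the sketch is for illustration only.

    # Risk = probability x severity, using the figures from the preceding paragraphs.
    p_2013 = 0.686                 # cumulative probability by 2013 at a 360 base rate (from the text)

    # Severity measured as the average nuclear-nation capital city (7.6 million people):
    risk_city = p_2013 * 7.6e6                      # ~5.2 million deaths

    # Severity measured as 69 years of conventional-war deaths at 450,000 per year:
    risk_conventional = p_2013 * 69 * 450_000       # ~21.3 million deaths

    # Cuban Missile Crisis: roughly one-in-three to even odds of ~100 million deaths.
    cuba_low, cuba_high = (1 / 3) * 100e6, 0.5 * 100e6   # ~33.3 to 50 million

    # The 2013 figure as a share of the Cuban Missile Crisis risk (~43% to ~64%):
    share_low = risk_conventional / cuba_high
    share_high = risk_conventional / cuba_low

    print(round(risk_city / 1e6, 1), round(risk_conventional / 1e6, 1),
          round(share_low, 2), round(share_high, 2))
    # 5.2 21.3 0.43 0.64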

I believe you will find the graph curves in all instances considerably understate risk because nuclear weapons present a risk of death that is higher than conventional war. Conventional-war deaths serve only as a floor for expectations. Having said this, as indicated in the three curves, the prospect of risking 21.3, 28.0, or 30.7 million deaths in 2013 is sufficient warning of unacceptable risk – a risk that continues to increase and may be higher than shown.

There is ample reason for concern and ample opportunity for correction.

 

On June 12, 1982, one million people demonstrated in New York City’s Central Park against nuclear weapons and for an end to the cold war arms race. It was the largest anti-nuclear protest and the largest political demonstration in American history.

In the U.S., no powerful, major movement (governmental or civilian) since then has significantly opposed nuclear weapons.

In recent years, many elder statesmen have advocated nuclear disarmament. Sam Nunn, William Perry, Henry Kissinger, and George Shultz have called upon governments to embrace the vision of a world free of nuclear weapons, and in various sober op-ed columns have proposed an ambitious program of urgent steps to that end.

Critics of nuclear disarmament say that it would undermine deterrence.

"In 2010 a group of high-ranking Air Force officials, including its chief of strategic planning, argued that the United States needed only 311 nuclear weapons to deter an attack. Any more would be overkill."(p. 483)

 

With that, we close the discussion and move on.

 

 

Part III
The Prize.

The Road To Success Etude, allegorical map

Click here for original map

I am fond of this allegorical map.

The Kablona Quest was a story of seeking the prize.

Let’s say you are working on an endeavor ...

 

Ike Jeanes
Feb. 2013 - Feb. 2014
Pulaski, VA. 
Work in progress.
Because of needs elsewhere,
I now stop with the intent of returning.