Living: Main Street District, Dallas
Working: Main Street District, Texas
Laundry: The batcave. Just kidding.
This week in laundry I went to the dentist. America got a new president. And some people protested.
The anthology of immobility continues. As I write through another load of laundry. Hiding away in my corner of this two bedroom. An apartment not my own. My temporary refuge. The least temporary of lodgings in the last six months of my life.
Little has changed to my routine. I continue to put in eight to ten hour days on software and art projects. On makerspace classes. On game engines and computer vision.
The most remarkable moment in my personal life was a trip to the dentist. A necessity. And often neglected in the transient life of a nomad.
My impression of the week was, in one form or the other, likely the same as yours. In acknowledgement of the change of power in the United States Office of the President. In pomp, circumstance, and protest.
I saw the whole spectrum, from those engaged in celebration to those on the marching line.
Somewhere on that spectrum I see the people poised in palpable anticipation towards forthcoming deregulation. Their sentiment stands so strong that it lends the senses an impression of drool dripping from the mouth. It is the stream of something new.
The world subsists in undulation. In cycles both strong and small. Like the changing of the tides. The waves upon the shore. The drum of the dryer turning over and over. And the weekly cycle of laundry.
As it is with laundry, so too does economic policy seem to flow in cycle. With waxing moons of regulation. And waning moons of deregulation. Each, as the moon does to the ocean, pulling the tides of change in and out of shores world over.
What does it mean when we encounter one tide over the other? What are the causes of these tides of change? And what are their results?
Obviously, a great number of people find regulation important. If not, they would not band together in acts enacting regulation in the first place.
And obviously a great number of people find deregulation important. If not, they too would not band together in acts enacting deregulation.
These things happen. So why do they happen? What influences the preference for the one over the other? These simple questions yield complex answers digging at the very nature of human social behavior, power structures, and the American political system. Woe to the person who strives to tackle such complexities. I don’t expect to find full answers during a simple wash of laundry.
As a simpler approximation suitable to my chore’s timeframe, I opt to look at history. Regulation came with the railroads in the 1880s. It came with radio in the early 1900s. So did the FTC. And then in the wake of the Great Depression, it came in the form of the FDIC, the SEC, the FCC, and the Civil Aeronautics Board.
Then we saw the 1980s. The beginning of financial deregulation. Both in act, and as a cultural trend.
On the surface, one might simply assign regulation to the party of Democrat, and deregulation to the party of Republican. Such behaviors shape the brands of each party. Through such branding we create cultural alignments under each distinct story, meaning, and understanding.
And although brands, in the simple stories they tell, would have us believe in such an alignment, the truth proves far more muddled.
The repeal of Glass-Steagall, a New Deal era regulation, occurred under the 106th Congress, in 1999. Though that Congress held Republican majorities in both the House and Senate, a Democratic president signed the repeal into law. Deregulation with a Democratic President.
Equally interesting, the Sarbanes-Oxley Act, a regulation affecting public companies and accounting practices, was enacted under the 107th Congress, with a split Senate and a Republican-majority House, and signed into law by a Republican President. Regulation under a Republican House and President.
So the truth behind regulation and deregulation appears much more complicated than the party brands project on first blush.
In these cases as well, regulation and deregulation find no simple story. Peak unemployment in the 1980s, followed by a sharp decline, coincided with an era of deregulation. While peak unemployment in the late 2000s, followed by an equally sharp decline, coincided with an era of strong regulation.
So perhaps the stories behind these trends aren’t trends at all. Maybe they are unique to each situation. And maybe it depends on the type of regulation.
Regulation surrounding technological advances may differ strongly in trend from regulation focused on economics and the financial sector. When the railroads came, so too came railroad regulation. When radio came, so too did radio regulation. Then we built an airplane industry. And then we built airplane regulations. The same held for other communications advances. Telephone. Internet. And then the modern mobile phone – a telephone, radio, internet device.
In turn, so too did we find deregulation. Deregulation of aircraft ticket pricing opened the skies from a necessity of businessmen to a right of the average American. It equally enabled the average American air carrier to merge and go bankrupt to its heart’s content. An outcome of an industry requiring billion dollar equipment investments and razor thin margins.
The regulations for each of these industries share a common story. They all revolve around new and innovative technological advances. And these technologies are not limited to a personal commodity. They are tools of the modern globalization trend. These technologies improve the movement of goods, people, and information. And they require an enormous amount of standardization and collaboration. Land rights and eminent domain require priority for railroads to work. Rail gauges need to be standardized. Radio spectrum must be honored and not interfered with. Every telephone needs to work with every other. And everyone on the internet needs equipment that behaves according to specification. And politely – otherwise communication lines become saturated. Services are denied.
To a certain extent, regulation exists as a technological component of each of these technologies. Because collaboration over limited resources (land, airspace, radio waves, and the like) sits at the heart of each of them. It’s just as much a part of your cell phone as copper circuit boards and electronic bits are. Neither would exist to the modern extent without the other.
And these are the technologies at the heart of our modern technological revolution.
Regulation comes at the birth of these technologies of shared resources. It evolves as the technologies evolve. That evolution includes additional regulation. And removal of that regulation. The ebb and flow seems to move with the changes in the technology. They evolve, much as the technology does. Which makes sense, since in a way they are a part of that technology.
While that applies to the technological components surrounding shared resources, it seems somehow less applicable towards safety regulations. This form of regulation appears much more reactive. A train wreck occurs. We find the root cause. We add regulation. A plane crash occurs. We find the root cause. We enact regulation. With each accident, disaster, and death, we look for ways to prevent such events in the future.
And as it is with train crashes, so too with market crashes. Regulation in the financial sector often shows up in response to a specific failure. Sarbanes-Oxley showed up in Enron’s wake. Glass-Steagall was enacted in response to Black Tuesday.
Regardless of where one sits on the spectrum of support for these financial regulations, they undeniably exist because a failure happened. A failure significant enough to affect the national economy. Enough for people to care about what happened. And strong enough so that people made an effort to prevent the fault from occurring in the future. Regulation is that fault prevention. It is the standardized process and practice in place to avoid failure.
Given enough time, in the absence of the failure and the people who felt the pain of it, the failure is forgotten. Once the sentiment has lapsed, so too leaves the desire to mitigate it. Mitigation isn’t free. There’s a cost associated with it. Whether directly, or in lost opportunity cost. Mitigation limits risks. And taking more risks increases the potential for gains.
In the absence of that desire to mitigate, the doors open. Regulation disintegrates. The cost of risk mitigation is no longer mandatory. And the opportunity to take risks exists once more.
While that may be true for financial regulation, that’s less true for safety regulation. We no longer regulate airfare. But between the FAA and homeland security, aerospace safety regulations stack up by the mile.
I think this has a lot to do with the economic value of a human life. How much should we spend to preserve a life? And how much should we spend to destroy it? We apply these questions in both poverty and in the war of nations. In slavery and toward refugees. On the one hand, we may say all life is priceless. On the other hand, we may make and support decisions that prove otherwise.
It’s a hard thing to evaluate. That’s on top of the fact that people have a hard time intuiting risks in general. Altogether that makes for a difficult time in evaluating the balance between the cost of risk mitigation, the likeliness that a failure might occur, and the cost should the failure occur. Which, in this case, includes death. Because of that rat’s nest, I think these regulations tend to find less deregulation than others.
Somewhere in this pile of regulation and deregulation, failure responses and likelihoods, and risk mitigation, I smell the oil of the engineer. The tools of the FMEA. The behaviors of risk management and mitigation.
Failure modes and effects analysis – FMEA – grew out of US military reliability procedures and was developed further by NASA as an engineering tool to identify failures and errors, and then try to address them. In its simplest form, the exercise manifests as a clean worksheet. From NASA the tool evolved into aerospace practices. From there, through Motorola and GE, it made its way into engineering disciplines on the whole through the culture of Six Sigma.
Risk Management exercises have evolved over the years. And made their way into other forms. Not just in the manufacture of parts, like airplanes, but also in processes and behaviors built to achieve certain desirable outcomes. Or avoid undesirable ones. Which, in a way, is the purpose and behavior of financial regulations.
In the form of risk management and mitigation that I’m most familiar with, there’s a multi-step process. There’s a desired outcome. Like the safe operation of a part. Or perhaps a healthy economy with low unemployment.
There are failures, many kinds of failures, that might occur against that intended operation. A fan might break. Or a stock might crash due to misrepresented financials. For each one of these failures, we assign two scores. The first is the likeliness of the failure. The second is the impact of the failure – the cost should the failure occur.
These two scores are multiplied. If the result stands higher than the generally agreed upon level of risk tolerance, then we spend capital (time, money, or other resources) on mitigating the risk in order to lower the likeliness of occurrence.
If the risk is below our acceptable level of risk tolerance, or if we’ve mitigated to a point where it falls below our risk tolerance, then we don’t spend capital on the mitigation. The failure may still occur, and the cost will still be incurred when it does. But given the opportunity for the failure to occur indefinitely into the future, the unmitigated likeliness times the failure cost proves less expensive than the mitigated likeliness times the failure cost plus the cost of mitigation. On average it’s less expensive to fail and fix than to mitigate.
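The scoring exercise above can be sketched in a few lines. This is a minimal sketch, not a standard FMEA worksheet; the 1-to-10 scales, the tolerance threshold, and the function names are all illustrative assumptions.

```python
def risk_score(likelihood, impact):
    """Combine the two scores: likeliness of failure times cost of failure."""
    return likelihood * impact

def should_mitigate(likelihood, impact, tolerance):
    """Spend capital on mitigation only when the score exceeds our tolerance."""
    return risk_score(likelihood, impact) > tolerance

# A fan that breaks fairly often, with a moderate cost: mitigate.
print(should_mitigate(likelihood=4, impact=5, tolerance=15))  # True (20 > 15)

# A rare failure with the same cost: accept the risk, fail and fix instead.
print(should_mitigate(likelihood=1, impact=5, tolerance=15))  # False (5 <= 15)
```

The multiplication is the whole trick: it forces a frequent-but-cheap failure and a rare-but-expensive one onto the same scale, so they can be compared against a single tolerance.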
This practice works, and works well, in engineering. It helps put everyone on the same page through concrete communication. It also helps solidify an agreed likeliness of any particular failure. Which is something that we are inherently bad at intuiting. And something that we devalue over time as it leaves our present memory. Even though our memory has very little to do with the failure likeliness.
This bias helps explain why regulation comes and goes. We likely over-evaluate likeliness right after a failure, and under-evaluate it the longer it’s been since we last saw the failure. We end up in a never-ending state of oscillation, instead of implementing mitigation appropriately against the likeliness.
By looking at regulation policy as a form of risk management and mitigation we introduce another concept to regulation. Risk tolerance. In each potential failure we identify if the likeliness, combined with the cost of failure, is below our risk tolerance. If so, we don’t mitigate. If it’s above our risk tolerance, we do.
In this sense, if likeliness of failure and cost of failure remain fixed – and they often do – then the decision to regulate or deregulate – that is mitigate or not mitigate – comes down to raising or lowering our risk tolerance.
In a small company, this might be an easy metric to judge. It may be low, because failures generally mean people lose their jobs. Or it might be high, like an aggressive startup taking many risks. Most startups fail. Each company’s risk tolerance varies greatly. But it’s relatively easy to agree upon where it should be set, because it depends upon the company’s goals.
When deciding on economic policy for an entire country, this seems much more difficult to identify. Because we’re serving the goals of many different people. Any given individual might have a very high risk tolerance – due in general to both resource access and behavioral preference – and others may have an extremely low risk tolerance – again due to factors including behaviors and general access to resources.
What I find most interesting, however, is the following.
Let’s assume for the sake of conversation that every citizen in America is equal in terms of their rights to be represented in American economic policy. That would imply that their weight of risk tolerance equally applies.
So if most people have a very low risk tolerance, and a few people have a very high risk tolerance, then the median of fairness would say we should mitigate more than not. Which means regulate more than not.
On the other hand, if the risk tolerance of every citizen increases, then the median risk tolerance also increases. Decisions weigh away from mitigation. Which means more deregulation.
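The equal-weight assumption above reduces to a median. Here is a small sketch with invented numbers; the equal weighting of citizens and the single shared risk score are assumptions carried over from the thought experiment, not a model of actual policy.

```python
import statistics

def should_regulate(tolerances, risk_score):
    """Mitigate (regulate) when the risk score exceeds the median tolerance."""
    return risk_score > statistics.median(tolerances)

# Most citizens with low risk tolerance, a few with very high tolerance.
mostly_low = [10, 12, 15, 90, 95]
print(should_regulate(mostly_low, risk_score=40))  # True: 40 > median of 15

# Raise everyone's tolerance, and the same risk no longer warrants regulation.
raised = [t + 50 for t in mostly_low]
print(should_regulate(raised, risk_score=40))  # False: 40 <= median of 65
```

Note that the outliers barely matter: shifting the few high-tolerance citizens higher changes nothing, while shifting everyone moves the median directly.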
Here’s what I find interesting about this situation.
If we find a way to raise everyone’s risk tolerance, then we lower our need for risk mitigation. We have less need for regulation.
On the other hand, the more we lower everyone’s risk tolerance, the more we increase our need for risk management procedures. For regulations.
Deregulation is most appropriate when we raise every person’s risk tolerance. Regulation is most appropriate when we lower every person’s risk tolerance.
How do you raise the median risk tolerance level? In general, by increasing the median access to resources. Capital, food, social services, and health programs are all forms of resources. These are the tokens of socialist style programs. Be that social security, universal healthcare, or some other socialist program that essentially acts as resource redistribution at the end of the day.
That means that deregulation makes most sense in the presence of large amounts of socialist support systems. While regulation makes the most sense in the presence of small or nonexistent amounts of socialist support programs.
And yet, when deregulation comes, so too do cuts to social programs. And when regulation comes, so too do increases to social programs. Whether by the US political culture, the binary party system, the culture of each party, the echoing effects of the Great Depression, or all these things in combination, decreases in regulation come with decreases in risk tolerance. And increases in regulation come with increases in risk tolerance.
Where really, the opposite seems to make more sense. At least in our hypothetical supposition. Increases in regulation mean we can afford to lower risk tolerance. And decreases in regulation require a counterbalancing increase in risk tolerance.
But that’s not the way it works.
And while that may, on the surface, seem to make some amount of sense when it comes to financial regulation, this brief thought experiment feels as if it has zero bearing on the way environmental regulation works.
When it comes to the EPA and issues of environmental regulation, which attempts to address risks and failures in the environment itself, that’s a whole different mess. Because it’s so controversial.
But maybe here our tools for risk calculation might shed a little light. Even here we find likeliness of failure, and the cost of failure.
In general, the likeliness of failure seems to carry the disagreement. And because we’re humans and oversimplify everything, we prefer to view the likeliness in black and white. In terms of true or false. That environmental danger and disaster is either 100% likely, or 0% likely – it will or won’t happen.
This leaves no room for averages. Because nothing half-happens.
But that is how likeliness works. Likeliness is a spectrum of could happen. Might happen.
And while disagreement on likeliness may be divided by opposites, the cost of impact doesn’t budge an inch. Because should global environmental disaster occur, the results would be just that. Disastrous. Not in your lifetime or mine perhaps. But it would be disastrous.
Again, as humans we find it hard to evaluate risk. And we also find it hard to evaluate costs. So says the research of Dan Ariely. And we are especially bad at both when the impact isn’t immediately in front of us. This is the root of procrastination. And as humans, we are so good at it.
We’re likely quite bad at evaluating cost and risk of a disaster with impacts far into the future. So, should we even care? Or how much should we care? Where should we set our risk tolerance?
I imagine your risk tolerance level for such a future failure depends on if you have kids. And how much you like them.
Using our risk evaluation scoring system, we might see the need to mitigate our risks, through environmental regulation, differently. Without changing our core opinions.
If we believe environmental failure likeliness sits at zero, then it doesn’t matter how high the cost of failure might be. Zero times that cost is still zero – a score that says no regulation necessary. On the other hand, a mediocre likeliness, even a somewhat low likeliness, against a very high cost of failure would still indicate a preference towards risk mitigation. Towards regulation.
As long as enough citizens have children. And like them.
So even if half the nation holds the opinion of a high likeliness, and the other half a low likeliness, in the face of a high cost of failure a compromise that sets the likeliness at a mid-level would still favor environmental regulation over deregulation. According to the scores.
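That compromise argument is easy to check with the same scoring sketch. The likelihoods, the cost, and the tolerance below are invented for illustration; only the structure – likeliness times cost against a tolerance – comes from the exercise above.

```python
TOLERANCE = 50  # agreed-upon risk tolerance (illustrative)

def favors_regulation(likelihood, cost):
    """Mitigate, i.e. regulate, when likeliness times cost exceeds tolerance."""
    return likelihood * cost > TOLERANCE

cost_of_disaster = 1000  # very high cost of global environmental failure

# A believed likeliness of zero zeroes out the score: no regulation indicated.
print(favors_regulation(0.0, cost_of_disaster))  # False: 0 is not above 50

# A compromise between one camp at 0.9 and another at 0.1 lands at 0.5.
compromise = (0.9 + 0.1) / 2
print(favors_regulation(compromise, cost_of_disaster))  # True: 500 > 50
```

With a cost this high, even the skeptical camp's 0.1 alone would clear the threshold; only insisting on exactly zero escapes the math.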
But we’re humans and we believe in death before compromise. And if life is priceless, then that means compromise is even more so. Which explains why no one can afford to compromise. Apparently it’s too expensive.
While the cost of compromise may be too much for most to bear, the cost of laundry remains accessible. For a fist-full of quarters, or a favor from friends and family, we find the opportunity to wash away the past, and begin the week anew. That is the gift of laundry. And I am grateful to have it.