Tag Archives: It’s the Stupid Economy

No, Unemployment Didn’t Just Decline

Categories: A Day in the Life, It's the Stupid Economy

It went up.

Unemployment in the United States actually rose in December 2014 from the previous month, from 11.56% to 11.66%. This is largely the result of a revision to November numbers declaring the unemployment rate during that month to be 11.56% instead of the previously released figure of 11.71%. The Reporting Gap, or the gap between the BLS’ official headline published figure and the actual figure that accounts for departures from the labor force, rose to a record high 6.06%. This is now 108% of the reported figure for unemployment, also a record high. The previous high for the Reporting Gap was set in September 2014, when the Gap stood at 6.04% over a reported figure of 5.9%.
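If you want to check these relationships yourself, they reduce to two lines of arithmetic. A minimal sketch in Python, using the December figures above:

```python
# Reporting Gap arithmetic for December 2014, using the rates cited above.
real_rate = 11.66    # actual unemployment (%), counting labor force departures
reported_rate = 5.6  # BLS headline figure (%)

reporting_gap = real_rate - reported_rate        # 6.06 points
gap_vs_reported = reporting_gap / reported_rate  # ~1.08, i.e. 108% of the headline

print(f"Reporting Gap: {reporting_gap:.2f} points, "
      f"{gap_vs_reported:.0%} of the reported figure")
```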

Without the revision of November’s numbers, unemployment would actually have dropped by 0.05%. Of course, the reported figure plunged by a full 0.2% to 5.6%, the first time the figure has been that low since June 2008, when actual unemployment was a mere 6.87%. The last time unemployment was actually below 5.7% was in March 2007, when real unemployment was 5.54% and was reported to be at 4.4%.

Most importantly, the actual rate of unemployment, counting people who have left the labor force, remains 1.66 percentage points above the highest reported figure of the millennium.

While the American media has made much of a so-called “recovery” from the Great Recession that began in 2008, it continues to report befuddlement at the nature of this recovery: it is slow and protracted and does not seem to be reaching those at the bottom of the economic ladder. Confusion is constantly expressed as to why it doesn’t “feel” like a recovery and why so many people seem unable to get work when the unemployment rate has purportedly plunged to almost economically desirable levels from a supposed peak of 10%. The Reporting Gap, which directly measures the hidden unemployed who have left the labor force, can be seen as a barometer of this otherwise inexplicable feeling. Not only have wages continued to stagnate, but the actual employment situation remains dire, despite much-vaunted reports that say otherwise.

Awareness of the issue of people leaving the labor force has been increasing over the past few months and years as people scramble to figure out why happy days appear to not be here again. Given that labor force participation hit a 36-year low in December 2014, it seems unbelievable that more people are not aware of the impacts of this loophole in how unemployment is reported by the BLS.

Real unemployment has been in double digits since March 2009, marking almost six straight years in which the actual rate of unemployment has been higher than the reported peak of the unemployment crisis.

Here are your charts:

Real unemployment (red) and reported unemployment (blue), 2009-2014.

The Reporting Gap between real and reported rates, 2009-2014.

NB: The graphs above were edited on 10 January 2015 after I realized that I’d accidentally pulled October’s graphs instead. I normally don’t edit things on this page, but this seems to be a sufficiently trivial mistake and something a little misleading, so it warrants a quick change. The above graphs are now accurate, circa December 2014.


This is part of a continuing series on the under-reporting of unemployment in the United States of America.

Past posts (months indicate the month being analyzed – the post is in the month following):
November 2014
October 2014 – age assessment
October 2014
September 2014
August 2014
April 2014
December 2013 – seasonal assessment
December 2013
March 2013*
August 2012*
July 2012* – age assessment
July 2012*

*My initial analyses led to a slight over-reporting of the impact of the reporting gap, so the assessments in these posts are inflated, as explained and corrected in the December 2013 analysis.

321,000 Jobs Fail to Change Unemployment Data

Categories: A Day in the Life, It's the Stupid Economy

Nothing changed in November. At least as far as the jobs numbers go.

The BLS announced in today’s report that unemployment and labor force participation were unchanged at 5.8% and 62.8%, respectively. This means we have precisely mirrored last month’s 11.71% actual unemployment, with a Reporting Gap also matching last month at 5.91%. It’s the third straight month the Reporting Gap has exceeded the reported unemployment figure, meaning more than twice as many people are actually unemployed as the BLS claims.

The Reporting Gap is plenty disturbing, but nothing much else is new or interesting or distressing about this jobs report. Save one thing. The headline that the BLS wants you to remember about this month’s report is that “nonfarm payroll employment increased by 321,000”. The economy added 321,000 new jobs! This must be good news, right?

But if we added 321,000 new jobs and nothing changed, then what happened? How can we have such massive job growth and still have the exact same reported unemployment, labor force participation, and real unemployment?

Because 321,000 “new” jobs are just what it takes to maintain the status quo. That many added jobs merely holds steady the percentage of the population actually employed. The much-vaunted job growth, evidence of the recovery and incoming days of elation, simply represents stagnancy: the maintenance of a situation where nearly 12% of America finds itself jobless.
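A back-of-the-envelope identity shows why. If the participation rate and the unemployment rate both hold steady while the eligible population grows, employment must grow by the population change times the employed share, no matter how large that number sounds. A sketch with an illustrative population-growth figure (the household and establishment surveys measure employment differently, so this won’t reproduce the 321,000 payroll number exactly; the point is only the treadmill):

```python
# If every rate is flat, employment growth merely tracks population growth.
# pop_growth is an illustrative assumption, not a number from the BLS report.
pop_growth = 220_000  # hypothetical monthly growth in the 16+ civilian population
lfpr = 0.628          # labor force participation rate, unchanged in November
reported_u = 0.058    # reported unemployment rate, unchanged in November

# Jobs absorbed purely by population growth, with no rate moving at all:
treadmill_jobs = pop_growth * lfpr * (1 - reported_u)
print(f"{treadmill_jobs:,.0f} jobs added, zero change in any rate")
```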

Yet we find extremely misleading statements like this in the report:

In November, job growth was widespread, led by gains in professional and business services, retail trade, health care, and manufacturing.

The next eight (8!) paragraphs go on to detail the supposedly strong across-the-board growth, numerically, by industry.

Of course, this is like arguing that you have made a bunch of money because your gross receipts went up by 5% during a year in which inflation rose by 5%. There’s nothing actually there. The only changes are simply what is required to keep up with the growth and shift in population. It’s like saying “Yay, income tax revenues increased 10%!” in a year in which income levels also rose 10%. Except you actually get to keep that money if you’re the IRS. Whereas here, there’s nothing to actually show for all the allegedly huge job growth.

This is a really important lesson for processing jobs data in the future. 321,000 sounds like a big number. It’s just the maintenance level, what’s necessary to keep up with the nation’s growth as a whole. Anything less than that means the situation is actually getting worse for people. The BLS told it to you right there in the report. They just hope that you don’t think about the data for more than five seconds and then go uncork some champagne. Some champagne that you wouldn’t have bought otherwise, which might actually stimulate some fake economic growth they can crow about next year.

Here are your charts:

Real unemployment (red) and reported unemployment (blue), January 2009 – November 2014.

Reporting Gap showing the distance between real and reported unemployment, January 2009 – November 2014.

Stock market closing levels (blue) with line demonstrating Reporting Gap (red), January 2009 – November 2014. The correlation between our self-delusion about the state of unemployment and the market’s meteoric rise in the last six years is startling.


This is part of a continuing series on the under-reporting of unemployment in the United States of America.

Past posts (months indicate the month being analyzed – the post is in the month following):
October 2014 – age assessment
October 2014
September 2014
August 2014
April 2014
December 2013 – seasonal assessment
December 2013
March 2013*
August 2012*
July 2012* – age assessment
July 2012*

*My initial analyses led to a slight over-reporting of the impact of the reporting gap, so the assessments in these posts are inflated, as explained and corrected in the December 2013 analysis.

Americans Leaving the Labor Force: Who are These People?

Categories: A Day in the Life, It's the Stupid Economy

I’m not the only person talking about the people who have left the labor force in the last ten years. It’s a hard story to ignore, though most of the media is up to this challenge. But CBS and the Washington Post have actually run recent articles on this issue, though none of these articles goes so far as to actually include these people in a real unemployment figure like I do. Many people have attributed the shift entirely or largely to the aging population in the United States, to the fact that Baby Boomers are retiring and leaving the workforce.

This is bunk.

I examined this issue very lightly in August 2012. But since that time, I’ve come to understand the BLS reports and numbers much better and explored their very cool query tools for previous reports of their monthly Current Population Survey. So I drilled down into their reports and mined the data and put Excel to work on it. Who are the people who have left the labor force? How old are they? What can we infer from their age?

These people are young, not old. Less than half of them are Baby Boomers. Less than a quarter of them are Baby Boomers. Less than a tenth of them are Baby Boomers.

Just about 9% of them are Baby Boomers. Here’s your graph:

That’s a really young group of people!

The graph represents about 7.5 million people who have left the labor force in the last ten years, since October 2004. Only 7.5%, or a little less than 600,000 of them, have “aged out,” crossing the magic threshold in our nation of 65 years. Meanwhile, 42% of them (nearly 3.2 million people) are younger than 25.

But wait, you are saying. Everything in the world is telling you that the American population is getting older, fast. Doesn’t it make more sense that all the people are just aging and thus leaving the labor force naturally, if not gracefully? How could this graph possibly be true when millions and millions of Americans are in fact aging out of the period of life in which we expect them to hold a job and earn a wage?

Yes, the American population is aging. But that growing older population is increasingly staying in the workforce as it ages. The growth in the older portions of the populace is being outpaced by the growth in the share of those people staying in the labor force. In simpler terms, Baby Boomers are holding onto their jobs longer and preventing younger people from taking those jobs.

Here’s what this graph looks like:

Note that there are only two groups where growth is outpacing growth of those not in the labor force. The oldest two.

So even though the oldest age groups are growing steadily, most of the people who are aging are actually staying in the labor force into that age. Some of them are retiring and thus leaving the labor force, but a far smaller number than are leaving the labor force aged 16-54.

Perhaps the most amazing stat in that graph is the crash in population aged 35-44, reflecting a very small population in late Generation X. While the number of people that age has actually declined by 10% in the last 10 years, the number of people that age outside the labor force has somehow increased. This is why about a million more people of that age are now outside the labor force than would be had labor force participation rates remained constant.
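That “about a million” is a simple counterfactual: apply the group’s 2004 participation rate to its 2014 population, and compare the implied not-in-labor-force count with the actual one. A sketch with placeholder inputs (the post’s own figures come from the BLS CPS series for ages 35-44):

```python
# Excess labor force departures for one age group vs. a constant-rate counterfactual.
# All three inputs are placeholders, not the exact CPS values.
pop_2014 = 39_000_000         # population aged 35-44 in 2014 (hypothetical)
lfpr_2004 = 0.83              # this group's participation rate in 2004 (hypothetical)
actual_nilf_2014 = 7_600_000  # actually not in the labor force in 2014 (hypothetical)

expected_nilf = pop_2014 * (1 - lfpr_2004)  # NILF count had the 2004 rate held
excess = actual_nilf_2014 - expected_nilf   # the "missing" labor force members
print(f"Excess outside the labor force: {excess:,.0f}")  # ~1 million
```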

So, about those labor force participation rates. Here’s what they look like:

By percentage, the oldest Americans are actually entering the labor force, not leaving it.

Even though the differences are small, this may be the easiest graph for seeing where the trends are headed. Only among those aged 55+ are labor force participation rates actually increasing. Meanwhile, the younger populations are consistently fleeing the labor force at the fastest rate, which is why 42% of the people who’ve left the labor force (or, more accurately, never entered it) are under 25. Nowhere are the rates increasing enough to make a group immune to the overall numerical trend of people leaving the labor force, though it’s close among those aged 55-64. Most people in that group are working, and many of them would have retired and left the labor force in the old economy. Now, very few of them are doing so. Of the 7.5 million people who have left the labor force overall, only a paltry 100,000 are in this age range – that tiny sliver in the first graph.

So the old are keeping jobs longer, the young are not getting jobs, but everyone is having a harder time staying in the labor force across all age ranges.

This is not a picture of a normal aging and retiring population. Less than 10% of the overall story of labor force desertion is about that. Over 40% of it is about people who never got a single job and thus can’t be counted as unemployed. Sure, more of these kids are in school. But since they’re taking on mountainous debt for degrees that do less and less for them in the labor market, I think we can safely count these 3.2 million people as heading for official unemployment. In the meantime, they’re just part of the Reporting Gap that makes our 11.7% unemployment pretend to be below 6%.


This is part of a continuing series on the under-reporting of unemployment in the United States of America.

Past posts (months indicate the month being analyzed – the post is in the month following):
October 2014
September 2014
August 2014
April 2014
December 2013 – seasonal assessment
December 2013
March 2013*
August 2012*
July 2012* – aging assessment
July 2012*

*My initial analyses led to a slight over-reporting of the impact of the reporting gap, so the assessments in these posts are inflated, as explained and corrected in the December 2013 analysis.

Unemployment Drops to 5-Year Low… of 11.71%

Categories: A Day in the Life, It's the Stupid Economy

Unemployment in the United States dropped precipitously in October 2014 according to this morning’s report issued by the Bureau of Labor Statistics (BLS). The report shows that actual unemployment dipped by almost a quarter-percent month-over-month, declining from 11.94% in September to a five-year low of 11.71% in October. This is the lowest US unemployment rate since July 2009, when national unemployment was 11.53%.

The reported official figure declined to 5.8%, keeping actual unemployment over double the official figure for the second month in a row. The Reporting Gap declined to 5.91% from its all-time high last month of 6.04%.

Actual unemployment is measured by including workers who are not in the labor force but normally would be during a time of economic health. These include both people who have left the labor force and, increasingly, those who have never been able to enter the labor force and thus are ineligible for official unemployment classification. Labor force participation edged up to 62.8% this month, from September’s 36-year low of 62.7%.

This graph shows unemployment rates, comparing the actual figure including those not in the labor force as opposed to the official reported figure:

Real unemployment (red) vs. reported unemployment (blue), January 2009 – October 2014.

And this shows the evolution of the Reporting Gap over the same period, demonstrating the distance between reality and what the media reports:

Reporting Gap, January 2009 – October 2014.

Unemployment remains noticeably above the highest figure reported during the so-called Great Recession. Reported unemployment peaked at 10.0% in October 2009 (when actual unemployment was 12.69%). Real unemployment peaked at 13.17% in June 2011 (when the reported figure was 9.1%). Since that peak, unemployment has dropped by only 1.46%, while the reported figure has declined by 3.3%.

Real unemployment has now been in double digits for 68 straight months – a range the reported figure touched in only one month during the last five years.


This is part of a continuing series on the under-reporting of unemployment in the United States of America.

Past posts (months indicate the month being analyzed – the post is in the month following):
September 2014
August 2014
April 2014
December 2013 – seasonal assessment
December 2013
March 2013*
August 2012*
July 2012* – aging assessment
July 2012*

*My initial analyses led to a slight over-reporting of the impact of the reporting gap, so the assessments in these posts are inflated, as explained and corrected in the December 2013 analysis.

Automation Nation

Categories: A Day in the Life, It's the Stupid Economy

Something has taken place slowly over the past few years in the United States. And it’s basically complete.

No, the whole nation isn’t one company. Yet.

There are basically no manual appliances left in public spaces. Save for the occasional water fountain (and the automatic ones barely work and are completely terrifying – the process amounts to walking up to a machine and having it spit at you), you are no longer expected to press buttons or flush toilets or turn cranks. Everything is automated. Faucets, toilets, urinals, hand dryers (both paper towel dispensers and hot-air blowers) all have little red motion sensors that determine the appropriate time to work without the overt control of the humans, well, using them.

At first blush, of course, this seems like enormous convenience. No longer do we have to actually exhaust our digits by – the horror – turning a knob! Merely wave hands in the general direction of where water is supposed to spring from and – voila – it is sprung! Weep, ye ancestors of humanity who died toiling in the men’s room of the past, furiously pulling your own paper towels from their slot.

But once we get past the idea of these machines and to their actual use, the picture of perfect ease gets murkier. For one thing, the shelf life of the motion sensors is awfully short. And what this means, functionally, is that the closer to the end of the sensor’s life we get, the more it becomes like – wait for it – a button. I think we’ve all had the experience of furiously trying to get the water to release from the spigot by waving, then contorting, then sort of just pawing our hands around the vague sink area, hoping to just make the faucet work already, something never before so challenging in the history of faucets. In most instances, if you can actually just see the blinking red light indicating the sensor’s location, you can merely press your finger to it and the thing will do its job. Unless the sensor is so old that it’s just broken, which is an increasingly regular occurrence. I’m not exactly certain how often sink handles had to be replaced in the old days, but I bet it was less than once a decade in all but the roughest of establishments.

Which brings us to the question of the purported environmentalism of all this automation. Certainly hand-dryer blowers are nothing new to bathrooms in America, though they used to have (gasp!) buttons along with their lecturey signage about how hot air was more sustainable and environmental than paper. Which seems sort of true in a world where we don’t question where the electricity comes from or what its creation is doing to the planet. I haven’t precisely run the numbers on paper towel count vs. how much power it takes to run a blower and I don’t know exactly how to quantify x number of trees vs. y units of coal energy in terms of what it’s doing to Earth… I’m not sure this information is exactly knowable (see also: paper or plastic, which has been punted to canvas/tote, which itself has raised a whole host of environmental questions regarding just how many tote bags humans need). But I do know how much energy and/or trees used to go into faucets or urinals. That would basically be zero.

But when we factor in the replacement costs and that impact on sustainability, not to mention the process of turning every appliance in a public restroom into a little computer, it doesn’t seem like this was a terribly environmentally motivated decision at all. If this had an environmental angle at all, it seems to have been swept up in what so many such endeavors devolve into in this country: an opportunity to spend money buying something new because it’s environmental! Never mind that the whole point of the movement hinges on reducing and reusing instead of manufacturing and buying… the best way to show support for the environment is to buy Environment-Brand Stuff! I’m not saying every or even most environmentalist actually buys in (get it?) to this mentality, but it feels like the mainstream of people feel good about themselves for buying more products than they otherwise would as long as they have a vague greeny feel about them. And, like most things, this scales up in big institutions.

I guess there’s the other environmental issue of the person in the bathroom who leaves the sink running for a week or who flushes the toilet 27 times when they actually need to maybe twice. The environmental issue here is water, and the thesis is that bad/neglectful people will waste more of it than very smart machines. I’d be more of a believer in this if the machines themselves demonstrated much intelligence. I am really looking forward to being able to get a Google self-driving car, but if they run on the same general principle as the motion sensors built into public restrooms, they will stop in the middle of nowhere, stutter-step for a few feet, then smash full-tilt into whatever they can hit. It’s an unnerving and increasingly common experience to be sitting on a public toilet when the motion sensor decides that you simply must be done already and throws full suction at the bowl without warning. Similarly, I’ve had to do bizarre dances and door-swinging maneuvers to try to leave other stalls in a human condition for the future user, endlessly cajoling the stingy motion sensor to release its cleansing waters.

I’m sure the process of adding a blinky red light to every appliance in every public restroom in the country helped the economy a lot and may be credited as one of the only reasons we’re supposedly out of the Great Recession. (Of course, as I’ve discussed, we’re not.) But I suspect the real reason for this wave of automation has more to do with something I’ve discussed even more recently, which is the national obsession with the outbreak and spread of deadly diseases. Or, y’know, at least colds.

It’s become an increasingly known and documented fact that our hands, seemingly our most innocuous and extendable parts, are dirtier than the worst incarnation of the Peanuts character Pigpen, while other areas we might be more concerned about, such as our thighs, are actually remarkably clean. This fact does not deter anyone from shaking hands with others or even mean that we tear down the doors of public restrooms, but it probably is the main culprit behind removing handles from toilets, urinals, soap-dispensers, and sinks in public places. The sink one is especially important, since we’d have to touch the handles with freshly-washed hands over and over again… who knows where those freshly-washed hands have… oh.

You can probably guess by now that I think this effort as a way of stopping disease, much less serious disease like, say, ebola, is a fool’s errand. Admittedly my father did raise me to flush a public toilet with my feet rather than my hands, but I really doubt a lot of people before were getting sick from public bathrooms. More likely it was from kissing people and shaking their hands and going to hospitals and all the other really contagious things that we tend to do as a species. And maybe I’m wrong and it’s marginally aiding our health never to have to touch handles in bathrooms. But my guess is that the margin, if applicable, is really quite small and is dwarfed by the expense of constantly getting new sensors installed (to say nothing of it being erased by the sensors that just become buttons).

No, like many innovations of our modern world, my guess is this one is more about the illusion of increased health, the gentle placebo that comes from replacing something vaguely icky with something really frustrating. As long as we don’t think that the public restroom is making us sick, then it’s worth any expense to be spared the indignity of turning a knob that some other human has touched before us.

Of course, people continue to stockpile ebola suits and get them for the whole family … just in case. So maybe a time will come soon when we will all literally live in our own little bubbles and never have to touch each other at all. Just make sure you sneak your cell phone inside the suit first. It’s awfully hard to text with ebola-suit-fingers.

Dave? What are you doing, Dave? I believe you’ll find that very difficult without your ebola suit, Dave.

Unemployment Down to 11.94%, Now Double Official Reported Figure

Categories: A Day in the Life, It's the Stupid Economy

What I predicted in April 2013 has finally happened.

The BLS released their September report on unemployment today, announcing that unemployment had dropped from 6.1% to 5.9% in the United States. Unemployment did drop between August and September, when accounting for those unemployed by virtue of not being in the labor force, but by a much more modest margin, from 11.99% to 11.94%.

11.94% is tied for the lowest unemployment rate in the US in five years, dating back to August 2009. Unemployment then was only 11.76%, but was reported as 7.9%. Things in August 2009 were slightly better than they are now, but people perceived them as being 33% worse. Although it may not be fair to say things were holistically better, because unemployment was starting to skyrocket toward its peak of 13.17%, while recently the trend is decidedly flat. Unemployment has been between 11.94% and 11.99% for four straight months, and between 11.94% and 12.17% since February.

The gap between the reported figure and the real figure hinges on this insidious quote from the BLS report out this morning:

“The civilian labor force participation rate, at 62.7 percent, changed little in September.”

The way that should be phrased is more along the lines of: The civilian labor force participation rate hit a new Great Recession low at 62.7 percent in September, reflecting that more people have now left the labor force since 2007 than are currently considered officially unemployed.

It’s easy to say that things are “little changed” when they dip 0.1%, even if that reflects a new record low. The problem is that 0.1% of the labor force-eligible population is 248,446 people. The alleged decline in unemployment of 0.2% month over month was 329,000 people. So a substantial majority (76%) of the people who allegedly stopped being unemployed actually just left the labor force. Thus the real unemployment rate was only down 0.05%, not 0.2%.
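Spelled out, the arithmetic behind that 76% figure:

```python
# How much of September's reported improvement was just labor force exit.
eligible_pop = 248_446_000       # implied by "0.1% ... is 248,446 people"
lf_exits = 0.001 * eligible_pop  # the 0.1-point participation dip, in people
reported_decline = 329_000       # people no longer counted as unemployed

share_exits = lf_exits / reported_decline  # ~0.76
print(f"{share_exits:.0%} of the 'improvement' was people leaving the labor force")
```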

Here are your graphs for this month:

Real (red) and reported (blue) unemployment in the US, January 2009-September 2014. Source: BLS

The gap between real and reported unemployment in the US, January 2009-September 2014. Source: BLS

I decided to cut the graph to starting in January 2009 instead of January 2007 as I have in the past, mostly to emphasize the contrast and let the recent trends stand out a little more.

The Reporting Gap, or what I’ve dubbed the “Crazy Factor”, hit an all-time high last month with today’s report, and now finally stands higher than the reported figure in unemployment. The Reporting Gap is 6.04%. Unemployment is reported as being 5.9%.

Put another way, 9,262,000 Americans are currently unemployed and counted in the official statistics. An additional 9,481,787 Americans have either left or never been able to enter the labor force and are currently unemployed by that circumstance. These people have no jobs and, unlike the officially unemployed, no income.

The total number of unemployed Americans is more than double what we think it is. It is 18.74 million people, not 9.26 million.

If you’re wondering, the last time the labor force participation rate was this low was February 1978. At the time, there was still a significant gender gap in terms of women being expected and/or able to work, as well as a burgeoning economic crisis. But the BLS is content to call a 36-year low in labor force participation “little changed”.

In fact, maybe the starkest graph I could show you is this one, from the BLS website itself, with no extra analysis on my part:

BLS labor force participation rate, January 1978-September 2014.  Source: BLS

BLS labor force participation rate, January 1978-September 2014. Source: BLS

Does that look like a recovery to you?


This is part of a continuing series on the under-reporting of unemployment in the United States of America.

Past posts (months indicate the month being analyzed – the post is in the month following):
August 2014
April 2014
December 2013 – seasonal assessment
December 2013
March 2013*
August 2012*
July 2012* – aging assessment
July 2012*

*My initial analyses led to a slight over-reporting of the impact of the reporting gap, so the assessments in these posts are inflated, as explained and corrected in the December 2013 analysis.

Unemployment Remains Stable… at Double Reported Figure

Categories: A Day in the Life, It's the Stupid Economy, Quick Updates

I was going to post a longer thing about the nature of cameras and privacy and all the Rices from Rutgers that get rung up by audio-visual equipment, but that’ll keep for now, especially since everyone under the sun has something to say about it. Not least of which is The Onion, whose point that only people who get caught on camera doing things wrong get punished sort of misses the point underscored here: everyone is on camera all the time now, and that may not be as bad as everyone wants to think. But more on that later, when I’m not quite as sick and haven’t made it to 1:30 PM without eating. Go me.

I took most of the summer off from reporting on unemployment, in part because I just wasn’t in much of a blogging groove in general for most of the last year. So here we are again, revisiting the figures I last posted in April.

At that point in the year, the reporting gap, or the gap between real unemployment and that figure the B(L)S puts out monthly, stood at a record-high 5.87%. Now that figure has hit a record high in both June and August, modestly up to 5.89%.

Here are your graphs:

The red line is real unemployment, accounting for those who have left the labor force. The blue is the reported figure.

The gap between real and reported figures.

You know the drill with this by now. Nothing is changing much in terms of unemployment. The public narrative about stagnation of unemployment and the “recovery” is actually right for once. But it’s right at 12%, not 6%. Unemployment currently stands at 11.99%, up from 11.94% in July. That July number is the best it’s been since August 2009 (11.76%), when unemployment was still skyrocketing.

But 12% unemployment remains 2% higher than the supposed peak of unemployment in the Great Recession, which was in October 2009 (reported at 10.0%). Unemployment actually peaked 2 months later, at 13.13%. It has touched 13% periodically since then, most recently last October.

The Reporting Gap (second chart) remains a perfect index of the crazy factor in our current perception of the economy as it affects real people on the ground. And that remains at its peak. The way we would publicly treat the economy, jobs, and public policy if it were publicly disseminated that unemployment has been hovering between 12-13% for FIVE YEARS is so radically different from the status quo as to make them different worlds altogether.

You’re not crazy. The jobs are not coming back. Someday, we will change the model of our economy. But for now, it remains ostrich time.

Unemployment Actually Up in April, Reporting Gap Hits All-Time High

Categories: A Day in the Life, It's the Stupid Economy

There are some more detailed posts in the works, since I’ve decided I’m going to re-energize this blog and start posting again, but I think it’s time for a quick continuation of our series on the misreporting of unemployment. (The latest in the series was in December 2013.)

Before everyone became obsessed with the latest US mass-shooter, there was a big groundswell of celebration that the unemployment problem had finally been fixed in America, as reported unemployment crashed in April to 6.3% from 6.7%.
And yet those just graduating from college and facing the new job market don’t seem to have it any easier. People still talk in hushed tones about how this “recovery” seems sluggish and iffy even though the numbers have normalized to nearly pre-2008 levels and the stock market has gone through the stratosphere.

So what gives? Is there anything to this story that unemployment is starting to recede so greatly?

The good news is that unemployment in March 2014 was at its lowest level in four and a half years, since August 2009. That is no small thing. It seems there is some actual movement in the economy!

The bad news, of course, is that unemployment that month was still, uh, 11.99%. Or 1.99% higher than the highest reported rate of the whole “Great Recession”. That was in October 2009, when real unemployment was 12.69%.

Yeah, we just set a nearly five-year record low. At exactly 0.7% below where real unemployment stood at the perceived height of unemployment during the Recession.

Here’s the chart:
Real and reported unemployment, January 2007 – April 2014.

You can read past posts in this series to get an understanding of the rationale for the “Real Unemployment” figure, but it includes those who have left the workforce and are thus left out of American unemployment figures. People who have any sort of employment, even if it’s only a couple hours a week, are still counted as employed. The only difference is that my Real Unemployment figure counts those who were in the labor force before the Great Recession as in the labor force now, based on a reasonable healthy-economy percentage of the population participating in the labor force.
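In code terms, the method looks roughly like this sketch. The 66% baseline stands in for a reasonable pre-recession participation rate, and the population and employment inputs are illustrative magnitudes rather than the exact series values, so this shows the shape of the calculation rather than reproducing the figures above:

```python
# "Real Unemployment" sketch: measure joblessness against a healthy-baseline
# labor force instead of today's shrunken one. Baseline and inputs are assumptions.
def real_unemployment(employed, eligible_pop, baseline_lfpr=0.66):
    healthy_lf = eligible_pop * baseline_lfpr  # labor force at healthy participation
    jobless = healthy_lf - employed            # includes dropouts and never-entered
    return jobless / healthy_lf

# Illustrative magnitudes only:
rate = real_unemployment(employed=146_000_000, eligible_pop=248_000_000)
print(f"{rate:.2%}")
```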

There’s more bad news, of course. As you can see on that chart, unemployment actually ticked up in April, to 12.17%, from that epically low 11.99%. Only a modest 0.18% rise, but that’s a big difference in the story from the 0.4% decline that was reported. And that 0.58% jump in the Reporting Gap led to an all-time high in that figure, of 5.87%.

To wit:
The Reporting Gap between real and reported unemployment, January 2007 – April 2014.

As you can see, the reporting gap was actually declining all of 2014 before last month’s surge, with the reported figures closing in on the actual figures somewhat. Of course, this “closing in” is pretty relative, given that we are rapidly approaching the point where the under-reporting of unemployment eclipses the total figure for unemployment itself. Or, put another way, the point where more than half those unemployed in the US are not considered to be so by the official figures. With the reporting gap at 5.87% and reported unemployment at 6.3%, this very real tipping point seems reachable by year’s end.

I’ve discussed a lot how crazy this phenomenon makes people feel and how the surge in the Reporting Gap mirrors the stock market’s surge over the same period, so I won’t repeat myself more. Just wanted to give an update on where we stand and how insidious it is that people think April was a banner month for employment when it actually reflected regression.

Consider how differently people would be treating questions of employment, income inequality, and capitalism if they publicly discussed the fact that, from BLS’ own statistics and an understanding of what the labor force means, unemployment has been at 12% (if you’re willing to grant me 11.99% as 12%) or above since August 2009.

No Jobs in January: A Glance at Seasonal Adjustment of Unemployment

Categories: A Day in the Life, It's the Stupid Economy, Quick Updates

I was curious to analyze the seasonal adjustment data after the last post I made about unemployment data and under-reporting from 2007-2013.

Basically, seasonal adjustment follows a similar shape every year. Not the exact same shape, which is interesting, but my initial inclination to post each year’s graph from 2007-2013 individually produced what looked like a well-tread rut line with a couple of alterations high and low in weird outlier months. So the average of the last seven years seems more useful in looking at what’s really going on with unemployment as it cycles through the months.

Here it is:
Average seasonal adjustment to the unemployment rate, by month, 2007-2013.

I realize I may have done this graph upside-down from what you’d expect – being high on this graph means that actual unemployment (i.e. not seasonally adjusted) is that much higher than the reported seasonally adjusted rate, whereas lower means that the actual unemployment is lower than what’s reported. In other words, in the average January, BLS is shaving a percentage point off the unemployment tally, while in July, they’re adding about 0.6%.
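Mechanically, the graph is nothing fancier than averaging, for each calendar month, the difference between the not-seasonally-adjusted and the seasonally adjusted rates across 2007-2013. A sketch, assuming you’ve already pulled both BLS series into dicts keyed by (year, month):

```python
# Average seasonal adjustment by calendar month over 2007-2013.
# nsa and sa map (year, month) -> unemployment rate, from the BLS NSA/SA series.
def monthly_adjustment(nsa, sa, years=range(2007, 2014)):
    gaps = {}
    for month in range(1, 13):
        diffs = [nsa[(y, month)] - sa[(y, month)] for y in years]
        gaps[month] = sum(diffs) / len(diffs)
    return gaps  # positive: actual (NSA) runs above the reported (SA) rate
```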

Unsurprisingly, July is the best time to get a job. There’s seasonal work, more people (mostly young whipper-snappers) enter the job market, the weather’s good, people are buying ice cream. Okay, so it’s mostly seasonal work. Summer camp opens and teachers aren’t considered unemployed during their well-earned rest. Everybody celebrate.

January, by contrast, is a disaster. I think December would be too, except there’s seasonal work there to combat that as retailers add tons of temporary workers for the Christmas rush, making that and April the statistically least adjusted months. Basically, winter is bad, summer is good, and fall kind of bumps along being okay.

This is a footnote to all my other posts about how crazy you should feel when the numbers come out. It’s also a really bad sign about what’s going on right now in the economy, because ain’t nobody hiring in January. The numbers are going to take a point off the figure, plus there are all the people I’ve discussed whom Congress has cut off from long-term unemployment benefits. So we’re in for a heck of a gap this month between what’s reported and what’s real. Honestly, the report could say that January unemployment is 5.5% and that the US lost jobs. That’s probably about what’s going to happen.

Unemployment Reporting Gap Hits New High to Close 2013

Categories: A Day in the Life, It's the Stupid Economy

Happy New Year, everyone. There haven’t been posts here for a long time, but that’s probably going to change again soon. Hopefully a lot of things will change soon.

Regardless, a new milestone has been set: the all-time high gap between the reported unemployment figures and the actual unemployment figures in the United States of America was reached in December 2013, according to data gleaned from the BLS report out today. This gap is 5.85%, slightly eclipsing the previous record of 5.82% set in October 2013.

With unemployment reported as being 6.7% total, we are approaching the point where under-reporting of unemployment leaves out fully half of those actually unemployed. With the recent cessation of long-term benefits extensions by the federal government, we could see that point reached as early as the January 2014 report, due out in a month.

If you’re confused about this, it’s part of a series of reports I’ve done analyzing the crash in employment as hidden by the diminishing size of the labor force as a percentage of the overall US population. The previous reports are from August 2012, September 2012, and April 2013.

The numbers in the below graph will look a little different from those three previous reports. There are two reasons for that. One is that, in today’s report, the BLS revised its seasonal adjustments for several months dating back to January 2009. All of their 2013 revisions reduced reported unemployment, though a few past months were adjusted up as well. I have used the updated figures. The second, more important, reason is that I miscalculated somewhat in my previous reports. I used a compounding formula where I added the under-reporting percentage to those already counted in unemployment, which inflated my actual unemployment figures more than it should have. The average miscalculation was 0.38% from January 2007 through April 2013, peaking at 0.74% in March 2013. I apologize for this miscalculation and assure you that the new formula is correct, as reflected in this graph:

Graph showing real and reported unemployment in USA, January 2007 - December 2013.

The miscalculations led me to erroneously report that unemployment reached its Great Recession high at 13.8% in July 2011. This is inaccurate. The all-time Great Recession high for US unemployment is actually a mere 13.17%, reached in June 2011. While October 2013 nearly reached that high, at 13.02%, we’re way off that heady pace in December 2013, at only 12.55%. Unemployment had reached its Great Recession low in July 2013, at 12.28%, the lowest since unemployment was sky-rocketing in August 2009, when it was 11.76% and on its way up to 12.36% the next month.
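For the curious, here is a simplified sketch of the two formulas – the flawed one stacks the hidden unemployed onto the numerator without expanding the labor force denominator, while the corrected one puts them in both. The counts are illustrative, not the actual series:

```python
# Simplified sketch of the correction (illustrative counts, not the real series).
unemployed = 11_700_000   # officially counted as unemployed
hidden = 8_000_000        # outside the labor force, but jobless by my measure
labor_force = 154_000_000 # official labor force

# Flawed: hidden unemployed added to the numerator but not the denominator,
# i.e. the under-reporting percentage stacked on top of the official rate.
flawed = (unemployed + hidden) / labor_force                # ~12.79%

# Corrected: the hidden unemployed belong in the labor force as well.
corrected = (unemployed + hidden) / (labor_force + hidden)  # ~12.16%

print(f"flawed {flawed:.2%} vs corrected {corrected:.2%}")  # ~0.6-point overstatement
```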

Unemployment has been between 12.2% and 13.2% for the last four and a quarter years.

Does this make you feel like you’re crazy, when the media has reported a decline from 9.6% to 6.7% over that same period, with a brief high at 10% flat?

The craziness has been steadily increasing up till the December 2013 high, as reflected in this graph:

Graph showing gap between real and reported unemployment in US, January 2007 - December 2013.

As observed in the prior posts, this meteoric rise in the difference between reality and the reported reality has corresponded very well with the meteoric rise in the stock market over the same period of time. It’s not that corporations really want to employ people; rather, a lot of the euphoria over the alleged recovery, which has strangely yet to actually manifest on Main Street, is generated by this under-reporting.

Given that today’s report offered a crash in the officially reported unemployment rate even though fewer jobs were added than in any report for the last few years, the media explanations are finally bringing a tiny bit of attention to the reality of the unemployment rate’s inaccuracy as a metric for, well, the state of employment in our economy. In other words, it’s become impossible to hide the gap between reality and the fairy tale being told by how the official statistics are calculated. And surely that will only increase when the tsunami of long-term unemployed lose their benefits and are correspondingly omitted from the labor force.

Nonetheless, the magnitude is breathtaking. For the BLS to represent that we had one month of double-digit unemployment when we’ve been suffering under more than four years of it is beyond the pale. And the picture of a steadily improving job market when we’ve had no such thing at all during that span is nearly as unforgivable.

The more you can spread this truth, the more you can help your friends who have been left out by the consistently failing American job market understand that it is BLS and the media that are crazy and wrong, not them.

Stock Market, Misrepresentation of Unemployment Hit Record Highs

Categories: A Day in the Life, It's the Stupid Economy

It’s been a while since I’ve been posting regularly, which is something that the extra time of summer will hopefully fix. And it’s been quite a while since I’ve revisited the issues of misrepresentation of the unemployment rate that have come from the disappearing labor market in the United States in the last few years. As a review, please check the original post from August of last year and the September follow-up.

Therein, I explain that by counting unemployment only as a percentage of the labor force and not the overall population, the unemployment rate leaves out millions of people who are functionally unemployed. Not underemployed or under-appreciated, but actually just straight-up not working when they could or should be. The labor force as a percentage of adult population has been crashing since the start of the financial crisis and it’s far outpacing the aging of the population.
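The denominator is the whole trick: the same month of raw counts produces both rates, depending on whether the hidden unemployed enter the calculation at all. A schematic comparison, with magnitudes chosen to land near the rates discussed below rather than drawn from a specific report:

```python
# Official vs. functional unemployment from the same raw counts (illustrative).
employed = 143_000_000
official_unemployed = 11_800_000  # jobless and actively searching
hidden_unemployed = 10_500_000    # out of the labor force, would work in a healthy economy

labor_force = employed + official_unemployed
official = official_unemployed / labor_force  # what gets reported (~7.6%)
functional = (official_unemployed + hidden_unemployed) / (labor_force + hidden_unemployed)

print(f"official {official:.1%} vs functional {functional:.1%}")  # ~7.6% vs ~13.5%
```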

I hadn’t posted about the data since August. And while the reported rate of unemployment has dipped from 8.1% to 7.6% during that span, the actual rate when factoring in the labor force’s hidden unemployed has only declined from 13.6% to 13.4%. And while the reported rate has steadily declined, giving an image of slow but consistent improvement in the labor market, the reality is more distressing: after a brief dip in October to as low as 12.9%, unemployment is actually very close to its peaks since the crisis began, just 0.4% below the recession-era high of 13.8% in July 2011. And it’s on its way up, steady or increasing since October of last year.

Real and Reported Unemployment, January 2007 - March 2013

That graph is scary enough, with trend-lines clearly diverging and painting an opposite narrative of the economy. And this narrative dictates almost everything (well, everything that isn’t overtly rigged) about how our economy functions and the decisions people make, even though it doesn’t reflect the reality for people on the ground: how hard it actually is or isn’t to get jobs. When people are having just as hard a time getting jobs as they have for the last three years, but they’re being told that it’s getting easier, it is highly corrosive to their morale, hope, and ultimately way of life. They personalize and internalize something that’s actually a broader struggle. And more damningly, we make policy based on the assumption that these flawed narratives we are telling ourselves are true, that we’ve done enough to help the little guy, because while seven-and-a-half percent unemployment is still a bit uncomfortable, it’s basically livable and showing signs of improving. When really, we’re still at the height of the crisis.

So here’s the money chart, the one that really tells the tale of this deception and how inflationary its impact is. This is the one that shows the gap, over time, between real and reported unemployment:

Gap between Real and Reported Unemployment, January 2007 - March 2013

You could call this chart a lot of things. The Deception Chart. The Manipulation Chart, if you want to put a more sinister spin on why we use such an outdated version of unemployment and fail to track all the people who fall out of the ability to seek work or, more often, never have a real job in the first place from which to become “unemployed.” The I Think I’m Going Crazy Chart, to reflect the above phenomenon in starker terms, as people are unable to find work when the media is telling them that everyone else is finding it easier. And this chart, with a couple of bumps, is steadily upward. And hit a record high in March 2013 at 5.8%. At the same time the Dow Jones and other stock markets in the US started hitting their record highs as well, signalling the supposed end of the financial crisis.

5.8% may not sound like a big gap to you. It’s more than most economists think is a healthy total rate of unemployment for a society, so that should tell you something right there, but it still sounds like a manageable number. But when you consider that it’s a 76% increase in the unemployment rate from what’s currently being reported, that should probably put it in perspective for the skeptics out there. Unemployment is a 76% larger problem than people think, than the media reports, than the US imagines. We are approaching an easily foreseeable moment, if the general trends in each direction continue, when the actual rate of unemployment is double what the US perceives it to be. All because of how we choose a denominator in the most revered vital sign of economic growth for the little guy.

The most revered vital sign for the big guy is the stock market, and it’s hard to imagine that the big players therein aren’t aware of the data in this post. Of course for them, labor is not necessarily a sign of health or growth. Getting more done with fewer workers is the goal. And the efficiency achieved in this goal is a lot of why the market has been up when most people aren’t feeling any more well off. The problem is that when the market has so ruthlessly edited out jobs and labor, there’s simply not much reason it would bring them back. You have to be feeling pretty fat and happy to hire people who you don’t think will add value to your company. And by adding people who are going to work less efficiently, with their backs less against the wall, you risk the whole system. Ruthlessly keeping jobs at a minimum is beneficial to all the big corporations at once.

The other big culprit in this phenomenon is the ongoing consolidation of wealth and corporations. The fewer companies, the more “efficient” (read: less labor) they can each be. The more that Wal-Mart and Starbucks grow, the more they can keep higher-wage lower-efficiency jobs from smaller firms out of the market. And this is before they even start colluding on what they’re going to pay or how thinly they’re going to stretch their workforce to ensure maximum efficiency across the board.

The theoretical back-end of all this is that eventually people will have so little money from not working that they will be unable to spend it at Wal-Mart and Starbucks and then the whole scheme will crash. The problem is that the very rich have been able to spend enough money that they make the overall economy seem much healthier than it is because it is relying so heavily on catering to the few people who have expendable income. But not in a way that creates more jobs for the middle or lower class so much as making those classes fight for a few jobs that largely cater to serving the super-rich. Of course, the US must somehow actually keep these people alive, through welfare or disability or unemployment benefits for those lucky enough to be counted, and at that point they also have the tiny bit of expendable income for a latte or a Wal-Mart gun.

The long-term implications of this general direction, though, look a lot like feudalism. Increasing power and influence and cash for those at the top, everyone on the middle and lower rungs increasingly revolving around that top, and sufficient pressure on the working classes to make them put up and shut up with whatever they get in order to be among the few who actually have a job. Obviously we’re not to the crisis point of this general trend yet – eighty-six-and-a-half percent is still relatively high employment compared to some places in Europe or what things could be here in a bit. That still has most people working. But what happens if real unemployment gets to 20% and they’re reporting 5%? Is that kind of internalized pressure sustainable? How wide can the gap between reality and surreality (or at least between the reality of the rich and the reality of most people) get before something breaks?

In the meantime, as we ponder that question, the trend-line of the reporting gap and the stock market continue to be roughly correlated. And the longer that correlation takes place, the less it feels like coincidence. That doesn’t make it causal, per se, but it does mean that both could be reflections of the very deep problem of the way our society is currently designed. Corporate capitalism isn’t a better system than others; we just haven’t seen how colossally this one fails yet. But it seems most of us will be lucky enough to see this one disproven in our lifetimes, if current trends continue.

Let’s Talk About Class, Baby

Categories: A Day in the Life, It's the Stupid Economy, Politics (n.): a strife of interests masquerading, The Agony of the Wait is the Agony of Debate

Yesterday, I tried to tell a story about what I saw on the last APDA weekend of the year, a story about debaters and debate and ideas and personal struggles and hopes and dreams and triumphs and disappointments. It was laden with my perspective and not attempting to be particularly objective – as I believe was clear throughout the 11,000+ words, it was couched in how I saw certain people and things and events and should not be taken as an objective record, any more than any piece that any individual writes, whether it’s labeled fiction or non-, should be taken as fully objective.

I actually thought when I finished it that it was too long and rambly for anyone to fully read and that it was ultimately probably going to fail at its initial objective, which was to weave a story about class background and competitive incentives into a human tale of competitive drama on the largest APDA stage of the year. For whatever reason, this self-assessment proved a bit short-sighted. Lots of people read the piece, in whole or in part, and (unsurprisingly) many people had objections. Fortunately, many people addressed those objections directly to me, enabling me both to fix certain things that were not intended (shortening or omitting names so that Googling someone wouldn’t lead to that post if they didn’t want it to) and to engage with people in 1:1 conversations about what bothered them, which I think was mutually informative.

But the biggest thing that kept coming up with people who wrote me seems essential to address on a larger scale. And because people felt the last post was at times too personal and too direct (some even called it ad hominem, which I disagree with but understand why they said it), I want to keep this post as abstract as possible so we can explore an idea rather than people specifically. Yesterday’s post was a story about people and events. Today’s post should be about an idea: the idea of class in contemporary America and how it affects people, their perspectives, and their decisions. And perhaps that’s even putting the cart before the horse. The preliminary question, the one that many asked me, is whether class is even something we can or should talk about at all, especially on a personal level.

I felt it was important to tell the story of Nats Finals through the lens of class because that seemed to be clearly underlying a lot of the argumentation and perspectives that people were making. I feel it’s disingenuous and kind of crazy to tell the story of NDT Nats Finals without ever mentioning race, given the nature of the arguments that Emporia State made, the demographics of the participants, the larger question that the debaters themselves were asking. And I saw the same thing happening in APDA Nats Finals, especially in the context of semifinals (which is why I told the whole narrative that way); it was essential to what was happening in Hoff Theater last Sunday that there were people of privilege and people of less and it impacted their arguments and the way they made them. I want to be clear that I don’t think it necessitated the way the round played out – someone accused me of arguing that Syracuse couldn’t engage with arguments about high finance because they were from a lower socioeconomic background, which was not my intended argument. My argument was more that class struggles and conflicts and perspectives were visibly alive in the room and those things matter to how people approach daily life in this society, much less competitive debate.

So let’s back up a few steps. Is it reasonable or fair to say that class background innately impacts one’s perspective, or can? Is it impolite to even weigh income, privilege, access, and financial resources when looking at a person and how they interact with their environment? Several of you said it was. Unsurprisingly, I disagree.

I guess the first question is whether class is an immutable characteristic, something like race or gender. I don't think that would mean that we couldn't cite it or discuss it, but it would mean that making arguments or generalizations based on expectations of class would be more like stereotyping or saying something unfair than like discussing something valid or valuable. I think it's clear and obvious that one often cannot choose one's class – one is born where one is born and one can't choose one's family or surroundings any more than one can choose to be male or female. So in that sense, maybe it's a little like race or gender. But I think it's also clear that class is, at least theoretically, flexible. One cannot have a childhood where one is Black for a while, then White, then Korean to finish. But it is quite possible to have that kind of flexibility in terms of class and to experience a wide gradient of class standing. Many people have had this experience growing up, myself included. And certainly in childhood, that's less about one's own choices than the choices of others, but that flexibility separates class from being something innate about one's identity. The older one becomes, the clearer it is that this is a changeable part of one's identity. It's complicated, because someone who is born into a fabulously and effortlessly wealthy family can probably never fully shed that – they probably don't have the means or ability to spend themselves into being poor, and it's probably unreasonable to expect someone in that position to walk away from their family and shed their possessions to see how the other half lives, a la Into the Wild. So, it's mutable, but not always a choice. I think this puts class squarely in a gray area of sorts between race/gender and the decisions people make in their daily life. So it's understandable that people feel uncomfortable, but it's probably not the same kind of third rail that discussing race/gender and making assumptions on that basis would be.

Next, there's the politeness argument. I was raised, as most everyone was (I suspect), to believe that it's not polite to ask someone how much money their family makes. Many people just seem to have a visceral distaste for talking directly about the fact that some people have more money than others, however true it may be. There are two key arguments for this, I think: one, that it's uncomfortable for the rich to have to admit that they have more access and more things, and two, that it's embarrassing for the poor to have to admit that they don't. This argument and perspective is deeply embedded in American culture and is probably hard for people to question. But I think this argument is precisely where we get to the heart of why it's so important to talk about class.

First of all, I would posit that this standard is impossible. There may have been versions of America with greater wealth equality or subtler ways of spending by the rich that made this standard viable or at least aspirational, but I simply do not believe that it's possible to hide the amount of access and freedom that money buys the rich or denies the poor in modern American society, especially not in college. There are people who always stay in hotels when they travel, who can always fly wherever they want (and frequently do), who vacation in foreign countries and resorts rather than around the corner, and these people talk about doing these things in their life. And asking those people to never discuss such things is crazy and wouldn't work. It's their life; they should be able to talk about their expenditures of time and money. Meanwhile, others struggle to buy a dinner that's not provided by a tournament, get uncomfortable when there are things that require money, quietly decline to participate in Secret Santa activities or other things with money as a prerequisite because they simply can't afford it. It's obvious to all observers why they can't partake in these things that would otherwise excite them – some people are subtle about why this is happening and pretend they just don't like anything, while others are open and honest about what the score is. But all make it clear to anyone paying attention why the barriers to access are where they are.

Some of these examples are about college and the debate world, but they date from times well before that. Despite being raised on a standard of not talking about these things, I couldn't help but come back from a friend's house in grade school and ask why someone had three game systems I'd never heard of while we were saving up for a black-and-white television. My parents were always incredibly honest with me about what our standing was, especially since we went through phases of being relatively well off and then, when my parents' business failed, not so much. But talking about it relative to others was still a bit uncomfortable and taboo. I'm old enough now to recognize this is mostly about parental self-consciousness and feeling bad about not being able to provide the same lifestyle that other children are living. But it's not like anyone actually succeeds at preventing children from understanding, whether they discuss it or not, precisely what's going on.

So at the point where people are going to figure out what's happening, and something really is happening, then I would say that muzzling discussion of class in context is a form of oppression. In our society, money is freedom. Money is the blanket under which everything is covered; access to everything is dependent on and proportional to money, with a few thin exceptions like voting and our crappy public education system (and arguably, since there's access to private schools, even that is just a rigid question of financial access). Money affects the quality of what you get at every level, thus impacting your future abilities and access in a vicious upward or downward spiral. So the only question is whether we can confront this issue head-on in an effort to do something about it, to mollify, mitigate, or combat it in some way, or whether it proceeds unchecked and undiscussed as a silent force.

This may be a slightly extreme dichotomy I’m painting. I’m trying to proceed with this post in a robust and intellectually honest way as though someone were arguing against me. So you might say that we don’t have to discuss it interpersonally to think about it politically. That we can discuss the abstract motivations and impacts on a societal level without bringing the individuals around us and their particular place on the ladder into play. And that crossing that line is the gulf between appropriate and inappropriate discussion.

There are several reasons why I think this is not a reasonable place to draw the line and why I think drawing it there is an extension of oppression. First of all, I would analogize it to the privilege people experience from being white or male or straight or otherwise advantaged in our society. Advocates of greater equity and self-awareness everywhere regularly ask us to “check [y]our privilege.” To be aware of the subtle and omnipresent advantages one enjoys by being in a majority category or one that has traditionally enjoyed power or position. While this is not a reason to be biased against straight white males, per se, it quite clearly seems right to me for straight white males (or anyone in any one of those three categories) to consider that what they take for granted is not the experience of others and to make extra efforts to be understanding and inclusive of those born into a different category. And only the most defensive straight white males would be angry about being called out as belonging to those groups and being asked to consider how different it is to be otherwise.

You could argue that you can see white maleness innately, but you can't see wealth or class. For one, I think that's laughable on its face – wealth and class come out in the way one dresses, the things one does, the decisions one makes, the stuff one has, and often the way one talks about everything in society. For another, even if class were totally cloaked, sexual orientation is almost completely cloaked too, outside of witnessing relationships directly, and many people are quite successfully private about theirs. And the thing about the “check your privilege” standard is that it's not just something we rely on people to do for themselves. To keep people honest, it's often important for people to say that phrase directly to each other, to remind people who take something for granted and overlook it that they're in a different category, and to point out how that impacts what they're saying or doing in the context of others. “Check your privilege,” in other words, is kind of meaningless if it's on the honor system. It at times requires direct confrontation in order to be effective.

And maybe this is more the place of family and friends than of someone further removed, in order to be effective and not make someone defensive. That an outsider or someone distant asking someone to check their privilege is less effective or appropriate than someone one knows will love them at the end of the day doing the same. I'm mildly persuaded by that claim, but I think major public events cross the line into something owned and shared by a wider community, and that discussing this privilege and the desire to check it is a wider point of access. For example, if someone straight made a claim in a Nats Final that was clearly heteronormative, I don't think only their close LGBT friends could question them on that. I think it would be reasonable for anyone in the audience, gay or straight, to raise the issue in a public discussion.

But I also think that not talking about it is oppressive because it's a way of pretending that it doesn't exist. Quite simply, when it's deemed impolite to discuss something, everyone winds up pretending that things are not the way they are. And there may be places where this is in fact appropriate behavior, if the thing we're discussing doesn't really impact anything or would only be the source of some sort of cruel repercussion. For example, if someone has a disability or a handicap, it doesn't seem meaningfully important to point this out at every turn, because the ideal is that it should not affect that person's ability to compete or have access. However, if someone is wheelchair-bound and the round is in a place with only stairs, then it does seem reasonable to discuss. So the standard is probably whether the question of background does or might affect one's ability to compete or one's ability to access certain things. And I would argue that class and wealth impact literally every aspect of access. That they are so directly and proportionally tied to questions of access that it is like a question of how many ramps you have for your wheelchair.

How is this the case? Well, for one, having money and a particular societal status just makes things easier. It makes it easier to have stuff, to have flexibility, to have the freedom to be unconstrained by having to work, having to sacrifice time and energy to do certain things to enable the life one wants to live. But having money and being acclimated to a certain class also tend to make one's outlook on life much easier and more filled with possibility than that of someone at the bottom rungs of the socioeconomic ladder. Someone whose family lacks resources sees the world as less filled with opportunity and often has less access to opportunities than someone who is accustomed to getting what they want. And in a world where money and connections can actually often buy access, this only gets worse over time.

More perversely, in my opinion – and I understand that this is not a belief necessarily held by everyone who has wealth and/or privilege – the prevailing American ethos is that the people who are in higher socioeconomic positions deserve to be there. I recognize that a lot of people are trying to fight this perception at some level and that the financial meltdown of recent years did some good in combating this misperception about capitalism. But the vast majority of Americans still believe that wealth is correlated with effort and that people are rich because they worked harder than those who are not. And this is something that categorically separates issues of class, especially in America, from things like race or gender. No one would argue that someone is White and not Black because they deserve to be treated better in some way – the very typing of that text makes me cringe with how horrific and offensive it is. And yet those are precisely the types of assumptions that underpin class distinctions in society, especially for those born into their standing.

I'm not going to take the time to document the diminished social mobility that is true of contemporary America and especially true of any society with large wealth disparities. But it's pretty clear that mobility is highly limited in a society where the gap between rich and poor is widening daily, and that this reflects the old adage of the rich getting richer and the poor getting poorer. Which innately precludes many of the poor getting richer or the rich getting poorer. And everything in such a stratified society is structured to ensure that people continue to pursue the widening of that gap. Even in a world with a couple of exceptional billionaires like Bill Gates and Warren Buffett, nearly all the rich will seek to enrich themselves further at the expense of the poor, while the poor will be powerless to combat this trend, lacking the resources to do so. And even Gates and Buffett only changed their tune in recent years, after spending years in the capitalist melee trampling the little guy, be it rival businesses or consumers, so they could get ahead and enrich themselves. These are just sort of the rules of profit-driven capitalism, but they have a deep and real effect on everyone existing in the society governed by this framework.

What all this adds up to is that very few people who start out in the lower or middle classes will ever reach the upper echelons of wealth. But those who do will likely have to play the capitalist game to get there. Which is where another aspect of class, the one I find least controversial, comes in. Which is what one chooses to do with one's life – one's aspirational class, if you will. Which is where the teachers get separated from the hedge fund managers.

Now, I'm not trying to paint everyone working in a hedge fund with the same broad brush, and maybe I did a bit too much of that in my last post. You don't have to lie, cheat, and steal to work in a profit-obsessed firm that puts no stock in human feelings or the impacts on the bottom rungs of society. However, it's an environment where most people are fine with pushing the limits of whatever one can get away with, where most people are making decisions that create things like Enron or 2008 or bubbles or runaway compensation for people who do nothing that actually produces, creates, or enhances anything tangible in the world whatsoever. And, quite simply, it's hard to be a good man in a bad state. It's hard enough to care about anything in America writ large, between our distracted media and our obsession with money and our warmongering trashing of the rest of the planet. But it gets a lot harder when one self-selects into an environment where everyone else believes in the ruthless valuation of enrichment over people, values, or principles. And again, maybe not every hedge fund office or law school is like this. But most are.

And it happens insidiously, in the way that most oppression in America does. The phenomenon is all too common. Someone wants to go to law school to be one of the good guys, to stand up for the little guy. So they take out six figures of debt to cover the education that will help them be an advocate for the good. But then they have all this debt to pay off, so they work at a firm for five years. And at that firm, they represent corporations using their leverage and weight and ability to afford a talented lawyer to beat up smaller corporations or actual individuals and get away with violating their rights, because the legal system is a place where money can often replace truth. And while they do that, they may feel conflicted or stomach-churny, but they feel the ends justify the means and that they'll make up for it by standing up for the little guy someday. All the while, their entire peer group and surroundings are people with a different set of values, people who are unapologetic about their decisions, people for whom selfishness is the primary ethos. They get accustomed to this perspective, maybe tire of arguing for alternatives that feel especially hypocritical when one is representing Big Business in some capacity daily anyway. So slowly their conviction gets eroded. Meanwhile, they start getting used to a certain lifestyle, a certain amount of comfort and expectation of flexibility, mobility, access, stuff. And they start taking that for granted, having a hard time imagining going back to a harder life of sacrifice and discomfort when they and everyone they know now enjoys this comfort. So five years become ten, then twenty. Eventually they decide that it's just easier to ride out life at the big firm and maybe donate all their riches at the end of their life to some worthy cause. Meanwhile, they continue to perpetrate the harms on the little guy they went to law school to protect.

I can only imagine this story is played out even more often in hedge funds or other financial pursuits than it is in law firms. And it's pervasive in law and a huge part of why things don't change in this society. The instrument of debt ensures that those few people capable of leveraging talent and ambition into social mobility are hamstrung by their financial disadvantages into becoming part of the machine they might otherwise change.

So, a bunch of counter-arguments probably stem from this. One is that increased flexibility and options make it more likely that those in the higher classes actually resist the pull of debt (having no need for it) and other pressures and are more able to think and behave independently and stick to their liberal convictions, if applicable. Maybe. I certainly think that's possible for those who choose to avoid lucrative professions altogether. Certainly there are people who are well-off who intend to become public high school teachers or join the Peace Corps or TFA or work for lower wages in a non-profit. And those people are commendable for these choices. But the fact that many do not pursue these things seems to me like valid grounds for discussing or criticizing people who instead choose to be all about the Benjamins.

Another argument is simply to question everything I'm saying about the system of American capitalism and say there's nothing wrong with it, that rising tides float all boats and that growth and positive change stem from everyone ruthlessly pursuing their own self-interest. It's hard for me to engage with this argument because I find it so laughable and frustrating, but this may be at the core of the class issues I'm trying to illustrate. It's easy to argue about the engines of American capitalism and quality-of-life standards from the top. It's a lot harder to do so from the streets of the Tenderloin in San Francisco or other drug-addled, gang-ridden neighborhoods where opportunity is a four-letter word. Economics is ultimately a zero-sum game, and the pursuit of profit and greed creates vast inequities for those at the bottom, inequities that require either starvation and deprivation or a massive government safety net to try to keep those people alive. The quality of life and standard of living for most Americans have actually declined in the last five decades, since these things are mostly on a relative scale. You can watch things like this super-popular and insightful viral video to get a better sense of what I'm talking about. People rarely have any real conception of how great the wealth divides are in this country and how meaningfully that detracts from the life of the vast majority of people. And the culprit is not just capitalism, but unchecked faith in capitalism.

The final argument against what I've been saying actually takes me back to another debate round, another one involving Harvard that was the final round of a title tournament, one that was everything the Nats Final was not. This one featured C. and Josh, mentioned in the earlier post and here vaguely anonymized per their request, against a team from Hart House, the University of Toronto's debating society. The resolution was not chosen by the competitors, as it was a “tight-link” tournament where the tournament provides the topics, but I was told later that the four competitors were all debating the sides they personally and passionately believed. This was the 2013 North American Championship, and the resolution was that a humanitarian should choose a field where they will make the most money possible and donate money to charity rather than work directly in a less lucrative pursuit at a non-profit.

This round was excellent, and a clear win for Harvard on Gov. And while I have a lot of respect for the Hart House team, I think a lot of why they dropped was that they missed some of the best counter-arguments to the perspective endorsed in the resolution. They did question whether one would still donate as much money after a time or whether one would become disaffected and uncaring, to which Harvard responded by saying this was against the terms of the resolution. And I think that's half of the best argument. But I think the larger problem is whether one will still care about charity at all after a certain amount of time lived in a world where most other people are ruthless, selfish capitalists. Both sides in that round agreed that this would be the ethos of most of those surrounding someone in such a lucrative profession. And at that point, I think it's even less about getting accustomed to a certain standard of living or expectation of comfort. It's about being peppered constantly by a peer group that tells you, no matter how liberal and generous you are, that you deserve all your money, that you are better than other people, and that you should just be in it for you. That's one of the biggest problems with these class environments and how they self-select for ensuring that people are, first and foremost, guardians of inequality and the societal structures that perpetuate it.

Undoubtedly, not every class environment perpetuates this. Of course there are exceptions. In talking about phenomena, one must sometimes generalize in order to be talking about anything; otherwise the conclusion of every statement or post or article would be “Well, sometimes this but sometimes also that; things are complicated! Let’s go get a sandwich.” I would rather err on the side of something sweeping and thought-provoking that offers a direction than contemplative sandwich-eating while marveling at the world’s complexity. This is, after all, my blog.

But I think most class environments do perpetuate the things outlined above because it's just much harder for people from privilege to be aware of it constantly, to consider how their advantages affect others, and to constantly question or rail against everyone in their environment telling them that they deserve these advantages. And these privileges probably transcend the socioeconomic, though I think they're most pernicious there. Surely an outsider to debate might question the entire enterprise as us pressing our intellectual advantage and elitism at the expense of those unlucky enough to be born without such talents.

But that's precisely where I disagree and why I think it's so important for debate to be pro-intellectual but class-mitigatory (and -aware). Because debate and public speaking and rational thought are things that can be taught. Anyone from any level and any background can learn these things and be good at them. Many have doubted the truth of this statement, and I would like to think that I've helped prove them wrong to the extent that I've had any success at all on the circuit as a debater or a coach. And, unlike the pursuit of wealth or privilege, the pursuits of knowledge and rhetorical skill are more or less unmitigated goods. We would prefer a world where everyone tried to press their talents and intellect to the highest reaches. We would not prefer a world (or I wouldn't, at least, and I don't think you should either) where everyone based all their decisions on profit maximization and tried to edge each other out on those grounds.

And I know many of the Harvard representatives in the prior post would then say that their case was trying to be intellectually challenging and stimulating. I believe that many of them sincerely felt this was the case. A lot of what I was trying to do in yesterday's post was illustrate sufficient context to show why many, many people did not feel that way. Which has to do with history and tactics as well as class. Without 2005 and 2012 as backdrops, there's no way that 2013 would have been perceived the way it was.

So all of that prompts questions about where the lines are between what intellectual rigor is and isn't, how much access people should have, and the knowledge and ability needed to keep up with speedier discussion. I'm not looking to entirely rehash yesterday so much as explicate some of the more controversial stuff and why I went there. So we'll leave that for another time; it's probably better placed in quieter 1:1 discussions.

My point is merely to say that we all know class is there. We can all see its vast manifestations, how it comes across in the sense of entitlement and privilege of many people, the access they have, the expectations they have about their future and how those contrast with the expectations of others with different upbringings. And this diversity of background, approached with the right attitude, is an asset to be celebrated and explored and examined carefully. One of the great things about APDA is that it does bring people, like the best college experiences, from widely differing monetary (and other) backgrounds together and shoves them forward into a marketplace of ideas. But we are doing ourselves a disservice if we ignore this diversity or deem it impolite to discuss, even in its personal manifestations. It is the failure to question profit as an end-all and be-all motive that has enabled the vast escalation of wealth disparity in our society. If we fail to point out how class colors our perspectives and access now, we are only magnifying the harms of past mistakes and dooming ourselves to a future where we can't consider or correct the increasing divides between us.

by

The Problem of Chronology

Categories: A Day in the Life, It's the Stupid Economy, The Long Tunnel, Tags: , ,

Perhaps the biggest problem with life is that it is lived in order.

There are a lot of trite aphorisms with strong grains of truth along these lines. Youth is wasted on the young. If I knew then what I know now. But these are actually not the kind of chronological disorder that I’m focusing on herein. Rather, they actually rely on the same faulty logic about chronology that plagues all of our behavior – namely, that things get better. That we get better, smarter, wiser with time. That somehow experience is a teacher that never dozes off or skips a class or berates us too harshly for us to learn properly, but that we always, in some rigid linear fashion, learn a progressive amount from what we have been through.

In the words of friend and former debater Farhan Ali, “I call bullshit on this.” (It sounds somehow softer and more lovable in Farhan’s vaguely Kuwaiti Indian accent.)

The other flawed implication of the linear age model for improving our viewpoint is that it ignores the lenses of the various phases in the cycle of our lives. We assume that adults are wiser than predominantly more impulsive teenagers, that even teens have a leg up on wide-eyed naive children, that those retired or retiring from life have more insight into it than those in the throes of work, that those closest to death have the most understanding of living. There are certainly individuals who follow these guidelines, but it seems to me there are just as many exceptions. Moreover, it seems highly problematic to just assume that the later stages of life provide more understanding of it when that understanding is imperiled by the unique challenges of that phase of existence. Older age, for example, is a time plagued by diminishing control over one's body, when people tend to become markedly less sure of things and start to feel the age-old optimism of ever-upward improvement shift in themselves. These processes are inevitable and cannot be blamed on those experiencing them, but at the same time they directly contradict the notion of linear upward growth in quality of life and perspective. It is ludicrous to expect people in this phase of their life to be eternally growing in optimism as their body and future expectations start to turn against them, yet the assumptions of chronological trajectories do just that. And thus we somehow expect the older generations to reach the pinnacle of hope and elation just as they realistically have to come to grips with being on the downslope of their own time on Earth.

Worse still, the acknowledgement of any downslope too often becomes tantamount to an unmitigated disaster because we are unable to accept retraction, retreat, or reversal without getting histrionic. I'm as guilty of this as anyone, to be sure, but it's inherited from a culture that makes us choose between growth and bankruptcy, that encourages us to live at the absolute furthest extent of our means whenever our own income or worth expands, that will offer us credit and loans beyond our means at the first sign of prosperity. It's not merely that we inculcate a belief in growth and linear progression that causes problems, but that we instill an even worse inverse perception that going backwards might as well be utter collapse. No wonder so much of our society turns to drugs (prescribed, illegal, or the worst of all of them, alcohol) at the first sign of failure or hardship. We can't cope, by and large, with the idea that the future may be worse than the past, that any year ahead of us could be something other than our best year ever. People are quick to respond to deaths and divorces, things that should be recognized as devastating setbacks, with placatory words of optimism and things to look forward to. I know I dealt with this and it made me sick. A healthier, more reasonable acknowledgement would be to recognize that the next year or two will probably be the worst of your life to date, which still may not be the worst of all time. But that this does not mean having to give up, because life is an unpredictable wave, not some unflinching pyramid to be climbed at the steepest possible angle. I tried to share this kind of viewpoint with other recent divorcees I reconnected with, and they seemed staunchly resistant to it, unable to acknowledge that life can get worse sometimes, trying instead to echo their friends who put it in a context of being for the best somehow. It was frustrating.

And I get it, in part. I think people want to keep hope alive. And hope is a very good thing, “maybe the best of things.” Truly. But hope is as far from the expectation of linear improvement as practice is from the assumption that one will be the next Michael Jordan or Barack Obama. And only an idiot would tell the middle school kid shooting free throws in the gym or the debater staying late after practice to go over her case one more time that they are destined to attain the highest heights imaginable. Yet that's the mythology that creeps into us when we expect linear progression, when we expect that everything always gets better. And the radical disappointment from falling short is tantamount to living one's whole life in regret that one isn't always at the very top. In other words, it's a way to manufacture unhappiness and feelings of failure from all kinds of things that should reasonably amount to a life well lived.

Which illuminates perhaps the most insidious problem of chronology in our life, which is that it clouds and overrides the past. The past can so rarely be appreciated for what it truly was, instead being overshadowed by the context of the precise, fleeting moment of the present. The entire past winds up as a giant stepping stone for wherever one is at the second one chooses to look at it, and the understanding of what those days and months and years meant entirely changes based on whether times are good or bad now. If one is currently satisfied and feels one is on the expected conveyor belt toward a better future, all manner of disaster, transgression, and pain can be overlooked as mere necessities on the road to bliss. Whereas if one is currently holistically dissatisfied and disappointed, even the highlights of the past are a jumble of wrong turns and missed opportunities. Even those who put it in the proper proportion and perspective, more even-keeled souls, have a tendency to mute the effects of the past, to under-appreciate it or put it in a box of less experienced decisions made when one had so much to learn and grow from.

It is precisely this perspective about the past that I seek to question. Just because we march through the days one at a time in irreversible lockstep does not mean that we are moving forward. And to truly understand and appreciate our lives for what they are, I believe it's crucial to mine the past not just for opportunities to learn more and gain a greater perspective, but for actual insights into when we might have been better people, when, in the past, we might have understood more about the world. Experience is capable of teaching us, but it's also capable of corrupting us (see yesterday), of ruining our spirit, of afflicting us with trauma and paranoia and the wrong motivations and perspectives that can take years to undo just to get back to zero. We pick up addictions, we fall into groups of people that mislead or distract us, we start spending our time poorly and lose our way. If we recognize these as mistakes instead of insisting that we are all in some sort of linear fairy tale, then we can harness the power of the past to right our path and (yes) try to improve over time. The barriers to understanding this somewhat obvious reality of life are part of why it's so hard for people to break down and go to rehab when they need it, why it's so hard for so many people to admit fault for things and just apologize properly. The idea that we have gone backwards at any point in our life is anathmetic (a word I think I've made up, derived from anathema) to our perspective on our own human spirit. We think that to admit a setback is to crush hope, as though improvement in a linear, permanent fashion were the only possible option and anything standing against that meant we were either broken or on an infinite downward spiral, as though motion were always perpetual and only ever moved one way or the other.

The truth of life is always muddier and more complicated and more unpredictable than we want to make it. It’s absurd to think that we grow endlessly. That’s cancer, that’s steroids, that’s capitalism. We change endlessly, to be sure, but it’s just as likely to be for the worse. And only through understanding that can we not only keep ourselves from being bad people, but we can keep ourselves in perspective. Yes, of course the goal should always be to find hope and improvement. But sometimes that might require recognizing that, say, 1998 was our best year, that maybe this year is our worst, and that this knowledge and understanding can be harnessed to parse what is most meaningful and valuable about life to try to improve next year. Maybe not to make it our best, or even better than this year, but to make it better than it would have been if we just kept assuming that years would somehow unflinchingly get better over time without our help.

by

Cliff-Jumping

Categories: A Day in the Life, It's the Stupid Economy, Politics (n.): a strife of interests masquerading, Tags: , ,

It's really hard to tell whether there's an actual crisis underway in the US federal government over sequestration. The original title of this post was going to be “Sequestration Now!” and it was nearly written as many as five months ago at various points in history. History cannot be undone, any more than money can be unspent or elections can be unwon. We have the government we have, and most of us had about as much choice over that as we did about whether our neighbor bought a gun, or about whether they or anyone else we know chose to use it. For all our talk about freedom in this country, a world of seven billion people does not afford us much actual discretion over how our lives go.

There is not a lot of discretion used in the so-called discretionary spending of the federal budget. Of the $1.277 trillion ($1,277,000,000,000.00) spent by this entity each year, $712 billion ($712,000,000,000.00), or 56%, is spent on “defense”. Defense being one of those euphemisms like “pro-choice” (there's that choice issue again), “pro-life”, or even “fiscal cliff”, a term used to refer to the edge of something that would probably be great for America. In our language, we are so accustomed to embedding a viewpoint that we don't even think of it cognitively anymore. We merely accept the nature of our slanted universe and try to amble awkwardly toward our destination without getting seasick. No wonder so many people choose to rebel against the order without words; the words seem corrupted before we begin.

Given that there's already a 56-44 split in discretionary spending toward guns and bombs, it actually seems rather unfair that only 50% of the sequestration cuts would be made to defense. The cuts themselves are a mere 9% of the total budget, hardly a drastic reduction for a private spender to contemplate amidst a financial collapse. And despite the fact that they would heighten the split advantage for attacking people, I'm still in favor of the cuts going forward. Some reduction in death is probably favorable to no reduction in death, or so the media seems to represent that people believe these days.
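
For the curious, here is a quick back-of-the-envelope check of those figures (a minimal sketch in Python; the dollar amounts and percentages come from this post, and the outputs are rounded):

```python
# Back-of-the-envelope check of the discretionary-budget figures cited above.
# All dollar figures are in billions and come from the post itself.

total_discretionary = 1_277   # total discretionary spending
defense = 712                 # "defense" spending

defense_share = defense / total_discretionary
print(f"Defense share of discretionary spending: {defense_share:.0%}")  # ~56%

cuts = 0.09 * total_discretionary   # sequestration cuts: "a mere 9%" of the budget
defense_cut = 0.5 * cuts            # only half the cuts fall on defense

# Defense's share of what remains actually rises slightly, since it absorbs
# only 50% of the cuts while making up 56% of the spending.
post_cut_share = (defense - defense_cut) / (total_discretionary - cuts)
print(f"Cuts: ${cuts:.0f}B total, ${defense_cut:.0f}B from defense")
print(f"Defense share after sequestration: {post_cut_share:.1%}")  # ~56.3%
```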

The nature of the fiscal cliff and the allegedly radical sequestration cuts that were proposed to force compromise are reminiscent of what a parent does to an unruly child. If you can’t get along with each other, then both of you will lose something you dearly want. One could argue the cuts don’t go far enough, that all of the dessert should be taken away, at least for a while, but I suppose 9% is about as much as one could hope to take from those who have everything. It remains to be seen whether even this sort of third-grade punishment will work on a Congress so detached from the realities of everyday America that their approval rating is competitive with the percentage of those cuts they’re trying to impose on themselves. Maybe they should try grounding themselves next. It’s hard to take cuts seriously when your first stop after budget negotiations is a foreign island or a Swiss ski resort. After all, only the little people pay taxes, which is why we’re facing this kind of precipice in the first place. Cuts to other people’s livelihoods, salaries, and programs must still feel rather remote compared to the bottom-line of the account safely secured in the Caymans or Delaware.

Nevertheless, it's worth noting that sequestration would have a significant impact on the American psyche, if not the actual numbers. Everything is always supposed to grow in this country, under this economic regime, making any sort of cut feel like a direct personal insult to our individual sense of entitlement. The notions of being responsible, of restraint, of imposing restrictions on oneself and then following them have very little to do with the American Dream. We're supposed to be bigger, better, stronger, more reckless and ruthless. Tithing to the god of fiscal responsibility is a dramatic step back from such lofty goals. It might force people to recognize that zero growth is the future, that living within our means is the metaphor for the twenty-first century, if there is to be a full century. It offers some hope that the only voluntary cuts we make will not be in the classroom and will not originate from the barrel of a gun.

No, it will not result in a change in elections. The approval rating of Americans for their Congress peaked at 21% in 2012, right before they re-elected 90% of the incumbents who were running to return to their jobs in Congress. This is not cognitive dissonance so much as proof that the system is rigged to offer no choice, no discretion, no option for real or lasting change. There's gerrymandering, the two-party system, cynicism and entrenchment, corporate sponsorship, the desire to vote for a winner, and a whole host of issues I rail about here from time to time. In sum, calling our elections a democracy is not, at this point, all that different from calling Mubarak's Egypt a democracy. Elections are held, people vote, their votes are tallied, and none of this in any way resembles a process by which individual preferences would create some sort of government. The way an objective history or even a contemporary outside perspective would describe the status of the American experiment is so radically different from the way we see ourselves that it may actually defy gravity. Self-awareness is not really a featured highlight of American exceptionalism. It's not something we compliment in our daily culture. We find the delusion of grandeur lovably entrepreneurial, while knowing one's limits is somewhat trashy and banal. This is the culture that created the reality TV star while shunning those who urge caution and honesty.

A fitting mascot for the US’ current trajectory, in the context of the world, might be Don Quixote. But someone made the mistake of arming Quixote with nuclear weapons, armored vehicles, and the world’s largest military budget to throw at those windmills. The windmills are no more of a threat than whatever the US is fighting, but down they go all the same. It seems to be a time for attacking those most vulnerable, those least likely to pose a threat.

And I'm not really referring to the shooter in Connecticut who opened fire on an elementary school, though of course I'm referencing it indirectly. At least 176 children have been killed by drones in Pakistan alone since 2004. Is there a reason we find this less horrific than what happened in Sandy Hook? There's racism, I suppose, since most of the Connecticut kids were white. There's nationalism, for sure, since they were all Americans and most of us can't place Pakistan on a map or name one fact about it other than some vague notion of it being Muslim and therefore an enemy. Religious prejudice then, too. And I guess some sort of institutional-versus-individual distinction. If one person comes up with the idea of slaughtering a school full of children, we're horrified, as long as that person doesn't serve in some sort of official role with the government or its “defense” wing. Slap a uniform on Adam Lanza and he becomes a real American hero.

Oh, I know, you're saying there's an intentionality issue too. Lanza meant to kill kids, while Obama only means to kill those who would somehow kill us. But doesn't that miss the point? Isn't it actually kind of worse to just happen to slaughter 200 children as a byproduct of some goal you assure us is lofty than to intentionally kill 20 of them? I'm not defending the shooter any more than I would defend any committer of violence, but if we're making a comparative argument, at least he did precisely the damage he intended, which sort of recognizes a certain dignity to human life, however much he violated it. America is so indifferent to the wake of its damage that it assures us that 176 children couldn't possibly matter, since they aren't citizens of our country. We think, as with imposing the fiscal cliff on ourselves, that grossly disproportionate response is the only response, which is how we justify causing hundreds of 9/11s in other nations in response to the one we experienced.

The problem, in part, is that the US doesn’t know who its enemies are. We assume that they must live in foreign mountains, must sail from ports abroad. They are the expendable children we can send robots to exterminate before they even know that danger is pending. No time to hide or huddle or seek an adult for comfort before their body parts are scattered to the four winds. We do this. Our tax dollars. Your flag that you stand and salute represents the maiming and killing of hundreds of children. This is your contribution to the world.

And for what? So that we can tote our guns and feel superior. We are the best people in the world, we who turn the gun on ourselves. We are our own worst enemy, our only enemy, somehow, the only one who truly means us harm. It is our own children who grow up to pose the threat to America. And in some twisted self-referential vortex, maybe Adam Lanza knew that and took the drone strikes home to our own children, decided to be the robot and short-circuit the cycle against a future enemy within. And before you click the red X at the sickness of what I’m suggesting, you should know that I’m not defending or justifying his acts of “national defense”. That’s the whole point. There is no defense of this defense. There is only offense and offensiveness and horror. This is the end result of chasing enemies with firearms and firepower, of looking for the threats and wiping them out.

Life is a threat. Existence implies an end to that run. Chase the enemy long enough and you’ll shoot the mirror. The only one responsible for your own mortality is you, because you were born.

If Adam Lanza had been sitting in an Air Force office in Nevada and pressed a button that dropped a bomb on a Pakistani elementary school, killing 28, you would salute him in the airport as you both flew home to see your families. You would thank him for defending you against those horrible brown people who must be posing some sort of threat. You would honor and revere him, so grateful that he did what he did.

We are the bad guy. We are the world’s bogeyman. We are the ones who make it bad in the name of good.

Only when we stop being everyone else’s enemy can we begin to consider no longer being our own.

by

Unemployment Ticks Up to 13.6% in August

Categories: A Day in the Life, It's the Stupid Economy, Tags: ,

The actual unemployment rate ticked up in August, to 13.6%. It was at 13.5% in July. The rate was last at 13.6% in August 2011, while July 2011's 13.8% remains the highest since the so-called “Great Recession” began in the United States of America.

Here is a graph of unemployment since January 2007, plotted alongside the officially reported BLS unemployment rate, which showed a 0.2% decline in August – a stark contrast with the actual 0.1% increase:
Real unemployment and reported unemployment, January 2007 – August 2012.

The methodology for this was explained earlier on this blog and involves deriving the reduction in the labor force from the BLS' own data. As the labor force continues to plummet, the invisibility of the unemployed correspondingly skyrockets. Even expanded measures of unemployment such as “U-6”, which claims to include “discouraged workers”, fail to capture the actual depth of the unemployment question as it currently stands. Most expanded measures show higher overall rates of unemployment, but they damningly report similar shapes of unemployment movement, most of which show great improvement in 2011 and 2012.

As is clearly visible from the above chart, however, this is not the case if one counts people who would have been considered part of the labor force in 2001 as part of the labor force now. That doesn't include all the people who have aged out of the labor force in the ensuing decade, but it does include the graduating high schoolers and college students replacing them, who are (of course) notoriously unemployed. Most of these individuals have never held a job and thus have never been considered part of the labor force, which explains the massive decline in the size of what is considered the labor force as a percentage of the American populace aged 16+.

In early 2001, 67% of the population was considered part of the labor force, a reasonable benchmark for a country where capitalism is briefly and illusorily “working”. In August 2012, that figure hit a Great Recession record low of 63.5%. Because a smaller labor force not only leaves out many unemployed people but also shrinks the number of people who need to be employed within it to make the economy look healthier than it is, the gap between reality and the reported reality is growing at an alarming rate.
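
For anyone who wants to reproduce the arithmetic, here is a minimal sketch (in Python, using approximate BLS head counts for August 2012; the 67% benchmark is the early-2001 participation rate cited above, and treating the official labor force as the denominator is my assumption, chosen because it matches the figures in this post):

```python
# Minimal sketch of the "real unemployment" arithmetic described above.
# Head counts are approximate BLS figures for August 2012, in thousands.

POPULATION_16_PLUS = 243_566      # civilian noninstitutional population, 16+
OFFICIAL_LABOR_FORCE = 154_645    # working or actively seeking work (63.5%)
OFFICIAL_UNEMPLOYED = 12_544      # jobless and counted in the headline rate

BENCHMARK_PARTICIPATION = 0.67    # early-2001 labor force participation rate

# People who would be in the labor force at 2001 participation levels but
# have dropped out, and so vanish from the headline rate entirely.
implied_labor_force = BENCHMARK_PARTICIPATION * POPULATION_16_PLUS
missing_workers = implied_labor_force - OFFICIAL_LABOR_FORCE

real_unemployed = OFFICIAL_UNEMPLOYED + missing_workers

reported_rate = OFFICIAL_UNEMPLOYED / OFFICIAL_LABOR_FORCE
real_rate = real_unemployed / OFFICIAL_LABOR_FORCE  # denominator assumed

print(f"Reported rate: {reported_rate:.1%}")              # ~8.1%
print(f"Real rate:     {real_rate:.1%}")                  # ~13.6%
print(f"Reporting gap: {real_rate - reported_rate:.1%}")  # ~5.5%
```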

This chart shows the size of this gap between what the BLS reports officially and what its own data, when considering departures from the labor force, illustrates:
The Reporting Gap between real and reported rates, 2007-2012.

Scary, no? In 2008 and 2009, the BLS was basically reporting the terrifying free-fall picture of the economy as it tumbled in its haste to eliminate jobs to restore some semblance of profitability. But in late 2009, tons of the people who had been out of work for a while left the labor force, never to return, making unemployment look like it had plateaued and generating talk of a recovery. Reported unemployment peaked at an even 10% in October 2009, but quickly started falling thereafter, never to reach the dizzying heights of double-digits again.

Arguably, in early 2010, there was a tiny little recovery according to both of the above charts. From December 2009 to April 2010, real unemployment declined from 13.6% (where it stands today) to 12.8%. The reported rate hovered around 9.9%, mostly because people were re-entering the labor force as some actual jobs were created and they tried to get them. By October, however, we were back up to 13.5%. But the media was reporting 9.5% and talking of green shoots. By March 2011, the reported rate was below 9% for the first time, but actual unemployment was 13.3%, essentially flat with the record highs of the period. The stock market continued to rally on word of the recovery, one that all reporters who went out and talked to real people were surprisingly unable to find.

In fact, let's plot the Reporting Gap against the stock market since it bottomed out in March 2009:
The Dow Jones Industrial Average plotted against the Reporting Gap, March 2009 – August 2012.

And I think that’s officially the most terrifying thing I’ve ever put on a graph. No, it’s not perfectly correlated. But the Dow Jones takes into account a couple more things than unemployment. Interestingly, that early 2010 period when unemployment was actually better than it was being reported (at least on a directional level) was where there was the least correlation. At the same time, arguably the entire recovery and perception that the economy is improving, that US equities are worth something again, that everything’s going to be okay in the medium-term and (of course!) the long-term, all of the history of the US economy since March 2009, is based on a lie. Or at the very least, a subtle oversight.

Keep in mind that this gap has nothing to do with the “under-employed,” itself a meaningful data point of people working a few hours instead of full-time. Not one person in this gap is under-employed. They all have no work, or at least no legal work. Not one hour.

The gap is currently at a record-high 5.5%. For context, that was the entire (reported) national unemployment rate in May 2008. Or the entire (real) national unemployment rate in January 2007, when the reported rate was 4.6% – the last time the gap was below 1%. We are currently under-counting the unemployment rate by the entire unemployed population of January 2007.
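
As a rough sanity check on that comparison, here is a sketch of the head counts involved (the January 2007 figures – a labor force of roughly 153 million at the 4.6% reported rate – are my approximations of the BLS data, not numbers from this post):

```python
# Rough head-count version of the comparison above, in thousands of people.

LABOR_FORCE_AUG_2012 = 154_645
REPORTING_GAP = 0.055                  # the record-high 5.5% gap

hidden_unemployed = REPORTING_GAP * LABOR_FORCE_AUG_2012   # ~8.5 million

# Approximate January 2007 figures (my assumption, not from the post):
# a ~153 million labor force at a ~4.6% reported unemployment rate.
unemployed_jan_2007 = 0.046 * 153_000                      # ~7.0 million

print(f"Hidden unemployed, Aug 2012: ~{hidden_unemployed / 1000:.1f} million")
print(f"Total unemployed, Jan 2007:  ~{unemployed_jan_2007 / 1000:.1f} million")
# The hidden unemployed of 2012 outnumber everyone out of work in early 2007.
```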

Numerically, everyone who was out of work in January 2007 is currently so far out of work that they cannot even be counted by the BLS. They aren't trying to work; they may never work again. And meanwhile, the entire American public's perception of the economy is what is slipping through that gap.

I have been saying for a long time that we’re moving toward a post-work economy, one where the model of making a wage for employment is archaic and only viable for the few. It looks now much more like we’re hurtling toward that reality. With blinders on so we don’t see the cliff until we’re midair.

by

Derivatives

Categories: A Day in the Life, It's the Stupid Economy, Politics (n.): a strife of interests masquerading, Read it and Weep, Telling Stories, Tags: , , , ,

Perhaps you saw “Abraham Lincoln, Vampire Hunter” in theaters this summer. Or you’ve read Pride and Prejudice and Zombies, the best-selling novel. Maybe you’ve seen or read one of the millions of derivations or mash-ups or sequels or post-scripts or pre-scripts to already established works out there in the American culture.

No? This is what it looks like…

It's a real thing. I couldn't make this stuff up. In fact, no one could!

It looks like that, a cover like that, but it also looks like the death of a culture. America in particular is a place that has always prided itself on its creativity, its ingenuity, its ability to come up with novel (pun intended) solutions to complicated problems. This is the birthplace of so many innovations and inventions and “outside-the-box” thinking that have been the precursors to the wealth and riches we lord over the rest of the world.

But things have changed lately. In the hunger for money and the desire to turn every pursuit into a business model, originality has been sacrificed in favor of the sure bet. After all, originality also brought us credit-default swaps and toxic assets, right? Publishing houses and agents used to seek dynamic, exciting, original writers. Now they want to know what your “comps” are, books so similar to yours that they prove there's a market for what you're trying to write. A market, not because it's good writing, but because they've already liked a book exactly like yours. I used to shudder in fear that someone would scoop my unwritten plots and take the limelight of creative inspiration I'd cracked open or been lucky enough to tap into. Now I welcome the realization that the plot of American Dream On has enough thematic similarities to The Hunger Games that someone might believe I was riffing on it, though I wrote it before that book's publication. (To say nothing of the widely reported notion that said book was just a rip-off of an earlier Japanese movie that matches its major plot points almost exactly.)

This is perhaps not a surprising trend in a country racked with economic woes after a dream of endless prosperity, nor especially in a land so obsessed with safety and certainty after one terrorist attack that it is willing to attempt to subjugate the rest of the world and its own citizenry just to avoid the possibility that 3,000 people could die all at once again. Not surprising, maybe, but remarkably disheartening. The best balm for the recent hardships of the nation, one would think, would be originality and creativity. But as Congress displays a patent inability to compromise and potential Presidents continue to present a rematch of rejected 1980s theories, there's a vast dearth of variation from an ever-predictable norm. It's no wonder that nearly every Hollywood movie slated for creation is actually a recreation or a sequel. And we continue to buy and absorb this rehash, just as we accept the two major parties' offerings every four years. Because we haven't the money to make a choice, and we're not in the top corporate offices where these decisions are being made.

But the snake is eating its own tail. There’s no evidence that this desert of good new material is insidiously brought about by maniacal corporate officers so much as that the system itself incentivizes them to favor the sure bet over the risky original proposition. And the consumers have only the power to choose between Tweedle Dum and Tweedle Dee, reaffirming the apparent wisdom of risklessness. And on the cycle continues.

It wasn’t always like this, however. American writing and movies have circled the globe, gaining recognition for their depth, insight, creative power, and new ways of looking at the world and its inhabitants. So what happened? When did we go from making new things to stretching into a fourth movie the same animated plot that probably wasn’t enough for a first?

There are a lot of contributing factors, some of which I mention above, but I think the biggest and best explanation is a phenomenon I’ve observed in countless manifestations, from people to non-profit organizations to historical nations. It’s something clearly embedded in our human nature, but fighting it may be the last best hope for breaking out of molds that earn their name by entrapping us in stale, decaying thinking. It seems American creative culture and its would-be admirers have crossed over the tipping point, from feeling like they have more to gain from the future to feeling like they have more to lose.

This single concept, the question of whether the future is about potential and benefits (which encourages risk-taking, bold thinking, and dramatic action) or about the possibility of loss (which encourages defensiveness, safeguarding, shoring up, and sitting tight), probably affects more of our daily lives than we would like to think. This is what makes recessions so deep, and what can make poverty so liberating with the right mindset (but realistically makes poverty so debilitating). This is what makes people who grew up bungee-jumping and horseback-riding afraid of leaving their houses for weeks at a time as they age. This is what turns liberals into conservatives when they become successful. It’s what turns revolutionaries into tyrants. If we could pull a lever and prevent someone from ever tipping over this apex, mandate that they always feel they have more to gain from the future than they have to lose, we would cure uncounted social ills and political pitfalls.

Alas, defensiveness is not so easily cured. Many people have an enormous amount of wealth, power, influence, and comfort stacked up, especially in this country. They chronically fear someone coming to take it away, be it in the form of regulation, taxation, theft, extortion, nationalization, or pure greed. Even if they don’t really like what they have, even if what they have fails to provide them happiness or any other higher good, they will defend it to the death if they think they have more to lose than they do to gain. It’s in our nature to hoard and protect when we are fearful or even cautious about the times ahead. It’s backed by millennia of evolution and reinforced by centuries of history.

Incidentally, this is why banks aren’t lending money and the rich aren’t hiring people. And why those things will persist for a long time to come, perhaps as long as this country persists. No one has more to lose than the banks and the rich, almost tautologically. The banks can continue to get free money from the government as long as interest rates stay low, so there’s no incentive to take the risk of a loan. And the rich don’t need to “spend money to make money,” because they already have money. So those tax breaks and cheap loans just go into their back pockets as they hunker down more closely over the piles of coin in the counting house.

Believing that there’s more to gain than to lose is about more than trite platitudes about happy days or mornings in America or popping anti-depressants. It’s about a belief that one hasn’t attained that much, or enough. And most often, that isn’t measured in material goods so much as in notoriety, recognition, or true accomplishment in terms of changing the world. This is precisely why revolutionaries so consistently flip into oppression as soon as they get into power, or within just a few months of it. The turnover from having nothing to having everything is so fast that they literally don’t know what to aspire to anymore, even as they immediately become accustomed to having more than 99% of people have ever dreamed of. Those who have more to lose than to gain are terrible leaders, ever watchful and fearful of being criticized, unseated, disregarded, of losing the power and influence they (feel they) worked so hard to gain. It’s the hungry and desperate who provide the ingenuity and spark necessary for true leadership.

So how do we hold the imaginary carrot a few yards out in order to make ourselves run for it? The key is complicated, but I think the most accessible answers lie in two essential areas. We must first embrace a certain healthy amount of dissatisfaction with our present affairs, whatever they may be, and, correspondingly, we must become comfortable with change.

The latter could fill a whole volume of material (and I believe it does, perhaps floor-to-ceiling volumes, as nearly the entire Self-Help section of any bookstore is really just “get comfortable with change” in long-winded and bound format, rephrased over and over in the hopes that someone might listen). Nevertheless, the point bears repeating that change is the only constant, and resisting it is as foolish as fighting a gale with saliva. Just the other day, my new boss told a roomful of people, myself included, that he’s looking to produce a line of T-shirts with the slogan Embrace the Uncertainty. It’s a powerful message and one I took to heart, especially as he expounded on the need not to freeze in place with the entire class of 2016 inbound, students thinking less about the pressures new leadership might exert on a university than about the fact that their college careers (and, by extension, their lives) are about to start.

I’ve always felt more at home with uncertain futures and changing venues than most, but the last three years of this blog alone could tell you that I’m no guru when it comes to accepting whatever life surprises you with. This is a struggle for all of us by virtue of our humanity; it’s why so much advice for the species is so simple and, dare I say it, derivative. Embracing uncertainty, welcoming change: it’s hard. It’s like waking up young in the dreadful night, envisioning the monster under the bed, then jumping from above to tackle-hug it and give it a sloppy kiss. Or, put another way, it’s like loving your neighbor no matter what they do. It’s one of those really challenging near-impossibilities, especially when you have stuff or people or circumstances in your life that you like. It takes so much work and energy to find things you like, be they pastimes or cohorts or jobs or places, that losing or altering them seems a fate worse than death.

Which brings us to the first part, the somewhat easier bit: healthy dissatisfaction with the present. This is easy to get carried away with and, despite what you may think, I’m not about to launch into a call to depression for all readers. Rather, it’s important to be a critic and a skeptic of one’s own choices and the path they’ve wended. Not to the point of self-recrimination and self-doubt, unless those are truly warranted, but sufficiently so that one is able to craft an aspirational trajectory for the future.

This is extremely counter-intuitive. Almost all of us set happiness, however we define it, as the final goalpost. And no matter how we define it, happiness consists in feeling full and satisfied, like there’s nothing more one needs or wants or has to strive for. Contentedness, comfort. And yet this feeling is, itself, a form of death. No, really. Because at the point where one is comfortable, one doesn’t want to move. And if one doesn’t move, how can one find anything interesting that one hasn’t already found?

Imagine you’re in a chair. And your chair is uncomfortable, rotting in the seat, prickly in the back, set at the wrong angle. You get up! You’re motivated to find a chair that’s not as painful. You’re ready to look around for a while, maybe leave the house and go to stores or yard sales or junkyards till you find something manageably sittable. Maybe you go through 5, 7, 18 chairs. And then, glorious then! Then you find the chair that’s comfortable, has the cushioning in the right place, well-angled armrests, the whole bit. What happens next?

You fall asleep.

And you don’t go traveling again, because the opportunity cost is time in this chair.

That chair is happiness.

Don’t get me wrong, it’s nice to sit in that chair. I have a real-life chair much like this at home, and I spend a lot of time in it. I’m not getting rid of it (though I’m open to a future, or trying to be, in which I don’t have it anymore). I would never tell anyone to just make do with the first cruddy chair or to stop looking for a nice one.

But we also can’t sleep away our time and potential in the comfy chair. Because then life becomes the story of sitting instead of exploring, doing, interacting, being. And that, my friends, is not what life was designed to be.

Life is about the journey. Maybe the rest of the self-help books are about that. You know what else is about that? One of my favorite movies of all time, “Finding Nemo.” Which they’re re-releasing (now in 3D!) in a month, in theaters. Because they can do that now. Spruce up a movie that’s already had its day in the sun (or, I guess more accurately, the refrigerated shade) and release it to watch while you’re wearing glasses. For more money.

Because it’s derivative.

And I’ll plunk down my fourteen bucks or whatever 3D movies cost these days and recite the lines I know by heart and bob my head with the turtles and shudder at the sharks, along with a bunch of much younger kids who don’t know how old this magic is. Who feel, unlike almost everyone else in the theater, that maybe, just maybe, they have more to gain than to lose from living into the future. Maybe they’ll have the creative solutions.

Or maybe they’ll grow up to write Finding Nemo in Abraham Lincoln’s Vampire Civil War. And oh, what a hit it will be!

by

Follow-Up: No Effects of Aging

Categories: A Day in the Life, It's the Stupid Economy, Politics (n.): a strife of interests masquerading, Quick Updates, Tags: , , ,

A couple days ago, I posted at length about jobs, where they’re going, and the hidden unemployment rate. I promised to hop back onto the BLS website, a wealth of statistical information about our country, and look up the effects of the much-discussed aging populace on the sapping of the labor force in America.

I expected that I might find something to mitigate the alarming findings of my last post, among them that the unemployment rate is actually closer to 13.5% than 8.3% and, more importantly, that the rate is near its peaks for this recession with no signs of ebbing. I expected to find evidence that the aging population was responsible for a decent chunk of the people fleeing the labor force and thus not getting counted in the unemployment figures traditionally discussed by the media in this nation.

Instead, I found this:

Seniors (65+) not in the labor force, as a share of the civilian non-institutional population aged 16+.

What this graph shows is the number of seniors (aged 65 and above) who are not in the labor force, as a percentage of the entire population that the BLS counts in its labor survey (non-institutionalized civilians aged 16+). Keep in mind that the share of that population in the labor force has crashed by over 3 percentage points during this recession (or at least since 2001), so if that decline were mostly about aging, you’d expect a big uptick in this number. Or at least something visible.
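To make the arithmetic behind the chart concrete, here is a minimal sketch of the ratio in question. The two levels below are hypothetical stand-ins for the underlying BLS series, not actual data:

```python
# Hypothetical stand-ins for two BLS series (not actual data):
# seniors (65+) who are not in the labor force, and the civilian
# non-institutional population aged 16+.
seniors_not_in_labor_force = 30_500_000
civilian_population_16plus = 243_000_000

# The ratio plotted above: non-labor-force seniors as a share of
# everyone the BLS counts in its labor survey.
share = seniors_not_in_labor_force / civilian_population_16plus * 100
print(f"Seniors not in labor force: {share:.2f}% of the survey population")
```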

There is a rise in late 2011 and 2012, but it’s almost imperceptible. The overall movement across this entire chart is about half a percentage point. And while non-labor-force seniors are at a decade high as a share of the population, this fact cannot be held responsible for the massive gulf between unemployment figures as reported and unemployment figures counting those who’ve left the labor force.

I could release an “age-adjusted” chart, but there are two problems with this. One, the difference between the line I posted earlier this week and the one I’d post now is almost nothing. Two, we cannot assume that every senior in today’s world is out of the labor force by choice. Given that a higher percentage of seniors than ever before are working, many of those not working may want work just as much as their younger counterparts do.

Were the gulf even a full percentage point, let alone two or three, it would be worth adjusting the rate for age. But in the absence of any major shift of seniors out of the labor force as a percentage of the overall population, I’m inclined to stand by my original chart and the rate of 13.5%. 13% is probably a slightly safer figure if you really want to wring your hands and mitigate. But that’s not enough of a shift to indicate any notable change in the trendlines for real unemployment.
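For readers wondering where a figure like 13.5% can come from, here is one plausible sketch of a participation-adjusted unemployment rate: hold labor force participation at a pre-recession baseline and count the resulting “missing” workers as unemployed. This is a sketch of the general approach, not necessarily the exact method behind my charts, and every number in it is hypothetical:

```python
# A sketch (hypothetical numbers throughout) of a participation-adjusted
# unemployment rate: hold participation at a pre-recession baseline and
# treat the implied "missing" workers as unemployed.
baseline_participation = 0.661    # assumed pre-recession participation rate
population_16plus = 243_000_000   # civilian non-institutional population
labor_force = 155_000_000         # current labor force
unemployed = 12_800_000           # officially unemployed

# How big the labor force "should" be at baseline participation,
# and how many workers have gone missing from it.
implied_labor_force = baseline_participation * population_16plus
missing_workers = max(0.0, implied_labor_force - labor_force)

official_rate = unemployed / labor_force * 100
adjusted_rate = (unemployed + missing_workers) / implied_labor_force * 100
print(f"official: {official_rate:.1f}%, adjusted: {adjusted_rate:.1f}%")
```

An age adjustment would simply subtract the retirement-driven share from the missing workers; as the chart above suggests, that share barely moves the line.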

And as a housekeeping note, this post is inaugurating my new “It’s the Stupid Economy” tag for the blog. I have a feeling I’m going to be tracking a lot of these things as I do more thinking about this in the coming months.
