The Immersion Fallacy

The rain was coming down in torrents.

A hurricane had come ashore in South Carolina. And now the entire state was getting drenched. Including the hilly Upstate region.

This development was inconvenient enough. But a big-time college football matchup was set to be played Upstate, featuring the Notre Dame Fighting Irish and the Clemson Tigers.

Both teams were undefeated going into the matchup. The game was slated for a primetime kickoff slot, with the promise of a national TV audience.

A hurricane was not going to disrupt proceedings.

And so, the pageantry of the weekend went on. Fans rolled into town, and so did ESPN’s College Gameday.

The premier college football preview show set up a stage in the middle of Clemson University’s campus. And despite the rain and wind, the show went on as planned, with hosts bantering from behind a desk.

I was watching at home, and things didn’t look so bad at first. The canopy over the stage and the protective gear over the cameras likely had something to do with that.

But then, I saw the crowd behind the stage. Throngs of college students appeared to be nearly blinded by the windswept rain. And the ground they were standing on had become a boggy mess.

Suddenly, the cameras zoomed in on one student with a particularly youthful face. His shoes were off, and his pants were cuffed below the knee, Tom Sawyer style.

With the eyes of America on him, the student took off his shirt. Then he took a step back and leaped, faceplanting into a pile of mud.

The crowd went wild. But as I watched from my couch, I had a different reaction.

Horror.


Many of us have acute fears. Stimuli that cause us to panic, shut down and lose function.

Mine is mud.

The slippery byproduct of water and dirt repulses me like nothing else. I fear slipping on it, getting it on my clothes, or tracking it into my home or my vehicle.

This aversion is quite on brand for me. I am a neat freak. And nothing is as stubbornly messy as mud.

But the lengths I go to when avoiding this substance are somewhat extreme.

I’ve turned down opportunities to cruise in ATVs before, for fear of getting mud on my clothes. I’ve avoided hiking or running on dirt trails for weeks after a rain event, just to keep my shoes clean. And back when I was playing baseball as a kid, I was too frightened to slide on a wet field.

I realize this behavior is totally irrational. Getting dirty is not the end of the world. And there are plenty of proven ways to clean the mess off.

Yet, I can’t help myself.

I’m not alone in this regard. While I haven’t met anyone who avoids mud the way I do, I know plenty of people who have gone to irrational lengths to avoid their own fears.

But that’s starting to change.

There is an abundance of services out there to reform the spooked. Services that dub themselves immersion therapy.

The premise is straightforward. Immersing someone in the stimuli they fear can reduce their anxiety. It can show the worst outcomes to be unlikely or nonexistent. And the process can break the spell of fear.

And so, many have covered themselves in insects, touched the scaly skin of snakes, or listened to the boom of fireworks. They’ve done all this to face their fears head on.

Perhaps this is what that college student at Clemson University was doing when he bellyflopped into a mud pit on national television.

But I wasn’t about to follow his lead.

I knew better.


What is a fear anyway?

Is it an aversion we’ve picked up through experience? Or something we’re born with?

Many point to the first explanation. They see our origins as blank slates, onto which societal stressors – such as xenophobia or bullying – and individualized stressful experiences – such as dog bites or near-drownings – are projected.

This theory posits that fears are accumulated, rather than innate. Which makes it possible to unburden these fears through methods like immersion therapy.

It’s a neat theory. A tidy one. And one that might be too good to be true.

Indeed, I’ve come to believe that the second explanation for fear is more accurate. I assert that fear is part of our DNA from Day One.

There’s plenty of evidence behind this assertion. Infants can curl their bodies in a protective stance long before they can crawl, talk, or understand language. And many physical changes to human genetic code over millennia have helped shield against lethal dangers.

Fear is an element of our survival. One that keeps us from becoming an unwitting snack for a lion or from wandering aimlessly off a cliff’s edge. It’s an inextricable part of us.

Even the most society-oriented fears can fall under this definition. It’s true that no one is born racist. But the fear of abandonment from the pack is most certainly innate.

Redirecting the source of that existential fear from the pack to the outsider is a predictable shift. Why let the fear become a self-fulfilling prophecy when it can be used to keep our pack’s competitors at bay?

We gain security and acceptance in this process, without experiencing any of the pain of our actions. It’s a no-brainer, on the most primal of levels.

Yes, fear is an inextricable part of us. It always has been. And it always will be.


So, what does this all mean for immersion therapy?

Is it a farce? A sham? A load of nonsense?

Yes and no.

It’s undeniable that immersion therapy has some positive outcomes. Those who are terrified of spiders, or heights, or whatever else can find equilibrium around the same stimuli. They can live life more freely and fully.

These are all good outcomes. Desired outcomes, really.

But these fears have not been cured in the process. Arachnophobes remain arachnophobes, even if they no longer turn ghostly pale in the presence of spiders. Acrophobes are still, at their core, apprehensive of heights.

No, what immersion therapy has actually done is reframe the fear. Instead of reacting to the previously distressing stimuli, the brain has been trained to ignore them. The reaction that the phobic experiences – the one visible to others – is gone.

Yet, the fear itself remains in some far corner of the phobic’s brain.

This is not a trivial distinction.

For our society has consistently misrepresented fear. We’ve determined that it’s something that can be rooted out. That must be rooted out.

And so, we’ve waged multifaceted campaigns to create a world where racist, homophobic, and anti-faith impulses cease to exist. We conduct wide-scale immersion therapy to promote a world that is more equal in terms of acceptance and opportunities.

We make progress. We inch closer to the finish line. And then the ugliness rushes right back in.

This whole process is demoralizing for those crusading against the darkness of fear. They can feel like Sisyphus – pushing a boulder up a hill, only to see it tumble back down in the end.

But perhaps a shift in perspective can get them off this hamster wheel of misery.

Perhaps those crusaders can abandon their pursuit of the root cause of fear. And perhaps they can focus on redirecting its manifestations instead.

This means eliminating racist, homophobic, or anti-faith actions – all while acknowledging that the underlying Fear of the Other will remain.

The crusaders can still turn to immersion as their preferred tactic. But they must recognize that their efforts simply constitute a rewiring, not a demolition. The ignition coil can be manipulated, but the engine remains in place.

Such a compromise might be a hard pill to swallow, particularly for those with the purest of ideals. But it’s a necessary one. Particularly if we want to attain the objectives we strive for.

The immersion fallacy is real. We must govern ourselves accordingly.

Embedded Insecurities

It’s a three-story building.

Tan brick facades. Double-hung windows. A distinctly 1920s look.

On each of the edifice’s four sides, a set of doors provides entry. Above them, four Roman columns support a structure holding a modest clock.

The building is quaint. Not majestic.

And yet, it’s of great historical importance.

This building, you see, is the Old Collin County Courthouse. It sits in the center of a leafy square in downtown McKinney, Texas. A bevy of shops and restaurants surround the square in all directions.

Long before Dallas’ suburban sprawl overtook McKinney, this was the heart of Collin County. It’s where residents would come to conduct business and gather supplies. It was a gathering place.

That spirit is still alive in the shops and restaurants surrounding the square – a refreshing oasis from the strip malls so prevalent in greater Dallas.

It’s still alive 32 miles west in Denton, where another set of shops and restaurants surround the Old Denton County Courthouse. And it’s still alive 28 miles west of there in Decatur, where some modest establishments buttress the Wise County Courthouse.

In fact, a similar scene can be found in many of Texas’ 254 county seats. Nearly every town has its county courthouse – or former courthouse – on a square, with shops and eateries around it.

The same can be said for municipalities outside the Lone Star State. When I visited the town in rural Missouri where my father was born, it had the same setup as McKinney. So too did towns I’ve frequented in North Carolina, Nevada and Vermont over the years.

This is no coincidence.

The courthouse square setup is an American staple. And while its utility might have faded in the era of 15-gallon gas tanks and Walmart supercenters, its importance most certainly has not.


Did you hear?

Those three words represented the start of seemingly every conversation when I was in high school.

Gossip was the name of the game, and we all fancied ourselves to be Michael Jordan.

It would be harsh to fault us for these delusions. Adolescence is a near-impossible assignment. A quest to find the answers within while complying with the abstract ideals of coolness.

It’s a confounding mission. One that could demoralize and distress even the strongest-willed of teenagers.

And we were no match for it.

So, we shifted our gaze. We galvanized around the stumbles our peers made on the journey. The land mines that we could avoid, now that others had triggered the trip wires.

We gossiped.

Most of this gossip made its way through the halls of my high school the old-fashioned way. Someone witnessed something – or claimed to – and shared it with the group.

But a nascent technology called social media had also found our cohort. And suddenly, some of the fodder for gossip was originating online.

Things, of course, are far different these days. Online rumors are now the norm, not the exception. And social media-based discourse has gotten so toxic that it’s spawned a new name – cyber harassment.

This has led to severe effects for modern-day adolescents. And those effects have led some states to consider bans on social media for minors.

I understand where this movement is coming from. Several young people have taken their own lives because of cyber harassment. It’s tragic, and I feel for their families and friends.

But I do wonder if the proposed bans will have the desired effect. For the root cause of the toxicity afflicting adolescent culture is not social media – or even the Internet itself.

It’s gossip.

And gossip is firmly rooted in our society.


Back to that county courthouse in McKinney, Texas for a moment.

The building sits mostly vacant now. Courtrooms and county offices reside in an expansive building five miles away.

The modern courthouse is surrounded by parking lots and a highway. A supermarket and several other stores sit a couple exits down the highway, along with a movie theater and an assortment of restaurants.

The highway is now the central corridor for McKinney residents. Anyone looking to pick up supplies, take in mass entertainment, or conduct official business sets their vehicle’s GPS for U.S. 75. The shops and restaurants around the old courthouse – while still frequented – are off the beaten path.

This modern arrangement has its advantages. Residents can gather supplies from store shelves, pay for them at a self-checkout kiosk, and load them into their car in the parking lot – all without making eye contact with another human being. Efficiency reigns supreme.

But at what cost?

You see, back when the highway didn’t exist and the courthouse was based downtown, the luxury of secluded shopping simply did not exist.

Anyone heading for supplies was going to have to head to the courthouse square. They were going to have to engage with the store clerk, even if just to hand over payment. They were going to see other locals milling about. And those other locals were going to see them.

Any misstep in this adventure would be harshly scrutinized.

Whispers would softly spread around town. And judgmental stares would brand the afflicted like a hot iron.

Yes, the gossip mill was as much a part of life as maintaining a vocation and putting food on the table. Commerce on the courthouse square took two forms of tender – dollar bills and embedded insecurities.

People measured their success not only by what they had, but how it measured up to others. The fear of inadequacy loomed large.

Treks to the courthouse square offered opportunities to disprove that notion. To put on airs, to act proper, to get a pulse on where one really stood. And hopefully not to be confirmed as a pariah in the process.

These days, that style of commerce has faded. But if we think the associated demands have not, we’re kidding ourselves.

People are still dealing in embedded insecurities. They’re still keeping up with the Joneses and yearning to gain acceptance.

But now, they’re doing all this online. They’re depending on an unsavory place where judgment converges from all angles at warp speed.

Yes, everything from neighborhood forums to social media mom groups to websites like People of Walmart lives in cyberspace 24/7. And all of it turbocharges the courthouse square effect.

McKinney, we have a problem.


How do we solve the puzzle? How do we reconcile our desire for validation with the risks of critique-based abuse?

These questions have dogged us for a couple decades, if not longer.

Some have proposed attacking the riddle’s central premise. By ridding ourselves of embedded insecurities, by affirming that we are adequate and no one else’s perceptions are worth a damn, we can sidestep the strife entirely and live happily ever after. Or so they say.

It’s an appealing concept. But not a realistic one.

You see, embedded insecurities are not a bug of our society. They’re a feature of our existence. They’re hard-wired into our brains for a reason.

Like just about any other species, we rely on a group for security. Without the power of the pack, we are so much more vulnerable to so many threats.

We stand little chance of warding off these threats time after time on our own. Fight or flight only gets us so far.

So, we find sanctuary in numbers. We conform to shared rules and make ourselves presentable to masses. All while harboring anxiety about triggers for rejection.

Drowning out this impulse won’t cure us of its effects. It will only accentuate them.

No, the key is to channel those embedded insecurities. To balance those inevitable questions of adequacy with constructive answers. To openly engage and to grow from the interactions.

And to do all this away from cyberspace. Far afield from the trolls, keyboard warriors, and endless scrolls that do us no favors.

It’s time to engage with each other in public again. Human to human, with our five senses as a guide.

It’s time to pick up on cues – both verbal and nonverbal – and to adapt our behavior accordingly. To be honest without being cruel. To find a common denominator of acceptance, even with those we disagree with.

The courthouse square might no longer be the physical center of society. But its spirit still can be.

Let’s make it so.

Gardens to Tend

Swish!

It was the telltale sign of a good shot in basketball. The audible marker of an orange ball grazing the nylon strings of a net.

Growing up, I’d watch a fair number of basketball games on television with my friends. Michael Jordan or Kobe Bryant would launch that orange ball into the stratosphere. And as gravity brought it closer to the basket, I would wait for that sound.

Swish meant success. And success is what kept me watching.

My friends would often focus on other aspects of the game – the crossover dribbles, the thunderous slam dunks, the gaggle of celebrities sitting courtside. But I was fixated on that swish.

It sounded cathartic. It provided more context than the often-blurred TV picture could.

And it was something of a novelty.

You see, I didn’t just watch basketball with my friends. I sometimes played it with them as well.

These informal games or shootarounds often took place on outdoor courts in local parks. We brought the ball. The park had the rest.

Well, some of the rest.

You see, the park courts wouldn’t be confused for the glamorous ones Jordan and Bryant dominated on television. Instead of hardwood, there was blacktop. And instead of nets, there was…nothing.

There’d be no swish sound to indicate a made basket. There might be the clank of the rim or the thud of the backboard if a ball didn’t make its way through the hoop cleanly. But on the purest of shots, you’d hear nary a thing.

This bothered me. So, one day, I asked my father why the nets were missing.

I think those hoops used to have them, he replied. But then someone stole them. And that will keep happening if the city puts new ones up. So, they’re leaving them be.

I was floored.

I’d never considered that public basketball courts could be anything but a net benefit. I’d never contemplated how others could use that public access for bad intent.

But now the blindfold was off. And there was no going back.


Several years ago, Malcolm Gladwell took aim at country clubs.

The acclaimed author and podcast host had traveled to Los Angeles for business. But when he ventured out for a morning run, he found himself relegated to a narrow dirt path wedged between a busy boulevard and the high fences of a golf club.

Those fences infuriated Gladwell. So, he made a podcast episode about them, and lambasted what they represented.

In the episode, Gladwell questioned why a group of golfers got exclusive access to the outdoors in a car-dominant city. He pointed out that in Canada and Scotland, golf course grounds were open to the public on certain days, or in certain parts of the year.

Surely, America could follow this pattern too, Gladwell argued.

This reasoning appealed to me. For I’ve long detested country club culture.

The exclusivity. The snobby attitudes. The idea of paying dues to get outdoor access.

None of it jibed with my own experience.

When I was growing up, I swam in the ocean at public beaches. I hiked through public nature preserves. And I played those aforementioned basketball games in the park with my friends. All without paying a dime.

These adventures were formative in my life. And I felt others deserved similar opportunities.

But I realize now that things were never quite so simple.

I might have moseyed about in my youth, enjoying myself for free. But there were others who looked after the spaces I frequented. A full roster of folks who kept those locales tidy and kept me safe.

There were workers at the park who mowed the grass and cleared the trash. There were lifeguards at the beach who saved swimmers from drowning. There were forest rangers who ensured the trails at the nature preserve were safe for hiking.

These officials took their jobs seriously, and they acquitted themselves well. But one must wonder if they felt as if they were rolling a boulder up a hill.

You see, the open nature of these parks, preserves, and beaches meant their work was quickly undone. Even after their garden was tended, a new crowd would converge on the space to lay waste to it once again.

It was as if they were fastening a fresh basketball net to the rim each day, with full knowledge that it would be gone by evening. No amount of salary or plaudits makes this work rewarding. And in a vacuum, the arrangement itself hardly seems to make much sense.

So, no, the answer to the country club problem isn’t as simple as Gladwell made it. But a better solution is out there.

We just need to shift our perspective to find it.


A few years ago, I captained a neighborhood kickball team.

The team didn’t play all that well. But where we played was anything but.

All the games in our league took place at a nearby sports complex. The fields were well-designed and meticulously maintained. They were far better than some of the fields I played high school baseball on.

And the scene outside the lines was no less impressive. Knowledgeable referees oversaw the kickball contests. And representatives from the recreation department kept an eye on the proceedings, resolving any situations that might arise.

This all floored me at first.

For we were technically playing ball in a city park. A public space, open to all.

And yet, none of the associated chaos had found its way here. It was all so…organized.

Perhaps this had something to do with where we were. Namely, a small suburb outside of Dallas. There was plenty of space to be found all over town, and thus little impetus to run this complex into ruin.

But I think the orderliness could also be attributed to the fine print of the kickball league. All teams had to pay a fee to register. (The community manager for the neighborhood I live in covered those costs for our squad.)

On top of that, all property owners in this suburb paid taxes and fees – with that money supporting both the recreation department and the sports complex.

These costs, while not exorbitant, sent a powerful message.

Yes, you can play ball here. No tall fences will keep you out. But if you think you can desecrate these fields on our dime, you’ve got another thing coming.

This suburb was not letting anyone shield their eyes from those who tend the garden. That process was instead shared, allowing order to rise from chaos.

Perhaps this is the model Malcolm Gladwell is looking for. Perhaps this is the scenario my younger self would have thrived in.

There’s only one way to know for sure.

So, let’s stop treating public spaces like entitlements. And let’s start treating them as gardens to tend instead. Let’s mind the space as if it were our own. And let’s respect those tasked with maintaining it.

A little shift can go a long way. Let’s forge that path.

The Custer Bias

In June of 1876, a regiment from the 7th Cavalry of the United States faced a conundrum.

Scouts had found a large encampment of Native Americans along the Little Bighorn River in the Montana Territory. This tribal encampment violated laws confining Native Americans to reservations. And the 7th Cavalry’s mission was to force them to comply.

Relations between native tribes and United States Army installations in the region were not good. Several skirmishes had broken out between the two already, and the 7th Cavalry had every reason to believe this encampment would be hostile to their demands.

And so, the regiment’s leader – Lieutenant Colonel George Custer – plotted an attack. The Cavalry would be split into three brigades, encircling the encampment. The troops would trap the natives into compliance.

It was a bold strategy, but also a risky one. Custer had no idea how many warriors might be among the tribe, or how those ranks compared with his own. He also knew far less about the terrain his regiment was on than the natives did.

By pressing ahead, Custer was taking a chance. And anyone who’s taken an American History class knows the rest.

The first brigade got bogged down by Lakota Sioux and Cheyenne warriors just as Custer’s brigade was trying to flank the encampment. Native warriors spotted Custer’s men and attacked them with superior numbers.

The brigade had nowhere to retreat to, and not enough firepower to press on. It was systematically cut down. Every member of its five companies – including Custer – was killed.

The Battle of Little Bighorn was effectively over. But the legend of Custer’s folly was just beginning.

For generations, Americans would hear of Custer’s Last Stand. It was the ultimate cautionary tale of risk gone wrong.


Why would Lieutenant Colonel George Custer attempt such a bold maneuver? Why would he so unabashedly put the lives of his men in danger?

Military historians have been trying to answer this question for decades. For Custer wasn’t exactly a novice when he reported to the Montana Territory. He was an accomplished military leader who had led Union Army brigades in the American Civil War.

The volunteers under Custer had repelled Confederate forces at just about every turn, including the Battle of Gettysburg in July 1863. He knew what he was doing.

Or did he?

You see, at Gettysburg, Custer’s volunteers faced off with a Confederate cavalry force twice their size. Custer’s brigade was somewhat detached from the heart of the Union Army, and the Confederate cavalry caught him by surprise.

Undeterred, Custer led counterattack after counterattack with his own cavalry. The vicious fighting stalled the Confederate brigades, effectively preventing them from rendezvousing with other columns of fighters.

Once the cannon fire of the main battle could be heard in the distance, the Confederate cavalry retreated. Custer had won.

Custer had taken a massive risk exposing his cavalry so extensively. The chances of his men getting overrun were as good as the chances of them prevailing.

It was effectively a coin flip. But the coin came up in Custer’s favor. The risk paid off.

This certainly gave Custer confidence. Confidence to assume even more risk.

That attribute was what allowed him to rise to the rank of Lieutenant Colonel, and to get posted to the Montana Territory. And it’s likely what made his daring plan at Little Bighorn seem anything but.

Live by the sword. Die by the sword.

Call it The Custer Bias.


Lieutenant Colonel George Custer met his demise nearly 150 years ago. So, why discuss his travails in such detail?

Well, I believe they’re just as pertinent in this era as any other.

We are a far different country now. A global superpower, fueled by big business.

But today’s industrial leaders are just as happy taking risks as Custer was. If not more so.

This sustained boldness is perhaps most notable in big tech, where companies can shift from hypergrowth to cost cutting on a dime. But it’s also present in manufacturing, thanks to the rise of just-in-time inventory processes. It’s present in retail, where large brands venture into new product lines or sales channels time and again. And it’s present in dozens of other industries.

There are many reasons why this behavior is so present among corporate leadership. Many have pointed out that investors don’t stand still. And neither do competitors.

That’s all accurate. But the biggest reason leaders lean into risk-taking? It’s The Custer Bias.

Think about it. Just about any corporate leader has already taken a risk to get where they’re at. Maybe they were an entrepreneur, who defied the odds to found the business they now helm. Or perhaps they rose through the corporate ranks, trying something bold to fuel their breakthrough.

With those bold moves in their rearview, these corporate leaders are keen on rolling the dice once more. For all the market forces out there – consumers, competitors, investors – are yearning for them to push the envelope.

Of course, it could all go sour. And if it does, they could lose everything they’ve attained.

Such an outcome would sting. But not as much as the status quo does for others.


It’s easy to idolize the maverick leaders. To deify a Steve Jobs or a Howard Schultz or a Reed Hastings.

But for each one of them, there’s someone who failed at their mission. Someone who took a risk to create a tech company, or revitalize a small coffee shop, or disrupt the entertainment industry. And someone whose risk didn’t pay off.

These failures lurk in the shadows. You can’t prove a negative, and we have little patience for tales of what could have been.

And so, we comb through the success stories. We search for patterns and commonalities – all while forgetting about the inherent randomness.

Yes, success can seem inevitable if you weed out the duds. And that delusion can make risk-seeking appear less dangerous. Safe, even.

This is undoubtedly a tragedy. But perhaps not in the way you might think.

Those burned by a risk gone bad will surely suffer – regardless of whether it’s the first or fifteenth risk they’ve taken. But those who avoid risk will suffer even more.

For there is no safe passage for the risk averse these days. Those who play it safe will still find themselves under the direction of renegades.

They’ll find themselves reporting to dice rollers afflicted with The Custer Bias. In this modern era, how could they not?

Yes, it is all too possible to stay away from the fire and still get burned. But what if such a fate weren’t so inevitable?

It’s time to turn the tables on The Custer Bias. To be less cavalier with the risks we take. To pay more credence to the odds of chance. And to avert our eyes from the shine of favorable outcomes.

Such actions run counter to our nature. But they’re essential to our survival.

So, let’s stop following the path of Lieutenant Colonel George Custer. Our story deserves a better ending.

The Cost of Success

On the morning of August 7, 2021, 88 female distance runners gathered in Sapporo, Japan. They were set to embark on a 26.2-mile journey for Olympic gold.

The field was littered with accomplished athletes – record holders and outright marathon winners who hailed from all corners of the globe. Among them was a 27-year-old Wisconsin native named Molly Seidel.

Seidel hadn’t racked up any marathon wins or set any records at that distance before. In fact, she’d only raced in two marathons before jetting off to Japan.

She’d done well enough in one of those races – the United States Olympic Trials Marathon – to earn her spot at the starting line. But few were expecting much from her as she took on the world’s best.

The weather in Sapporo was brutal that morning. Bright sun baked the streets, and high humidity made the air feel heavy.

The conditions evened the playing field somewhat. So, as the race reached its final few miles, there was no breakaway leader. The alpha pack remained largely intact.

The expected contenders were in that pack – runners from Kenya and Ethiopia. But so was Molly Seidel.

The TV commentators looked on with astonishment. Would Seidel hold on? Or would any number of factors – the pressure, the conditions, the fatigue – cause her to fade?

Less than twenty minutes later, the answer emerged.

A Kenyan runner crossed the finish line first. Another Kenyan was the second across the line.

But the third runner? That was Molly Seidel.

Seidel had secured a Bronze medal – only the United States’ third ever medal in the women’s Olympic marathon. And she’d done it in style – finishing a mere 26 seconds behind the gold medalist.

With one incredible race, Seidel had become an American hero. Her post-race interview – where she told her family back home to Have a beer for me – went viral. Her face was on TV screens from coast to coast. Her following on social media and the workout app Strava grew exponentially.

It was an incredible story. But one that would carry a heavy price.


What do we do after an accomplishment?

It seems like a silly question to even ask. For in American society, there is but one answer: Accomplish more.

Successful entrepreneurs look to capitalize on the next big idea. Oscar winning actors yearn to tackle the next big role. Musicians seek to launch the next big album.

And athletes seek the next big competition.

I know this as much as anyone.

As regular readers know, I’ve taken up competitive distance running in recent years.

I’ve done this for many reasons, including fun and fitness. But I’ve also yearned to push my limits.

I had this objective in mind when I signed up for my first half marathon. I’d never raced anywhere close to that distance before, and I was more than a bit apprehensive. But I trained diligently and set an aggressive goal for my finish time.

As I made my way into the starting corral, I was still unsure if I’d hit my goal time. But 13.1 miles later, I looked up at the clock and found I’d beaten it by 10 minutes.

I was elated, but I didn’t celebrate for long. By the end of the day, my focus had turned to my next half marathon, where I aimed to post an even better time.

I did just that, lowering my personal best by nearly three minutes. So, once again, I set my sights on an even better performance in my next race.

I attained that as well. And I was on my way to tackling even loftier goals when injuries got in the way.

That broke the spell. With my running future suddenly murky, I was left to ponder what was already behind me. What I’d attained before and might never accomplish again.

This swing from highs to lows was brutal. It nearly destroyed me.

But it wasn’t all that unique. Many distance runners must contend with it. Including Olympic bronze medalists.

Molly Seidel followed up her podium performance in Sapporo with a fourth-place finish in the 2021 New York City Marathon. She set a personal best in that race.

Seidel was on her way to a similar performance in the 2022 Boston Marathon when she injured her hip. She had to bow out of the race 16 miles in.

Suddenly, the next goal wasn’t right in front of Seidel. There were no personal bests to chase, no marathons to win in her immediate future.

Instead, an arduous rehab awaited Seidel. Along with the real possibility that her best races might be behind her.

But Seidel didn’t have the luxury of coming to terms with this in private, as I had. She was a professional runner at this point, with sponsors to please and a livelihood to maintain. Plus, she had millions of runners across America following her every move.

Each workout she posted on Strava would be scrutinized. Anything she said on Instagram would be commented on.

And if she didn’t post anything to those places, her followers would notice that too.

The expectations were sky-high. There was no room to be human.

This all took its toll on Seidel. So, she started speaking out about the mental challenges she was facing. And she eventually took some time away from the sport to reset.

It wasn’t a universally popular decision. But it was the right one.

Molly Seidel found herself in an impossible situation. And she did what she needed to make it manageable.


As I write this, another Olympic Games is in full swing.

There have been plenty of memorable performances. And a few surprises on the level of Seidel’s bronze medal run in Sapporo.

But behind all the glamour and athletic glory, there’s been a steady conversation going on. An open discussion about what these celebrated athletes must contend with.

You see, most athletes at the Olympic Games are not set up to capitalize on their success. The International Olympic Committee does not generally pay medal bonuses, and most national delegations only pay a modest reward to their decorated athletes.

These are the remnants of a system formed by elitist 19th-century aristocrats obsessed with the spirit of amateurism. It was an impractical system then, and it’s no less impractical today.

(The fact that the rapper Flavor Flav is financially supporting the United States Women’s Water Polo Team is both noble and absurd.)

And yet, the system remains.

What Olympic athletes don’t get in money, they get in attention. Over the course of two weeks, they have the eyes of the world upon them. Literally.

It’s a spotlight many would crave, an opportunity wholly worth seizing. But it only comes around every four years.

Add it all up, and you have accomplished athletes gaining massive followings overnight, but without a corresponding gain in dollars. They’re stuck in the purgatory of notoriety – carrying all the pressures of fame without enjoying the spoils of it.

It’s no wonder that these athletes are forced to chase the next Olympic cycle and the next world record. Their relevance relies on it. Their followers demand it. Their finances might depend on it too – if they’re lucky enough to amass corporate sponsors.

And it’s no wonder that so many of these champion athletes – Caeleb Dressel, Allyson Felix, Simone Biles, and others – have nearly broken under these pressures. Much like Molly Seidel, they’ve found themselves saddled with the impossible.

We seem to have reached an inflection point. We can no longer hide behind the myths of athletic heroics carrying the day. There’s no denying the humanity of the athletes who captivate and inspire us. Not anymore.

But it’s what we do with this moment that matters.

Will we commit to giving talented athletes more than our attention? Will we provide support in all facets – from financial to medical to emotional? Will we offer up some grace if their journeys take a left turn, or if they feel compelled to step back?

Will we be better than we have been?

There’s only one real answer. Only one response that will stand the test of time. Only one path that will stay on the right side of the moral boundary.

Let’s make sure it’s the one we choose.

Success needn’t come at a prohibitive cost – whether it’s found on the athletic field or beyond its boundaries.

It’s time we make it right.

When We’re Old

It felt like I’d been hit by a ton of bricks.

Muscles ached. Joints creaked. Pain proliferated.

What had I done to endure this? Hike a mountain? Lift heavy boxes? Plunge a shovel into the dirt?

Nothing of the sort. I’d simply slept in my own bed. And now I was waking up wrecked.

This has become my new reality. I’m getting older. And the chinks in my armor are starting to show.

Some days, I might feel sore all over for no apparent reason. Other days, I’ll wear down faster than I used to. Still other days, it’ll take me longer to remember things I once recalled instantly.

And on the worst days, all three outcomes converge upon me.

These disruptions are still relatively mild – more inconveniences than anything. I’m still relatively young, and I remain fiercely independent.

Still, they offer a dire warning. For aging only goes one way, and I’ve still got plenty of runway left for it to do its worst.

It will only get tougher to navigate the obstacles in my path going forward. And the cost of failure is sure to get higher.


When I was young, I spent a lot of time with my grandfather.

I would read children’s books with him. I’d build model train sets with him. And occasionally I’d steal his glasses and scamper off.

I’ve written a bunch about my grandfather – my mother’s father. The child of Depression-era Brooklyn turned World War II veteran turned high school math teacher. He often regaled me with stories from his life. And in the process, he sparked my fascination with narrative.

The reason I shared all this time with my grandfather in my early years was that he was already retired. He volunteered at an art museum now and then, but he mostly helped care for me.

Back then, I didn’t quite grasp how unusual all this was. I didn’t understand that few people even had the option to retire in their mid-50s, still able-bodied and sharp as a tack. I didn’t grasp how rare it was for people to be able to bond with their grandchildren as much as they desired, free of professional or financial obligations.

I did notice my grandfather aging as I grew up. He had a triple bypass when I was 5 years old, and he seemed a bit more fragile after that. Recurring back problems made his posture a bit more hunched as the years went on. Occasionally, he would shuffle instead of walk.

I took it all in stride, to the degree a child could. I knew I’d need to be a bit more patient with my grandfather, and that some physical activities were off the table.

But what I hadn’t considered was what things would have been like if he were still working. Would the slow physical decline have gotten in the way of his job responsibilities? Would he have been forced out of his position? And what would he have done if he had been?

I never had to consider these prospects for him. But I surely will for myself.

It’s now harder than ever to retire at an early age. A rising cost of living and a shrinking safety net make longer career timetables a reality.

And yet, we have little acceptance for the consequences of working into our later years. Particularly the impact of aging.

We cringe when public figures – entertainers, athletes, politicians – stay in their roles too long. And we could hardly be blamed for doing so.

These prominent people can gracefully exit stage left. They’ve accumulated enough trappings of fame to sustain them for decades.

The cards are in their hands. So, when they don’t play them, we’re left wondering why.

But few of us have the same advantages. Our options are few and far between.

So, we’re often stuck hanging onto our professional positions for as long as we can. Even as our body and mind start to fade away. And even as the world tries to cast us off.

It’s terrifying. But it’s true.


Several years ago, I started running competitively.

I was well into adulthood at this point. And years removed from my high school cross-country exploits.

I wasn’t exactly pining for those long-gone days. And I wasn’t masochistic enough to crave the sensation of sore legs, burning lungs, and a sweaty brow.

So, what got me back into racing? The allure of the fountain of youth.

Now, I’m no Ponce de Leon. I realized that there was no backwoods stream in Florida to sustain me forever.

But I believed that leveling up my fitness would help me stave off the debilitations of aging. While my less-active peers would degrade physically over time, my body would operate like an advanced machine.

This theory proved true for a bit. I got into the best shape of my life. And I posted impressive times in distance races over and over.

But then, I broke.

An injury sidelined me. Then a second. And a third.

MRI scans, physical therapy sessions, and doctor’s visits became commonplace. The word surgery went from a frightening concept to reality. Yet, I persevered through it all, determined to get back on track.

Still, I couldn’t shake a feeling. The feeling that something was different.

I was struggling to recover from my workouts, even if they were a shadow of what I once breezed through with ease. I was tweaking muscles as I got up from a chair or stepped out of the shower. And I was waking up sore nearly every day.

Despite my best efforts, it seemed that aging had caught up with me. No amount of exercising would forestall the inevitable.

If anything, my fitness efforts would collide headlong with the rip current of Father Time. I’d need to fight three times as hard just to be a step below where I used to be.

I wouldn’t say I’ve made peace with this outcome as much as I’ve rationalized it. For while running is a passion of mine, it’s not my profession. My mind is what earns me my keep, and it’s shown no signs of decline.

At least not yet.

I know that my cognition will also start to slide someday. That gaps will start to form, that failures will start to mount. I’ll fade into a shell of what I once was by any measurable dimension. I’ll start hearing others referring to me as elderly.

Given the economic realities of this society, there’s a good chance I’ll still be working then. I might desire to ride off into the sunset. But I won’t have the horse to get me there, the way my grandfather did.

I’ll be trapped in a living purgatory. Taking up space in a world that wants me to move along but provides me nowhere to go.

This is the cost of inaction when it comes to aging. Collective denial allows its problems to proliferate. And to crush us all someday.

It’s time to take a different path. To embrace clairvoyance about our future. And to use that perspective to calibrate our present.

This is a big ask. But it’s a critical one.

So, let’s not drop the ball.

We all deserve a soft place to land when we’re old. Let’s make sure we have one.

Art and Science

The two Alka-Seltzer tablets fell out of my hand, landing in a glass of water.

A subtle hissing sound rose from the glass. The circular tablets disintegrated into a fine powder as the water transformed into tiny bubbles.

It was like the homemade volcano model I showed off to my parents and teachers back in second grade.

Only I wasn’t 8 years old anymore, looking for an A. I was an adult, looking to ease the burning sensation in my throat.

And that would demand a Part 2 of this experiment. It would require me to ingest the contents of this bubbling glass, so that they could neutralize the acid in my throat.

So, without hesitation, I gulped down the concoction. And within a minute or two, my discomfort dissipated.

This was the power of modern medicine. A vivid testament to the wonders of science.

But it might not have been possible without art.

You see, this whole Alka-Seltzer setup is unique. Most other medical remedies come pre-prepared, making them far simpler to consume.

This posed a problem when Alka-Seltzer first hit the market. The extra work of dropping tablets into full water glasses threatened to scare away consumers. And without robust sales, the product line would be doomed.

So, the makers of Alka-Seltzer turned to advertising. Marketers invented the jingle Plop Plop, Fizz Fizz. Oh, what a relief it is.

There was precious little science behind this rhyme. It was mostly artistic expression. But it worked wonders.

Consumers added Alka-Seltzer to their cabinets, followed the instructions from the jingle, and saw the desired results. This pattern continued for decades, until I was the one dropping tablets into a water glass on my kitchen counter.

Art and science had come together. And we all reaped the benefits.


There’s a poignant scene in the film The Dark Knight.

Batman is interrogating The Joker at the Gotham Police Headquarters, and the Caped Crusader asks why the sociopathic villain wants to kill him.

I don’t want to kill you, The Joker replies. You complete me.

This exchange encapsulates the relationship between art and science. They find themselves in the same venue time and again – and at tension with each other.

Take cooking. Many are drawn to the art of it, and TV shows – from Diners, Drive-Ins and Dives to The Bear – have only furthered that perception.

But there’s a heavy dose of science in cooking as well. Ingredients meld, char, evaporate, or congeal, resulting in palatable textures and flavorings.

The clinical precision of these changes has helped countless chefs notate their recipes and share them with the masses. And the members of those masses have been able to whip up reliable meals as a result.

Yet, this scientific contribution to cooking is all but forgotten by most. It’s constantly overshadowed by the glitz that often comes with meal preparation.

Whether it be hibachi tableside acrobatics, elaborately plated desserts, or surprise menu specials at five-star restaurants, people go wild for the art of cooking.

It’s flashy. It’s notable.

But it’s only part of the picture.


I am putting these words on the page. And you, dear reader, are taking them in.

This is the process of writing. Of sharing testimony through the written word.

What should we make of this process? Is it an art or a science?

Many would lean heavily toward art. The trope of authors crafting novels in secluded cabins remains prevalent. The Michelangelo of the Moleskine moniker still sticks.

Yet, if you were to ask an author about their process, you’d likely get a measured response. One filled with rules, patterns, time management hacks, and much more.

Many writers, as it turns out, don’t sit around waiting for inspiration to strike. They take a scientific approach to their craft, mixing artistic talent in along the way.

I know this, because I am one of them.

As I write this, Ember Trace has been running for close to a decade. For more than 450 weeks, I’ve shared a fresh article with you, dear reader.

This venture has been my passion, and my pleasure. But make no mistake, it’s entailed plenty of work.

Such efforts cannot be chalked up solely to artistic expression. On finding a dose of inspiration and putting it on the page.

No, a great deal of the credit goes to science. On uncovering what works best for topic generation, article length, and literary style. On determining which days and times work best to type away on my computer. And on replicating that successful formula, over and over.

There’s certainly some art involved. But my work is built on a foundation of science.

As such, I bristle a bit when I’m labeled a creative. And I roll my eyes when others say they’re too left-brained to do what I do.

It’s not that they cast me on the wrong side of the divide. It’s that they put me on one side to begin with.

Writing is not art or science. It’s both.


I could keep going. I could bring up more examples of disciplines we consider to be strictly art or science. And I could share how we’re mistaken.

But I’m not going to do that. Your attention is much appreciated, dear reader. And it’s worthy of something far better than an endless ramble.

I will pose a question though. Why are we so hesitant to accept reality?

It seems we can’t wrap our brains around the idea that art and science can co-exist. It’s too nebulous, too uncomfortable.

So, we focus on the inherent tension between them, and we seek to resolve it definitively. Even as such a quest is doomed to futility.

It’s high time we take a different approach. It’s time we look at that tension as an opportunity, rather than a threat.

Indeed, if we can manage the intersection between art and science in cooking, writing, and other disciplines, we can differentiate ourselves. We can get one step closer to mastery of those crafts. And we can stay one step ahead of whatever innovations yearn to commoditize them.

Leveraging the tension can do us a world of good. But only if we embrace the mission.

Art and science might be strange neighbors. But they belong together.

Let’s put the wedge away.

Youth and Experience

The ball wasn’t going where I wanted it to.

Sometimes it would slice. Sometimes it would hook. Sometimes it would skid across the grass.

With each swing, my frustration mounted. And a sense of dread started to sink in.

You see, I had come to this driving range near Fort Worth with good intentions.

I was unemployed at the time, residing in an extended-stay hotel, and applying to jobs left and right. But none of it was going well.

No hiring managers were willing to take a chance on a career-changer with no experience in their industry. Few even offered me an interview. And all the while, I was burning through my savings to fund my food and lodging.

I needed to get away from it all. To spend an hour or so outdoors, doing something that could clear my head. And spending $20 to hit a bucket of golf balls seemed like a sensible choice.

But now I was kicking myself.

My hand was chapped from gripping the golf club too tightly. My golf pants and polo were drenched in sweat. And my doubts about my golf game threatened to rival those of my employability.

Was I ever going to be able to earn an honest living again? And if I did, would I even be able to live life to the fullest?

If this day was any indication, the answer was no.


It’s been more than a decade since that afternoon on the driving range.

I’m now gainfully employed, and I’ve advanced in my career. I have a true place to call home and tangible financial stability.

At first glance, I have everything the younger me once craved. But looks can be deceiving.

These days, I could go to the driving range just about any time I desire to. The cost is negligible, and the stakes are low.

And yet, I don’t do that. I haven’t for years.

For the joy in that activity has dwindled for me. Just as it has for so many others.

Some of this change is physical. I don’t have the stamina to do as much as I used to. And when I do wear myself down, my body aches for days.

But the shift is also mental. I’ve lost the capability for unbridled glee. And the sensation of letting myself go now feels foreign to me.

For example, there was a time when I loved roller coasters. I would patiently wait in line for hours at the theme park, boldly lock myself into the safety harness, and cheer with vigor through each dip and turn of the track.

I was having the time of my life.

I still want to love roller coasters in this way. And occasionally I do find myself riding one.

But as my body is defying the laws of gravity, my mind is somewhere else. It’s staring down from a distance as I dip and twist and invert.

I’m just not there anymore. Not completely.

This, I believe, is the encapsulation of experience.

Growing long in the tooth can make a person somewhat jaded. It can leave one detached from the thrills of life. It can estrange one from the reckless abandon of innocence.

With those connections severed, the only way to relive such sensations is through one’s own memories.

And so, from my high perch of career and fiscal stability, I look back longingly at my younger self. The one who would venture out to the driving range to clear his head, even if such a trek was to end in futility.

The older me might have the trappings of a successful life. But not the inclination to get the most out of it.


A few weeks after my ill-fated trip to hit golf balls, I got a call back for a job application I’d submitted.

The hiring manager wanted me to come into the office for an interview. I accepted the invite.

The interview ultimately went well. While I wasn’t one to count chickens, I was relatively confident that I’d be offered the job.

So instead of microwaving a pouch of rice back at the extended-stay hotel, I went to a Cajun restaurant for a proper lunch.

Sitting at the bar in my suit and tie with a plate of fried crawfish in front of me, I was hopeful. This was just the start of the pathway to success, I told myself.

I think back on that memory of myself more than I’d like to admit. For that young and scrappy version of me was looking unabashedly at who I am today. And yet, I find myself just as unabashedly staring back.

We’re both staring through the murky portal of time. Each wanting what the other has — and neither knowing it.

Truth be told, we each want to believe that there’s no inherent tradeoff between youth and experience. That gaining one doesn’t necessitate losing the other.

But given the inextricable truth of that tradeoff, we’re each looking to fill a hole in the current version of our life. For one, the substance to sustain the joie de vivre. For the other, the joie de vivre itself.

It’s devastating in a way. Even tragic.

But it’s the reality of my life. And I’m not alone.

Indeed, many of us look longingly at our former glory, just as we once stomped our feet yearning for our future to arrive. If we think hard enough on it, we can each find our own split-screen moment.

But should we? That’s open to debate.

There’s something to be said for leaving the past behind and living in the moment. On recognizing that what’s gone is gone. And on giving it no further mind.

But there’s also value in sustaining those memories. On recognizing the sensations we once had. And on gaining context from those recollections.

Such thinking might not eliminate the tradeoff between youth and experience. But it will provide helpful context in assessing our lives. It will also make us more empathetic and socially aware — which is always a plus.

The key to this, of course, is discernment. We must be able to glance at our youthful past without getting consumed by the memory.

That’s easier said than done. I’m Exhibit A as to how challenging it can be.

But I’m working on it. And I will continue to do so.

I hope I’m not the only one.

On Counterfactuals

My father shuffled the cards and dealt them out.

Face up on the table in front of me were a 6 of Diamonds and an 8 of Spades.

My sister and mother each also had two cards face up in front of them. I took a quick glance at the cards. But then, my father set me straight.

Don’t worry about them, he stated. The goal of this game is to beat the dealer. In this case, me.

I asked him how I might do that.

It’s simple, my father replied. Your cards just need to be closer to 21 than mine, without going over 21.

My father went on to explain the rules of Blackjack.

Both my cards were face up, while one of his was face down. I’d have to add my cards together and determine if my hand was better than the dealer’s.

This was a guessing game as much as it was an exercise in arithmetic. But there still was some skill involved.

For each round, the dealer would ask each player if they’d like more cards to help their cause. If any of the players said Hit me, they’d get another card. If they said I’ll stick, they wouldn’t.

This meant that if I didn’t like my chances, I had ample opportunity to improve them. But I’d need to manage that opportunity artfully.

I looked at my cards again. They totaled 14, which was a far cry from 21. I’d surely need more to win.

So, when my father asked what I wanted to do next, I emphatically said Hit Me. He dealt me a 2 of Hearts, bringing my hand up to 16.

This still seemed too far from 21. So, during the next round, the words Hit Me again left my lips.

I got an 8 of Diamonds.

I had a higher card total than the dealer. But I’d also gotten my hand up to 24.

I’d busted. I’d lost.


I didn’t take my failure all that well.

But as I sat there sulking – as 8-year-olds do – my father took a moment to coach me up.

You don’t need to go ‘Hit Me’ on every turn, he said. Sometimes the math makes that too risky.

Sometimes the best way to win is to stick.

This stunned me.

I had never considered how not doing something could be more impactful than springing into action.

How could I have?

My entire life to that point was defined by motion. I bounced from activity to activity, at school, at home, and everywhere else.

Sure, there was plenty of downtime. Regimented bedtimes in the evening, regular naps in the mid-afternoon, and so on. But I had no recollection of the stillness, as I was unconscious throughout those quiet moments.

I’d never really gotten good at mastering the pause. At seizing the non-event. At embracing the absence of action.

All these years later, I still haven’t excelled in those areas. And I’m not alone.


You can’t prove a negative.

This is a common refrain. You hear it often during Monday Morning Quarterback sessions.

The point is straightforward. Time moves in one direction, and only on one track.

We can ponder what would have happened if we didn’t make a certain move, meet a certain person, or pursue a certain dream. We can muse about how much better or worse we’d be for choosing a different path or encountering a different fate.

But these are just pontifications. We can’t know for sure.

There’s plenty of logic behind this theory. After all, we humans have long been proficient in notating things. As we’ve evolved from stone etchings to silicon computing chips, we’ve kept the thread of recording events alive.

Those data points have proven essential to understanding our world. We recount history so that we might replicate successes and avoid repeating disasters. We keep scientific notations to prove hypotheses and spur innovation. And we look at numeric indicators to help prognosticate what’s to come.

Absent these readings, we have nothing. No data to ground our musings in. No substantive proof of how an alternative path would have played out.

And so, the prevailing wisdom has been to ignore the negatives. To avoid spending energy on what could have been. To proclaim Hit me when the dealer offers another round of cards, over and over.

Yes, away from the Blackjack table, the do-nothing option is often too unproven to even be considered. No wonder we don’t pursue it.

But, at long last, that might finally be changing.


In recent years, a term has garnered some buzz.

Counterfactuals.

This term describes an alternative fact set. Not in the form of lies or half-truths, but of empirical simulations.

Counterfactuals have existed for quite some time. But their use was traditionally limited to certain situations, such as courtroom testimony. (Think of the question from prosecutors in too many Law & Order episodes: And if that hadn’t happened, what would you have done?)

But now, things are changing. Thanks to advances in data science and artificial intelligence, we can take a fresh look at the past. We can change one input and see what the statistical outcomes were likely to have been.

This new-age modeling has changed the game for decision-making.

It’s broadened the scope of possibilities beyond the triumphs and failures of record. It’s helped us to preview occurrences without clouds of doubt. It’s allowed us to experiment free of the shadows of collateral damage.

Yet, this potential still comes with a cost. Namely, the cost of our innocence.

No longer can we be willfully blind to the road not taken. No longer can we shun the outcomes we – or our predecessors – had not experienced firsthand.

Those storylines are now written in probabilities and code. The do-something option, the do-something-else option, the do-nothing option – they’re all out in the open.

It’s our obligation to look at them before choosing a path forward.

This might seem like a daunting task. An uphill climb. A joyless sojourn.

But it doesn’t have to be.


I am a huge fan of Malcolm Gladwell.

Longtime readers are familiar with my Gladwellian obsession. His bestselling books adorn my bookshelf. His acclaimed podcast fills my audio feed.

There’s a certain clarity in Gladwell’s work. A mix of eloquence and boldness in his statements.

But that’s not what draws me to him like a moth to a flame.

Malcolm Gladwell is somewhat of a contrarian. He’s embraced counterfactuals since long before it was cool. Before there was data science and advanced computing to back up his views.

Indeed, in those early days, Gladwell would often dive deep into obscure datasets and historical studies to support his claims. He would connect disparate dots in a manner that wouldn’t become clear until the story was nearly over.

Gladwell’s perspective was maddeningly uncomfortable to me when I first encountered it.

I yearned to follow the prevailing winds. I desired to kowtow to custom. I wanted to go Hit Me on every round.

I had no appetite to upset the apple cart. I wasn’t buying what Malcolm Gladwell was selling.

Gradually, though, his well-informed perspectives won me over. I became less consumed by preconceptions, and more enamored with getting closer to the truth – as unsightly as it might be.

The prospect of encountering counterfactuals became exciting, not exhausting. And my decision-making chops flourished.

I no longer play Blackjack. But if I did, I’m certain I’d be far more proficient at it these days. For I understand the subtle pull of the do-nothing option in an environment yearning for another card. And I’m willing to give it an audience.

Such power lies within all of us. I am sure of it. We just need to harness it.

And that starts with the right mindset. With embracing counterfactuals, rather than running from them.

Are you ready to take that quest?

Deep in the Heart

It was hard to miss.

As I drove by the pasture on the way to work, an irrigator was hard at work dampening the sod.

An industrial-strength spigot fired blasts of water 10 feet in the air, before gravity and the wind took over. The water would fall to the ground in a thick mist, allowing one pump to bring water to several square feet of land.

Then the fixture would rotate a bit. It would reload, firing a blast of hydration to fall on an adjoining patch of ground.

This pattern continued until the circle was complete. Then the cycle would start again.

The morning sunlight made all this quite a spectacle. The water appeared as a transparent curtain as it fell back to earth. A million tiny droplets hung suspended in the air.

It was a sight reminiscent of an exotic destination. A waterfall secluded in the jungle, perhaps. Or the craggy cliff face where the frothy sea collided with the land.

And yet, this location was anything but.

No, this water was falling on land as flat as a pancake. Across the pasture, some longhorn steers grazed. And behind the thick mist was the asphalt of a highway and the glass façade of an office tower.

This was Texas personified. And I couldn’t imagine myself anywhere else.


Every fall, pictures of massive corsages proliferate through social media.

The floral displays are up to three feet high. And they often adorn the fronts of dresses that high school girls wear to the homecoming dance.

Or so I’m told.

You see, the spectacle of mums at the homecoming dance is a distinctly Texan tradition. It exemplifies the school-age experience in the Lone Star State — an experience I never had.

I was 20 years old when I first set foot in Texas, and 22 when I formally made it my home.

I still had some maturing to do in those days of early adulthood. But there was no doubt that I’d grown up elsewhere.

This dichotomy has dogged me a bit.

Sure, I chose to dig my boots into Lone Star soil at my earliest adult opportunity. But I can never claim to be a Texas Native.

The region I can claim native status in – the Northeastern United States – well, I left it at my earliest opportunity. I was a high school graduate, a teenager who realized that many of his happiest moments were found on vacations far from home.

I yearned to follow the thread of that intuition, to try out somewhere new for size. And college offered the perfect opportunity to do just that.

So, I moved from New York to Miami. And I spent my undergraduate years under the warm South Florida sun.

The experiment had mixed results. I was grateful to be out of the Northeast, harboring no real desire to return for the long haul. And I thrived in school, ultimately graduating with honors.

But as that graduation date approached, I was overcome by a certain feeling. A feeling that Florida could not be my forever home.

I belonged somewhere else. But where?

I was sorting through that question when I got a job offer in West Texas. I accepted without hesitation. And not long after moving west, I recognized that I’d found my answer.

This is where I was meant to be all along.


Growing up in America’s oldest and most densely populated region meant making several assumptions.

The winters would be cold. The summers would be sticky. And no matter the weather, the traffic would be awful.

From an early age, I recognized that my family’s suburban home had a modest backyard and no garage. But at least we had a yard and a car. I knew plenty of people without either.

I never did ask why we all signed up for this. I didn’t have to.

Even as a child, I understood that the Northeast was a vanguard of culture and a beacon of professional opportunity. That’s why most of my family had made their home in the region. And why the families of my friends had done the same.

I respected that tradition, even as I moved to defy it. But the reactions I got for doing so caught me off guard.

Family and friends would lampoon my new home, evoking the most outlandish stereotypes. They’d rail against politics in Texas. Or they’d derisively refer to the state as The Flyover Zone.

I brought this on myself to some degree. On my first trip north after my move, I sported boots, Wrangler jeans, and a belt buckle – in the middle of summer.

But as the years flew by — and it became clear that I wasn’t moving back — the derision continued. It was as if my choice to swap zip codes was a betrayal. A wayward trek that flouted an invisible boundary.

This rankled me.

The winding road had finally led me home. Yet, I was still the only one to accept it.


The pasture was now in my rearview mirror.

As the shadow of the office tower hovered over me, my mind began to wander.

I saw beauty all around me. In the rustic cattle patch bathed in sunlight. In the curtain of mechanical mist dampening it. And in the modern marvels – the highway and the office building – providing a backdrop.

Maybe that vista wasn’t everyone else’s cup of tea. But it sure was mine.

I suppose this is a prime reason why I’ve remained steadfast in my devotion to the Lone Star State. Perhaps it’s why I’ve grudgingly endured the underhandedness from those who reside far beyond the Pine Curtain.

Texas is deep in the heart of me. I’ve found beauty in both its grandeur and its monotony. I’ve found grace in the kindness of its populace. I’ve found grit through its tradition of resilience.

I’ve found myself through it all.

Others might not see what I see here. And ultimately, they don’t have to.

I just hope that they respect my decision. My right to put a stake in Lone Star ground. And to find peace on the Southern Plains.

Home is where the heart is. Mine resides here.