William Harvey, Program Manager for Strategic Initiatives and University Professor, brings a refreshingly practical perspective to leadership and problem-solving. Throughout the conversation, William shares how his diverse background—from the Marine Corps to manufacturing to academia—has shaped his approach to developing people and tackling complex challenges.
William’s philosophy on leadership centers on flexibility and situational awareness. He describes his approach as stepping into whatever role the moment demands, whether that’s ownership, delegation, coaching, or sponsorship. Drawing an analogy to the movie “300,” where King Leonidas steps into missing spots, William explains that he doesn’t declare his role upfront but instead reads the situation and fills gaps as needed. For critical moments—safety incidents, major quality investigations, or when someone is truly struggling—he leads directly. But for planned activities, he creates safe spaces where people can develop new competencies without the pressure of real-time crises forcing immediate action.
One of William’s most compelling insights challenges a common assumption in problem-solving work. Before jumping into any methodology or framework, he insists on establishing two fundamentals: does everyone agree it’s actually a problem, and where does it fit in the priority list? Without that shared understanding and commitment, all the problem-solving methods in the world won’t matter. William also emphasizes diversity of thought as critical to collaboration, pointing out that perspectives shaped by education, family upbringing, international experience, and other life factors often matter more than visible diversity markers alone.
William has learned to manage his own influence carefully. Recognizing that as a senior person, he can easily sway a group, he’s developed tactics like voting before discussion and speaking last. He presents ideas as straw man arguments, deliberately inviting critique by asking what’s wrong with the plan rather than assuming he’s considered everything. This approach reflects his understanding that mental models are never fully accurate—they only become more accurate through constant refinement based on the gap between expectation and reality.
The conversation reveals how William has built learning directly into organizational rhythms at multiple levels. In daily huddles, one-on-ones, and formal after-action reviews, he creates space for reflection. But his most powerful discovery came accidentally when he started asking, “Who’s done something worth recognizing since we last met?” before discussing what needs improvement. Within about 30 days, finger-pointing disappeared. By layering genuine praise first, William found that people became far more willing to collaborate on problems, seeing issues as process failures rather than personal attacks.
William also shares his practice of using pre-mortems, taking insights from past post-mortems to identify what could fail in new projects before they launch. This forward-looking application of learning prevents teams from repeating mistakes. He references the “zoom in, zoom out” systems thinking model, noting that while most people excel at zooming in on technical details, they often forget to zoom out to see handoffs between functions and other systemic issues that could derail success.
Looking ahead, William is exploring how AI can make learning content more effective by customizing delivery to resonate with diverse learners—matching accents, appearances, and contexts to help information land more powerfully. It’s a natural extension of his commitment to intentional inclusion and meeting people where they are.
Connect with William on LinkedIn
Lean Coffee Episode 7
In Episode 7, Mark Graban and Jamie Flinchbaugh talk about the Olympics, French Press coffee, answer listener questions, discuss Starbucks plastic stoppers, and try to figure out if The Muppet Show has a future. If you can’t find the theme, don’t worry, because this is Lean Coffee Talk and we can explore all sorts of things without a theme.
We each pour a French Press Coffee in our new Lean Coffee Talk mugs and explore the differences between immersion and percolation brewing (hint: French Press is immersion). We explore some of the nuances but also the inherent simplicity that sometimes makes coffee easy. We then get into listener questions focusing on two topics: KPIs and change management. We explore both topics in terms of when it works and when it doesn’t, how rigid or flexible to be, and what behaviors help enable success.
We then talk about product design and customer experience through the lens of the plastic Starbucks cup stopper. Or is it a stirrer? Or is it a fluid dynamics damper? OK, probably not the last one but this little piece of plastic gives us plenty of questions and insights regarding waste, the jobs customers need done, and customer personas.
We end hoping that the single episode of The Muppet Show is turned into a green light for a full series.
Please review us and follow!
Gregory J. Scaven: Curiosity and Discipline in Problem-Solving
Gregory J. Scaven, CEO, Board Director, Partner, and currently President at Scaven Enterprises, LLC, brings over 30 years of technical engineering leadership and more than 20 years as a P&L leader to this conversation about problem-solving. With deep expertise in pyrotechnics, explosives, and propellants across automotive, aerospace, and defense industries, Greg shares how his approach to problem-solving evolved from the lab to the boardroom.
Greg’s introduction to problem-solving came through the lens of high-reliability engineering, where devices that “go boom” must do so only when intended. Working in an industry demanding “six-nines” reliability or better, he learned the discipline of corrective action processes, where finding the true root cause wasn’t optional. Greg emphasizes that his early training taught him to demonstrate the ability to turn failure modes on and off, then prove the effectiveness of preventative actions. This rigorous foundation shaped everything that followed.
The transition from engineer to business leader brought formal problem-solving training through the Danaher Business System. Greg describes how Danaher focused on training leadership teams, not just front-line workers, because problem-solving is a critical leadership skill. The emphasis was revolutionary for him: spend 70% of your time defining what the problem actually is. Greg explains that coaching teams to frame problems correctly became more important than diving into technical details, and he learned to limit his organization to no more than three major problems at any time, integrating them into regular leadership reviews.
Throughout the conversation, Greg returns to a central theme: critical thinking matters more than following forms. He cautions against becoming a slave to any tool, insisting the power lies in the thinking process itself. When young engineers worry about filling out corrective action paperwork, Greg redirects them to focus on what they’ve learned. He consistently asks teams to reframe their problem statements as new learning emerges, recognizing that the problem definition itself can evolve.
Greg draws a clear distinction between what he calls “cause problems” and “creative problems.” As an engineer, he dealt with cause problems where scientific rationale could explain failures through tolerance stack-ups and environmental conditions. As a P&L leader, he faces creative problems like sales shortfalls, where turning failure modes on and off isn’t possible. This is where experimentation becomes powerful. Greg encourages teams to quickly test their top three ideas, look for early returns, and double down on what works while abandoning what doesn’t.
Creating a learning culture under P&L pressure requires deliberate effort. Greg believes great businesses are naturally curious, filled with people who aren’t afraid when experiments fail. He looks for teams that iterate without waiting for permission, teams that come to him saying, “We tried this, it didn’t work, so here’s what we’re doing next.” That’s his definition of success. Greg emphasizes accountability for follow-through rather than results, building on concepts from his military background around the commander’s intent. Teams that understand the big picture, maintain discipline, and show bias for action don’t wait for scheduled reviews when critical issues arise.
Greg’s approach reveals how curiosity, discipline, and real-time responsiveness create problem-solving cultures that deliver. His journey from engineering to executive leadership demonstrates that while the problems change, the principles of critical thinking, experimentation, and learning remain constant.
To connect with Greg or learn more about his work, visit his LinkedIn profile at www.linkedin.com/in/gjscaven.
Steve Brown of Google DeepMind fame on Leading AI Transformation
Steve Brown has spent years helping organizations see around corners. As a former executive at both Intel Labs and Google DeepMind, where he served as their in-house futurist, Steve brings a unique perspective on what happens when rapid technological change collides with practical business reality. In this conversation, he challenges leaders to move beyond fear and a cost-cutting mentality to embrace AI as a tool for genuine value creation.
Steve explains that being a futurist isn’t about making predictions—that’s for fortune tellers. Instead, it’s a discipline of examining trends, understanding how they intersect over time, and mapping possible futures. But the landscape has grown increasingly complex. The pace of AI development has accelerated so dramatically that projecting even six months ahead has become challenging. What makes AI particularly difficult to forecast isn’t just the technology itself, but the ripple effects of having powerful intelligence available on demand at low cost. As Steve puts it, this changes everything about everything.
When it comes to implementation, Steve grounds his approach in a framework he calls “possibility and purpose.” He sees AI creating an enormous landscape of what’s possible, but warns that the real leadership challenge is figuring out what not to do. By finding the intersection between corporate purpose and this expanded possibility space, organizations can focus their efforts where they’ll create the most value.
Steve offers a fresh perspective on AI’s relationship with human qualities, such as empathy. While acknowledging that AI simulates rather than truly experiences emotions, he points to promising applications like AI therapists that can reach people who would never seek human help. The key is understanding when simulation serves a genuine need versus when it creates friction in developing essential human skills—like learning to navigate relationships and failures.
The heart of Steve’s message centers on reimagining AI not as a replacement for humans, but as a collaborative teammate. He describes three types of AI agents organizations should consider: offload agents that handle boring repetitive work, elevate agents that amplify human capabilities, and extend agents that enable people to do things they couldn’t do before. This framework transforms workforce planning from a zero-sum game into an expansion strategy. Steve points to Jensen Huang’s vision at NVIDIA—growing from 30,000 employees to 50,000, supported by 100 million AI assistants—as an example of thinking about amplification rather than reduction.
Steve argues that AI project failures typically stem from three core issues: immature technology, poor change management, and messy data. Organizations succeed when they start small with bounded projects, balance short-term wins with medium and long-term initiatives, and treat AI implementation as fundamentally a change management challenge rather than just a technology deployment. He emphasizes that everyone owns the AI transition—from line of business to HR to IT—though having a Chief AI Officer can help drive the organizational transformation required.
Rather than obsessing over traditional ROI calculations, Steve encourages leaders to focus on the human challenges that AI can solve. When the average knowledge worker spends 32 days per year just searching for information, cutting that time in half represents massive value that goes beyond simple efficiency metrics.
Learn more about Steve’s work and access his resources:
AI Resources
AI Course
https://www.stevebrown.ai/ai-course
AI Workshops
https://www.stevebrown.ai/workshop
Keynotes
https://www.stevebrown.ai/keynotes
YouTube
Amazon book “The AI Ultimatum: Preparing for a World of Intelligent Machines and Radical Transformation.”
Connect with him on LinkedIn
https://www.linkedin.com/in/futuresteve/
Embracing Failure: Dr. Melisa Buie on Learning to Faceplant
Dr. Melisa Buie brings a fascinating perspective to the challenge of failure, one forged through decades of building high-powered lasers and leading manufacturing transformations in the semiconductor industry. With a PhD in Nuclear Engineering and Plasma Physics from the University of Michigan and over 15 years at Coherent, Inc., Melisa has spent her career solving complex technical problems. But it was a personal struggle that led to her latest book, “Faceplant: FREE Yourself from Failure’s Funk,” co-authored with Keely Hurley.
Melisa shared a compelling story that became the catalyst for her book. Despite being completely comfortable with failure in the laboratory, where experiments routinely don’t work, and models need constant refinement, she discovered she was terrified of failing in her personal life. When she took a Spanish class at Stanford and tried speaking her first sentence to a friend, the friend burst out laughing. Melisa’s immediate reaction was to shut down completely. She realized she had developed a fixed mindset about failure outside the lab, and this contradiction troubled her deeply.
She spent years reading everything she could about failure, learning, and growth, ultimately developing the framework that became “Faceplant.” The book’s title came from Melisa’s co-author, Keely, who has a gift for turning her own missteps into hilarious stories. For Keely, every failure was just another face plant to laugh about, and the metaphor stuck immediately.
The subtitle’s use of “FREE” isn’t just clever wordplay; it’s an acronym for a practical framework: Focus, Reflect, Explore, Engage. Melisa explained that the framework grew organically from her lean manufacturing background, particularly the principle of Hansei, which emphasizes self-reflection followed by self-improvement. The first two steps help clarify what actually happened and understand your role in it, while the final two steps push you toward curiosity and experimentation.
When asked about organizational barriers to learning from failure, Melisa highlighted the critical importance of psychological safety, pointing to the work of Amy Edmondson and Mark Graban. She noted that leaders often unintentionally shut down learning through their behaviors, even when they genuinely believe they support it. Melisa offered concrete examples to watch for: Is it easier to get approval for a half-million-dollar piece of equipment than to run a five-thousand-dollar experiment? If equipment purchases are immediate but experiment proposals sit unopened for weeks, that reveals the organization’s true priorities. She also pointed to meeting dynamics: when brainstorming sessions fall silent except for one voice, or when only a single idea emerges and everyone rallies around it without discussion, those are warning signs.
Perhaps most striking was Melisa’s deliberate choice to use the word “failure” throughout her book, rather than softer alternatives like “learning opportunity” or “mistake.” She explained that failure makes us deeply uncomfortable, and she didn’t want to step over that discomfort. When one friend admitted to only failing once in life, Melisa felt sad for them, because without taking risks and chances, we miss the rich opportunities that failure provides. She acknowledged the irony: in the lab, ten failed experiments in a design of experiments might be considered a beautiful success because of what was learned. But she wanted to be honest about calling things what they are, pushing past the positive platitudes about failure to actually embrace it.
Learn more about Melisa and her work at www.melisabuie.com and www.faceplantbook.com, or connect with her on LinkedIn.
Lean Coffee Episode 6
In Episode 6, Mark Graban and Jamie Flinchbaugh bring “lean coffee” to Lean Coffee Talk, kind of. But first, we haven’t caught up in a while, so we recap various items like Christmas, college football playoffs, and the other football played in England. We then discuss coffee, specifically the most important element of coffee…the roasted beans! Yes, everything else does matter, but fresh beans are vital. Both of us buy local, and we each share one of our favorite spots.
Mark and Jamie then take audience questions, although not live. Listeners had the opportunity to submit questions (and might win a free Lean Coffee Talk mug in the process) that we would answer on the show, and that form is still active, so you can submit questions for future episodes. We discuss higher education, psychological safety, getting lean going in your department when the company might not be supportive, and how to push back or redirect lean requirements that are misguided or misapplied. Balance was a strong theme in this discussion.
We close out with our typical cultural share. Jamie was building custom playlists on Spotify using ChatGPT. Mark watched Brandi Carlile’s holiday streaming special from inside her own home with family and friends, and will see her in concert.
Please review us and follow!
Managing NASA’s Most Complex Mission with Scott Willoughby
Scott Willoughby, Vice President of Program Excellence at Northrop Grumman and former program manager for the James Webb Space Telescope, joined Jamie Flinchbaugh to share insights on leading one of the most complex systems ever built. With 35 years at Northrop Grumman, a NASA Distinguished Public Service Medal, membership in the National Academy of Engineering, and, we have to include, a degree from Lehigh University, Scott brought deep wisdom about managing massive programs where failure simply isn’t an option.
Managing the James Webb Space Telescope meant dealing with a system seven times larger than Hubble that had to operate at minus 400 degrees Fahrenheit, a million miles from Earth. Scott explained that tackling such complexity requires breaking problems down through systems engineering, but with a critical twist: don’t trust yourself. Everything on Webb was done in twos. NASA and Northrop Grumman each built independent models, particularly for thermal and dynamic performance. When pointing a telescope at light from 13.5 billion years ago, stability matters, and even small temperature changes cause mechanical components to shrink and expand. The two teams challenged each other constantly, ensuring they reached the same conclusions before moving forward.
When models disagreed, which happened often during iteration, teams had to get intimately familiar not just with their own work but with how the other side modeled things. Sometimes, differences came down to using different densities or levels of detail. Other times, teams discovered they were working from different versions of test data. Scott emphasized that much of technical work is about getting people to communicate, to say their assumptions out loud rather than keeping them in folders or inside their heads.
Creating a learning culture among world-class engineers and PhDs required leading by example. Scott realized early that being a leader didn’t mean knowing everything. He deliberately asked questions that seemed obvious, sometimes the wrong questions, to get beneath the surface. He echoed back what others said in his own words, creating what he called a safe zone in the middle of dialogue where you don’t have to be right until the end. By showing vulnerability and modeling openness, he encouraged teams to converge on solutions without anyone feeling accused of being wrong.
Testing followed a crawl, walk, run philosophy. Scott stressed taking the hardest punch as early and as low in the system as possible. They qualified components by subjecting them to extremes beyond predicted conditions, building margin into designs for things they couldn’t model perfectly. The hardest day in any satellite’s life is usually day one, which for Webb lasted six months as systems were deployed and activated for the first time.
One of Scott’s favorite stories captured the power of listening to everyone. When membrane tears appeared during sunshield deployment testing, engineers wrestled with an apparently intractable problem. The solution came from a technician who suggested using something like a squid jig from his fishing tackle box to gently align the 107 pin holes through multiple membrane layers. His compliant device solved one of the program’s most complicated problems. Scott learned that elegant solutions sometimes come from understanding how things get built, not just how they’re designed.
For transparency with stakeholders, Scott developed a rhythm of meeting every three months to discuss what had happened since the last time, what they were doing now, and most importantly, what challenges lay ahead. By forecasting risks before they materialized, discussing backup plans, and building anticipation for difficult tests, he made it easier to discuss both failures and successes. What advice would he offer to anyone stepping into similar roles? Take a deep breath, realize it won’t go perfectly, and talk to others who’ve been there. Growth doesn’t occur without discomfort, and leaders get measured not by perfection but by how they respond to adversity.
Learn more about Scott’s work at https://www.northropgrumman.com/, https://science.nasa.gov/mission/webb/, and https://www.imdb.com/name/nm12283488/. Connect with Scott on LinkedIn.
Smart Idiots and Brave Thinkers: Rethinking Critical Thinking
Is courage the missing ingredient for successful critical thinking, and why is critical thinking still one of the most critical skills for every human? As we start to explore critical thinking, it’s a term that’s thrown around as loosely as leadership or integrity, but it is very much worth examining and understanding. Not only is it important today, as it always has been, but it will very likely be even more important in your future. I will explain why, and also break critical thinking down into fundamental elements that are all important. The pathway to improving critical thinking is in improving the ingredients.
So, before we jump to why, the four ingredients I’m going to focus on are, first, cognitive ability, or intelligence, which is the engine that drives critical thinking. That is supported by the second ingredient, the breadth and depth of knowledge, whether domain-specific or not. The third ingredient is emotional intelligence and your ability to self-regulate through decision-making, which acts as the steering for your critical thinking. And lastly is the courage and the will to think critically. This last element is, to be honest, a subset of emotional intelligence. However, it is worth calling out as a separate ingredient, because without it, all the fundamental aspects of critical thinking and independence get washed away. Now, why is improving our critical thinking so important?
Daniel Kahneman, the famous Nobel laureate in economics, said, “We think, each of us, that we’re much more rational than we are.” Our ability to reason is one of the keys to self-improvement and to navigating the complexities of life, yet despite our opportunities to practice, we may not be as good at it as we believe.
If we also consider the stakes of critical thinking, this is much of what Thomas Jefferson referred to when he said, “If a nation expects to be ignorant and free, in a state of civilization, it expects what never was and never will be.” While those are both high ideals, there are also practical purposes behind critical thinking, including the idea of employment.
The Hart Research Associates 2018 Employer Survey found that 78% identified critical thinking and analytic reasoning as the most important skill they seek in employees. Furthermore, the National Association of Colleges and Employers Job Outlook Surveys found in 2023 that 28% of respondents ranked critical thinking as the single most important competency, and in 2025, 96.1% of employers rated it as important. The American Association of Colleges and Universities’ employer research showed that 93% of employers value critical thinking more than a university degree, which demonstrates a reflection of today’s shift from checkbox hiring to skills-based hiring.
So critical thinking has always been important, from Aristotle to Thomas Jefferson to getting a job today. But in the future, it’s likely to continue to be even more important.
As social media has increased, the ability to discern what’s real, what’s fake, and what’s exaggerated is much more difficult. Misinformation campaigns, whether by an individual or by a state, have become commonplace, with bad actors in Russia once generating both sides of competing protests in a Texas town without ever setting foot on the ground themselves.
Going forward, in an AI-based world, the ability to think for yourself and think critically, both about the inputs to AI and the outputs from it, may become one of the most essential human skills. This is perhaps one of the key moments where raw intelligence is surpassed by AI, and raw intelligence may therefore become one of the least important elements of critical thinking, although we certainly shouldn’t throw it away in the process.
Let’s turn to those four ingredients and start with cognitive ability. Cognitive ability is the engine that drives things. This is your intelligence. It’s what powers your ability for critical thinking. There is no question that it is insufficient on its own, but it is still what allows you to process what you’re absorbing, to consider more than one variable at a time, and to find new connections and new insights in a complex world of information. I won’t focus a lot on why it’s important, as it’s fairly obvious, but I will highlight some of its gaps.
Diane Halpern, author of Thought and Knowledge, states, “A high IQ is not always an indicator of good critical judgment, since sometimes people with high intelligence are not exempt from biases or rigid thoughts.” That’s what makes these other variables and ingredients so vitally important.
Furthermore, intelligence has some catches that we have to watch out for. The idea of motivated reasoning indicates that highly intelligent people are better at rationalizing their biases. Therefore, while biased, they can create a sound argument supporting that bias, which can fundamentally turn them into what we can endearingly call a smart idiot. So while making people smarter sounds good, we don’t want to end up with a world of smart idiots!
So having a powerful engine can be very valuable. But as anybody who’s driven a sports car can tell you, a powerful engine is not enough. So let’s turn our attention to knowledge.
Knowledge is our navigational map through the world of critical thinking. This begins with domain knowledge in the topic that we’re applying critical thinking within. Understanding the variables, the cause and effect connections, the systems dynamics, and the historical patterns for any topic (whether geopolitical, physical, strategic, or even simply human) is a key ingredient for critical thinking. For example, conspiracy theories are primarily believed by those who lack domain knowledge in the conspiracy domain to understand why it couldn’t possibly be true.
It is, of course, important to recognize that no domain has completed its development of the knowledge base, as science continues to unveil new insights and new understanding of the universe. Most recently, the James Webb Space Telescope, which I discussed with its program manager on this podcast, was launched and led to new insights about how the universe works.
Therefore, domain knowledge will never be complete, but the most effective critical thinkers leverage both the existing critical knowledge, as well as holding space for both new discovery and unwinding past assumptions that may no longer be valid.
Beyond domain knowledge, there’s also breadth of knowledge. The book Range demonstrates very clearly the value of a broad knowledge base. The ability to cross over domains can lead to creative thinking, new insights, as well as simply a broader understanding of how the interconnected world works. The book helps us understand the value of generalists who understand many domains, while not discounting the value of specialists with deep domain expertise.
It’s important, therefore, for critical thinking that we read, we study, we learn. This does not mean a college degree, although a college degree has been used as a proxy for having learned certain things. But as the quote from Good Will Hunting demonstrates, that same knowledge is available for $1.50 in late charges at your local library. It’s the pursuit of that knowledge that is the key, by any means you pursue it.
This is one of the places where AI can provide us access to more knowledge. This will allow more people to engage in critical thinking if they learn how to both utilize AI to gain access to previously inaccessible knowledge, but also enough domain expertise to discern whether what they’re reading is sound or not. I, for example, found both the Diane Halpern quote and the Thomas Jefferson quote by using AI tools to do research on this subject.
I also discovered this useful quote from Anatole France: “An education isn’t how much you’ve committed to memory or even how much you know. It’s being able to differentiate between what you know and what you don’t.” Which means that expanding our knowledge is a perpetual and worthwhile pursuit.
Considering emotional intelligence, Christopher Dwyer states, “If the impact of emotion on thinking is one of the biggest barriers to critical thinking, as I believe it is, then the ability to self-regulate your thinking in a manner that accounts for such potential impacts is of utmost importance.” This is because the existence of biases is not related to intelligence. Biases are often based on other factors, such as dopamine, where the confirmation bias allows us to feel good that we were right all along, and can trick us into a false interpretation of observable facts.
This is why emotional intelligence is the steering that helps us navigate our engine through the map of knowledge. It helps us dampen, although not eliminate, our biases. It allows us to stay with a question longer through the chasm from not knowing to finding answers. That can be a very uncomfortable place to be, knowing that knowledge is needed or knowing that a conclusion is needed. It takes restraint not to quickly wrap things up but to stay with discomfort.
This is why most effective problem-solving is designed to slow us down and to force us to think more deliberately and critically, because our instinct is to rush to that conclusion, to check the box, and to close the door. But staying with the problem longer, through a series of steps (as I write about in my book People Solve Problems), is what allows us to dig deeper and uncover new insights, new ideas, and new solutions.
Aristotle states, “It is the mark of an educated mind to be able to entertain a thought without accepting it.” This means that we can examine all sides of an argument, consider solutions that clearly will not solve our problem, and be curious about what aspects of them are useful, insightful, important, or informative. This is how we understand the other side of an equation. This is how we understand the other side of an argument. This is how we process bad news (whether we got a bad performance review, were fired, or had someone give us a bad review on a speech or a book). We allow information that may hurt or sting to be examined, understood, and leveraged for future benefit.
I rediscovered recently and wrote about in this blog post this quote from John F. Kennedy in his commencement speech at Yale University in 1962:
“For the great enemy of truth is very often not the lie (deliberate, contrived, and dishonest), but the myth (persistent, persuasive, and unrealistic). Too often, we hold fast to the clichés of our forebears. We subject all facts to a prefabricated set of interpretations. We enjoy the comfort of opinion without the discomfort of thought. Thinking requires effort and responsibility. This means that the prejudices of the present must not be allowed to obscure the truth of the past, nor must we ever assume that the truth is necessarily in the middle of opposing viewpoints, nor must we see merit in both sides of a question simply because they are opposed, nor must we expect that the truth will always be found by splitting the difference between two opposite ideas.”
As we deploy emotional intelligence within our critical thinking, one of its important elements is empathy, a key idea for understanding opposing viewpoints. I wrote about the idea of rigorous empathy here: it allows us to truly understand someone’s path, experience, and context. Rigorous empathy does not forfeit your freedom to draw your own conclusions and to think critically; it is a tool for understanding more deeply.
Another dimension of emotional intelligence is courage, but I decided here, as part of this framework, to extract courage as its own ingredient, because courage and will provide the fuel for critical thinking.
Indira Gandhi said it well: “You have to have courage, courage of different kinds. First, intellectual courage to sort out different values and make up your mind about which is the one which is right for you to follow. You have to have moral courage to stick up for that, no matter what comes in your way.” As she talks about intellectual courage, the idea is that you have a responsibility as well as an opportunity to decide things for yourself.
This begins with separating identity from ideas. Identity politics is a practice in which belonging is tied to a broad set of ideas, which is a dangerous trap because you cannot discard one idea without discarding part of your identity. This leads to rigidity and zealotry, and sometimes to dramatic outcomes from those traits.
Carl Jung wrote about this, interestingly, in the context of flying saucers in the 1950s, when there was mass hysteria around UFO sightings driven by Cold War anxieties. He wrote, “Thinking is difficult, therefore let the herd pronounce judgment.” Letting the herd decide gives us a sense of belonging to a conclusion, but courage allows us to deconstruct that reality and sometimes swim upstream against what those around us believe. That takes courage.
Russ Payne wrote, “We live in an age where not having the right opinions can get you kicked out of your group. Many people would rather die than not belong.” This is where the courage is needed first to have independent thought through critical thinking, then further courage to give voice to that critical thinking, and further courage again to act on it.
At a smaller scale, yet equally important because it occurs every day, courage is also required to muster the energy for critical thinking. Whether it applies to how we eat, sleep, work, or exercise, it is very easy to follow the path of least resistance. The cognitive miser in us wants to preserve mental energy for when we need it most, but each moment we determine that critical thinking is warranted is a moment we must expend that mental energy.
In his book The Diary of a CEO, Steven Bartlett recounts a powerful lesson on the dangers of groupthink and the lack of courageous critical thinking through a story of a high-stakes meeting where a leader asked his team to rate a new idea. While Bartlett privately judged the pitch as a “1” out of 10, he watched as every colleague before him, succumbing to the pressure of social conformity, praised it as a “10.” When the spotlight finally landed on him, the momentum of the room was so overwhelming that he found himself meekly echoing the “10” despite his internal conviction. This anecdote serves as a stark warning that without a culture of psychological safety, the desire for harmony will too often silence the courageous truth, leaving organizations blind to their most critical flaws.
In a famous product launch example, automotive legend Bob Lutz has said the Pontiac Aztek is what happens when internal momentum and deference beat honest feedback. The Aztek bombed in early market research with one respondent saying, “I wouldn’t take it as a gift,” yet the organization pushed ahead anyway. No one had the courage to either think critically about what they were doing, give voice to that thinking, or act on that viewpoint.
Whether that means taking the time, creating the environment, pushing off distractions, or sitting down with a pen and paper, the courage to do the hard work is a daily challenge, but a vitally important one if we are to deploy this most critical skill.
Critical thinking is more than intelligence. Our cognitive abilities provide the engine, our access to knowledge (both retained and pursued) provides the map, and our emotional intelligence allows us to steer, but it is daily courage that allows us to face the moments where critical thinking matters. And those moments happen every single day for every single person.
For self-improvement, don’t focus on developing critical thinking, but instead consider how you can cultivate each of the respective ingredients that make up critical thinking.
Rick Pedersen of Old Norse Consulting on Knowledge Gaps in Product Development
Rick Pedersen, owner of Old Norse Consulting, joined host Jamie Flinchbaugh to explore why product development demands a fundamentally different approach to problem-solving than traditional business processes. During their conversation, Rick explained that while most business functions involve transactional processes that can be documented and repeated, product development centers on building knowledge to solve problems that have never been encountered before.
Rick draws a clear distinction between information gathering and genuine knowledge gaps. He explains that a true knowledge gap exists when answers cannot simply be looked up or obtained from an expert. Instead, teams must invest time and resources in building prototypes, running tests, or conducting simulations to create new knowledge. Rick advises teams facing uncertainty to document potential knowledge gaps quickly, then filter them to determine which require actual investigation versus simple research.
The conversation revealed how knowledge creation serves as the lifeblood of product development, much like flow serves manufacturing. He emphasizes that the real value in product development comes from creating new knowledge and making it reusable. He compares this to compound interest, where teams that fail to document their discoveries essentially discard their gains rather than letting them accumulate over time. This results in organizations repeatedly solving the same problems across different projects, representing significant waste.
Rick advocates for a shift from traditional task-oriented project management to organizing work around knowledge gaps. Rather than focusing solely on completing action items, teams should orient their efforts around closing knowledge gaps through what he calls fast learning loops or fast learning cycles. This approach helps teams understand why they are performing tasks and keeps the focus on building knowledge that enables better decisions.
When discussing learning from industry leaders like Toyota, Rick cautions against simply copying their systems. He stresses the importance of understanding the thinking behind why successful companies use specific tools and behaviors, then adapting those principles to each organization’s unique situation. He recommends starting small, selecting one or two pilot projects where teams can experiment with new methods while receiving coaching along the way.
Rick recently launched the LPPD Bootcamp, an immersive workshop designed to accelerate learning about product development principles. He explains that the workshop addresses a fundamental challenge in product development: the years-long timeframe makes it difficult to see results and adjust quickly. The bootcamp compresses an entire product development cycle into less than a week, allowing participants to experience how different improvements interact and deliver benefits. The environment also helps teams practice cross-functional collaboration and establish shared reference points they can draw upon when working on real projects.
Throughout the conversation, Rick emphasized that successful product development requires teams to recognize knowledge gaps, invest in closing them systematically, and capture what they learn for future reuse.
For more information about Rick’s work, visit oldnorsellc.com and LPPDBootcamp.com, or connect with him on LinkedIn.
Reflections on AI and Humanity With Arianna Huffington

Arianna Huffington was hosted at Lehigh University for a wide-ranging discussion centered on AI, but it covered much more. I certainly will not try to summarize the entire conversation, but will focus on three key takeaways and my reflections on them as she told stories and shared perspectives.
The first was fundamentally the opportunity to learn from every experience, even from failure (if failure is even the right word). Huffington states, “Failure is not the opposite of success, it’s the stepping stone to success.” That applies to many different things, but one of my favorite and most compelling stories that relates to this theme was when she ran for governor of California.
She was one of a wide swath of candidates who eventually lost to Governor Arnold Schwarzenegger, and she was only in the election for one and a half months. However, during that time, they did an online campaign which got picked up by other media, including national media, and essentially went viral.
She learned from that experience the power of online media, and that insight eventually led to her starting The Huffington Post, which of course accelerated her career and her influence and impact on the world.
The lesson we should all take away from this is that we should be open to a pivot, open to possibilities, and open to opportunities. We should learn from all of them because you never know when a lesson may appear that may set you on a different trajectory.
The second takeaway centers on AI and her focus with Thrive AI Health. Her belief is that she can democratize healthcare coaching. It’s important to note we’re not saying healthcare, but healthcare coaching.
In part, her focus with Thrive is on fundamentals of health that are often untreated and ignored by healthcare professionals, such as sleep. In Thrive, in her book, and in Thrive AI, she puts a lot of focus on habits, nudges, and microsteps.
The goal is to be pragmatic and make things fit within your life in a way that actually makes sense and is likely to be implemented. As that happens, habits are small things that start to become routine. For example, because she focused so much on sleep, my phone is never next to my bed; it is often two floors away. This is a very significant habit. She believes that needing the phone as an alarm clock, or a backup alarm clock, is an excuse many people use, one that really just supports a bad habit.
Nudges, of course, are things that just move us in that direction and help us adjust our habits. Microsteps are essentially actions that we take, but they can be very small actions that move us in the right direction, whether that’s around hydration, sleep, or stress.
While Thrive AI isn’t democratizing healthcare, it is meant to democratize healthcare coaching, where you can get personalized coaching while AI learns your lifestyle, preferences, history, and so on. Most people do not have access to coaching, which means they often end up just picking up unverified tips and tricks on social media.
The third takeaway is that she talked about the history of dethronement, with ideas originating from Freud’s 1917 paper A Difficulty in the Path of Psycho-Analysis, where we essentially dethrone a core idea (almost a collective identity) from a scientific or technological viewpoint, and new ideas and new perspectives come along that dethrone old ones.
The first major dethronement was overthrowing the idea that the Earth was the center of the universe. This was the Cosmological Blow brought about by Copernicus and Galileo. The second dethronement was the Biological Blow brought about by Darwin. The third is the Psychological Blow, which is what Freud was promoting around psychoanalysis and the unconscious mind (it was incredibly bold of him to position his views and his stature as equivalent to those other giants, but I digress).
But where this leads is that AI is possibly another dethronement moment, where AI essentially dethrones human intelligence as a core part of our identity. If AI fundamentally becomes more intelligent than humans, then we have to have a conversation about who we are. Are we all about our intellect, or are we more about our consciousness?
Do we have to focus less on making a living and more on making a life? She believes this is the key to education and perhaps, in fact, in an AI-heavy world, we need more focus on the humanities, even the study of classics, to help develop that consciousness around what it means to be human.
For many things that are simply jobs to be done, or knowledge to be held and wielded in a raw intellectual world, AI may eventually replace humans to a significant degree. And so humanity itself becomes what’s fundamentally more important.
These were interesting takeaways from the conversation with Arianna Huffington, and I encourage you to go listen to speakers wherever you have a chance, whether you agree with them or not, to help stimulate your thinking and your own ideas around how you view the world.