The Role of Humans in Strategy
What Remains Quintessentially Human
Why Strategy Still Depends on Insight, Judgment, and Collective Intelligence
If algorithms are modelling markets, optimising portfolios, testing scenarios, and surfacing patterns at a scale no human team could match alone, then what is left for humans to do?
Humans possess the unique ability to rupture dominant frames by questioning underlying assumptions, redefining system boundaries, and reinterpreting signals that do not fit prevailing models. Human-driven reframing emerges through social dialogue, metaphor, and experimentation, rather than through analytic decomposition alone (Orrù et al., 2023). In contrast, Artificial Intelligence (AI) systems operate within the frames encoded in their objectives, training data, and evaluation functions. So, whilst AI can accelerate analysis within frames, the human capacity to disrupt and reconstruct those frames remains central to breakthrough thinking and strategy.
What differentiates heart-pounding strategy from ho-hum strategy is insight: the moment when a problem is reframed rather than merely solved. Insight is discontinuous, non-linear, and often arrives after an impasse. Aha! moments still matter because, without deep insight and the willingness to reframe strategy when market conditions change, organisations decline and become irrelevant. Critical thinking and reframing matter because many strategic failures are not execution failures but failures to reframe when context changes (Biloslavo et al., 2024; Csaszar, 2024).
Strategy at Frontiers: Why Uncertainty Changes the Rules
Frontier environments such as long-duration space missions illustrate why traditional strategic assumptions break down under extreme uncertainty. In these contexts, past performance is not merely an unreliable predictor of future outcomes; it is dangerously so. Unknown problems are guaranteed, and learning must occur as fast as, or faster than, conditions change. This is why organisations such as NASA have historically prioritised simulation, experimentation, and collective sensemaking over rigid plans. Strategy becomes an adaptive capability rather than a predictive artefact. Attempts to manage uncertainty through extrapolation risk the well-documented Dunning–Kruger effect, whereby individuals overestimate their competence in domains characterised by high uncertainty and limited feedback (Benda et al., 2021; Buçinca et al., 2021; Kruger & Dunning, 1999). Under conditions of uncertainty, the future remains fundamentally unknowable, and the core question shifts from “What do we know?” to “How quickly can we learn?” (Biloslavo et al., 2024; OECD, 2025a).
The Limits of Traditional Strategy
Traditional strategic planning models assume relatively stable environments, linear causality, and hierarchical control. However, decades of research in complexity science show that organisations operate as complex adaptive systems, where cause and effect are non-linear and only apparent in hindsight (Biloslavo et al., 2024; OECD, 2025a).
Under such conditions, long-range forecasts and rigid execution plans are poorly aligned with reality. Instead, contemporary strategy must be adaptive, purpose-driven, and human-centred.
The NASA 4-Dimensional System reflects this shift. Developed in environments where failure is unacceptable and certainty is rare, it reframes strategy as a dynamic system shaped by purpose, learning, and collective leadership rather than prediction and control. Three evidence-based shifts characterise this approach:
1. Missions, Not Horizons
Traditional strategy is often organised around short-, medium-, and long-term horizons. While useful for planning, this framing can prioritise predictability over purpose. Research shows that strategy emerges as a pattern of action over time, not solely from deliberate plans (Biloslavo et al., 2024; Csaszar, 2024). Mission-led strategy offers a more resilient anchor. A mission articulates enduring intent, answering why the organisation exists, while allowing methods, timelines, and tactics to adapt.
Goal-setting research demonstrates that meaningful, purpose-driven goals significantly increase motivation, persistence, and performance. Large-scale empirical studies further show that organisations with a clearly articulated purpose exhibit stronger strategic alignment and superior long-term outcomes, particularly under uncertainty (Csaszar, 2024; NIST, 2023).
2. Iterative, Not Linear
Linear strategy models assume that analysis precedes action and that outcomes follow predictably from decisions. Complexity research challenges this assumption. In complex systems, action generates feedback that reshapes both the system and the problem itself (Biloslavo et al., 2024; OECD, 2025a).
An iterative approach treats strategy as a cycle of experimentation, feedback, and learning. Research on agile organisations shows that shorter planning cycles and continuous learning improve adaptability and performance in dynamic environments (Biloslavo et al., 2024; Ruokonen & Ritala, 2025).
Leadership practices support this shift: adaptive leaders create the conditions for experimentation and reflection rather than supplying answers in advance (Csaszar, 2024; A. Sriharan et al., 2024).
3. Bottom-Up, Not Top-Down
Traditional strategy development is typically top-down. However, research consistently shows that excluding employees from strategic sensemaking reduces engagement, innovation, and execution effectiveness (Berretta et al., 2023; Csaszar, 2024).
Collective intelligence research demonstrates that diverse groups outperform individuals on complex problem-solving tasks when participation is structured effectively. Those closest to customers, technologies, and operations often detect emerging risks and opportunities first.
Bottom-up strategic planning processes generate superior strategic outcomes by improving strategic sensing, strengthening implementation, and increasing organisational adaptability over time. By deliberately incorporating employees and stakeholders closest to customers, technologies, and operational realities, participative strategy processes widen the organisation’s information base and mobilise distributed expertise, leading to more accurate diagnosis of problems and higher-quality strategic decisions (Cai & Canales, 2022; Guarin et al., 2025; Iddrisu & Mohammed, 2024; Kim & Cho, 2023; Wang et al., 2022).
One well-studied method for enabling bottom-up strategic engagement is LEGO® Serious Play® (LSP). Grounded in constructionist learning theory, LSP enables participants to externalise thinking through model-building, surfacing tacit knowledge that is difficult to articulate verbally (Berretta et al., 2023; Hao et al., 2025). Empirical studies show that LSP enhances psychological safety, participation, and shared understanding of complex strategic challenges (Berretta et al., 2023; Hemmer et al., 2024).
Five Human Capabilities Strategy Still Depends On
1. Insight Generation
Insight involves restructuring the problem space itself rather than optimising within existing frames. This capability is quintessentially human: cognitive psychology distinguishes insight from analytic reasoning as sudden, non-linear, and often preceded by impasse (Csaszar, 2024; G. Orrù et al., 2023). AI can detect patterns; humans redefine problems. Humans use insight to create and rupture frames, and framing determines what is considered a problem, which options are visible, and which trade-offs are deemed acceptable. Research in cognitive psychology and strategy demonstrates that many strategic failures are not optimisation failures but framing failures, where organisations solve the wrong problem extremely well (Biloslavo et al., 2024; Csaszar, 2024).
2. Purpose Construction and Strategic Judgment
Beyond goal setting, humans play a central role in constructing purpose and exercising strategic judgment when objectives conflict or conditions shift. Purpose is not merely a motivational artefact but a cognitive organising principle that guides attention, prioritisation, and trade-off decisions under uncertainty. Research in strategy-as-practice shows that purpose shapes what decision-makers perceive as salient, legitimate, or actionable, particularly when data are incomplete or contradictory (Csaszar, 2024). Unlike algorithms, which optimise against predefined objectives, humans interpret which objectives should matter and when they should be revised.
Strategic judgment becomes especially critical in environments characterised by novelty and moral ambiguity, where no amount of historical data can resolve competing values or interests. Judgment involves integrating factual analysis with normative reasoning, contextual awareness, and accountability for consequences (Biloslavo et al., 2024). Empirical studies indicate that organisations that explicitly articulate purpose and empower leaders to exercise judgment, rather than enforcing rigid rule-following, demonstrate greater resilience and strategic coherence during periods of disruption (NIST, 2023; OECD, 2025b). Purpose construction and judgment therefore remain irreducibly human capabilities at the core of effective strategy.
3. Learning, Adaptation, and the Abandonment of Sunk Costs
Beyond iteration, learning and adaptation form a distinct human capability, grounded in the ability to abandon prior commitments. Cognitive and organisational research consistently shows that strategic failure is often driven not by lack of information but by escalation of commitment and sunk-cost fallacies: biases that cause decision-makers to defend past decisions rather than update them (M. Orrù et al., 2023). Humans, when supported by reflective practices and psychological safety, are uniquely capable of recognising when a strategy no longer fits reality and deliberately letting go.
Adaptive learning in strategy is not automatic; it requires sensemaking, emotional regulation, and identity flexibility, particularly when evidence threatens existing narratives or power structures (Berretta & Helfen, 2023). Studies of high-reliability organisations, including aerospace and healthcare systems, show that adaptive performance depends on leaders’ willingness to treat failure as diagnostic information rather than reputational threat (S. Sriharan et al., 2024). This capacity to learn, unlearn, and relearn—especially under pressure—cannot be delegated to algorithms, which lack both ownership of past commitments and responsibility for their consequences.
4. Ethical and Value-Based Judgment
Strategy involves normative choices about trade-offs, risk, and societal impact. While AI can support risk detection, accountability and ethical judgment remain irreducibly human (NIST, 2023).
5. Meaning-Making Under Uncertainty
In uncertain conditions, people seek coherence rather than certainty. Leaders play a central role in framing ambiguity and constructing shared meaning, a fundamentally social and narrative process (Berretta et al., 2023; Hao et al., 2025).
Conclusion
Contemporary strategy is no longer about predicting the future; it is about building the capacity to respond to it. Across psychology, leadership studies, and complexity science, the evidence converges on a clear conclusion: organisations perform best when strategy is purpose-driven, adaptive, and inclusive. By shifting from horizons to missions, from linear plans to iterative learning, and from top-down control to bottom-up engagement, leaders transform strategy from a static document into a living discipline. In a complex world, the most effective strategies are not the most detailed. They are the most alive.
If this article has sparked some new thinking, then check out this one: In Strategy It’s No Longer Optional to Have a Cybernetic Teammate.
Want to get certified in LEGO® Serious Play® or the NASA 4-Dimensional Leadership Program?
Our course details can be accessed here. If you have any questions, reach out to us by email at info@crazymightwork.com.
References
Benda, N. C., Das, L. T., & Abramson, E. L. (2021). Trust in AI: why we should be designing for appropriate reliance. Journal of the American Medical Informatics Association.
Berretta, E., & Helfen, M. (2023). Collective sensemaking and participation in strategy work. Organization Science.
Berretta, S., Tausch, A., Ontrup, G., Gilles, B., Peifer, C., & Kluge, A. (2023). Defining human–AI teaming the human-centered way: A scoping review and network analysis. Frontiers in Artificial Intelligence.
Biloslavo, R., Bagnoli, C., & Edgar, D. (2024). Strategy as a dynamic and adaptive capability in complex environments. Long Range Planning.
Biloslavo, R., Edgar, D., Aydin, E., & Bulut, C. (2024). Artificial intelligence (AI) and strategic planning process within VUCA environments: A research agenda and guidelines. Management Decision.
Buçinca, Z., Malaya, M. B., & Gajos, K. Z. (2021). To trust or to think: Cognitive forcing functions can reduce overreliance on AI in AI-assisted decision-making. Proceedings of the ACM on Human-Computer Interaction.
Cai, J., & Canales, J. I. (2022). Dual strategy process in open strategizing. Long Range Planning, 55(6), 102177.
Csaszar, F. A. (2024). Artificial intelligence and strategic decision-making: Evidence from entrepreneurs and investors.
Csaszar, F. A. (2024). Understanding strategic insight: Problem formulation, judgment, and decision quality. Strategic Management Journal.
Dell’Acqua, F., Ayoubi, C., Lifshitz-Assaf, H., Sadun, R., Mollick, E. R., Mollick, L., Han, Y., Goldman, J., Nair, H., Taub, S., & Lakhani, K. R. (2025). The Cybernetic Teammate: A Field Experiment on Generative AI Reshaping Teamwork and Expertise.
Finkenstadt, D. J. (2024). Contingency scenario planning using generative AI.
Guarin, A. D., Townsend, K., Wilkinson, A., & Edwards, M. (2025). Time to voice? A review and agenda for longitudinal employee voice research. Human Resource Management Review, 35(1), 101059.
Hao, X., Demir, E., & Eyers, D. (2025). Beyond human-in-the-loop: Sensemaking between artificial intelligence and human intelligence collaboration. Sustainable Futures.
Hemmer, P., Schemmer, M., Kühl, N., Vössing, M., & Satzger, G. (2024). Complementarity in human–AI collaboration: Concept, sources, and evidence.
Herath Pathirannehelage, S., Shrestha, Y. R., & von Krogh, G. (2024). Design principles for artificial intelligence-augmented decision making: An action design research study. European Journal of Information Systems.
Iddrisu, I., & Mohammed, B. (2024). Investigating the influence of employee voice on public sector performance: The mediating dynamics of organizational trust and culture. Social Sciences & Humanities Open, 10, 101096.
Kim, T., & Cho, W. (2023). Employee Voice Opportunities Enhance Organizational Performance When Faced With Competing Demands. Review of Public Personnel Administration, 44(4), 713-739.
Kruger, J., & Dunning, D. (1999). Unskilled and Unaware of It: How Difficulties in Recognizing One’s Own Incompetence Lead to Inflated Self-Assessments. Journal of Personality and Social Psychology, 77(6), 1121-1134.
National Institute of Standards and Technology (NIST). (2023). Artificial Intelligence Risk Management Framework (AI RMF 1.0).
Organisation for Economic Co-operation and Development (OECD). (2025a). AI in strategic foresight: Opportunities and challenges.
Organisation for Economic Co-operation and Development (OECD). (2025b). Strategic foresight and uncertainty in public and private organisations.
Orrù, G., Monaro, M., Conversano, C., Gemignani, A., & Sartori, G. (2023). Human-like problem-solving abilities in large language models: A study with ChatGPT. Frontiers in Artificial Intelligence.
Orrù, M., Gualandris, J., & Ventresca, M. J. (2023). Cognitive frames, problem reformulation, and strategic change. Organization Studies.
Ruokonen, M., & Ritala, P. (2025). Managing generative AI for strategic advantage. Research-Technology Management.
Shneiderman, B. (2020). Human-Centered Artificial Intelligence: Three Fresh Ideas. AIS Transactions on Human-Computer Interaction, 12(3), 109-124.
Sriharan, A., Sekercioglu, N., & Mitchell, C. (2024). Leadership for AI transformation in health care organization: Scoping review. Journal of Medical Internet Research.
Sriharan, S., Sinsky, C., & Patel, K. (2024). Learning from failure in high-reliability organisations. Academy of Management Perspectives.
Swaroop, S., Shakir, M., & Bernstein, M. S. (2025). Personalising AI assistance based on overreliance rate in AI-assisted decision making.
Wang, Q., Hou, H., & Li, Z. (2022). Participative Leadership: A Literature Review and Prospects for Future Research. Frontiers in Psychology, 13, 924357.
Want original, well-researched articles like this on a monthly basis? Sign up for our newsletter. Subscribe and share below!

