In Strategy, It’s No Longer Optional to Have a Cybernetic Teammate
Six Ways Human-AI Strategies Outperform Human-Only Equivalents
Whether you are working on a People and Culture, Marketing, Technology, or any other kind of strategy, it is no longer optional to recruit a cybernetic teammate. According to Dell’Acqua et al. (2025), human teams working with Artificial Intelligence (AI) outperform human-only teams.
In many leading organisations, AI is already shaping how strategic decisions are made across functions such as forecasting, risk analysis, and customer insight. This article explores six ways that AI goes beyond automation to deliver real strategic value by enhancing sense-making, foresight, and decision quality. From large-scale data integration and predictive analytics to market intelligence and scenario simulation, AI is now embedded in strategy work to detect patterns, anticipate change, and stress-test strategic assumptions.
In these systems, AI does not supplant human judgment; it amplifies it. Here are six concrete ways that AI can complement your strategic work:
1. Data Analysis & Strategic Insight (AI for large-scale sensemaking)
AI’s first strategic role is brutally simple: it finds patterns humans cannot see fast enough. Large-scale data integration and machine learning improve an organisation’s ability to detect weak signals, correlations, and structural shifts — a prerequisite for strategic sensemaking. This aligns with research showing that data-driven organisations outperform peers only when analytics are embedded into decision processes, not treated as after-the-fact reporting (F. A. Csaszar, 2024; Herath Pathirannehelage et al., 2024).
In practice:
– Netflix uses AI to analyse viewing behaviour at scale, shaping content investment decisions, portfolio balance, and regional strategy, not just recommendations.
– Unilever integrates AI across consumer insight, marketing effectiveness, and supply chain data to identify emerging demand patterns earlier than competitors.
What the research says:
Analytics improves strategic judgment when it narrows attention to what matters, rather than attempting to automate judgment itself (F. A. Csaszar, 2024; Herath Pathirannehelage et al., 2024).
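As a toy illustration of "narrowing attention to what matters", a first pattern-finding pass can be as small as ranking candidate signals by how strongly they co-move with an outcome. A minimal Python sketch follows; the signal names and numbers are invented for illustration, not drawn from any real dataset:

```python
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation between two equal-length series."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical weekly signals for one product line (illustrative numbers).
signals = {
    "search_volume":   [110, 118, 125, 140, 152, 160],
    "support_tickets": [30, 29, 31, 30, 28, 30],
    "trial_signups":   [55, 60, 71, 80, 92, 101],
}
revenue = [200, 210, 225, 250, 270, 290]

# Rank signals by how strongly they move with revenue, so analysts
# look first at the few that matter instead of at everything.
ranked = sorted(
    ((name, pearson(series, revenue)) for name, series in signals.items()),
    key=lambda pair: abs(pair[1]),
    reverse=True,
)
for name, r in ranked:
    print(f"{name}: r = {r:+.2f}")
```

In real deployments this step would run over far larger, messier data with proper controls for spurious correlation; the point here is only the shape of the technique — rank, then direct human attention.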
2. Predictive Analytics (AI as probabilistic foresight, not crystal ball)
Predictive models do not “know” the future, but they excel at estimating conditional likelihoods. This matters because strategic environments are precisely where human intuition struggles. Feedback is delayed, signals are noisy, and confidence often exceeds accuracy (Benda et al., 2021; Buçinca et al., 2021; Swaroop et al., 2025). AI improves foresight by making assumptions explicit and outcomes probabilistic.
In practice:
– Amazon uses predictive analytics to forecast demand, inform capital investment in logistics infrastructure, and optimise inventory positioning at scale.
– UPS applies AI forecasting to anticipate delivery volumes and routing complexity, shaping both operational execution and long-term network strategy.
What the research says:
Prediction improves decisions when leaders treat forecasts as inputs to judgment, not replacements for it (Benda et al., 2021; Buçinca et al., 2021; Swaroop et al., 2025).
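The shift from intuition to conditional likelihoods can be made concrete in a few lines. The sketch below estimates the probability of winning a bid conditional on whether a competitor entered, and compares it with the unconditional base rate; the deal history is entirely hypothetical:

```python
# Hypothetical bid history: (competitor_entered, we_won) for past deals.
history = [
    (True, False), (True, False), (True, True), (True, False),
    (False, True), (False, True), (False, False), (False, True),
    (True, True), (False, True),
]

def conditional_win_rate(records, competitor_entered):
    """Estimate P(win | competitor state) from past outcomes."""
    relevant = [won for entered, won in records if entered == competitor_entered]
    return sum(relevant) / len(relevant)

base_rate = sum(won for _, won in history) / len(history)
p_with = conditional_win_rate(history, competitor_entered=True)
p_without = conditional_win_rate(history, competitor_entered=False)

# Making the assumption explicit: the competitor's presence roughly
# halves our estimated win rate relative to bidding unopposed.
print(f"base rate {base_rate:.0%}, with competitor {p_with:.0%}, "
      f"without {p_without:.0%}")
```

Real predictive models condition on many variables at once, but the discipline is the same: state the condition, estimate the likelihood, and hand the number to a human as an input rather than a verdict.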
3. Market & Competitive Intelligence (AI as continuous environmental scanning)
Classic strategy research highlights the importance of sensing changes in the external environment: competitors, regulation, technology, and customer behaviour. AI dramatically expands this sensing capacity by continuously scanning and synthesising external signals.
This capability aligns with the “sensing” component of dynamic capabilities theory (F. A. Csaszar, 2024; Ruokonen & Ritala, 2025).
In practice:
– Procter & Gamble uses AI-driven market intelligence to track competitor activity, pricing, and sentiment across categories and geographies.
– Tesla monitors regulatory shifts, technology trajectories, and competitor moves globally to guide product and manufacturing strategy.
What the research says:
Competitive advantage increasingly comes from how quickly organisations detect and interpret change, not from static positioning alone (F. A. Csaszar, 2024; Ruokonen & Ritala, 2025).
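A drastically simplified version of continuous scanning is topic-frequency tracking: count mentions of watched topics across incoming text and flag the ones rising period over period. The snippets and topic list below are invented for illustration; production systems use far richer language models and sources:

```python
from collections import Counter

# Hypothetical scanning feed: snippets from news, filings, and reviews.
this_quarter = [
    "regulator opens consultation on ai transparency rules",
    "rival launches subscription tier with ai assistant",
    "customers complain about subscription price increase",
    "new ai assistant features announced by competitor",
]
last_quarter = [
    "rival expands retail footprint",
    "customers praise loyalty programme",
]

def topic_counts(snippets, topics):
    """Count whole-word mentions of each watched topic."""
    return Counter(t for s in snippets for t in topics if t in s.split())

topics = ["ai", "subscription", "regulator", "loyalty"]
now = topic_counts(this_quarter, topics)
before = topic_counts(last_quarter, topics)

# Topics rising quarter over quarter are candidate weak signals
# worth a human analyst's attention.
rising = [t for t in topics if now[t] > before[t]]
print(rising)
```

The value is not the counting itself but the cadence: the scan runs continuously, so a rising topic surfaces in days rather than at the next annual strategy review.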
4. Scenario Planning & Strategic Simulation (AI as a safe space for being wrong early)
Scenario planning exists because humans are systematically overconfident and prone to linear thinking. Research shows that exploring multiple plausible futures improves strategic learning and reduces surprise (Finkenstadt, 2024; OECD, 2025a). AI accelerates this process by generating scenarios, stress-testing assumptions, and simulating interactions across complex variables.
In practice:
– Shell has long used scenario planning, increasingly AI-enabled, to explore energy transition pathways and geopolitical uncertainty.
– The U.S. Department of Defense uses AI-driven simulations to evaluate strategic responses in highly uncertain, adversarial environments.
What the research says:
The value of scenarios lies not in prediction, but in expanding managerial cognition and preparedness (Finkenstadt, 2024; OECD, 2025a).
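At its simplest, simulating interactions across uncertain variables is a Monte Carlo exercise: draw each driver from a plausible range, combine them, and look at the distribution of outcomes rather than a single forecast. The driver ranges and baseline figures below are invented for illustration:

```python
import random

random.seed(7)  # reproducible illustration

def simulate_year():
    """One draw of next year's margin under three uncertain drivers
    (all ranges are illustrative assumptions)."""
    demand_growth = random.uniform(-0.05, 0.15)   # -5% to +15%
    cost_inflation = random.uniform(0.00, 0.10)   # 0% to +10%
    price_pressure = random.uniform(0.00, 0.08)   # up to -8% on price
    revenue = 100 * (1 + demand_growth) * (1 - price_pressure)
    costs = 70 * (1 + cost_inflation)
    return revenue - costs

outcomes = sorted(simulate_year() for _ in range(10_000))
downside = outcomes[len(outcomes) // 20]  # 5th percentile: stress case
median = outcomes[len(outcomes) // 2]
print(f"median margin: {median:.1f}, 5th-percentile margin: {downside:.1f}")
```

The question for the strategy team is then not "what will margin be?" but "can we live with the 5th-percentile world, and what would we do in it?" — exactly the being-wrong-early conversation scenario planning is for.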
5. Risk Assessment & Early Warning (AI as institutionalised paranoia)
AI is particularly effective at detecting the anomalies that precede major failures: financial irregularities, operational degradation, safety risks, and cyber threats. However, research and policy frameworks emphasise that risk detection must be paired with governance, transparency, and accountability, especially when AI informs strategic decisions (National Institute of Standards and Technology [NIST], 2023).
In practice:
– JPMorgan Chase uses AI to identify fraud patterns and systemic financial risks across global operations.
– Airbus applies AI for predictive maintenance and safety risk identification across aircraft fleets and production systems.
What the research says:
Responsible AI use requires lifecycle risk management, not just model accuracy (NIST, 2023).
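The core of many early-warning systems is a statistical anomaly rule: flag observations that sit implausibly far from the recent norm. A minimal z-score sketch, with illustrative transaction data containing one deliberate spike:

```python
from statistics import mean, stdev

def flag_anomalies(series, threshold=3.0):
    """Return indices of points more than `threshold` standard
    deviations from the series mean — a minimal early-warning rule."""
    mu, sigma = mean(series), stdev(series)
    return [i for i, x in enumerate(series) if abs(x - mu) > threshold * sigma]

# Hypothetical daily transaction counts with one sudden spike.
daily_volume = [120, 118, 123, 121, 119, 122, 480, 120, 121]
print(flag_anomalies(daily_volume, threshold=2.0))  # → [6]
```

Production systems replace the z-score with learned models and rolling baselines, but the governance point from the NIST framework applies either way: the flag starts an accountable human review; it does not end one.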
6. Decision Support (AI as co-pilot, not commander)
Perhaps the most consistent finding in human-AI research is this: AI systems work best when they support human decision-makers rather than replace them. Effective systems clarify options, surface trade-offs, and support recovery from error (Berretta et al., 2023; Shneiderman, 2020). Strategic accountability remains human by design.
In practice:
– Microsoft integrates AI into executive dashboards to support prioritisation, resource allocation, and strategic trade-offs.
– McKinsey & Company uses AI-enabled decision platforms to help leaders compare strategic options using structured evidence.
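A decision-support layer can be as simple as a weighted scoring matrix that ranks options while keeping the per-criterion trade-offs visible, so the ranking stays contestable rather than hidden inside a single number. The options, criteria, and weights below are invented for illustration (all criteria scored 1–10, higher is better):

```python
# Illustrative criteria weights and option scores (all assumptions).
weights = {"revenue_upside": 0.40, "deliverability": 0.35, "speed_to_value": 0.25}
options = {
    "build_in_house": {"revenue_upside": 8, "deliverability": 4, "speed_to_value": 3},
    "partner":        {"revenue_upside": 6, "deliverability": 7, "speed_to_value": 8},
    "acquire":        {"revenue_upside": 9, "deliverability": 3, "speed_to_value": 6},
}

def score(option):
    """Weighted sum across criteria."""
    return sum(weights[c] * v for c, v in option.items())

# Rank options, but print the criterion-level detail alongside the
# total so a decision-maker can challenge any individual judgment.
for name, opt in sorted(options.items(), key=lambda kv: score(kv[1]), reverse=True):
    detail = ", ".join(f"{c}={v}" for c, v in opt.items())
    print(f"{name}: {score(opt):.2f} ({detail})")
```

The design choice matters more than the arithmetic: by exposing the weights and the per-criterion scores, the tool invites the leadership team to argue with its inputs — which is precisely the co-pilot, not commander, posture.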
What the research says:
Human-centred AI improves decision quality when systems are transparent, contestable, and aligned with human authority (Berretta et al., 2023; Shneiderman, 2020).
AI does not replace strategy. It strengthens the conditions under which good strategy becomes possible: better sensing, richer foresight, disciplined exploration, and higher-quality decisions.
If this article has sparked some new thinking, check out this one on The Role of Humans in Strategy.
References
Benda, N. C., Das, L. T., & Abramson, E. L. (2021). Trust in AI: why we should be designing for appropriate reliance. Journal of the American Medical Informatics Association.
Berretta, E., & Helfen, M. (2023). Collective sensemaking and participation in strategy work. Organization Science.
Berretta, S., Tausch, A., Ontrup, G., Gilles, B., Peifer, C., & Kluge, A. (2023). Defining human–AI teaming the human-centered way: A scoping review and network analysis. Frontiers in Artificial Intelligence.
Biloslavo, R., Bagnoli, C., & Edgar, D. (2024). Strategy as a dynamic and adaptive capability in complex environments. Long Range Planning.
Biloslavo, R., Edgar, D., Aydin, E., & Bulut, C. (2024). Artificial intelligence (AI) and strategic planning process within VUCA environments: A research agenda and guidelines. Management Decision.
Buçinca, Z., Malaya, M. B., & Gajos, K. Z. (2021). To trust or to think: Cognitive forcing functions can reduce overreliance on AI in AI-assisted decision-making. Proceedings of the ACM on Human-Computer Interaction.
Cai, J., & Canales, J. I. (2022). Dual strategy process in open strategizing. Long Range Planning, 55(6), 102177.
Csaszar, F. A. (2024). Artificial intelligence and strategic decision-making: Evidence from entrepreneurs and investors.
Csaszar, F. A. (2024). Understanding strategic insight: Problem formulation, judgment, and decision quality. Strategic Management Journal.
Dell’Acqua, F., Ayoubi, C., Lifshitz-Assaf, H., Sadun, R., Mollick, E. R., Mollick, L., Han, Y., Goldman, J., Nair, H., Taub, S., & Lakhani, K. R. (2025). The Cybernetic Teammate: A Field Experiment on Generative AI Reshaping Teamwork and Expertise.
Finkenstadt, D. J. (2024). Contingency scenario planning using generative AI.
Guarin, A. D., Townsend, K., Wilkinson, A., & Edwards, M. (2025). Time to voice? A review and agenda for longitudinal employee voice research. Human Resource Management Review, 35(1), 101059.
Hao, X., Demir, E., & Eyers, D. (2025). Beyond human-in-the-loop: Sensemaking between artificial intelligence and human intelligence collaboration. Sustainable Futures.
Hemmer, P., Schemmer, M., Kühl, N., Vössing, M., & Satzger, G. (2024). Complementarity in human–AI collaboration: Concept, sources, and evidence.
Herath Pathirannehelage, S., Shrestha, Y. R., & von Krogh, G. (2024). Design principles for artificial intelligence-augmented decision making: An action design research study. European Journal of Information Systems.
Iddrisu, I., & Mohammed, B. (2024). Investigating the influence of employee voice on public sector performance: The mediating dynamics of organizational trust and culture. Social Sciences & Humanities Open, 10, 101096.
Kim, T., & Cho, W. (2023). Employee Voice Opportunities Enhance Organizational Performance When Faced With Competing Demands. Review of Public Personnel Administration, 44(4), 713-739.
Kruger, J., & Dunning, D. (1999). Unskilled and Unaware of It: How Difficulties in Recognizing One’s Own Incompetence Lead to Inflated Self-Assessments. Journal of Personality and Social Psychology, 77(6), 1121-1134.
National Institute of Standards and Technology. (2023). Artificial Intelligence Risk Management Framework (AI RMF 1.0).
Organisation for Economic Co-operation and Development. (2025a). AI in strategic foresight: Opportunities and challenges.
Organisation for Economic Co-operation and Development. (2025b). Strategic foresight and uncertainty in public and private organisations.
Orrù, G., Monaro, M., Conversano, C., Gemignani, A., & Sartori, G. (2023). Human-like problem-solving abilities in large language models: A study with ChatGPT. Frontiers in Artificial Intelligence.
Orrù, M., Gualandris, J., & Ventresca, M. J. (2023). Cognitive frames, problem reformulation, and strategic change. Organization Studies.
Ruokonen, M., & Ritala, P. (2025). Managing generative AI for strategic advantage. Research-Technology Management.
Shneiderman, B. (2020). Human-Centered Artificial Intelligence: Three Fresh Ideas. AIS Transactions on Human-Computer Interaction, 12(3), 109-124.
Sriharan, A., Sekercioglu, N., & Mitchell, C. (2024). Leadership for AI transformation in health care organization: Scoping review. Journal of Medical Internet Research.
Sriharan, S., Sinsky, C., & Patel, K. (2024). Learning from failure in high-reliability organisations. Academy of Management Perspectives.
Swaroop, S., Shakir, M., & Bernstein, M. S. (2025). Personalising AI assistance based on overreliance rate in AI-assisted decision making.
Wang, Q., Hou, H., & Li, Z. (2022). Participative Leadership: A Literature Review and Prospects for Future Research. Frontiers in Psychology, 13, 924357.