Amid the usual ceremonial praise of national security discourse, Alexis Bonnell’s authentic, top-down approach to AI strategy at the 2024 GovCIO AI Summit stands as a transformative blueprint.
By: Dr. Al Naqvi, CEO of American Institute of Artificial Intelligence
In a landscape where national security literature often resembles an endless loop of ceremonial praise, my approach stands as a deliberate deviation. I am no fan of the typical puff piece – a genre all too eager to exalt agencies and leaders without question or critical analysis. My resistance to such unexamined adulation is neither a personal quirk nor an overreaction to the status quo. Rather, it is a demand for rigor in a field where geopolitical stakes are high, and objectivity must reign supreme. In my work, objectivity is a mandate, not a preference. That’s why my analyses tend to challenge rather than cheer, cut through idealism to find realism, and stand resilient against the delicate fictions we’re so often sold.
In the world of AI transformation, genuine leadership has remained elusive – sorely needed but seldom found – and I had long been searching for it without success. That changed at the 2024 GovCIO AI Summit. Completely by chance, I encountered a government leader who broke through my usual skepticism with an authentic, compelling vision for AI. Her name is Alexis Bonnell (CIO of AFRL), and this article is not another entry in the national security praise parade. It’s a recognition of a truly worthy accomplishment, a vision that finally merits my vote of trust.
I arrived at the GovCIO AI Summit braced for the usual uninspired approach to AI – a parade of success stories, isolated use cases, tool showcases, and acquisition updates. This formulaic fare has become the default in agency and government contracting circles, where performance is too often gauged by tallying projects, amassing suppliers, or dutifully adhering to White House directives rather than fostering an authentic strategic vision. In a culture that confuses volume with value, my expectations for encountering a true, transformative approach to AI were low. But shocks, when they’re pleasant, are always welcome – like discovering a stock you forgot about has suddenly skyrocketed. Ms. Bonnell’s session was that kind of pleasant shock. But before I delve into her vision and the strength behind it, let me clarify what I mean by “strategic AI.”
For years, I’ve held firm to my belief that we’re not approaching the AI revolution in the right way. I’ve dissected this issue in articles, highlighted it in my talks, and even dedicated an entire book to it – At the Speed of Irrelevance. To put it simply, imagine someone asking, “What’s the Internet revolution all about?” and the answer comes back as, “It’s about building websites.” While websites are foundational, defining the Internet by them alone is a gross oversimplification. The Internet isn’t about isolated pages; it’s about connecting the world, transforming business, and empowering people across the globe to express ideas, conduct trade, and engage in the modern economy. Similarly, the AI revolution isn’t just about compiling “use cases.” That’s missing the bigger picture. AI’s potential lies in reshaping entire systems, rethinking strategy, and fundamentally altering how we understand and solve problems on a global scale.
An authentic AI vision rests on at least five interconnected pillars, beginning with AI Strategy and AI Tactics. AI Strategy is the structured path by which an organization defines how it will achieve and sustain competitive advantage through AI. In the RFP-driven realms of government contracting and agency operations, however, tactical tasks dominate the agenda, sidelining strategic and operational depth. Competitive advantage is often viewed as a commercial preoccupation, leaving agencies and departments out of this critical pursuit. But this omission is shortsighted. In a world where geopolitical adversaries are amassing unprecedented power, competitive advantage isn’t just a corporate concern; it’s a national imperative in both warfare and deterrence. The government may rightly demand that contractors articulate their competitive edge, but agencies should likewise question: what advantage does our agency bring to the global AI landscape?
In an era of rapidly advancing threats, our greatest asset will be a strategically guided, AI-enabled edge. Expecting a random assemblage of “use cases” to evolve into a coherent and durable strategic advantage is as improbable as expecting randomly scattered pixels to yield the Mona Lisa. Building a true AI strategy requires a deliberate, top-down approach that recognizes AI as a paradigm shift – not merely a tool, use case, or application. This shift redefines how all forms of work are accomplished, from creating new knowledge to executing physical tasks to managing back-office operations. At its core, an AI strategy is about fundamentally reimagining and transforming the nature of work itself.
AI Operationalization bridges the gap between strategic vision and actionable tasks, creating an essential link that transforms lofty plans into practical outcomes. Just as strategies without operational goals lack direction, tasks without a guiding strategy are meaningless. This principle, while foundational in Joint Forces (JF) doctrine, is often disregarded in AI implementation within agencies. Operationalization is not a haphazard collection of projects; it’s the construction of a layered, integrated operational framework. This framework serves as a scalable platform, allowing use cases, applications, and automation to coalesce into a cohesive whole.
Consider how an auto manufacturer develops a flexible production platform from which multiple vehicle models can emerge seamlessly – AI operationalization works the same way. It builds a dynamic infrastructure, purposefully designed to accommodate shifting priorities and new developments. By constructing an adaptable operational foundation, organizations can respond to emerging needs while preserving the integrity of their overarching strategy. In short, operationalization is about establishing a robust yet agile base from which AI’s potential can be fully realized, ensuring that strategy and execution remain in constant alignment.
AI Organizational Dynamics recognize that AI is far more than a mere technology; it represents a new class of “workers” integrated into workflows alongside human colleagues and other automated systems. These AI “workers” bring distinct attributes and operate in unique configurations, leading to an emerging type of organization that demands a shift in how we approach organizational structure and behavior. Rethinking traditional, human-centered organizational models in light of these new AI dynamics is not just forward-thinking – it’s essential.
Often, leaders hedge their AI rollouts with assurances that AI won’t lead to job displacement, but this can be a misleading and potentially harmful reassurance. A more realistic and responsible approach is to model AI’s impact on the organization comprehensively, identifying which roles and skill sets may be phased out by automation and which new roles AI will create. This requires an understanding of AI’s place within the organization, including how it reshapes work, reallocates human effort, and introduces new competencies. An organization’s success in an AI-driven future depends on its ability to understand and embrace these organizational dynamics, viewing AI as a true, integral part of its workforce.
AI Relativeness is a nuanced dimension of strategic AI, distinct yet interwoven with the pursuit of competitive advantage. While AI strategy charts the course to gain an edge, AI Relativeness defines what that edge actually means in a constantly shifting landscape. Relativeness acknowledges that competitive advantage in AI is not static; it is a moving target shaped by the actions of adversaries, the current technological frontier, available resources, the effectiveness of investments, and the capacity to set and recalibrate vision. In essence, AI Relativeness requires organizations to continuously assess and redefine their competitive standing, not just relative to internal goals but in response to an evolving global and technological context.
To navigate this dynamic, organizations must embrace a multifactor optimization approach, blending technological capabilities, strategic foresight, resource allocation, and talent development in a constant recalibration process. Success demands not only technical acuity but a disciplined, responsible vision – one that keeps pace with emerging threats, seizes technological opportunities, and aligns AI efforts with core values and long-term goals. This relentless pursuit of relevance and advantage is no simple task; it requires sustained human insight, ethical responsibility, and an unwavering commitment to adaptive, forward-looking strategies that ensure AI remains a powerful, purposeful tool in the organization’s arsenal.
AI Technological Trajectory reflects the explosive evolution of AI from traditional machine learning to today’s advanced architectures and beyond. Where once we had isolated algorithms, AI now boasts a sophisticated ecosystem: machine learning gave way to deep learning, which ushered in transformer architecture, revolutionizing multimodal and agent-based AI capabilities. Innovations like synaptic intelligence have enhanced explainability, reasoning engines now amplify the power of large language models, and efficient small language models (SLMs) reduce hardware demands, allowing AI to push boundaries further. This progress isn’t just technical; it calls for a new structure, a human-led order to harness its vast potential.
Yet, despite these advancements, many agencies have clung to rudimentary technologies – robotic process automation (RPA) in particular – as a substitute for genuine AI. RPA wasn’t embraced because it was visionary but because it was straightforward. This “easy-to-understand” approach led us to a strange halfway point: a horse attached to an auto frame, a technological compromise that ultimately constrained the future by limiting our imagination and willingness to embrace true AI capabilities. Instead of forging a high-impact government-industry collaboration, we settled for a convenient but uninspiring partnership, where RPA and other simplistic solutions were mistaken for the face of AI.
As we (thankfully) transition beyond the RPA era, we must also reckon with the superficiality of the “use cases and tools” approach to AI. This limited vision allowed traditional suppliers – many with minimal AI expertise – to enter the field under the guise of AI-ready providers. Suddenly, predictive analytics rebranded as “AI” was achieved by repurposing BI software, and data operations – recast as MLOps – became little more than data pipeline tasks. Agencies found themselves with vast amounts of data yet little insight, and contracts awarded to legacy suppliers created a high opportunity cost. Entrusted to define the government’s AI future, these suppliers delivered a limited version of AI capability that matched their own skillset, not the potential of the technology itself. Even as some suppliers attempted to mask their lack of AI sophistication, previous experience carried more weight than actual innovation.
To truly advance, we must adopt a fresh perspective – one that recognizes the art of the possible and refuses to settle for outdated solutions. Moving forward, the AI trajectory should embody the full spectrum of its power, shifting away from superficial rebranding to a deep, strategic commitment that propels us into a genuinely transformative AI future.
With that off my chest, you’ll understand why my search for genuine AI leaders felt like chasing a mirage. Session after session, conference after conference, year after year, my hopes for vision-led AI were met with a steady stream of disappointment. Slowly, my expectations sank to the point where I became little more than a bystander, a mindless cog in the machine. Even the hope of discovering a true leader faded so deeply it barely registered anymore. Then, unexpectedly, a spark reignited. As she began to speak, I felt the unmistakable thrill of something real. At long last, I had found it – a true vision for AI.
I could hardly believe what I’d heard. After the session, I stopped her in the hallway, needing direct confirmation. I asked her outright to verify her vision. Without hesitation, she did. I pulled out a notebook, sketched a rough diagram, and she didn’t just observe – she took my pen, added her own details, expanding on the concept with precision. In that moment, the verification was complete. She was authentic, real, and her vision was the genuine article. I couldn’t hold back; I told her just how deeply satisfying it was to finally encounter a true vision for AI.
So, what is her vision? Let me explain it in my own terms while honoring the depth of her thinking and her terminology. She envisions the future of AI as one where humans establish a profound relationship with knowledge – a relationship powered by RAG-centered intelligent automations. RAG, or Retrieval-Augmented Generation, retrieves relevant material from an organization’s proprietary data and supplies it to large language models (LLMs) as grounding context, enabling dynamic, tailored AI applications. In her vision, these RAG-based applications, along with other types of automations, form a network of capabilities designed and driven by innovators across the organization. Her framework encourages anyone with a passion for knowledge-sharing to create their own “knowledge center,” fostering a culture of discovery and growth.
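To make the RAG pattern concrete, here is a minimal, self-contained sketch of the retrieve-then-generate loop. It is illustrative only: a bag-of-words cosine similarity stands in for a real embedding model, the `build_prompt` function stands in for an actual LLM call, and the corpus sentences are hypothetical examples, not AFRL data.

```python
import math
import re
from collections import Counter

def embed(text):
    """Toy 'embedding': a word-count vector. A production system
    would use a neural embedding model instead."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, corpus, k=2):
    """Rank proprietary documents by similarity to the query
    and return the top k as grounding context."""
    q = embed(query)
    return sorted(corpus, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def build_prompt(query, docs):
    """Assemble the grounded prompt that a real system would
    send to a large language model."""
    context = "\n".join(docs)
    return f"Answer using only this context:\n{context}\n\nQ: {query}"

# Hypothetical document store for a single "knowledge center".
corpus = [
    "AFRL mission documents describe directed-energy research programs.",
    "Cafeteria menu: soup and sandwiches served daily.",
    "Knowledge centers let innovators share mission expertise.",
]

query = "Which documents cover mission research programs?"
docs = retrieve(query, corpus)
prompt = build_prompt(query, docs)
```

The point of the sketch is the shape of the pipeline, not the similarity function: because the model only sees the retrieved context, each knowledge center controls exactly which proprietary data grounds its answers.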
The essence of her vision lies in making this process not only purposeful but also enjoyable, while serving real, tangible needs. To build this “society of knowledge centers,” she identified four foundational pillars: empathy, intimacy, curiosity, and learning.
- Empathy: At the core of this framework is a profound respect for others and a commitment to understanding their needs.
- Intimacy: This involves recognizing shared needs and developing a true understanding of what is required to meet them effectively.
- Curiosity: A heightened awareness that drives not only internal exploration but also an outward understanding of adversarial strategies, plans, and capabilities.
- Learning: A state of constant growth, driven by an insatiable desire to acquire new skills and refine existing ones.
Her vision doesn’t just focus on the tools or technologies – it’s about transforming how knowledge is created, shared, and applied. By design, RAG protects data, as its capabilities are user-driven and inherently advance Zero Trust principles. This approach enables a safe, decentralized diffusion of knowledge-driven applications, creating a “society of applications” that can later be stitched together by a common agentic thread. In her future, the fusion of innovation, security, and human-centric design forms the foundation for AI’s transformational potential.
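One way to picture that “common agentic thread” is a router that dispatches each request to the knowledge center best suited to answer it. The sketch below is purely hypothetical: the center names, keywords, and keyword-matching dispatch are my own illustration, standing in for the LLM-based tool selection a real agent would use.

```python
# Hypothetical "society of applications": independent knowledge
# centers joined by one agentic router. Names and keywords are
# illustrative, not drawn from any real agency system.

def logistics_center(query):
    """Stand-in for a RAG application owned by a logistics team."""
    return f"[logistics] handling: {query}"

def hr_center(query):
    """Stand-in for a RAG application owned by an HR team."""
    return f"[hr] handling: {query}"

CENTERS = {
    "logistics": (("supply", "shipment", "inventory"), logistics_center),
    "hr": (("leave", "benefits", "hiring"), hr_center),
}

def route(query):
    """Keyword matching stands in for an LLM agent's tool choice:
    pick the first center whose keywords appear in the query."""
    text = query.lower()
    for name, (keywords, handler) in CENTERS.items():
        if any(k in text for k in keywords):
            return handler(query)
    return "[router] no knowledge center matched; escalate to a human"

reply = route("Where is the latest shipment?")
```

Each center stays independently owned and governed, which is what makes the decentralized diffusion safe; the router only adds the connective thread on top.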
Now, let me explain why Ms. Bonnell’s vision meets the rigorous standards of what a true AI vision and strategy should be. Each of these seven elements demonstrates the depth of her understanding and her commitment to transforming AI into a foundational force for innovation and empowerment. Let me expand on each point and bring the argument home.
- Top-Down Vision:
Ms. Bonnell doesn’t start with the fragmented bottom-up approach that defines many so-called AI strategies, which often fixate on use cases or tool-building. Instead, she envisions an enterprise-wide transformation where AI serves as a unifying force across all levels. This top-down perspective ensures that every piece of the organization contributes to a cohesive, strategic whole, rather than becoming siloed efforts with limited scope and impact. It’s about crafting a comprehensive framework, not patchwork solutions.
- Integration of Technology, Developers, and Users:
Unlike conventional approaches that separate technologists from end users, Ms. Bonnell’s vision fosters seamless integration. She emphasizes a collaborative ecosystem where developers and users are co-creators. This ensures that technology is not just built for users but evolves with them, driven by real-world needs and ideas. Her approach transforms users into innovators, aligning technical capabilities with human creativity and practicality.
- Unleashing Creativity:
Central to her vision is the empowerment of individuals to think, create, and solve. She doesn’t impose rigid structures but provides a fertile environment for creativity to flourish. By enabling people to envision solutions and solve problems autonomously, she democratizes AI, allowing innovation to emerge from any corner of the organization. This unleashing of creativity is the hallmark of a future-focused strategy that values human ingenuity as much as technological capability.
- Mission-Driven Standards:
Ms. Bonnell sets clear standards that align with mission goals, ensuring that innovation serves a larger purpose. This is critical in AI strategy, where the allure of shiny new technologies can often distract from organizational priorities. By grounding her vision in mission objectives, she creates a guiding north star that ensures every AI initiative contributes to measurable, impactful outcomes.
- A Society of Automation and Human Partners:
Her vision isn’t just about isolated systems or individual tools; it’s about creating a dynamic ecosystem of automation and human collaboration. This “society” recognizes the interdependence of humans and intelligent systems, fostering partnerships where automation enhances human capabilities rather than replacing them. It’s a holistic model that redefines work and relationships in the AI-driven world.
- Demand-Driven Accessibility:
Ms. Bonnell’s approach is inherently democratic. Applications are not hoarded by select groups or dictated from the top but made available to all legitimate users who need them. This demand-driven model ensures that AI is accessible and actionable, empowering individuals across the organization to leverage its power responsibly and effectively.
- Anticipating Phenological and Evolutionary Dynamics:
Perhaps most impressively, her model anticipates the dynamic nature of AI systems. By incorporating the principles of evolution and adaptation, she ensures that automations remain relevant, flexible, and capable of growing with changing needs. This forward-thinking approach makes her vision resilient in the face of technological advancements and shifts, positioning it to endure and thrive.
Ms. Bonnell’s vision is not a blueprint for incremental improvement – it’s a manifesto for transformation. By combining strategic depth, human empowerment, operational accessibility, and evolutionary foresight, she has crafted a model that transcends the limitations of today’s AI implementations. This is not the AI of “use cases” or “tools”; it is the AI of ecosystems, partnerships, and progress. Her vision doesn’t just meet the requirements of a true AI strategy – it sets the standard. It is a vision of what AI can and should be: a force that redefines the nature of work, augments human potential, and drives an organization, and indeed a nation, toward sustainable, strategic advantage.
My conversation with her ended on an unexpectedly lighthearted note – a high five, followed by her distributing her signature wristbands. It was a moment that felt symbolic, as though marking the beginning of something truly transformative. A vision had been born – not just an abstract idea, but a blueprint with the power to shape the future. If executed properly, this vision holds the potential to give the United States a profound competitive advantage over its adversaries, redefining how we harness AI for strategic, operational, and national progress.

