Early in paper #2 in this series, I mentioned that it would be the only paper in the third offset strategy (3OS) series with a table of contents. That turned out not to be true; this paper, while only about ⅓ the length of that one, still necessitates one, because changing the doctrine, policy, and many of the tactics, techniques and procedures (TTPs) for both the Department of Defense (DoD) and, in particular, the United States Air Force (USAF) is too much to cover well in a single narrative.

So, a warning up front: reading this entire thing in one sitting will probably mean seeing a few points made over and over (and over!) again, because I mention them in multiple sections. That's not an accident; it's by design, so that each of the fifteen sections in this paper can be read in a vacuum and still make sense—but I didn't feel like publishing fifteen separate blogs for part 6 of the 3OS series.


Executive Summary

The Department of Defense faces an urgent dilemma: adversaries are fielding adaptable, low-cost technologies faster than our acquisition system can respond. This paper argues for a fundamental shift in posture—away from exquisite, slow-moving programs toward software-defined, contractor-powered, and effects-driven force design. The thesis is simple: victory in the next conflict will not come from the most advanced single platform, but from the force that learns and adapts fastest.

Key Concepts:

  • Software-Defined Warfare: victory in future conflicts will be determined not by the superiority of individual platforms, but by the speed at which a force can learn, adapt, and deploy new capabilities. This requires a shift toward a software-defined, effects-driven force design.
  • Time as the decisive currency: shifting measures of effectiveness from platform counts to loop speed—time-to-patch, time-to-field, and time-to-data.
  • Information as a weapon: the U.S.'s strategic advantage lies in its information economy. Data, software, and networks should be treated as primary maneuver elements, not just as enablers for hardware.
  • Software as the arsenal: integrating the civilian technology base through continuous authority to operate (cATO) pipelines, the Modular Open Systems Approach, and reciprocity by design.
  • Mass and autonomy: scaling Collaborative Combat Aircraft and swarms of small unmanned aerial systems to overwhelm adversaries, guided by cost-per-effect doctrine.
  • Acquisition reform: a complete overhaul of the acquisition process to match the speed of software development, including widespread adoption of continuous Authority to Operate for rapid, secure fielding of new software and technologies.
  • Resilient networks: Agile Combat Employment that treats command-and-control and data survivability as its primary weapons system.
  • Organizational redesign: elevating the Defense Innovation Unit with its own Major Force Program, aligning US Cyber Command's mandate with portable funds, and embedding new billets (Integration & Interoperability Manager, Software Design & Development Supervisor, Contractor & Vendor Relations) as metabolic elements of future squadrons.
  • Civilian-military integration: the civilian software market is the U.S.'s true arsenal, demanding closer collaboration with contractors.

The takeaway: The Department of Defense must rewire its system economics—acquisition, budgeting, training, and operations—around speed, software, and survivability. Commanders must maneuver software like they maneuver aircraft or armor. Those who learn faster will win.


1) Operationalizing Information

The U.S. wins when we align doctrine and dollars to what our economy actually produces at scale: information, software, and networks—not just metal and composites. I’ve been building this case across Parts 1–5, but let’s state it plainly up front: the strategic advantage is no longer a single exquisite platform; it’s the institutional metabolism that turns data into decisions, code into combat power, and telemetry into faster learning than any adversary can match. That’s the 3OS translated into a software-first economy—policy and practice that behave like our best digital firms, not our slowest programs of record.[1],[2],[3],[4]

In Part 1, I argued that our force design still treats software as “enablement” around a hardware core. That lens is backwards. Data, models, and code are maneuver elements. They are taskable (“push this model to these squadrons by 1400”), targetable (the adversary will try to corrupt, exfiltrate, or deny them), and protectable with the same seriousness as fuel, munitions, and runways. When we elevate them to first-class operational objects, we get better options at lower cost and higher tempo. That reframe is already latent in our policy stack—DoD's Data Strategy, Zero Trust and Cybersecurity Framework (CSF) 2.0, and zero trust architecture (ZTA)/risk management framework (RMF) guidance—but we haven’t fully operationalized it at the unit and wing level.[3],[5],[6],[7] We need to view “data as ammunition,” which of course changes planning: you don’t just plan sorties; you plan data flows, model updates, and application programming interface (API) contracts as part of the scheme of maneuver.

From that premise, a few consequences fall out.

First, replace platform-centric planning with software-centric force design. Tactics and techniques live in code and telemetry instead of on a slide deck labeled “CONOPS.” It means you iterate. You observe a failure mode on Monday, adjust a classifier or control law on Tuesday, deploy to a canary group on Wednesday, and fly the TTP on Thursday—capturing real-world performance signals the entire time. That’s the DevSecOps playbook, not a metaphor: pipelines, tests, versioning, rollbacks, and automated compliance vaults. We already have the scaffolding—Software Modernization Strategy, DevSecOps Reference Design, and the Software Acquisition Pathway—but we’re underusing it as if it were a compliance exercise rather than the main effort.[4],[8],[9],[10] In practice, a “software-centric force design” means standing up a Software Bill of Tactics (SBOT) for each mission family and treating it like the technical order that it is—only faster (I’ll detail SBOT in later sections).

Second, push acquisition clocks to match DevSecOps cadence. In a white paper,[11] my co-authors and I contrasted RMF's governance strengths with hazard-based engineering like System-Theoretic Process Analysis (STPA) and the Defense Advanced Research Projects Agency's (DARPA's) Automated Rapid Certification of Software (ARCOS). The point wasn’t to pick a winner; it was to re-time the system. We need continuous delivery, continuous authority to operate (cATO), continuous test. We need faster tools to acquire dual-use technologies and onboard them in days, not months. That is not sloganeering; it’s an architecture and a set of authorities. Pre-approved pipelines with inherited controls. Automated evidence. Live risk scoring pinned to common vulnerabilities and exposures (CVEs), software bills of materials (SBOMs), and mission hazard analysis so authorizations are a stream, not a gate.[4],[8],[11],[12],[13],[14],[15],[16] When we do this right, the “decision to field” is often a decision to advance a progressive rollout percentage, not a two-year milestone review. Test is telemetry. Certification is continuous. The longer we wait to make this the default, the more we subsidize the attacker’s observe, orient, decide & act (OODA) loop.

Third, elevate cyberspace from incident response to persistent campaign. If you’ve read Part 2 and Part 5, you know where I land: our adversaries already operate as persistent actors shaping terrain before crisis—pre-positioning in critical infrastructure, living-off-the-land, and iterating their own playbooks against our seams.[17],[18],[19] Treating cyber as a ticketing queue “after an event” is like treating air defense as “we’ll scramble after the crater.” Campaigning in cyber and info space is not optional hygiene; it’s how you set theater conditions for every other domain. That means defend-forward with allies, hunt-forward as a normal muscle movement, and—crucially—closing the loop so those ops continuously harden the code and data pipelines that our warfighting depends on (we’ll come back to this under Joint All-Domain Command & Control (JADC2) and data contracts).

Fourth, make budget artifacts pay for usable code and telemetry—not milestone theater. We don’t get the behaviors we admire; we get the behaviors we budget. If planning, programming, budgeting & execution (PPBE) and major force programs (MFPs) reward PowerPoint maturity and paper risk burndown, we’ll get exactly that. The PPBE Commission already pointed to outcome-based measures; the Office of Management and Budget's (OMB's) A-11 and DoD Financial Management Regulation (FMR) give us the levers to translate delivered, running software—and retired tech debt—into real execution credit.[20],[21],[22] Concretely, that means adding budget-relevant, audit-defensible counters for: time-to-patch for CVE-listed vulnerabilities; time-to-field for middle tier acquisition (MTA)/urgent operational need (UON/JUON) pathways; time-to-decision (T2D) against JADC2 objectives; and cost-per-effect for collaborative combat aircraft (CCA)/small unmanned aerial system (sUAS) packages. When a program kills stale code or decommissions a dead interface, that should accrue the same reputational and financial wins as drawing down obsolete hardware. Deprecation is not failure—it’s vigor.

What does this look like at squadron scale? Imagine a fighting wing where every mission thread has:

  1. A declared data contract (publishers, subscribers, latency, service level objectives (SLOs), decision rights)
  2. A cATO pipeline with inherited controls to push model and software deltas on cadence
  3. A telemetry backbone where operational test & evaluation (OT&E) is not a separate phase but a standing fabric. Ops generate data → telemetry flows into the SBOT repo → tests run → a model patch or feature flag shifts behavior → new TTP emerges. Days, not program objective memoranda (POM) cycles. This is not sci-fi; it is ordinary in the best software shops and fully supported by existing DoD policy—if we operationalize it.[3],[23],[24],[25],[26],[27]
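The ops → telemetry → tests → rollout loop above can be sketched as a simple canary promotion gate. This is a minimal illustration, not any program of record: the field names (`track_continuity`) and threshold values are assumptions standing in for whatever the mission SLO actually specifies.

```python
from dataclasses import dataclass

@dataclass
class CanaryResult:
    """Telemetry summary from a canary group flying the new behavior."""
    sorties: int              # sorties flown with the feature flag on
    track_continuity: float   # fraction of time the target track was held
    rollbacks: int            # automatic in-flight rollbacks triggered

# Illustrative gate values; real numbers come from the mission SLO.
CANARY_MIN_SORTIES = 3
CANARY_THRESHOLD = 0.95

def promote(result: CanaryResult) -> str:
    """Decide whether a feature-flagged change advances, holds, or rolls back."""
    if result.rollbacks > 0:
        return "rollback"   # any in-flight rollback kills the release
    if result.sorties < CANARY_MIN_SORTIES:
        return "hold"       # not enough telemetry yet to decide
    if result.track_continuity >= CANARY_THRESHOLD:
        return "advance"    # widen the progressive rollout percentage
    return "hold"
```

The point of the sketch is that "the decision to field" becomes a function of telemetry, evaluated on every canary pass, rather than a milestone event.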

What changes for commanders? Two muscles:

  1. Ask for software effects like fires
  2. Treat APIs like terrain

A commander should be as comfortable saying “I need a model patch to tighten target classification thresholds in sector Bravo within 48 hours” as “fire for effect.” And staff should be able to publish an API fragmentary order (FRAGO) that adds a subscriber and changes a schema field with rollback baked in. That’s what modular open systems approach (MOSA)/future airborne capability environment (FACE)/universal command and control interface (UCI)/Command, Control, Communications, Computers, Cyber, Intelligence, Surveillance & Reconnaissance (C5ISR) & electronic warfare (EW)'s Modular Open Suite of Standards (CMOSS) are trying to enable at system scale; we need to normalize it at operational tempo.[28],[29],[30],[31] When the interfaces are the terrain, the team that can change the map fastest wins.

What changes for the authorizing official (AO)? We divorce the approval process for the authority to operate (ATO) from an uninvolved cyber professional who is removed from the tactical fight, and instead align it, like any other professional function, as a supporting role. The AO should provide the commander an assessment of the cyber risk for a given piece of software—whether bespoke custom-made software from a mission design series (MDS)-aligned software factory or dual-use software acquired through an MTA vehicle—and the commander, who actually has to do real mission risk assessment, should then weigh the risk of cyber vulnerability against the risk of mission failure from lack of software support. This is no different than how a comms officer in the 6 shop will advise on the use of frequency hopping vs. fixed frequencies for a mission and the relative risks of each. Currently, we give AOs—who have only the faintest idea of mission risk when they say no—absolute power over the employment of software. And saying no is something numerous channels overtly incentivize them to do.[11]

The man on the ground will always be the best ground commander. We owe it to them to feed them better data and interconnectivity with the autonomy around them. (Booz Allen Hamilton)

“But what about the big things?” Hardware still matters—airframes, munitions, depots—but their decisive edge now comes from the software and data woven through them. The platforms don’t go away; they get out-learned or out-updated. The 3OS vision only delivers if we bias the whole enterprise toward learning speed. That means two more institutional shifts that will echo through the rest of this series:

  1. Governance that accelerates. We will use RMF for what it’s great at (traceability, accountability), but shift assurance left into design safety (STPA/ARCOS) and right into runtime (live controls, kill switches). The cATO “by design” posture is a commander’s safety instrument, not a waiver path.[4],[8],[11],[12],[13],[14],[15],[16] Continuous RMF (cRMF) is not easier than RMF; it's much harder. But it increases warfighter lethality and it is a lot cheaper for the US taxpayer than what we do now. When serving the lethality of the force and the taxpayer can both be done better, we owe it to ourselves to do the hard work to implement the better solution.
  2. Campaigning that compounds. Cyber and information ops should continuously harden our SBOT and data contracts while raising adversary costs. Hunt-forward is not a press release; it’s an ingest that updates our models and signatures weekly.[18]

If this feels demanding, good—it is. But it is also coherent with where we already said we’re going in strategy documents. The gap is not intellectual; it’s operational. In Part 1 (“Redefining the Third Offset Strategy”), we argued for a software-first force that treats information, code, and networks as the primary sources of combat power. In Part 2 (“An Assortment of Problems”), we mapped the adversary set—Russia, China, Iran, North Korea, Venezuela, terrorists, and narco-cartels—and showed how each exploits hybrid tools (cyber, finance, information, proxies) to grind us down. In Part 3 (“Three Hammers”), we demonstrated how acquisition policies aren't just a delay, they are in fact a strategic risk to the security of the United States. In Part 4 (“Why a Peanut Butter Sandwich Is More Deadly Than a Nuclear Weapon”), we argued the entire strategy and tactics process needs to change. In Part 5 (“Tactical Cyber: Why the Model of Sacrificing All Victories for Strategic Illusion Never Works”), we shifted from ticket-driven incident response to persistent, allied campaigns. Section 1 of this paper is the bridge: put those behaviors at the center of doctrine and budget, not on the margins.

So the thesis is simple: operationalize information. Treat data, models, and code as maneuver elements. Replace platform-centric planning with software-centric force design. Retime acquisition to the pace of DevSecOps. Elevate cyberspace to a persistent campaign. Pay for outcomes measured in running software and telemetry.

If we do this, our airpower becomes fractal, our decision cycles compress, and our learning outruns adversaries who still believe victory is a line item of steel. If we do not, the fastest coder in the fight will write our future for us—and we will be left briefing PowerPoint to a war that has already moved on.

2) Doctrine & TTP: The Software-Centric Loop

Bryon Kroger and Enrique Oti famously opined when they ran Kessel Run that they intended for the Air Force to—and I’m paraphrasing—become a software organization that happens to fly planes.

There are myriad reasons why that is not going to happen—the biggest, based on the Kessel Run post-mortem, being that “the cavalry isn’t coming.” But that doesn’t mean TTP adaptation can’t work in circumstances where software-centric operations aren’t the norm. This is that playbook.

Our doctrine has to assume that software—not metal—is the fastest maneuver element. Now we must codify a loop where operations change code and code changes tactics—on purpose, on a clock, and at scale.

That loop is what we’re now standardizing.

The loop is simple to say and hard to live: ops → telemetry → code change → deploy → new TTP. The unit of work is no longer a platform upgrade or a yearly tactics manual—it’s a small, auditable change set to an interface, model, rule, or visualization that measurably improves the mission. Pipelines (DevSecOps), not PowerPoint, carry those changes forward.[4],[8],[10] Every sortie, exercise, and watch floor becomes a data-gathering event; every sprint becomes a tactics update; every release carries its own rollback and hazard analysis.

That means:

  • Telemetry by default. Instrument mission threads the way we instrument web services: golden signals, structured logs, and labeled data that survives contested links. No data, no test.
  • Trunk-based development with feature flags. New behaviors ride behind flags, can be canaried to one cell or one CCA element, and rolled back without drama.
  • Small diffs, documented effects. A commit message that says “re-tuned jammer geofence; +7% track continuity under global navigation satellite system (GNSS) spoofing” is a tactics entry.

This is not theory. We already run parts of it inside existing authorities; the work now is to make it doctrine.

This isn't the end goal; a Pathfinder that understands code is a game changer, but it's better to leverage the American Arsenal (Staff Sgt. Renee Seruntine/DVIDS)

From platform FRAGOs to API FRAGOs

Our FRAGOs still assume platforms. But effects now chain across APIs and data contracts, not just radios. Issue API FRAGOs: short orders that publish the interfaces and schemas others must honor to plug into a mission thread. The point is speed: when the schema is the order, the ecosystem can comply in hours, not weeks.

API FRAGO, example (abridged):

  • Thread: Targeting → CCA swarm → effects
  • Change: TargetMessage.v4 → v5 (adds EWConfidence, deprecates SensorID)
  • Publisher: C2 cell JADC2/Targeting@AOR-W
  • Subscribers (required): CCA Mission Computer, sUAS EW payloads
  • SLA: ≤250 ms intra-flight, ≤1.5 s to edge C2
  • Effective: D+5 1200Z; v4 sunset D+20 1200Z
  • Reference: MOSA/FACE/UCI/CMOSS profiles[28],[29],[30],[31]

An API FRAGO like this is a tactics update: it tells units how to fight together through interfaces they can automate. It also gives test and training a clear target—validate conformance, measure latency, and cross-check outcomes.
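A subscriber can honor an order like this with a small, testable shim at ingest. This is a sketch under stated assumptions: it assumes JSON-style dict messages with a `version` field, and the default `EWConfidence` value is invented for illustration—only the field names `EWConfidence` and `SensorID` come from the example FRAGO above.

```python
def upgrade_target_message(msg: dict) -> dict:
    """Upgrade a TargetMessage from v4 to v5 per the API FRAGO:
    adds EWConfidence (defaulted until the publisher emits it) and
    drops the deprecated SensorID field."""
    if msg.get("version") == 5:
        return msg  # already compliant; pass through untouched
    v5 = {k: v for k, v in msg.items() if k != "SensorID"}
    v5["version"] = 5
    v5.setdefault("EWConfidence", 0.0)  # unknown confidence until measured
    return v5
```

Because the shim is pure data-in, data-out, conformance can be verified automatically during the v4 sunset window—exactly the kind of check test and training should run against the FRAGO.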

cATO by design: guardrails at runtime

Continuous ATO is not a slogan; it’s an operational control system. We pre-approve pipelines, platforms, and patterns so units can ship on day one with inherited controls.[4],[8] RMF still governs, but we move the most important checks into runtime: dependency hygiene (CVE watch), SBOM drift, identity posture, and hazard-based monitors tied to mission outcomes.[8],[11],[12],[13],[14],[15],[16]

Three practical moves:

  1. Pipelines are the accreditation boundary. If your code flows through a known, signed pipeline with attested build steps, you inherit those controls by default. If that means 80% of the controls are from that pipeline, units focus on the 20% that’s mission-unique.
  2. Live risk dashboards. Tie CVE exposure, SBOM deltas, and telemetry-based hazards to an operational heat map. If a new CVE lights up a component in a mission thread, risk owners (those in ops, like commanders of maneuver elements on G-series orders, not just Chief Information Officers (CIOs) in cushy Pentagon seats) get the authority to pause or roll back.
  3. Red-team cadence as safety practice. The same way we do flight safety stand-downs, we run hazard drills against live stacks at a predictable tempo. ATO is now a verb, not a certificate.
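The second move—the live risk dashboard—reduces to a cross-reference: when a CVE names a component, which mission threads are carrying it right now? A minimal sketch, with all system, component, and thread names invented for illustration:

```python
# Toy SBOM inventory: system -> set of "component version" strings.
SBOMS = {
    "cca-mission-computer": {"libfoo 1.2", "gnss-parser 0.9"},
    "suas-ew-payload": {"gnss-parser 0.9", "meshcomms 2.1"},
}

# Mission threads and the systems each one depends on.
MISSION_THREADS = {
    "targeting-aor-w": ["cca-mission-computer", "suas-ew-payload"],
}

def affected_threads(cve_component: str) -> list[str]:
    """Return mission threads whose systems carry the vulnerable component,
    so the operational risk owner can decide to pause or roll back."""
    hot = {sys for sys, sbom in SBOMS.items() if cve_component in sbom}
    return [thread for thread, systems in MISSION_THREADS.items()
            if any(s in hot for s in systems)]
```

The heat map is just this query run continuously against the CVE feed; the pause/rollback authority it triggers belongs to the mission commander, per the text above.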

Bake JADC2 data contracts into TTPs

Today, our TTPs describe the “what” and “who” but hand-wave the data plumbing. That’s backwards. For any joint thread (ISR, fires, mobility, C2), TTPs must include a Data Contract Block:

  • Producers & frequency: who publishes which message at what rate
  • Subscribers & decisions: who consumes, to make which decisions
  • SLO/Service Level Agreement (SLA): latency, jitter, freshness, and loss budgets
  • Authority: who can change the schema and on what notice
  • Security posture: classification, ZTA policy, cross-domain rules

You can write this in one page—and it makes the difference between a “concept” and a working kill chain.[3],[23],[24] It also lets test and OT&E verify outcomes deterministically: if the SLO isn’t green, the tactic isn’t real.
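The "one page" claim is literal: a Data Contract Block is small enough to express as a record plus a green/red check. A minimal sketch, with field names and budget values chosen for illustration rather than taken from any standard:

```python
from dataclasses import dataclass

@dataclass
class DataContract:
    """One-page Data Contract Block for a mission thread (illustrative fields)."""
    producer: str                 # who publishes the message
    subscribers: tuple[str, ...]  # who consumes it, to make decisions
    max_latency_ms: float         # SLO: end-to-end latency budget
    max_staleness_s: float        # SLO: data freshness budget
    schema_authority: str         # who may change the schema, and on what notice

def slo_green(contract: DataContract,
              observed_latency_ms: float,
              observed_staleness_s: float) -> bool:
    """Deterministic OT&E check: if the SLO isn't green, the tactic isn't real."""
    return (observed_latency_ms <= contract.max_latency_ms
            and observed_staleness_s <= contract.max_staleness_s)
```

Writing the contract this way means the verification is a telemetry comparison, not a judgment call made in an after-action slide.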

Request software effects like fires

Leaders must get comfortable asking for software effects with the same precision as they request fires. Make Call-for-Software Effects (CSE) a standard report format—quick, unambiguous, and executable within hours:

CSE Nine-Line (sketch):

  1. Thread: (e.g., “CCA EW escort / AOR-W”)
  2. Observed issue: (“GNSS spoofing increased track dropouts >12%”)
  3. Desired effect: (“Hold ≥95% track continuity under spoofing”)
  4. Levers permitted: (model retune / threshold change / feature flag on / user interface (UI) tweak)
  5. Risk bounds: (no impact to fratricide guardrails; latency budget unchanged)
  6. Telemetry to collect: (list signals; label schema)
  7. Test gate: (what constitutes success; canary size; rollback trigger)
  8. SLA: (time to canary; time to theater)
  9. Authority: (mission commander / risk owner)

This is how we turn “I need help” into a change ticket that wins the sortie. It also generates clean artifacts for OT&E and ATO, closing the loop.
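Making the nine-line "quick, unambiguous, and executable" is easiest if it is structured data from the start. A sketch, assuming one free-text field per line of the format above (field names are mine, not an established report format):

```python
from dataclasses import dataclass, fields

@dataclass
class CSENineLine:
    """Call-for-Software-Effects nine-line: one field per line of the sketch."""
    thread: str                # 1. mission thread
    observed_issue: str        # 2. what went wrong, with numbers
    desired_effect: str        # 3. measurable outcome requested
    levers_permitted: str      # 4. retune / threshold / flag / UI tweak
    risk_bounds: str           # 5. guardrails that must not move
    telemetry_to_collect: str  # 6. signals and label schema
    test_gate: str             # 7. success criteria, canary size, rollback trigger
    sla: str                   # 8. time to canary, time to theater
    authority: str             # 9. mission commander / risk owner

def is_executable(cse: CSENineLine) -> bool:
    """A CSE becomes a change ticket only when all nine lines are filled in."""
    return all(str(getattr(cse, f.name)).strip() for f in fields(cse))
```

The validation rule is deliberately blunt: an empty line nine (authority) means nobody owns the risk, so the request cannot be actioned.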

Tie OT&E to telemetry (or don’t do it)

Every trial and exercise should be a data sprint that immediately yields:

  • A labeled dataset committed to the mission repository,
  • A merged change set (code, model, or interface), and
  • A TTP delta with measurable effects.[25],[26],[27]

If an event doesn’t produce those three, we should ask why we ran it. This is how we stop funding “demo theater” and start funding learning velocity.

A day-in-the-life example

A sUAS cell in "the western AOR" reports spoofing-driven drops on a maritime ISR thread (Part 2’s adversary playbook in action). They file a CSE. The software cell pulls last night’s telemetry, runs a targeted model retune, and ships behind a feature flag via an accredited pipeline (inheritance from either government owned or contracted cATO service providers). A canary on one flight proves the SLO; API FRAGO v5 goes out with a minor schema addition (EWConfidence). OT&E validates latency and continuity; the TTP entry is updated with the new tactic and rollback note. Total elapsed: 72 hours, not a POM cycle.

That’s the loop. Not hypothetical—just institutionalized.

What changes on Monday

To make this real without waiting on a new strategy document:

  • Publish the first three API FRAGOs for one mission thread per wing. Keep them tiny. Enforce them ruthlessly.
  • Stand up the risk dashboard tied to CVE and SBOM on that same thread. Give the mission commander pause/rollback authority. There are contractors already with ATOs and reciprocity that do this anyway.
  • Run a 30-day CSE sprint. Require every squadron to submit at least one CSE, however small. Measure time-to-canary and time-to-theater.
  • Tag your telemetry. Pick the five golden signals for the thread; standardize names and units; enforce in code reviews.

Do that once, end-to-end, and people will stop asking “what is JADC2?” because they will have used it.

How we’ll measure ourselves

Doctrine that doesn’t change budgets and behavior is just literature. For this loop, the scoreboard is blunt:

  • Time-to-canary (CSE filed → first live test)
  • Time-to-theater (CSE filed → wide release)
  • Percent of sorties with feature-flagged behavior (learning at the edge)
  • SLO attainment on data contracts (latency/freshness)
  • Defect escape rate (how often we roll back; trending down is good)
  • TTP churn (number of small, merged changes per quarter)

These metrics align directly with the references we already cite—DevSecOps pipelines, software modernization, cATO, and open standards[4],[8],[11],[28],[29],[30],[31]—and with the ethos of the series so far: move faster than the problem learns.
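Two of the scoreboard entries lend themselves to a direct computation from change-ticket timestamps. A minimal sketch—the record shape is an assumption, not a fielded reporting system:

```python
from datetime import datetime, timedelta
from statistics import median

def median_time_to_canary(events: list[tuple[datetime, datetime]]) -> timedelta:
    """Median elapsed time from CSE filed to first live canary,
    over a list of (filed, canaried) timestamp pairs."""
    return median(canaried - filed for filed, canaried in events)

def defect_escape_rate(wide_releases: int, rollbacks: int) -> float:
    """Fraction of wide releases later rolled back; trending down is good."""
    return rollbacks / wide_releases if wide_releases else 0.0
```

Median (rather than mean) keeps one pathological ticket from masking the typical tempo, which is the quantity the scoreboard is actually trying to drive down.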

The heart of this section is cultural, not technical: we’re giving commanders a new muscle memory. Ask for software effects. Have a staff that can write orders as interfaces. Treat risk as a live signal, not a binder. And insist that every operation makes the code—and therefore the tactic—better. That is how a software-first force fights.


3) Fractal Airpower & Cheap Mass (sUAS)

Mass wins—when mass can learn. The center of airpower is shifting from a handful of exquisite aircraft to teams of inexpensive, updatable uncrewed systems that saturate sensors, complicate targeting, and trade software speed for hardware cost. Part 2 showed adversaries iterating cheap threats at scale. Part 3 examined how the current structure of software delivery is so bad that our premier systems can't communicate with each other. This section turns that into force design: fractal formations of sUAS at the edge, commanded by humans and CCAs, stitched together by open standards, and defended by a living counter-counter-UAS playbook.

Why cheap mass, now

A million dollars of software updates across ten thousand airframes can out-pace a billion-dollar platform that can’t change. Attritable sUAS and CCA are already pointing the way: CCAs shift risk and add reach; sUAS give us density, deception, and strike opportunities that punish any adversary who presents fixed, expensive targets.[32],[33] Programs like Golden Horde (networked weapons behaviors) and Skyborg (autonomy core) were interesting experiments that proved we can compose effects from modular autonomy and comms, though given their results and stewardship, the program management of both can only be considered a failure.[34],[35] The logistics tail is catching up too: AFWERX's Agility Prime showed how to partner with commercial supply chains when it helps us move parts, batteries, and airworthiness faster,[36] albeit that effort was more about Great Power competition.[37] The thesis is supported by evidence well beyond these US-based examples: scale beats exquisite when the code moves faster than the counter. The most obvious example is Operation Spider's Web, where Ukraine inflicted massive asymmetric financial—and strategic—damage on Russia through the use of sUAS.[38]

These drones, costing $600–1,000, successfully struck aircraft such as the Tu-95MS and Tu-22M3—bombers worth billions that Russia uses to launch Kh-101 and Kh-22 missiles, respectively. - Kateryna Bondar

This is just commercial with open source software for fun. Forget Ukraine's Spider Web; what autonomy will be able to do at scale is a whole order of magnitude scarier. (Intel)

Fractal formations: team-of-teams by design

Build the force like a fractal—repeatable cells that roll up cleanly:

  • Element (x4 sUAS): one sensor leader, one EW/disruption, one decoy, one shooter. Or any such combination as needed by the mission. Mesh by default; alt-nav and peer-recovery baked in.
  • Flight (x4 elements): adds a CCA or manned teammate as the orchestrator and high-power relay.
  • Package (x4 flights): composes mission threads (ISR → target → effects → battle damage assessment (BDA)) through published APIs.

This isn’t just formation math; it’s an API contract. The interfaces—MOSA, FACE, UCI, CMOSS—are the doctrine for how teams discover, authenticate, exchange, and act. If a vendor can’t emit/ingest the message set with the right SLOs, they’re not on the team.[28],[29],[30],[31] Keep the schemas small and versioned, and you can shift behaviors across thousands of airframes overnight. There’s already an app for that (Fleetforge): fully hardware-agnostic, with unlimited government rights, delivered as a Platform as a Service (PaaS) with an ATO and continuous delivery on commercial cloud.
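The roll-up math in the fractal above is worth making explicit, since it sets the day-zero kit's scale. A sketch assuming one orchestrator (CCA or manned teammate) per flight, as described:

```python
# Fractal composition constants, straight from the formation description.
SUAS_PER_ELEMENT = 4     # sensor lead, EW/disruption, decoy, shooter (or mission mix)
ELEMENTS_PER_FLIGHT = 4
FLIGHTS_PER_PACKAGE = 4

def package_size() -> dict:
    """Roll the fractal up: sUAS count plus one orchestrator per flight."""
    suas = SUAS_PER_ELEMENT * ELEMENTS_PER_FLIGHT * FLIGHTS_PER_PACKAGE
    orchestrators = FLIGHTS_PER_PACKAGE  # one CCA or manned teammate each
    return {
        "suas": suas,
        "orchestrators": orchestrators,
        "total_airframes": suas + orchestrators,
    }
```

Sixty-four sUAS and four orchestrators per package is the scale at which "shift behaviors across thousands of airframes overnight" stops being rhetoric: a few dozen packages is already a four-digit fleet sharing one versioned schema.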

A day-zero kit each unit needs

  • Mission schemas and profiles (FACE/UCI/CMOSS) pinned to a version.
  • Feature-flagged autonomy behaviors (approach, jam, feint, swarm split) with guardrails.
  • Telemetry labels (golden signals) identical from sUAS to CCA, so learning scales.
  • Canary tools to light up one element, then one flight, before pushing package-wide.

Counter-counter-UAS: plan to be jammed, spoofed, and hunted

The adversary gets a vote—and a jammer. We have to assume GNSS spoofing, uplink denial, and layered counter UAS (C-UAS) from day one. So we pre-bake resilience:

  • EW hardening: frequency agility, low-probability comms modes, autonomy floors that keep formations useful when blind.
  • Directed-energy integration: treat the US Air Force's (USAF's) tactical high-power operational responder (THOR)—a high power microwave (HPM) developed by the Air Force Research Laboratory (AFRL)—the US Navy's (USN's) high energy laser with integrated optical-dazzler and surveillance (HELIOS), and the US Army's (USA's) directed energy (DE) maneuver short-range air defense (M-SHORAD) as friendly fires, with published kill boxes and hold-fire logic so our swarms don’t ride into our own beams.[39],[40],[41]
  • Deception as a TTP: dedicated decoy elements that present plausible signatures to burn the enemy’s magazine and decision time.
  • Firmware agility: when they land a new exploit, we rotate keys, patch firmware, and swap behaviors inside days (if not hours), not quarters (or often years)—because the kill chain is now also a patch chain.[42],[43],[44]

Make this muscle memory: every C-UAS discovery yields a counter-counter update—a tiny change set to comms parameters, geofences, or autonomy thresholds—pushed through the same pipelines that carry software to jets.

Supply chains and firmware are lines of communication

Cheap mass fails if our parts and code are compromised. Treat supply chain and firmware provenance like fuel lines:

  • Zero Trust for control links and ground stacks (identity, micro-segmentation, continuous posture checks).[7]
  • Secure Software Development Framework (SSDF) processes for every vendor touching flight code or ground control station (GCS) binaries; no exceptions for “small teams.”[45] This only applies to software factory work (whether executed by government or contractor), not to dual-use technologies that go through the contracted rapid RMF rigor.
  • SBOM at the edge (signed, queryable), plus an automated CVE watch that flags components in live mission threads when a vulnerability is exposed.[46],[47]
  • Provisioning hygiene: hardware roots of trust, per-airframe credentials, revocation that works when disconnected.
  • Spare-part discipline: motors, props, batteries treated as tracked configuration items, even if they are fully expendable. If a lot goes bad, we know which thousand airframes to pull.

This is dull until it saves a wing. It will.

Culture: permission to iterate in public

We don’t break cultural gravity with memos; we break it with visible wins. Pair that with top-cover pathways that pull quantity through the system: the Rapid Defense Experimentation Reserve (RDER) to stress test promising tech in real mission threads, and the Defense Innovation Unit (DIU) to scale what works with production contracts instead of indefinite pilots.[48],[49] Tie both to a clear message: we’re not buying “a drone,” we’re buying a learning curve.

When the author of this paper was working for AFWERX, he made this video[50]—apologies for the abrupt ending without a nice cherry on top—but it summed up the realities of near-peer use of sUAS and the use of proxies to bleed the DoD's coffers dry. If anything, the video was a PowerPoint + Artificial Intelligence (AI) reflection of the effort to tell the story of paper #3 in this series. While the video pre-dates that blog, the author was active duty at the time and trying to use it to tell stories inside the Pentagon. Those efforts failed, but with things like Operation Spider Web[38] and the $8.5t (trillion with a T)[51] wasted on the Global War on Terror (GWOT) both now reality, it turns out the video was accurate all along. Now that the author is more concerned with getting this information out en masse than with pointing it at terrible leadership at former commands, it is simply hosted here for the public's consumption.

Tactics you can fly this quarter

  • Edge-heavy ISR: sUAS elements create a moving picket line with local fusion; CCAs sprint to exploit contacts. Publish the ISR schema, enforce the SLO, and you’ve got a living net.
  • Magazine depletion: decoy elements advertise appetizing signatures to pull enemy interceptors or C-UAS shots, while shooters trail at offset angles. Count cost-per-shot—we want them firing dollars at our pennies.
  • Swarm escort: sUAS throw noise and false paths in front of CCAs during ingress; if EW bites, autonomy floor switches the swarm to pre-coordinated dead-reckoning patterns until the link returns. The pre-coordination can be constantly updated by AI at the edge until link failure, making enemy prediction modeling exponentially more difficult.
  • Pop-up strike: cached mission packages (maps/models) live on edge nodes; a feature flag unlocks a new behavior for a specific target set without touching the rest of the stack.

Each tactic is a small interface + small behavior. That’s why it scales.
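The "publish the schema, enforce the SLO" idea from the first tactic fits in a few lines of Python; the field names and thresholds below are illustrative assumptions, not a fielded schema.

```python
from dataclasses import dataclass

# Illustrative ISR track message plus its SLO check; a real schema
# would be richer, but the enforcement logic stays this small.
@dataclass
class IsrTrack:
    track_id: str
    observed_at: float   # sensor timestamp (epoch seconds)
    published_at: float  # when it hit the shared data fabric

SLO_LATENCY_S = 5.0     # observe -> publish budget (assumed)
SLO_FRESHNESS_S = 30.0  # max age before consumers must treat it stale

def slo_status(track, now):
    latency = track.published_at - track.observed_at
    age = now - track.published_at
    return {"latency_ok": latency <= SLO_LATENCY_S,
            "fresh": age <= SLO_FRESHNESS_S}

t = IsrTrack("T-042", observed_at=100.0, published_at=103.5)
print(slo_status(t, now=120.0))  # {'latency_ok': True, 'fresh': True}
```

Any consumer that checks `slo_status` before acting is enforcing the contract locally, which is what makes the picket line a "living net" instead of a stale plot.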

Training and OT&E: rehearse the interference

You fight how you test. OT&E must simulate the ugly: GNSS lies, satellite communications (SATCOM) dropouts, spectrum crowding, adversary spoof farms, and blue-force laser keep-out zones. Make every event yield:

  1. a labeled dataset
  2. a merged change set
  3. an updated TTP with measured effect
If a trial doesn’t produce those, we ran a pageant, not a practice.

Metrics that matter

Doctrine is what you resource. For cheap mass, the scoreboard is ruthless and public:

  • Cost-per-effect (including spares & comms) vs. the enemy’s cost-per-shot.
  • Time-to-patch (firmware or autonomy) from detection to fielding.
  • SLO attainment on ISR/targeting schemas (latency, freshness, continuity).
  • Attrition tolerance (fraction of mission completed under x% losses).
  • Inventory churn (how many airframes or payloads we retire intentionally each quarter to keep the curve hot).
  • C-UAS burn ratio (enemy shots per blue sUAS lost).[42],[43],[44]

If these numbers move the right way, you’re winning—even before the headlines catch up.
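As a toy illustration of the scoreboard arithmetic (every number below is invented; only the ratios matter), cost-per-effect and the C-UAS burn ratio are one-liners once the telemetry exists:

```python
# Scoreboard arithmetic with invented inputs.
def cost_per_effect(airframe, spares, comms, effects_achieved):
    return (airframe + spares + comms) / effects_achieved

def burn_ratio(enemy_shots_fired, blue_suas_lost):
    # >1.0 means they are spending more shots than we are losing airframes.
    return enemy_shots_fired / max(blue_suas_lost, 1)

blue = cost_per_effect(airframe=8_000, spares=1_500, comms=500,
                       effects_achieved=4)            # $2,500 per effect
enemy_shot_cost = 120_000                             # e.g., one interceptor

print(f"blue cost-per-effect: ${blue:,.0f}")          # $2,500
print(f"cost exchange: {enemy_shot_cost / blue:.0f}:1 in our favor")
print(f"C-UAS burn ratio: {burn_ratio(37, 12):.1f}")
```

The hard part is not the math; it is agreeing on what counts as an "effect" and making the inputs flow from telemetry rather than staff estimates.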

What changes on Monday

  • Pick one thread (maritime ISR, border ISR, or base defense) and publish the v1 schema with SLOs. Enforce it across one wing.
  • Stand up a sUAS cell with four elements and a CCA partner; give them a 30-day CSE sprint: one change per week, measured in the scoreboard above.
  • Pre-wire directed energy into blue rules of engagement (ROE) and TTPs: THOR/HELIOS/DE M-SHORAD as planned effects with keep-out logic.[39],[40],[41]
  • Turn on SBOM/CVE alarms for the sUAS/GCS stack; empower the mission commander to roll a canary when something lights up.[7],[46],[47]
  • Buy for learning, not perfection: use DIU/RDER lanes to contract batches of 100–1000, not showcases of 10; plan for >20% to be retired in 12 months and for vendors to rapidly swap based on lowest total cost of ownership. Prevent vendor lock even if it means more work for program managers (PMs) and contracting officers (KOs) at the Program Executive Office (PEO).[48],[49]
  • Brief the wing on failure math: we’re going to lose airframes and win campaigns. That’s the trade.

Cheap mass isn’t about throwing bodies at problems; it’s about throwing brains at scale—software brains that update faster than the adversary can target. Build the force fractally. Make the interfaces the order. Normalize directed energy and deception as routine teammates. Treat supply chain as a frontline. And then measure what matters: how quickly the swarm learns.

That’s airpower in an information-first fight.

4) DIME & Hybrid Warfare Strategy (Exploit Their Vulnerabilities)

Hybrid war is a campaign of campaigns. It’s not one silver bullet, it’s synchronized Diplomatic, Informational, Military, and Economic (DIME) threads that create compounding pressure tied to commander’s intent. We already declared this direction in the National Security Strategy and National Defense Strategy; the gap isn’t concepts, it’s cadence.[1],[2] Our Cyber Strategy adds the missing verb—persistent—and names the logic: shape, contest, and impose costs every day, not just during “incidents.”[17] This section turns that policy into a practical playbook: map the adversary’s load-bearing dependencies, apply legal/financial/info/cyber levers in parallel, and keep the costs on.

Target their load-bearing beams

Russia: energy leverage. Their power projection rides on hydrocarbons and the ability to coerce via supply. The European Union's (EU's) REPowerEU moves and the wider European energy diversification agenda showed how to turn that lever back on Moscow—re-route flows, blunt pricing power, and shrink the coercion window.[52],[53],[54] Make this structural: treat liquefied natural gas (LNG) contracts, pipeline chokepoints, and grid interconnects as operational terrain; bake energy stress tests into wargames and diplomacy so Moscow’s best play carries political risk every time.

Russia's pipeline network is vast and potentially prone to cyber attack. Or so it could be... (BBC)

The final paragraph in the Russian section of paper #2, published in August of 2023 read the following:

The US DoD cannot afford to wait until Russia potentially collapses demographically or economically as Russia is already well aware of these shortfalls and has pivoted to exploit US philosophies, doctrines and policies as a weakness in hybrid war. Russia cannot wait for the turmoil caused by the 2024 election season to come fast enough and use their hybrid tools to potentially alleviate the DIME pressures. Quite to the contrary, the US must continue to ratchet up the pressure until Russia doesn't just capitulate in Ukraine, but is deterred from destabilizing western interests in perpetuity. Russia's aims don't seek to make the world a better place, but rather just to make a select group of Russians even richer than they already are through the use of deceit and death. Until that model is destroyed, Russia's nationalistic demise must remain a priority.

None of that has changed.

PRC: manufacturing + capital pipelines. Their advantage is scale (manufacturing) and reach (capital/tech acquisition). We don’t beat scale with rhetoric—we beat it with guardrails and alternatives. Tighten advanced computing export controls to slow military-use tech transfer;[55] use Committee on Foreign Investment in the United States (CFIUS) authorities to shape the inbound/outbound capital that packages intellectual property (IP) for re-export.[56] Pair the stick with a carrot: onshore/ally-shore where it matters and standardize data contracts so coalition industry can plug in without bespoke integration costs (the doctrine from Section 2’s API FRAGOs). The goal isn’t autarky; it’s selective friction in the parts of the stack that convert to military advantage fast. China has a massive number of internal problems that will eventually force positive change. In the meantime, these measures keep China from leveraging its current advantages, because the costs we impose will always outweigh the gains of any rational move.

Iran & North Korea: sanctions evasion and cyber financing. Both regimes use cyber to raise funds and bypass controls; our lever is financial plumbing discipline plus consequences that stick. Treat the Office of Foreign Assets Control's (OFAC's) ransomware guidance as a standing order: push compliance and transparency through exchanges, insurers, incident response (IR) firms, and managed service providers (MSPs) so the easy off-ramps close.[57] Combine that with public attributions and hunt-forward finds (see below) as well as Black List Letters of Marque 2.0 activities (see below) to keep their cost of doing business rising.

Keep “hack the voter” on the board

We learned the hard way that targeting the voter and the attention algorithms is cheaper than targeting the ballot box.[58],[59],[60],[61],[62],[63] In Part 2, we walked through how Russia and others pair narratives with cyber theft/leaks to generate cycles of outrage on platform rails; in Part 5, we argued for moving from ticket-clearing to campaigning with allies. Keep that energy: make defend forward the default posture—shape the environment before the news cycle starts.[18] That means:

  • Pre-bunk, not just debunk. Build content libraries and media partnerships that explain the playbook before it runs.
  • Authenticity infrastructure. Verify provenance at machine speed for high-risk narratives (e.g., deepfakes tied to election timing).
  • Information Operations (IO) + cyber pairing. When theft plus leak is the pattern, our counter is resilience ops (reduce the leak’s half-life) plus counter-exposure (burn the adversary’s TTPs in public to degrade future effect).
I'm sure you did... But did you know what you were doing? (Bob Foran)

Algorithms are terrain—seize the choke points

Treat platform ranking, recommendation, and ad delivery as maneuver terrain. You don’t control it, but you can shape it:

  • Truthful amplification: pack authoritative narratives into formats the ranking systems reward (consistency, velocity, interaction).
  • Latency as a weapon: use pre-approved message kits so surrogates can flood the zone in minutes during a crisis, not hours.
  • Friction to malign ops: coordinate with platforms on rapid throttles for known playbooks (coordinated inauthentic behavior, boosted hacked materials) while staying inside U.S. speech constraints.[62],[63]

The test isn’t vibes; it’s reach vs. reach: how fast we get authoritative context in front of the same audiences the adversary is buying or botting.

Pre-position Hunt-Forward teams with allies

US Cyber Command's (CYBERCOM's) hunt-forward operations are the most successful “small footprint, high leverage” tool we have: forward teams gain TTPs, indicators, and tradecraft straight from adversary networks, and share them back into U.S./ally defenders at speed.[65] Institutionalize the rhythm:

  • Standing invitations with priority partners tied to election cycles, energy events, or major exercises.
  • Telemetry return as a deliverable: TTP packages published in hours to CVE/SBOM pipelines, not white papers months later. We'll expand this more in section 12 of this paper.
  • Reciprocity: allies get early warning and tooling; we get ground truth that makes our defend forward posture real.[18]

Build and use a DIME Tasking Order (DTO)

Treat DIME like air tasking: a weekly DTO that aligns diplomatic asks, information ops, military actions, and economic/legal moves to a single commander’s intent. Understand that while the President and Congress ultimately control that intent, the Unified Combatant Commanders (UCCs) are already "moving and shaking" across DIME, and need to do significantly more. A DTO page might look like this:

  • Intent: Raise the marginal cost of Russian energy coercion through Q3 while preserving allied supply resilience.
  • Diplomatic: EU energy consultations + targeted assistance for interconnect upgrades.[53]
  • Informational: pre-bunk content series on energy blackmail tactics; coordinate release windows with allies.
  • Military/Cyber: hunt-forward on energy sector partners; publish hard-won TTPs into CVE-driven patching drills.[17],[65]
  • Economic/Legal: enforce price-cap compliance; update export control frequently asked questions (FAQs); CFIUS signaling on sensitive deals.[55],[56]

The DTO is how we replace “everyone doing good things” with effects that stack.
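A DTO page is just structured data, which is what makes a weekly cadence cheap to automate. Here is a hypothetical sketch of its shape; the fields mirror the example page above and are illustrative only.

```python
from dataclasses import dataclass, field

# Hypothetical shape of a weekly DIME Tasking Order (DTO) page.
@dataclass
class DimeTaskingOrder:
    week: str
    intent: str
    diplomatic: list = field(default_factory=list)
    informational: list = field(default_factory=list)
    military_cyber: list = field(default_factory=list)
    economic_legal: list = field(default_factory=list)

    def lanes_covered(self):
        lanes = (self.diplomatic, self.informational,
                 self.military_cyber, self.economic_legal)
        return sum(1 for lane in lanes if lane)

dto = DimeTaskingOrder(
    week="2025-W31",
    intent="Raise the marginal cost of energy coercion through Q3",
    diplomatic=["EU energy consultations"],
    informational=["Pre-bunk series on energy blackmail tactics"],
    military_cyber=["Hunt-forward with an energy-sector partner"],
    economic_legal=["Price-cap compliance enforcement"],
)
print(dto.lanes_covered())  # 4 -> every DIME thread stacked on one intent
```

A lane count below four on a given week is the cheapest possible flag that "everyone doing good things" has crept back in.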

Codify a deterrence ladder for cyber + info ops

Deterrence here is not a one-shot threat; it’s a transparent ladder of consequences that moves across domains and stays below unintended escalation. Anchor it in the DoD Cyber Strategy (persistent engagement) and the earlier 2018 cyber strategy (defend forward).[17],[19] A usable ladder:

  1. Silent friction: blocklists, takedowns, and behind-the-scenes demarches.
  2. Attribution + exposure: name responsible units, publish TTPs, and burn infrastructure in public.
  3. Financial/legal squeeze: targeted sanctions, export control denial, CFIUS signaling, and secondary risk warnings for enablers.[55],[56],[57]
  4. Cyber counter-effects: proportional, reversible disruption of the hostile campaign infrastructure tied to clear redlines.[17]
  5. Cross-domain costs: visible exercises, posture adjustments, and, if needed, conventional responses that signal risk to what the adversary values.

Publish the rules, log the steps, and climb deliberately. Ambiguity helps the offense; ladders help the defense.

Letters of Marque 2.0: Bounties, Guardrails, and Continuous Cost-Imposition

If we’re serious that the civilian software market is the arsenal, then we need lawful ways to task and pay that arsenal for effects—defensive and offensive—without conscripting everyone into government payrolls or slow-rolling them through bespoke compliance obstacle courses. The 18th-century mechanism already exists in our constitutional toolkit (letters of marque and reprisal, specifically in Article I, Section 8, Clause 11). We modernize it for cyberspace, wrap it in contemporary law and alliance norms, and make it measurable. Think privateering, but with SBOMs, CVE/KEV, and JADC2, not sails.

Joshua Barney, an American sailor who once sailed with a Letter of Marque from US Congress. (Hulton Archive/Getty Images)

We stand up four transparent lists—each with a public charter, published bounty schedules, and harsh guardrails. The point isn’t vigilantism; it’s to operationalize the civilian arsenal inside a strategy that’s (1) legal, (2) controllable, (3) auditable, and (4) fast.

The Blue List (build the tools)

This isn’t a target list—it’s a fee schedule for capability. We pay developers to produce, maintain, and safely custody government-directed exploit chains mapped to CVEs/KEVs and priority emulation plans. We prefer open-source scaffolding under permissive licenses (so we can security-review and fork if needed), but we control release and use under contract. The deliverable isn’t a “zero-day grenade”; it’s a tested module with reproducible build, SBOM, usage predicates, and a lawful-use wrapper that binds it to authorized operators, mission contexts, and rules of engagement. Blue List work folds into red-team programs, hunt-forward kits, and cost-imposition options that live under U.S. authorities and coalition legal frameworks (defend-forward isn’t a bumper sticker; it’s a pipeline).[17],[18],[19],[45],[46],[47],[123]

The White List (protect the commons)

Here we fund continuous open source software (OSS) supply-chain hygiene—looking for dependency poisoning, typosquats, malicious updates, and CI/CD compromise in the libraries we all rely on (OpenSSL-class projects, container bases, crypto libs). No per-find “bounty” here; we pay for coverage and dwell-time reduction: monitored package sets, mean-time-to-detect, and coordinated disclosure velocity. White List contractors push fixes upstream, generate attestations, and publish risk advisories that our cATO lanes can consume automatically.[5],[45],[46],[47]

The Red List (find and fix ourselves)

This is a bounty schedule for U.S./allied infrastructure—critical services, DoD enclaves, defense industrial base (DIB), base networks—the places that keep ACE alive. Red List pays more than Blue because dwell-time here is lethal. Rewards are tied to patch availability + deployability: you get paid more if you deliver a fix or configuration that our lanes can push immediately (with rollback tested) and if you supply detection content and forensics playbooks. Payment also scales with blast-radius avoided (rewarding responsible disclosure paths that minimize exploitation risk).[5],[6],[7],[44],[81]

The Black List (cost-imposition targets)

This is the only actual target list, and it is overt once published. Getting on it is not. Nominations flow to the National Security Council (NSC); State, Defense, and Justice jointly validate; a Foreign Intelligence Surveillance Act (FISA) court reviews the package for lawful scope and minimization; Congress (specifically the Senate, through the Senate Select Committee on Intelligence) authorizes issuance under an updated cyber letters-of-marque statute. Up to publication, everything is classified. After publication, any licensed privateer (read: bonded, cleared vendors under standing contracts) can compete to lawfully degrade the listed entity’s capabilities in tightly defined bands (disruption, not destruction; strict Law of Armed Conflict (LOAC)/Tallinn compliance; human-safety carve-outs; no critical-infrastructure spillover) with escrowed digital-asset payouts on verified effects. “Transparent to the world, auditable to the government.” That means privacy-preserving but compliant rails—Treasury/OFAC guardrails apply; payouts are pseudonymous to the public but fully traceable to U.S. oversight. No cowboys, no crime-as-a-service; licensed firms only, revocable charters, real penalties.[17],[18],[57],[124] While the other three lists don't really require Letters of Marque legislation and can be implemented using consortia, other transaction agreement (OTA), or indefinite delivery/indefinite quantity (IDIQ) agreements and contracts, the Black List will require legislative action.

The Black List is cool, but it's not this cool. (NBC)

A few hard rules keep this from turning into a tragedy of the commons:

  • Deconfliction with CYBERCOM and allies is real-time. If a Blue/Black action collides with an in-progress operation or intel source, the stoplight turns red and the chartering authority pauses payment until conflict resolves.
  • No “stockpile and pray.” Blue List modules have expiry and review cycles; if a CVE/KEV fix lands and defenders patch, we retire or repurpose.
  • Civilian protections and human safety are non-negotiable. Effects that risk physical harm, medical systems, or public safety are out of scope unless explicitly authorized with additional safeguards (rare, and under military command).
  • Allies first. If a listed entity has infrastructure in a partner state, we use consent-based playbooks; Black List isn’t a hall pass to create diplomatic incidents.
  • Metrics or it didn’t happen. We score: vulnerability dwell-time, time-to-patch on Red finds, OSS coverage on White, effect-per-dollar on Black (with collateral-risk score at zero), and collision rate with ongoing ops (should be vanishingly small).

In practical terms, Letters-of-Marque 2.0 helps us do what we already say we’re doing—defend forward and impose costs—but at market speed, with broader hands and narrower risk. It pays Blue to keep the toolchain ready, pays White to keep the commons clean, pays Red to make our house safer, and pays Black to make an adversary’s day worse—all inside law, with telemetry, timelines, and a published ladder of consequences.

How we run this on Monday

  • Name three adversary dependencies per theater (e.g., refinery throughput, satellite links, capital controls), and assign a DIME owner per dependency.
  • Stand up a DTO cell inside the UCC with officers from State, Treasury, DoD, and the interagency; one 2-page DTO every Friday.
  • Wire intel to action: require that every Hunt-Forward product triggers a CVE/SBOM check across affected sectors within 72 hours.[46],[47],[65]
  • Election clock discipline: 90 days before key votes, stage pre-bunk packages and authenticity tooling with platforms and allies, ready to flood commercial algorithms with truth data through both official and proxied channels, countering foreign state propaganda that exploits algorithms narrowly optimized for emotional engagement and revenue.[58],[59],[60],[61],[62],[63] Data output can flow both through burnable networks stood up via containerization in commercial sectors and through official press releases via Public Affairs (PA).
  • Measure what matters: time-to-DTO effect, adversary campaign half-life, price-cap compliance rates, reach-for-reach ratios on major narratives, and cost-per-marginal-attack imposed by our controls.[1],[2],[17]
  • Begin draft legislative activities for Letters of Marque 2.0.

Hybrid war rewards coordination speed. If we align DIME-scale actions to commander’s intent and run them on a DTO cadence, we stop treating adversary strengths as facts of nature and start turning them into load-bearing liabilities—every week, on purpose.


5) OODA Across the Enterprise (Acquisitions, Research, Development, Test, Evaluation and Sustainment)

OODA is no longer a cockpit trick; it’s an institutional metabolism. The organizations that learn fastest win—even when their platforms aren’t the fastest. That’s the plain reading of the last decade of software-in-defense, from the Defense Innovation Board’s diagnosis (ship smaller changes, more often) to the Air Force’s “Accelerate Change or Lose” and follow-on Action Orders.[27],[66],[67] This section is about turning that mantra into mechanics you can run every week across acquisition, research & development (R&D), and test.

Collapse Observe/Orient

Observe isn’t a brief; it’s telemetry. Every operational thread—air, space, cyber, logistics—should emit event streams and traces that flow into common stores instrumented for model ops (data versioning, lineage, drift detection). Pair that with two outside-in feeds that must be treated as first-class citizens:

  • Known Exploited Vulnerabilities (KEV) watch: treat Cybersecurity and Infrastructure Security Agency's (CISA's) KEVs like a standing frag order. If a KEV touches anything in your mission thread, a patch/mitigation SLO clock starts within hours, not quarters.[47] While the MITRE CVE list is our development standard we hold our own software against during dev cycles, the KEV list becomes a 5m target set: these are beyond zero days and into the realm of script-kiddies; KEVs are going to be employed by organizations at the tactical edge, not just strategic state actors like GRU Unit 26165 and Unit 74455 or PLA Unit 61398 who engineer zero days for advanced persistent threat (APT) vectors.
  • SBOM intelligence: suppliers publish SBOMs; we continuously match them to KEV/CVE and vendor advisories so orient is automated, not manual. When the dependency graph moves, your risk picture updates without a meeting.[47],[68]

The point isn’t more dashboards—it’s fewer surprises. When logs, model metrics, KEV hits, and SBOM deltas live in one fabric, “orient” is a query, not a tiger team.
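"Orient is a query" can be shown literally. This sketch matches a mission thread's SBOM against a KEV feed; the component names are invented, and a real feed would be CISA's KEV catalog rather than a hard-coded set.

```python
# Invented KEV feed: component@version strings known to be exploited.
kev_feed = {"log4j-core@2.14.1", "openssl@1.1.1k"}

# Invented mission-thread SBOM: system -> components it carries.
mission_sbom = {
    "gcs-ui":        ["react@18.2.0", "log4j-core@2.14.1"],
    "autonomy-core": ["openssl@3.0.8"],
}

def kev_hits(sbom, kev):
    """Map each system to the KEV-listed components it carries."""
    return {system: [c for c in comps if c in kev]
            for system, comps in sbom.items()
            if any(c in kev for c in comps)}

for system, components in kev_hits(mission_sbom, kev_feed).items():
    # In a real pipeline this would open a ticket and start the SLO clock.
    print(f"SLO clock started: {system} carries {components}")
```

When the dependency graph moves (a new SBOM lands) or the feed moves (a new KEV publishes), re-running this query is the whole "orient" step; no tiger team required.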

Shorten Decide

“Decide” should be bounded authority plus fast paths, not heroics. We already have the statutory tools; we just don’t route enough decisions through them:

  • Use the fast lanes deliberately. We've already made the software acquisition pathway (SWP) the preferred acquisition tool for software.[69] The SWP intentionally leverages some of the most flexible capabilities from MTA—the commercial solutions opening (CSO) atop the OTA[9]—to make software acquisition faster, but the cybersecurity onboarding must accelerate as well.[11],[70]
  • Empower PMs with pre-approved patterns. Give PMs authority to ship inside guardrails (DevSecOps pipeline, cATO inheritance, data contracts) without staging milestone theater every sprint.[8] The commander on G-Series orders will take a standardized body of evidence (BOE) to determine if they will use the new code; PMs and AOs are now lethality-enablers supporting a command staff, not gate-keepers beholden only to a CIO-driven hierarchy.
  • Codify decision SLOs. Example: “If an increment is within MTA thresholds and uses the approved pipeline, the default decision is ‘go’ in ≤10 business days.” You can’t outrun delay with more slides; you outrun it with default-to-yes rules that leadership must actively override.

Part 1 framed the economy we actually have—software, data, networks—and Part 4 argued for hazard-based control; “Decide” is where those meet: leaders decide once to trust patterns, then stop re-deciding every deploy.
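The default-to-yes rule is simple enough to encode. Here is a sketch using the example threshold from the bullet above (10 business days); the predicate names and return strings are assumptions for illustration.

```python
from datetime import date, timedelta

# The example SLO from the text: inside the guardrails, the default is
# 'go' after 10 business days unless leadership actively overrides.
DECISION_SLO_BUSINESS_DAYS = 10

def add_business_days(start, days):
    d = start
    while days > 0:
        d += timedelta(days=1)
        if d.weekday() < 5:  # Monday-Friday only
            days -= 1
    return d

def default_decision(within_mta, approved_pipeline, submitted, today,
                     override=False):
    if override:
        return "held"                  # leadership actively said no
    if not (within_mta and approved_pipeline):
        return "route to full review"  # outside the guardrails
    deadline = add_business_days(submitted, DECISION_SLO_BUSINESS_DAYS)
    return "go" if today >= deadline else f"pending until {deadline}"

print(default_decision(True, True, date(2025, 8, 1), date(2025, 8, 18)))  # go
```

The key design choice is that "go" is what happens when nobody acts; delay now requires a deliberate override on the record, rather than approval requiring one.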

Shorten Act

Act is shipping—small, safe, continuous. Two ingredients make that possible at enterprise scale:

  • Commercial plumbing: pre-stage multi-cloud landing zones with identity, logging, and service mesh so teams deploy to compute-as-utility, not snowflake stacks.[71],[72] Tie this to the DoD Software Modernization Strategy and the DoD DevSecOps Reference Design so pipelines are boring, repeatable, and inherited, not artisanal.[4],[8]
  • S-curve releases: ship features behind flags; roll via rings (canary → squadron → wing → theater). If a KEV/CVE spikes or telemetry shows regressions, rollback is a toggle, not a memo.

When the platform is an S-curve, your Act loop runs on engineering cadence—not POM cycles.
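The ring-and-flag mechanics can be sketched minimally. Ring names follow the canary → squadron → wing → theater pattern above; everything else (class names, the single-flag model) is an illustrative assumption.

```python
# Ring names follow the pattern above; the rest is illustrative.
RINGS = ["canary", "squadron", "wing", "theater"]

class Feature:
    def __init__(self, name):
        self.name = name
        self.ring_index = -1  # not yet deployed anywhere
        self.enabled = True   # the rollback toggle

    def promote(self):
        """Advance one ring, but only while the flag is on."""
        if self.enabled and self.ring_index < len(RINGS) - 1:
            self.ring_index += 1
        return RINGS[self.ring_index] if self.ring_index >= 0 else None

    def rollback(self):
        # Rollback is a toggle, not a memo: flip the flag everywhere.
        self.enabled = False

f = Feature("new-fusion-filter")
print(f.promote())  # canary
print(f.promote())  # squadron
f.rollback()        # telemetry shows a regression: kill it fleet-wide
print(f.promote())  # stays at squadron; the flag is off
```

Because the flag flip is instantaneous and global, the time-to-rollback clock measures a config change, not a redeployment.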

The Data-Enabled OODA Loop as others have envisioned it. (FlexRule)

Make PPBE Pay for Learning

You get the behaviors you budget. Right now, PPBE pays primarily for starts and sustainment, not learning velocity. The fix is mechanical:

  • Define budgetable software effects. Treat deployed code, retired code, and telemetry coverage as deliverables whose acceptance triggers obligation/expenditure. Killing bad software should book as a win, not a loss.[20],[21],[22]
  • Adopt outcome measures that cross threads. Time-to-patch CVE, time-to-field (MTA/UON/JUON),[73],[74] T2D (JADC2 data path)[4],[8],[72] and cost-per-effect (e.g., CCA/sUAS in Sec. 7) become the portfolio scoreboard.
  • Resource the boring plumbing. Fund shared pipelines, common data models, and test ranges as infrastructure—not as “nice to have” line items that get raided in execution.[20]

We don’t need new poetry here. The PPBE Commission already laid out the direction; our job is to wire the accounting to the learning.[20]

Turn OT&E into a Continuous Test Fabric

The current OT&E muscle memory: a big event, a big report, a long wait. That’s incompatible with software velocity. We need test as a fabric:

  • Instrument everything. If an exercise doesn’t generate labeled datasets and failure modes that feed back into code within days, it’s theater. DoDI 5000.89 and 5000.90 give you the hooks to require telemetry-driven evaluation and program management that plans for it.[25],[26]
  • Shift left and right. Use synthetic ranges/digital twins in dev when possible, then carry the same scenarios into live ops so results are comparable and regressions obvious.
  • Publish reusable artifacts. Test threads should end with datasets, scenario packs, and model benchmarks that other units can run—once made, used many times.

This isn’t anti-rigor. It’s the only way to keep rigor while the world moves.

The Eight Clocks

Every headquarters should see—at a glance—eight clocks that define enterprise OODA:

  1. Time-to-effect: delta time from a change landing to mission impact; typically a UI enhancement or similar change to a mission thread, measured in seconds toward outcomes on the commander's ultimate value stream: warfighter effects.
  2. Time-to-detect: from incident/CVE/KEV publication to alert in the mission thread.[47]
  3. Time-to-patch/mitigate: from alert to fix in production, measured per system and supplier.[46],[47]
  4. T2D: from validated requirement to funded path (MTA/UON/JUON/Other).[73],[74],[75]
  5. Time-to-field: from funded path to capability in user hands (release ring cadence).[4],[8],[72]
  6. Time-to-rollback: from anomaly to safe state (feature flag or config).
  7. Time-to-deprecate: from decision to retire to last user off (technical-debt burn rate).[20]
  8. Telemetry coverage: % of mission thread emitting the agreed event schema (observe/orient health).

If a capability improves these clocks, it’s winning—even if the slide is boring. If it doesn’t, it’s noise.
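The clocks are just timestamp deltas over the telemetry fabric. A sketch with invented event names and times shows time-to-detect and time-to-patch as computed values rather than self-reported ones:

```python
from datetime import datetime

# Clocks as timestamp deltas; event names and times are invented.
def clock_hours(events, start, stop):
    return (events[stop] - events[start]).total_seconds() / 3600

events = {
    "kev_published":   datetime(2025, 8, 4, 9, 0),
    "alert_in_thread": datetime(2025, 8, 4, 10, 30),
    "fix_in_prod":     datetime(2025, 8, 5, 14, 30),
}

time_to_detect = clock_hours(events, "kev_published", "alert_in_thread")
time_to_patch = clock_hours(events, "alert_in_thread", "fix_in_prod")
print(f"time-to-detect: {time_to_detect:.1f} h")  # 1.5 h
print(f"time-to-patch: {time_to_patch:.1f} h")    # 28.0 h
```

Every clock on the dashboard reduces to the same pattern: two named events and a subtraction, which is why the dashboard can be generated rather than briefed.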

Monday Morning Version

  • Mandate CVE/KEV/SBOM integration in every pipeline by the end of the quarter; publish the SLOs and measure them weekly.[46],[47]
  • Default to MTA/UON/JUON for software increments under defined thresholds; publish the thresholds and train PMs and their associated KOs on how to apply them.[73],[74]
  • Stand up a commercially oriented “standard stack” (identity, logging, service mesh, flag service) and make it the only allowed landing zone for new code unless waived at the PEO level.[4],[8],[72] Use a contractor owned, contractor operated (COCO) model for onboarding commercial dual-use technology, and a separate stacked COCO model for developing government-owned IP for government-only problems. Both use the same exposed API model to integrate effects, but are separate contracts managed from independent PEOs to prevent a single monolith from taking over the API models. We must prevent the mistakes made with data ownership in the Maven contract from happening again.
  • Adopt the Eight Clocks as the portfolio dashboard and tie quarterly reviews to movement on those clocks.[20],[21],[22]
  • Rewrite OT&E tasking so every event must produce datasets/benchmarks that feed a backlog within 10 business days.[25],[26]

Tie-backs to the series

  • In Part 1 we argued we should organize around information/software/networks as maneuver. OODA-as-metabolism is how you operate that organization.
  • In Part 2 we mapped adversaries who already cycle fast (sanctions evasion, energy leverage, information ops). You don’t out-message them with slower loops.
  • In Part 3 we showed that acquisitions policy is no longer the bedrock of American supremacy, but is actually the reason we're falling behind our peers despite exceptionalism in the commercial sector; Section 5 is the leadership plumbing that turns this around while modernizing the department to align with commercial advantages.
  • In Part 4 we showed that acquisitions policy alone won't usher in the future; we have to reframe how we think about TTP. Here, that becomes runtime guardrails around shipping, not gates that freeze learning.

Speed of flight still matters. But speed of learning beats speed of flight. If we wire CVE/KEV/SBOM into observe/orient, route decisions through MTA/UON/JUON acquisitions, act on COCO pipelines, pay for deprecations, and make OT&E a fabric, we stop admiring OODA and start living it—across the enterprise, every week.


6) Swarm-on-Swarm, Directed Energy & Micro-EMP

Treat the swarm like combined arms, not a gadget problem. The force that layers low-cost kinetic, EW, cyber, and deception by cost-per-effect will win the attrition race against mass sUAS and attritable CCAs. That means we build an effects stack where cheap counters meet cheap threats first, reserving exquisite shots for exquisite targets—and we wire this stack into the same software-centric loop we laid out above.

Swarm-as-Combined-Arms (by cost-per-effect)

Publish a shot doctrine: who fires first, at what range, and at what density:

  • Deception & cyber pre-shot: spoof, saturate, and feed garbage—force the adversary’s autonomy to chase ghosts; burn their batteries and operator attention before we burn our magazines.[42],[43],[44] They will be doing the same thing to us, so it's imperative to have superior AI software churn rates to win this skirmish every time.
  • EW as the workhorse: deny C2 links, jam GNSS, and wring guidance stacks until they fall into soft-kill envelopes. While EW has the best cost curve for mass sUAS, particularly when paired with decoys that pull swarms off defended routes,[42],[43] it can be overcome in many ways, ranging from exquisite autonomy—that is still pennies per sUAS on the edge when at mass—to hardening.
  • Directed energy for volume kills: when density spikes, lasers and HPM systems reset the economics, turning multi-$k threats into multi-$ per shot defenses—if we pre-position power, cooling, and clear lines of sight.[39],[40],[41] Not a tactical panacea—often a life-saver for agile combat employment (ACE) deployments against scaled low-cost sUAS attacks.
  • Phased kinetic as the backstop: guns, air-to-air missiles (AAMs), and interceptors finish what soft-kill and DE don’t. Air-to-air guns delivered from low-cost sUAS become both a cost-effective way to attrit one-way attack (OWA) sUAS and an incredibly effective way to "pave the track" for swarm effects to reach their target, neutralizing enemy interceptors ahead of the main effort.[76] For C-UAS, air-to-air sUAS interceptors enable an effective low-cost defensive counter-air (DCA) doctrine with a recoverable asset able to operate for pennies per engagement. Even in this scenario, save the expensive arrows for the targets that merit them.[76],[77],[78]

This is not theory. DoD's C-UAS assessments and strategy already point to the integration challenges and coordination gaps; we fix them by commanding the cost curve as a tactic, not a PowerPoint.[42],[43],[44]

HELIOS in action (US Navy)

Directed Energy as the Economics Breaker

HELIOS at sea, DE M-SHORAD on land, and THOR for base defense give us scalable counters when the sky is busy and magazines are thin.[39],[40],[41] Three practicalities matter more than swagger demos:

  1. Power choreography: DE fights are logistics fights. Publish power budgets and recharging concepts in ACE playbooks; co-locate mobile generation with DE nodes so they’re not hostage to a single feeder.[79],[80]
  2. Thermal SLOs: track duty cycle, dwell, and cooling as operational metrics. A laser that can’t manage heat mid-salvo is just an idea.
  3. Fire control fusion: DE must sit inside the same sensor-to-shooter micro-loops as guns and EW. That means common event schemas and latency SLOs, not custom stovepipes.

When DE is doctrinally first against mass drones, the rest of the stack lasts longer and costs less.
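The fire-control-fusion point above is concrete enough to sketch. A minimal example of a common event schema with a latency SLO check, assuming invented field names (track_id, ts_detect) and an assumed 0.5 s cueing budget; this is illustrative, not a real DoD message standard:

```python
from dataclasses import dataclass

# Hypothetical common event schema for sensor-to-shooter micro-loops.
# Field names and the SLO value are illustrative assumptions.
@dataclass
class TrackEvent:
    track_id: str
    sensor: str          # e.g. "radar-3", "eoir-1"
    ts_detect: float     # epoch seconds at detection
    confidence: float    # 0.0 - 1.0 classifier confidence

LATENCY_SLO_S = 0.5      # assumed end-to-end budget for DE cueing

def within_slo(event: TrackEvent, now: float) -> bool:
    """True if the track is still fresh enough to cue a DE shot."""
    return (now - event.ts_detect) <= LATENCY_SLO_S

evt = TrackEvent("trk-042", "radar-3", ts_detect=100.0, confidence=0.91)
print(within_slo(evt, now=100.3))  # fresh track -> True
print(within_slo(evt, now=101.0))  # stale track -> False
```

The design point is that DE, guns, and EW all consume the same event type and the same freshness check, so no effector needs a custom stovepipe.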

EMP/GMD as Infrastructure Hygiene

Treat electromagnetic pulse (EMP) and geomagnetic disturbance (GMD) protection the way you treat patching—boring, continuous, and non-negotiable. ACE only works if bases, feeders, and regional grids ride through shocks and keep the pipes up.[44],[81] That means:

  • Hardening tiers: tier critical C2, fuel, and cooling circuits for survivability; lower tiers ride on portable spares and fast-swap modules.
  • Recovery drills: practice black-start and microgrid transitions during ACE reps; success is measured in minutes to ops-recovered, not anecdotes.
  • Sensor truthing under stress: validate that navigation, timing, and friend/foe discrimination hold up under EMP/GMD-like conditions—not just sunny-day ranges.

Firmware Agility as a TTP

The munition is software. Treat firmware agility like re-arming: push counter-update cycles that beat the adversary’s patch tempo.

  • Bake in SSDF patterns so autonomy stacks and datalinks are built for change (versioned configs, signed updates, rollback paths).[45]
  • Demand SBOMs from suppliers and map them to CVE/KEV so we know which component in which bird is exploitable today.[46],[47]
  • Sign & surge: pre-approve signing services and distribution channels in cATO pipelines so a hot-fix to a guidance filter or RF front-end ships in hours, not quarters.

If you can’t update it at operational tempo, you don’t own it—you rent it from yesterday.
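The SBOM-to-CVE/KEV mapping above amounts to a daily join. A toy sketch, with data shapes invented for illustration (not the real SPDX or CISA KEV formats), showing how to answer "which component in which bird is exploitable today":

```python
# Hypothetical SBOM: airframe -> list of (component, version) pairs.
sbom = {
    "bird-17": [("gnss-fw", "2.1"), ("rf-frontend", "0.9")],
    "bird-22": [("gnss-fw", "3.0")],
}
# Hypothetical KEV-style lookup: (component, version) -> CVE id.
kev = {("gnss-fw", "2.1"): "CVE-2024-00000"}

def exploitable_today(sbom: dict, kev: dict) -> dict:
    """Return airframes carrying components on the known-exploited list."""
    hits = {}
    for airframe, components in sbom.items():
        cves = [kev[c] for c in components if c in kev]
        if cves:
            hits[airframe] = cves
    return hits

print(exploitable_today(sbom, kev))  # {'bird-17': ['CVE-2024-00000']}
```

Run against real SBOM and KEV feeds, the non-empty result is the day's counter-update worklist.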

Train Deny–Deceive–Deplete

Mass sUAS warfare is as much economics as kinetics. Build exercises that bleed the wrong magazines and burn the enemy’s time:

  • Inventory-burn traps: set vignettes where blue must choose between shooting $100k interceptors at $2k threats or maneuvering into EW/DE kill boxes. Grade on cost-per-salvo and time-to-rearm, not just kills.[42],[43]
  • Deception lanes: practice decoy blooms, ghost corridors, and false electromagnetic signatures to waste adversary swarms. Build swarm-based TTPs that exploit the physical limits of close-in weapons systems (CIWS) at deliberate time/distance intervals, opening corridors for exquisite systems to neutralize capital targets.
  • Shot-selection drills: give operators a “wallet” and make every trigger pull debit the budget in real time. Leaders learn quickly when the UI shows they blew half a million dollars to swat quadcopters.

This is culture work. When crews can see cost-per-effect during reps, they start fighting the budget the way they fight the air threat.
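The shot-selection "wallet" drill above is easy to prototype. A minimal sketch, assuming notional (not real-program) effector costs, where every trigger pull debits a live budget:

```python
# Notional cost-per-shot figures for the drill; real numbers vary by system.
COST = {"ew_jam": 50, "de_shot": 10, "cheap_kinetic": 2_000, "interceptor": 100_000}

class ShotWallet:
    """Debits a live budget per trigger pull so crews see cost-per-effect."""
    def __init__(self, budget: int):
        self.budget = budget
        self.log = []

    def fire(self, effector: str) -> int:
        cost = COST[effector]
        self.budget -= cost
        self.log.append((effector, cost))
        return self.budget

w = ShotWallet(budget=500_000)
w.fire("interceptor")
w.fire("interceptor")
print(w.budget)  # 300000 -- two exquisite shots at quadcopters ate 40% of the wallet
```

Grading the vignette on the remaining balance and the log, not just kills, is the whole point of the drill.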

Lessons from Ukraine: EW First, C5ISR Fragile

Ukraine is a running lab on C5ISR degradation and EW saturation. The early and ongoing takeaways are clear: links die, GPS lies, and centralized C2 slows you down.[82],[83],[84],[85] Translate that into our TTPs:

  • Autonomy bias: default to behaviors that degrade gracefully with intermittent comms—local mesh, mission intent packets, and time-boxed autonomy rather than constant C2.
  • Navigation diversity: fuse inertial measurement units (IMU)/vision/radio-nav so GNSS denial can at best force drift, not loss.
  • Edge triage: push targeting and prioritization logic down to the node—if the link to the “big brain” breaks, the swarm still fights the right fight.
  • Telemetry minimalism: log what you need to learn but avoid chatty links that crater under EW pressure.

The doctrine shift is simple: assume contested C5ISR, prove otherwise—then design the swarm to win anyway.
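The edge-triage behavior above can be sketched in a few lines. A minimal example, assuming an invented track format and illustrative priority weights from a "last mission-intent packet"; the scoring is a stand-in, not a fielded algorithm:

```python
def triage(tracks: list, intent_priorities: dict, link_up: bool):
    """Pick the next target locally when the link to the 'big brain' breaks.

    While comms hold, defer to central tasking (return None). Offline, rank
    local tracks by intent priority (lower = more important), then proximity.
    """
    if link_up:
        return None
    ranked = sorted(
        tracks,
        key=lambda t: (intent_priorities.get(t["type"], 99), t["range_km"]),
    )
    return ranked[0] if ranked else None

# Illustrative intent packet: SAMs first, radars second, trucks last.
intent = {"SAM": 0, "radar": 1, "truck": 5}
tracks = [
    {"id": "t1", "type": "truck", "range_km": 2.0},
    {"id": "t2", "type": "radar", "range_km": 8.0},
]
print(triage(tracks, intent, link_up=False)["id"])  # 't2': radar outranks the closer truck
```

The key property is graceful degradation: losing the link changes who decides, not whether the swarm fights the right fight.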

The Civilian Arsenal, Wired to This Fight

Per Part 5, the American civilian software world is the arsenal—and most of the coding will be done by contractors. Swarm warfare doubles down on that reality:

  • Don’t fork industry’s value stream. Use contracted middleware and secure gateways to bring containerized, cloud-native autonomy and C2 into classified environments without forcing vendors to rewrite for bespoke stacks. Inherit controls via cATO patterns; certify pipelines and data paths, not each product from scratch.[4],[8]
  • Buy effects, not brands. Specify latency to classify, kills per kilowatt, mean time to counter-update, EW survivability indices. If one vendor slips, another that meets the interface and the metric can slot in, giving the government a trackable way to decrease total cost of mission effect at speed and scale and force vendor competition.
  • Pay for speed. Use MTA and the Software Acquisition Pathway to award increments on delivered telemetry and cost-per-effect improvements, not slide milestones (we’ll expand this later).[73]

The point is to keep the venture capital (VC) engine hot and the internal research & development (IRAD) flowing by making it easy to cross the cATO Rubicon fast, at scale—then easy to swap when someone better shows up.

Monday-Morning Pieces

  • Publish a counter-swarm shot doctrine with cost-per-effect tables and DE/EW primacy.
  • Stand up DE power/cooling kits as ACE cargo, with SLOs for spin-up and duty cycle.[39],[40],[41],[79],[80]
  • Add firmware agility checklists (SSDF + SBOM + CVE/KEV) to every sUAS/CCA pre-flight and after-action workflow.[45],[46],[47]
  • Build deny–deceive–deplete lanes into every major exercise; score them on economics, not just effects.[42],[43]
  • Force contested-C5ISR assumptions in planning factors; prove you can fight through EW before you brief the happy path.[82],[83],[84],[85]

Swarm-on-swarm will not be won by the shiniest single counter. It will be won by system economics: who spends the least to remove the most threat—reliably, at scale, with systems that stay updatable. DE resets the math; EW and deception keep it tilted; firmware agility sustains it; and a civilian-powered software pipeline keeps it moving faster than the enemy can learn.


7) Million-Plane Air Force & CCA/Replicator

The future force isn’t one exquisite airplane—it’s a fielded distribution of several CCAs and thousands upon thousands of sUAS per AOR, stitched by open interfaces and updated like software. Quantity gives us geometry, persistence, and the ability to trade steel for information advantage. MOSA makes that quantity smart—so sensors, EW payloads, and autonomy can spiral in months, not blocks of years.[28],[29],[30],[31],[32],[33]

The force-structure math

Start with the mission math, not the platform myth. An AOR-scale scheme might allocate: ISR/ELINT pickets to find and fix; decoy/jammer swarms to fracture enemy kill chains; armed sUAS "fighters" to pave the avenue; weapons mules to mass cheap effects; and a CCA “skein” to reach where manned aircraft won’t.[76] The point isn’t a magic number—it’s density: enough airborne nodes that an adversary can’t attrit you faster than you regenerate. MOSA gives you the knobs: payload bay standards (FACE), message schemas (UCI), and card-level modularity (CMOSS) so we can add a new seeker, swap a radio, or change autonomy without resetting the whole fleet.[28],[29],[30],[31] Commanders get an effects portfolio they can rebalance daily: more ISR today, more decoys tomorrow, more EW when the enemy lights up.

I would hope the AI formations are better than the AI image generation.

Pilot to swarm-commander

We graduate pilots from stick skill to mission intent for teams of systems. We already do this to a lesser extent; the F-22 is an easier-to-fly plane for standard flight than a C-172 Cessna because the expectation is the pilot is focusing on winning the fight, not worrying about flight mechanics.[86] Abandoned projects like Skyborg and Golden Horde were doctrine seeds: onboard autonomy with human commanders setting goals, guardrails, and target priorities while the machine handles formation, routing, deconfliction, and timing.[34],[35] On the glass, that looks like tasking by verbs—screen, fix, blind, suppress—plus constraints (collateral, spectrum, ROE) and SLOs (latency, dwell). The operator calls the play; the swarm runs it; telemetry closes the loop in minutes, not months.

Replicator/RDER: quantity on purpose

We stop pretending attrition won’t happen and design for it. Use Replicator and RDER for volume and iteration: short learning cycles, fast tooling, and block upgrades; award production to those who improve cost-per-effect each quarter, not to those who perfect a static spec.[45],[49] Accept higher production tempo with statistical airworthiness and a bias for fieldable today over perfect tomorrow. Keep vendors in a race on common interfaces; if a supplier slips, another drops in with no mission pause (MOSA again).

The SBOT

Hardware scales the body; software scales the brain. We institutionalize an SBOT—a signed, versioned bundle that travels with each mission family:

  • Models & behaviors: perception nets, target selectors, route planners, EW playbooks, fail-safes, geofencing, and “graceful degradation” states.
  • Policy cages: ROE encoders, no-go lists, human-on-the-loop checkpoints, and abort logic.
  • Data contracts: feature schemas, timestamps, confidence scoring, and latency SLOs for every pub/sub edge.
  • Safety & supply-chain: SSDF patterns, SBOMs for autonomy stacks, and provenance metadata tied to signing keys.[4],[8],[45]
  • Test artifacts: sim scenarios, red-team seeds, and flight logs needed to roll forward or roll back with cATO inheritances (much more about this below in Section 12).

Treat SBOT like munitions: inventory it, inspect it, and update it at operational tempo.
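Inspecting an SBOT before load is a signature check. A minimal sketch using HMAC as a stand-in for a real signing service; the manifest fields mirror the bullets above and are illustrative, not a fielded format:

```python
import hashlib
import hmac
import json

# Stand-in key; in practice, keys live with an approved signing service.
SIGNING_KEY = b"demo-key"

manifest = {
    "mission_family": "ISR-SEAD",
    "version": "1.4.2",
    "models": ["perception-net:7f3a", "route-planner:91bc"],
    "policy_cages": ["roe-v3", "no-go-list"],
    "rollback_to": "1.4.1",
}

def sign(m: dict) -> str:
    payload = json.dumps(m, sort_keys=True).encode()
    return hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()

def verify(m: dict, sig: str) -> bool:
    return hmac.compare_digest(sign(m), sig)

sig = sign(manifest)
print(verify(manifest, sig))       # True: bundle intact, safe to load
manifest["policy_cages"].pop()     # simulate a tampered bundle
print(verify(manifest, sig))       # False: refuse load, roll back to 1.4.1
```

The munitions analogy holds: a round that fails inspection never gets chambered, and a bundle that fails verification never gets flashed.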

Logistics for a million

Mass only matters if it sustains. But sustainment is a different game when the inventory is a mass of disposable plastic.

  • Batteries as ammunition. Plan state-of-health (SOH) telemetry, palletized chargers, and swap SOPs into ACE playbooks. Stage chemistries by climate; recycle at theater hubs.
  • Airworthiness at scale. Move from bespoke flight releases to airworthiness-by-constraint: a certified operating envelope (wind, icing, gross weight, autonomy mode) that vendors must prove in either real-world testing or digital twin sim and sample flight, with continuous telemetry checks tightening or expanding the live envelope over time.
  • Spectrum as airspace. Publish a Spectrum ATO daily: frequencies, power, dwell, and emissions etiquette per mission packet. Automate deconfliction so sUAS/CCA networks don’t jam ourselves—and can gracefully reroute under enemy EW.
  • Spares automation. Depending upon cost thresholds, design to two-deep line replaceable units (LRUs) and a digital thread from serial number → failure mode → next-best-spare. Use predictive maintenance on motors/props/etc.; print plastics at the edge when wise, but centralize complex spares where yield matters. The LRU requirement is not a threshold requirement but an objective for modular systems above a PEO-specified cost. As an example, extremely low-cost ISR quad-copters used by operators at the tactical edge are not apt for LRU-specific replacement; they are simply a totally expendable asset.[87]
  • Zero Trust for the fleet. Every drone is a compute node; treat C2, update, and telemetry channels as untrusted by default (identity, segmentation, continuous attestation).[7]
  • Base defense integration. Align C-UAS and infrastructure hardening with swarm ops so our own mass doesn’t blind our own sensors. This is painfully clear when dealing with critical infrastructure.[44]
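The "Spectrum ATO" bullet above implies a publishable artifact and an automated check. A toy sketch, with band assignments and field names invented for illustration, showing a naive overlap check so sUAS/CCA networks don't jam ourselves:

```python
# Hypothetical daily Spectrum ATO entries; bands and users are invented.
spectrum_ato = [
    {"user": "swarm-alpha", "band_mhz": (2400, 2450), "max_dwell_s": 30},
    {"user": "cca-link",    "band_mhz": (2450, 2500), "max_dwell_s": 120},
]

def overlaps(a: dict, b: dict) -> bool:
    """True if two assigned frequency bands intersect."""
    return a["band_mhz"][0] < b["band_mhz"][1] and b["band_mhz"][0] < a["band_mhz"][1]

def conflicts(ato: list) -> list:
    """All pairs of users whose assignments collide; empty list = clean plan."""
    return [
        (x["user"], y["user"])
        for i, x in enumerate(ato)
        for y in ato[i + 1:]
        if overlaps(x, y)
    ]

print(conflicts(spectrum_ato))  # [] -- clean plan; any hit means re-plan before launch
```

A real deconfliction engine would also weigh power, dwell, time windows, and enemy EW, but the publish-then-check rhythm is the same.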

Exportability and coalition teaming by design

From the first line of code, assume coalition. That means clean partitioning between the autonomy kernel and export-controlled modules; interface stability (FACE/UCI/CMOSS) so allies can plug national payloads without rewriting our core; and policy toggles (crypto, ROE, geofences) that switch per partner while preserving common tactics.[29],[30],[31] Do not bolt this on at the end—co-develop SBOT variants with close partners so interoperability is lived, not briefed.

How we buy this (without breaking industry)

Keep the civilian arsenal hot. Contract effects, SLOs, and update cadence, not bespoke code paths. Pay for delivered telemetry that proves kills per kilowatt, classification latency, mission success under jamming, and mean time to counter-update. Push vendors to inherit cATO controls from approved platforms and pipelines rather than re-inventing accreditations for every team.[4],[8],[71],[72] If a commercial stack already delivers in containers, use a contracted middleware gateway to cross the boundary—no bifurcated codebases.[71] Your leverage is MOSA: switch suppliers without switching tactics.

Monday-morning orders

  • Publish an AOR mass plan: target densities, roles, and MOSA interfaces mapped to near-term buys.[28],[29],[30],[31],[32],[33]
  • Stand up SBOT v0.1 for ISR/suppression of enemy air defenses (SEAD) swarms with signing, rollback, and telemetry contracts.[4],[8],[45]
  • Create a Replicator + RDER joint board with quarterly downselects on cost-per-effect and update tempo.[48],[49]
  • Write a Spectrum ATO playbook and integrate it into ACE exercises.
  • Redefine pilot training toward swarm mission command. The USAF Fighter Mafia is going to have to swallow a hard fact: The last flying ace retired in January 1999—as a Brigadier General, a full 26+ years after his fifth and final air-to-air kill. Steve Ritchie will probably be the last ace ever. The F-47 mindset prolongs a 1980s airpower fantasy; it doesn't survive contact with swarm economics.

A million-plane Air Force is not poetry—it’s plumbing (and a bit of hyperbole; it'd really be at most a couple hundred thousand at any given time). Open interfaces, contractor-powered software, cATO pipelines, and commander-owned SBOTs turn mass into a living force that learns faster than the threat and survives contact by design.


8) The Sixth-Gen Airman (Roles Sunset, AI Integration)

Your next squadron isn't just people and airplanes—it’s people, airplanes, and a fleet of models. The Sixth-Gen Airman is a mission commander for software effects: they set objectives, constraints, and ethics for autonomous teammates, then learn faster than the threat through telemetry and an agile feedback model. This isn’t sci-fi; it’s already in our enterprise strategies and labs—now we professionalize it.[88],[89],[90]

From pilot/operator to commander of models

Treat AI systems as line pilots with unique strengths and strict guardrails. The human’s job shifts from stick and "switchology" to intent, policy, and tempo: task the autonomy, bound its space, and accelerate the loop when it’s winning. This is the spirit of the Chief, Data and AI Office's (CDAO's) data/AI adoption strategy and the National Security Commission on AI's (NSCAI's) call for operational AI talent, backed by the Government Accountability Office's (GAO's) survey of where DoD AI is actually fielded.[88],[89],[90] Put simply: the squadron is now a human-machine team whose center of gravity is software.

JTAC → JTEO (Joint Terminal Effects Operator)

We expand the nine-line into a model-line that can be pushed through ABMS/JADC2 threads and executed by humans, CCAs, and sUAS alike.[23],[24],[92] It conveys what to achieve, how fast, and under what risk and policy constraints. Think close air support (CAS) discipline, generalized beyond CAS:

MODEL-LINE // JTEO
Mission: BLIND SA-20 BATTERY (SEAD-ELINT)
Intent: Degrade search/track ≥70% for 20 min, window H+10 to H+35
Forces: CCA-ELINT x4, SUAS-JAM x18, EW-POD x2 (ally)
Constraints: CIVCAS=zero; No-fly Box Bravo; EMCON L3->L2 if GPS-jam>50%
Data: Publish /ew/effects and /sa/radar in near-real-time ≤2s latency
Policy: Human-on-the-loop for lethal release; autonomy allowed for route/deconflict
SLOs: Jammer dwell ≥65%; model drift alert if misclass > 3%

We standardize that schema and teach it like we teach the nine-line: brevity, precision, and machine readability. The JTEO learns to think in APIs and SLOs as readily as grids and talk-ons. Combining rapid kinetic effects from the realm of indirect fire (IDF) and CAS with cyber effects coordinated at the tactical edge creates vast new frameworks for tactical effects at the battlefield edge.[91],[92] These effects are C2'd in the actual fight, where the warfighter on the ground enjoys the greatest immersion and knowledge and operates within a decentralized execution model.
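The MODEL-LINE above can travel as a machine-readable record and be validated before it hits an ABMS/JADC2 thread. A minimal sketch, with keys mirroring the example; this schema is an illustrative assumption, not a fielded standard:

```python
# Required top-level fields of a hypothetical machine-readable model-line.
REQUIRED = {"mission", "intent", "forces", "constraints", "data", "policy", "slos"}

model_line = {
    "mission": "BLIND SA-20 BATTERY (SEAD-ELINT)",
    "intent": {"degrade_pct": 70, "duration_min": 20, "window": ("H+10", "H+35")},
    "forces": {"CCA-ELINT": 4, "SUAS-JAM": 18, "EW-POD": 2},
    "constraints": {"civcas": 0, "no_fly": ["Box Bravo"]},
    "data": {"topics": ["/ew/effects", "/sa/radar"], "max_latency_s": 2},
    "policy": {"lethal_release": "human-on-the-loop"},
    "slos": {"jammer_dwell_pct": 65, "drift_alert_misclass_pct": 3},
}

def validate(line: dict) -> list:
    """Return missing fields; an empty list means the model-line is well-formed."""
    return sorted(REQUIRED - line.keys())

print(validate(model_line))  # [] -- complete, ready to publish to the thread
```

The same discipline we apply to a malformed nine-line applies here: an incomplete model-line is rejected before it tasks anything.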

ABMS/JADC2 exists to increase data interoperability and push better situational awareness (SA) to the warfighter at the edge, not to consolidate decisions at an operations center. That centralized model is the exact vulnerability we exploit in our foes; we'd be foolish to adopt it, especially as history has repeatedly shown it to be an exceptionally bad model that rewards egoists and micro-managers who then blame others for the errors inherent in centralized C2.

Sunset & surge: the new billets

Some legacy billets must shrink to make space for roles that convert data into tempo. Use DoDI 8140 to anchor career tracks and the CDAO strategy to define competencies.[88],[93] Many of these core new roles in USAF can be staffed from a Pathfinder cadre initially, which may or may not scale to their own Air Force Specialty Codes (AFSCs) or civilian job types. These would include:

  • Integration & Interoperability Manager (IIM): Given the massive number of contracted software developers involved, a dedicated and highly technically competent corps of PMs will be required to ensure that disparate vendors are developing against existing ATO frameworks, to validated SLOs, and using—as well as enhancing—API FRAGOs as validated by command.
  • Data Strategy & Governance Manager (DSGM): The data requirements created by the SLOs and API FRAGOs, together with those derived by the software development teams and the API architectures of ABMS/JADC2, create a massive data lake. Government has traditionally viewed data as a risk or a liability; many MDSs still require that flight logs and tapes from advanced targeting pods (ATPs) be zeroized after missions instead of being used for AI training. Data is an asset, and the DSGM's role is both to ensure new behaviors and software interface properly and to ensure the massive amount of data created and existing is leveraged properly. The reason this requires government oversight, as opposed to merely making it a contracted action within a larger contract, is akin to the role above: data rights ownership and contracting management. The DSGM isn't an actual data scientist; the data scientists too are contracted. The DSGM is who makes sure the data scientists and the software developers are able to work together properly and to the benefit of the commander, in accordance with the SLOs and API FRAGOs.
  • Software Design & Development Supervisor (SDDS): The role of this designer is to balance innovative contractor integration with traditional agile software practices, so that competent contractors deliver fast to the field, while also facilitating and accelerating the TTP OODA loop for MTA/UON/JUON-derived acquisitions.
"If I'd asked my customers what they wanted, they'd have said a faster horse." - Henry Ford (possibly)

Regardless of the veracity of the quote, the theory sums up the life of the design manager in this role. This is an ideal role for a USAF Pathfinder, who comes from the tactical world and is trained on the systems, TTP and actions of the warfighter, but is also knowledgeable of the software vendors and management teams involved. They can help facilitate the advancement of the software, the SLOs, API FRAGOs and TTPs themselves through use of contractors and tactics development at the edge. The SDDS would also be responsible for curating the SBOT for each mission set.

  • Contractor & Vendor Relations (CVR): This is functionally the de facto contracting officer representative (COR) for multiple contracts, but the role requires deep technical knowledge and collaboration with the IIM, DSGM, and SDDS roles. With the exception of the SDDS, which should be a warfighter with a Pathfinder qualification, this role and the IIM and DSGM roles could be fulfilled by Pathfinders or by General Schedule (GS) employees at a PEO. Ideally, for sustained software operations supporting platforms, this would be a GS at a PEO to support long-life operations, while at a tactical level, this would be fulfilled by a Pathfinder in an Active Duty billet or, even more optimally, a Guard or Reservist whose day job is program management in a software or software-adjacent industry but whose Guard or Reserve role is that of the supported warfighter. Regardless, the CVR role is focused on balancing the deliverables from a constantly rotating cast of contracts, which may have disparate actual PMs and their associated KOs for the given contracts.
    • Example: Air Force Lifecycle Management Center (AFLCMC)/HN has an "enterprise" contract across the entirety of USAF for a cloud services security and data storage solution, with its own PM + KO, delivered by a major cloud service provider (CSP) like Amazon or Google. AFWERX has a contract with a PaaS vendor for cATO-as-a-service for onboarding dual-use software from commercial vendors, with its own PM + KO, on a decentralized IDIQ with a smaller niche vendor. Special Operations Command (SOCOM) Acquisitions, Technology & Logistics (AT&L) PEO has a contract for containerized software logistics management with yet another company, which happens to have a business-to-business (B2B) relationship with the PaaS provider at AFWERX, again with its own PM + KO. DIU has an OTA with one AI vendor, with its own PM + agreements officer (AO), and CDAO has an OTA with another AI vendor, with its own PM + AO. The CVR is the wrangler who, on behalf of the mission unit, coordinates all these moving pieces to execute as needed. The CVR may have personal relationships with the ten aforementioned PMs/KOs/AOs, but it is irrelevant; the CVR has access to the decentralized contract vehicles and the vendors on contract to deliver effects to commanders and warfighters.

There are many other roles that functionally support military need, like someone to handle model weighting & tuning, managing rollbacks, designing pub/sub fabric, data contract and latency optimization, security control assessor (SCA) assessment, data labeling, data storage, etc.; these are all required functions, but are optimally provided by contractors given the dual-use nature of these roles.

The SDDS and CVR are not “extras.” They are the metabolism of the squadron. Badge them, patch them, and give them Weapons Instructor Course (WIC)-like syllabi and checkrides (more detail on this in Section 11).

Ethics and assurance: certify use-cases, not buzzwords

Anchor every deployment to DoD's AI Ethical Principles, the Responsible AI implementation pathway, and National Institute of Standards & Technologies' (NIST's) AI RMF.[94],[95],[96] Make it operational:

  • Operational Safety Case: tie each model-line to explicit hazards, mitigations, fallback states, and abort criteria; log evidence continuously.
  • Policy Cages: encode ROE, no-go lists, no-strike lists (NSL) and geographic/geopolitical toggles so the same SBOT can be shared with allies but constrained per policy.
  • Continuous attest/verify: treat model/firmware signatures like crypto keys; fail “safe and silent” when provenance or telemetry confidence breaks.
  • Drift and deception drills: red-team adversarial inputs, spoofed GPS, and data voids; rehearse the human-on-the-loop interventions as muscle memory.

Certify the mission thread (ISR, SEAD, logistics) rather than proclaiming “we use AI.” If the thread can’t pass an operational safety case, it doesn’t fly.

Train like you’ll fight (with contractors in the room)

Most of the code will be written by contractors—the civilian software market is the arsenal (as we argued in Part 5). Treat them as teammates inside approved pipelines, not as vendors at arm’s length. Build syllabi where mission owners, PMs, SDDSs, CVRs and industry counterparts rehearse deployments on real platforms and real networks using inherited cATO controls, not lab unicorns.[4],[72] Short exercises should end with a signed SBOT update, telemetry review, and a go/no-go on rollout—shipping is the passing grade.

Data is ammunition (and accountability)

We formalize the commander’s data effects: what datasets move T2D, reduce fratricide, or increase sortie productivity? We fund ingestion, labeling, quality checks, and stewardship as line-items, not afterthoughts.[3],[5] Every model-line declares its data diet up front (formats, cadence, latency), and the DSGM publishes a dashboard the commander can understand without a PhD: Are we feeding the models that win? If not, reprioritize sensors, routes, or contracts the way we reprioritize tankers.

Show, don’t tell: early wins

Point to proofs. As was pointed out specifically in paper #4, DARPA's Air Combat Evolution (ACE) demonstrated that humans are already cognitively inferior in fighter combat to "good" AI.[97] The mechanical capabilities of aircraft designed for USAF fighter tactics have long been far beyond the physiological capabilities of human pilots.[86] The X-62A and follow-on test wing sorties demonstrated within-visual-range maneuver and human teaming with AI under flight conditions.[98],[99],[100]

Put those lessons into the syllabus: define human command verbs, set kill-box policy cages, and practice hand-back when the autonomy hits a guardrail. On the enterprise side, push boring but decisive wins: automated patching and containerized logistics apps delivered over commercial cloud paths with COCO-operated cATOs in hours, not quarters.[71],[72]

Wins that ship change culture.

What changes Monday morning

  • Publish the Model-Line v1 schema and require it on every ABMS/JADC2 thread that involves autonomy.[23],[24]
  • Stand up a Sixth-Gen Airman patch track (especially for SDDSs) with checkrides tied both to software-specific skillsets (like SBOT updates and API FRAGO management) and to tangential skills like VC landscapes, strategic government acquisitions and funding methodologies (everything from the PPBE structure and budget, program and activity code (BPAC) management to how the Office of Strategic Capital (OSC) prioritizes funding), and even legacy requirements processes, since competent 6th-Gen Pathfinders will inevitably need to deal with PMs at PEOs who are effectively dinosaurs.[88],[93] Utilize DoDI 8140 as a basis for training to start, but lean into the Pathfinder program as documented below for additional structure.
  • Mandate ethics & assurance gates at rollout: no SBOT, no Spectrum ATO, no flight.[94],[95],[96]
  • Task ops squadrons to deliver one telemetry-driven TTP update per month—small, measured, signed—so the habit forms.
  • Put a contractor on every crew for the next three exercises, inside cATO pipelines, to shorten the “say-do” gap.[4]

The Sixth-Gen Airman isn’t a slogan—it’s a redistribution of skill and authority around software effects. When operators command models, when new billets metabolize data and spectrum, and when ethics are executable, the squadron learns faster than the threat. That is how manned, CCA, and sUAS mass turn into advantage on demand.


9) Dynamic Basing Realities (ACE, but Real)

Agile Combat Employment is not a poster—it’s a logistics and data-pipe problem under fire. ACE only works when C2, fuel, parts, and telemetry survive contact with an adversary who is trying to break them in sequence.[79],[80] This section turns ACE from concept art into a checklist you can load on a pallet.

9.1) Survive first: C2 and data pipes under contest

Treat command, control, and data paths as the primary weapon system of ACE. Harden the edge with ZTA enclaves, CVE-driven patching rhythms, and CSF-backed governance that commanders can actually read.[5],[6],[47]

  • PACE for data:
    • Primary: Contracted commercial internet. Use the local Internet Service Provider (ISP) at competitive rates. There's nothing wrong with using commercial internet protocol (IP).
    • Alternate: Contracted commercial SATCOM Internet. Yes, Starlink, using commercial Starlink satellites. Unlike the paltry number of easily targetable and poorly defended US military tactical satellite (TACSAT) systems, a constellation of more than 8,000 commercial satellites makes Starlink highly resilient.[101]
    • Contingency: Tactical radio connections (HF/mesh/TACSAT (such as Mobile User Objective System (MUOS))), etc.
    • Emergency: Couriers with signed media.

Which PACE connection is in use doesn't matter: every end-user connection authenticates using modern ZTA; nothing trusts the subnet.

  • Edge zero trust: pre-approved identities, mutual transport layer security (TLS), device attestation, and deny-by-default routing baked into the jump kit—not a “when we get time” add-on.[5],[6] Every connection must meet Commercial National Security Algorithm (CNSA) 2.0 standards for post-quantum cryptography to not only avoid active attacks, but prevent harvest now, decrypt later (HNDL) tactics.[102] The edge devices themselves must also be red-team proof, including data security for classified data at rest (CDAR).[103]
  • CVE burn-down: daily review and patch of known-exploited vulns prioritized by mission impact; declare “risk debt” in the same breath as fuel state.[47]

If we don’t protect the radio room and the data layer, the rest of ACE is just moving targets around a map.
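The PACE-for-data ladder above reduces to a failover walk where every transport passes the same zero-trust gate. A minimal sketch; the path names are shorthand for the four rungs, and `attest` is a stand-in for real mTLS plus device attestation:

```python
# Primary -> Alternate -> Contingency -> Emergency, in priority order.
PACE = ["commercial_isp", "commercial_satcom", "tactical_radio", "courier"]

def attest(path: str) -> bool:
    """Stand-in for mTLS + device attestation; the same gate for every transport."""
    return True

def select_path(health: dict) -> str:
    """Walk the PACE ladder; take the first healthy path that attests."""
    for path in PACE:
        if health.get(path) and attest(path):
            return path
    raise RuntimeError("no comms path available")

print(select_path({"commercial_isp": False, "commercial_satcom": True}))
# -> commercial_satcom: the ISP is down, so SATCOM picks up seamlessly
```

The point the sketch makes is structural: failover changes the transport, never the authentication posture.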

9.2) C-UAS is table stakes at every site

Every ACE location—no matter how tiny—arrives with a layered C-UAS plan: detect, decide, defeat. This is not optional garnish; it’s survival against cheap mass.[43],[44]

  • Detect with RF sensing, EO/IR, and short-range radar; fuse locally for low-latency tracks.
  • Decide with pre-authorized playbooks for crowded airspace (base, host nation, civil).
  • Defeat with a cost-per-effect ladder:
    • 1. EW/jamming first: low cost and long range, but a low probability of effect against a competent threat.
    • 2. Directed energy (DE) where viable.
    • 3. Cheap kinetic screen at range.[76]
    • 4. Exquisite, high-probability kinetic last.[77],[78]
  • Exercise the perimeter: daily drone drills that end in a signed lessons-learned and a firmware update cycle for our own systems (because the enemy updates too).

ACE without C-UAS is a fueling plan waiting to be filmed by a quadcopter.
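The defeat ladder reads as a cost-ordered search: take the cheapest rung that meets the required single-shot probability of kill (Pk). A sketch with invented costs and Pk values (not real effector data):

```python
# Ladder ordered cheapest-first, mirroring the four rungs above.
LADDER = [
    ("ew_jam",          {"cost_usd": 1,         "p_kill": 0.20}),
    ("directed_energy", {"cost_usd": 10,        "p_kill": 0.50}),
    ("cheap_kinetic",   {"cost_usd": 300,       "p_kill": 0.70}),
    ("exquisite",       {"cost_usd": 1_000_000, "p_kill": 0.95}),
]

def choose_effector(required_pk: float) -> str:
    """Escalate only as far as the threat demands: cheapest rung meeting Pk."""
    for name, effector in LADDER:
        if effector["p_kill"] >= required_pk:
            return name
    return "exquisite"  # must-kill threat: spend the expensive round

print(choose_effector(0.10))  # nuisance drone -> ew_jam
print(choose_effector(0.60))  # credible threat -> cheap_kinetic
print(choose_effector(0.90))  # leaker against a defended asset -> exquisite
```

The design choice is the ordering: cost-per-effect stays bounded because the expensive interceptor is only reachable after the cheap rungs are ruled out.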

9.3) Pre-stage multi-cloud and cache the mission brain

Your software will win or lose ACE. Package mission apps and TTPs as containers and pre-stage deployment paths across CSP vendors—no single-cloud heroics.[72] Use the Software Modernization guidance to make offline-first the default.[4]

  • Cold-start bundles: SBOT packages (tactics/models/weights), maps, digital terrain elevation data (DTED), and target libraries signed and cached in theater; sync deltas using commercial tools when pipes return.
  • One-click redeploy: Infrastructure as code (IaC) scripts and artifacts ready for push to any CSP PaaS region; no bespoke builds per vendor.[4],[71],[72]
  • cATO inheritance: platform images and pipelines that carry the controls with them so a pop-up site can go from “power on” to “mission-ready” in hours, not quarters.[4]

If software can’t follow airplanes at ACE tempo, ACE collapses into long-haul taxiing.
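The cold-start bundle behavior above (signed caches, verified attestation, automatic rollback) can be sketched as signature-gated activation. Here a SHA-256 digest stands in for a real signed-attestation check, and all bundle names are illustrative:

```python
import hashlib

def digest(data: bytes) -> str:
    """Stand-in for a real cryptographic signature over the bundle."""
    return hashlib.sha256(data).hexdigest()

def activate(bundle: bytes, signed_digest: str, last_known_good: bytes) -> bytes:
    """Deny-by-default: a bundle that fails verification never loads;
    the site rolls back to the last known-good cache instead."""
    if digest(bundle) == signed_digest:
        return bundle
    return last_known_good  # automatic rollback

good = b"sbot-v4: tactics, weights, DTED deltas"
tampered = b"sbot-v4: tactics, weights, DTED deltas (modified in transit)"
sig = digest(good)

assert activate(good, sig, b"sbot-v3") == good
assert activate(tampered, sig, b"sbot-v3") == b"sbot-v3"  # rollback fired
```

Verify-then-activate is the whole "power on to mission-ready in hours" trick: the pop-up site never debates whether its cache is trustworthy, it proves it or reverts.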

9.4) Logistics that fit in a van (and an eVTOL)

Assume the runway is fine but the last mile is lethal. Use commercial partners and Agility Prime-class mobility for pop-up resupply of LRUs, batteries, and critical spares.[36] Scale spares packages for pickup trucks and electric vertical take-off and landing (eVTOL) vehicles:

  • Power kit: portable generation + battery microgrid, aviation-safe cabling, and power conditioning for radars and comms.
  • C2 kit: SATCOM (Starlink) terminals, directional nodes, and a router that speaks ZTA out of the box.
  • Data kit: SBOT cache drives built to CNSA 2.0 CDAR standards, attestation keys, thin servers for local inference and fusion.
  • C-UAS kit: sensors, jammers, and one organic hard-kill option sized for the site.

Contract the logistics like we contract fuel bladders: predictable, measurable, rapidly swappable. This also lets our COCO software factories scale commercially developed, just-in-time (JIT) logistics-management tools, modified for our capabilities (a fleet of C-130s that can land on dirt runways isn't in DHL's current inventory) and our restrictions (the People's Liberation Army Navy (PLAN) has submarines patrolling offshore).


9.5) Train to the adversary’s kill chain, not our comfort

The enemy won’t attack your strongest point; they will sequence your weak ones: SATCOM → fuel → spares → data → morale. Build exercises that mirror that order of battle and score by time-to-recover, not just sortie count.[80]

  • Red team the pipes: jam, spoof, and brown-out the PACE plan—can we still launch, land, and strike with stale data windows? Can our data operate in a denied, disrupted, intermittent, and limited (DDIL) environment?
  • Fuel as the center of gravity: secure the bowsers and bladders; rehearse decoys and dispersal under drone observation.
  • Data latency drills: fight with 10× worse latency than planned; does the mission still complete with degraded SLOs?
  • Inventory-burn traps: rehearse deny-deceive-deplete so we don’t waste interceptors on $300 nuisances.

When we grade the exercise on reconstitution speed and decision continuity, ACE stops being theater and becomes tradecraft.
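Grading on reconstitution speed is a straight budget comparison per system. A minimal sketch with invented recovery budgets (minutes chosen for illustration only):

```python
def grade_recovery(observed: dict[str, float],
                   budgets: dict[str, float]) -> dict[str, str]:
    """Score each system (C2, fuel, data) green/red by time-to-recover
    against its exercise budget, in minutes."""
    return {k: ("green" if observed[k] <= budgets[k] else "red")
            for k in budgets}

budgets  = {"c2": 30.0, "fuel": 60.0, "data": 45.0}
exercise = {"c2": 22.0, "fuel": 75.0, "data": 40.0}
print(grade_recovery(exercise, budgets))  # fuel busts its budget
```

Note the grading key is time, not sorties: a wing can fly every line on the ATO and still go red here if the fuel recovery drill ran long.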

9.6) Roles and contracts that make ACE real

Bring contractors inside the wire with inherited controls and pre-cleared pipelines. At ACE sites:

  • Put a Telemetry Engineer and a Model Wrangler on every main-operating base and have them ride the ACE rotations; they own caches, SLOs, and rollbacks.
  • Write effects-based contracts for uptime, latency, and patch SLAs on edge stacks instead of bespoke features—pay for availability and speed, not PowerPoint.
  • Use CSP PaaS vehicles to move workloads between vendors without penalty; no platform lock-in—mission first.[4]

What changes Monday morning

  • Publish the ACE Edge Bill of Materials (BOM) v1 (power, C2, data, C-UAS) and make it a pre-deployment inspection item.[79]
  • Mandate ZTA + CVE check-ins at the daily ACE sync (two numbers: % assets attested; CVE backlog days).[5],[6],[47]
  • Require C-UAS drills at every site before first hot-pit, logged to a common telemetry store.[43],[44]
  • Pre-load SBOT caches to theater and verify attestation/rollback before wheels-up.[4]
  • Exercise multi-cloud by moving one live mission app between two CSP vendors during the next ACE event (with a real sortie depending on it).[71],[72]
  • Score the exercise on time-to-recover C2, time-to-fuel, time-to-data, not just sorties flown.[80]

ACE becomes “real” when mobility, software, and security move together. Protect the pipes, layer the perimeter, pre-stage the brain, and contract for effects. Do that, and dispersed squadrons stop being brittle pop-ups and start behaving like a resilient, learning network.


10) Organizational Design: DIU Elevation, New MFPs

We keep telling ourselves the future fight is software-defined, data-fueled, and network-sustained—and then we budget and accredit like it’s still 1998, using a requirements system built for the economy of 1966. This section is the plumbing fix. It takes the behaviors we’ve already proven at small scale—cATO pipelines, JADC2 data contracts, MOSA interfaces, hunt-forward telemetry loops—and wires them into the only things the system truly obeys: money and authority. The thesis is simple: elevate DIU to portfolio command, give them and CYBERCOM the same portable money Special Operations Forces (SOF) have, and stop forcing American industry to split its codebase (and soul) just to cross the moat. The second-order effects—on PPBE velocity, vendor incentives, commander agility, readiness reporting, and even VC investment habits and Veterans Affairs (VA) actuarials—are not side notes; they are the point.

SOCOM is a UCC with an MFP. CYBERCOM is a UCC without one. Close that gap.

When Congress created SOCOM as a unified combatant command with its own Major Force Program (MFP), specifically MFP-11, the incentives changed overnight.[104] SOF could pull from a portable, cross-service pool of money, shop for capabilities where they actually existed, and scale insertions across services without begging each PEO for permission. That alignment—operational authority paired with portfolio money—made integration a mission with teeth, not a staff sport. Cause ➝ cash ➝ effect, in one loop.

The Nunn-Cohen Amendment prevented the next Desert One disaster better than any commander-driven policy could have.

CYBERCOM is already a UCC with global responsibilities—defend forward, hunt forward, persistently engage.[18],[19] But most of the dollars still live in service stovepipes and platform program elements (PEs). The result is structural drag: toolchains negotiated one program at a time; hunt-forward teams that can’t bring their telemetry home at speed; “shared” cyber environments that melt when joint scheduling meets color-of-money rules. We declare cyberspace a warfighting domain and then fund it like a ticket queue. That contradiction is operational debt.

Proposal: create two new MFPs.

  1. MFP-CYBER (operational). This funds force packages for offensive cyber operations (OCO), defensive cyber operations (DCO), deployable toolchains, partner-network on-ramps, and the data/identity/telemetry fabric those teams ride on. Its PEs mirror how cyber is actually fought: data contracts, identity management, continuous integration/continuous delivery (CI/CD) pipelines, effects tooling, hunt-forward rotations, not just billets and buildings. Outputs tie to doctrinal measures that commanders already brief: time-to-patch CVEs/KEVs on priority networks, intrusion dwell reduction, partner telemetry on-boarding rates, and campaign objectives met.[18],[19],[47] MFP-CYBER budget lines should be a healthy balance of 0100 for Operations & Maintenance (O&M), 0300 for Procurement and 0400 for Research, Development, Testing & Evaluation (RDT&E). Software development as a value stream oriented service by COCO software factory operations will be done under all three colors of money (RDT&E, Procurement, and O&M) depending upon the use case.[105]
    1. For any new start of software to fulfill a newly aligned mission problem (aligned with a Joint Capability Area (JCA)), prior to delivery of a minimum viable product (MVP) to the warfighter, development is done under RDT&E. The particular category of RDT&E—6.1 through 6.7—is chosen in accordance with the technical risk and technology readiness level (TRL) of the development.[106]
    2. For modification of an existing deliverable already deployed and operational, O&M is the appropriate budget code. The majority of software development, year over year, is done on an O&M budget.
    3. For purchasing licenses of dual-use software, low-rate initial production (LRIP) orders using RDT&E color funds in the 6.5 or 6.7 category may be appropriate to modify certain features to be cATO-ready, but follow-on acquisitions of bulk licenses should be done with Procurement funds.
  2. MFP-INNOVATION (insertion and scale). This lives with DIU and pays for time-to-field: joint front-door sourcing of dual-use software; reciprocal cATO at scale; MOSA conformance enforced by automated test suites; and the middleware/adaptors that let commercial stacks live on a CSP PaaS and at the edge without forking.[4],[8],[28],[29],[30],[31],[49],[72],[107],[108],[109] MFP-INNOVATION, like MFP-CYBER, draws from 0100, 0300, and 0400 funds, but weighted significantly toward 0300 and 0400. As a lean organization that ships solutions back to the field, DIU's need for O&M is significantly less than CYBERCOM's, but it needs significantly more RDT&E and probably a similar amount of Procurement. With regard to RDT&E, most would be aligned to 6.3, 6.4, 6.5, and 6.7; the 6.1 and 6.2 categories would remain focused on deep research from organizations like DARPA or the service laboratories.
    1. DIU's discretionary budget would be able to flow not merely to Joint acquisitions, but be executed for individual services through DIU's subordinated service innovation organizations.
    2. As every organization within the US Federal Government with an RDT&E budget must pay a "small business tax" (of approximately 3.2%) administered through the Small Business Administration (SBA), the SBA pots of funds can get rather large; in 2025, the AFWERX budget derived from the SBA tax on the Department of the Air Force's (DAF's) RDT&E was $1.4B.[110] Given the success of organizations like AFWERX with the "Open Topic" versus the incredible waste of SBA funds when AFRL ran the Small Business Innovation Research (SBIR)/Small Business Technology Transfer (STTR) program, the service innovation organizations will all be empowered with their entire service-aligned SBIR/STTR programs and SBA funds, and be functionally organized under DIU. SBA-derived funds must remain aligned to the originating service; DIU will not be authorized to reallocate USAF SBIR funds to a USN project, for example. Given that DIU will have a large discretionary RDT&E budget of its own, it can align its own SBIR funds any way it chooses, to include executing them through CSO OTAs in accordance with the MTA model as opposed to using traditional Federal Acquisition Regulation (FAR)-based contracts.

Second-order effects.

  • Commander agility increases because money follows outcomes, not org charts. If a CYBERCOM hunt-forward detachment aligned to US European Command (EUCOM) proves a new sensor in Tallinn, MFP-CYBER buys the rollout across the remaining UCCs in-year—no scavenger hunt through three services’ PEs.
  • Platform PEOs get relief: instead of being conscripted into ad-hoc “jointness,” they build to published API/data contracts.
  • Allies benefit faster: funding for telemetry gateways and reciprocity becomes a first-class line, not a bake sale, accelerating coalition learning loops that feed back into U.S. defenses.[18]
  • Readiness reporting gets real: when time-to-patch (CVEs/KEVs), T2D (JADC2), and time-to-field (MTA/Software Acquisition Pathway) are budget-visible measures, the slide that matters becomes a chart that moved.[9],[23],[24],[47],[73],[74]

Blue Hair is perfectly fine if you can code. That's what you're paid for, not military dress and appearance. (Adobe Stock)

Bias towards contractors for software delivery—by design, not apology

We need warfighters to seize and hold terrain, endure G-loads, and absorb shocks that only fit, trained humans can survive. We also need teams who can shave 300 ms off an inference path at 0200, refactor a data pipeline before the next sortie wave, and collapse a CVE/KEV patch window from days to hours. Those are both forms of combat power—one kinetic, one cognitive. They demand different human constraints.

  • Fitness is for the objective; shipping is for the effect. Uniform standards exist to ensure people can fight and survive under physical duress. Software effects are delivered through keyboards, CI/CD, model ops, and telemetry. A coder who lives on Pringles + Mountain Dew and ships a risk-reducing patch faster than the adversary can pivot is performing warfighting labor in the only currency that counts—reduced mission risk.[23],[24],[47],[72] Optimize them for delivery, not dress-right-dress.
  • The market is our arsenal. The best developers earn more on the open market than E- or O-grade pay can match. Inventing a one-size-fits-none Military Occupational Specialty (MOS)/Naval Rate/AFSC for “software developer” (with temporary duty trips (TDYs), physical training (PT) tests, promotion boards, and non-coding collateral duties) is a great way to recruit mediocre coders and lose excellent ones. Contract for outcomes instead: SLOs for time-to-field (MTA & Software Acquisition Pathway),[9],[73],[74] T2D (JADC2),[23],[24] and time-to-patch (CVE/KEV).[47] You can’t order talent to appear; you can pay for measurable effects.
  • Health economics are not a side plot. Hire coders as contractors at competitive market rates and the DoD + VA doesn’t inherit their 30-year care curve. That’s not callous; it’s clarity. We’re buying short-cycle learning and delivery, not lifetime wellness. Keep uniform billets where they must deploy and fight. Keep contractor billets where they must sprint and ship. That division lowers long-run federal health liabilities and increases near-term operational speed.

Second-order effects.

  • Uniformed talent concentrates on command, integration, weapons/tactics, security, expeditionary ops, and mission ownership. Contractor talent concentrates on algorithm engineering, distributed systems, dev tooling, user experience (UX), and data plumbing. The handoff is governed by API FRAGOs and data contracts (more below), not “good vibes” and hallway agreements.
  • Retention flips: coders stay because contracts reward delivery; officers stay because their decision loops finally have software that keeps up.
  • Security posture improves: the fastest way to patch CVEs/KEVs is to let people who patch them for a living do it, on shared platforms, with reciprocity.[47],[107],[108],[109]

“Two dads” that actually works: DIU + the Services, with SBIR/STTR aligned by equity

This looks complicated until you admit we already do it in other communities.

  • DIU is the portfolio parent (“Dad #1”). DIU runs the front door for dual-use capability.[49] It sets the inheritance package (identity, logging, SBOM, CI/CD evidence, telemetry) that enables cATO by design;[4],[8],[107],[108],[109] enforces MOSA with automated conformance suites (FACE/UCI/CMOSS);[28],[29],[30],[31] and steers MFP-INNOVATION to the inserts that create joint leverage fastest. DIU is thin headquarters, thick interfaces.
    • DIU also owns manning decisions for subordinate innovation units, ensuring manning is aligned to innovation requirements and capabilities, not merely the "insert random officer here" approach that has plagued units like NavalX and AFWERX. Future incoming AFWERX manning, for example, will be Pathfinder personnel from the USAF, then selected by DIU. No more traditional acquisition officers showing up thinking a critical design review (CDR) is necessary for software, breaking delivery while they hide their ignorance of everything from Agile software ceremonies to MTA rules behind a rank insignia.
  • The Services keep SBIR/STTR funds (“Dad #2”). As stated above, the SBIR/STTR “tax” comes from service RDT&E.[110],[111],[112] Keep it aligned to service equities: the DAF's set-aside flows to AFWERX/SpaceWERX,[112] the Navy’s to its innovation arms (NavalX and the Marine Innovation Unit (MIU)), the Army's to the Army Applications Laboratory (AAL), etc. DIU has oversight of manning standards, joint work-share, and shared platform usage, but funding intent for SBIR/STTR remains service-specific—just like Air Force Special Operations Command's (AFSOC's) “two dads” reality (SOCOM + USAF). One parent optimizes for joint scale/standards; the other for service mission maturation.
  • Execution lives where the code lives. Service software factories—Kessel Run, Kobayashi Maru, LevelUp, Platform One, Business & Enterprise Systems Product Innovation (BESPIN), Space CAMP, Army & Navy factories, etc.—do the building and sustainment.[108],[114],[115],[116],[117],[118],[119] DIU provides the guardrails, platforms, and go/no-go decisions; services control mission priorities and production rhythm. No one loses the tiller; everyone stops rebuilding the dock.

Second-order effects.

  • Small businesses get a sane on-ramp: a single security inheritance package and MOSA test harness works everywhere, reducing cost of selling to DoD by an order of magnitude.
  • Fraud risk decreases (counterintuitively) because deliverables are machine-validated (SLOs in telemetry, MOSA tests passing/failing) rather than Portable Document Format (PDF)-driven “progress.”
  • Allies can plug in at the interface: if your data meets the contract, the joint thread can ingest it; if not, no political theater can waive it.

Budget mechanics that pay for speed (and deprecation)

You don’t change behavior with slogans; you change it with how the money moves. We keep the discipline of A-11 and the FMR, but we turn the knobs that matter.[21],[22]

  • Budget Program Activity Codes (BPACs). Commander-controlled pools inside MFP-CYBER and MFP-INNOVATION that shift mid-execution across vendors or factories when SLOs move. Beat the latency SLO to the edge? BPAC slides your way. Miss time-to-patch on the CVE/KEV list? BPAC slides away. No re-POM dance to swap one containerized service for another. (This is what “agile budgeting” looks like when you’re not doing theater.)
  • Outcome Contract Line Item Numbers (CLINs). Contract line items keyed to measured effects—time-to-field (MTA/Software Acquisition Pathway),[9],[73],[74] T2D (JADC2),[23],[24] time-to-patch (CVE/KEV),[47] cost-per-effect (CCA/sUAS)[32],[33],[36]—and tech debt retired.[20],[27]

If it can’t be instrumented and briefed, it shouldn’t be paid. If it moved mission risk, it should.

  • Tech-Debt Bounties. Pay teams to kill dead apps and brittle interfaces. If your squadron retires the bespoke gateway that breaks MOSA and cATO inheritance, that’s not a “capability loss”—it’s maneuver restored (and it should turn green on the PPBE effects review).[20],[21],[22]
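The BPAC mechanic (money slides toward performers beating their SLOs) can be sketched as a pro-rata rebalance. The step size, vendor names, and shares are invented for illustration:

```python
def rebalance(shares: dict[str, float],
              slo_met: dict[str, bool],
              step: float = 0.05) -> dict[str, float]:
    """Move `step` of the pool away from each SLO miss and split it
    evenly among the performers that hit their SLOs."""
    winners = [v for v in shares if slo_met[v]]
    out = dict(shares)
    for v in shares:
        if not slo_met[v] and winners:
            out[v] -= step
            for w in winners:
                out[w] += step / len(winners)
    return out

shares = {"vendor-a": 0.5, "vendor-b": 0.5}
print(rebalance(shares, {"vendor-a": True, "vendor-b": False}))
# vendor-a gains share, vendor-b loses it, with no re-POM dance
```

The sketch is the incentive, not the accounting: in-year, machine-readable SLO deltas drive the shift, so nobody has to wait for the next POM cycle to reward delivery.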

Second-order effects.

  • Program offices stop hoarding because money follows delivered SLOs, not eloquent slide decks.
  • Vendors compete on performance (warfighter reviews, latency, availability, mean-times for effects, accuracy) that a commander cares about, not on last-minute proposal polish.
  • Deprecation gets political cover: when “we turned three red metrics green by deleting X” becomes a normal PPBE sentence, nobody wants to be the office that hoarded X.

Do not force forked codebases

If we want the entire U.S. software economy—from 3-person startups to hyperscalers—to serve the warfighter, we cannot demand they split their code into “commercial” and “special DoD” branches and then carry duplicative compliance burden forever. That path leads to cost, latency, and vendor attrition. Worse, it will result in fewer vendors supplying the warfighter, and it means the DoD branch of the code is stale and less prioritized than its commercial counterpart.

The alternative scales:

  • Inheritance over reinvention. Vendors clear a standard inheritance package once: for software developed on behalf of the government, identity, logging, artifact hygiene, SBOM, runtime isolation, telemetry, and policy controls mapped to modern commercial ZTA,[6],[7] SSDF,[45] and SBOM practices.[46] They reuse that package across tasks, programs, and services (cATO by design).[4],[8],[107],[108],[109] Evidence is machine-readable; reciprocity is automatic.
  • Contracted middleware adaptors. Pay integrators to wrap modern stacks (containers, event buses, managed data planes) so they run on a CSP PaaS and at the edge without codebase bifurcation.[4],[71],[72] Bring the platform to the vendor, not the vendor to bespoke platform purgatory.
  • Reciprocity as policy, not folklore. If a control is inherited on an authorized software factory, it’s recognized wherever that software factory (for contracted development) or CSP PaaS (for dual-use acquisitions) is authorized—without re-papering.[107],[108],[109] If a vendor’s SBOM and CI/CD controls cleared one program, we don’t make them print the same evidence with a different header and 12-point Sans Serif font.

Second-order effects.

  • Market breadth increases: more firms can play (and stay) because the unit economics make sense.
  • Patching speed increases: a vendor shipping weekly to the Fortune 100 can ship weekly to DoD—if we let them keep a single codebase.
  • Security posture improves: monoculture risk goes down when we can switch vendors (see MOSA) without 12 months of re-ATO.

On CMMC: stop the rent-seeking, anchor to standards that move risk

Cybersecurity Maturity Model Certification (CMMC) as a cottage industry has created a bureaucratic moat that too often keeps out exactly the dual-use innovators we want. We already have solid baselines: NIST 800-171 (what to protect),[120] SSDF (how to build),[45] SBOM (what’s inside),[46] ZTA (how to connect),[7] and CVE/KEV-driven patching (what to fix now).[47] Map CMMC to those existing rails, automate the evidence with pipeline attestation, and kill duplicative audits that do not move risk to the mission.

Second-order effects.

  • Audit time becomes engineering time (because attestation is generated by CI/CD, not copy-pasted into PDFs).
  • Supply-chain visibility increases: SBOMs tie to CVE/KEV alerts (automatic crosswalk), and mission owners see which workloads are truly at risk.[46],[47]
  • Hyperscaler leverage is unlocked: CSP PaaS + reciprocity means we can ride cloud platform security improvements as they land, not a year later.[71],[72],[107],[109]
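The automatic SBOM-to-KEV crosswalk above is, mechanically, a set intersection per workload. A minimal sketch with invented component and workload names:

```python
# Invented SBOMs: workload -> set of pinned components it ships.
sboms = {
    "targeting-app": {"libfoo==1.2", "openssl==3.0.7"},
    "fuel-tracker":  {"libbar==0.9"},
}
# Invented KEV-derived set: components with known-exploited CVEs.
kev_components = {"openssl==3.0.7"}

def workloads_at_risk(sboms: dict[str, set[str]],
                      kev: set[str]) -> list[str]:
    """Flag any workload whose SBOM intersects the known-exploited set."""
    return sorted(app for app, parts in sboms.items() if parts & kev)

print(workloads_at_risk(sboms, kev_components))  # -> ['targeting-app']
```

Because both inputs are machine-readable, the crosswalk runs on every KEV update with no audit in the loop; the mission owner just sees which workloads went red.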

API FRAGOs: interfaces as orders

The Amazon story retold by a Google engineer (Steve Yegge’s account of the Bezos mandate that all teams must communicate through service interfaces) is lore because it forced a culture change that composes at scale.[121] We need the same zeal with API FRAGOs.

  • Every fielded capability must publish an interface (versioned contract, test harness, sample events). No interface, no fielding.
  • Schemas read like op orders. A JADC2 thread should literally say: “Unit A publishes events X at ≤200 ms; Units B/C subscribe; decision authority Y has a 2-minute error budget; failure mode Z triggers fallback path Q.” That’s doctrine, not annex.[3],[23],[24]
  • MOSA with teeth. FACE/UCI/CMOSS conformance is an automated gate.[29],[30],[31] You don’t pass the suite, you don’t join the formation—no exceptions for the well-connected.
  • Interface Readiness Levels (IRL). No initial operational capability (IOC) ribbon until your events flow across two joint pathways under operational load with telemetry to prove it. That's when your MVP becomes IOC.
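An API FRAGO like the one above is just a machine-checkable contract plus an automated gate. A sketch with illustrative field names (not a real schema standard):

```python
# The op-order sentence, rendered as data: publisher, budget, subscribers,
# decision error budget, and fallback path. All values are illustrative.
frago = {
    "publisher": "unit-a",
    "event": "track-update",
    "latency_budget_ms": 200,
    "subscribers": ["unit-b", "unit-c"],
    "decision_error_budget_s": 120,
    "fallback": "path-q",
}

def conforms(observed_latency_ms: float, contract: dict) -> bool:
    """Automated gate: miss the latency budget, you don't join the formation."""
    return observed_latency_ms <= contract["latency_budget_ms"]

assert conforms(150, frago)        # passes the suite -> fielded
assert not conforms(450, frago)    # fails the suite -> no fielding
```

This is the "schemas read like op orders" point made executable: the same contract that briefs the thread also runs in the conformance harness.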

Second-order effects.

  • Swap-ability becomes normal: when the interface is doctrine, vendors compete on SLOs; switching is evolution, not re-platforming.
  • Testing becomes training: the conformance harness doubles as the mission rehearsal data generator.
  • Coalition scale: allies implement the same contracts and immediately “speak the language” of our kill chains.

Put CDAO over the data contract, end-to-end

If data is ammunition, someone has to own caliber, fill, and fuse. Task CDAO to own the data SLAs and lineage across JADC2 threads.[3],[23],[122]

  • Thread-level SLAs (ISR→Target→Shooter; Mobility→Fuel→C2): latency, completeness, accuracy, and error budgets, published and enforced like air tasking orders (ATOs).[23],[24]
  • Mission lineage by default: every model weight traceable to source datasets; stale data flagged and retired like expired ammo.
  • Zero Trust around crown-jewel datasets, with posture dashboards commanders actually use (not just CIO weeklies).[5],[6]
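Retiring stale data "like expired ammo" is a shelf-life comparison per dataset. A sketch with invented shelf lives and dates:

```python
from datetime import date

# Invented shelf lives, in days, per dataset class.
SHELF_LIFE_DAYS = {"target-library": 7, "terrain": 365}

def expired(dataset: str, last_refresh: date, today: date) -> bool:
    """Flag a dataset as expired ammo once it outlives its shelf life."""
    return (today - last_refresh).days > SHELF_LIFE_DAYS[dataset]

today = date(2025, 6, 30)
assert expired("target-library", date(2025, 6, 1), today)   # 29 days stale
assert not expired("terrain", date(2025, 6, 1), today)      # well within life
```

The enforcement point matters: when lineage is mandatory, this check runs against every model weight's source datasets, not just the datasets someone remembered to review.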

Second-order effects.

  • Disputes resolve in minutes: the “whose data is wrong” debate becomes a dashboards & logs discussion, not a months-long governance fight.
  • Budget ties to effect: when a wing’s T2D drops because the data SLA was met, the BPAC moves and everyone feels it.[21],[22]

Governance you can brief without a thesaurus

  • DIU Board (quarterly). Chairs the portfolio; selects insertions; moves BPACs to the performers beating SLOs.[49] Thin agenda, hard metrics.
  • CDAO Data Court (monthly). Adjudicates schemas and metrics; publishes a red/amber/green by JADC2 thread.[3],[23],[122]
  • CYBERCOM Campaign Sync (monthly). Aligns hunt-forward telemetry and defensive priorities so MFP-INNOVATION lanes stay open (no last-minute “security theater” ambushes).[18]
  • PPBE Effects Review (quarterly). Comptrollers verify money followed delivered effects and retired tech debt, and record in-year BPAC shifts.[20],[21],[22]

Second-order effects.

  • Empire-building gets starved: if it doesn’t move SLOs, it doesn’t get fed.
  • Transparency becomes deterrence: everyone knows who moves the needles; collaboration follows performance.

Guardrails: risks and mitigations (because success has predators)

  • Risk: central capture. A powerful DIU or MFP can ossify if it becomes a procurement cartel.
    Mitigation: keep DIU thin (platforms, standards, tests), keep BPACs commander-controlled, and measure everything against SLOs visible to the field.
  • Risk: lowest-bidder dilution. Cheap software that misses the point can swamp the pipes.
    Mitigation: This has two solutions, depending upon the use case.
    • 1.) For software written from a software factory workorder, enforce API FRAGOs and data SLAs; if you can’t meet the contract under load, you don’t get on the net.
    • 2.) For license cost based applications, such as dual-use technology, incentivize vendors on a pay-per-mission-use model; sloppy software the user eschews will generate no revenue and naturally fall off based on metrics usage. Viable tools will thrive for both the warfighter and the author through increased revenue.
  • Risk: vendor lock-in creep. Even with MOSA, gravity pulls toward single-vendor comfort.
    Mitigation: require multi-vendor validation on every thread before IOC; rotate the “second source” annually; pay tech-debt bounties for successful swaps.
  • Risk: classification gravity. Over-classifying data and interfaces strangles coalition speed.
    Mitigation: push lowest necessary classification for schemas and test vectors; separate data shapes (often unclassified) from data contents (classified).

Monday orders (that move needles in 90 days)

  1. Draft and route charters for MFP-CYBER and MFP-INNOVATION with Outcome CLINs keyed to time-to-field, time-to-patch, and T2D (the only metrics a commander should have to memorize).[9],[23],[24],[47],[73],[74]
  2. Issue a DIU front-door directive: publish the cATO inheritance package, the MOSA test suite, and the first API FRAGO for a cross-service kill chain.[4],[8],[28],[29],[30],[31],[107],[108],[109]
  3. Stand up BPAC pilots in two AORs; pre-authorize in-year shifts based on SLO deltas; require a one-page postmortem for each shift (what moved, who unblocked it).[21],[22]
  4. Task CDAO to publish the first JADC2 data SLAs and error budgets; tie one fiscal year (FY) reprogramming to those metrics; make the dashboard visible inside the fence.[3],[23],[122]
  5. Formalize the two-dads model: services keep SBIR/STTR aligned to their equities while DIU sets manning standards and joint priorities; execution flows through service factories.[49],[108],[111],[112],[113],[114],[115],[116],[117],[118],[119]
  6. Publish a Reciprocity & ATO Reuse memo, v2: “No forked codebases. Evidence = pipeline. Platform inheritance recognized across programs by default.”[4],[8],[107],[108],[109]
  7. Move CDAO to be subordinate to DIU and aligned with CYBERCOM as the supported UCC. CDAO roles gain access to both MFP-CYBER and MFP-INNOVATION, which makes JADC2 integration real from the acquisition level all the way through to combatant threads in execution.
  8. Create API FRAGO template. If you ship without an interface, you didn’t ship.[121]

Why this works (and why the enemies of it will be loud)

This is not a love letter to centralization; it’s a thin central nervous system with strong edges. DIU sets rails. MFPs carry portable fuel. CDAO defines the caliber and fuse of our data ammunition. Services race on top of that with their factories, their SBIR pipelines, and their mission owners. CYBERCOM fights as a domain with money that matches mandate. Vendors compete on SLOs, not proximity to conference rooms. Commanders maneuver software like they maneuver metal. And when something better appears (it will), API FRAGOs and BPACs let us swap it in this quarter, not “next POM.”

Will parts of the system hate this? Yes. (Some people’s status is tied to how slowly gates move.)

But the alternative is to keep losing calendar while we win PowerPoint.

The war doesn’t care how elegant our org chart looks; it cares whether our time constants—to patch, to decide, to field, to fuse—beat the adversary’s. Give CYBERCOM and DIU the money; stop forking code; order with interfaces; pay for effects; reward deprecation; and watch how fast the rest of the map redraws itself.


11) Workforce & Career: Pathfinders + a Gig-Style Economy (and the New WepTacs)

We know the pattern by heart: a handful of outrageously effective teams prove something at squadron scale; the demos impress everyone; “lightning in a bottle” gets bottled—and then dies on the shelf. Two years later, the people who made the lightning leave for market-rate jobs. If we're lucky, the organization writes a lessons-learned memo that reads like a eulogy; more often the unit just withers, and morale tanks for those left behind trying to hold it together while the new leadership that destroyed the innovation culture wonders why no one likes them. This section is about ending that cycle with two mutually reinforcing moves:

  1. Institutionalize a Pathfinder cadre—a real “patch” community that owns integration, TTP evolution (hand in hand with the WIC patches in each unit), and value-stream governance across DIU, service factories (both software and innovation/dual-use-acquisition focused), and operational units. They become the connective tissue innovation actually needs.

    Pathfinder in this case is the USAF experience track, which is both a duty identifier (8Y) and a special experience identifier (SEI). Ideally, the other services would employ this exact same model: MOS/Rate stays aligned to the supported unit, an experience identifier akin to WIC graduation is used for weapons-system-aligned assignments, and a duty identifier is used for assignments to higher-tier positions.
    1. Pathfinder tiers and the WIC-like path. In USAF, entry from a "slick" AFSC into Pathfinder will be by nomination, but the normal path will be to either:

      1. Graduate from programs like Project Mercury or Blue Horizons (by default, all Blue Horizons graduates will be Pathfinder qualified), or
      2. Be initially assigned to a Spark Cell, an AI accelerator, or similar, and complete that alignment successfully.

      Like WIC patches, Pathfinders are aligned with their actual AFSC and stay aligned with their career field throughout their career, but in an innovation track controlled by the Pathfinder board. Also like WIC patches, they are tiered, and until reaching senior leadership (E-8 for enlisted, O-5 on the command list for officers), they are "pipelined" into innovation roles. Every unit will eventually have a minimum of one Pathfinder aligned for this purpose.
      1. Tier 1: Tactical/Operational;
        1. The majority of Pathfinders will be unit based at a tactical (squadron or equivalent) level. These are assignments based upon SEI as the Pathfinders stay in their primary AFSC/MOS/Rate. During Tier 1 time, most Pathfinders will complete a Defense Ventures Program (DVP) fellowship (yes, we must bring back DVP; more on this later). Pathfinders are not exempt from their primary AFSC in these assignments. Like their WIC-patched brethren, they'll still fly or control and work with their unit daily, in the normal chain of command.
        1. Other Tier 1 assignments that are associated with duty identifiers (so are primary AFSC/MOS/Rate independent) include: Spark Cell senior level (e.g. Non-Commissioned Officer in Charge (NCOIC), Deputy Director); and service innovation unit (AFWERX/NavalX/MIU/AAL) staff at the action officer (AO) level (such as PMs, etc.)
      1. Tier 2: This is strategic alignment, and most of these roles are AFSC/MOS/Rate independent, meaning they are aligned to a duty identifier, not just the SEI; roles include chairing Software or Innovation Weapons & Tactics Conferences (WepTacs); policy-level roles at DIU/OSC/etc.; AFWERX/NavalX/MIU/AAL senior leadership (Director level); Spark Cell command; and assignments within the Pentagon or MAJCOMs (and other service equivalents).
      1. Tier 3: This is teaching at Blue Horizons/etc. (and the equivalent for non-USAF units); working in senior leadership roles at DIU/OSC/etc., higher level positions in the Pentagon or MAJCOMs (and other service equivalents), and for our very best with years of experience, command of units like AFWERX, NavalX, MIU, AAL, DIU, OSC, CDAO, etc.
    1. Pathfinders (and the equivalent in USN/USA/USMC) will be the only personnel considered for command of units like AFWERX/NavalX/MIU/AAL; gone will be the days of putting a senior materiel leader (SML) from an acquisitions career field in charge of an organization like Kessel Run and promptly running it into the ground by destroying the culture.
    1. Pathfinders will be the Technical Point of Contact (TPOC) for SBIRs; this has two benefits:
      1. They understand the acquisition process on the back end, ergo, they can actually deliver more than just a single grant worth of work in the scope of the SBIR Phase I or Phase II period of performance (PoP), giving a pathway to success for the warfighter in their aligned MDS.
      1. This also benefits the commercial winner of the SBIR, as a Pathfinder TPOC can help guide them through the byzantine road to acquisitions. Currently, a company may find a motivated operator at a unit who understands the warfighter need and even the tech stack involved but is utterly clueless about how to actually work the acquisitions system; that TPOC can't get the company through the Valley of Death even if they've helped craft an amazing piece of kit/software that genuinely works and makes the warfighter more lethal.
    1. Pathfinders will also play a role in the SBIR evaluation system, which desperately needs modification. The former evaluation system, where organizations like AFWERX in USAF and xTechConnect in USA evaluated SBIRs internally, simply cannot scale: tens of thousands of proposals do not get the proper attention required. It is not uncommon for a founder team to spend 40+ hours building a SBIR proposal only to be seen by an evaluator for 15-30 seconds. AFWERX's reaction, while admirable in that it created a "protest-proof" model by opening evaluation to the entirety of USAF, has been a disaster as well, with any semblance of objective evaluation criteria disappearing. Evaluators in USAF for Open Topic currently must only pass a brief online class and aren't trained on most of the things they need to be effective evaluators.

      The answer is a two stage evaluation system that would drastically improve quality and maintain an open, "protest-proof" model, but allow for experts to monitor a body they are intimate with:
      1. Stage 1: Embrace the Inverse Token Allocation Model of Preference Elicitation (also called the "Bag of Lemons" approach) for initial evaluations. A tokenized model for picking winners is not much better than the current system when evaluators lack objectively standardized expertise in the material, because it's very rare for two people to agree on what "Excellent" looks like; it turns out, however, that most people have a pretty similar idea of what "Crap" looks like. So invert the tokens: if you agree to do 20 evaluations, you get 20 tokens, and you can apply them to your assigned evaluations any way you want, such as putting 10 tokens on one proposal, one each on nine more, and leaving ten with no tokens. The distribution is entirely up to the evaluator; the key is that tokens go on the bad proposals they want rejected, not the ones they like.[125],[126],[127] The result is that the pool of SBIR proposals shrinks drastically to a manageable number for actual experts to evaluate. Every proposal would still get the three assessments in accordance with SBA policy.[111]
      1. Stage 2: Enter the Pathfinders—at all tiers/assignments—at this stage, the Pathfinders are assigned SBIRs based upon MDS alignment/skillset and the evaluation process still follows the SBA guidance for evaluating based upon a.) technical merit, b.) commercial applicability, and c.) mission fit/military applicability.[111] The Pathfinders are picking the winners from the reduced pool after Stage 1.
  2. Stand up Innovation & Software WepTacs (separate)—modeled on the rhythm of the Combat Air Forces (CAF) WepTac run by Air Combat Command's (ACC's) A3TW at Nellis, but scoped for:
    1. Software WepTac: This is about software + data effects—to generate hard outputs (interfaces, SLOs, contracting patterns) that survive first contact with PPBE and production (think exercise → published playbook → funded change).
    1. Innovation WepTac: This one is about moving money, rights, and risk so dual-use tech actually flows from market → mission without pitch-theater detours.
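The Stage 1 downselect from the two-stage SBIR evaluation above can be reduced to a small algorithm. A minimal sketch, assuming each evaluator gets one token per assigned proposal and a 50% advance threshold (the function name, data shapes, and threshold are illustrative, not policy):

```python
from collections import defaultdict

def stage1_downselect(ballots, advance_fraction=0.5):
    """Inverse token allocation ("Bag of Lemons") downselect.

    Each evaluator agrees to review a set of proposals and receives one
    token per proposal reviewed, spent in any distribution against the
    proposals they want REJECTED. Proposals accumulating the fewest
    rejection tokens advance to the Stage 2 expert (Pathfinder) review.

    ballots: list of (assigned_ids, {proposal_id: tokens_against}).
    Returns advancing proposal ids, least-rejected first.
    """
    rejections = defaultdict(int)
    pool = set()
    for assigned_ids, spend in ballots:
        assert sum(spend.values()) <= len(assigned_ids), "token budget exceeded"
        assert set(spend) <= set(assigned_ids), "tokens on unassigned proposal"
        pool.update(assigned_ids)
        for pid, tokens in spend.items():
            rejections[pid] += tokens
    # Proposals an evaluator saw but spent nothing on count as zero rejections.
    ranked = sorted(pool, key=lambda pid: (rejections[pid], pid))
    keep = max(1, int(len(ranked) * advance_fraction))
    return ranked[:keep]

# Two evaluators each assigned A-D; both dislike C, one also dings D.
ballots = [
    (["A", "B", "C", "D"], {"C": 3, "D": 1}),
    (["A", "B", "C", "D"], {"C": 4}),
]
advancing = stage1_downselect(ballots)
```

Proposals nobody bothered to reject float to the top, which is the point: agreement on "Crap" is cheap to elicit, and the survivors get real expert attention in Stage 2.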

Where Section 10 wired money and authority (MFP-INNOVATION with DIU, MFP-CYBER with CYBERCOM), Section 11 wires people and process to those rails—and does it without pretending coders should look or live like JTACs (they shouldn't).


What the Software WepTac is (and is not)

It's not a conference; not a “demo day;” not a panel about “culture.” Conducting the Software WepTac in a place like Las Vegas at/around DEFCON or the Black Hat conferences is definitely smarter than putting it in Montgomery, Alabama, though location may vary.

The Software WepTac is an operational proving ground for software-defined effects with the cadence and discipline of the CAF WepTac at Nellis—tasking, injects, red teams, vignettes—focused on the muscles we keep skipping: interfaces, data contracts, cATO inheritance, SLOs, and contracting lanes. It’s chaired by the CYBERCOM J-3 and is where Pathfinders, DIU, CDAO, service factories (Kessel Run, Platform One, BESPIN, Space CAMP, Army/Navy factories),[107],[108],[109],[114],[115],[116],[117],[118],[119] and operational units iteratively produce:

  • API FRAGOs for two or three joint kill-chain threads (ISR→Target→Shooter; Mobility→Fuel→C2): versioned schemas, error budgets, and decision authority timing.[3],[23],[24]
  • Data SLAs owned by CDAO (latency, completeness, accuracy), with test vectors and synthetic data sets ready to run on CSP PaaS, at the edge, and in coalition enclaves.[3],[71],[72],[122]
  • cATO-by-design playbooks (inheritance package; CI/CD-attested controls; evidence formats; reciprocity rules) that vendors and factories can actually use—once, everywhere.[4],[8],[107],[108],[109]
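The data SLAs above lend themselves to machine-checkable targets that a dashboard or test harness can evaluate per metric. A minimal sketch (the metric names and thresholds are illustrative assumptions, not CDAO's actual SLAs):

```python
from dataclasses import dataclass

@dataclass
class DataSLA:
    """Illustrative data SLA: the targets an owner like CDAO might set
    for a single kill-chain thread."""
    max_latency_s: float      # event-to-consumer latency budget
    min_completeness: float   # fraction of required fields populated
    min_accuracy: float       # fraction passing validation test vectors

def evaluate(sla, latency_s, completeness, accuracy):
    """Return per-metric pass/fail so a dashboard shows WHICH SLO broke,
    not just that something is red."""
    return {
        "latency": latency_s <= sla.max_latency_s,
        "completeness": completeness >= sla.min_completeness,
        "accuracy": accuracy >= sla.min_accuracy,
    }

# Hypothetical ISR-thread SLA checked against one measurement window.
isr_thread = DataSLA(max_latency_s=5.0, min_completeness=0.98, min_accuracy=0.95)
status = evaluate(isr_thread, latency_s=3.2, completeness=0.99, accuracy=0.91)
# In this sample window, accuracy misses its target while the others pass.
```

The value of expressing the SLA this way is that the same thresholds drive the exercise dashboards, the acceptance harnesses, and (per Section 10) the money.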

Structure (five days, rinse/repeat quarterly):

  • Day 0 (Admin + Unfreeze): Publish vignettes, injects, interface expectations, and SLO targets two weeks prior. Everyone arrives with the code + adapters they intend to field (no “we’ll get you a mock by lunch”).
  • Day 1 (Observe → Orient): Red brief on CVE/KEV/opposing forces (OPFOR) injects; CDAO posts target metrics; DIU posts the inheritance pack and MOSA test suite.[8],[28],[29],[30],[31],[47],[122]
  • Day 2 (Decide): Pathfinder cells run interface spikes, capture breakage, and write API FRAGO drafts. Contracting cell maps which lanes can fund fixes now (BPACs inside MFPs, SBIR Phase 3 task orders, rapid OTAs).[21],[22],[111],[112],[113]
  • Day 3 (Act): Teams ship patches to the live exercise fabric (CSP PaaS development network + edge kits) with cATO evidence posted by pipeline; SLO dashboards are published.[4],[71],[72],[107],[109]
  • Day 4 (Debrief → Publish): Output three things in writing: (1) the updated API FRAGOs / data SLAs, (2) a Contracting Ordering Guide for what just worked (templates, CLIN types, evaluation factors), (3) the Manning & Rotation requests for Pathfinders at wings/NAFs/MAJCOMs (and the equivalent in other services).
  • Day 5 (Tiger Team carry-over): A small team (predesignated) stays on target two weeks post-event to push the artifacts into real programs (justification and approval (J&A), mod to existing IDIQs, SBIR topic pivot, MTA decision memo). No artifacts → it didn’t happen.

Why this will stick when other “innovation weeks” didn’t: The outputs change interfaces and money flows. If you leave with an API FRAGO and a funded ordering guide, your win isn’t trapped in a slide.


Dual-Use Acquisitions / Innovation WepTac (3 days to keep dual-use front doors open)

This WepTac is not about pushing code; if the Software Factory WepTac makes software fast, the Dual-Use Acquisitions/Innovation WepTac makes that speed purchasable—and keeps AFWERX, NavalX, MIU, AAL, DIU (and OSC/CDAO) breathing when the budget tide goes out.

Think of it as a three-day, elbows-in sprint to publish (not “discuss”) six things:

  1. A joint ordering guide (AFWERX/NavalX/MIU/AAL/DIU-blessed) that any ops unit can actually use next week.
  2. Rights & reciprocity templates that prevent vendor lock and kill code forking.
  3. A portfolio-to-PE/BPAC mapping so commanders can pay for outcomes without re-wiring the POM.
  4. A market signal memo (to VC and primes) that says what the next four quarters will buy and how (by interface, not brand).
  5. Contracting patterns that match how we really deliver: SBIR Phase 3s/OTAs, decentralized IDIQs with clean ordering guides, and Outcome CLINs tied to mission SLOs (these metrics should be memorized by now: time-to-field via MTA/Software Acquisition Pathway; T2D via JADC2; time-to-patch via CVE/KEV).[9],[23],[24],[47],[73],[74]
  6. Manning templates (manning being the #1 cause of innovation unit failure): billet management (with an open kimono) for the Pathfinders, rotation patterns, and the “two dads” alignment with DIU and the Services.[49],[111],[112],[113]

Who’s in the room (and why)

  • AFWERX, NavalX, MIU, AAL, DIU: owners of the dual-use front door (intake), the scale-up lane (DIU oversight), and the service equities (USAF, USSF, USN, USMC, USA mission needs).
  • Pathfinders (your patch community): the glue—because they own API FRAGOs, data SLAs, and cATO inheritance in ops threads.
  • Contracting & pricing: to make the ordering guide real (Outcome CLINs, SBIR Phase 3s/OTAs, decentralized IDIQs).
  • CDAO & DoD CIO reps: data SLAs, ZTA inheritance, reciprocity memos—so we stop re-auditing the same pipelines.
  • OSC reps: advocate for DIME policies relative to government investment and VC velocity.
  • Selected PEOs / requirements leads: to bind WepTac artifacts to programs that can actually obligate, especially for complex MDSs that involve both traditional acquisitions from a Prime and with modern software delivery managed by Pathfinders or at service software factories.
  • Industry (small + large) and VC observers: not to pitch—to react to the artifacts before we publish them (give us the “what breaks in real life” feedback on day 2, not via a sad blog post in month 6).
  • Selected Veterans: Those who built and scaled many of these organizations in government and then entered industry are the most vital repositories of knowledge about what works and what doesn't on both sides of the military/commercial divide. Many of them have seen the Valley of Death from each side of the cliff. Even if they don't have a vote on the board making final decisions, they are the calm, experienced voices of reason in the room—often with expert skills that make their thought leadership invaluable.

Day 0 (virtual, 90 min) — Warm start & baseline

  • Inputs assembled: current AFWERX/NavalX/MIU/AAL/DIU headcounts (mil/civ/ctr), billet authorizations, vacancy rates, contractor rosters, burn rates, SBIR evaluator capacity, and the Pathfinder patch inventory (including tier lists, DVP alumni lists, and "gig" experience).
  • Output: a one-pager “Manning Baseline v0.9” so Day-1 starts with facts, not vibes.

Day 1 — Pin the lane (Interfaces, Rights, Money)

Morning — Dual-Use Intake, by Interface.

  • AFWERX/NavalX/MIU/AAL/DIU present a single intake rubric: every proposal ties to an API FRAGO (from the Software Factory WepTac), not to a platform. If you can publish/subscribe to Thread X schema at SLO Y, you’re “in scope.” No interface? No topic.
  • CDAO drops data SLAs (fields, freshness, lineage) for those threads. Everyone leaves knowing exactly what “done” looks like.
  • Manning Track:
    • Publish the Pathfinder Charter (DoD-wide): mission, authorities, interfaces to DIU/AFWERX/NavalX/MIU/AAL, and the “API FRAGO” ownership model.
    • Standard team template per API thread (this is not a requirement; team size/composition is prone to change based on mission need):
      • 1 Government Pathfinder Lead
        • Manning of this role should go in this order:
          • 1.) An experienced SDDS from the AFSC/MOS/Rate of the supported warfighter
          • 2.) An experienced SDDS from the MDS (or sister service equivalent) of the supported unit
          • 3.) An experienced CVR from the AFSC/MOS/Rate and/or MDS (or sister service equivalent) of the supported unit.
          • 4.) If and only if the above three can't be met, another Pathfinder will be acceptable temporarily
        • The Pathfinder's role is to lead on the TTP + API FRAGO + acceptance authority
      • 1 Product Manager (contractor), 1 Delivery Lead (contractor)
      • 2 App/Model Engineer (contractors), 1 Data/Telemetry Engineer (contractor), 1 Security Engineer (contractor), 1 UX/Designer (contractor)
      • Pizza Rule: 1 government Pathfinder effectively leads 5–7 contractors.
    • Wing/MAJCOM (and sister-service equivalent) sizing rule-of-thumb: 1 Pathfinder thread per priority ops thread; DIU/Service headquarters (HQ) keeps a 5-person “Standards Cell” for API/data SLAs and reciprocity adjudication.

Afternoon — Rights & Risk.

  • Legal + KO/AO Team publish rights menus:
    • Data Rights Menu: Government Purpose Rights (GPR) default; Limited/Restricted allowed if export + portability clauses are met; tool-agnostic, data-first.
    • License Rights Menu: Software factory developments automatically carry unlimited rights by virtue of the government owning the IP. For dual-use technologies, commercial license procurement is the default, though the government will own all generated data. The government never wants to own commercial dual-use IP.
  • DoD CIO/CDAO table a Reciprocity Memo draft: if you inherit controls from an authorized software factory/CSP PaaS and present pipeline-born evidence (SBOM + attestation + test results), reciprocity is automatic across DIU/AFWERX/NavalX/MIU/AAL task orders.
  • KOs/AOs outline Outcome CLIN patterns and decentralized IDIQ structure (Autonomy Prime Phase 3-style, updated based upon the SWP to utilize OTAs), plus how a unit writes a one-page call order against it.
  • Manning Track:
    • Pathfinder “patch” pipeline: selection board rubric, initial selection policies, annual re-cert policies; failure to ship = patch expires.
    • Tours: codify DVP 2.0 (12–26 week industry embed) as required for patch award/renewal; reciprocal non-disclosure agreements (NDAs) + conflict screening baked in.
    • SBIR belly-button: Pathfinders own topic design (interface-first), Phase I/II Stage 2 evals, and Phase III go/no-go based on test-harness pass rates—no slide decks.

Deliverables (by close of business (COB) Day 1):

  • Intake Rubric v0.9 (interface-first).
  • Data Rights Menu v0.5 (with exportability clause language).
  • Reciprocity Memo v0.7 (inheritance + evidence = reuse).
  • Ordering Guide skeleton v0.4 (vehicles, CLIN templates, evaluation factors).
  • Pathfinder Charter v1.0 (signed by DIU + Service/MAJCOM reps, or tabled for Senior Leadership Endorsement on Day 3)
  • Team/Billet Templates v1.0 (government/contractor mix, ratios, labor categories)
  • Patch Pipeline & Re-cert SOP v1.0 (includes DVP)

Day 2 — Break it, then fix it (Injects + Red Team)

Morning — Injects (real world, uncomfortable).

  • Data-ownership inject: hypothetical Maven-style clause collides with a coalition export request. Teams must fix the clause and show the tool-agnostic export path in 4 hours.
  • Cloud portability inject: vendor changes region/provider; show how the interface contract (not vendor affinity) preserves SLOs.
  • Security inject: CVE/KEV drops on a library used by three vendors; run the cATO inheritance play (no bespoke paperwork), prove time-to-patch under the target SLO.
  • Manning Track:
    • Scenario A (two-AOR surge): scale to 6 new API threads in 60 days. Output: Surge Staffing Play (where contractors come from, who approves overtime, how we borrow from the gig board, when DIU can cross-deck Pathfinders across services).
    • Scenario B (budget dip): 8% cut mid-FY. Output: Graceful Degradation Plan (which threads pause, contractor ramp-downs with IP protections, how to retain core Pathfinders).

Afternoon — Markets & Money.

  • VC panel (no slides) reacts to Intake Rubric and Ordering Guide: “Would this cause your portfolio to invest faster into dual-use?” If not, what toggles (contract duration, payment terms, SBIR bridge timing) change that?
  • Comptroller + KOs/AOs convert the toggles into BPAC move rules and obligation timing (so a wing commander can actually fund a thread when it goes green).
  • Pathfinders align the gig-style task board to the Ordering Guide (tasks → Outcome CLINs → test harnesses → payment on pass).
  • Manning Track:
    • BPAC ties to headcount: convert SLO deltas into staffing triggers (green → unlock 2 contractor full-time employees (FTEs); red → hold backfills).
    • Authorities wiring: Memorandum of Agreement (MOA) that DIU owns standards & oversight; services own SBA/service innovation dollars and billets. Clear “two-dads” diagram (think SOCOM + USAF over AFSOC model).
    • Gig-style board operations: eligibility, conflicts, timeboxes (2–12 weeks), acceptance via shared test harness, payment on pass; Pathfinders curate.
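The staffing triggers above reduce to a simple rule once a thread's SLO dashboard reports per-metric pass/fail. A sketch under that assumption (the specific trigger actions are illustrative, drawn from the green/red examples in the Manning Track):

```python
def staffing_trigger(slo_status):
    """Map a thread's per-metric SLO status (metric -> pass/fail) to a
    manning action. Actions mirror the illustrative triggers above:
    all green earns growth, mixed holds backfills, all red pauses."""
    if all(slo_status.values()):
        return "green: unlock 2 contractor FTEs"
    if not any(slo_status.values()):
        return "black: pause thread, retain core Pathfinders"
    return "red: hold backfills"

# A thread meeting latency but missing accuracy neither grows nor pauses.
action = staffing_trigger({"latency": True, "accuracy": False})
```

The point is that headcount decisions become a deterministic function of the same telemetry that pays Outcome CLINs, instead of a quarterly argument.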

Deliverables (by COB Day 2):

  • Ordering Guide v0.8 (now with portability + CVE/KEV/cATO lanes).
  • BPAC Playbook v0.6 (when/how to shift dollars by SLO change).
  • Surge Staffing Play v1.0 (who/when/how)
  • Graceful Degradation Plan v1.0
  • DIU–Service MOA v1.0 (roles; SBA/service innovation unit vs POM'd DIU funds; oversight model)
  • Contractor Workforce Note v1.0 (to accompany Ordering Guide)
  • Gig Board SOP v1.0 (eligible performers, harness, acceptance, payment on pass; aligned to Outcome CLINs)

Day 3 — Publish and wire in (No paper trophies)

Morning — Program wiring.

  • PEOs/MAJCOMs (and sister-service equivalents) map artifacts to current vehicles: which IDIQ gets the dual-use call orders next week; which SBIR topics convert to interface-anchored competitions; which OTA vehicles gain the Outcome CLIN language.
  • AFWERX/NavalX/MIU/AAL publish quarterly buy lists by interface thread (“12 months of demand”), with DIU validating cross-service overlaps and OSC validating by broader DIME alignment. This is the market signal memo—the one VC actually reads.
  • Manning Track:
    • Billet kit: standard cross-service paragraphs (example: the mapping of USAF unit type code (UTC), designed operational capability (DOC) and manpower force element (MFE) codes to a USA counterpart, including their table of organization and equipment (TOE), table of distribution and allowances (TDA) and position descriptions (PDs), and the equivalents in USSF, USN, and USMC), creating a cross-service “Pathfinder Det” template for MAJCOMs/Army Commands/Marine Expeditionary Forces (MEFs)/Numbered Fleets.
    • Promotion/award mapping: Pathfinder patch + professional military education (PME) = stratification guidance; deprecation wins and SBIR Phase III deployments count as major bullets.

Afternoon — Ratify and hand-off.

  • Outbrief to sponsors with signature blocks ready: results of the three days are ratified, and in the case of policy changes required at the Office of the Undersecretary of Defense (OUSD) level, appropriate leadership is briefed in real time for an execution decision.
  • Legal signs the Data Rights Menu; CIO/CDAO sign the Reciprocity Memo; KOs/AOs sign the Ordering Guide.
  • A tiny tiger team (named in the closing slide) owns pushing the artifacts into SAM.gov mods, KO/AO deskbooks, and front-door websites in the next 10 business days. If nobody’s named, it didn’t happen.
  • Manning Track:
    • Sign: Pathfinder Charter, MOA, Billet Kit, Patch Pipeline, Gig Board SOP.
    • Name the Manning Tiger Team (DIU + one rep per service) with a 60-day tasker to:
      1. stand up 3 cross-service Pathfinder detachments,
      1. push the billet kit through human resource (HR) systems,
      1. populate the first 100-person contractor bench aligned to interface threads.

Deliverables (by end of Day 3):

  • Dual-Use Ordering Guide v1.0 (signed).
  • Data Rights Menu v1.0 (signed).
  • Reciprocity Memo v1.0 (signed).
  • BPAC Playbook v0.9 (comptroller-vetted).
  • Market Signal Memo (4Q horizon) listing interface threads, SLO targets, and expected award cadence.
  • Tiger-Team tasking order with dates and systems to touch (vehicles, portals, deskbooks).
  • Billet Kit v1.0 (ready for HR upload)
  • Promotion/Awards Guide v1.0
  • Manning Tiger-Team Order (names + 60-day actions)

What this WepTac explicitly optimizes (and what it kills)

  • Optimizes:
    • Interface-first intake → vendors can compete without bespoke rewrites.
    • Rights/portability → government retains freedom to maneuver; vendors retain sane economics.
    • Reciprocity by evidence → no Kafka run; SBOM + attestation + test = go.
    • Outcome CLINs + decentralized IDIQs → units can buy effects, not artifacts.
    • BPAC agility → commanders shift dollars the same week a thread goes green.
    • Market signaling → investors see a 4Q demand horizon and keep the dual-use pipeline funded.
  • Kills:
    • Pitch theater masquerading as transition. If it doesn’t map to an interface thread + SLO + vehicle, it’s a talk.
    • Forked codebases as a condition of entry. Your cATO inheritance + reciprocity means “bring your code once.”
    • Paper-based “security.” CVE/KEV/cATO and pipeline evidence decide, not a binder.
    • Data lock-in by accident. The clause set forces tool-agnostic export + portability.

How this keeps AFWERX, NavalX, MIU, AAL, and DIU afloat (and pointed the same way)

  • Clarity of roles: DIU sets the joint rails (standards, reciprocity, portability), AFWERX, NavalX, MIU, and AAL run service-aligned intake and scaling, Pathfinders keep the interfaces/data SLAs honest across ops.
  • Budget oxygen: the BPAC Playbook turns performance into money (green SLOs move dollars); the Ordering Guide lets units obligate quickly; SBIR topics are threaded to interfaces, not novelty.
  • Talent oxygen: companies don’t have to bifurcate codebases to work with us; outcomes pay fast; the quarterly Market Signal gives VC and product leads a reason to stay on the DoD glidepath when the commercial quarter gets bumpy.
  • Political oxygen: signed artifacts (Ordering Guide / Rights / Reciprocity) become institutional commitments that survive rotations and administration pivots.

Cells & checklists (so the three days don’t wander)

  • Commercial Tech On-Ramp Cell: validates interface conformance, runs a 30-minute “fit” test with synthetic vectors.
  • Contracting & Pricing Cell: owns the Ordering Guide text, CLIN exemplars, fair-and-reasonable logic, small-biz protections.
  • Risk/ATO/Reciprocity Cell: keeps the cATO inheritance pack tight; converts CVE/KEV and SBOM outputs into pass/fail language.
  • Portfolio & PE/BPAC Cell: maps interface threads to funding lines and writes the BPAC triggers (what moves, who signs).
  • Industry/VC Cell: red-teams the artifacts for viability and incentive alignment (not demos).
  • Data & API Cell (Pathfinders + CDAO): publishes the API FRAGO and data SLA deltas each evening.

Pre-reads (mandatory):

  • Latest API FRAGOs / data SLAs from the Software Factory WepTac.
  • Draft Reciprocity Memo and Data Rights Menu.
  • Current SBIR topics and vehicles inventory (what’s live, what can be amended).
  • Active BPAC pools and FY execution targets.

Second-order effects (the quiet wins)

  • CMMC theater withers: because reciprocity + pipeline evidence becomes the de facto path, not another certification safari.
  • MOSA becomes muscle memory: vendors win by conforming to interfaces; government wins by swapping in the lowest cost-per-effect component without program-level hostage negotiations.
  • SBIR stops being a cul-de-sac: topics are born “on the thread,” and Phase III equals “deployed to the thread,” not “we got a memo.”
  • Personnel sanity: AFWERX/NavalX/MIU/AAL staffs stop drowning in bespoke exception handling; KOs/AOs stop writing snowflake clauses; Pathfinders stop playing translator for twenty different “almost compatible” proposals.

Lessons we will drag into the sunlight

Why lightning doesn’t stay in the bottle. We begin these units with a handful of people with a founder mindset, often at the risk of career suicide. This pattern repeats throughout history: in the 1970s and 1980s, USA Special Forces (SF) was a dead-end career path, with almost no promotion opportunity to even O-7, let alone senior USA leadership, and yet SF was arguably the most creative it has ever been. Likewise, assignment to DIU in 2016 was often career suicide, yet it was those innovators who ushered in the policy framework that makes 3OS a possibility.

Ultimately, lightning was a person-bound, environment-specific workaround—not an interface, not a contract lane, not a reusable inheritance pack. The Innovation WepTac translates “we hacked it” into: (1) a reproducible schema; (2) a CI/CD-attested control set (cATO); (3) a contracting pattern anyone can order from; (4) a Pathfinder rotation that protects continuity. It owns the manning process to ensure that the cookie-cutter "AFSC/MOS/Rate" based personnel systems can't do what they've always done and brutally murder innovation.

When the pipeline narrows to pitch theater and unfocused grants, you get output without outcomes. When the pipeline is yoked to joint interfaces, service factories, and production SLOs, you get effects. The WepTac will publish an Ordering Guide that looks like Autonomy Prime SBIR P3 at its best—decentralized IDIQ, clean ordering, objective intake criteria, outcomes tied to SLO telemetry—exactly the stuff that let a vendor like Rise8 move with speed on real government outcomes because the lane was clear and repeatable. Contrast that with Maven’s data ownership posture (government-executed terms that trapped data gravity in a single tool stack), and you get the failure mode in one sentence: we optimized for a platform purchase, not a data contract. The WepTac fixes that at the source: data SLAs first, tool selection second.


Pathfinders: patch, not mascot

Treat Pathfinder like a weapons school-grade patch community, not a ribbon you pin on a few motivated majors. The patch means you can close the loop (ops ↔ telemetry ↔ code ↔ deploy ↔ TTP) under budget law and with real interfaces. Concretely:

  • Billets & Composition.
    • Line unit: Every unit has at least one Pathfinder responsible for the API threads associated with the unit mission and their API FRAGOs; that Pathfinder is also the SDDS representative for any contracting actions the unit either executes or sponsors.
    • At AFWERX, NavalX, MIU, AAL, and DIU, almost every AO (including 100% of PMs) will be Pathfinders.
  • Rotations. Pathfinders cycle through DIU/OSC/CDAO, gaining joint credit for these assignments, while also cycling through service factories, service innovation organizations, and operational units throughout their careers. Everyone does at least one industry tour (more below).
  • Scope. They own the API FRAGOs, data SLAs, inheritance packs, and the ordering guides in their mission families (ISR, strike, mobility, C2). They don’t code every feature; they set the lanes and pay for speed with BPACs.[21],[22]
  • Tasking. Commanders ask Pathfinders for software effects like they ask for fires: “I need a model patch to cut false positives 20% in 30 days.” The response is a plan in SLOs, not a slide.[92],[128],[129],[130]
  • Credible currency. Promotions and awards key off time-to-field (MTA), T2D (JADC2), time-to-patch (CVE/KEV), and tech debt retired—the same metrics Section 10 attached to money.[9],[20],[23],[24],[27],[47],[73],[74]

Manning is the #1 failure mode. We keep trying to staff innovation with “available bodies,” then wonder why it collapses in month four. Solve that with authorized billets and rotation orders—the same adult approach we take to weapons school, OPFOR, and test squadrons. When DIU has oversight (standards, rotations, joint priorities) and the service holds equities (SBIR/STTR and factory execution), the “two dads” model becomes an enablement, not a custody battle.[49],[108],[111],[112],[113],[114],[115],[116],[117],[118],[119]


Industry tours: bring back DVP Fellowship as a Pathfinder pipeline

Industry tours are not tourism. They are supply-chain reconnaissance for software: how the market learns, ships, and secures at scale. Reinstate a DVP-style fellowship as a Pathfinder requirement: 12-26 weeks embedded at a dual-use company or VC portfolio ops team, sleeves rolled up (no shadowing), returning with a published memo: (1) how they staff CI/CD; (2) their incident/CVE/KEV response clock; (3) how their product teams set and measure SLOs; (4) which of our interfaces they could meet without forking code. Fold those memos into the WepTac library and DIU front-door guides. The payoff: Pathfinders speak both dialects—warfighting and software business-ops scale—and can translate, which is the scarcest skill of all.


Pathfinders as the belly-button for SBIR evaluations

If SBIR/STTR is our seed-corn, Pathfinders should be the evaluation and transition nerve.[111] That means:

  • Topic Shaping. No topic goes out without a data contract attached: fields, event cadence, error budget, target SLOs, cATO inheritance pack required.[4],[8],[45],[46]
  • Phase 3 down-selects by telemetry. Vendors run against synthetic test vectors and MOSA conformance harnesses (FACE/UCI/CMOSS);[29],[30],[31] we score latency, accuracy, availability, not slide craftsmanship.
  • P3/Ordering Guide. Every promising Phase II/III is slotted into a decentralized IDIQ (Autonomy Prime-style) with a clean ordering guide so units can buy effects the next day. If we can’t buy it quickly, we stop “piloting” it.
  • Exit criteria = transition criteria. An SBIR win is the day it ships into a JADC2 thread or edge kit with cATO evidence attached—not the day we sign a Phase III.

This is where DIU oversight matters: DIU ensures the lanes and inheritance are the same everywhere; services ensure topics serve their mission; Pathfinders keep the glue in place.


The gig-style economy—inside the fence

We won’t out-recruit Silicon Valley, but we can route work differently. The “gig” model here is not a race to the bottom; it’s a way to atomize real mission work so the best talent—inside and outside government—can swarm it without being trapped in monolithic FTE constructs.

What it looks like:

  • A cross-service board of discrete tasks posted weekly (e.g., “Build an adaptor from schema v0.4 to v0.5 for the ISR thread,” “Reduce inference latency by 15% on edge kit X,” “Kill legacy gateway Y by writing the new MOSA-compliant proxy”).
  • Each task has an Outcome CLIN, a timebox, a test harness, and SLOs. Completion is binary: the harness passes and SLOs are met.
  • Eligible performers: service factories, small businesses (SBIR-qualified), approved independent contractors (such as those from the Letters of Marque 2.0 performers), and internal Pathfinder dev cells.
  • Security & compliance are pre-cleared by inheritance (an authorized software factory, CSP PaaS enclaves).[71],[72],[107],[108],[109]
  • Payments are fast, governed by the same ordering guide used in Autonomy Prime-style vehicles.
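The “completion is binary” rule is easy to make concrete. Here is a minimal Python sketch of a gig-board task; the task name, metric names, and thresholds are illustrative, not drawn from any real ordering guide:

```python
from dataclasses import dataclass

@dataclass
class GigTask:
    """One gig-board task: an Outcome CLIN paid only on a binary pass.
    Names and thresholds are illustrative, not from any real vehicle."""
    task_id: str
    timebox_days: int
    slos: dict  # metric -> (operator, target), e.g. {"p95_latency_ms": ("<=", 120)}

    def completed(self, harness_passed: bool, telemetry: dict) -> bool:
        # Completion is binary: the test harness passes AND every SLO is met.
        if not harness_passed:
            return False
        for metric, (op, target) in self.slos.items():
            value = telemetry.get(metric)
            if value is None:
                return False  # no telemetry, no payment
            if op == "<=" and not value <= target:
                return False
            if op == ">=" and not value >= target:
                return False
        return True

task = GigTask("ISR-adapter-sprint", timebox_days=14,
               slos={"p95_latency_ms": ("<=", 120), "availability": (">=", 0.999)})
print(task.completed(True, {"p95_latency_ms": 95, "availability": 0.9995}))   # True
print(task.completed(True, {"p95_latency_ms": 140, "availability": 0.9995}))  # False
```

The point of the sketch: there is no partial credit and no narrative field. The harness verdict and the telemetry are the whole contract.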

Why it matters:

  • Surge on what matters (a CVE/KEV drops, a patch is needed now) without rebucketing an entire program.[47]
  • Retention—for both uniform and contractor talent—because the work is clean, time-boxed, and meaningful.
  • Transparency: commanders can see which vendors and teams actually move SLOs…and move BPACs accordingly.[21],[22]

Guardrails:

  • No “spec work.” Every task is funded; every harness is provided.
  • MOSA and API FRAGOs keep the swarm coherent; if your output doesn’t fit the contract, it doesn’t land.[28],[29],[30],[31]
  • DoDI 8140 maps skill tracks so we don’t invent bespoke HR mini-empires.[93]

Contractors, wages, and wellness (clarity beats hand-wringing)

Reiterating Section 10’s plain talk: software developers should mostly be contractors. They earn market rates, they deliver against SLOs, they do not need to pass a PT test to ship a patch that closes a CVE/KEV exploit chain.[47] We keep uniform billets where bodies must seize objectives and survive kinetic shock. We keep contractor billets where brains must out-iterate adversary code.


Contracting: patterns we canonize (and pitfalls we kill)

Canonize:

  • Decentralized IDIQs with clean ordering guides: objective intake criteria, price-to-performance factors, and Outcome CLINs tied to SLO telemetry. Works for autonomy, data, visualization, and platform tooling.
  • Reciprocity baked in: If it inherits controls on an authorized software factory, reuse is automatic across task orders.[107],[108],[109]
  • MTA + Software Acquisition Pathway for anything that iterates weekly; we stop pretending traditional milestone gates can referee DevSecOps.[9],[73],[74]
  • SBIR phase gating that mirrors production: phase deliverables equal interface conformance + deployed effect (not “final report submitted”).

Kill off:

  • Data lock-in as a feature. The Maven-era mistake was treating a tool as the program rather than the data contract as the program. Fix it by writing the SLA for data first; let multiple tools compete in the thread.
  • Forked codebases as a condition of doing business. The entire WepTac inheritance pack exists so vendors can ship their one codebase across DoD enclaves without rewriting it for every PEO.[4],[8],[71],[72],[107],[109] Dual-use software shouldn't behave any differently inside the wire than it does commercially, so long as it meets CVE/KEV/SBOM constraints.
  • PDF evidence as the artifact. Evidence is pipeline-produced (attestation, SBOM, test results), not manually assembled.[45],[46]

Metrics that matter (and how Pathfinders move them)

We’ve already defined the enterprise scoreboard: time-to-patch (CVE/KEV), time-to-field (MTA), T2D (JADC2), cost-per-effect (CCA/sUAS), model SLOs.[9],[23],[24],[32],[33],[47],[73],[74] The WepTac publishes per-thread targets and the telemetry plumbing to read them. Pathfinders own the play calls that move those numbers in-quarter:

  • Shorten patch by pre-positioning pipelines and using the gig board for keystone libraries; enforce SBOM-to-CVE/KEV crosswalk alerts.[46],[47]
  • Shorten field by keeping everything on the Software Pathway and funding adapters via BPAC shifts (no “replatforming” to meet a single program’s taste).[9],[21],[22]
  • Shorten decision by killing low-value data, improving precision/recall on the critical stream, and pushing error budgets into ops—explicitly.[23],[24]
  • Lower cost-per-effect by swapping components through MOSA to the cheapest performer that meets the SLO and survives red-team injects.[28],[29],[30],[31],[42],[43],[44]
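The SBOM-to-CVE/KEV crosswalk alert in the first play is simple enough to sketch. This toy Python version uses simplified dicts in place of real SPDX/CycloneDX documents and CISA's published KEV feed, and a placeholder CVE identifier:

```python
def kev_crosswalk(sbom_components, kev_catalog):
    """Cross-reference an SBOM's components against a KEV-style catalog.
    Both inputs are simplified; real SBOMs are SPDX/CycloneDX documents and
    the real catalog is CISA's published KEV JSON feed. A hit starts the
    patch clock (72h used here purely as an example SLO)."""
    alerts = []
    for comp in sbom_components:
        for entry in kev_catalog:
            if (entry["product"], entry["version"]) == (comp["name"], comp["version"]):
                alerts.append({"component": comp["name"],
                               "cve": entry["cve"],
                               "patch_slo_hours": 72})
    return alerts

sbom = [{"name": "libfoo", "version": "1.4.2"},   # hypothetical components
        {"name": "libbar", "version": "2.0.0"}]
kev = [{"cve": "CVE-0000-0000", "product": "libfoo", "version": "1.4.2"}]  # placeholder ID
print(kev_crosswalk(sbom, kev))
```

The fidelity of this alert is entirely a function of SBOM fidelity, which is why the lanes fail builds that ship without one.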

Every quarterly WepTac ends with a “Green Board”: which threads went green, what we killed to get there, where BPACs moved, which contracts enabled it, and which commanders felt the deltas.


Risks and mitigations (same enemies, new armor)

  • Risk: WepTac becomes a trade show.
    Mitigation:
    No artifact, no slot. If your session doesn’t produce an API FRAGO change, a data SLA update, or an ordering-guide entry, it’s a lightning talk, not a WepTac event.
  • Risk: Gig board turns into chaos.
    Mitigation:
    Tasks anchor to API FRAGOs and MOSA. One interface to rule the swarm. Outcome CLINs + test harnesses prevent “artistic interpretations.”
  • Risk: Pathfinder burnout.
    Mitigation:
    Virtualized assignment rotations (without uprooting families), industry tour credit, promotion currency tied to the real scoreboard. Protect their calendars like we protect flying hours.
  • Risk: DIU capture or bottleneck.
    Mitigation:
    Keep DIU thin (platforms, standards, tests), keep BPACs commander-controlled, make all SLO dashboards visible to operational units.[21],[22],[49]
  • Risk: CMMC theater drags us back.
    Mitigation:
    Anchor to NIST 800-171, SSDF, SBOM, ZTA, and CVE/KEV. Evidence = pipeline. Reciprocity = default.[7],[45],[46],[47],[107],[109],[120]

12) ATO Revolution: RMF vs. STPA vs. ARCOS

We’ve treated Authority to Operate like a security DMV: take a number, fill a binder, get a sticker. That posture made sense for monolithic, rarely-updated systems. It breaks when our effects are software and our maneuver is iteration. The fix is not to “abolish RMF” or to “waive security.” The fix is to separate governance from control and move the center of gravity from point-in-time attestations to runtime, hazard-based safety. In practice: RMF gives the compact (who’s accountable for what), cATO/pipelines give the inheritance (how teams ship without re-proving the universe), and STPA/ARCOS give the guardrails (how we keep missions safe while code moves at campaign speed).[4],[8],[12],[13],[14],[15],[16]

The three-layer model (governance → pipeline → runtime)

Layer 1 — Governance (RMF, minimally sufficient):
Use RMF to define mission families, authorization boundaries, and who owns risk—not to micromanage software change. RMF's control families and roles stay, but we collapse paperwork into machine-readable artifacts (more below). We issue family authorizations (ISR family, strike family, mobility, C2) with scope statements that anticipate continuous delivery and enumerate what inherits from whom.[12],[13]

Layer 2 — Delivery (cATO with pre-approved patterns):
Units don’t earn cATO one app at a time; they join a hardened pattern: a pre-vetted CI/CD pipeline with signed base images, IaC, scanners, test harnesses, and provenance/attestation baked in (think a COCO equivalent to Platform One's “Big Bang” plus your service’s platform libraries). Controls are pre-mapped to pipeline steps. If you ship through the pattern, you inherit the bulk of controls; your delta is feature-specific evidence. CSP PaaS provides the substrate options (multi-cloud) without re-authorizing every time we redeploy.[4],[8],[71],[72],[107],[108],[109]

Layer 3 — Runtime safety (STPA/ARCOS):
For each mission family, we run STPA to identify hazards (e.g., “mis-identified target as hostile,” “lose positive C2 of sUAS swarm,” “expose ISR sources”), specify safety constraints, then implement ARCOS-style runtime controls to enforce them (pre-flight checks, on-wire policy gates, watchdogs, kill-switches, and post-action monitors tied to telemetry). In other words: controls that matter operate during the mission, not just during an authorization review.[14],[15],[16]

This split keeps RMF as governance (lightweight, durable), cATO as the shipping lane (fast, inherited), and STPA/ARCOS as the real-time seatbelt that prevents mission-class incidents while we push code.


Evidence is code: from PDFs to proofs

Paper screenshots don’t scale. Evidence should be emitted by the pipeline and the system, not written by a staffer:

  • SBOMs & provenance: Every artifact carries a signed SBOM (either Software Package Data Exchange (SPDX) or CycloneDX—we're sticking to industry standards but not picking winners when interoperability is a feature, not a bug) and supply-chain levels for software artifacts (SLSA)-grade provenance. Store once; reuse across programs via reciprocity.[45],[46]
  • CVE/KEV-driven patch SLOs: Map CISA KEV entries and MITRE CVE entries to family-level SLOs (e.g., “KEV item exploited in wild → patch within 72h”). Dashboards show time-to-patch by family; slipping SLOs trigger risk escalations.[47]
  • Control-as-policy: Translate control intent into machine-enforceable policies (e.g., open policy agent (OPA)/Rego rules) in the pipeline and in runtime gateways. The “evidence” is the passing policy decision recorded with a cryptographic attestation.[8]
  • RMF artifacts as JavaScript Object Notation (JSON): System security plan (SSP), security assessment report (SAR), plan of action & milestones (POA&M) become versioned JSON/YAML objects in the repo; updates roll with code. Auditors diff, don’t spelunk.[8],[12],[13]
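As a rough illustration of “the evidence is the passing policy decision,” here is a Python stand-in for an OPA/Rego gate. The artifact and policy fields are hypothetical, and a real pipeline would sign the record with a key rather than merely fingerprint it:

```python
import hashlib
import json

def evaluate_policy(artifact: dict, policy: dict) -> dict:
    """Minimal control-as-policy sketch (a stand-in for an OPA/Rego gate).
    The output is machine-readable evidence: the decision itself plus a
    digest an auditor can recompute. All field names are illustrative."""
    passed = (artifact["min_tls"] >= policy["min_tls"]
              and artifact["sbom_present"] == policy["require_sbom"])
    record = {"artifact": artifact["name"], "policy": policy["id"], "passed": passed}
    # Fingerprint the decision so it can be verified later (a real system signs it).
    record["digest"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()
    return record

decision = evaluate_policy(
    {"name": "isr-adapter:0.5.1", "min_tls": 1.3, "sbom_present": True},
    {"id": "lane-web-api-v2", "min_tls": 1.2, "require_sbom": True})
print(decision["passed"])  # True
```

Note what is absent: no prose, no screenshot. The record diffs cleanly in a repo alongside the SSP/SAR/POA&M objects described above.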

Second-order effect: once evidence is code, contractors can sell outcomes, not slideware (pay for passing test-harness + policy checks), and reciprocity becomes routine (import the attestation instead of re-authoring prose).


Pre-approve the lanes, not each car (cATO by design)

Stop treating cATO as a trophy case. Treat it as a library of pre-approved lanes:

  • Patterns library: Web/API service, edge/embedded, model-ops, batch data pipeline, sUAS firmware lane, etc. Each lane comes with reference IaC, base images, scanners, e2e tests, ATO mapping, and integration shims for an authorized software factory/CSP PaaS.[8],[71],[72],[107],[108],[109]
  • Inheritance by default: If the lane doesn’t change, inheritance holds. New apps declare which lane they use; their authorization package is basically: inputs, outputs, deltas, hazards.
  • API FRAGOs for risk: AOs issue API FRAGOs—policy changes at the lane interface (e.g., “raise minimum TLS version,” “ban X library version family-wide”). Units comply without re-papering.
  • Portability clause: Lanes must run unchanged on at least two CSP PaaS clouds or an on-prem Kubernetes (K8s) target; portability is tested by CI/CD. If you can’t move the lane, you can’t be a lane.[71],[72]

Second-order effect: no forked codebases for “the DoD version.” We adapt the lane once; commercial teams keep their mainline. That’s how we get days, not months. (This also neuters the CMMC cottage industry that tries to swap runtime discipline for consulting hours; we buy controls in code, not binder compliance. Use NIST 800-171/CMMC for supplier posture, not as an excuse to refactor everyone’s development flow.)[120],[131]


ARCOS in practice: hazard budgets and mission interlocks

ARCOS is the operational layer: controllers and monitors that enforce the STPA constraints as the mission runs.

  • Hazard budgets: Each mission thread carries budgets (max mis-identified risk, max lost-link window, max drift in model accuracy). Breaches auto-trip mitigations (de-rate autonomy, require human-on-the-loop, abort).
  • Interlocks & arming checks: Before a strike model arms, it must satisfy data freshness (ISR timestamp), model calibration (accuracy drift < X), source health (no known compromise), and ROE state—all machine-checked.
  • Watchdogs: Out-of-family behavior (e.g., swarm radio patterns inconsistent with plan) triggers fail-safe behaviors (loiter, return, or safe land) and opens an incident that feeds the TTP loop.
  • Kill-switch policy: Define who can kill, under what telemetry conditions, and where the kill signal rides (redundant paths). Test it quarterly.[25],[26]
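A machine-checked arming interlock along these lines might look like the following Python sketch; the thresholds and trip responses are illustrative, not doctrine:

```python
from datetime import datetime, timedelta, timezone

def arming_check(now, isr_timestamp, drift, source_compromised, roe_weapons_free,
                 max_age=timedelta(minutes=5), max_drift=0.03):
    """Illustrative STPA-derived arming interlock. Every safety constraint
    must hold before the strike model arms; any failure returns the tripped
    mitigations instead of an arm signal. Thresholds are made up."""
    trips = []
    if now - isr_timestamp > max_age:
        trips.append("stale ISR: require human-on-the-loop")
    if drift > max_drift:
        trips.append("model drift over budget: de-rate autonomy")
    if source_compromised:
        trips.append("source health failed: abort")
    if not roe_weapons_free:
        trips.append("ROE state: hold")
    return ("ARM", []) if not trips else ("SAFE", trips)

now = datetime.now(timezone.utc)
print(arming_check(now, now - timedelta(minutes=2), 0.01, False, True))  # ('ARM', [])
print(arming_check(now, now - timedelta(minutes=9), 0.05, False, True))
```

The shape matters more than the numbers: the budgets are parameters an AO can turn during a crisis, with the deltas visible in telemetry.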

Second-order effect: authorizations become living—you can “turn the knob” on hazard budgets during a crisis (e.g., accept higher drift for faster sortie rates) with visible risk deltas and a signature trail.


ATO playbooks by mission family (sample outline)

Ship short, brutally concrete playbooks (10–15 pages each) per family:

  1. ISR family — Hazards: mis-tagging, deanonymizing sources, data latency. Controls: source reputation service, freshness gates, privacy budgets, replay filters. Evidence: SBOMs, model cards, red-team results.
  2. Strike family — Hazards: mis-identification, collateral damage estimation (CDE) errors, rules-state drift. Controls: arming checks, dual-channel consent, geofence enforcement, post-action audit hooks.
  3. Mobility — Hazards: load mis-calc, route hijack, fuel estimation error. Controls: dual-sensor checks, route integrity proofs, predictive maintenance fail-overs.
  4. C2 — Hazards: stale common operating picture (COP), loop saturation, auth bypass. Controls: message-bus SLAs, priority preemption, zero-trust enforcement around crown-jewel topics.[5],[7],[23],[25],[26]

Each playbook ships with: lane selection, policy bundles (OPA rules), hazard budgets, red-team scripts, and test harness uniform resource locators (URLs). (If it isn’t in code, it isn’t real.)


Civilian market as the arsenal (no forks, no re-platforms)

Everything above is engineered to pull dual-use software across the river without forcing vendors to fork or rebuild their stack:

  • Contract for outcomes: CLINs that pay on harness passes, policy attestation, and SLO deltas (e.g., time-to-patch vs CVE/KEV; inference latency; drift control).[45],[46],[47]
  • Middleware, not rewrites: Use contracted platform integrators to build the thin shims that let a modern containerized app ride the cATO lane. We don’t ask Google-scale vendors or SBIR smalls to re-architect; we absorb via lanes.[4],[71],[72],[107],[108],[109]
  • Reciprocity by default: If a partner passed the same lane elsewhere, import the attestation (plus delta) and move. (Yes, that means telling the compliance cottage industry no when it wants to re-grade the same code for a fee.)

Second-order effect: investors keep funding R&D we need because time to revenue in defense is measured in weeks, not 18-month ATO marathons. (See Section 10’s BPAC model—effects-based dollars reward code that moves the needle.)


Red-team cadence: break it on purpose, fix it in code

Quarterly family exercises fold into OT&E: swarm-on-swarm for sUAS, degraded comms for C2, deception runs for ISR. We measure:

  • Kill-chain to patch: time from red-team finding → CVE/KEV/SBOM correlation → policy update → patch deployed (target: days).
  • ARCOS trip rate: did the watchdogs catch the problem before a mission hazard manifested?
  • Reciprocity friction: time to import an attestation from another command or ally.

The key: every exercise updates the lane (policy bundle, tests) so the entire family gets better, not just the one unit that ran the drill.[25],[26],[82],[83],[84],[85]


What we stop doing (anti-patterns to kill)

  • Binder theater: no more PDF farms. If a control can’t be expressed in code/policy/telemetry, it’s probably not controlling anything meaningful.
  • ATO per app, per base, per cloud: this multiplies nothing but cycle time. Authorize lanes and families, not snowflakes.
  • CMMC as gate to ship: use NIST 800-171/CMMC to manage supplier posture; never as a substitute for runtime controls. Don’t make “compliance” the product.[120],[131]
  • Forked “DoD” builds: portability is a first-class acceptance test; if your approach creates a fork, change the approach.
  • Security freezes: pausing change increases risk when adversaries operate continuously. We manage risk in motion.

90/180-day implementation, no drama

By Day 30

  • Pick two mission families (e.g., ISR, sUAS) and declare Family Authorizing Officials.
  • Stand up two lanes from the patterns library (API service; sUAS firmware).
  • Publish CVE/KEV patch SLOs and initial dashboards.[47]

By Day 90

  • Ship Playbook v1.0 for both families (hazard budgets + policy bundles).
  • Migrate three programs onto the lanes; import at least one reciprocity attestation.
  • Run one red-team event; convert gaps into policy/tests.

By Day 180

  • Add model-ops lane; integrate model cards/drift monitors.
  • Close the loop with contract vehicles that pay on harness/policy SLOs (Outcome CLINs).
  • Kill at least one certificate that doesn’t move risk; announce the Kill-Cert Memo (a public sign that paperwork doesn’t outrank safety).

Metrics that matter (AOs can sign to)

  • Time-to-field (lane) after merge (target: hours → days).
  • Time-to-patch (CVE/KEV) per family (target: 72h for exploited CVEs/KEVs; exceptions logged).[47]
  • Policy coverage (% of applicable controls enforced by code/policy vs prose).
  • Reciprocity reuse rate (% of authorizations imported rather than re-authored).
  • Drift dwell (mean time a model exceeds drift budget before remediation).
  • Incident proximity (# of ARCOS trips that prevented a mission hazard vs # of post-facto incidents).
  • Portability health (successful redeploys across two CSP PaaS targets per quarter).[71],[72]

Worked example: sUAS firmware agility (lane + ARCOS)

  • Lane: sUAS firmware CI/CD with signed toolchain, reproducible builds, static/dynamic analysis, hardware-in-the-loop bench, and fieldable canary ring.
  • STPA hazards: “loss of link → flyaway,” “spoofed GPS → prohibited area,” “payload mis-arm.”
  • ARCOS controls: watchdog timer on link, geofence enforcement, dual-path arming checks, IMU/GPS sanity checks, auto-return on anomaly.
  • Evidence: SBOM + provenance, canary flight logs, watchdog event telemetry, CVE/KEV patch SLOs met.
  • Contract deliverable: pass the firmware lane harness across three hardware variants and two ground stations; demonstrate portable redeploy on a second CSP PaaS edge environment without code changes. Pay on pass.[39],[42],[43],[44],[45],[46],[47],[71],[72]

Bottom line: RMF is the contract, cATO is the lane, STPA/ARCOS is the seatbelt. We don’t waive security—we operationalize it. We stop rewarding PDFs and start rewarding portable code, enforceable policy, and telemetry that proves safety while moving fast. That’s how we get from “ATO as a tax” to “ATO as a combat multiplier.”


13) Data as a Strategic Asset (JADC2, CDAO)

We talk about “owning the high ground.” In this war, the high ground is data you can trust at the speed you can use it—and the models that ride on it. Everything else is just terrain. Part 1 argued the Third Offset only makes sense if we treat information, software, and networks as the decisive mass we can leverage, because the US economy is now the dominant information economy in the world. Part 2 showed adversaries already run whole-of-state campaigns across that mass. Parts 3–5 walked through the mechanics: acquisition policies, realities of requirements, open interfaces, safety in code, and persistent, partnered cyber campaigning. This section does the boring, essential work: make mission data the funded, governed, and measured core of combat power—and wire JADC2 so it’s a contract, not a vibe.[3],[23],[24],[122] Remember: JADC2 exists to make data movement more reliable and useful at the tactical edge, not to consolidate command in a central location.

JADC2’s magic is in the data, not in letting a 4-star persist in the fantasy of being a field-grade officer.

Mission Data as a Program of Record (POR on purpose)

If you don’t fund it, you don’t own it. Today, “data” shows up as an unfunded annex or a science project attached to a platform. Flip it: stand up Mission Data PORs for JADC2 where the deliverables are ingestion pipelines, cleaning/labeling capacity, lineage, quality scores, and access pathways—not a pile of comma-separated values (CSVs) on a share drive.

  • Scope (clear and bounded): by mission family (ISR, strike, mobility, C2), not by platform. That ensures F-35 imagery and MQ-9 full-motion video (FMV) flow through the same ISR data thread with shared contracts and tooling.[23],[24]
  • Budget shape: sustain the data like you sustain fuel—base O&M for pipelines and catalogs; RDT&E to evolve labels/ontologies and add new sources; procurement for commercial feeds/services when it beats building. The National Cybersecurity Strategy backs this posture; CDAO is chartered to do exactly this alignment job.[3],[122]
  • Outputs you can grade: freshness (P95 age), completeness (coverage vs target set), label fidelity (inter-rater κ/F1), lineage depth (provenance hops), and T2D for the supported thread. Bonuses (literally, Outcome CLINs) pay when the metric moves.[21],[22],[132]
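Two of those graded outputs—freshness (P95 age) and T2D—reduce to a few lines of arithmetic over telemetry. The numbers below are invented for illustration:

```python
import statistics

def p95(values):
    """Nearest-rank 95th percentile (one common convention; pick one and keep it)."""
    s = sorted(values)
    return s[max(0, round(0.95 * len(s)) - 1)]

# Hypothetical telemetry: data age in seconds at read time, and
# sensor-publish -> decision-authorization gaps (T2D) in seconds.
ages_s = [12, 45, 30, 300, 22, 18, 75, 60, 41, 29]
t2d_s  = [95, 120, 80, 400, 110, 90, 130, 85, 100, 115]

print("freshness P95 (s):", p95(ages_s))
print("T2D median (s):", statistics.median(t2d_s))
```

Notice how a single stale feed (the 300-second outlier) dominates the P95—which is exactly why the SLA grades the tail, not the average.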

Second-order effect: once “data work” is a funded POR with SLAs, commanders can prioritize it in the fight the way they prioritize tankers. No more “sorry, the contractor who knew that extract, transform, load (ETL) left.”


This is still easier than reality. Currently, none of these waveforms are compatible, nor is the data, and each maneuver element is represented just once here, not as the huge number of (often incompatible) variants of each element that actually exist. (US DoD)

JADC2 as Data Contracts, Not Briefs

JADC2 is often described in sweeping diagrams that make everyone nod. We need less nodding, more publishing. Treat JADC2 as contract-first data engineering:

  • Publish joint data models per mission family (entities, relationships, enumerations, error codes). Stable cores + versioned extensions. No bespoke schemas approved unless you show a migration path into the core. (You can be creative; you can’t be incompatible.)[23],[24]
  • Declare the bus: topics, message shapes, quality of service (QoS)/latency tiers, ownership, retention windows. “API FRAGOs” (from Section 2/12) update these contracts as the fight evolves—e.g., add a field for new EW measurements, raise the minimum sample rate, deprecate legacy fields with a sunset clock.[23],[24]
  • Conformance as a billable deliverable: integrators get paid to implement adapters that conform to the joint contract. They do not get paid to lobby for their proprietary schema.
  • Schema evolution discipline: changelogs, semantic versioning, deprecation schedules, and gate checks in CI/CD so schema drift breaks the build in lab, not over a contested airfield.
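A conformance gate that breaks the build on schema drift can be as small as the sketch below; the contract shape (required fields plus deprecated fields on a sunset clock) and all field names are hypothetical:

```python
def conforms(message: dict, contract: dict) -> list:
    """CI gate: does a producer's message conform to the versioned joint
    contract? Returns violations; a non-empty list fails the build in lab.
    Contract and field names are illustrative, not a real JADC2 schema."""
    violations = []
    for field_name, ftype in contract["required"].items():
        if field_name not in message:
            violations.append(f"missing required field: {field_name}")
        elif not isinstance(message[field_name], ftype):
            violations.append(f"wrong type for {field_name}")
    for field_name in contract.get("deprecated", []):
        if field_name in message:
            violations.append(f"uses deprecated field past sunset: {field_name}")
    return violations

contract_v0_5 = {"required": {"track_id": str, "lat": float, "lon": float, "ts": int},
                 "deprecated": ["legacy_grid"]}
msg = {"track_id": "T-1001", "lat": 36.2, "lon": 127.1, "ts": 1700000000,
       "legacy_grid": "38SMB12345678"}
print(conforms(msg, contract_v0_5))
```

This is the whole trick: format negotiation becomes a unit test vendors must pass, and drift dies in the lab instead of over a contested airfield.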

Second-order effect: this neuters the slowest part of “integration”—negotiating formats—and turns it into unit tests vendors must pass. It also unlocks reciprocity across units and allies, because format fidelity makes reuse real.[23],[24]


Zero Trust Around Crown Jewels (data & weights)

Treat mission datasets and model weights like warheads: segmented, monitored, and hard to misuse. Baseline with the DoD Zero Trust Strategy and NIST CSF 2.0, then get specific.[5],[6],[7]

  • Segmentation by mission & impact: ISR training corpora (sources and selectors), strike target libraries, and model weights each sit in their own trust zones. Even if a network boundary fails, lateral movement hits identity, device health, and policy gates.[6],[7]
  • Usage control > access control: beyond “who can read,” enforce how it can be used (e.g., no bulk export, no training on this set without a signed data card, no cross-domain promotion without tear-line logic and human review).
  • Continuous verification: device posture, user risk, and session context evaluated every access; risky sessions get read-only or synthetic views.[6]
  • Cryptographic provenance: all artifacts (raws, features, weights) carry signed provenance (who/what created, with which base image, using which data). That’s how you kill shadow copies and track model lineage when a fix is needed.
  • Model custody: store weights with escrow rules (two-person integrity for export or retraining; hardware-rooted keys for decryption in inference services). Drift or suspicious behavior trips runtime interlocks (Section 12’s ARCOS), de-rates autonomy, and pages a human.[7]
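A usage-control decision (as opposed to plain access control) composes those signals per session. This sketch uses hypothetical actions and an arbitrary risk threshold:

```python
def usage_decision(request: dict, session: dict) -> str:
    """Usage control, not just access control (illustrative policy).
    Even an authorized reader gets a degraded view when session risk is
    high, and some uses need explicit extra authorization regardless."""
    if not session["device_healthy"]:
        return "deny"
    if request["action"] == "bulk_export":
        return "deny"  # never allowed from this trust zone
    if request["action"] == "train" and not request.get("signed_data_card"):
        return "deny"  # training requires a signed data card
    if session["risk_score"] > 0.7:
        return "read_only_synthetic"  # risky session: synthetic view only
    return "allow"

print(usage_decision({"action": "read"}, {"device_healthy": True, "risk_score": 0.2}))
print(usage_decision({"action": "read"}, {"device_healthy": True, "risk_score": 0.9}))
```

The design choice worth copying: the default on a risky-but-legitimate session is a degraded view, not a hard lockout, so the mission keeps moving while the blast radius stays small.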

Second-order effect: when compromise happens (assume it will), blast radius is small and recovery is fast because provenance gives you a map and ZTA gives you the valves to shut.


Supplier Security You Can Enforce at the Edge

We will keep buying dual-use software from a civilian market that iterates weekly. The right answer is not to force forked “DoD builds.” The right answer is to lift supplier posture while absorbing their mainline through hardened lanes.[45],[46],[120],[131]

  • Baseline the base: require NIST 800-171 for controlled data handling, and let CMMC serve as an audit mechanism, not a pretext to block velocity. (Security theater doesn’t ship outcomes.)[120],[131]
  • SSDF as engineering checklists: vendors show their SSDF practices in code: signed commits, reviewed merges, reproducible builds. We verify in pipeline, not PowerPoint.[45]
  • SBOMs by default: every deliverable ships with an SBOM, and your lane will fail if it doesn’t. (We care because CVE/KEV-driven patch clocks run on SBOM fidelity.)[46],[47]
  • Adapters > rewrites: pay for adapters into the JADC2 contract and for passing the cATO lane harness, not for re-engineering the vendor’s stack to a bespoke enclave. (Middleware is our problem; the market is our arsenal, as we said in Part 5.)

Second-order effect: investors keep funding the tools we need because time to field in DoD isn’t gated by months of compliance ceremony that add little marginal security.


Hunt-Forward Telemetry as Fuel

Hunt-Forward ops with allies generate the cleanest adversary telemetry on earth—live, labeled, and negotiated for sharing. Don’t let it die in an after-action report (AAR). Pipe it straight into the Mission Data PORs:

  • Threat Enrichment Interface (TEI): a defined path where allied hunt data (IOCs, TTP transitions, dwell metrics, toolchain artifacts) maps into model features and detector updates used by U.S. and coalition defenders.[65]
  • Rapid reciprocity: if Estonia (or whoever) sees a new operator method, the delta flows into our CVE/KEV/SBOM/ARCOS dashboards and into our runtime policy updates in hours, not quarters.[47],[65]
  • Labeling surges: fund short, intense labeling sprints when a new threat wave hits; move money the same week because this is a POR with an outcome CLIN for “X thousand high-fidelity labels delivered.”

Second-order effect: coalition defense becomes a real-time learning organism—and our models stop being out-of-date the day they deploy.


Data SLAs, Not Platitudes (and the metrics you sign for)

What gets measured gets resourced. Tie budget to data effects, not platform mythologies.[21],[22],[132]

  • T2D: median time from sensor publish → decision authorization in the supported thread. (JADC2 measures should already be pushing us here; make them outcome-bearing.)[23],[24],[132]
  • Fratricide reduction: percent reduction in blue-on-blue incidents or near-misses attributable to data/identification improvements (confidence bands required).
  • Sortie productivity: decision-quality targets serviced per airborne hour (or per megawatt at a node) with thresholds by AOR.
  • Patch SLOs (CVE/KEV): time to neutralize exploitable CVEs/KEVs on data pipelines and dependent services.[47]
  • Model fitness: drift dwell time, data freshness at inference, and post-action audit variance vs expected.
  • Cost-per-effect: what did it cost in data dollars to deliver one validated engagement, one avoided strike, one rerouted convoy? (This is where BPAC-style budgeting from Section 10 lands.)

Second-order effect: leaders stop arguing abstractions and start arguing numbers they can change this quarter.


How to Buy It (and what to stop buying)

Buy this:

  • Data adapters and contracts: pay to map legacy feeds to the joint schema with automated conformance tests.
  • Pipelines and catalogs: managed ingestion, feature stores, lineage, access brokers, and policy engines that bind to your ZTA.[6],[7]
  • Labeling & Quality Assurance (QA) capacity: contracts that deliver label quality and coverage, not hours burned. Bonus on κ/F1, penalize rework.
  • Outcomes in code: CLINs that release when pipelines pass the lane harness and when SLAs improve (T2D, drift dwell).[132]

Stop buying this:

  • “One-off data lakes” scoped to a single platform or a single user. Everything lands in the family thread.
  • Proprietary schemas you can’t publish. If it can’t be put in the contract repo, it can’t be in the system.[23],[24]
  • Compliance PDFs that don’t translate to policy, telemetry, or tests. (Section 12 killed this for ATO; kill it for data, too.)

Governance That Actually Governs (CDAO, JADC2, Units)

CDAO owns the cross-service data metamodels, lane patterns, and metrics, not the day-to-day mission threads. CDAO's requirements for the metamodels come from the UCCs, not the services. Once defined, services own execution; units own outcomes.[3],[23],[122]

  • CDAO: publishes the core joint data models and policy bundles, runs the schema registry, curates the patterns library for data pipelines, sets ZTA baselines around crown jewels, and reports standardized data effects metrics up the chain.[6],[7],[122]
  • JADC2 leads: own the mission-family overlays—what the ISR/strike/mobility/C2 threads require this quarter. They issue the API FRAGOs that tune schemas and SLAs.[23],[24] If this sounds like a hot topic at a Software WepTac, that's because it is.
  • MAJCOMs/Units: build and fight on top—field the adapters, ensure labeling coverage, and burn down drift dwell with retrains tied to operations. They get graded (and funded) on effect deltas.[21],[22],[132]

Second-order effect: headquarters stops playing integrator; industry and units do. HQ publishes the rails and the scoreboard.


Safety & Ethics Are Part of the Data Contract

This isn’t a “nice to have.” Our responsible AI and system safety commitments live in the contract:

  • Model cards and use constraints: published with weights; human-on-the-loop requirements encoded as policy.[95],[96]
  • Data minimization & tear lines: enforceable in the broker—downstream apps only get fields they can justify; allied views are automatically de-identified unless ROE requires otherwise.
  • ARCOS hooks: STPA hazards translate into runtime monitors on data & models (e.g., anomaly in sensor fusion, unacceptable uncertainty => de-rate or abort).[14],[15],[16]
  • Audit by design: every query and model decision is traceable to inputs and policy state at the time—post-action reviews stop being “trust me” affairs.
  • Letters of Marque 2.0 White List = Data Health Contract. Treat the open-source universe as a program of record:
    • Fund ingestion of SBOMs, continuous dependency graph monitoring, and automated attestations that our cATO lanes can enforce.
    • Score vendors on time-to-detect poisoning, time-to-patch, and % of mission datasets protected by signed provenance.
    • Fold Red/White outcomes into CDAO's data SLAs so Zero Trust wraps the crown-jewel datasets and model weights by default.
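
The ARCOS-hook bullet above ("unacceptable uncertainty => de-rate or abort") is concrete enough to sketch in code. This is a minimal illustration only, assuming invented thresholds and action names; the real monitors fall out of the STPA hazard analysis, not this file:

```python
from dataclasses import dataclass
from enum import Enum

class Action(Enum):
    NOMINAL = "nominal"   # full autonomy permitted
    DERATE = "de-rate"    # fall back to a conservative behavior
    ABORT = "abort"       # hand off to the human on the loop

@dataclass
class Guardrail:
    """Runtime monitor: translate an STPA hazard bound into an enforced action.
    Thresholds here are illustrative assumptions, not doctrine."""
    derate_at: float = 0.20   # assumed uncertainty level that forces a de-rate
    abort_at: float = 0.40    # assumed uncertainty level that forces an abort

    def evaluate(self, model_uncertainty: float) -> Action:
        if model_uncertainty >= self.abort_at:
            return Action.ABORT
        if model_uncertainty >= self.derate_at:
            return Action.DERATE
        return Action.NOMINAL
```

The point of encoding it this way: the guardrail is testable, versioned, and auditable alongside the model it constrains, which is exactly what "audit by design" demands.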

A 120-Day “No Excuses” Plan

Day 0–30

  • Charter Mission Data PORs for ISR and C2; appoint data product owners; publish initial JADC2 schema v0.9 and broker topics.[23],[24]
  • Stand up ZTA guardrails (policy engine, identity posture checks) around ISR crown jewels.[5],[6],[7]
  • Drop the Data Effects Scoreboard (T2D, drift dwell, CVE/KEV SLOs).[47],[132]

Day 31–60

  • Contract adapter sprints to bring two legacy feeds each into ISR/C2 threads; pay on schema conformance tests.
  • Stand up labeling surge cell with κ/F1 incentives; wire it to retrain loops.
  • Publish supplier posture rules (NIST 800-171/SSDF/SBOM) as pipeline checks—fail builds that lack them.[45],[46],[120]
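
"Fail builds that lack them" is a one-function pipeline gate. A hedged sketch: the artifact fields (sbom, ssdf_attestation, nist_800_171_score) are hypothetical stand-ins for whatever evidence the real pipeline actually records:

```python
def posture_gate(artifact: dict) -> tuple[bool, list[str]]:
    """Return (passes, reasons). The CI lane fails the build when any
    supplier-posture evidence is missing. Field names are assumptions."""
    failures: list[str] = []
    if not artifact.get("sbom"):                     # SBOM attached to the build
        failures.append("missing SBOM")
    if not artifact.get("ssdf_attestation"):         # NIST SSDF attestation present
        failures.append("missing SSDF attestation")
    if artifact.get("nist_800_171_score", 0) < 110:  # assumed: all 110 controls met
        failures.append("NIST 800-171 posture below threshold")
    return (not failures, failures)
```

Usage is deliberately boring: the lane calls `posture_gate` on every candidate artifact and refuses to sign anything that returns failures, so posture stops being a PDF and becomes a build outcome.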

Day 61–120

  • Import Hunt-Forward telemetry via TEI; demonstrate one policy update propagated to field within 72 hours.[47],[65]
  • Execute a joint red-blue data exercise: inject schema drift, a poisoned label batch, and a CVE/KEV-rated vulnerability; measure detection, rollback, and patch SLOs.
  • Issue first API FRAGO tightening latency on two ISR topics; retire one bespoke schema with a 90-day sunset clock.

This is the pretty-pretty version. (Doug Didia / AUTONOMOUS BATTLEFIELD CONCEPTS)

What this buys in the real fight

  • Fewer blue-on-blues and bad strikes because freshness, lineage, and model–hazard controls are coded, not inferred after the fact.
  • Shorter OODA loops because observe→orient collapses in the broker; decide runs on shared facts; act is already wired to consumers.
  • Resilience under attack because ZTA and ARCOS assume compromise, contain it, and recover with provenance rather than hope.[6],[7]
  • Coalition velocity because partners feed us telemetry we can use immediately, and we give back formats they can adopt without years of bespoke integration.[23],[65]

Bottom line: data is ammunition—and model weights are magazine feeds. Stock them, guard them, and move them with discipline measured in SLAs, not slogans. Do that, and JADC2 stops being a diagram and becomes a joint instrument of power the commander can aim.[3],[5],[6],[7],[21],[22],[23],[24],[45],[46],[65],[120],[122],[131],[132]


14) Five Unorthodox Case Studies

Unconventional teachers are useful because they collapse theory into muscle memory. Each vignette below isn’t hero-worship; it’s a worked example that maps directly to a few of our earlier sections and shows how the behavior looks when it’s real, under constraints, with tradeoffs. The pattern you’ll see: tight product vision + ruthless interfaces + fast feedback + willingness to kill your darlings. Different domains; same metabolism.

Steve Jobs — “Product Commander’s Intent”

Jobs’ superpower wasn’t taste alone; it was forcing organizational coherence around the user’s moment of truth. (The iPod wheel wasn’t a UI flourish—it was the doctrine that one thumb, in motion, wins.) Translate that to air combat C2: the Air Operations Center (AOC) should feel like a product, not a staff labyrinth. That means a singular commander’s intent encoded as design rules, and everything—data models, APIs, workflows—either makes that moment faster and safer or it dies.

  • Section 1 tie-in (Thesis & Framing): Treat data, models, and code as maneuver elements. Jobs would ask: what is the “one-thumb” task for the AOC? I’d argue it’s “compose, validate, and publish an effects plan from mixed sources in minutes, with audit trails baked in.” That implies stark choices in §13: mission data as a POR with SLAs you can grade in the fight, not a sidecar to platforms.[3],[23],[24],[122]
  • Section 2 tie-in (Software-centric loop): Jobs would never accept “we emailed a slide deck to the other cell” as a TTP. We publish an API FRAGO: the interfaces and schemas are the order. MOSA/FACE/UCI/CMOSS stop being standards we cite; they become the language of tasking.[28],[29],[30],[31]
  • Design doctrine: reduce, don’t accrete. The AOC “product” ships fewer screens with stronger affordances (think: target list as a living object; drag to assign effect; the system composes the message, enforces ROE, and opens the handoff bus). We kill all workflows that can’t be expressed as an event on the bus (§2), and we push consequences to the edge: if the schema doesn’t carry what the shooter needs, it can’t be done.
  • Second-order effects: clarity feeds speed. When the bus becomes the doctrine, training gets easier, allies integrate faster, and command “style” stops breaking systems. It also sharpens ethics: model cards, tear lines, and interlocks (from §12 and §13) are visible in the flow, not trapped in binders.[14],[15],[16],[95],[96]
Hard to believe it's been almost 19 years.

Kim Jong-un — “Iterative Adversary, Cost-Imposing Tests”

It’s fashionable to mock North Korea until you run the chart: iterative, instrumentation-rich missile testing that walks up capability and deters at a discount. Every launch is a learning event that trains crews, shakes logistics, and telegraphs confidence. They have embraced “good enough, right now, with tomorrow queued.” That’s not a compliment; it’s a warning.[133]

Is North Korea a real threat to the US? Possibly. Are they getting a lot more bang for their buck relative to DIME reality? Absolutely.

  • Section 4 tie-in (DIME & hybrid): Treat their tests as cross-domain campaigns: messaging, sanctions evasion, tech advancement, and alliance probing. We respond with DIME symmetry: financial friction (OFAC pressure on sanction-evasion pipes), informational inoculation (operationalize “truthful narratives” on platform terrain), and cyber campaigns that raise their cost to iterate (target the supply chain of telemetry and components).[2],[17],[57]
  • Section 6 tie-in (swarm/EW/EMP realism): Kim’s program is a daily reminder that tempo beats mythology. We train deny-deceive-deplete against mass sUAS with the same cadence he uses for boosters: frequent, instrumented reps where we burn down electromagnetic (EM) spectrum gaps, rotate deception libraries, and push firmware agility as a TTP.[39],[40],[41],[42],[43],[45],[46],[47]
  • Second-order effects: if we don’t budget for learning per week, we end up with hero demos per year and wonder why the battlefield doesn’t bend. Their lesson drives ours: treat telemetry, not talking points, as the national asset.

Lady Gaga — “Surprise as a System; Choreographing Cheap Mass”

The Super Bowl LI halftime drone show wasn’t war—and that’s the point. It was attritable mass, tightly choreographed, safely executed, under a hard deadline, with millions of adversarial observers (the crowd) and a brutal SLO (don’t drop a quadcopter on anyone). The lesson is cultural and technical: surprise at scale can be rehearsed, safety-bounded, and delightful—and then redeployed for deterrence.[134],[135]

  • Section 3 tie-in (fractal airpower & cheap mass): What Intel did—hundreds of sUAS reading a common pattern language and executing formation libraries—is the mental model for fractal formations in the battlespace. We need the same library of behaviors, with mission guardrails and deception variants baked in (formation mirages, decoy bloom, radar-signature choreography).
  • Section 7 tie-in (Million-Plane math): The show demonstrates that commercial off-the-shelf (COTS) + choreography + safety interlocks scales. Translate: SBOT artifacts that squadron-level units can compose (“formation 12 + beacon 7 + abort rule 3”), with cATO lanes ensuring the pack flies or de-rates as a pack.[4],[8],[39]
  • Second-order effects: audience psychology is part of the effect. A sudden sky of shapes changes behavior. In conflict, the analog is narrative and perception operations synchronized with sUAS mass (Section 4’s “platform algorithms as terrain”). Cheap mass isn’t just to kill things; it’s to shape choices before the first shot.

Intel was just getting started.


Elon Musk / SpaceX — “Explode, Learn, Fly Again”

SpaceX turned the most ossified domain in America into a continuous-integration sport. They moved the center of gravity from presentations about safety to telemetry-proven safety (incidents are not reputational death; they are data harvests). We don’t copy the bravado; we copy the learning loop.

  • Section 5 tie-in (OODA across the enterprise): “Speed of learning > speed of flight.” Starship’s early “rapid unscheduled disassembly” wasn’t brand failure—it was force-on-range learning that shrank design cycles.[136],[137],[138],[139] Our version is OT&E as a continuous fabric tied to ops: every trial produces datasets and decision deltas (§5, §12).[25],[26],[27]
  • Section 12 tie-in (ATO Revolution): SpaceX didn’t ask the range to bless their slides; they engineered the range into their runtime interlocks. That’s our RMF→STPA/ARCOS move: hazard-based guardrails that enable frequent change with bounded risk.[8],[12],[13],[14],[15],[16] We pre-approve lanes (cATO), sign artifacts, instrument pipelines, and treat runtime policy (not paperwork) as the gate.[4]
  • Section 7 tie-in (cheap mass / attrition tolerance): Reusability is economics the enemy can’t match. For us, “reusable” means reusable code, adapters, and behaviors across CCAs and sUAS, not a bespoke stack per platform. The SBOT becomes our “booster landing”—the repeatable magic that cheapens the next sortie.[4],[8]
  • Second-order effects: public testing attracts talent and capital, which in turn subsidize our learning (Part 5’s point that the civilian market is the arsenal). It also shifts culture: failure telemetry beats fear of failure. But take the lesson fully: SpaceX’s velocity is enabled by ruthless configuration control (you can’t fly often if your build graph is mush) and by single-threaded leadership on the main thing. Our acquisition playbook must mirror that discipline.

El Chapo / Sinaloa — “Modular Franchise vs. Vertical Control”

Set aside the criminality (we reject it outright) and look at the organizational physics: Sinaloa operated as a modular franchise with central standards for brand, dispute resolution, and supply-chain access—then local autonomy for ops. It scaled because it balanced control of the chokepoints with freedom at the edge. That mix is uncomfortable to the Pentagon, but it is exactly what our innovation system lacks.[140],[141]

  • Section 10 tie-in (Org design): DIU as “SOCOM-grade peer” needs MFP-coded resources and authority to set the chokepoints—standards, lanes, reciprocal security, and outcome metrics—while AFWERX/NavalX/MIU/AAL become the franchises that execute locally, tied to service equities and SBA flows.[22],[49] Control what matters (interfaces, safety, money), empower what wins (local speed).
  • Section 11 tie-in (Workforce & gigs): The cartel’s “contractorized” structure shows the power and fragility of gig labor. Our lawful version: Pathfinders as the belly-button, a cross-service gig board for surge tasks, and industry tours (DVP, as well as Experience with Industry (EWI)) to refresh the network that feeds us. The point is not precarity; it’s elastic capacity married to shared standards so code ships where needed.
  • Second-order effects: modularity breeds resilience (units can fail without systemic collapse) and competition (franchises compare outcomes in public scoreboards). It also demands relentless API governance (back to §2): without enforced interfaces, “franchise” becomes “fork.”
Sinaloa incorporated franchising, acquisitions, vertical integration, and partnerships. (Victor Manuel Sanchez Valdes)

Stitching the Five into the Theory (and into the plan)

  1. Jobs: Start with the AOC, and manage it as a product; lock the bus as doctrine; fund data as POR; measure T2D and fratricide reduction like net promoter score (NPS). The “one-thumb” moment is the commander’s effects publish—everything upstream/downstream serves it.[3],[23],[24]
  2. Kim: Expect iterative adversaries; impose costs continuously (DIME) while we train in deny-deceive-deplete cycles that mirror their tempo.[39],[40],[41],[42],[43],[45],[46],[47],[57],[133]
  3. Gaga: Choreograph cheap mass with safety interlocks and a reusable behavior library (SBOT). Surprise isn’t luck; it’s a rehearsed pattern language that can be recomposed under pressure.[39],[134],[135]
  4. Musk/SpaceX: Explode, learn, fly again as acquisition doctrine. Pre-approve lanes, instrument runtime risk, and pay on effect deltas. Make OT&E a data fabric, not a calendar slot.[4],[14],[15],[16],[25],[26],[136],[137],[138],[139]
  5. El Chapo/Sinaloa: Build central choke-point control + edge autonomy. DIU (with MFP) sets lanes and scoreboards; AFWERX/NavalX/MIU/AAL run service-aligned franchises funded via SBA flows; Pathfinders coordinate the gigs; outcome CLINs pay the builders.[22],[49]

Practical codas (what you’d see different next quarter)

  • AOC 90-day “Jobs pass”: kill three screens, ship one effects-composer that publishes to the bus; add two fields to the ISR schema that remove a human handoff; run two red-team drills where the UI blocks unethical choices without binding the commander.
  • Kim counter-tempo: pre-position Hunt-Forward telemetry ingestion; when they test, our models update this week; run a finance + cyber play that raises their next test’s marginal cost.
  • Gaga-grade choreography: fund a SBOT sprint to express five new formation behaviors, each with safety invariants and deception variants; prove we can hand them to a second wing without a contractor on site.
  • SpaceX-style OT&E cadence: monthly swarm/counter-swarm scrimmage with debriefs tied to model retrains and firmware drops; publish patch SLOs and drift dwell as public (to the team) metrics.
  • Sinaloa-style org hygiene: sign the DIU MFP memo draft; publish the Innovation Franchise Playbook (what the lanes are, how outcome money flows, how Pathfinders get tasking priority); bring back DVP as a formal Pathfinder billet requirement.

Meta-lesson (the five confirm what Sections 1–13 demanded)

Advantage accrues to the system that can update its identity and process the fastest without losing safety or coherence. Jobs shows the coherence (one intent, one bus); Kim shows the adversary’s tempo (assume weekly learning); Gaga shows surprise as an engineered deliverable (cheap mass, choreographed); Musk shows the update rate (learn in public, gate in code); Sinaloa shows modularity with chokepoints (control standards and cash, free the edge). Put together, they are the operating manual for the rest of this paper: software-first national military power with budget and doctrine wired to interfaces, telemetry, and outcomes—not to PowerPoint, lineage, or contractor mystique.

And they give us permission to be blunt about the acquisition culture change we’re codifying in §§10–13: If it isn’t on the bus, it isn’t in the fight. If it isn’t instrumented, it isn’t safe. If it doesn’t improve a metric the commander signs for, it isn’t funded. The heroes change, the rule doesn’t.


15) Implementation Roadmap & Metrics

This isn’t a “vision.” It’s an order of march with owners, artifacts, and clocks. The thread through all of it is what Part 5 argued outright: the civilian software market is the U.S. arsenal; our job is to make that arsenal fire on command. That means (1) encode commander’s intent as interfaces, pipelines, and budgets rather than memos; (2) value effects and telemetry rather than milestones; (3) instrument everything so we can kill the things that don’t move the fight. Below is a three-phase plan—90 days, 1 year, 2–3 years—with the metrics that decide if we’re winning.


Phase I — 90 Days: Stand Up the Pipes, Prove the Loop

Commander’s intent: Pick two AORs and show that data, code, and models behave like maneuver elements (they can be tasked, protected, logged, and retired). No pilots that die in PowerPoint; two real threads from ops → telemetry → code change → redeploy in days.

1) Mission Data Contracts (AOR-1 & AOR-2).

  • Artifacts: Joint data contracts that name the producers/consumers, schemas, latency/SLOs, decision rights, tear lines. They live in a repo—not a PDF—and changes are versioned like code.[3]
  • Owner: CDAO (contract template + metrics) with the component J-staffs and service commands owning fielding.
  • Deliverable in 30 days: v0.1 contracts covering ISR/C2 → shooters for one priority mission per AOR (e.g., maritime fires in INDOPACOM; C-UAS in CENTCOM).
  • Security posture: ZTA controls around “crown-jewel” datasets and model weights; inherit platform security where possible.[5],[6],[7]
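
What "a contract in a repo, versioned like code" could look like, as a minimal sketch; the field names, schema, and SLO value are illustrative assumptions, not a proposed joint standard:

```python
# Hypothetical v0.1 mission data contract, checked into the contract repo
# and changed only via versioned commits.
CONTRACT = {
    "name": "maritime-fires-track",
    "version": "0.1.0",
    "producer": "isr-fusion-cell",
    "consumers": ["fires-cell", "coalition-view"],
    "latency_slo_ms": 2000,                 # assumed SLO, for illustration only
    "tear_line_fields": ["source_id"],      # stripped from coalition-safe views
    "schema": {"track_id": str, "lat": float, "lon": float, "ts": float},
}

def conforms(record: dict, contract: dict = CONTRACT) -> bool:
    """Conformance check a pipeline could run on every published record:
    all schema fields present, each with the declared type."""
    schema = contract["schema"]
    return set(record) >= set(schema) and all(
        isinstance(record[k], t) for k, t in schema.items()
    )
```

Because the contract is data, the broker, the conformance tests, and the coalition tear-line filter can all be generated from the same file, which is what makes "changes are versioned like code" enforceable rather than aspirational.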

2) cATO Lanes and Pre-approved Pipelines.

  • Artifacts: Two cATO lanes per AOR (one unclassified, one classified) with signed patterns, SBOM capture, CVE/KEV watch, and runtime guardrails; RMF risk as runtime policy, not a gate.[4],[8]
  • Owner: Component AO + an authorized software factory stand up the CI/CD lane with cATO inheritance.
  • Deliverable: Pipelines issuing signed deployables (container images + policies) to an edge cluster in-theater. The AO publishes the reusable “ATO playbook” to the repo (not a SharePoint graveyard).

3) Pathfinder Teams on-station.

  • Artifacts: Two Pathfinder detachments (small, mixed teams: ops, telemetry engineer, model wrangler, API lead) attached to the AOR commander.
  • Owner: USAF Pathfinder program provides the nucleus; each service mirrors with a small cadre; DIU brokered industry augmentation.
  • Deliverable: A weekly effects review where the detachment briefs “diff to last week” (what code shipped, what tactics changed, what risk dropped). No slide decks—dashboards + logs.

4) SBOT v0.1.

  • Artifacts: For the two priority missions, publish behavior libraries (formation behaviors, autonomy failsafes, C2 handoffs) as versioned packages with tests.
  • Owner: AOR lead wings + software factory liaison.
  • Deliverable: Five behaviors per mission thread, each with safety invariants and telemetry hooks.

5) Metrics (begin day 1).

  • Time-to-patch (CVE/KEV): measured from CISA KEV entry/MITRE CVE entry to the last relevant host updated in the AOR lane (target ≤ 72 hours by Day 90).[47]
  • Time-to-field (MTA/UON/JUON): signed artifact to operational use (target ≤ 30 days for changes; ≤ 120 days for a new capability using MTA/UON/JUON pathways).[73],[74]
  • T2D (JADC2): sensor event to human decision with audit trail (baseline now; improvement target −30% by Day 90).[23],[24]
  • Cost-per-effect (CCA/sUAS): dollars per validated effect delivered in exercise/ops (baseline now; method locked).
  • Model SLOs: accuracy bands per use-case (with drift dwell alarms), and rollback SLO (minutes to safe behavior).

6) Anti-pilot discipline.

  • Exit criteria for Phase I are not “demo complete.” They’re two closed loops where telemetry from the field generated code changes that changed tactics that improved a metric. If either loop stalls, we kill the lane or fix it within the 90 days.

7) Letters of Marque 2.0.

  • Publish draft bounty schedules (Blue/Red) and coverage SLAs (White). Stand up the Letters-of-Marque Program Office (NSC-chaired; State/DoD/DoJ/Treasury/CISA/CYBERCOM/CDAO).
  • Launch a pilot White List covering the top 200 OSS packages in our stacks; start reporting OSS risk dwell and dependency exposure.
  • Build escrow/payment rails that are auditable to U.S. oversight and compliant with OFAC/anti-money laundering (AML), with privacy for vendors where lawful.

Phase II — 1 Year: Field Cheap Mass, Institutionalize the Scrimmage

Commander’s intent: Prove we can scale beyond a petri dish. Two million-plane pilot wings (read: CCAs plus thousands of sUAS and decoys across two bases), quarterly swarm/counter-swarm exercises, and a DIU-brokered insertion pipeline that moves dual-use tools from commercial stacks to the edge in weeks, not fiscal years.

1) Two Million-Plane Pilot Wings.

  • Structure: Each wing owns a MOSA-compliant interface plan; runs an SBOT; treats sUAS procurement as consumables with pre-approved firmware agility; has a counter-C-UAS TTP cell.
  • Owner: MAJCOMs designate wings (one in the continental United States (CONUS), one outside CONUS (OCONUS)), with joint observers.
  • Deliverables by Month 12:
    • Inventory: 3,000–5,000 sUAS + decoys, 50–100 attritable platforms/CCA surrogates, and a small fleet of autonomous ground vehicles for logistics.
    • Ops: Four quarterly swarm/counter-swarm scrimmages that produce new behaviors and firmware counter-updates each time.
    • Safety: Directed-energy and EW de-confliction playbooks tied to range control (our SpaceX-style runtime interlocks for ground/air safety).
    • Interoperability: API FRAGOs for at least three mission-to-mission handoffs (e.g., ISR→fires, C-UAS→base defense, logistics→ACE).

2) DIU-led Insertion Pipeline (RDER-style cadence).

  • What changes: DIU becomes the single-front-door for dual-use capability drops into the two wings. We pay on outcome CLINs (effects with telemetry), not level-of-effort, and we refuse forked codebases.
  • Mechanics:
    • Commercial to cATO in ≤ 30 days via contracted middleware and inherited controls (no bespoke rebuilds unless safety demands it).
    • Outcome scoring: Time-to-value from contract award to effect in a scrimmage; “two-sprint rule” (if we can’t show effect in 2 sprints, we either pivot the requirement or kill it).
    • Budgeting: Services keep SBIR/STTR tax flows (AFWERX et al. stay service-aligned), but DIU prioritizes and sequences joint inserts and publishes the scoreboard.[48],[49]

3) Quarterly Red-Team Cadence (becoming muscle memory).

  • Swarm-on-swarm with deception/economic traps that force adversary “inventory burn.”
  • ACE logistics raids: adversary cell targets our fuel, parts, spectrum, and base data pipes; we measure survival under contestation.[79],[80]
  • Algorithmic influence table-top exercise (TTX): treat platform algorithms as terrain and run truth-forward campaigns with allies; measure inoculation and counter-messaging speed.[62],[63],[65]
  • Rules: Every quarter must generate new datasets, changed TTPs, and at least one deprecation (killing a tactic or tool is a win).

4) Metrics (ratcheting targets).

  • Time-to-patch (CVE/KEV): ≤ 48 hours in wing lanes by Month 12.[47]
  • Time-to-field (MTA/UON/JUON): ≤ 14 days for code/config changes; ≤ 90 days for new capability in the wing stack.[73],[74]
  • T2D (JADC2): −50% from baseline for named threads; audit trails show decision origin and model influence.[23],[24]
  • Cost-per-effect: published per scrimmage (include decoy and deception wins; drive dollars per defended asset hour down).
  • Model SLOs: accuracy, drift dwell, rollback time, and “adversarial robustness” measured in live red-team runs.
  • CICO (code-in/code-out): ratio of new code merged vs. code retired. A healthy wing kills old code.

5) Policy bolts tightened by Year-1.

  • API FRAGO order: “If it isn’t on the bus, it isn’t in the fight.” Each wing publishes its public (coalition-safe) schemas and internal secret annexes; all tasking messages align.
  • BPAC pilot: Budget Program Activity Codes aligned to value streams (JADC2 thread, C-UAS thread, ACE logistics thread) rather than platforms; commanders can re-weight within BPAC without new paperwork (with reporting back to PPBE).[20],[21],[22]
  • No fork clause: Contract language bans bespoke forks to meet compliance; vendors ship the same core with wrappers; we pay for outcome, not ceremony.

6) Letters of Marque 2.0 by Year-1.

  • First Black List tranche (narrow scope): e.g., named GRU elements with strictly bounded disruption effects.
  • Quarterly scorecards: CVE/KEV dwell (Red), SBOM coverage (White), time-to-effect and cost-per-effect (Black), and collision rate with ongoing ops.
  • Bake bounty results into our wing scrimmages and hunt-forward exchanges; convert best Red/White finds into immediate cATO pushes to the field.

Phase III — 2–3 Years: Lock in the Budget Physics, Make Green on JADC2

Commander’s intent: Convert momentum into structure. Two new Major Force Programs—MFP-CYBER and MFP-INNOVATION—fund the metabolism directly; PPBE scoring pays for effects shipped and technical debt retired; JADC2 metrics go green for named threads across combatant commands.

1) MFP-CYBER and MFP-INNOVATION live.

  • Why: SOCOM's lesson is clear: a UCC with a dedicated MFP can steer purpose-built resources. We need the same for CYBERCOM (campaigns, hunt-forward, cyber logistics, dual-use software) and for DIU (inserts, standards, and the innovation franchise).
  • What changes:
    • CYBERCOM gets its MFP: hunt-forward, persistent engagement, and shared tooling aren’t ad hoc asks—they’re the funded plan.[18]
    • DIU gets MFP-INNOVATION: funds lanes, franchises, and outcome buys across services; holds the joint scoreboard and the no-fork whip.[49]
    • Services retain SBIR/STTR equity (AFWERX, etc.), but DIU controls priority and integration; think AFSOC’s “two dads” (SOCOM/USAF) model mirrored for innovation and cyber.[22]

2) PPBE incentives re-weighted to software effects.

  • Mechanics:
    • Outcome CLINs normalized in contracts (effects with telemetry = payment).
    • Deprecation credits: retiring dead apps/hardware earns budget points (and frees sustainment tail).
    • BPAC expanded: every JADC2 thread and C-UAS/ACE thread runs as a managed value stream with reprogrammable funding bands (within congressional oversight).
    • Annual posture hearings include T2D, Time-to-Patch, Cost-per-Effect tables alongside end-strength.
  • Tools: GAO-style audits refocused to “goal attainment” for JADC2 rather than box-checking, and PPBE Commission recs baked into the reweighting.[20],[21],[22],[132]

3) JADC2 metrics green across named threads.

  • Definition of green:
    • Coverage: named mission threads (at least four per UCC) have contracts, lanes, and measured decisions within SLOs.[23],[24]
    • Interchange: cross-service data exchange hits SLA > 99% for mission-critical paths.
    • Change rate: average code/config change that touches a mission thread deploys in days, not months.
    • Resilience: CVE/KEV patching SLOs met for 95% of fleet within 48 hours; top-5 model drifts detected and corrected within target dwell windows.[47]

4) Human system fully grown-in.

  • Pathfinders: DoD-wide career field (mirroring USAF), with billets embedded in every UCC/Joint Task Force (JTF), plus a formal DVP requirement (industry rotations) feeding the gig board.
  • Gig economy: cross-service tasking board with pre-cleared industry and reserve talent; pay by artifact and effect; one week to onboard to any wing lane.
  • Leadership training: intermediate and senior schools now include “API FRAGO practicum” and runtime risk labs; weapons school instructors co-author SBOT behaviors.

The Scoreboard (Authoritative Metrics)

These are not “nice to track.” They are the currency of success. They roll up to a one-page commander’s dashboard and every dollar should map to moving one of them.

  1. Time-to-Patch (CVE/KEV) — “vulnerability to fixed in the wild.”
    • Definition: time from CVE/KEV listing to the last relevant asset in a mission thread patched or mitigated; measured per lane.[47]
    • Targets: ≤ 72 h (Phase I), ≤ 48 h (Phase II), ≤ 24 h (Phase III for crown jewels).
  2. Time-to-Field (MTA/UON/JUON) — “idea to effect.”
    • Definition: time from approved change/request to operational use, stratified by code/config vs. new capability.[73],[74]
    • Targets: ≤ 30/120 days (Phase I), ≤ 14/90 days (Phase II), “change inside the adversary’s planning cycle” as the norm.
  3. T2D (JADC2) — “sensor to signed decision.”
    • Definition: event → fused picture → decision logged (with model influence and human authority).[23],[24]
    • Targets: −30% (Phase I), −50% (Phase II), meets SLA 95% of the time (Phase III).
  4. Cost-per-Effect (CCA/sUAS) — “dollars per defended hour / validated hit / decoyed shot.”
    • Definition: include munitions, sUAS losses, energy (DE), and logistics; count deception wins as effects.
    • Targets: Down and to the right every quarter; publish league tables between wings.
  5. Model SLOs — “accuracy where it matters, with fail-safe.”
    • Definition: per use-case, accuracy bounds, drift dwell, rollback SLO, and adversarial robustness; tie payment to SLO conformance.
  6. Code-In / Code-Out (CICO) — “are we killing dead code?”
    • Definition: ratio of merged lines to retired lines per quarter; healthy programs remove as much as they add. If we're building spaghetti bowls, we're building tech debt.
  7. Interoperability SLO — “API FRAGO or it didn’t happen.”
    • Definition: % of effects taskings and sensor messages that traverse published schemas and pass validation on first send.
  8. Letters of Marque Metrics — “Scoring the Arsenal.”
    • Definition: lane-level outcomes from the Letters-of-Marque program (Red finds, White coverage, Black effects), scored quarterly against the targets below.
    • Targets:
      • CVE/KEV dwell reduction (days → hours) on Red finds;
      • % OSS coverage and mean time to detect (MTTD) on White;
      • $ per verified effect and collateral risk = 0 on Black; deconfliction “red light” events trending to zero; time-to-payout under 10 business days.
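
The Interoperability SLO above ("% ... pass validation on first send") is computable directly from broker logs. A minimal sketch, assuming a hypothetical message envelope with a handful of required fields:

```python
# Assumed envelope fields for a published schema; illustrative only.
REQUIRED = {"msg_id", "schema_version", "topic", "payload"}

def validates_first_send(msg: dict) -> bool:
    """Did this tasking/sensor message carry the full published envelope?"""
    return REQUIRED <= set(msg)

def interop_slo(msgs: list[dict]) -> float:
    """Fraction of messages that traversed published schemas and passed
    validation on first send; roll up per mission thread for the dashboard."""
    if not msgs:
        return 0.0
    return sum(validates_first_send(m) for m in msgs) / len(msgs)
```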

Red-Team Cadence (the drumbeat)

  • Quarterly swarm-on-swarm (friction, deception, EW, lasers, THOR): measure inventory burn and behavior library maturity.[39],[40],[41],[42],[43],[44]
  • Quarterly ACE logistics raid (fuel/parts/data pipes under attack): measure hours of sustained ops under contestation; publish gaps; fix them next quarter.[79],[80]
  • Quarterly algorithmic influence TTX with allies: measure inoculation speed and content friction against hostile narratives; tie into defend-forward telemetry.[62],[63],[65]
  • Annual “kill list” review: top 10 apps, processes, or TTPs to deprecate; budget points awarded for sunsets.

Governance That Bites

  • DIU Board over the Innovation Franchise (Phase III MFP-INNOVATION): sets lanes, the no-fork rule, outcome CLIN templates, and the league tables for inserts.[49]
  • CDAO owns data SLAs & schema registry: enforces data contracts, zero-trust wraps, and model dataset governance across threads.[3],[122]
  • CYBERCOM synchronizes the campaign: hunt-forward telemetry feeds our defensive models; campaign authorities align with continuous cost-imposition.[18]
  • AO Council: treats RMF as runtime policy; publishes shared cATO patterns; resolves inheritance disputes in days, not months. Reciprocity is automatic unless exceptions are explicitly documented, and ultimate ATO/cATO approval rests with supported commanders, with AOs as advisors.
  • Budget crosswalks: PPBE re-weights to effects; BPAC lets commanders shift within value streams; SBIR/STTR remains service-specific but sequenced by DIU; oversight reports show metric movement, not page counts.[20],[21],[22]

Monday-Morning Orders (what starts now)

  1. Name the two AOR threads and the point of contact (POC) “single-threaded leaders” for each.
  2. Publish v0.1 data contracts (even if ugly) and open the repo.
  3. Stand up two cATO lanes (copy an existing authorized software factory pattern; don’t invent. Use an approved low-cost COCO software factory if at all possible; its speed and quality are superior to a government equivalent).
  4. Deploy two Pathfinder detachments with authority to merge.
  5. Hold the first 2-week review: show a metric that moved (or kill something).

Everything else is scaffolding.
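Order #2 above ("publish v0.1 data contracts, even if ugly") can start as small as a typed record plus a validation step that producers and consumers both run in CI. A minimal sketch in Python; the field names, units, and 30-second freshness SLA here are illustrative assumptions, not a published DoD schema:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class TrackV01:
    """Hypothetical v0.1 'track' data contract (illustrative only)."""
    track_id: str          # producer-unique identifier
    lat_deg: float         # WGS-84 latitude, decimal degrees
    lon_deg: float         # WGS-84 longitude, decimal degrees
    observed_at: datetime  # UTC timestamp of the observation
    source: str            # producing sensor or system name

    def validate(self, max_age_s: float = 30.0) -> list[str]:
        """Return contract violations; an empty list means compliant."""
        errors = []
        if not -90.0 <= self.lat_deg <= 90.0:
            errors.append("lat_deg out of range")
        if not -180.0 <= self.lon_deg <= 180.0:
            errors.append("lon_deg out of range")
        age = (datetime.now(timezone.utc) - self.observed_at).total_seconds()
        if age > max_age_s:
            errors.append(f"stale: {age:.0f}s exceeds {max_age_s:.0f}s SLA")
        return errors
```

The point of "even if ugly" is that a schema this crude, published and enforced, is already an interface other teams can build against; polish comes later, in the open repo.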


What this buys us

By Day 90, we have two working loops where data, code, and tactics move together. By Year 1, we prove cheap mass + choreography + safety is a repeatable product (not a demo) and that dual-use inserts can cross the cATO Rubicon without mutilating their code. By Year 3, the budget physics match the doctrine: an MFP-funded metabolism for cyber and innovation; PPBE points for outcomes and sunsets; JADC2 green on named threads. And because the scoreboard is public inside the enterprise, the culture follows: if it isn’t on the bus, it isn’t in the fight; if it isn’t instrumented, it isn’t safe; if it doesn’t move a metric the commander signs for, it isn’t funded.


Conclusion

Everything in these pages reduces to a few durable moves: put the mission on a bus (interfaces + data contracts), ship through lanes (cATO patterns with inheritance), protect with runtime guardrails (STPA/ARCOS), and pay for effects (Outcome CLINs, BPAC). Then repeat that loop faster than the adversary can learn.

The tech is not the bottleneck. Coherence is. When commanders speak in verbs and SLOs, when Pathfinders own the interfaces, when factories and vendors ship against the same test harness, the enterprise behaves like a team—manned, unmanned, and model.

Swarm-on-swarm isn’t science fiction; it’s economics. Deception and EW tilt the ledger; directed energy collapses the price of a shot; cheap kinetic cleans up. Firmware agility keeps us current. We don’t win by spending more—we win by spending less per removed threat, reliably and at scale.

Mass is finally affordable. CCAs and thousands of sUAS become geometry, persistence, and attrition tolerance—if and only if we refuse bespoke stacks. MOSA makes mass smart; SBOM makes it repeatable; JADC2 turns data into a shared instrument instead of a diagram.

Culture follows the scoreboard. Time-to-Patch, Time-to-Field, Time-to-Decision, Drift Dwell, Cost-per-Effect. Post them. Move them. Reward the teams that delete dead code and retire brittle gateways. Kill ceremonies that don’t change outcomes. Make reciprocity the default so dual-use code crosses the river once.
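A posted metric only drives behavior if it is computed mechanically from event telemetry rather than self-reported. A minimal sketch of the arithmetic for one scoreboard entry, Time-to-Patch; the event field names and the choice of median are illustrative assumptions, not an official metric definition:

```python
from datetime import datetime, timedelta

def time_to_patch_hours(events: list[dict]) -> float:
    """Median hours from vulnerability disclosure to patched-in-field,
    over (disclosed_at, patched_at) event pairs pulled from telemetry."""
    deltas = sorted(
        (e["patched_at"] - e["disclosed_at"]).total_seconds() / 3600.0
        for e in events
    )
    mid = len(deltas) // 2
    if len(deltas) % 2:          # odd count: middle value
        return deltas[mid]
    return (deltas[mid - 1] + deltas[mid]) / 2.0  # even count: average
```

The same shape works for Time-to-Field and Time-to-Decision: pick the two timestamps, pull them from the pipeline, post the trend. If a team can argue about the number, it isn't instrumented yet.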

Organize for velocity. DIU with an MFP sets rails and keeps the front door open. CYBERCOM with an MFP fights continuously with allies, feeding telemetry back into the lanes. Pathfinders—patched like weapons school grads—close ops ↔ telemetry ↔ code ↔ TTP. Most builders are contractors by design; most authority sits with commanders by necessity.

What you’d notice next quarter: fewer screens in the AOC and a faster “effects publish”; quarterly swarm scrimmages that end in firmware drops; adapters shipping in weeks, not fiscal years; a market that keeps showing up because we stopped forcing forked codebases; and units posting green metrics you can brief without a thesaurus.

We’ve spent decades perfecting slides. The adversary is perfecting tempo. Choose: gate change with binders, or gate risk with code. When the interfaces are the order, when telemetry is the truth, and when budgets move on outcomes, the United States uses the thing it already leads the world in—software at market scale—as a weapon. That’s the point of this paper. Make it real.


References

1 U.S. Department of Defense. 2022. 2022 National Defense Strategy (Fact Sheet, includes NPR & MDR). October 27.

2 The White House. 2022. National Security Strategy. October.

3 U.S. Department of Defense. 2020. DoD Data Strategy. October.

4 U.S. Department of Defense. 2022. DoD Software Modernization Strategy. February.

5 National Institute of Standards and Technology. 2024. NIST Cybersecurity Framework 2.0. February.

6 U.S. Department of Defense. 2022. Department of Defense Zero Trust Strategy. November 22.

7 National Institute of Standards and Technology. 2020. Zero Trust Architecture (SP 800‑207). August.

8 Department of Defense CIO. 2021. DoD Enterprise DevSecOps Reference Design, v2.0. February.

9 U.S. Department of Defense. 2023. DoDI 5000.87: Operation of the Software Acquisition Pathway. November 15.

10 U.S. Department of Defense. 2020. DoDI 5000.02: Operation of the Adaptive Acquisition Framework. January 23.

11 Gansberger, Donald and Victor Lopez. 2025. System-Theoretic Process Analysis for Security (STPA) and Commander’s Risk for Risk Management Framework. July 18.

12 National Institute of Standards and Technology. 2018. Risk Management Framework for Information Systems and Organizations (SP 800-37, Rev. 2). December.

13 National Institute of Standards and Technology. 2020. Security and Privacy Controls for Information Systems and Organizations (SP 800-53, Rev. 5). September.

14 National Institute of Standards and Technology. 2018. SP 800‑160, Vol. 1 Rev. 1: Systems Security Engineering. December.

15 Leveson, Nancy. 2011. Engineering a Safer World: Systems Thinking Applied to Safety. MIT Press.

16 Leveson, Nancy & Thomas, John. 2018. STPA Handbook. MIT PSAS.

17 U.S. Department of Defense. 2023. DoD Cyber Strategy — Unclassified Summary. September 12.

18 U.S. Cyber Command. 2018. Achieve and Maintain Cyberspace Superiority: Command Vision for USCYBERCOM. April.

19 U.S. Department of Defense. 2018. Summary of the 2018 Department of Defense Cyber Strategy. September.

20 Commission on Planning, Programming, Budgeting, and Execution Reform. 2024. Final Report. March.

21 Office of Management and Budget. 2024. Circular No. A‑11: Preparation, Submission, and Execution of the Budget. (Current edition page.)

22 DoD Comptroller. 2024. DoD Financial Management Regulation (FMR), Volume 2B — Budget Formulation and Presentation.

23 Congressional Research Service. 2024. Defense Primer: Joint All-Domain Command and Control (JADC2). Updated.

24 RAND Corporation. 2023. ABMS and JADC2: Considerations for Effective Implementation.

25 U.S. Department of Defense. 2020. DoDI 5000.89: Test and Evaluation. November 19.

26 U.S. Department of Defense. 2020. DoDI 5000.90: Program Management. August 18.

27 Defense Innovation Board. 2019. Software Is Never Done: Refactoring the Acquisition Code for Competitive Advantage. May 3.

28 Office of the Under Secretary of Defense (R&E). 2024. Modular Open Systems Approach (MOSA).

29 The Open Group. 2022. FACE™ (Future Airborne Capability Environment) Technical Standard. Overview and materials.

30 SAE International. 2018. AS6518: Universal Command and Control Interface (UCI) Architecture.

31 U.S. Army C5ISR Center. 2023. C5ISR Center Publishes CMOSS Open Standards. February 7.

32 U.S. Air Force. 2024. Department of the Air Force selects vendors for Collaborative Combat Aircraft program. April 24.

33 Air & Space Forces Magazine. 2024. Collaborative Combat Aircraft: What You Need to Know. May 1.

34 Air Force Research Laboratory. 2021. Golden Horde: Networked Collaborative Weapons.

35 U.S. Air Force. 2021. Skyborg Vanguard Program Takes Flight. April 29.

36 AFWERX. 2020–2025. Agility Prime (eVTOL and Advanced Air Mobility).

37 Black, Thomas. 2024. The US is Losing the Air Taxi Race to China. November 12.

38 Bondar, Kateryna. 2024. How Ukraine’s Operation “Spider’s Web” Redefines Asymmetric Warfare. June 2.

39 U.S. Air Force Research Laboratory. 2020. THOR: Tactical High Power Operational Responder.

40 Lockheed Martin. 2022. U.S. Navy Installs HELIOS Laser Weapon System Integrated into Aegis Combat System. August 18.

41 U.S. Army. 2022. Army successfully demos new laser weapon system: Stryker-based DE M‑SHORAD. August 9.

42 U.S. Government Accountability Office. 2023. Counter-UAS: DOD Needs to Better Coordinate Efforts to Develop and Field Systems. February 7.

43 U.S. Department of Defense. 2021. Department of Defense Counter–Small Unmanned Aircraft Systems Strategy. January.

44 National Academies of Sciences, Engineering, and Medicine. 2023. Counter‑Uncrewed Aircraft Systems for Critical Infrastructure.

45 National Institute of Standards and Technology. 2022. Secure Software Development Framework (SSDF) Version 1.1 (SP 800‑218). February.

46 National Telecommunications and Information Administration. 2021. Minimum Elements for a Software Bill of Materials (SBOM). July 12.

47 Cybersecurity and Infrastructure Security Agency. 2021–2025. Known Exploited Vulnerabilities (KEV) Catalog.

48 Office of the Under Secretary of Defense for Research & Engineering. 2022. Rapid Defense Experimentation Reserve (RDER).

49 Defense Innovation Unit. 2024. DIU 2024 Annual Report.

50 Gansberger, Donald. 2023. AFWERX Story Time (video). August 4.

51 Layne, Rachel. 2021. Americans could owe $6.5 trillion for wars in Afghanistan and Iraq — and that's just the interest. August 18.

52 Bruegel. 2022. Europe urgently needs a common strategy on Russian gas. October.

53 European Commission. 2022. REPowerEU Plan: Communication from the Commission (COM/2022/230). May 18.

54 Global Energy Monitor. 2023. Russian Gas and Europe 2023. December.

55 U.S. Department of Commerce, BIS. 2022. Implementation of Additional Export Controls: Advanced Computing and Semiconductor Manufacturing Items (Interim Final Rule). Federal Register, Oct 13.

56 The White House. 2022. Executive Order 14083 — Ensuring Robust Consideration of Evolving National Security Risks by CFIUS. September 15.

57 U.S. Department of the Treasury (OFAC). 2021. Sanctions Compliance Guidance for the Virtual Currency Industry: Ransomware Advisory. October 15.

58 U.S. Senate Select Committee on Intelligence. 2020. Russian Active Measures Campaigns and Interference in the 2016 U.S. Election. Multi-volume report.
and
United States District Court. 2018. United States of America vs. Internet Research Agency et al. February 16.

59 Timberg, Craig, et al. 2017. Russian content on Facebook, Google and Twitter reached far more Americans than thought. October 30.

60 U.S. Department of Justice. 2019. Report on the Investigation into Russian Interference in the 2016 Presidential Election (Mueller Report), Vol. I. March 7.

61 ODNI. 2017. Assessing Russian Activities and Intentions in Recent U.S. Elections. January 6.

62 U.S. Senate Select Committee on Intelligence. 2019. Report Vol. II: Russia’s Use of Social Media. October 8.

63 Dwoskin, Elizabeth, et al. 2017. Russian ads, now public, show sophistication of the influence campaign. November 1.

64 Goldberg, Jeffrey. 2025. The Trump Administration Accidentally Texted Me Its War Plans. March 24.

65 U.S. Cyber Command. 2022. Hunt Forward Operations: Strengthening Cyber Defenses with Allies and Partners. May 4.

66 U.S. Air Force. 2020. CSAF Brown’s Strategic Approach: Accelerate Change or Lose. August 31.

67 U.S. Air Force. 2021. CSAF Releases Action Orders to Accelerate Change. January 11.

68 MITRE. 2024. CWE Top 25 Most Dangerous Software Weaknesses (2024).

69 Hegseth, Peter. 2025. Directing Modern Software Acquisition to Maximize Lethality. March 6.

70 Miller, Jason. 2025. Arrington kicks off effort to eliminate RMF for DoD software. May 5.

71 Second Front Systems. 2023. Second Front Systems’ Game Warden Platform Selected by AFWERX Prime. July 10.

72 U.S. Department of Defense. 2022. DoD Awards Joint Warfighting Cloud Capability (JWCC) Contracts. December 7.

73 U.S. Department of Defense. 2019. DoDI 5000.80: Operation of the Middle Tier of Acquisition (MTA). December 30 (Change 2, 2023).

74 U.S. Department of Defense. 2019. DoDI 5000.81: Urgent Capability Acquisition. December 31.

75 U.S. Department of Defense. 2020. DoDI 5000.85: Major Capability Acquisition. August 6.

76 Hatchpad IT. 2025. The Future Frontlines of Defense: 3D Drones and Strategic Logistics | The Pair Program Ep59. April 1.

77 Anduril unveils Anvil-M counter-drone kit that can defeat smaller UAS. October.

78 Beaucar Vlahos, Kelley. 2023. The cost of US fighting Houthis in the Red Sea just went up. December 19.

79 Department of the Air Force. 2022. Air Force Doctrine Publication 3‑99: Agile Combat Employment. December.

80 RAND Corporation. 2024. Mitigating Risk to Agile Combat Employment. Perspective PEA3238‑1.

81 U.S. Department of Homeland Security (CISA). 2018. Strategy for Protecting and Preparing the Homeland Against Threats of Electromagnetic Pulse and Geomagnetic Disturbances. September.

82 E-ISAC & SANS ICS. 2016. Analysis of the Cyber Attack on the Ukrainian Power Grid. March 18.

83 U.S. Department of Justice. 2020. GRU “Sandworm” Indictment (NotPetya, Ukraine grid, Olympics). October 19.

84 Cranny-Evans, Sam & Withington, Thomas. 2022. Preliminary Lessons from Russia’s EW in Ukraine. RUSI.

85 C4ADS. 2025. Above Us Only Stars. March 19.

86 Gordon, Randy. 2019. Special Lecture: F-22 Flight Controls. (Video)

87 Hegseth, Peter. 2025. Unleashing US Military Drone Dominance. July 10.

88 Chief Digital and Artificial Intelligence Office. 2023. DoD Data, Analytics, and AI Adoption Strategy. November.

89 U.S. Government Accountability Office. 2021. Artificial Intelligence: Status of DoD Applications. June 30.

90 National Security Commission on Artificial Intelligence. 2021. Final Report. March.

91 Gansberger, Donald. 2018. Final Report. Summer.

92 Joint Chiefs of Staff. 2019. JP 3‑09.3: Close Air Support. December 10.

93 U.S. Department of Defense. 2023. DoDI 8140.02: Cyberspace Workforce Management. February 15.

94 National Institute of Standards and Technology. 2023. AI Risk Management Framework (AI RMF 1.0). January.

95 U.S. Department of Defense. 2023. Responsible Artificial Intelligence Strategy and Implementation Pathway. January 6.

96 U.S. Department of Defense. 2020. DoD Adopts Ethical Principles for Artificial Intelligence. February 24.

97 DARPA. 2024. Air Combat Evolution (ACE) Achieves First AI vs. Human Within-Visual-Range Maneuvers in X-62A. February 14.

98 Air Force Research Laboratory. 2024. AI Learns Safety from Humans as X‑62A VISTA Completes First-of-its-Kind Human‑Flight Teaming Test. September 20.

99 412th Test Wing (Edwards AFB). 2024. X‑62A VISTA Flies AI Dogfighting Under Test Conditions. November 30.

100 Johns Hopkins APL. 2024. Secretary of the Air Force Experiences AI Dogfighting at Edwards AFB. July 11.

101 McDowell, Jonathan. 2025. Jonathan's Space Pages: Starlink Statistics. September 15.

102 American Binary. 2025. MaxKyber Product Page.

103 Cigent. 2024. Understanding CSfC for DAR Data Security. September 9.

104 Paul, Christopher E. and Michael Schwille. 2021. The Evolution of Special Operations as a Model for Information Forces. February 10.

105 AcqNotes. 2025. PPBE Process: Appropriation Categories – Funding Type.

106 Defense Acquisition University. 2022. Crosswalk Card: Budget Activity to TRL. June 16.

107 DoD CIO. 2024. Reciprocity & ATO Reuse (cATO) — Guidance and Resources.

108 U.S. Air Force. 2025. LevelUP DevSecOps (USAF Platform One Enterprise Services).

109 Metz, Danielle. 2025. Modernizing the Department of Defense’s Authorization to Operate Process For Agility. March 20.

110 AFWERX. 2025. Frequently Asked Questions.

111 U.S. SBA / DoD. 2024. DoD SBIR/STTR Program Overview.

112 United States Code. 2024. 15 U.S.C. § 638 (SBIR/STTR).

113 AFWERX. 2025. AFWERX.

114 U.S. Air Force. 2025. Kessel Run (USAF Software Factory).

115 U.S. Air Force. 2025. BESPIN (Business & Enterprise Systems Product Innovation).

116 U.S. Air Force. 2020. Space CAMP software factory helps drive digital transformation. February 18.

117 Naval Information Warfare Center Atlantic. 2023. The Forge (Navy Software Factory).

118 U.S. Army. 2021. Army Software Factory to be housed at Austin Community College. April 26.

119 U.S. Marine Corps. 2022. Marine Corps launches software factory. May 20.

120 National Institute of Standards and Technology. 2024. SP 800‑171 Rev. 3: Protecting Controlled Unclassified Information in Nonfederal Systems and Organizations. May.

121 Yegge, Steve. 2011. Internal Google memo (the “platform rant”) on what platforms are and how Amazon learned to build them. October.

122 U.S. Department of Defense. 2022. Establishment of the Chief Digital and Artificial Intelligence Office (CDAO). February 1.

123 The White House. 2021. Executive Order 14028 — Improving the Nation’s Cybersecurity. May 12.

124 Schmitt, Michael N., gen. ed. 2017. Tallinn Manual 2.0 on the International Law Applicable to Cyber Operations. Cambridge University Press.

125 Lykourentzou, Ioanna, Faez Ahmed, Costas Papastathis, Irwyn Sadien, and Konstantinos Papangelis. 2018. When Crowds Give You Lemons: Filtering Innovative Ideas using a Diverse-Bag-of-Lemons Strategy. November 1.

126 Lukumon, Gafari and Mark Klein. 2023. Crowd-sourced idea filtering with Bag of Lemons: the impact of the token budget size.

127 Klein, Mark et al. 2015. High-Speed Idea Filtering with the Bag of Lemons.

128 Joint Chiefs of Staff. 2022. JP 1: Doctrine for the Armed Forces of the United States. November 19.

129 Joint Chiefs of Staff. 2022. JP 3‑0: Joint Operations. June 18 (incorporating change).

130 Joint Chiefs of Staff. 2020. JP 5‑0: Joint Planning. December 1.

131 U.S. Department of Defense. 2023. Cybersecurity Maturity Model Certification (CMMC) Program — Proposed Rule. Federal Register, December 26.

132 U.S. Government Accountability Office. 2023. Joint All‑Domain Command and Control: DOD Needs to Define Outcome‑Oriented Goals and Measures of Progress. GAO‑23‑106094.

133 CSIS Missile Threat. 2025 (updated). North Korea: Missile Overview.

134 Intel. 2017. Intel Drone Light Show Takes to the Skies for Super Bowl LI Halftime. Feb 5.

135 Snider, Mike. 2017. Lady Gaga’s 300 drones light up halftime show. USA Today, Feb 5.

136 NASA. 2020. NASA Astronauts Launch from America in Historic Test Flight of SpaceX Crew Dragon. May 30.

137 Pearson, Ben. 2023. SpaceX Rocket Explosion Illustrates Elon Musk’s ‘Successful Failure’ Formula. Reuters, April 20.

138 Shepardson, David. 2023. U.S. FAA Closes Probe into SpaceX’s April Starship Test Launch. Reuters, September 8.

139 Pearson, James & Roulette, Joey. 2023. SpaceX Starship launch failed minutes after reaching space. November 18.

140 Felbab‑Brown, Vanda. 2022. How the Sinaloa Cartel Rules. April 4.

141 Congressional Research Service. 2022. Mexico: Organized Crime and Drug Trafficking Organizations. June 7.

Additional Resources Used in the Creation of this Document:


a1 NSA, CISA, FBI, et al. 2024. PRC State-Sponsored Actors Compromise and Maintain Access to U.S. Critical Infrastructure. February 7.

a2 U.S. Department of Justice. 2021. Three North Korean Military Hackers Indicted… February 17.

a3 UK Foreign & Commonwealth Office. 2017. Attribution of WannaCry to North Korean actors. December 19.
and
NSA, CISA, HHS, et al. 2023. #StopRansomware: Ransomware Attacks on Critical Infrastructure Fund DPRK Malicious Cyber Activities. February.
and
U.S. v. Park Jin Hyok. 2018. Criminal Complaint. June 8.

a4 FBI, CISA, DoD, NIS, KISA, NPA. 2024. North Korea Cyber Group Conducts Global Espionage to Advance Military and Nuclear Programs. July 25.

a5 U.S. Attorney SDNY. 2016. Charges Against Seven Iranians… Including Bowman Avenue Dam Intrusion. March 24.

a6 CISA & FBI. 2022. Iranian Government-Sponsored APT Actors Compromise Federal Network. November 25.

a7 NSA, CISA, et al. 2023. PRC State-Sponsored Actor “Living off the Land” to Evade Detection. May 24.

a8 Joint Chiefs of Staff. 2018. Joint Publication 3-12: Cyberspace Operations. June 8.

a9 Symantec. 2011. W32.Stuxnet Dossier. February.

a10 Langner, Ralph. 2013. To Kill a Centrifuge. November.

a11 Zetter, Kim. 2014. Countdown to Zero Day (excerpt & coverage). November 11.

a12 NATO CCDCOE. 2008. Cyber Attacks Against Georgia: Legal Lessons.

a13 Hollis, David. 2011. Cyberwar Case Study: Georgia 2008. Small Wars Journal.

a14 E-ISAC & SANS ICS. 2016. Analysis of the Cyber Attack on the Ukrainian Power Grid. March 18.

a15 U.S. Department of Justice. 2020. GRU “Sandworm” Indictment (NotPetya, Ukraine grid, Olympics). October 19.

a16 Cranny-Evans, Sam & Withington, Thomas. 2022. Preliminary Lessons from Russia’s EW in Ukraine. RUSI.
and
C4ADS. 2025. Above Us Only Stars. March 19.

a17 Nakashima, Ellen & Timberg, Craig. 2019. Cyber Command disrupted IRA on 2018 election day. February 27.

a18 Schwartz, Moshe. 2011. DoD Contractors in Afghanistan and Iraq: Background and Analysis. CRS.

a19 GAO. 2021. Contingency Contracting: DoD Has Taken Steps… September 30.

a20 Knutson, Barbara et al. 2005. How Should the Army Use Contractors on the Battlefield? RAND.

a21 Luckey, John. 2012. Inherently Governmental Functions and DoD. CRS.

a22 Gartner. 2024. Security & Risk Management Spending Forecast. December 3.

a23 National Science Foundation (NCSES). 2025. U.S. R&D Totaled $892 Billion in 2022. February 27.

a24 CISA. 2018. Chinese Cyber Activity Targeting Managed Service Providers (APT10).

a25 U.S. Department of Justice. 2018. Two Chinese Hackers (APT10) Charged in Global Computer Intrusion Campaign. December 20.

a26 CSET. 2020. Chinese Investment in U.S. AI Companies. September.

a27 Vanderlee, Kelli. 2022. Testimony on China’s State-Sponsored Cyber Espionage. USCC.

a28 The White House. 2023. National Cybersecurity Strategy. March.

a29 Barnett, Thomas P.M. 2005. Let’s Rethink America’s Military Strategy. TED.

a30 Osinga, Frans. 2006. Science, Strategy and War: The Strategic Theory of John Boyd. Princeton.

a31 Boyd, John. 1987. A Discourse on Winning and Losing (Briefing Compendium). Air University.

a32 Clausewitz, Carl von. 1873/1976. On War. (Howard & Paret translation excerpts)

a33 Sun Tzu. 1910/2022. The Art of War. (Giles translation)

a34 Glenn, Russell W. 2008. Trust and Leadership in Counterinsurgency. Parameters.

a35 Glenn, Russell W. 2013. Rethinking Western Approaches to Counterinsurgency. SSI.

a36 RAND. 2023. Agile Combat Employment: Implications for Air Base Resilience.

a37 RAND. 2022. Competing in the Gray Zone: Russian and Chinese Activities.

a38 RAND. 2022. Countering Russian Aggression in the Gray Zone.

a39 CRS. 2021. Joint All-Domain Command and Control (JADC2).

a40 U.S. Air Force. 2020. ABMS/JADC2 advancing ways to connect joint forces. June 15.

a41 DoD. 2020. DoDI 5000.87 — Operation of the Software Acquisition Pathway. October 2.

a42 U.S. Air Force Chief Software Office. 2020. Continuous ATO (cATO) Guide. October 20.

a43 Boyd, John R. 2018 (orig. briefs 1970s–90s). A Discourse on Winning and Losing. Air University Press.

a44 Boyd, John R. 1986 (slides). Patterns of Conflict.

a45 Boyd, John R. 1989 (transcript). Patterns of Conflict (USMC Command & Staff College).

a46 Fadok, David S. 1995. John Boyd and John Warden: Air Power’s Quest for Strategic Paralysis. Air University.

a47 Meilinger, Phillip S., ed. 1997. The Paths of Heaven: The Evolution of Airpower Theory. Air University Press.

a48 Barnett, Thomas P.M. 2005. The Pentagon’s New Map. TED Talk.

a49 DARPA. 2020. AlphaDogfight Trials (AI vs. human F-16 simulation).

a50 USAF. 2025. April Doctrine Paragon: Col John Boyd’s Perspective. April 30.

a51 Small, Daniel. 2024. Electromagnetic Pulse (EMP) and Its Impacts. Congressional Research Service.

a52 Congressional Research Service. 2023. Defense Primer: Directed-Energy Weapons.

a53 U.S. Government Accountability Office. 2023. Directed Energy Weapons: Critical Technologies Are Maturing, but DoD Needs to Manage Risks.

a54 Bower, Anthony, et al. 2023. Fueling the Fight: Improving Logistics for Pacific Air Forces. RAND.

a55 Schultz, Katherine, et al. 2023. Hiding in Plain Sight? Posture and Basing in the Indo-Pacific. RAND.

a56 RAND. 2023. Enabling Agile Combat Employment for Fifth-Generation Aircraft.

a57 DoD. 2022. DoDI 8510.01: Risk Management Framework (RMF) for DoD IT. Rev.

a58 DoD CIO. 2022. Continuous ATO (cATO) Guidebook.

a59 USAF/DoD. 2021–2025. Platform One Documentation (incl. Iron Bank & cATO patterns).

a60 Defense Innovation Unit. 2025. About DIU.

a61 U.S. Department of Defense. 2022. Department of Defense Releases Joint All-Domain Command and Control (JADC2) Strategy. March 17.

a62 Defense Acquisition University. 2023. cATO — Continuous Authority to Operate (explainer).

a63 U.S. Air Force. 2022. AFDP 3-99: Agile Combat Employment. December 1.

a64 RAND Corporation. 2023. Agile Combat Employment—Could Your Base Be Next? February 16.

a65 Truman Presidential Library. 1948. Functions of the Armed Forces and the Joint Chiefs of Staff (Key West Agreement). March.

a66 DoD Comptroller. 2023. DoD Financial Management Regulation, Vol. 2A — Chapter 1: Major Force Programs.

a67 Hicks, Kathleen. 2023. Remarks at the NDIA Emerging Technologies for Defense Conference (Introducing Replicator). August 28.

a68 U.S. Department of Defense. 2023. Replicator Initiative to Counter China Will Field Thousands of Autonomous Systems. August 30.

a69 Crosby, Jeremiah. 2020. Operationalizing Artificial Intelligence for Algorithmic Warfare. Military Review (July–August).

a70 Congressional Research Service. 2024. Defense Primer: Roles and Missions of the U.S. Armed Forces. Updated April 10.

a71 U.S. Department of Defense. 2018. DoD Elevates U.S. Cyber Command to Combatant Command. July 31.

a72 U.S. Department of Defense. 2023. Secretary of Defense Reaffirms Commitment to Innovation and Directs Defense Innovation Unit to Report Directly to the Secretary of Defense. April 4.

a73 CSIS. 2024. A New Era of Strategic Disruption: The Case of Uncrewed Systems in Ukraine. January 9.

a74 CNA. 2023. The Ukraine War and Uncrewed Systems: Implications for the Future Force. October.

a75 Clausewitz, Carl von. 1976. On War. Edited/translated by Michael Howard & Peter Paret. Princeton University Press.

a76 Sun Tzu. 1910. The Art of War. Translated by Lionel Giles. Project Gutenberg.

a77 Osinga, Frans P.B. 2006. Science, Strategy and War: The Strategic Theory of John Boyd. Routledge.

a78 Glenn, Russell W. 2001. Reading Athena’s Dance Card: Men Against Fire in Vietnam. RAND.

a79 Barnett, Thomas P.M. 2004. The Pentagon’s New Map. TED Talk.

a80 Defense Advanced Research Projects Agency. 2018. OFFensive Swarm-Enabled Tactics (OFFSET).

a81 Defense Innovation Unit. 2023. Blue UAS Framework.

a82 Center for Strategic and International Studies. 2016. The Kremlin Playbook: Understanding Russian Influence in Central and Eastern Europe. October 13.

a83 U.S. Department of Defense. 2019. DoD Digital Modernization Strategy. July.

a84 Department of the Air Force. 2021. AFDP‑1: The Air Force. March.

a85 U.S. Army. 2022. Army publishes new Field Manual 3‑0, Operations. October 6.

a86 U.S. Digital Service. 2014. U.S. Digital Services Playbook. August.

a87 U.S. Department of Defense. 2016. DoD to Launch “Hack the Pentagon” Cyber Bug Bounty Program. March 2.

a88 U.S. Department of Defense. 2021. DoDI 5000.91: Product Support Management for the Adaptive Acquisition Framework. November 4.

a89 U.S. Department of Defense. 2020. DoDI 5000.83: Technology and Program Protection to Maintain Technological Advantage. July 20.

a90 Defense Science Board. 2012. Task Force Report: The Role of Autonomy in DoD Systems. July.

a91 U.S. Navy. 2021. Navy Stands Up Task Force 59 to Integrate Unmanned Systems and AI into Fleet Operations. September 9.

a92 Defense Advanced Research Projects Agency. 2019. Mosaic Warfare Concept Overview.

a93 U.S. Department of Defense. 2017. Algorithmic Warfare Cross‑Functional Team Moves Out on Project Maven. July 21.

a94 Transparency International. 2024. Corruption Perceptions Index 2024.

a95 Defense Advanced Research Projects Agency. 2015. Persistent Close Air Support (PCAS).

a96 Office of the Director of National Intelligence. 2023. Annual Threat Assessment of the U.S. Intelligence Community. February.

a97 MITRE. 2024. ATT&CK for ICS Matrix.

a98 U.S. Army TRADOC. 2018. TRADOC Pamphlet 525‑3‑1: The U.S. Army in Multi‑Domain Operations 2028. December.

a99 U.S. Air Force. 2023. Secretary of the Air Force Outlines Seven Operational Imperatives. January 23.

a100 U.S. Marine Corps. 2023. Force Design 2030 — Annual Update. June.

a101 Air Force Special Operations Command. 2022. Armed Overwatch Contract Awarded. August 1.

a102 Federal Communications Commission. 2020–2025. Secure and Trusted Communications Networks (Supply Chain Reimbursement Program).

a103 Cybersecurity and Infrastructure Security Agency. 2022. Shields Up.

a104 NATO Defence Innovation Accelerator for the North Atlantic (DIANA). 2024. About DIANA.

a105 U.S. Government Accountability Office. 2018. Weapon Systems Cybersecurity: DOD Just Beginning to Grapple with Scale of Vulnerabilities (GAO‑19‑128). October 9.

a106 Bureau of Industry and Security. 2020. Commerce Department Adds DJI and Other Chinese Entities to the Entity List. December 18.

a107 U.S. Department of Defense. 2020. DoDI 5000.88: Engineering of Defense Systems. November 18 (incl. changes).

a108 U.S. Department of Defense. 2018. Digital Engineering Strategy. June.

a109 U.S. Department of Defense. 2023. DoD Establishes Task Force Lima to Examine and Advance Use of Generative AI. August 10.

a110 The White House. 2023. National Cybersecurity Strategy. March.

a111 Cybersecurity and Infrastructure Security Agency. 2023. Cross‑Sector Cybersecurity Performance Goals (CPGs). v2.0.

a112 North Atlantic Treaty Organization. 2022. NATO 2022 Strategic Concept. June 29.

a113 Price, Brian R., PhD. 2023. Colonel John Boyd’s Thoughts on Disruption. Spring.

a114 Rule, LtCol Jeffrey N. 2013. A Symbiotic Relationship: The OODA Loop, Intuition, and Strategic Thought. March.

a115 Sun Tzu. The Art of War.

a116 von Clausewitz, General Carl. 1874. On War.
