1. Systems/Software ICM Workshop: Acquisition and Process Issues Working Group
Rick Selby and Rich Turner
July 14-17, 2008
Washington DC
2. Process & Acquisition Participants Rick Selby, Northrop Grumman (co-chair)
Rich Turner, Stevens Institute of Technology (co-chair)
Steven Wong, Northrop Grumman
Ernie Gonzalez, SAF
Ray Madachy, USC
Matt Rainey, Redstone Huntsville
Dan Ingold, USC
Dave Beshore, Aerospace
Lindsay MacDonald, BAE Systems
Blake Ireland, Raytheon
Bill Bail, Mitre
Barry Boehm, USC
John Forbes, OSD/SSA
Carlos Galdamez, Boeing
Gail Haddock, Aerospace
3. Software/Systems Process and Acquisition Initiatives Start-up teams
4. Software/Systems Process and Acquisition Initiatives
5. Ideal Program PDR (pre-Milestone B) Attendees
“right decision-makers” attend and are involved prior to the PDR [P 12, 3, 0] [E 0, 1, 2]
All success-critical stakeholders engaged, including technical warrant holders who will authorize ultimate deployment
Focus of meeting and method of evaluation
Focus on risks (vs. functionality) of achieving desired functionality within the proposed architecture
Evidence-based review
The decision making actually occurs before the milestone; how to empower the decision makers is an open question
Technical knowledge
Level the playing field in terms of technical knowledge, such as embedding engineers in the contractor organization and making the acquisition personnel more knowledgeable about SW
Need government/FFRDC/UARC SW tiger teams that go into acquisitions and programs to help, such as from the Tri-Service Acquisition Initiative, including Start-Up Teams to help launch new programs
Risks
Rather than listing the system “functions”, we list the “risks” at the review [P 10, 4, 0] [E 0, 3, 0]
Independent groups (non-contractor) identify and investigate the risks [P 9, 5, 0] [E 0, 3, 0]
Architecture
At least one SW build for each key (such as high-risk) software component (maybe CSCI) to demonstrate its functionality and integration, which shows that the SW people have preliminarily explored the design space [P 10, 2, 0] [E 3, 0, 0]
Need to define architecturally significant requirements and map these to risks (by definition, the architecturally significant risks are addressed in the first release) [P 10, 2, 0] [E 0, 0, 3]
Scalability
Performance
If architecturally significant risks are not addressed, the system “will fail”
Somehow put the architectural baseline in place earlier
Requirements
Prioritized list of requirements/capabilities/features from which the customer can select the “cut point” based on the degree of value, risk, budget, and other new information (see the sketch after this list) [P 13, 0, 0] [E 1, 1, 1]
Incorporate some notion of how to change the requirements to reduce risk
Need to be able to assess whether requirements allocated to configurable items make sense
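A minimal sketch of the cut-point idea above, in Python. All feature names, scores, and the scoring rule (value discounted by risk) are hypothetical illustrations, not workshop content.

```python
# Hypothetical sketch of selecting a requirements "cut point":
# features are scored by value and risk; the customer funds the
# prioritized list until the budget runs out. All names and
# numbers below are illustrative.

def select_cut_point(features, budget):
    """Return the funded prefix of a prioritized feature list."""
    # Rank by value discounted by risk (higher score funds earlier).
    ranked = sorted(features, key=lambda f: f["value"] * (1 - f["risk"]),
                    reverse=True)
    funded, spent = [], 0.0
    for f in ranked:
        if spent + f["cost"] > budget:
            break  # the "cut point": everything below is deferred
        funded.append(f["name"])
        spent += f["cost"]
    return funded, spent

features = [
    {"name": "track fusion",  "value": 9, "risk": 0.3, "cost": 4.0},
    {"name": "auto-routing",  "value": 7, "risk": 0.6, "cost": 3.0},
    {"name": "report export", "value": 4, "risk": 0.1, "cost": 1.0},
]
print(select_cut_point(features, budget=5.0))
# -> (['track fusion', 'report export'], 5.0)
```

New information (a budget change, a matured risk) simply re-ranks the list and moves the cut point, which is the flexibility the bullet describes.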
6. Comments on Draft 5000.2 Language [SA 8, 4, SD 3] 3.5.10. A [System] Preliminary Design Review[(s)] (PDR[(s)]) shall be conducted for the candidate design(s) to establish the allocated baseline (hardware, software, human/support systems) and underlying architectures and to define a high-confidence design. All system elements (hardware and software) shall be at a level of maturity commensurate with the PDR entry and exit criteria [as defined in the Systems Engineering Plan]. A successful PDR will [provide {independently validated?} evidence that supports] inform requirements trades [decisions]; [substantiates design decisions;] improve[s] cost [, schedule, and performance] estimation; and identify[ies] remaining design, integration, and manufacturing risks. The PDR shall be conducted at the system level and include user representatives [, technical authority,] and associated certification authorities. The PDR Report shall be provided to the MDA at Milestone B {and include recommended requirements trades based upon an assessment of cost, schedule, and performance risk[s]?}.
Synergy with ICM
Greater emphasis on risk-driven decisions, evidence, and high-confidence designs
7. Comments on Draft 5000.2 Language [SA 7, 6, SD 1] 3.5.10. A Preliminary Design Review (PDR) shall be conducted for the candidate design(s) to establish the allocated baseline (hardware, software, human/support systems) and underlying architectures and to define a high-confidence design. [At PDR, evidence shall be provided that {independently?} validates that a]All system elements (hardware and software) [are] shall be at a level of maturity commensurate with the PDR entry and exit criteria. A successful PDR will [support] inform requirements trades [decisions]; [substantiate design decisions;] improve cost [, schedule, and performance] estimation[es]; and identify remaining design, integration, and manufacturing risks. The PDR shall be conducted at the system level and include user representatives [, technical authority,] and associated certification authorities. The PDR Report shall be provided to the MDA at Milestone B {and include recommended requirements trades based upon an assessment of cost, schedule, and performance risk[s]?}.
Synergy with ICM
Greater emphasis on risk-driven decisions, evidence, and high-confidence designs
8. Some Quotes for Context Setting "The only way we will have large acquisition programs on schedule, within budget, and performing as expected, is for everyone - from Congress down to the suppliers - to all stop lying to each other at the same time."
"Software's just another specialty discipline and doesn't deserve special attention. Integrating software engineering into the development is the job of the chief system engineer."
"It takes so long for a program to reach deployment that we are essentially acquiring legacy systems."
"Spiral process is nothing more than the vee chart rolled up."
"There is no such thing as an emergent requirement."
"Evolutionary acquisition is just a ploy to excuse the software guys’ incompetence and let programs spiral forever without having to deliver something."
9. Some Topics for Discussion: Acquisition and Process Quality Factor Tradeoffs
Integrating hardware and software quality factor evidence planning and preparation guidelines
Coordinating single-quality-factor IPTs
Cost and Risk
Budgeting for systems and software risk mitigation
Risk-driven earned value management
Translating shortfalls in feasibility evidence into next-increment risk management plans
Requirements
Concurrently engineering vs. allocating system, hardware, software, and human factors requirements
Methods for dealing with requirements emergence and rapid change
Competitive Prototyping
Supporting value-adding continuity of prototype development and evaluation teams
Topic Specifics
Synchronizing different-length hardware and software increments (see the sketch after this list)
Early hardware-software integration: hardware surrogates
Contracting for 3-team developer/V&Ver/next-increment rebaseliner incremental development
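To picture the increment-synchronization topic above: with hypothetical 18-month HW and 6-month SW increments, natural sync points fall on the least common multiple of the two cadences. All cadence numbers here are assumed for illustration.

```python
# Hypothetical increment lengths (months); HW/SW sync points fall on
# the least common multiple of the two cadences. (math.lcm needs
# Python 3.9+.)
from math import lcm

hw_len, sw_len = 18, 6
sync = lcm(hw_len, sw_len)   # 18: every 3rd SW drop meets a HW boundary
horizon = 72                 # look at a 72-month program

sync_points = list(range(sync, horizon + 1, sync))
print(f"HW/SW sync points (months): {sync_points}")  # [18, 36, 54, 72]
```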
10. Incremental Commitment Life Cycle Process: Overview
This slide shows how the ICM spans the life cycle process from concept exploration to operations. Each phase culminates with an anchor point milestone review. At each anchor point, there are 4 options, based on the assessed risk of the proposed system. Some options involve go-backs. These options result in many possible process paths.
The life cycle is divided into two stages: Stage I of the ICM (Definition) has 3 decision nodes with 4 options/node, culminating with incremental development in Stage II (Development and Operations). Stage II has an additional 2 decision nodes, again with 4 options/node.
One can use ICM risk patterns to generate frequently-used processes with confidence that they fit the situation. Initial risk patterns can generally be determined in the Exploration phase. One then proceeds with development as a proposed plan with risk-based evidence at the VCR milestone, adjusting in later phases as necessary. For complex systems, a result of the Exploration phase would be the Prototyping and Competition Plan discussed above.
Risks associated with the system drive the life cycle process. Information about the risk(s) (feasibility assessments) supports the decision to proceed, adjust scope or priorities, or cancel the program.
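As a rough illustration of the structure just described: the milestone names below follow my reading of the ICM literature, and the option wordings are paraphrases of the risk-based options, not official ICM language.

```python
# Sketch of the ICM decision structure described above. Milestone
# names follow the ICM literature; option wordings are paraphrased.

STAGE_I_NODES = ["ECR (Exploration)", "VCR (Valuation)", "FCR (Foundations)"]
STAGE_II_NODES = ["DCR (Development)", "OCR (Operations)"]

# Four risk-based options at each anchor point; some involve go-backs.
OPTIONS_PER_NODE = [
    "risk negligible: commit and proceed, possibly accelerating",
    "risk acceptable: proceed with a risk mitigation plan",
    "risk high but addressable: remain in phase / go back to buy down risk",
    "risk too high or unaddressable: rescope or discontinue",
]

nodes = len(STAGE_I_NODES) + len(STAGE_II_NODES)   # 5 decision nodes
paths = len(OPTIONS_PER_NODE) ** nodes             # naive upper bound
print(f"{nodes} nodes x {len(OPTIONS_PER_NODE)} options -> "
      f"up to {paths} nominal process paths (go-backs add more)")
```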
11. Understanding ICM Model for Software Reconciling the milestones
Where are LCO/LCA/IOC and SRR/PDR/CDR?
When are the downselects: 3 to 2, 2 to 1?
How to drive behavior
RFP language
Award fee
Large carrot (sole winner of major program)
How long does the competitive phase last (ends at Milestone B, ends later, etc)?
Create a “whole new contractor role” that gets awarded to the 2-to-1 downselect non-winner
External evaluators come into reviews (“air dropped”) and have a high entry barrier and limited context to achieve success
Loss of valuable expertise in the non-winner
Non-winner becomes the “evaluator” of evidence throughout the program
What kinds of evidence/prototypes are needed for what kinds of risks?
Funding
Who pays for pre vs. post 2-to-1 downselect (what color of money)?
How do you use CP to do:
New approaches for model definition and validation
Quality attribute trades (non-functional)
12. Ranked Summary of Initiatives (High to Low)
13. Issues - 1 John Young was seeing CP as a way to “get the HW right”
He did not expect CP to cause all this discussion about SE/SW
What is the order of buying down risk?
We currently do evidence-based reviews for CMMI appraisals? [P 1, 4, 9] [E 2, 0, 1]
How do we change the behavior of both the vendor and acquirer?
Reviewers now “tune out” when the SW architecture presentation is given because it is hard to “bring it to life”
ICM ties together goals of reviewers
Navy currently has a six-gate review system
Has an emphasis similar to ICM, including both system and software
ICM has “sufficient levels of vagueness”; it provides opportunity for tailoring, which is a positive flexibility
How can we figure out the HW-SW touchpoints?
SW has the inherent value of changeability
14. Issues - 2 What are the ideal forms of evidence? [P 12, 0, 1] [E 0, 1, 2]
Demonstration alone is not a complete answer
Needs to be a validated demo that addresses the risks
For example, on the early FCS reviews there were many dog-and-pony shows
No/little talk about risks
There are already lots of gates and reviews in place now, but the Army had seven Nunn-McCurdy breaches last year
The decision makers are not attending the early reviews
These people are needed, not just surrogates “who just take notes”
When do you start addressing these issues and when do you push these issues up the chain
The review attendees are “going for the show” not “to do the review”
The contractor overwhelms the reviewers in terms of technical knowledge
Somehow we need to level the playing field in terms of technical knowledge
Need some form of parallel teams
Risk: PDRs are currently oriented around functions
Rather than listing the system “functions”, we list the “risks” at the review
This enables something that the reviewers can focus on
The “ranked risk list” becomes a first-class document that is at least as important as other design documents
At the PEO/IWS (Navy), there is an emerging requirement that prior to System PDR, there will have been at least one SW build for each CSCI to demonstrate its functionality and integration
Demonstrate performance-critical functionality
We should define the “invariants” that you must have in order to adopt ICM
We need to make sure that the risks that are currently being presented are honest/accurate
The government reviewers somehow identify the risks, and can empower/contract some teams to address these risks
Independent groups (non-contractor) identify and investigate the risks
Requirements organization and presentation
Need to define architecturally significant requirements and map these to risks
The first release (“indivisible build 1”) needs to address all architecturally significant risks
Take a fraction of the predictable overruns (50-100%) and spend it up front to reduce risks
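A worked example of the arithmetic in the last bullet. All dollar figures, and the assumed effectiveness of early risk buy-down, are purely hypothetical.

```python
# Hypothetical arithmetic for spending part of a predictable overrun
# up front on risk reduction. All numbers are illustrative.

baseline = 100.0          # planned budget ($M)
overrun_rate = 0.75       # "predictable overrun" of 50-100%; take 75%
expected_overrun = baseline * overrun_rate            # 75.0

upfront_fraction = 0.20   # spend 20% of the expected overrun early
upfront = expected_overrun * upfront_fraction         # 15.0

# Assume (hypothetically) the early risk buy-down halves the overrun.
reduced_overrun = expected_overrun * 0.5              # 37.5

do_nothing = baseline + expected_overrun              # 175.0
buy_down = baseline + upfront + reduced_overrun       # 152.5
print(f"do nothing: {do_nothing}  buy down risk early: {buy_down}")
```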
15. Issues - 3 Need TRL framework for SW? [P 7, 6, 0] [E 0, 2, 1]
Maybe do not call this framework “TRL” because of confusion with existing HW-centric TRLs
MDA has SWRLs (software readiness levels) now and they work pretty well
Navy has ratings for process and functionality, analogous to TRLs
Interface level readiness too (from IDA workshop April 2008)
SMC uses Key Decision Points (KDPs) as the major decision milestones, and their KDP-B occurs after the SW reviews now
Need to change the attitude of the senior acquisition and policy decision makers (above the chief engineer level)
SW illiteracy exists at the highest levels, such as arguments about whether to do SDPs (software development plans)
Need to re-instate the original language that was proposed for the DoD 5000 revision
Need to think broadly about new acquisition approaches, such as moving away from fee-on-labor cost-plus contracting vehicles to “new incentive models” [P 11, 2, 1] [E 0, 0, 3]
Navy sonar systems have a periodic re-bidding approach/process where contractors continually re-bid on new capabilities
“technology refresh cycle”
One inhibitor: how to protect IP that is a discriminator for contractor corporations, including the underlying methods for producing the products
Boeing made middleware for UAV open source shared across military contractors
ELV Atlas-V Ground Control System uses Linux libraries
Naval open architecture for open systems (including shared code) initiative, including contract terms and licensing
Domain-based “members-only” open source models
How about moving toward an open source model
Development tools as well as systems
How to address assurance?
Government has “open access” to disk farm where all development artifacts (req, design, source code, test code, etc) are stored/developed and therefore can inspect/analyze
Have common test beds
Such as original Ada validation suite
Define common test beds that also provide a meeting place and communities for interaction
DDR&E has several development and test beds environments now that enable the SBIRs teams to develop products
How to address unknown unknowns
Making unknown unknowns known
16. Issues - 4 Enable parallel, open, competitive environment [P 6, 4, 3] [E 0, 0, 3]
When acquiring a new system, adopt a members-only open source model:
Standard middleware (think RT Linux for the ELV Atlas-V ground system and now NASA Ares ground)
Apple-like AppStore for developers to develop and sell applications; Google has gadgets (free), Microsoft has gadgets (free), Yahoo has widgets (free)
Must have some sequence of gates to ensure that SW was “reasonable” / do-no-harm
Members-only contributors
SW is “low cost” but need to pay for support
Multi-tier pricing scheme: execute only, source for customers, source for all
Acquirers can select/purchase the applications that have the most value
What are the incentives for contractors to invest in developing these applications?
Example: ACE/TAO is open source middleware that is being used on Navy SSDS large-deck combat systems
Enables new potential competitors because of externally known interfaces
Architecture would need to be able to accommodate this “new thinking”
17. Issues - 5 What is the earliest point to end Competitive Prototyping?
Sometime between Milestone A and B
What is the latest point to end Competitive Prototyping?
You can build 2 or more complete systems by keeping competition going throughout the lifecycle
You continue the competition until the decision makers (and success-critical stakeholders) have sufficient evidence that the risks are “acceptable enough” to go forward with one contractor
You can possibly re-open the competition later for some aspects of the program
The current working assumption is that you can downselect to one contractor at Milestone B
Can we gain knowledge over time?
Once a winner is selected, you want to hit the ground running and not lose any time or talent
How do we minimize/prevent protests? [P 0, 5, 8] [E 0, 0, 3]
Will the early rounds of prototyping show you enough evidence to justify going with a sole-source award (and therefore avoid protests)?
Milestone B brings on new requirements and formal briefings to Congress
Right now, the government declares the budget before Milestone B
It is very difficult to re-certify programs when you exceed 25% of the original budget
Most programs that breach Nunn-McCurdy once do it again because of staff loss, etc.
Government wants cost and schedule realism
18. Issues - 6 Do we need to have multiple Milestone B’s and/or multiple PDRs?
Start-Up Teams: Recommend that SW Tiger Teams be engaged in SEP development (which occurs prior to Milestone B, because the SEP defines the entry and exit criteria for the PDR that occurs prior to Milestone B). [P 10, 3, 0] [E 3, 0, 0]
Update the current SEP preparation guide
Many parts are HW-centric or HW-exclusive
SWAT Team = external team of experts that “help” PEO prior to Milestone B
Driving behavior [P 13, 0, 0] [E 0, 3, 0]
What to define in the RFP, award fee language
Emphasize usability to government and contractor
SW Leadership meeting across all ACAT 1 programs including Chief SW Engineers [P 11, 3, 0] [E 3, 0, 0]
Include Chief System Engineers, both government and contractor
Maybe hold this at the NDIA Sys Engr meeting