Test Plan Optimization Methodology Workshop

A Workshop on an Optimization Approach to Integrating Modeling and Simulation with Test and Evaluation in Weapon System Development

I. Introduction

The Office of the Secretary of Defense has developed a framework called Simulation, Test and Evaluation Process (STEP) to integrate Modeling and Simulation (M&S) into the Test and Evaluation (T&E) process. A Test and Evaluation Research and Education Center (TEREC) Workshop is planned for January 1999 at Georgia Tech to address this integration. This paper provides prospective workshop attendees with an overview of that integration effort, the status of the supporting model development, and the issues the workshop will address.

This workshop will describe an initial Optimization Model (OM) and related efforts, explore options for generating inputs to the OM, and identify areas needing further development. T&E teams and acquisition decision-makers will use the products of this workshop to create capabilities that support informed decisions on T&E support to acquisition. The workshop objective is to develop a lexicon and taxonomy for the optimization and integration of M&S and T&E methodologies and concepts. An integrated life cycle planning support mechanism requires a model and associated data that allow test planners to generate a T&E plan providing the maximum amount of reliable and credible system information with the most efficient use of resources. The OM approach described in this paper therefore proposes a change in focus from isolated acquisition milestones and test events to an integrated life cycle perspective, accomplished by planning and managing various combinations of activities to guide the T&E team and acquisition decision-makers toward the desired optimum results. The resulting recommendations will provide direction for developing these methodologies and procedures into operational tools. The success metric for this workshop is an advance in understanding of the challenge of developing integrated T&E strategies, leading to concepts for implementing those strategies as operational tools.

Background

The Department of Defense (DoD) has undertaken a significant effort to integrate testing, training, and M&S to provide acquisition officials with the best possible information for weapon system acquisition decisions at the least cost. A comprehensive methodology is required for an integration process that results in an efficient and effective blending of testing, training, and M&S across the broad spectrum of weapon systems.

The description of the acquisition process usually consists of two-dimensional charts with horizontal and vertical events and paths. The horizontal dimension identifies events in terms of occurrences in time (e.g. milestones, tests, modeling, analyses, decisions, and other activities). The vertical dimension is subdivided into paths that provide the information to be presented at each event. These paths, including the M&S, T&E, and Information Base paths, represent the activities that provide the necessary ingredients for each event.

The aim of the process is to achieve horizontal and vertical integration of the full spectrum of acquisition activities and events. Several horizontal paths integrate vertically with each other. The Information Base path provides information to assist in developing these activities and events. In the M&S path, models are continuously vertically integrated with and calibrated to the latest test conditions and data. The T&E path generates most of the system information. The intent is to achieve a process that begins prior to Milestone 0 (MS0) with physics-based and engineering models and continues to generate, maintain, and use all information required throughout the system life cycle. At a minimum, the information generated by the integrated environment should produce taxonomies supported by closed-form relationships, calculi of quantitative operators, and their logical applications.

The Army Operational Test and Evaluation Command (OPTEC) has initiated a program to develop a decision model to implement this process. In 1997, a Proof of Principle (PoP) was used to demonstrate a concept for integrating M&S with traditional field test methods. A follow-on effort was initiated in 1998 shortly after the completion of the PoP to continue the development of a versatile Optimization Model (OM) for assessing the cost and effectiveness of alternative test, simulation, and evaluation strategies. The process is based on a foundation of operations research, mathematical optimization, statistics, analyses, and validation, verification, and accreditation techniques. However, because this process is in the early stages of its development, several questions must be addressed to move the model closer to operational utility. This paper summarizes the status of the model development and introduces the issues that must be addressed.

The work done is a first step toward achieving the objectives of the STEP process. Note that the proposed integrated process adds no new programs or activities. The plan is to utilize current programs and activities in more systematic and objective ways, to improve the availability (i.e. the timeliness, quantity, and accuracy) of the required information, and to achieve this through the calibration of M&S and T&E. Some expansion of current activities will occur, with the bulk of the additional resources required in the Model Calibration path.

II. Conceptual Application of the Integration Methodology

When fully implemented, the STEP integration methodology is expected to provide a decision support tool that will be used consistently and iteratively throughout the system acquisition process. Accordingly, the integration of M&S with tests will occur over time and across "levels" of resolution and operational realism, spanning the range from physics-based models through developmental, operational, and field tests. At each stage, the objective is to refine the extent and quality of the information base by expanding results, validating existing data, and improving the credibility of the tests and simulations used to generate the information. It is important to note that the information base can and should be sustained beyond a single system and used to support the development of subsequent systems.

This integration methodology operates at two levels that exchange information. The higher, decision-making level takes into account the efficacy and costs of models, simulations, and testing in devising effective programs for acquiring the necessary knowledge about the system under test. The lower, execution level considers the detailed dimensions of the system knowledge sought and the attributes of the models, simulations, and tests that make them more or less suitable for gathering that knowledge. The process includes analyzing those knowledge needs, determining and quantifying the suitability of modeling, simulation, and/or testing to meet them, and eventually developing enhancements to improve M&S suitability.

At the higher level, input to the optimization (decision) model includes programmatic and system acquisition information, while the primary output is a balanced and affordable T&E program integrating both M&S and testing. The lower level uses inputs from the acquisition program and returns data to drive the optimization model. That data derives from an analysis of the system knowledge to be acquired and of the feasible M&S and test alternatives. It responds with quantified assessments of the credibility of each alternative and of the benefits derived (e.g. statistical variability) to feed the optimization.

The categorization of necessary input into the OM consists of credibility, benefits, and constraints. Credibility is the probability of emulating test or true results. Benefits are defined in terms of coverage of the operational environment and the amount and timing of learning at test events. Constraints fill in the remainder of the information required by the OM. The information includes resources (funds, time, availability of assets, etc.) and other measures (e.g. information required by the developer to succeed, the limits of achievable learning, spin-offs such as validated output that can be used to support analyses, and factors resulting in increased credibility).
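As an illustration only, these input categories might be recorded per candidate option in a structure such as the following sketch; the field names and types are hypothetical choices made here, not part of the OM documentation:

```python
# Hypothetical sketch of the per-option input record implied by the three
# categories above (credibility, benefits, constraints). Field names are
# illustrative assumptions, not the OM's actual data definitions.
from dataclasses import dataclass, field

@dataclass
class OptionInput:
    parameter: str                # system capability about which information is sought
    test_condition: str           # scenario under which the information is gathered
    means: str                    # "M&S", "field test", or a combination
    credibility: float            # probability of emulating test or true results
    coverage_benefit: float       # coverage of the operational environment
    learning_benefit: float       # amount and timing of learning at the event
    cost: float                   # constraint inputs: funds required
    duration: float               # ... time required
    assets_required: list = field(default_factory=list)   # ... availability of assets
```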

Optimization Model (OM)

The OM is designed to allow planners to select combinations of M&S and/or tests that meet the knowledge acquisition objectives of the program. The model considers the system as a whole and allocates resources to maximize the benefits and credibility associated with the overall T&E program. The OM variables are described in Figure 1 at the highest level of detail (i.e. for a single event in the overall test program).

Figure 1. Optimization Model


Single Event Mathematical Programming Formulation

For the situation in Figure 1, the following binary programming formulation is used to select the M&S and/or test options that gather the necessary system information. For ease of interpretation, this model avoids the multi-event time horizon of an operational model.

Binary Programming Formulation


These terms are defined in Figure 1. In essence, the objective function maximizes the product of parameter importance, credibility, and benefits over the set of options chosen by the model, subject to a set of constraints. The constraints illustrated require that a budget be met and govern how many options must be, or can be, chosen in each parameter/test-condition cell. Many other constraints can be introduced.
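One possible rendering of this single-event formulation is sketched below. Because Figure 1 is not reproduced here, the index convention (i for parameters, j for test conditions, and k, l distinguishing the candidate M&S and test options) and the coefficient symbols other than the E_ijkl decision variables are assumptions made for illustration:

```latex
\begin{align*}
\max \quad & \sum_{i,j,k,l} I_i \, C_{ijkl} \, B_{ijkl} \, E_{ijkl} \\
\text{subject to} \quad
  & \sum_{i,j,k,l} c_{ijkl} \, E_{ijkl} \le T
      && \text{(budget)} \\
  & L_{ij} \le \sum_{k,l} E_{ijkl} \le U_{ij}
      && \text{for each parameter/test-condition cell } (i,j) \\
  & E_{ijkl} \in \{0,1\},
\end{align*}
```

where E_ijkl = 1 if the corresponding option is selected, I_i is the importance of parameter i, C_ijkl and B_ijkl are the credibility and benefit of that option, c_ijkl its cost, T the budget, and L_ij and U_ij the lower and upper limits on how many options must be, or can be, chosen in each cell.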

The system under test is described by parameters, which are the major capabilities of subsystems being tested. Information is sought under a number of test conditions or scenarios. Information may be gathered through field test, through simulation, or through a combination. Parameters may vary in importance. Each M&S or test option may have a different level of credibility and provide a different level of information benefit, depending on the nature of the method and structure of the test. The objective function is structured to maximize benefits and investment in the most important test parameters and in the most credible options. The model maintains a budget and meets certain selection requirements and restrictions to provide feasible answers.
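The following minimal sketch illustrates this structure on a toy instance, choosing one option per parameter/test-condition cell by brute-force enumeration. All numbers and names are hypothetical, and an operational OM would use an integer-programming solver rather than enumeration:

```python
# Toy single-event instance of the selection problem described above.
# Importance, credibility, benefit, cost, and budget values are invented
# for illustration; the one-option-per-cell rule is an assumption.
from itertools import product

parameters = {"P1": 0.7, "P2": 0.3}          # parameter -> importance weight
conditions = ["C1", "C2"]                    # test conditions / scenarios
options = {                                  # option -> (credibility, benefit, cost)
    "M&S":        (0.6, 0.8, 1.0),
    "field test": (0.9, 1.0, 3.0),
}
budget = 11.0

cells = [(p, c) for p in parameters for c in conditions]

best_value, best_plan = -1.0, None
for choice in product(options, repeat=len(cells)):           # one option per cell
    total_cost = sum(options[o][2] for o in choice)
    if total_cost > budget:                                   # budget constraint
        continue
    value = sum(parameters[p] * options[o][0] * options[o][1]  # importance * credibility * benefit
                for (p, _cond), o in zip(cells, choice))
    if value > best_value:
        best_value, best_plan = value, dict(zip(cells, choice))

print(f"objective = {best_value:.3f}")
print(best_plan)
```

With these invented numbers the budget rules out field testing every cell, so the enumeration assigns M&S to one of the less important parameter/test-condition cells and field tests to the rest.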

The testers of each event, through historical experience and statistical calculations, define the E_ijkl values that identify the options. Costs are determined through standard cost analysis techniques and models. Some discussion of the methodologies and factors for deriving importance, benefits, credibility, and constraints appears in the PoP documentation (e.g. computation and applicability of bounds on error, Bayesian statistics, fuzzy set theory, and Dempster-Shafer logic) and will be addressed in greater detail in future efforts. Other approaches will also be evaluated.
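As one hedged example of how a credibility input might be derived from historical experience using the Bayesian statistics mentioned above, a simple beta-binomial update over past model-versus-test comparisons could be used; the prior and the agreement counts below are hypothetical:

```python
# Hypothetical Bayesian estimate of a credibility input C_ijkl: the probability
# that an M&S option emulates test results, updated from historical comparisons.
a, b = 1.0, 1.0                      # Beta(1, 1) prior: no initial preference
agreements, trials = 14, 18          # invented history: model matched test 14 of 18 times

a_post = a + agreements
b_post = b + (trials - agreements)
credibility = a_post / (a_post + b_post)   # posterior mean, usable as the C_ijkl input

print(f"estimated credibility = {credibility:.2f}")
```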

Expanded Mathematical Programming Formulations

There are several ways the above formulation could be expanded. In addition to expansions of the mathematical programming methods (e.g. network models, dynamic programming, Bayesian networks), there are other excursions that can be implemented (e.g. multiple test events for a single system, multiple systems, facilities utilization). Current thinking anticipates that similar OMs will be developed for the entire acquisition cycle of a system and for multiple systems. The obvious implication is that the benefits and credibility of each test means may depend on when that means is applied; for example, M&S may be more credible later in the process, after some calibrating test results have been obtained. This model is currently under development and will be used to help assess the overall complexity of the process. Using the inputs from the test events, a system planning OM is developed and applied to allocate resources so that the objectives of the testers and decision makers are met and the best courses of action identified. Sensitivity analyses will play a major role in determining the robustness of, and the confidence in, the results.
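As an illustration of one such excursion, and assuming only that the decision variables and coefficients acquire an event index t, a multi-event variant of the single-event sketch above might take the form:

```latex
\begin{align*}
\max \quad & \sum_{t}\sum_{i,j,k,l} I_i \, C_{ijklt} \, B_{ijklt} \, E_{ijklt} \\
\text{subject to} \quad
  & \sum_{i,j,k,l} c_{ijklt} \, E_{ijklt} \le T_t
      && \text{(budget for each event } t\text{)} \\
  & \text{precedence constraints requiring designated M\&S selections at event } t \\
  & \quad \text{to follow calibrating test selections at earlier events } t' < t, \\
  & E_{ijklt} \in \{0,1\},
\end{align*}
```

where the event-dependent credibility C_ijklt captures the point that M&S may become more credible once calibrating test results exist.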

Other factors likely to affect expansion of the OM include the practicalities and realities of real test/exercise and model development environments, and the ingenuity required to overcome difficulties in reflecting real environments in the models. Examples of such difficulties include measuring interactions between and among factors and their consequences, including the synergistic effects of complex systems.

III. Development Issues

Several issues must be addressed at the Workshop in progressing toward M&S and T&E integration:

Other benefits to be considered are:

These and other issues will be the primary subjects of discussion in the TEREC Workshop to be held in January 1999 at Georgia Tech.
