The Performance Map



I'm fixing a hole where the rain gets in
And stops my mind from wandering
Where will it go
I'm filling the cracks that ran through the door
And kept my mind from wandering
Where will it go
And it really doesn't matter if I'm wrong
I'm right
- The Beatles, "Fixing a Hole"

This page provides an overall view of the various nodes (parts) of the performance map shown below.


Performance can simply be defined as "focused behavior" or "purposeful work" (Rudman, 1998, p. 205). That is, jobs exist to achieve specific and defined results (outputs) and people are employed so that organizations can achieve those results. Thus, performance is what organizations need from employees to achieve their goals. Note that performance should not be confused with activity. For example, meetings, operating machines, etc. are work activities. These activities must be put into context with what the organization wants employees to do, and how well. Thus, focused behavior that achieves purposeful results is considered "optimal" performance.

When the "actual" performance does not meet the "optimal" performance, then a "performance gap" occurs (optimal - actual = gap). If this gap between actual performance and optimal performance starts to have an adverse impact, then it may be thought of as "poor" performance: specific, agreed-upon deviations from expected behavior (Mitchell & O'Reilly, 1983). Note that deviations must be defined and agreed upon by both the evaluator and the performer.
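The gap arithmetic above (optimal - actual = gap) can be sketched in a few lines of Python. This is only an illustration; the function name and the example numbers are hypothetical, not part of any cited model:

```python
def performance_gap(optimal: float, actual: float) -> float:
    """Gap between optimal and actual performance (optimal - actual = gap)."""
    return optimal - actual

# Hypothetical example: units produced per shift.
# A positive gap means actual performance falls short of optimal.
gap = performance_gap(optimal=100.0, actual=85.0)
print(gap)  # prints 15.0
```

Whether a gap of this size counts as "poor" performance is, as the text notes, a matter of agreed-upon deviation between evaluator and performer, not something the arithmetic alone decides.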


An output occurs in every organization system and the processes within it. Outputs consist of people, technology, materials, information, and time. Performance can cause two types of impacts (results) upon the output:
  • Positive Impact: The performance, combined with time, material, information, and technology, causes a "desired impact" upon the output. In turn, the output cues the performer to the adjustments that must be made, which further provides a positive impact upon the outcome. Hence, performance not only affects the output; the output also affects the performance.
  • Negative Impact: Any gap in the optimal performance causes an "unwanted impact" upon the output. The gap may be caused by a lack of training, environmental factors, motivators, etc. In turn, this performance gap has a negative impact (inverse relationship) upon the output that miscues the performer, which further compounds the problem.

Performance Analysis

The Performance Analysis helps us to describe the problem, determine the drivers, and then select a performance solution (Rossett & Sheldon, 2001, p. 33).

1. The initial step measures the gap to determine its magnitude and the actual impact it has upon the organization (describes the problem). While the performer, supervisors, managers, peers, etc. might realize that there is a performance problem, it normally takes an analysis to determine its full extent. That is, what exactly is wrong and what should be happening?

2. Next, the "drivers" (causes) of the gap are determined. Although it is often easy to see that something is wrong, it normally takes some real detective work to determine its root cause. And if the root cause is not discovered, the problem will continue no matter what you do.

3. Finally, a solution system is selected. The Performance Analysis may take several iterations, or perhaps just one, to measure the gap, determine all the drivers, and then select a partial solution. So after the first iteration, we need to look back upon the gap to determine if the selected solution will indeed bridge that gap. This process may be repeated several times until a full set of solutions is determined. In the end, the sum of the "partial solutions" becomes the Solution System.
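The iterative cycle described in these three steps might be sketched as a simple loop. This is a simplified illustration only; the function parameters and data structures are assumptions, not part of the cited analysis models:

```python
def performance_analysis(measure_gap, find_drivers, select_solution,
                         max_iterations=10):
    """Iterate: measure the gap, determine its drivers, select a partial
    solution, then look back at the gap again. Stop when the accumulated
    partial solutions bridge the gap (or after max_iterations)."""
    solution_system = []  # the sum of the "partial solutions"
    for _ in range(max_iterations):
        gap = measure_gap(solution_system)  # re-measure after each iteration
        if gap <= 0:       # gap bridged: optimal - actual <= 0
            break
        drivers = find_drivers(gap)
        solution_system.append(select_solution(drivers))
    return solution_system
```

Here `measure_gap` re-measures the gap against the partial solutions selected so far, mirroring the "look back upon the gap" step in the text.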

Training Requirements Analysis

If skills, knowledge, and attitude are required, then a training requirements analysis is performed. A Training Requirements Analysis is a useful approach for designing training that will respond to the required needs after you have defined them (Watkins & Kaufman, 1998). This is done by identifying the learners, job/task, and setting (Molenda, Pershing, & Reigeluth, 1996). In addition, a "cognitive job analysis" generally needs to be performed (Clark, R., 2002).

For training, the following three processes (nodes) are then implemented:

1. Design

The design process delivers a blueprint of the strategies that support the performance objectives outlined in the analysis processes. If training is required, then learning objectives and learner assessments (tests) are developed (Tovey, 1997, p. 44). In addition, the basic "architecture" is laid out (Clark, R., 2000).

2. Development

Most instructional design theories suggest that at least three things need to take place during the development process: 1) content is gathered, 2) context (experience that produces performance) is added, and 3) the material is chunked and sequenced.

3. Learning

In the ISD model, this is the Implementation phase. We learn by experiences that allow us to (Wertenbroch & Nabeth, 2000):
  • Absorb (read, hear, feel)
  • Do (activity)
  • Interact (socialize)
  • Reflect (Dewey, 1933)

Performance Requirements Analysis

If non-training performance solutions are required (such as OD, HRD, & HRM), then the requirement is passed on for a performance requirements analysis.


Goals are the specific standards or expectations that customers have for products or services. The goal minus the actual performance is the performance gap that must be corrected (goal - actual = gap).


The performance improvement solution is then designed and implemented via a design process.

Knowledge Management

The knowledge management process helps to ensure that the overall performance efforts work together to grow the organization.


Leadership is provided to ensure that the performance initiatives are managed.


Coaching helps the performers in reaching the next level of performance.

Performance Management

And of course, this ties back into where this page started.


Two types of evaluations are normally performed throughout the entire process. A formative evaluation is a method of judging the worth of a program while the program activities are "forming" or happening. This part of the evaluation focuses on the process. Formative evaluations are basically done on the fly.

A summative evaluation is a method of judging the worth of a program at the end of the program activities (summation). The focus here is on the outcome.


Clark, R. (2000). Four architectures of instruction. Performance Improvement, v. 39, pp. 31-38.

Clark, R. (2002). The new ISD: Applying cognitive strategies to instructional design. Performance Improvement, v. 41, n. 7, pp. 8-14.

Dewey, J. (1933). How We Think: A Restatement of the Relation of Reflective Thinking to the Educative Process. Boston: D.C. Heath.

Mitchell, T. R. & O'Reilly, C. A. (1983). Managing poor performance and productivity in organizations. Research in Personnel and Human Resources Management, 1, pp. 201-234.

Molenda, M., Pershing, J. & Reigeluth, C. M. (1996). Designing instructional systems. The ASTD Training and Development Handbook. Craig, Robert (editor). New York: McGraw-Hill, pp. 266-293.

Rossett, A. & Sheldon, K. (2001). Beyond the Podium: Delivering Training and Performance to a Digital World. San Francisco: Jossey-Bass/Pfeiffer, p. 67.

Rudman, R. (1998). Performance Planning and Review. Warriewood, Australia: Business & Professional Publishing.

Watkins, R. & Kaufman, R. (1998). An update on relating needs assessments and needs analysis. The 1998 ASTD Training and Performance Yearbook. Woods, John & Cortada, James (Editors). New York: McGraw-Hill, pp. 123-132.

Wertenbroch, A. & Nabeth, T. (2000). Advanced Learning Approaches & Technologies: The CALT Perspective (PDF file). The Center for Advanced Learning Technologies.



For author and copyright information, see the About page.
Created June 11, 2004
Updated March 5, 2008


A Big Dog, Little Dog and Knowledge Jump Production.