History
In 1975, Florida State University developed the ADDIE model for the US Army as a systematic way to create programs that train individuals for particular jobs and tasks. [1] Since then, the ADDIE model has undergone several iterations and is the basis for many instructional design models. Still, the main phases remain: analysis, design, development, implementation, and evaluation. Over the years the model has become more dynamic and interactive than its original hierarchical version.
Process
ADDIE is a five-step process used by instructional designers and curriculum developers, consisting of analysis, design, development, implementation, and evaluation. The output of each phase is intended to feed into the next. Each phase of the ADDIE model is explained in more detail below.
Analysis
This phase identifies the context of the problem and the current state. In the educational setting it identifies the audience (including their needs and characteristics) as well as the course goals or objectives. In the business setting it compares current performance with desired performance (gap analysis) and establishes the business case for addressing the problem at hand. Questions to address in this phase may include:
- What is the current performance level?
- What is the desired performance level?
- Who is affected?
- What resources are available?
- What is the business case for this change?
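The gap analysis mentioned above amounts to comparing current and desired performance on each metric. A minimal sketch, with entirely hypothetical metric names and values:

```python
# Hypothetical gap analysis: desired minus current on each performance metric.
current_performance = {"orders_per_hour": 12, "error_rate_pct": 8.0}
desired_performance = {"orders_per_hour": 20, "error_rate_pct": 2.0}

performance_gap = {
    metric: desired_performance[metric] - current_performance[metric]
    for metric in desired_performance
}
# Any nonzero gap is a candidate for the business case.
```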
Design
This phase crafts solutions using the analysis completed in the previous phase. In the educational setting it determines the approach to take when developing new curricula. In the business setting it determines the plan for addressing the problem identified in the analysis phase. Questions to consider in this phase may include:
- How has the problem been addressed in the past?
- What approaches will best address the problem (job aids, training, clear expectations)?
- How quickly do changes need to be made?
- What will it look like?
- How should completion be determined?
Development
This phase takes the outcomes of the design phase and builds them out. A pilot period also provides formative feedback on how well the tool/training meets its desired objectives; this feedback can then be incorporated into the design and development before implementation. Questions to think about during this phase may include:
- What objectives (if any) have not been addressed?
- How engaged are participants?
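Answering the first question above (which objectives remain unaddressed) is essentially a set difference between the objectives planned in the design phase and those covered in the pilot. A sketch with hypothetical objective names:

```python
# Hypothetical pilot check: which design objectives does the training not yet cover?
planned_objectives = {"safety_basics", "equipment_setup", "incident_reporting"}
covered_in_pilot = {"safety_basics", "equipment_setup"}

unaddressed = planned_objectives - covered_in_pilot
# Unaddressed objectives feed back into design/development before rollout.
```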
Implementation
This phase rolls out the developed and tested tool/training to the full audience. Even during the implementation phase, feedback can be gathered and incorporated into updated designs. Questions to remember may include:
- What are the logistic costs for the implementation plan (printing, coordination, etc.)?
- What resources are needed outside of the tool/training (train-the-trainer sessions, handouts)?
Evaluation
Throughout the process, formative feedback can be used to ensure the tool/training meets its design specification. At the conclusion of the process, summative feedback should be used to ensure the tool/training has closed the performance gap identified in the analysis phase. Questions to assess may include:
- What areas do users still have difficulties with?
- Which metrics have improved the most, and how does that tie in with the tool/training?
- How might the tool/training have affected aspects outside the items determined in the analysis phase?
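The summative check described above (whether the gap from the analysis phase has closed) can be sketched by comparing post-rollout metrics against the analysis-phase targets. All metric names, values, and the lower-is-better convention below are hypothetical assumptions:

```python
# Hypothetical summative evaluation against analysis-phase targets.
targets = {"orders_per_hour": 20, "error_rate_pct": 2.0}
post_rollout = {"orders_per_hour": 21, "error_rate_pct": 2.5}

def gap_closed(metric, achieved, target):
    # Assumption: "_pct" metrics are error rates where lower is better;
    # everything else is throughput where higher is better.
    if metric.endswith("_pct"):
        return achieved <= target
    return achieved >= target

summative = {m: gap_closed(m, post_rollout[m], targets[m]) for m in targets}
# Metrics still marked False indicate a remaining performance gap.
```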