It is a mistake to look for more specific language in any agency's standards, because it is not really the language of the standards by which students are evaluated.
When standards are used to evaluate performance in other areas of education, they are often far more vague than any scuba agency's standards. If you looked at the descriptors the College Board uses to score essays on the AP exam, you would think it impossible to score with anything remotely close to consistency from one evaluator to another. Yet they achieve what is called inter-rater reliability at a rate greater than 90%. That is because scoring like this is done not so much by the wording of the descriptors as by comparison with benchmark performances.
The process begins with experienced evaluators examining a set of actual performances and agreeing on the skill level each one demonstrates. Those become the benchmark performances. Then the evaluators who will actually score the exams are trained on those benchmarks. They learn what it looks like when a student achieves a specific score, and they use that experience to score future exams. This process is called calibration. As someone who has both been trained to do this and trained others to do it, I assure you that it is remarkably accurate--meaning that two different raters will independently score the same performance at the same level.
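For anyone curious how "greater than 90%" gets measured, here is a rough illustration (the scores are made up, not real College Board data) of how inter-rater agreement can be quantified. Simple percent agreement counts how often two raters give the same score; Cohen's kappa is a standard refinement that corrects for agreement you would expect by chance alone:

```python
from collections import Counter

# Hypothetical AP-style scores from two raters grading the same ten essays.
rater_a = [4, 3, 5, 2, 4, 3, 5, 4, 3, 2]
rater_b = [4, 3, 5, 2, 4, 4, 5, 4, 3, 2]

# Percent agreement: fraction of essays given the identical score.
agreement = sum(a == b for a, b in zip(rater_a, rater_b)) / len(rater_a)
print(f"Percent agreement: {agreement:.0%}")  # 90% in this made-up sample

# Cohen's kappa: subtract out the agreement expected purely by chance,
# based on how often each rater uses each score.
n = len(rater_a)
counts_a = Counter(rater_a)
counts_b = Counter(rater_b)
expected = sum(counts_a[s] * counts_b[s] for s in counts_a) / n**2
kappa = (agreement - expected) / (1 - expected)
print(f"Cohen's kappa: {kappa:.2f}")
```

The point of calibration training is to push numbers like these up: raters who have internalized the same benchmarks agree far more often than the vague wording of the descriptors alone would predict.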
This system is used in all such grading that I know of--AP exams, SATs, LSATs, etc.
PADI, along with (I believe) all other agencies, uses the same system. DM candidates must view and perform skills, as must AI candidates, as must instructor candidates at the IDC level, and (finally) the instructor candidate must perform skills at the IE. In theory, when an instructor views a student performance, that performance is compared with the benchmark performances the instructor has seen in the past and scored accordingly.
Where the system falls down is in the follow-up. In a system such as the one used by the College Board, inter-rater reliability is checked continually. Each performance is scored by two people independently, and discrepancies between the two must be addressed. Benchmark papers are routinely sent back through the system to make sure they get the same scores as before. An evaluator who starts to stray consistently from the norm is recalibrated.
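To make that concrete, here is a minimal sketch (purely illustrative--the benchmark scores, threshold, and function name are all hypothetical) of the kind of drift check described above: known benchmark performances are slipped back into a rater's queue, and a rater whose scores consistently deviate from the agreed benchmark scores gets flagged for recalibration:

```python
# Agreed-upon scores for benchmark performances (hypothetical values).
BENCHMARKS = {"essay_101": 4, "essay_102": 2, "essay_103": 5}
DRIFT_LIMIT = 0.5  # hypothetical: mean deviation allowed before flagging

def needs_recalibration(rater_scores: dict[str, int]) -> bool:
    """Return True if this rater's average deviation from the benchmark
    scores exceeds the allowed drift."""
    deviations = [rater_scores[eid] - true for eid, true in BENCHMARKS.items()]
    mean_drift = sum(deviations) / len(deviations)
    return abs(mean_drift) > DRIFT_LIMIT

# A rater who consistently scores one point high gets flagged...
print(needs_recalibration({"essay_101": 5, "essay_102": 3, "essay_103": 5}))  # True
# ...while a rater matching the benchmarks does not.
print(needs_recalibration({"essay_101": 4, "essay_102": 2, "essay_103": 5}))  # False
```

It is exactly this ongoing, systematic re-checking that is easy to run when all the scoring happens in one place and hard to run when the evaluators are spread across the globe.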
PADI, along with (I believe) all other agencies, tries to do this through the Course Director system or its equivalent. Calibration is also maintained by the fact that many instructors become involved in new instructor training. In a strong instructional program, a CD will maintain standards. But in a system like scuba instruction, with instructors scattered all over the world, it is extremely hard to maintain that level of calibration, and instructors' views of standards will shift to some degree. That is a flaw to be overcome.
I have been a student in 4 different agencies, and I frankly don't see much difference in this regard.