Part of the problem with any standard is determining the level of expectation from one instructor to another; this varies and is subjective from one instructor to the next. As students, we expect instructors to be "Masters" of their material in knowledge, skill, and control, able to perform to the highest standard on request and routinely, as a norm of practical and academic application.
Poor training and practice result in poor instructors. You only know what you know and what you have been exposed to during your own instruction. This is where a curriculum stating clear and concise expectations is required; it is not easy to do, but it is achievable if measurable marks are established.
For example, take a brand-new OW diver: is it reasonable to expect them to hold buoyancy and trim within a limited range of movement, say 6 inches up or down, with little sculling of fins and hands, whereas a cavern student at the end of a class should be able to hold buoyancy and trim within 6 inches total with no sculling of hands or fins?
Knowledge and skill mastery should be held to a higher bar as one progresses through the levels. Instructors, IMHO, should never pass a fault and should always set the expected example. If a student takes longer, they take longer.
[h=2]mastery[/h]
/ˈmɑːstərɪ/
noun (pl) -teries
1. full command or understanding of a subject
2. outstanding skill; expertise
3. the power of command; control
4. victory or superiority
In my past vocation, I used to teach how to write effective standards and how to write effective scoring guides for performance assessments on those standards. I also used to train teams of assessors to score performances so that their scores would be consistent from one assessor to the next.
You describe an example of a way to put precise language into such a standard, and it is one that can work fairly well. You will be quite frustrated, though, if you try to accomplish that throughout the process. In this case the numbers you used work, but usually bringing numbers into the system does more harm than good. They introduce what I used to call "the illusion of objectivity" into the process, an illusion that can actually create inaccuracy in scoring. If you look at the scoring guides for major assessments like Advanced Placement exams, the SAT, the LSAT, etc., you will be stunned by the subjective language in those guides. Assessment expert Grant Wiggins uses the phrase "trained assessor judgment" to explain why this can work; I actually mentioned it earlier. In judging a performance, an assessor compares what he or she is seeing with a mental model of what a standard performance looks like. The key to consistent scoring is to make sure everyone is working from the same mental model.
As I said before, the instructor training process is supposed to do this. One thing that could help keep things consistent would be a centralized set of videos showing different levels of student performance. If an agency--or several agencies--were to make a collection of such videos and then have their top people give the performances "official" ratings, it could do wonders for instructor training. More importantly, if that collection were made public, it would be useful for students as well. Believe me, the best way to prepare students for an AP exam is to show them samples of actual past student performances at different levels of quality.
When talking about the idea of mastery, it is a mistake to go to a dictionary definition. When the word is used in instructional settings with standards, as it is with scuba, the context is important. The idea of "mastery" with standards comes from the theory of mastery learning as described by Benjamin Bloom. The concept has evolved since Bloom introduced it decades ago, but it is very much the basis for the use of the word "mastery" in scuba instruction. Using any other definition of the word "mastery" is out of context.