To my knowledge, PADI doesn't define "mastered." It's sort of like advertising "our pizza tastes best": if the criteria for taste aren't defined, the claim can't be proven one way or the other.
If you want PADI's definition of mastery, either look it up alphabetically in your most recent copy of the General Standards and Procedures or look at the portion of the definition I quoted in post #19.
BoulderJohn put PADI's definition here. I think I prefer NAUI's stand, which, if I understand it correctly, is to define a minimum skill set and then encourage instructors to take it up a notch. It's obvious to me that PADI's requirement to achieve "mastery" leaves the definition open to interpretation, and that allows folk to crap all over PADI's instructional methods because of it.
In my past roles as an educator, I have not only had to conduct performance evaluations, I have had to teach others how to conduct them. PADI uses that process up to a certain point. When the process is done correctly, it is stunningly accurate. The problem is that accuracy slips over time if the full process is not followed, and you can see that problem in this thread, with all the people saying what mastery means to them. This will be a long post as I try to describe a somewhat complicated process.
Let's begin with the fact that in MOST (not all) performance assessments, any attempt to define rigidly and precisely what a properly performed performance looks like is foolish. It really can't be done, and the attempt to do so will mask the truly subjective nature of the definition behind a façade of objectivity. All professional organizations that do this kind of assessment regularly know this and don't even try. Take the College Board, for example, in its assessment process for the open-response questions on the SAT and Advanced Placement exams. Anyone looking at the scoring criteria for those exams for the first time would ask, "How can you possibly score anything accurately with such vague and general descriptors?" Yet AP open-response questions are scored to the same number on a 1-9 scale by two different assessors more than 90% of the time. This ability of two different assessors to give the same score on a performance assessment is called inter-rater reliability.
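(If you're curious how that agreement figure works mechanically, here's a minimal sketch in Python of the simplest version, exact agreement: the fraction of items on which two raters give the identical score on that 1-9 scale. The scores below are made up purely for illustration; the College Board's actual statistics are more involved.)

```python
# Minimal sketch: exact-agreement rate between two raters.
# The scores below are invented for illustration only.

rater_a = [7, 4, 9, 2, 6, 5, 8, 3, 7, 6]
rater_b = [7, 4, 8, 2, 6, 5, 8, 3, 7, 5]

# Count items where both raters gave the exact same score.
matches = sum(1 for a, b in zip(rater_a, rater_b) if a == b)
agreement = matches / len(rater_a)

print(f"Inter-rater agreement: {agreement:.0%}")  # 80% in this toy example
```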
High levels of inter-rater reliability are achieved through the training process, which is sometimes referred to as
calibration. It is a process by which all assessors' individual definitions of mastery
at that level become consistent. It takes surprisingly little time. The new assessor is asked to score a number of performances that have previously been scored by experts. After each attempt, their score is compared to the expert score, and that score is explained. Before long they are consistently giving the same scores as the experts. I have seen many Advanced Placement workshops in which AP teachers (not the national assessors) are shown the scoring system and go through a mini-version of the assessor training. They pretty much have it down after a little more than an hour of training. They now have in their heads a common image of mastery at that performance level to which they can compare new student performances accurately.
The training process for instructors in every scuba agency I know of follows roughly that same model. Instructors have supposedly been given the same kind of training so that they will have approximately the same concept of mastery as other instructors. The problem comes with what follows.
As time passes, the accuracy of assessors begins to slip. Some start to score items too high, and some start to score them too low. Organizations like the College Board constantly monitor assessor performance to guard against this. If an assessor's scores stray too far or too often from the scores other assessors are giving, or if they start scoring inaccurately on the expert-scored items that have been seeded among the items to be scored as a check on assessor accuracy, they are pulled out of the assessor group and "recalibrated."
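Just to illustrate the idea of those seeded expert-scored items, here's a hypothetical sketch of the check: an assessor gets flagged for recalibration if their scores on the seeded items drift too far from the expert scores too often. The item names, tolerance, and threshold below are all made up; no agency or testing organization publishes its rules in this form.

```python
# Hypothetical sketch of the "seeded item" check described above.
# Items with known expert scores are mixed into an assessor's queue,
# and the assessor is flagged for recalibration if they miss too many.

expert_scores = {"item_17": 6, "item_42": 8, "item_63": 3}

def needs_recalibration(assessor_scores, expert_scores, tolerance=1, max_misses=1):
    """Return True if the assessor missed more seeded items than allowed.

    A "miss" is a score that differs from the expert score by more than
    `tolerance` points. All names and thresholds here are illustrative.
    """
    misses = sum(
        1
        for item, expert in expert_scores.items()
        if abs(assessor_scores.get(item, 0) - expert) > tolerance
    )
    return misses > max_misses

print(needs_recalibration({"item_17": 6, "item_42": 5, "item_63": 3}, expert_scores))  # False (1 miss)
print(needs_recalibration({"item_17": 9, "item_42": 5, "item_63": 3}, expert_scores))  # True (2 misses)
```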
The only way scuba agencies can do this is by having Course Directors (or the equivalent) monitor instructor behavior and take aside for retraining those who are not performing appropriately. I worked in a shop that did this. That means that a shop must have someone at that level of training on the staff with the paid time to monitor instructor performance. That is a very expensive proposition, and shops that are struggling to get by cannot afford it. In the case of small shops and independent instructors, it simply does not happen. To do it effectively world-wide would be an enormously expensive process, and I can't see it happening.
And so that is what happens with scuba. In time many instructors slip and need recalibration. You see it in this thread, as one instructor after another claims to have a different definition of mastery from others. It is true of all agencies.
Here is a real-life story of what can happen when such processes are not followed. A nearby elementary school participated in the annual science fair, and they decided to make a big deal of it at the 6th grade level. They brought in 16 science experts from the community to act as judges, telling them to score student performances on a 100-point scale but giving them little to no instruction on what mastery meant at that level. Each student performance was judged by 3 of these experts, and the student's final score was the average of the three. One student had worked incredibly hard and produced an exemplary product. Two of the three judges gave him a perfect score of 100, and their general comments were that the project might be good enough to win the state championship. The third judge gave it a 60, the highest score of all the students he judged, with no explanation of why. As a consequence, the student's average score (about 87) fell below the minimum level to advance to the next level of competition. This one judge had decided that "mastery" meant performance at the college level, and so no 6th grader could possibly make the grade. None of the students he judged moved on to the next level of competition.