Is there an instructor crisis?

A businessman understands the correct accounting to use for time and materials.

3-hour class, $60, is $20/hour, right? NOPE! You need to "bill" for everything: time driving, marketing, time talking to prospective customers, time on follow-up classes for people who nearly passed, time driving to/from the shop to get fills, time spent on paperwork, time taking additional classes, time doing research. Money for gas, fills, equipment maintenance, park fees, pool fees, supplies of any kind (even pencils or mask defog), money for taking classes, insurance costs, or even the additional cost of eating out. Because I'm not an instructor, I'm probably missing a dozen things from both lists.

An owner doesn't have to teach the class to know they'd be earning -$8/hour (that's negative dollars) from just back-of-the-napkin calculations.
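For anyone who wants to run the numbers themselves, here is a minimal back-of-the-napkin sketch in Python. Every figure in it is an assumption chosen only so the output reproduces the -$8/hour above, not a real shop's numbers; plug in your own.

```python
# Back-of-the-napkin version of the math above.
# All numbers are assumptions chosen to reproduce the -$8/hour figure.

course_fee = 60.0        # what the student pays for the 3-hour class
classroom_hours = 3.0    # the hours the naive $20/hour math divides by

unbilled_hours = 2.0     # driving, marketing, paperwork, fills, follow-ups...
expenses = 100.0         # gas, fills, pool fees, insurance share, supplies...

naive_rate = course_fee / classroom_hours
true_rate = (course_fee - expenses) / (classroom_hours + unbilled_hours)

print(f"Naive rate: ${naive_rate:.2f}/hour")  # $20.00/hour
print(f"True rate: ${true_rate:.2f}/hour")    # $-8.00/hour, i.e. negative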

---

I'm reminded of my friend who likes to do mini dive-jobs. These jobs are almost always losses, but since it's his dive boat, I have little choice but to tag along. For example, a job to retrieve an item someone dropped: often the item's location is approximate, in murky water, or at an impractical depth, meaning there's almost no chance we'd find it. Or an anchor retrieval that turns out to be tangled in dozens of other anchors and steel cables in zero vis. Even if we succeed, we still end up wasting another 60 to 90 minutes meeting the guy at a dock. Those 60-90 minutes mean we might miss a dive, or the dive shop is closed by the time we're done (and neither of us lives close to a fill station).

Instead of that, we could have hit a location where sunglasses, iPhones, etc. are frequently dropped, and often come back with a bunch of sunglasses, with 1 to 6 of them each worth more than one of those bounties.
I used to get dive calls from a flyer I had pinned up out at the marina. I got a call one day to do a boat bottom cleaning that no one else wanted to do. The guy had bought the boat sight unseen and it was totally overgrown. So I met him there and got started. On those jobs it was time and materials at a rate of $75/hour (10 years ago).
It took about three hours of hard work using a couple of steel 72's. He needed some zincs, which I had in stock, so I sold him those too. While I was there, someone else came up and wondered if I'd have time to put a few zincs on his boat and look it over for blisters. And then a crab guy came up and needed some crab line undone from his prop.
I went out there to do one job and by the end of the day walked away with $600 cash.

Moral of the story: there is money in diving; you just need to know where to look.
Hint: It’s not teaching OW classes.
 
What do you mean? I think what happens in the States doesn't matter all that much to the international dive industry. Outside of Mexico and the Caribbean, American divers always seem to be a tiny minority. I think most customers who travel for diving are Europeans.
I don't think I've ever been to any dive destination that wasn't packed with Europeans. Even Mexico is full of Brits, Germans, Dutch, etc.
Same with all the big gear brands. Most seem to be European for some reason. It's really only the agencies that are from the States.

It would be really interesting to see how many certifications are sold in what country.
That’s actually encouraging to know.
 
There is no way that two different instructors will have the exact same "expectation" of the skills performed at the certification level. And even more important, no two instructors have the same expectation of what is considered to be "reasonably comfortable". Some instructors simply have higher expectations than others and there is no getting around that fact. This does not mean that they did not meet standards.
I have written about this in the past in great detail. The concept is called benchmarking. When you go through your IDC, you are supposed to be given an image of what constitutes a passing performance at the specific levels. You are supposed to compare that performance with what you see and judge accordingly. That is the process used in all performance evaluations in all fields. Yes, there will be some variance from instructor to instructor, but what was described in the example given in this thread clearly does not make the benchmark. Any instructor who would give a passing score to what was described is clearly wrong.

Around this time each year in the USA, Little League baseball players try out for teams. They go to a practice session in which they display their skills. As they play, coaches sit there and rate them. They score them on their expectations of what a player should be able to do at that age level to achieve mastery. At the same time, major league scouts are watching prospects and rating them as prospects for professional baseball. They are looking at the same skills as the Little League coaches, but the benchmark levels of mastery performance are much different when evaluating potential pros.

What you see is the concept of mastery at different benchmark levels. It happens in all areas of performance evaluation. You can give the same writing assignment to students in 5th grade, 10th grade, and college. Your expectation of what constitutes mastery varies by the different benchmark levels, and your ability to do that accurately depends on your training. That training is supposed to calibrate your evaluation ability.

It should take surprisingly little calibration to do the job. Advanced Placement course essays are graded for mastery on a 9-point scale, and the professional evaluators have about a 90% interrater reliability in their scoring. Every year AP teachers go to training sessions so they know how their students' work will be judged. In those sessions, they are trained as evaluators are trained--by looking at actual student performances. I have done several of these sessions, and in each one, we achieved that 90% interrater reliability after grading about 5 samples. I have also led training programs with about the same success rate.

That's on a 9-point scale. In scuba, we use a 2-point scale (pass or fail). A trained instructor who lets a clearly failing performance pass almost certainly made a conscious decision to do so.
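As a rough illustration of what that 90% figure means, here is a minimal Python sketch that computes simple percent agreement between two raters. All scores are hypothetical, and real AP scoring uses trained readers and more formal reliability statistics; this only shows the idea of raters matching on a 9-point versus a 2-point scale.

```python
# Simple percent agreement: the fraction of samples on which
# both raters gave the same score. All scores below are hypothetical.

def percent_agreement(rater_a, rater_b):
    matches = sum(1 for a, b in zip(rater_a, rater_b) if a == b)
    return matches / len(rater_a)

# Two hypothetical readers scoring ten essays on a 9-point scale.
reader_1 = [6, 4, 8, 5, 7, 3, 6, 9, 2, 5]
reader_2 = [6, 4, 7, 5, 7, 3, 6, 9, 2, 5]
print(percent_agreement(reader_1, reader_2))  # 0.9

# Two hypothetical instructors making pass/fail calls on a 2-point scale.
instructor_1 = ["pass", "fail", "pass", "pass"]
instructor_2 = ["pass", "fail", "pass", "pass"]
print(percent_agreement(instructor_1, instructor_2))  # 1.0
```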
 
@boulderjohn, I’m not part of your ongoing battle with wetbe4igetinthewater but I just have a quick question regarding just doing the skills and mastery of the skills.
Let's take the mask remove-and-replace exercise. This one always seems to be the toughest for most people. Say you get a person to where they can do this without freaking out and bolting to the surface: they remove the mask, take a few breaths, put the mask back on, and clear it. They do it one time in the pool and once in open water. Is this a pass or a fail?
IMO that is not mastery. Mastery would be pulling your mask off at 100’ and doing the entire ascent (with the aid of a buddy) and doing a safety stop all the way up with no mask. That to me would be mastery.
I've seen people do the exercise, but by no means are they masters at it. In fact they are lucky they did it at all, and I'll bet most of them hope they never have to do it again. Yet they pass because they did it to the satisfaction of all the rules.
When I certified we had to pull our mask off and be led around in the pool in a big circle so we had to breathe off the reg and be comfortable. There was no cheating. Would this be considered mastery of that skill? At least in the pool?
Where do you draw the line between mastery and just barely doing it, in terms of real physical mechanics rather than some philosophical standard based on the instructor's attitude that day or some other vague description?
 
John..... Respectfully... I don't think you understood my original post. My issue was that my personal evaluation and expectations of "mastery" of skills and "reasonable comfort" were at a HIGHER level than those of the other instructors in my shop... not lower... That resulted in some of my students NOT passing my course even though they might have passed with another instructor.
 
When I certified we had to pull our mask off and be led around in the pool in a big circle so we had to breathe off the reg and be comfortable. There was no cheating. Would this be considered mastery of that skill? At least in the pool?
This is a required skill in CW4. Before that, in CW2, you clear a fully flooded mask; you remove, replace, and clear a mask; and you breathe without a mask for at least one minute. Before that, in CW1, you clear a partially flooded mask.
Seems like ample required opportunities to show some comfort and repeatability.....i.e., "mastery." Remember, this is a new OW diver, not a 100-ft certified experienced diver.
 
Let's take the mask remove-and-replace exercise. This one always seems to be the toughest for most people. Say you get a person to where they can do this without freaking out and bolting to the surface: they remove the mask, take a few breaths, put the mask back on, and clear it. They do it one time in the pool and once in open water. Is this a pass or a fail?
IMO that is not mastery. Mastery would be pulling your mask off at 100’ and doing the entire ascent (with the aid of a buddy) and doing a safety stop all the way up with no mask. That to me would be mastery.
This is a good example of what I wrote about concerning benchmark levels. Your problem is that you are using a definition of mastery that is not consistent with the concept as defined in the instructional approach called Mastery Learning, which is how it is defined in scuba instruction. The benchmark level of mastery is what would be expected of a diver at that certification level.

Look at the baseball example I gave above. A 13-year old's mastery performance in Little League tryouts would not be considered mastery by a player hoping to make the major leagues.

In mastery learning, a student moves through an instructional sequence. The student must display mastery at each step, and each step adds complication. If the student displays mastery at each step, then the student should have no problem with the last step. Let's see how this works in the OW class, starting in the pool:
  1. The student clears a partially flooded mask easily and fluidly, with no problems. If not, the student repeats until it is done easily and fluidly.
  2. The student clears a fully flooded mask easily and fluidly, with no problems. If not, the student repeats until it is done easily and fluidly.
  3. The student removes and replaces the mask easily and fluidly, with no problems. If not, the student repeats until it is done easily and fluidly.
  4. The student removes the mask, swims a distance, and replaces the mask. (If with me, this is all done neutrally buoyant and without touching the bottom.) This is done easily and fluidly, with no problems. If not, the student repeats until it is done easily and fluidly.
  5. In the OW dives, the student repeats steps 1 and 2. It must be done easily and fluidly, with no problems. If not, the student repeats until it is done easily and fluidly.
  6. In the OW dives, the student removes and replaces the mask easily and fluidly, with no problems. If not, the student repeats until it is done easily and fluidly.
The student does not have to repeat the skill if it is done smoothly and fluidly. In my experience, a student who gets through steps 1 and 2 easily will not have any trouble after that. Students who have trouble at step 6 must have been allowed to get by with subpar performance earlier.
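Purely as an illustration of that repeat-until-mastery structure (not any agency's standard), here is a minimal Python sketch. The step list paraphrases the sequence above, and demo_attempt is a hypothetical stand-in for the instructor's judgment of "easily and fluidly, with no problems."

```python
import random

# Hypothetical mask-skill progression, paraphrasing the steps above.
MASK_SKILL_STEPS = [
    "clear partially flooded mask (pool)",
    "clear fully flooded mask (pool)",
    "remove, replace, and clear mask (pool)",
    "remove mask, swim a distance, replace mask (pool)",
    "clear partially and fully flooded mask (open water)",
    "remove, replace, and clear mask (open water)",
]

def run_progression(steps, attempt_meets_benchmark):
    """Advance to the next step only after the current one is performed
    to the benchmark; otherwise the student repeats it."""
    for step in steps:
        attempts = 1
        while not attempt_meets_benchmark(step):
            attempts += 1
        print(f"{step}: met the benchmark after {attempts} attempt(s)")

def demo_attempt(step):
    # Hypothetical stand-in for the instructor's call:
    # roughly 8 out of 10 attempts are judged "easy and fluid."
    return random.random() < 0.8

run_progression(MASK_SKILL_STEPS, demo_attempt)
```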

Your example of mask mastery uses a definition common in everyday English, but it is not the definition used in the mastery learning process. It is more than would be expected of a diver at the OW certification level.
 
This is a required skill in CW4. Before that, in CW2, you clear a fully flooded mask; you remove, replace, and clear a mask; and you breathe without a mask for at least one minute. Before that, in CW1, you clear a partially flooded mask.
Seems like ample required opportunities to show some comfort and repeatability.....i.e., "mastery." Remember, this is a new OW diver, not a 100-ft certified experienced diver.
Right.
But I’m trying to understand where the line is between “mastery” and “good enough”.
Obviously to some they are not one and the same.
I'm thinking you have to look at who is determining what mastery is (the agency). To me, in the context of an OW training environment and the time constraints of getting the skills done in a timely manner, mastery and good enough are pretty much the same thing. Either they can do the skill or they can't. Have they mastered it? Probably not to the satisfaction of a few, but did they do it well enough to satisfy the rule as it is written?
To some mastery seems to be a much higher bar.
 
John..... Respectfully... I don't think you understood my original post. My issue was that my personal evaluation and expectations of "mastery" of skills and "reasonable comfort" were at a HIGHER level than those of the other instructors in my shop... not lower... That resulted in some of my students NOT passing my course even though they might have passed with another instructor.
A benchmark level means the skill is performed at the level that would be expected of a diver at that certification level. A student should not pass a skill if it is not good enough at that level, but an instructor should not require a performance beyond that level, either. There will certainly be some differences in that regard, but those differences should be minor. If a shop has students being passed for subpar performances or instructors demanding too much, it is the job of shop management to step in and correct those problems. The term for this is recalibration.
 
Right.
But I’m trying to understand where the line is between “mastery” and “good enough”.
Obviously to some they are not one and the same.
I'm thinking you have to look at who is determining what mastery is (the agency). To me, in the context of an OW training environment and the time constraints of getting the skills done in a timely manner, mastery and good enough are pretty much the same thing. Either they can do the skill or they can't. Have they mastered it? Probably not to the satisfaction of a few, but did they do it well enough to satisfy the rule as it is written?
To some mastery seems to be a much higher bar.
One is training new divers.....not experienced/tech/solo/advanced divers. It is like driver training; you don't expect Formula One skills yet. Yes, there is some ambiguity in what "mastery" means, but the standard is to err on the side of what you expect out of a new diver, not perfection. That is also true for kicks and buoyancy and trim.
 