@tarponchik I see no reason why, on a CCR, you would ever want to calibrate with air. The ppO2 of air at the surface is so incredibly far from where you will ever run the setpoint of a rebreather that having it as the calibration point is completely useless.
Your argument about wearing the cells out faster has little validity as well. The rebreather is going to be run at a ppO2 of .8 to 1.6 for hours upon hours upon hours over the life of that cell, so the ppO2 exposure during calibration is comparable to an extra 5 minutes of diving on the cell's life. If that is something you care about, you probably shouldn't be on a rebreather. Obviously you are consuming more electrolyte, but it is the same as doing an extra 5 minutes on a decompression stop: not something that is going to matter to the actual life of the rebreather cell.
I think this is best thought of as a graph.
Cell controllers are simple devices. We will assume that air has a ppO2 of .2 (i.e. is 20/80) for easier math. To follow @Bobby 's math, we will say that the cell shows 10mV when exposed to air, and 45mV when exposed to pure O2 at atmospheric pressure.
In single point calibration on air, the device assumes a straight line through the origin with a slope of .02 (that's .2 ppO2 divided by 10mV). That is, ALL it can do is take the mV reading, multiply it by .02, and report that as your ppO2.
When you are actually breathing a ppO2 of 1.6, which on this cell produces 65mV, the controller will show 65 x .02 = 1.3.
If you bring your setpoint up to where it shows 1.6, the cell will be at 80mV and you will obviously be breathing significantly more than 1.6 ppO2, somewhere in the 1.7-1.8 range.
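The air-calibration math above can be sketched in a few lines. The cell voltages (10mV in air, 65mV at an actual ppO2 of 1.6) are the assumed example numbers from this post, not measurements:

```python
# Single-point air calibration: the controller fits y = m*x through the origin.
AIR_PPO2 = 0.2          # simplified ppO2 of air at the surface
AIR_MV = 10.0           # assumed cell output in air

m = AIR_PPO2 / AIR_MV   # slope = .02 ppO2 per mV

# The cell is assumed (nonlinearly) to read 65mV at an actual ppO2 of 1.6
displayed = m * 65.0
print(round(displayed, 2))      # controller shows 1.3, under-reading by 0.3

# To make the controller display 1.6, the cell has to be driven to:
mv_at_setpoint = 1.6 / m
print(round(mv_at_setpoint, 1)) # 80.0 mV, well past the true 1.6 point
```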
If however we do an oxygen calibration, the slope is now .022, because the controller takes 45mV and maps it to 1.0.
When this calibration sees 65mV it shows 1.44 ppO2 on the controller, and with a setpoint of 1.6 it would only try to bring the cell up to 72mV, which is slightly less bad than the air calibration.
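Same sketch for the oxygen calibration, using the same assumed 45mV-in-O2 cell:

```python
# Single-point O2 calibration: 45mV is mapped to a ppO2 of 1.0.
O2_MV = 45.0
m = 1.0 / O2_MV             # slope ~ .022 ppO2 per mV

print(round(m * 65.0, 2))   # 1.44 displayed at an actual ppO2 of 1.6
print(round(1.6 / m, 1))    # 72.0 mV is where a 1.6 setpoint would drive the cell
```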
If I have time tomorrow I'll try to plot this so you can see it visually.
TLDR is that single point calibration uses one formula:
y = mx, where y = ppO2, m = some multiplier, and x = mV from the cell
The closer your calibration point is to the ppO2 range you actually run, the more accurate m will be in the area that you are reading.
2 point calibration, like the old Meg's, creates a formula of y = mx + b, which gets a bit closer to reality.
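A sketch of that two-point fit, again with the assumed example cell (10mV at .2, 45mV at 1.0):

```python
# Two-point calibration: fit y = m*x + b through the air point and the O2 point.
x1, y1 = 10.0, 0.2          # air calibration point
x2, y2 = 45.0, 1.0          # pure O2 calibration point

m = (y2 - y1) / (x2 - x1)   # ~ .0229 ppO2 per mV
b = y1 - m * x1             # ~ -.029 (nonzero intercept, unlike single-point)

# At 65mV (the assumed reading at an actual ppO2 of 1.6):
print(round(m * 65.0 + b, 2))  # 1.46, a bit closer to truth than either single-point cal
```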
The only way to be 100% sure what a cell is going to do is to be able to pressurize the head. The best way is to pressurize it to 4.76ata on air (a known gas, so it's the easiest to be sure about) and confirm it shows 1.0, then pressurize it to 7.62ata on air and confirm it shows 1.6.
No rebreather I am aware of has a head calibration kit that can be pressurized to ~100psi to do that, so we have to do a lot of math to attempt to predict the way a cell will hopefully behave when diving.
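Those pressures fall straight out of Dalton's law (ppO2 = FO2 x absolute pressure); the gauge-pressure conversion here is my own arithmetic, not from the post:

```python
# Absolute pressure of air needed to hit a target ppO2: P = target / FO2
FO2_AIR = 0.21

for target in (1.0, 1.6):
    ata = target / FO2_AIR            # absolute pressure in atmospheres
    psig = (ata - 1.0) * 14.696       # gauge pressure, 1 atm = 14.696 psi
    print(f"ppO2 {target}: {ata:.2f} ata (~{psig:.0f} psig)")
# ppO2 1.0 needs 4.76 ata; ppO2 1.6 needs 7.62 ata, roughly 97 psig,
# hence the ~100psi figure above.
```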