I think everyone is aware of the dangers of confusing correlation with causation. But in the case of AI, correlation is causation: correlation is the basis of the model weights that determine the output, as opposed to, say, deductive or inductive reasoning or targeted research, much less hypothesis formulation and testing.
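To make that concrete, here is a minimal sketch (synthetic data, hypothetical feature names) of the point: a fitted model encodes whatever correlations exist in its training set, with no notion of *why* a feature tracks the label.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# "neighborhood" is a proxy variable: it does not cause the outcome, but in
# this synthetic data it is correlated with it (e.g., via an unmodeled confound).
neighborhood = rng.integers(0, 2, n)
outcome = (rng.random(n) < np.where(neighborhood == 1, 0.6, 0.3)).astype(int)

# An irrelevant feature with no correlation to the outcome.
noise = rng.random(n)

X = np.column_stack([neighborhood, noise])
model = LogisticRegression().fit(X, outcome)

# The learned weight on "neighborhood" is large purely because of correlation;
# the model cannot distinguish a causal driver from a mere proxy.
print(dict(zip(["neighborhood", "noise"], model.coef_[0].round(2))))
```

The model will happily lean on the proxy because nothing in the training procedure asks whether the relationship is causal.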
This leads to a real danger when people make decisions based on the output of putative AI software, or worse, automate decisions based on it.
We already have examples of this. See ProPublica's "Machine Bias" investigation: "There's software used across the country to predict future criminals. And it's biased against blacks."