IBM is working hard to show the world it is the best AI creator. In September 2018 the company launched an AI able to scrutinize other AIs. The idea is, for example, to scan environments where some kind of discrimination is in action.
This could be the case of insurers offering worse terms to minorities.
The fully automated SaaS explains decision-making and detects bias in AI models at runtime — that is, as decisions are being made — which means it's capturing "potentially unfair outcomes as they occur", as IBM puts it.
It will also automatically recommend data to add to the model to help mitigate any bias that has been detected.
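Runtime bias detection typically means computing fairness metrics over a model's live decisions. A minimal sketch of one common such metric, the disparate impact ratio, is below; all names and the 0.8 threshold (the "four-fifths rule") are illustrative assumptions, not IBM's published method:

```python
def disparate_impact(outcomes, groups, unprivileged, privileged):
    """Ratio of favorable-outcome rates between two groups.

    outcomes: list of 0/1 decisions (1 = favorable, e.g. loan approved)
    groups:   list of group labels, parallel to outcomes
    """
    def rate(group):
        selected = [o for o, g in zip(outcomes, groups) if g == group]
        return sum(selected) / len(selected)
    return rate(unprivileged) / rate(privileged)

# Hypothetical example: 2/4 favorable outcomes for group "B"
# versus 3/4 for group "A" gives a ratio of about 0.67.
outcomes = [1, 1, 1, 0, 1, 0, 1, 0]
groups   = ["A", "A", "A", "A", "B", "B", "B", "B"]
ratio = disparate_impact(outcomes, groups, unprivileged="B", privileged="A")
print(round(ratio, 2))  # a ratio below 0.8 is often flagged as potential bias
```

A monitoring service could recompute this ratio over a sliding window of decisions and raise an alert whenever it drops below the threshold.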
Nor is IBM the first professional services firm to spot a business opportunity around AI bias. A few months earlier, Accenture released a fairness tool for identifying and fixing unfair AIs.
So, with a major push toward automation across multiple industries, there also looks to be a sizeable scramble to set up and sell services that patch any problems arising from the increasing use of AI.
Control over AI is at once a major concern, an emerging business field, and a legal vacuum. The rules in this area have yet to be written, and for now nothing distinguishes the controller from the controlled.