Algorithmic models have firmly established themselves in every sphere of our lives. They inform our everyday decisions, from advertising to policy and housing. Companies use them to decide who should be hired and who should be let go. Banks use them to determine our credit scores and approve loans. Finally, the criminal justice system uses risk assessment algorithms to predict which individuals are likely to commit a crime. It is clear, even at first glance, that anything with such impact needs to be held accountable, and in this case that means algorithms.
One of the most fundamental problems with these algorithms is the human bias injected into machine-based decision-makers. A telling example comes from the judicial system, which handed down harsher sentences to black defendants. COMPAS is one such system, used by courts to predict the risk that an individual will commit a crime. Its stated goal was to improve public safety and to remove the bias that human judges might inherently carry. Yet the system disproportionately labeled black defendants as high risk and white defendants as low risk. This controversy shows that algorithms are complicated and need to be researched by an open community before their makers can credibly claim that no bias was injected into them.
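One way researchers surfaced this kind of disparity is by comparing error rates across groups. The sketch below, using entirely synthetic data (not the actual COMPAS figures), shows how an audit might compare the false positive rate, the fraction of people who did not reoffend but were still flagged as high risk, between two groups:

```python
# Toy fairness audit: compare false positive rates across two groups.
# All data here is synthetic and for illustration only.

def false_positive_rate(actual, predicted):
    """Fraction of actual non-reoffenders (0) wrongly flagged high risk (1)."""
    false_pos = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 1)
    true_neg = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 0)
    return false_pos / (false_pos + true_neg)

# actual: 1 = reoffended, 0 = did not; predicted: 1 = flagged high risk
group_a_actual    = [0, 0, 0, 0, 1, 1]
group_a_predicted = [1, 1, 0, 0, 1, 0]   # 2 of 4 non-reoffenders flagged
group_b_actual    = [0, 0, 0, 0, 1, 1]
group_b_predicted = [1, 0, 0, 0, 1, 1]   # 1 of 4 non-reoffenders flagged

print(false_positive_rate(group_a_actual, group_a_predicted))  # 0.5
print(false_positive_rate(group_b_actual, group_b_predicted))  # 0.25
```

A model can look accurate overall while still making its mistakes unevenly; checks like this one make that unevenness visible.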
Another example comes from tech: Amazon used AI recruiting software to review resumes and recommend interview candidates. But the model ended up favoring male resumes, because it was trained on historical data dominated by male applicants. It involuntarily downgraded resumes from female candidates. This shows how flawed algorithms and review methods can produce gender bias when details are not given adequate attention.
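The mechanism is easy to reproduce in miniature. The hypothetical sketch below (all names and data invented, and far simpler than any real recruiting model) trains a naive scorer on past hires; terms that never appeared in the skewed history receive zero weight, so an otherwise similar resume is downgraded:

```python
# Hypothetical sketch of training-data skew. A naive model learns term
# weights from past hiring data; the data and scoring scheme are invented.
from collections import Counter

def train_term_weights(past_hired_resumes):
    """Weight each term by how often it appears among past hires."""
    counts = Counter(word for resume in past_hired_resumes
                     for word in resume.lower().split())
    total = sum(counts.values())
    return {word: n / total for word, n in counts.items()}

def score(resume, weights):
    """Score a resume by summing learned term weights (unknown terms = 0)."""
    return sum(weights.get(word, 0.0) for word in resume.lower().split())

# Historical hires skew male, so male-associated terms dominate the weights.
history = ["captain chess club", "captain football team", "chess club"]
weights = train_term_weights(history)

# A resume mentioning "women's chess club" scores lower than an otherwise
# similar one, simply because that term never appears in the training data.
print(score("captain chess club", weights) > score("women's chess club", weights))  # True
```

No one wrote a rule to penalize female candidates; the penalty falls out of the training data itself, which is exactly why auditing the data matters as much as auditing the code.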
Let us look at a more alarming situation. One of Uber's self-driving vehicles recently struck and killed a pedestrian. The vehicle was in automatic mode with a safety driver on board. The main cause of the accident was a flaw in the self-driving software: it decided not to stop or swerve even though the car's sensors had detected a pedestrian in the vicinity. This proves that a flawed algorithm or implementation can be harmful, even life-threatening. It also creates a situation where responsibility cannot be clearly assigned. Is the engineering team responsible, or the company Uber, or the car manufacturer, or the person in the car? The risks of such unaccountability far outweigh the advantages of such algorithms and systems.
Finally, let us look at a formal definition. One such definition states that "Algorithmic accountability can be defined as the controls employed on an algorithmic system to understand its purpose and intentions as well as rectify any risk or harmful outcomes," and that "some of the methods to control are transparency, fairness or using reverse engineering."
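Reverse engineering, in this context, often means probing a black-box system from the outside. The sketch below is a minimal, hypothetical illustration: `opaque_model` stands in for a third-party scorer we cannot inspect, and we perturb one input at a time to estimate how much the output depends on a sensitive attribute:

```python
# Hypothetical black-box probe: vary one input while holding others fixed
# to see how strongly an opaque model's output depends on each feature.

def opaque_model(features):
    # Stand-in for a third-party scoring system we cannot inspect.
    return 0.3 * features["income"] + 0.7 * features["zip_code_risk"]

def sensitivity(model, base, feature, delta):
    """Output change when a single feature is perturbed by delta."""
    perturbed = dict(base, **{feature: base[feature] + delta})
    return model(perturbed) - model(base)

base = {"income": 1.0, "zip_code_risk": 1.0}
print(sensitivity(opaque_model, base, "zip_code_risk", 1.0))  # 0.7
print(sensitivity(opaque_model, base, "income", 1.0))         # approx. 0.3
```

Here the probe reveals that the score leans heavily on a zip-code feature, a common proxy for race or class, even though we never saw the model's internals.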
However, these methods pose certain challenges, and the main goal is to make algorithmic accountability more effective, thereby reducing the risk, harm, and bias caused by such models. This can only be achieved by implementing proper governance structures. Risks such as social discrimination and violations of privacy must be handled through effective policies when deploying these algorithmic systems. Given my position in society as a software engineer, the best and most effective way I can contribute is to make sure that the algorithms and code I design and implement account for all risks, include all stakeholders, do not focus only on company welfare, and, most important of all, remain ethical. It is a pledge that, if followed by all, would make algorithms a better place to rest our trust.