The curious case of Ethics in AI

Published On February 26, 2020
In Market Intelligence, Data Analytics, Blog Archives

Artificial intelligence is steadily making our everyday lives easier. From logistics optimization, fraud detection, and scientific research to art composition and virtual assistants that deliver personalized experiences, AI is transforming our lives for the better. AI systems rely entirely on algorithms, which are designed to help the machine sense and understand its environment, take inputs, process those inputs to produce a solution, assess risk, predict future trends, and more.

When AI technology was still fairly new, systems functioned purely according to programs written by humans. With the advancement of technology, however, AI systems are now capable of ‘learning’ from manually fed data and from readings collected from their surroundings. The machine “learns” on its own, paving the way for the machine learning generation, in which the system does not need to be programmed explicitly.
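The contrast above can be illustrated with a minimal sketch: instead of hard-coding a rule, the program estimates the parameters of a line y = a·x + b from example data using ordinary least squares. The function name and the toy dataset are illustrative choices, not from any particular AI system.

```python
# "Learning from data" in miniature: no rule is programmed explicitly;
# the parameters of y = a*x + b are estimated from examples instead.

def fit_line(points):
    """Return slope a and intercept b minimizing squared error."""
    n = len(points)
    mean_x = sum(x for x, _ in points) / n
    mean_y = sum(y for _, y in points) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in points)
    var = sum((x - mean_x) ** 2 for x, _ in points)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

# Data generated by the rule y = 2x + 1; the model recovers it on its own.
data = [(0, 1), (1, 3), (2, 5), (3, 7)]
a, b = fit_line(data)
print(a, b)  # → 2.0 1.0
```

A real machine learning system fits far more parameters from far more data, but the principle is the same: the behaviour comes from the data, not from an explicit program.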

Ethics in Artificial Intelligence 

As AI technology moves forward, ethical concerns are being raised about its advances. In Tempe, Arizona, USA, a woman riding a bicycle was killed by a self-driving car. Even though there was a human behind the wheel, the AI system was fully in control of the car. Such incidents, the ones involving human and AI interactions, raise a series of ethical questions. Who was responsible for the woman’s death? Was it the human behind the wheel? The manufacturer of the self-driving car? Or perhaps the designers of the AI system? The curious case of who bears moral responsibility remains a mystery. This raises a further concern about the rules that guide AI systems and help them behave morally and ethically. The ambiguity extends to the legal authorities as well, where the rules governing such situations do not help conclude what is right or wrong.

Bias in AI

An example of bias in AI: in 2014, Amazon developed a recruiting tool to identify the right candidates for software engineering roles. As the system learned from the data it was fed, it quickly started discriminating against women. The company had to abandon the system in 2017.
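How a model comes to discriminate can be sketched in a few lines. This is a hypothetical toy, not Amazon's system: a naive screening model that learns hire rates from historical records will faithfully reproduce whatever skew those records contain, even when the groups are equally qualified.

```python
# A toy illustration of bias absorbed from training data: a naive model
# that learns outcome frequencies per group mirrors the historical skew.
from collections import defaultdict

def train(records):
    """Learn the hire rate per group from (group, hired) records."""
    counts = defaultdict(lambda: [0, 0])  # group -> [hires, total]
    for group, hired in records:
        counts[group][0] += hired
        counts[group][1] += 1
    return {g: hires / total for g, (hires, total) in counts.items()}

# Historical data in which equally qualified groups were hired unequally.
history = [("A", 1)] * 80 + [("A", 0)] * 20 + \
          [("B", 1)] * 20 + [("B", 0)] * 80
model = train(history)
print(model)  # → {'A': 0.8, 'B': 0.2}: the skew is learned, not programmed
```

Nothing in the code mentions discrimination; the bias arrives entirely through the data, which is exactly why it is so easy to build a discriminatory system without intending to.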

Ethical issues in AI 

The advent of AI also poses a threat to humanity in terms of employment. People cannot deny the anxiety built up by the thought that AI will someday take their jobs. For example, if self-driving trucks get the go-ahead for commercialization, as promised by Elon Musk, CEO of Tesla, hundreds of truck and lorry drivers will lose their jobs. The same scenario is projected to occur in the corporate sector as well, where AI and machine learning are already being used to handle administrative tasks.

Conclusion 

It should not be forgotten that, in the end, AI systems are created by humans, who can be judgmental and biased. Whatever we design, there are always going to be arrows thrown at it at every step of the way. It is important to create awareness about ethics in AI, as the world we are trying to build is one of ubiquitous technology. It cannot be stressed enough that if AI is used effectively, with the vision of achieving social progress, it can act as a catalyst for constructive change.

Nupur Verma

About the Author

Nupur is a digital marketing professional with a demonstrated history of working in the research and IT industries. She is skilled in social media marketing, brand strategy, email marketing, inbound marketing, SEO, and Google Analytics. She is also passionate about content writing and graphic design.
