When new technologies become widespread, they often raise ethical questions. For example:
Weapons - who should be allowed to own them?
Printing press - what should be allowed to be published?
Drones - where should they be allowed to go?
The answers to these questions usually come only after the technologies have become common enough for issues to actually arise. But as our technologies become more powerful, their potential harms grow larger. I believe we must shift from being reactive to being proactive about new technological dangers.
We need to start identifying the ethical issues and possible repercussions of our technologies before they arrive. And because technology grows exponentially, we will have less and less time to consider the ethical implications of each new development.
We need to have public conversations about all these topics now. These are questions that cannot be answered by science - they are questions about our values. This is the realm of philosophy, not science.
Artificial intelligence in particular raises many ethical questions - here are some I think are important to consider. I include many links for those looking to dig deeper.
I provide only the questions - it's our duty as a society to work out the best answers, and eventually, the best legislation.
1. Biases in Algorithms
Machine learning algorithms learn from whatever training data they are given, inheriting any flawed assumptions baked into that data. In this way, these algorithms can reflect, or even magnify, the biases present in the data.
For example, if an algorithm is trained on data that is racist or sexist, its predictions will reflect that too. Existing algorithms have mislabeled black people as "gorillas" and charged Asian Americans higher prices for SAT tutoring. Even algorithms that avoid obviously problematic variables like "race" find it hard to escape proxies for race, such as zip codes. Algorithms are already being used to determine credit-worthiness and hiring, and they may not pass the disparate impact test, which is traditionally used to identify discriminatory practices.
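To make the test concrete, here is a minimal Python sketch of the "four-fifths rule" version of disparate impact; all of the outcome data below is invented for illustration.

```python
# A minimal sketch of the "four-fifths rule" version of the disparate
# impact test. All outcome data here is invented for illustration.

def selection_rate(outcomes):
    """Fraction of people who received the positive outcome (e.g., hired)."""
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratio(protected_outcomes, reference_outcomes):
    """Ratio of the protected group's selection rate to the reference group's."""
    return selection_rate(protected_outcomes) / selection_rate(reference_outcomes)

# Hypothetical hiring outcomes: 1 = hired, 0 = rejected.
protected_group = [1, 0, 0, 0, 1, 0, 0, 0, 0, 0]  # 20% hired
reference_group = [1, 1, 0, 1, 0, 1, 0, 1, 0, 0]  # 50% hired

ratio = disparate_impact_ratio(protected_group, reference_group)
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.40

# Regulators often treat a ratio below 0.8 (the "four-fifths rule")
# as rough evidence of adverse impact.
if ratio < 0.8:
    print("Ratio falls below the four-fifths guideline.")
```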
How can we make sure algorithms are fair, especially when they are privately owned by corporations and closed to public scrutiny? How can we balance openness and intellectual property?
A software engineer calls out algorithmic bias by Google
2. Transparency of Algorithms
Even more worrying than companies refusing to let their algorithms be publicly scrutinized is the fact that some algorithms are opaque even to their creators. Deep learning is a rapidly growing machine learning technique that makes very good predictions, but cannot really explain why it made any particular one.
For example, some algorithms have been used to fire teachers, without anyone being able to explain why the model indicated they should be fired.
An algorithm that is much more transparent than deep learning
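By contrast, here is a minimal scikit-learn sketch of a transparent model: a shallow decision tree can print the exact rules behind every prediction, something a deep network cannot do. The iris dataset is just an illustrative stand-in for any tabular data.

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
tree = DecisionTreeClassifier(max_depth=2).fit(iris.data, iris.target)

# Unlike a deep neural network, a shallow tree can show the exact
# if/else rules behind every prediction it makes.
print(export_text(tree, feature_names=iris.feature_names))
```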
3. Supremacy of Algorithms
A similar but slightly different concern emerges from the previous two issues. If we start trusting algorithms to make decisions, who will have the final word on important decisions? Will it be humans, or algorithms?
For example, algorithms are already being used to determine prison sentences. Given that we know judges' decisions are affected by their moods, some people argue that judges should be replaced with "robojudges". However, a ProPublica study found that one of these popular sentencing algorithms was strongly biased against black defendants. To produce a "risk score", the algorithm uses inputs about a defendant's acquaintances that would never be admitted as traditional evidence.
Should people be able to appeal because their judge was not human? If both human judges and sentencing algorithms are biased, which should we use? What should be the role of future "robojudges" on the Supreme Court?
A case study from the ProPublica investigation into the COMPAS sentencing algorithm
4. Fake News and Fake Videos
Another ethical concern comes up around the topic of (mis)information. Machine learning is used to determine what content to show to different audiences. Because most social media platforms are built on advertising models, screen time is the typical measure of success, and because humans are more likely to engage with inflammatory content, biased stories spread virally. Relatedly, we are on the verge of being able to use ML tools to create fake videos so realistic that humans can't tell them apart from real footage.
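To see the mechanism, here is a toy sketch of an engagement-driven feed ranker; the posts, scores, and weights are all invented for illustration.

```python
# Toy feed ranking that optimizes predicted engagement; all data invented.
posts = [
    {"title": "Measured policy analysis",  "outrage": 0.1, "accuracy": 0.9},
    {"title": "SHOCKING election rumor!!", "outrage": 0.9, "accuracy": 0.2},
]

def predicted_engagement(post):
    # If users click and share inflammatory content more often, a model
    # trained to maximize screen time learns to weight outrage heavily.
    # Accuracy never enters the objective, so it never affects the score.
    return 0.9 * post["outrage"]

feed = sorted(posts, key=predicted_engagement, reverse=True)
print([p["title"] for p in feed])  # the rumor ranks first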
For example, a recent study showed that fake news spreads faster than real news: false stories were 70% more likely to be retweeted than true ones. Given this, many actors are trying to influence elections and political opinions using fake news. A recent undercover investigation into Cambridge Analytica caught them on tape bragging about using fake news to influence elections.
If we know that videos can be faked, what will be acceptable as evidence in a courtroom? How can we slow the spread of false information, and who will get to decide which news counts as "true"?
Left: real image of Parkland shooting survivor Emma González. Right: Fake image that went viral
5. Lethal Autonomous Weapon Systems
AI researchers say we will be able to create lethal autonomous weapon systems in less than a decade. These could take the form of small drones that, unlike current military drones, can decide to kill without human approval.
For example, a recent video created by AI researchers shows how small autonomous drones, "Slaughterbots", could be used to kill targeted groups of people, i.e., to commit genocide. Almost 4,000 AI/robotics researchers have signed an open letter calling for a ban on offensive autonomous weapons.
On what basis should we ban these types of weapons when individual countries would like to take advantage of them? And if we do ban them, how can we ensure the ban doesn't drive research underground and lead to individuals building such weapons on their own?
Still from "Slaughterbots"
6. Self-driving Cars
Google, Uber, Tesla and many others are joining this rapidly growing field, but many ethical questions remain unanswered.
For example, in March 2018 an Uber self-driving vehicle killed a pedestrian. Even though there was a "safety driver" on board for emergencies, they weren't able to stop the car in time.
As self-driving cars are deployed more widely, who should be liable when accidents happen? Should it be the company that made the car, the engineer who made a mistake in the code, or the operator who should have been watching? If a self-driving car going too fast has to choose between crashing into people or falling off a cliff, what should it do? (This is a literal trolley problem.) Once self-driving cars are safer than the average human driver, in the same proportion that average human drivers are safer than drunk drivers, should we make human driving illegal?
Dashcam footage from Uber's self-driving accident
7. Privacy vs Surveillance
The ubiquitous presence of security cameras and facial recognition algorithms will create new ethical issues around surveillance. Very soon, cameras will be able to find and track people on the streets. Before facial recognition, even omnipresent cameras allowed for some privacy, because it was impossible to have humans watching all the footage all the time. With facial recognition, algorithms can process enormous amounts of footage far faster than human watchers ever could.
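A rough sketch of why this changes the economics of surveillance: matching a face against a watchlist reduces to comparing embedding vectors, something a machine can do at enormous scale. The random vectors below are a hypothetical stand-in for embeddings produced by a trained face-recognition model.

```python
import numpy as np

def cosine_similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def match(query_embedding, watchlist, threshold=0.8):
    """Return the IDs of watchlist faces close to the query face."""
    return [person_id for person_id, emb in watchlist.items()
            if cosine_similarity(query_embedding, emb) >= threshold]

rng = np.random.default_rng(0)
# 1,000 enrolled faces, each represented by a 128-dimensional embedding.
watchlist = {f"person_{i}": rng.normal(size=128) for i in range(1000)}
# A camera frame of person_42, slightly perturbed by lighting and angle.
query = watchlist["person_42"] + rng.normal(scale=0.05, size=128)

print(match(query, watchlist))  # ['person_42']
```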
For example, CCTV cameras are already being used in China to monitor the location of citizens. Some police officers have even received facial-recognition glasses that give them real-time information about anyone they see on the street.
Should there be regulation against the usage of these technologies? Given that social change often begins as challenges to the status quo and civil disobedience, could a panopticon lead to the loss of both liberty and social change?
Surveillance cameras in China using machine vision
Philosophy with a deadline
Actual people are currently suffering from these technologies: being unfairly tracked, fired, jailed, and even killed by biased and inscrutable algorithms.
We need to find appropriate legislation for AI in these fields. However, we can't legislate until society forms an opinion, and we can't form an opinion until we start having these conversations and debates. Let's do it. And let's get into the habit of thinking about ethical implications at the same time we conceive of a new technology.
For another post, some longer-term ethical issues:
Designer babies
Job automation
AGI alignment
AI rights
Consciousness uploading
Let's grab coffee!
I'm interested in talking about the intersection of technology and ethics, data science consulting, full-time opportunities and passion projects.