A Brief History of Ethics
Everyone is allowed to have an opinion, but we live in a time when opinions are often used as political currency. This post is a somber realization: a good-faith effort to engage in constructive dialogue on ethics in artificial intelligence, sustained for the better part of two years, was never really about ethics or artificial intelligence. Because both terms remain open to interpretation, they provide easy cover for any and all ideas and motivations. From the beginning there was a divide between insiders, already rubbing elbows in the right social networks, and the poor shlubs who would be left to do the hard work of turning platitudes into practice.
I wanted to know what the actionable result of all this talk was going to be. I wanted to provide a view of the ethical battlefield that wasn’t limited to Corporations, Governments, and Universities. And yes, I wanted to make sure my area of research wasn’t going to be a casualty, smothered by a mix of regulatory requirements, insurance costs, and special hires designed to keep potential competition out of the marketplace.
This is me, walking away from whatever “ethics” has become.
It’s ironic that the way to uncover bias in large data sets is to throw an algorithm at them, yet there’s an expectation of somehow seeing that bias ahead of time, of cleaning the data prior to training. The ethical issues are clear: how the data is collected; who has access to it (in-house vs. third party); and where it is used (what process is going to be automated). There are established (though largely unenforced) data privacy rules. But a great deal of concern also remains about “direct support” of military and law enforcement agencies (though this should have been the red flag signaling the non-technical direction of the ethics movement).
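To make the irony concrete, here is a minimal sketch of what “throwing an algorithm at it” looks like in practice: a disparate-impact check over a tabular dataset. The field names, the toy records, and the ~0.8 threshold (the common “four-fifths rule” of thumb) are illustrative assumptions, not a standard from any particular toolkit.

```python
from collections import defaultdict

def disparate_impact(rows, group_key, outcome_key):
    """Ratio of favorable-outcome rates between the worst- and
    best-treated groups; values below ~0.8 are a common red flag."""
    favorable = defaultdict(int)
    total = defaultdict(int)
    for row in rows:
        total[row[group_key]] += 1
        favorable[row[group_key]] += row[outcome_key]
    rates = {g: favorable[g] / total[g] for g in total}
    return min(rates.values()) / max(rates.values()), rates

# Hypothetical loan-approval records; "group" and "approved" are
# illustrative field names, not from any real dataset.
records = [
    {"group": "A", "approved": 1},
    {"group": "A", "approved": 1},
    {"group": "A", "approved": 0},
    {"group": "B", "approved": 1},
    {"group": "B", "approved": 0},
    {"group": "B", "approved": 0},
]

ratio, rates = disparate_impact(records, "group", "approved")
print(f"approval rates: {rates}, impact ratio: {ratio:.2f}")
```

Note that this number only exists after the data does, which is exactly why “clean the data before training” is easier said than done.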
From a technical perspective, actually representing the “AI” side of things, the products and services tend to be oversold and to underperform. For all the criticism and well-worn examples, the tools do actually work. The problem is less some advanced AI system that can’t “see” darker skin tones, and more design, (software) engineering, and testing failures that are not unique to machine learning. Along with pushing for actionable ethics, I’ve had to be one of those “outsiders” pushing back on claims about what the technology (as it exists today) can actually do. To borrow an analogy, ethics won’t make the ladder taller.
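As an illustration of the testing-failure point: the fix is usually mundane release discipline, evaluating per subgroup instead of in aggregate. A minimal sketch, assuming you already have predictions, labels, and a subgroup tag for each test example (all names here are hypothetical):

```python
from collections import defaultdict

def accuracy_by_subgroup(predictions, labels, subgroups):
    """Aggregate accuracy can hide a subgroup the model fails on;
    breaking it out per subgroup is ordinary release testing."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for pred, label, group in zip(predictions, labels, subgroups):
        total[group] += 1
        correct[group] += int(pred == label)
    return {g: correct[g] / total[g] for g in total}

# Hypothetical test results: a decent 80% aggregate score masks
# a 50% failure rate on one subgroup.
preds  = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
labels = [1, 1, 0, 1, 0, 1, 0, 1, 1, 1]
groups = ["light"] * 6 + ["dark"] * 4

for group, acc in accuracy_by_subgroup(preds, labels, groups).items():
    print(f"{group}: {acc:.0%}")
```

Nothing in that sketch is specific to machine learning; it’s the same slice-the-metrics discipline any QA process should already have, which is the point.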
While there should have been more effort focused on data, especially since it’s the driving resource for everyone involved, Ethics of AI instead formed its own little cottage industry, tapping into the near-unlimited funding and VC capital surrounding all things AI. All of the “professional” focus went into framing ethics as a major issue, with calls to action that brought all the politicians to the yard. Mission statements led to 100+ page reports drafted for committees and advisory groups. Trade publications wrote about the new movement, and companies were suddenly in need of AI Ethics experts. Thankfully, the people writing the reports were also available to advise unsuspecting companies on how to navigate the rough waters… of what amounted to nothing more than a teacup under a magnifying glass.
Other people have made good money packaging the free flow of ethics dialogue on Twitter.
“Ethics washing” became a simple way to describe the efforts of large companies to shield themselves both from harsh criticism and from ethical guidelines hardening into actual laws that would directly affect them. Cynically, there may also have been a realization that the growing ethics industry was a good side market to get into; either way, it didn’t take long for Big Tech to start creating ethics boards and publishing their own AI ethics principles. There was no real authority for oversight, and no mechanism for implementing any of the ideas.
Perhaps it was a second red flag about the nature of the movement, but just talking about ethics was no longer enough. Those early days of arguing over who makes the rules about who is qualified to talk about Ethics in AI were just a warm-up for the many fights, and failures, to come, because representation became a much larger ethics issue (totally eclipsing technical merits, I would say). It’s now considered unethical both for a smart device to be always listening and for it not to understand every spoken language and dialect in the world. Two years ago a start-up might have designed a “smart device” around a target demographic; the ethics of today says it has to work equally well for everyone, can’t be used to monetize data after purchase, and shouldn’t cost so much that it widens the socioeconomic divide.
“Manels” are a popular trope now, with bonus criticism if everyone on the panel is white. Percentages matter. Diversity matters. The environment matters. Everything matters, it seems, except the actual technology. Ethics somehow became a backseat driver. No, it’s worse than that: the Ethics in AI movement doesn’t want you to own a car at all. It parallels the campaign against LAWS (Lethal Autonomous Weapons Systems), whose staunchest practitioners seem to oppose any form of weapon or usage and only begrudgingly discuss meaningful human control. That likely says more about activism in general, and about what motivates people to take up a cause.
I know it’s all about positive outcomes, and about taking a hammer to anything that smells of the status quo. But when did it flip from promoting awareness to demanding everyone convert to the Church of Woke? Ethics shouldn’t be about telling people how they can or can’t invest their time and energy. Ethics shouldn’t feel like a burden.
Ethically, I’m not ethnic enough to contribute, but am still asked to pay tribute.