The EU has finally proposed its long-awaited Regulation on a European approach for Artificial Intelligence.
This has reawakened a long-running debate between innovation and human rights. On one hand, the tech industry is quick to remark that over-regulation stifles innovation. On the other, consumer advocates insist on innovation that doesn’t encroach on human dignity.
Even though I find valid points on both sides, I’m siding with the European approach this time. I feel the tech industry often forgets that technology should serve humanity, not the other way around, and artificial intelligence is precisely the domain in which we could lose control of our tools.
Whether we focus on machine learning, expert systems, logical reasoning, or neural networks, we have to concede that machines need data from humans to function and develop properly. Sure, in return we get interactive maps to reach obscure destinations or personalized recommendations on what series to binge. But what about the fact that AI systems can (and apparently do) manipulate our actions and discriminate against us? Who’s serving whom?
If one takes the time to read Who Owns the Future (Jaron Lanier), Weapons of Math Destruction (Cathy O’Neil), Re-Engineering Humanity (Brett Frischmann and Evan Selinger), or The Age of Surveillance Capitalism (Shoshana Zuboff), or at least watch Netflix’s documentary Coded Bias, one gets a clear sense of the crossroads we are at.
This is an existential problem. I don’t see how humanity can thrive (or even survive) in an era of unchecked wielding of algorithmic power, and I believe the tech industry will back me up on this, because I know it understands that “don’t be evil” means putting humans first.
I am not sure the EU’s proposed regulation is the optimal solution, but I do think it’s a step in the right direction: it recognizes that AI companies have a responsibility toward their consumers, and it establishes certain principles and requirements.
In my opinion, any regulation focused on consumer and citizen protection in the context of AI systems should include, at a minimum, the following.
Data Governance. AI companies must be stewards of the data they collect, not only because of its high value but because that data generally relates to people. Respecting customers’ control over their personal data, as well as their rights of access, rectification, opposition, and cancellation, is paramount.
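As a rough illustration of what honoring those four rights could look like in code, here is a minimal sketch; the class, store, and method names are my own hypotheticals, not taken from the regulation or any real library:

```python
class PersonalDataStore:
    """Toy in-memory store illustrating the classic data-subject rights:
    access, rectification, opposition, and cancellation.
    All names here are hypothetical."""

    def __init__(self):
        self._records: dict[str, dict] = {}  # subject_id -> personal data
        self._opted_out: set[str] = set()    # subjects who oppose processing

    def access(self, subject_id: str) -> dict:
        """Right of access: show a person everything held about them."""
        return dict(self._records.get(subject_id, {}))

    def rectify(self, subject_id: str, field: str, value) -> None:
        """Right of rectification: correct inaccurate data on request."""
        self._records.setdefault(subject_id, {})[field] = value

    def oppose(self, subject_id: str) -> None:
        """Right of opposition: stop processing this person's data."""
        self._opted_out.add(subject_id)

    def cancel(self, subject_id: str) -> None:
        """Right of cancellation: erase the person's data entirely."""
        self._records.pop(subject_id, None)
        self._opted_out.discard(subject_id)

    def usable_for_processing(self, subject_id: str) -> bool:
        """Data may only feed the AI system if the person hasn't opted out."""
        return subject_id in self._records and subject_id not in self._opted_out
```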
Record keeping (Logs). Automatic logging should be built into any AI system by design, while restricting the operator’s ability to tamper with those records. Accurate record-keeping is the bedrock of accountability and liability limitation because it makes systems traceable. Since a lot can go wrong with AI, we must ensure we have the data to figure out what happened, adapt, and move forward.
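One well-known way to make logs tamper-evident is to chain entries by hash, so that altering any past record breaks the chain. A minimal sketch, assuming a simple JSON record schema of my own invention:

```python
import hashlib
import json
import time

class TamperEvidentLog:
    """Append-only log where each entry embeds the hash of the previous
    one, so any later modification is detectable on verification."""

    def __init__(self):
        self._entries = []
        self._last_hash = "0" * 64  # genesis value for the first entry

    def append(self, event: dict) -> None:
        record = {
            "timestamp": time.time(),
            "event": event,
            "prev_hash": self._last_hash,
        }
        serialized = json.dumps(record, sort_keys=True)
        record["hash"] = hashlib.sha256(serialized.encode()).hexdigest()
        self._entries.append(record)
        self._last_hash = record["hash"]

    def verify(self) -> bool:
        """Recompute every hash; returns False if any entry was altered."""
        prev = "0" * 64
        for record in self._entries:
            if record["prev_hash"] != prev:
                return False
            body = {k: v for k, v in record.items() if k != "hash"}
            serialized = json.dumps(body, sort_keys=True)
            if hashlib.sha256(serialized.encode()).hexdigest() != record["hash"]:
                return False
            prev = record["hash"]
        return True
```

Of course, an operator who controls the machine can still rewrite the whole chain; in practice the head hash would be periodically deposited with an external party or auditor.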
Transparency. The operation of an AI system should be transparent enough for users to use it properly. Of course, we don’t want companies divulging their trade secrets, but some minimum information should be shared; after all, doesn’t information want to be free?
Information to users. Users should be given some minimum information, such as the intended purpose of the system, its level of accuracy (if the system has been tested), and who actually developed and operates it. Let there be light!
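A lightweight way to package that minimum information is a machine-readable disclosure record, in the spirit of a “model card”. The fields and example values below are purely illustrative; the proposal does not prescribe any schema:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class ModelDisclosure:
    """Minimum information shown to users before they rely on the system.
    Field names are illustrative, not mandated by the EU proposal."""
    intended_purpose: str
    developer: str
    operator: str
    tested: bool
    accuracy: Optional[float]  # None if the system has not been tested

card = ModelDisclosure(
    intended_purpose="Rank loan applications for human review",
    developer="Example AI Labs",  # hypothetical names
    operator="Example Bank",
    tested=True,
    accuracy=0.87,
)
```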
Human oversight. “The system says so” should never be an excuse for anything. Human oversight must ensure that the system’s results do not infringe human rights or civil liberties, and that the input is correctly interpreted so the output is more just. We must not let go of the override codes.
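In practice, keeping the override codes can be as simple as routing low-confidence or consequential decisions to a person whose verdict always wins. A sketch, assuming hypothetical model and reviewer callables of my own design:

```python
from typing import Callable, Tuple

def decide(
    case: dict,
    model: Callable[[dict], Tuple[str, float]],       # returns (label, confidence)
    human_review: Callable[[dict, str, float], str],  # reviewer's verdict is final
    confidence_threshold: float = 0.9,
) -> str:
    """Human-in-the-loop gate: the system never gets the last word on a
    case it is unsure about; the human reviewer can always override it."""
    label, confidence = model(case)
    if confidence < confidence_threshold:
        return human_review(case, label, confidence)
    return label
```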
I firmly believe that by embracing these principles, we are well on our way to making a dent in the universe. Let’s keep innovating to build a better world for humanity.
Photo by Jens Johnsson from StockSnap.