More transparency for an algorithmic Administration
Viogén, Saler, Veripol, Bosco, RisCanvi, Hermes, Send@… these are just some of the names of the algorithms and artificial intelligence (AI) systems that public administrations use to manage their procedures. Without delving into more complex technological questions, the administration is currently moving towards increasingly automated operation in order to cope more efficiently with the large volume of data it must process. Public authorities are gradually abandoning the analog world to immerse themselves in the so-called "fourth industrial revolution", which has brought us the Internet of Things (IoT) and machine-learning systems that, imitating neural networks, can become real black boxes about whose inner workings little or nothing is known.
It is hard not to find it disturbing that an artificial intelligence system was the third most voted candidate in the 2018 mayoral election of a Tokyo district, or that the European Parliament has considered the need to regulate a statute of "electronic personhood". With such examples, it is easier to understand why the Charter of Digital Rights preferred to speak of people's rights "before" artificial intelligence rather than of a right to artificial intelligence. In this case, the preposition undoubtedly makes sense.
Shaken and, I would almost say, alarmed by this vertiginous evolution, there is a growing concern to incorporate a more humanistic and ethical vision of technology into this great digital transformation. Something I wrote long ago in a document on challenges for the public administration of the future makes even more sense now: "Yes, we need more technology and innovation in the administration, but let us not forget to imbue it not only with artificial intelligence and algorithms, but also with ethics, which is to them as the soul is to the body."
For a couple of years we have witnessed in Spain an overproduction of soft-law documents that speak of the need to increase transparency in the use of these technologies: the Spanish Data Protection Agency's guide on adapting AI-based processing to the General Data Protection Regulation; Digital Spain 2025 (now 2026, after its review last July); the National Artificial Intelligence Strategy (ENIA); the Charter of Digital Rights; or the practical guide and tool on the business obligation to provide information on the use of algorithms in the workplace, the work of the Ministry of Labor and Social Economy. We even have some rules of positive law on the matter, such as the so-called "rider law" or Law 15/2022, of July 12, the comprehensive law on equal treatment and non-discrimination, good examples of the regulatory tsunami in this field that, to a large extent, comes from the European Union and that will culminate with the upcoming approval of the Regulation laying down harmonized rules on artificial intelligence (the Artificial Intelligence Act).
The objective is none other than to guarantee that public decisions made with these technological resources remain motivated and explainable, as has been the case until now. Nothing new under the sun: the regulations already require that the grounds for the Administration's resolutions be made explicit. If the reasons for a decision now take the form of algorithmic formulas, these should be equally known and understood if we want to guarantee the fundamental right of any person to defend themselves against possible arbitrariness.
But can the discretionary space that the rules sometimes grant to those who exercise certain public powers be translated into algorithms? How can a machine manage the subjectivity inherent in the exercise of certain powers, resolve what is fair and what is not, or give precise content to indeterminate legal concepts when they must be applied to a specific case? At least for now, this intangible margin of appreciation belongs to people and remains inalienable. This reminds me of a recent advertisement for a well-known soft-drink brand depicting a job interview between a human candidate and an android. After the usual "intelligent" interrogation to test the applicant's abilities, the system concludes by saying that "you", referring to the human race, "cannot contribute anything that we do not already do", to which the young man responds that they will never be able to feel desire, nor will they be able to reaffirm themselves, fall in love again and be reborn. A poetic and suggestive way of expressing that there is another, essentially emotional intelligence that can never be written in a programming language.
Access to algorithms is essential for various reasons, but one of the most relevant is to be able to know whether a decision is biased because the data (inputs) the algorithm uses, or on which it is trained to create patterns, do not represent all of reality or are based on stereotypes and prejudices (the COMPAS case). In other cases, the bias may lie in the design of the algorithm itself, since its authors also have their own scale of values, experiences and opinions, which can undoubtedly influence the results of its automated operation (the BOSCO, DELIVEROO and SyRI cases).
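The mechanism described above can be made concrete with a deliberately minimal sketch (the groups, figures and "model" below are invented for illustration; they do not come from any of the cases cited): a system that merely learns the most frequent historical outcome per group will faithfully reproduce whatever bias the historical decisions contained.

```python
# Minimal, hypothetical illustration of training-data bias: a "model" that
# learns the majority outcome per group from past decisions. If those past
# decisions were biased, the learned rule automates that bias.
from collections import defaultdict

def train(history):
    """Learn, for each group, whether approval was the majority outcome."""
    counts = defaultdict(lambda: [0, 0])  # group -> [denied, approved]
    for group, approved in history:
        counts[group][approved] += 1
    return {g: c[1] >= c[0] for g, c in counts.items()}

# Invented biased history: group "B" was mostly denied in the past.
history = ([("A", 1)] * 80 + [("A", 0)] * 20 +
           [("B", 1)] * 20 + [("B", 0)] * 80)

model = train(history)
print(model)  # {'A': True, 'B': False} -- the historical bias, now automated
```

Inspecting the code (and the data it was trained on) is precisely what makes this kind of bias detectable, which is the point of the transparency demands discussed here.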
For the moment, the inflation of plans, strategies and charters referred to above is proving ineffective, given that resolutions denying access to the source code or other documentation of the algorithms used by administrations remain frequent. The usual story: from word (commitment) to deed (response), there is still a long way to go.
Transparency commissioners and councils are the ones defending the opening of this information most tenaciously, drawing on comparative experiences such as that of the French Commission d'accès aux documents administratifs. Our Council for Transparency and Good Governance has resolved on several occasions that "as long as other mechanisms are not established to achieve the stated goals with equivalent guarantees, such as, for example, independent audits or supervisory bodies, the only effective recourse to such effects is access to the algorithm itself, to its code, for its control both by those who may feel harmed by its results and by the general public for the sake of the observance of ethical principles and justice". The Catalan Commission for the Guarantee of the Right of Access to Public Information has ruled along similar lines.
This defense of algorithmic transparency is entirely compatible with accepting limits, something inherent to any right. Much has been said about intellectual and industrial property or trade secrets, limits that are hard to justify when dealing with products generated in-house by the administration itself or over which it holds the exploitation rights. Invoking public security is more justified, but its application is restricted to a handful of cases where the risks are clearly identifiable.
With all these antecedents, and building on some of the actions planned in the ENIA, it is necessary to move more decisively towards specifying the legal regime for the Administration's use of algorithms and AI systems. And we should start by putting on the table one of the most reiterated demands, on which the administrations and even the aforementioned strategy remain absolutely silent: the need to publish the list of algorithms used in public management, the processes in which they operate and their essential rules. In general, we learn of the existence of "public algorithms" by surprise, through the media, given how difficult it is to locate the contracts awarded for this type of development on public sector procurement platforms. And that is the best case: when the technology is developed in-house, we may never learn of it at all.
The recent Law on Transparency and Good Governance of the Valencian Community, which will enter into force in May 2023, is a good step in this direction, since it requires publication of the list of algorithmic or artificial intelligence systems that have an impact on administrative procedures or the provision of public services. This publication must include an understandable description of the design and operation of these instruments, the level of risk involved and the point of contact to which queries can be addressed in each case, in accordance with the principles of transparency and explainability.
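To give a sense of what such a published register entry might contain, here is a purely hypothetical sketch covering the three elements the Valencian law requires; every field name and value below is an assumption for illustration, not the law's literal wording or a real system.

```python
# Hypothetical entry in a public register of algorithmic systems, covering
# the elements the Valencian law requires: an understandable description,
# the level of risk involved, and a point of contact. All names and values
# are invented for illustration.
registry_entry = {
    "name": "ExampleBot",                    # invented system name
    "purpose": "Triage of citizen enquiries before human review",
    "description": "Rule-based classifier that routes requests to units",
    "risk_level": "limited",                 # e.g. minimal / limited / high
    "contact": "transparencia@example.org",  # hypothetical contact point
}
print(registry_entry["risk_level"])  # -> limited
```

Structured entries like this are what would let citizens discover which systems touch their procedures without having to trawl procurement platforms.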
Internationally, Amsterdam and Helsinki are among the cities that have long been publishing a register of the algorithms, many of them chatbots, that they use in some of the services they provide. You can learn more about these and other examples thanks to the Global Observatory of Urban Artificial Intelligence.
It's time to unbutton our cuffs, roll up our sleeves and, ties or no ties, get down to business.
Joaquín Meseguer Yebra is a corresponding academician of the Royal Academy of Jurisprudence and Legislation, coordinates the FEMP working group on transparency and access to information, and is executive secretary of the Spanish chapter of the International Open Government Academic Network. He has been Deputy Director General for Transparency at Madrid City Council and Director General for Transparency and Good Governance in the Junta de Castilla y León.
You can follow EL PAÍS TECHNOLOGY on Facebook and Twitter or sign up here to receive our weekly newsletter.