Interview with Chiara Giovannini & Liz Coll, ANEC
ANEC is the European consumer voice in standardisation. As one of only three EU-appointed not-for-profit organisations able to intervene directly in standardisation processes, ANEC represents the consumer interest of all Europeans in the creation and application of technical standards, market surveillance and enforcement, accreditation, and conformity assessment schemes. ANEC works with the European standardisation bodies CEN-CENELEC and ETSI to directly shape the standards that will underpin key European laws and public policies for consumers.
We spoke to Chiara Giovannini and Liz Coll to find out more about ANEC's work and the role that AI plays in standardisation processes.
What actions do you take to make sure that the design of consumer products and services directly reflects consumer needs in relation to security, privacy and safety?
Chiara, Liz: For a standard to be effective, its provisions need to be clear, unambiguous and replicable. This is particularly important in the case of AI systems: because security breaches can take multiple forms, measurable and unambiguous requirements are needed to allow the conformity of AI systems to be assessed objectively.
For example, with respect to consumer IoT products, technical requirements must be fulfilled to ensure a security baseline: there should be no universal default passwords, and consumers should only be allowed to set secure passwords. Security updates should be provided over the lifetime of the product to maintain state-of-the-art security, and information about the product's expected lifetime should be given to the consumer before purchase.
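To make concrete what a measurable requirement of this kind might look like, here is a minimal, hypothetical sketch in Python of a password check a device vendor could run at setup time. The blocklist of universal defaults and the strength rules are illustrative assumptions, not taken from any specific standard.

```python
# Hypothetical illustration of a baseline IoT security requirement:
# reject universal default passwords and enforce a minimum strength.
# The blocklist and rules below are illustrative assumptions only.

COMMON_DEFAULTS = {"admin", "password", "12345678", "root", "guest"}

def is_acceptable_password(password: str) -> bool:
    """Return True if the password passes the baseline checks."""
    if password.lower() in COMMON_DEFAULTS:
        return False  # universal default passwords are prohibited
    if len(password) < 12:
        return False  # too short under this illustrative rule
    has_letter = any(c.isalpha() for c in password)
    has_digit = any(c.isdigit() for c in password)
    return has_letter and has_digit

# Example: a setup flow would refuse to proceed until the check passes.
assert not is_acceptable_password("admin")
assert is_acceptable_password("blue-Kettle-42x")
```

The point of expressing the requirement this way is that conformity can be tested objectively: an assessor can run the same check and get the same answer, which is what the call for measurable requirements above is about.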
We expect European standards to specifically address European values and fundamental rights and not just adopt International Standards which might not reflect our values and principles. As consumers, we want to be sure that consumer protection principles will be reflected by design in the future European standards on AI.
We also call for greater inclusiveness in the standardisation process, so that consumers of all ages and abilities can participate effectively in the development of standards. Unfortunately, this is sometimes not the case. It does not bode well for the future development of standards, which should be based on the consensus of all concerned stakeholders.
In your view, what are the main risks and opportunities of AI and other data-heavy technologies in standardisation processes?
Chiara, Liz: The expert we recruited with the support of the AI Fund has represented ANEC during the establishment and initial phases of the CEN-CENELEC European standards committee charged with developing the standards that will be key to the implementation and delivery of the proposed EU AI Act.
The membership is mostly made up of industry representatives who head national delegations on behalf of national standards bodies. There are well-recognised challenges with the dynamics and power asymmetries in such an environment, with civil society representatives having far fewer resources, less capacity and less technical knowledge to participate in the discussions. Under the European standardisation system, ANEC is formally recognised as a societal stakeholder, which means that we can participate in committees and be involved in discussions and consensus building; however, we are not able to vote on items. JTC 21, the CEN-CENELEC joint technical committee on AI, is made up of over 100 participants. A large proportion of these attend the plenaries, with smaller numbers attending the sub-groups and working groups.
There are several participants who represent national delegations yet are employed by large digital and technology firms such as IBM, Microsoft, Siemens, Google and Huawei. Most of the representatives are also members of ISO/IEC JTC 1/SC 42, the international standards committee for AI. The significance of this is that many standards intended for application within Europe are being drafted or revised at the international level, where stakeholder participation is very limited. There is also strong participation by countries that do not share European values, particularly in relation to the standardisation of emerging technologies, which we consider an important risk.
There are difficulties in transposing fundamental rights and EU values and principles into technical standards, from both a substantive and a process perspective. Harmonised standards should not be used to define or apply fundamental rights or legal or ethical principles. For example, we think it is unclear how technical standards will help service providers determine what types of biases are prohibited and how they should be mitigated (Art. 10(2)(f) of the proposed AI Act).
Standards which implement technical aspects should be developed with strong consumer and wider societal participation at national and regional level and incorporate consumer needs.
Would you share one surprising thing you learned so far?
Chiara, Liz: We have the impression that there is a degree of hype and inaccurate marketing around AI applications, not only in the media but also in policy groups and even in standardisation discussions.
Many things are labelled ‘AI’ when they only use small elements of the technology, which makes it difficult for consumers and consumer advocates to navigate.
However, while AI is different from other sets of technologies, many of the same issues for consumers, such as data, security and competition, are present.
So we need to question some of the claims and capabilities to date, while remaining vigilant about the potential for negative consequences in the future.