Businesses with an online presence in China may be surprised to learn that the algorithmic features of their websites and online services may not comply with new rules. The Internet Information Service Algorithmic Recommendation Management Provisions (the "provisions"), adopted by the Cyberspace Administration of China in January 2022, entered into force on 1 March.
A summary of the provisions
The scope of “algorithmic recommendation technology” subject to the provisions is quite broad. Within its scope are website features and online services that use algorithms to:
- create content (“generative or synthetic type”);
- make personalized recommendations;
- classify or select information;
- search or filter content;
- dispatch work or make scheduling decisions (“dispatching and decision-making type”); or
- otherwise provide information to users.
These website features and online services must comply with a variety of new requirements, some of which are rather nebulous. Penalties for uncorrected or serious violations can include fines between 10,000 and 100,000 yuan (approximately $1,580 to $15,800).
Provisions relating to public morals and order
The provisions impose various obligations on providers of algorithmic recommendation services to maintain public morals and order. Companies should use algorithms “for the sake of good” and not in activities that are detrimental to “national security and the social public interest” or “disrupting the economic and social order”. They must regularly ensure that their algorithm models do not violate “ethics and morals, for example by driving users into addiction or excessive consumption”. To these ends, the provisions require companies to “vigorously present” information consistent with “prevailing value guidance.”
Companies must also delete and terminate the transmission of illegal information when discovered and report such illegal information to the authorities. They also cannot use illegal information as keywords for user interests or as tags to recommend content.
In addition, “algorithmic recommendation service providers with public opinion properties or social mobilization capabilities” must register through the Internet Information Service Algorithm Ranking System and conduct security assessments.
Good cyber management practices are now mandatory. Among other practices, companies must establish systems to test their algorithms (including ongoing testing), review the ethical implications of their technology, protect cybersecurity and respond to incidents, secure data and personal information, and combat fraud. Companies must also have “mechanisms for manual intervention and autonomous user choice”; in other words, there must be ways for humans to override algorithmic results. Providers must explain how their algorithms work and are encouraged to do so in a way that is transparent and understandable to users of their websites and services.
Competition and consumer protection
Anti-competitive uses of algorithms are prohibited. Additionally, the provisions include a number of protections for consumers (and workers):
Manipulation and deception: Algorithmically generated or synthetic content must be identified as such. Fake news is prohibited. So are many other types of deception and manipulation: algorithmic recommendation service providers may not use algorithms to falsely register users, illegally trade accounts, or manipulate user accounts, or to fake likes, comments, shares, and the like. Nor may they use algorithms to block information, over-recommend, manipulate topic lists or search result rankings, control trending searches or featured selections, or otherwise intervene in the presentation of information; or to influence online public opinion or evade oversight and regulation.
Targeted recommendations: Companies must give users the right to opt out of targeted recommendations based on their personal characteristics and choose or delete the “user tags” associated with them in the company’s databases.
Minors and the elderly: The provisions impose particular obligations for the protection of minors and the elderly. When serving minors, providers must protect them from harm online, including incitement to dangerous or immoral actions. Especially for social media and gaming platforms, algorithms should not foster children’s online addiction. Providers of online information serving children also have a duty to develop recommendation models that highlight information beneficial to their physical and mental health.
Similarly, algorithmic services provided to older people should be tailored to their needs and vulnerabilities, including cybersecurity and fraud prevention.
On-demand workers: Algorithmic services provided to on-demand workers must protect workers’ interests, such as their rights to compensation and certain benefits.
Price, terms and conditions: Suppliers of consumer goods and services may not use algorithms to engage in unreasonable price discrimination or other unreasonable “differential treatment in trading conditions” based on consumers’ “preferences, transaction habits and other such characteristics”.
Customer service: The provisions require “convenient and efficient” customer service for complaints about the results of the algorithms.
The larger context
The provisions are part of a growing wave of AI regulation in China and around the world. The Cyberspace Administration of China is finalizing another set of rules, this one governing algorithmically created content, including technologies such as virtual reality, text generation, text-to-speech, and “deep fakes.” The European Parliament and the Council of the European Union are examining the Artificial Intelligence Act proposed last year by the European Commission. The United States Federal Trade Commission is considering regulation “to curb lax security practices, limit privacy abuses, and ensure that algorithmic decision-making does not result in unlawful discrimination.” The White House Office of Science and Technology Policy has started work on an “AI bill of rights.” These efforts are in addition to the restrictions on automated decision-making included in the GDPR and various US privacy laws.
It is too early to tell whether these various efforts will ultimately produce regulatory harmony or dissonance across jurisdictions. In the meantime, however, businesses can take practical steps to stay ahead.
Peter J. Schildkraut is a partner in the Washington, DC office of Arnold & Porter, where he co-leads the technology, media and telecommunications industry team and provides strategic advice on regulatory matters. Darrel Pae is a partner in the Washington, DC office, where he provides regulatory, transactional and litigation advice to telecommunications, media and internet clients. Hazel Zhang is a partner in the firm’s Shanghai office, where she advises pharmaceutical, biotechnology and medical device companies on a wide range of regulatory, transactional and litigation matters.