
What should Australia do about… the PRC’s artificial intelligence ambitions?

By Max Parasol

Australia must pay close attention to developments in artificial intelligence (AI) and other emerging technologies in the People’s Republic of China (PRC). PRC research in these areas is now world class. There is concern across the globe about how the PRC will use AI, including in its social credit system and surveillance efforts in the country and overseas. The PRC now develops and publishes its own standards for AI safety and sets legal norms in areas such as cyber security. However, nuance is required to analyse what the PRC is actually accomplishing with AI.

The PRC is already a global AI power. Global business will continue to be involved with the PRC and AI. That will not change. Australia must engage in global discussions on AI norms and ethics, including with the PRC, even though Australia is a relatively small technology player. It should join conversations around algorithmic transparency and actively participate in forums on technology standards. In Australia, each PRC-linked AI project should be assessed on a case-by-case basis.

What is AI?

AI refers to “systems that are capable of performing tasks commonly thought to require intelligence.” Machine Learning “refers to the development of digital systems that improve their performance on a given task over time through experience.”1

Deep Learning is a sub-field of Machine Learning that involves algorithms, known as artificial neural networks, which are inspired by the structure of the brain and make complex connections between data sets. Along with increased computing power, Deep Learning is the major recent breakthrough in AI research. Key applications include accurate voice and facial recognition, which can be used for better medical diagnostics. AI image recognition can also be applied by security agencies, which has been a cause for concern among international observers.
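To make the idea concrete, the following is a minimal sketch of an artificial neural network, written in Python with the open-source PyTorch library. The library, the toy network and the random data are assumptions made purely for illustration and are not drawn from any project discussed in this brief; the point is only to show how a network of learned connections is adjusted, step by step, to get better at a task such as image recognition.

    # A minimal, illustrative neural network (assumes PyTorch is installed).
    import torch
    import torch.nn as nn

    model = nn.Sequential(        # layers of learned connections
        nn.Flatten(),             # a 28x28 image becomes 784 numbers
        nn.Linear(784, 128),      # first layer of connections
        nn.ReLU(),                # non-linearity between layers
        nn.Linear(128, 10),       # output: a score for each of 10 classes
    )

    loss_fn = nn.CrossEntropyLoss()
    optimiser = torch.optim.SGD(model.parameters(), lr=0.01)

    # One training step on a random batch of fake "images"; in practice
    # the loop runs over millions of labelled examples.
    images = torch.randn(32, 1, 28, 28)
    labels = torch.randint(0, 10, (32,))
    optimiser.zero_grad()
    loss = loss_fn(model(images), labels)   # how wrong is the model?
    loss.backward()                         # compute corrections
    optimiser.step()                        # adjust the connections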

As an “omni-purpose” technology with numerous applications, AI can solve problems associated with the PRC’s modernisation process. At the same time, AI is likely to have unintended consequences wherever it is used. This is not a dilemma exclusive to the PRC.

Why do the PRC’s AI ambitions matter to Australia?

The PRC’s AI ambitions matter to Australia for two reasons.

First, by developing AI the PRC has the potential to solve numerous problems globally. Research and development and commercial opportunities exist across many sectors, from traffic management and self-driving vehicles to pollution control and efficient farming methods.

Second, in order to collaborate, Australia must be able to distinguish the PRC’s public sector from its private sector.

The PRC’s AI achievements have mostly been made by private sector firms. Many of these companies have global ambitions.2 Australia’s concerns about the PRC’s AI ambitions are directly linked to Beijing’s control of the private sector. This means that companies in the PRC, regardless of ownership, are not truly independent of the PRC state. Given the opacity of PRC decision-making, how can we decipher the difference between bottom-up innovation and top-down control?

The first step is to acknowledge that AI provides solutions to the PRC’s development challenges: from medical diagnostics in a struggling healthcare system to avoidance of peer-to-peer shadow banking debacles.

The second step is to better understand which companies are driving these developments: names such as SenseTime, YITU Technology and Infervision. SenseTime and YITU Technology have been particularly adept at building world-class AI image and natural language applications over the past five years.

We must recognise the problems that these companies are trying to solve, but also identify which companies thrive on unsavoury government procurement contracts, and which of these firms have US venture capital and global links. Finally, we must ask what this means for those invested in their commercial operations.

Procurement contracts are hard to untangle. For example, SenseTime has multiple American backers and until April 2019 was involved in a security joint venture in Xinjiang. SenseTime sold out of the venture following public pressure and states it now has very little business in Xinjiang.

This has practical implications in Australia. Universities that want to attract AI R&D funding from the PRC need to be aware of where the resulting intellectual property will be used. The University of Technology Sydney signed a multi-million-dollar research deal, which included AI, with the state-owned China Electronics Technology Group Corporation (CETC). This collaboration could provide Australia and the world with problem-solving technology, but CETC is also a major arms manufacturer for the People’s Liberation Army.

Another issue is that open-source code and data sets are included in published academic works. This allows other researchers to reproduce and verify the results. Anyone with a large data set and enough processing power can use open-source tools to implement proven AI use cases, as the sketch below illustrates.
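The sketch shows, in Python, how an openly published image-recognition network and its pre-trained weights can be reused on a new data set. The PyTorch and torchvision libraries and the "my_images/" folder are assumptions made for illustration only; they are not drawn from any project discussed in this brief.

    # Illustrative only: reusing openly published models and code.
    # Assumes PyTorch and torchvision are installed; "my_images/" is a
    # hypothetical folder of labelled images supplied by the data holder.
    import torch
    import torch.nn as nn
    from torchvision import datasets, models, transforms

    # A widely used image-recognition network with openly published weights.
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

    # Replace the final layer so the network predicts two new classes.
    model.fc = nn.Linear(model.fc.in_features, 2)

    # Typical image preprocessing steps for this family of networks.
    preprocess = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
    ])

    # Anyone with a large labelled data set can plug it in here.
    data = datasets.ImageFolder("my_images/", transform=preprocess)
    loader = torch.utils.data.DataLoader(data, batch_size=32, shuffle=True)

    optimiser = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    for images, labels in loader:           # one pass over the data set
        optimiser.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimiser.step()

The scarce ingredient is the data set, not the code: the network design and weights above are freely downloadable.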

There are three factors that Australia needs to consider in its response to AI challenges: the uncertainty of how AI will work in the future; how AI operates today in globalised ecosystems; and how the PRC’s datasets will change the world.

The unknowns of AI

AI is an emerging technology. Its future direction is uncertain. International governance discussions are just beginning and will become increasingly important. The PRC is at the forefront in setting nascent AI safety and ethics standards, and cyber security laws and regulations.3 This is a stated aim of the PRC’s Cyber or Network Sovereignty policies. This means that the PRC is in the vanguard on many crucial law and technology issues, for better or for worse.

The PRC has also created a domestic cyber regime that may hamper innovation by making necessary international collaborations problematic. Access to major overseas markets, research facilities and, most importantly, talent will be hindered for some companies closely associated with the PRC’s security apparatus.

The international policy community needs to engage with the PRC on AI, because business will continue to do so. The policy community needs to create guidelines for business that provide clarity about the types of firms and projects that present an acceptable risk for the relative reward. Business needs to do its own due diligence. In AI ethics, the PRC’s private companies now have an outstanding opportunity to “assure others that its private sector can be truly private.”4

Globalised ecosystems

A sign, “tech is global”, hangs on the wall at People Squared, a co-working space in Beijing. Even in the current climate, it remains true. While there is constant speculation about an AI race or a new tech-driven cold war, entrepreneurial ecosystems are globalised networks that rely on each other. PRC AI companies rely on US-designed Nvidia processing chips, on US investment, and increasingly on US-based research labs.

Enterprise or business-to-business AI requires the creation of an ecosystem of partnerships. Zhang Yaqin, President of Baidu, said in 2018 that in expanding its AI technologies in traditional industries, “the most important thing lies not only in product and technology, but also in partners in the ecosystem.”5

In autonomous driving innovations, the PRC and the US are so interconnected through research labs, partnerships, semiconductors/processing chips and staff that it is hard to attach a nationality to the research.

Many countries have now chosen to tackle issues like cyber security, data protection and data privacy unilaterally rather than through multilateral trade agreements. Yet, the global trading system, particularly in technology, is an interconnected global ecosystem. Unless governments can cooperate to create new global rules and domestic reforms, global innovation may be hindered. This is not a zero-sum game. Global linkages are too great.

PRC data will change the world

The PRC’s massive accumulation of data, largely by private sector companies, will change history. For example, healthcare will be changed by the PRC’s huge data sets in areas such as lung cancer CT scan image recognition, or even the identification of rare tropical diseases through Deep Learning.

Yet data privacy is of concern to ordinary citizens, academics and private companies in the PRC, too. How the PRC’s nascent data protection laws develop will determine the usefulness of PRC datasets to AI advances. If the PRC’s ecosystem becomes further walled off – particularly in open-source AI platforms – it is hard to predict what kind of AI will be developed there.

Xinjiang in western China is an example of the surveillance state in action. But some issues have been overemphasised by the media, for example, the social credit system. The system is not one unified apparatus: it began as an attempt to solve a huge problem – the large unbanked and shadow banking economy – and the system has run into many hurdles.

In reality, only some pilot programs have scores and each program records scores in its own way. There is no standardised national social credit score. Rather, there is a complex web of systems run by different ministries, levels of government and regions, interconnected by data sharing. A primary role of the system is to foster trust in the marketplace, promote responsible social conduct and provide a means of internal oversight of government officials.6

A group of researchers investigated how behaviour was judged under the pilot Beijing Social Credit System website using Machine Learning tools.7 They found the system was clear about what actions are punishable, but less clear about what actions earn positive points. The majority of the blacklist consisted of people who had failed to pay debts or committed traffic violations. There were also more companies than individuals on the Beijing blacklist, perhaps indicating that the government is more concerned with controlling companies than people, and offsetting problems like shadow banking. Companies penalised under the PRC’s Cyber Security Law are overwhelmingly domestic PRC firms. International companies including Zara and the Marriott hotel chain have only been penalised for not listing Taiwan as part of China on their websites.
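For readers unfamiliar with such methods, the following is a hypothetical sketch, in Python with the open-source scikit-learn library, of the general kind of text-classification workflow such a study might use. It is not the researchers’ actual code, and the records and labels are invented for illustration.

    # Hypothetical sketch: sorting blacklist records by the conduct described.
    # Uses scikit-learn; the records and labels below are invented.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    records = [
        "failed to repay court-ordered debt",
        "refused to pay outstanding loan",
        "ran a red light in a company vehicle",
        "repeated speeding violations",
    ]
    labels = ["debt", "debt", "traffic", "traffic"]

    # Turn each record into word-frequency features, then fit a classifier.
    classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
    classifier.fit(records, labels)

    # Classify a new, unseen record.
    print(classifier.predict(["did not honour a loan agreement"]))

In practice the study analysed Chinese-language records at much larger scale; the sketch only shows that such classification is routine with open-source tools.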

Social credit could be used coercively. The legal system in the PRC is not independent of the Party. A court ruling that stems from a political case could land someone on a blacklist.

Because different pilot systems exhibit different trends and scoring systems, and because of potential coercion, the social credit experiment remains an issue to be monitored closely.

Australia will constantly struggle with a technology and a country that is both a challenge and an opportunity. But AI is global. The PRC is a major player and business will seek AI opportunities in the PRC. The way forward is to regularly engage with international stakeholders, including private PRC companies and academics, to create better global privacy standards and data privacy regulatory environments. Without PRC participation, discussions of AI ethics may prove meaningless.

Policy recommendations

  • Australia should rely on diplomacy and work via established multinational networks, Track-2 business-to-business dialogues, and relationships in the PRC to discuss AI ethics and manage risks through dialogue. The PRC must not be isolated. Again, without PRC participation, discussions of AI ethics may prove meaningless.
  • The Australian government should continue to support connections between entrepreneurial ecosystems globally, such as through Australia’s Landing Pad in Shanghai.
  • Australian companies and the Australian government should seek to join conversations on issues such as algorithmic transparency. Dialogue with PRC industry and academia is valuable.
  • The Australian government should participate in the creation of global cyber standards, especially in areas where none exist.
  • The Australian government should build cyber transparency centres to review IT products where necessary, and improve Australia’s overall cyber capabilities.
  • The Australian government should provide funding to train more cyber security professionals and must base decisions on PRC involvement in ICT on technical expertise and rational assessments of risk.
  • Australian legislators should study the pros and cons of the PRC’s flexible data protection regulations, which evolve as new technologies emerge, and consider what lessons they hold for promoting Australian innovation.
  • The Australia-based projects of PRC tech and AI companies should be assessed on a case-by-case basis.
  • Australian businesses should consider which PRC companies are suitable for collaboration based on the risks associated with specific technologies.

Author

Max Parasol has been teaching about Chinese innovation as a Senior Fellow at Monash University since 2014. He was a Visiting Fellow at Peking University in 2019. He is finalising a PhD studying the Chinese innovation ecosystem, focused on China’s environment for developing artificial intelligence through open-source platforms and global networks.

China Matters does not have an institutional view; the views expressed here are the author’s.

This policy brief is published in the interests of advancing a mature discussion of the PRC’s AI ambitions. Our goal is to influence government and relevant business, educational and non-governmental sectors on this and other critical policy issues.

China Matters is grateful to six anonymous reviewers who received a blinded draft text and provided comments. We welcome alternative views and recommendations, and will publish them on our website. Please send them to [email protected].

Notes

  1. Machine learning is variously characterised as either a sub-field of AI or a separate, related field. See: ‘The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation’, Oxford and Cambridge Universities, OpenAI et al., 2018, p. 9, https://img1.wsimg.com/blobby/go/3d82daa4-97fe-4096-9c6b-376b92c619de/downloads/1c6q2kc4v_50335.pdf
  2. Kai-Fu Lee, AI Superpowers: China, Silicon Valley, and the New World Order, New York: Houghton Mifflin Harcourt, 2018.
  3. On 28 May 2019, the Beijing Academy of Artificial Intelligence (BAAI) released the “Beijing AI Principles”, an outline to guide the R&D, implementation and governance of AI. The Principles were endorsed by leading Chinese institutions including Peking University, Tsinghua University, and the Chinese Academy of Sciences’ Institute of Automation and Institute of Computing Technology, as well as by companies including Baidu, Alibaba, and Tencent. The document provides a statement of Chinese views on AI ethics.
  4. Chris Byrd, ‘To make AI ethics work, we need Chinese companies to lead’, TechNode, 29 May 2019, https://technode.com/2019/05/29/to-make-ai-ethics-work-we-need-chinese-companies-to-lead/
  5. Zhou Mo, ‘Baidu expands use of its AI tech in traditional industries’, China Daily, 31 May 2018, http://www.chinadaily.com.cn/a/201805/31/WS5b0fe579a31001b82571d7d6.html
  6. Rogier Creemers, ‘China’s “Social Credit System” Isn’t What It Sometimes Seems—So Far’, New America, 14 May 2018, https://www.newamerica.org/cybersecurity-initiative/digichina/blog/chinas-social-credit-system-isnt-what-it-sometimes-seems-so-far/
  7. Mo Chen, Severin Engelmann, Felix Fischer, Jens Grossklags and Ching-Yu Kao, ‘Clear Sanctions, Vague Rewards: How China’s Social Credit System Currently Defines “Good” and “Bad” Behavior’, Conference on Fairness, Accountability, and Transparency, January 2019, https://www.cybertrust.in.tum.de/fileadmin/w00bzf/www/papers/2019-FAT-Engelmann-Chen.pdf