Global Rule of Law and Dual-use Technologies
Lidia Zuin
The New York Public Library @ unsplash.com
💡 The images used in this article are photos of immigrants arriving and living in the United States at the beginning of the 20th century. They are courtesy of The New York Public Library's historical collection.
According to Amara’s Law, “[...] we tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.” In other words, whenever a technology is being developed or a scientific breakthrough is made, we, as a society, tend to get excited and overestimate its immediate benefits, while overlooking the slower, longer-lasting changes it will bring.
The very concept of an “exponential technology” holds the promise of a technology that obeys Moore’s Law of exponential development, that is, the observation first made by Gordon Moore in 1965 (and later revised) that the number of transistors in a dense integrated circuit doubles about every two years. As Silicon Valley pushes the agenda of futurism, technologies from the branch of Artificial Intelligence (AI) are presented as possible turning points in our history; in fact, they are central to what the World Economic Forum (WEF) calls the Fourth Industrial Revolution.
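As a back-of-the-envelope illustration of that doubling rhythm, the snippet below compounds a purely hypothetical transistor count over time; the figures are illustrative only and do not describe any real chip.

```python
# Illustrative only: project a hypothetical transistor count that doubles every two years.
def projected_count(initial_count: float, years: float, doubling_period: float = 2.0) -> float:
    """Compound growth under an idealized Moore's-Law trend."""
    return initial_count * 2 ** (years / doubling_period)

# A hypothetical chip with 1 billion transistors would reach roughly 32 billion
# transistors after a decade under this idealized trend.
print(f"{projected_count(1e9, 10):,.0f}")  # 32,000,000,000
```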
But the WEF was also responsible for diagnosing the hype around certain technologies, as was the case with blockchain. In 2018, the institution published a document with frameworks to help executives and organizations understand whether or not blockchain was an appropriate and helpful tool for their business needs. In their words:
World Economic Forum
By understanding the reasons and expectations behind the use of a certain technology, organizations can better assess its demands, benefits, and risks. In order to assess the impact of emerging technologies on migration policies, the present case study comprises an in-depth analysis of blockchain and artificial intelligence and of their corresponding dual-use applications in this field.
The first section of this paper focuses on the impact of blockchain technology and on how its defining features, cryptography and decentralization, can be used both to safeguard people and to be weaponized in cyberattacks. The second section evaluates AI in its central role in the automation of systems, including those used in border control and in defense departments. Overall, we evaluate the very purpose of these technologies and assess the “dual” impact they may produce.
Blockchain: From the Underground to Over the Top
Immigrants being served a free meal at Ellis Island.
The New York Public Library @ unsplash.com
Blockchain has moved on from a conversation about the use of Bitcoin for illegal commerce to become a platform for financial speculation in cryptocurrencies and NFTs, as well as a method for achieving decentralized, transparent, and incorruptible transactions.
When it comes to citizenship, Bitnation’s Pangea is one of the most prominent examples of a system attempting to escape the hegemony of nation-states and compete with existing institutions and governmental systems, as shown by Filippi. The initiative provides self-sovereign identities, notarization services, property rights and company registration, dispute resolution systems, and other functionalities usually associated with public administration.
While Estonia has implemented a digital transformation project in public services such as notarial processes, Bitnation attempts to take this to the next level on the Ethereum blockchain, which is a “public and transnational blockchain [that] provides the necessary transparency, verifiability, incorruptibility, and trust that one would expect from these governmental services”, argues Filippi. In other words, Bitnation might be fulfilling what a 2018 study diagnosed as “cloud communities:” a new conception of global citizenship based on blockchain technology. (→ For more information about blockchain consensus mechanisms, please visit our article on Blockchain and Sustainability.)
When it comes to the dual use of blockchain, however, it is its ability to decentralize and enforce incorruptibility that could be applied, for instance, to an accountability feature connected to military control systems, such as monitoring the provenance of weapons and their legal distribution throughout the globe. These same specifications could also fulfill the proposal of Baran’s model of decentralized communications, a system created during the nuclear threat of the Cold War with the objective of decentralizing control systems to improve their overall resilience to attacks.
In the case of battleships, the blockchain-based project Aegis is already paving the way for the future of military control systems. Their platform uses blockchain to verify that all nodes are working from the same set of data, creating tamper-proof records of every action happening on the battlefield, such as threat neutralization. That is, every action is recorded on the blockchain, making this information immutable and accessible to the whole network.
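To make the tamper-evidence claim more concrete, the toy sketch below chains records together by having each entry commit to the hash of its predecessor, so altering an earlier entry invalidates every later one. It is a minimal illustration of the general principle, not the Aegis platform's actual design.

```python
import hashlib
import json

def make_block(payload: dict, prev_hash: str) -> dict:
    """Create a record that commits to the previous record's hash."""
    body = {"payload": payload, "prev_hash": prev_hash}
    body["hash"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return body

def verify_chain(chain: list) -> bool:
    """Recompute every hash; tampering with any earlier block breaks the links."""
    prev = "0" * 64
    for block in chain:
        recomputed = hashlib.sha256(
            json.dumps({"payload": block["payload"], "prev_hash": block["prev_hash"]},
                       sort_keys=True).encode()).hexdigest()
        if block["prev_hash"] != prev or block["hash"] != recomputed:
            return False
        prev = block["hash"]
    return True

# Hypothetical battlefield events recorded in order.
chain, prev = [], "0" * 64
for event in ["radar contact", "threat classified", "threat neutralized"]:
    block = make_block({"event": event}, prev)
    chain.append(block)
    prev = block["hash"]

print(verify_chain(chain))           # True
chain[0]["payload"]["event"] = "x"   # tamper with an early record
print(verify_chain(chain))           # False
```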
According to Matisek, while blockchain can be used for defense, it can also be applied to offense, which confirms its framing as a dual-use technology. The author argues that blockchain has "enumerable advantages for offensive uses in hybrid warfare" thanks to features that enable both security and anonymity: functionalities that could be used to protect classified data, but also for the monetization of war.
One example is the official accusation made by South Korea against North Korea in 2018, which affirmed that its neighbor had hacked local exchanges to steal billions of won in cryptocurrency. Similarly, the creation of state cryptocurrencies is seen by specialists as a possible warfare threat, since digital currencies “have the potential to become the backbone of individual nations’ economies.”
While blockchain cryptography is one of the most sophisticated security systems in the contemporary world, the processing power of quantum computing could ultimately break blockchain’s cryptographic system and ignite a new kind of cybersecurity threat. Matisek argues that, in this scenario, quantum computing "[...] could enable hostile actors to more easily hack rival secure networks.”
Digitized Borders, Automated Frontiers
As much as technology has always been part of border and immigration security (whether symbolically, through passports, or physically, through the construction of walls), a growing number of commentators have been pointing to the digitization of such procedures, which ends up creating what they call “digital borders.”
In 2020, the United Nations published a report on discrimination, race, technology, and borders. The document uses the term “digital borders” to refer to “borders whose infrastructure and processes increasingly rely on machine learning, automated algorithmic decision-making systems, predictive analysis and related digital technologies.” These technologies, in turn, are “[...] integrated into identification documents, facial recognition systems, ground sensors, aerial video surveillance drones, biometric databases, asylum decision-making processes and many other facets of border and immigration enforcement.”
Immigrants at Ellis Island, Registry Room (or Great Hall).
The New York Public Library @ unsplash.com
The report stresses that, rather than reducing bureaucracy and processing time, these digital border technologies are only “reinforcing parallel border regimes that segregate the mobility and migration of different groups on the basis of national origin and class, among others.” One example addressed in the document is the introduction of “eGates” at international airports, in which “self-service” booths allow EU/EEA and Swiss nationals to enter the country automatically, without presenting themselves to an officer.
According to the report, in these systems people have their faces scanned and cross-checked against a biometric database that already holds over 8 million registered individuals, most of them people fleeing conflict or in need of humanitarian assistance. Notwithstanding, the same report also points out that Black women were misidentified twenty times more often than white men. In its words, such a technological gap only reinforces a discriminatory mechanism based on race, gender, and other demographics such as country of origin. This is what some researchers call “techno-colonialism”.
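Disparities of this kind usually surface when error rates are computed per demographic group instead of as a single aggregate figure. The sketch below illustrates that idea on made-up audit data; the group labels and numbers are hypothetical and are not drawn from the UN report.

```python
from collections import defaultdict

def per_group_error_rates(records):
    """records: iterable of (group, was_misidentified) pairs from a hypothetical audit."""
    errors, totals = defaultdict(int), defaultdict(int)
    for group, misidentified in records:
        totals[group] += 1
        errors[group] += int(misidentified)
    return {group: errors[group] / totals[group] for group in totals}

# Hypothetical data: a single aggregate error rate would hide the 20x gap between groups.
audit = ([("group_a", False)] * 990 + [("group_a", True)] * 10
         + [("group_b", False)] * 800 + [("group_b", True)] * 200)
print(per_group_error_rates(audit))  # {'group_a': 0.01, 'group_b': 0.2}
```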
Nevertheless, automation remains a resourceful strategy for facilitating migration and border processes. Several UN member states and multiple UN bodies have been using big data analytics to better inform their policymaking. Similarly, as the UN report suggests, instead of banning such technologies, they can be used to generate insights for designing comprehensive regulations. The GDPR is, for instance, a first step toward making organizations accountable for the data they collect and how it is used.
Since the debate over the use of big data often relates to privacy, it is important to revisit the meaning of privacy itself. In the 1990s, ‘cypherpunk’ activists wrote a manifesto that opens with the following assertion:
Cypherpunk Manifesto
In other words, it is the specificity of the data being collected, and the accountability involved in managing big data, that will really matter for the future of privacy. While surveillance systems become increasingly automated with the help of AI applications, organizations such as Homo Digitalis and Privacy International focus on educating people about how their data is treated and on proposing ways to enhance data protection standards, such as reviewing the algorithmic structure of identity systems and pursuing security mechanisms that protect people’s ownership of the data they produce.
As much as “data is the new oil”, data can also ignite warfare; take, for instance, cases of political espionage or hacking. When this data is biased, it is even more worrisome. According to Angela M. Sheffield, entry points may contain unintentional biases held by the developers, which is a real issue when there is a lack of diversity in the workforce, or when the data used to train a program under-represents or over-represents a group. All in all, these are crucial issues that could turn useful tools into dangerous toys, and departments of defense (DoD) do not wait for adjustments, as Sheffield states:
Angela M. Sheffield
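One way to catch the under- or over-representation Sheffield describes is to compare group shares in a training set against a reference population before any model is trained. The check below is a generic sketch; the group labels, counts, and tolerance are hypothetical.

```python
def representation_gaps(train_counts: dict, population_shares: dict, tolerance: float = 0.1) -> dict:
    """Flag groups whose share of the training data drifts from the reference population."""
    total = sum(train_counts.values())
    flags = {}
    for group, expected_share in population_shares.items():
        observed_share = train_counts.get(group, 0) / total
        if abs(observed_share - expected_share) > tolerance:
            flags[group] = (observed_share, expected_share)
    return flags

# Hypothetical counts: group_c is nearly absent from the training data, group_a dominates.
train_counts = {"group_a": 700, "group_b": 280, "group_c": 20}
population_shares = {"group_a": 0.5, "group_b": 0.3, "group_c": 0.2}
print(representation_gaps(train_counts, population_shares))
# {'group_a': (0.7, 0.5), 'group_c': (0.02, 0.2)}
```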
In international warfare, whoever achieves supremacy in AI and quantum computing moves closer to the top of the podium. In the case of the US, the so-called Project Maven is its main enterprise for applying AI to warfare.
Formally known as the Algorithmic Warfare Cross-Functional Team (AWCFT), the project was established in 2017 with the mission to “accelerate the DoD’s integration of big data and machine learning.” Its focus is “to apply computer vision algorithms to tag objects identified in images or videos captured by surveillance aircraft or reconnaissance satellites.” The project drew wider media attention when Google, one of the several tech companies in the program, publicly announced its withdrawal after employees claimed that the project was a means to weaponize AI.
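The tagging task described above corresponds to what the computer-vision literature calls object detection. The sketch below runs an off-the-shelf detector from torchvision on an arbitrary image; it assumes torch and torchvision are installed, uses a hypothetical file name, and is a generic illustration rather than Project Maven's actual pipeline.

```python
# Generic object-detection sketch (not Project Maven's actual system).
import torch
from torchvision.io import read_image
from torchvision.models.detection import (
    fasterrcnn_resnet50_fpn, FasterRCNN_ResNet50_FPN_Weights,
)

weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights).eval()
preprocess = weights.transforms()

frame = read_image("surveillance_frame.jpg")   # hypothetical input frame
with torch.no_grad():
    detections = model([preprocess(frame)])[0]

# Keep confident detections and translate label indices into category names.
categories = weights.meta["categories"]
for label, score in zip(detections["labels"], detections["scores"]):
    if score > 0.8:
        print(categories[label], float(score))
```

In a surveillance setting, the equivalent step would tag vehicles, vessels, or people in aerial footage, which is precisely where the bias and accountability concerns raised earlier become critical.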
Immigrants at Ellis Island undergoing a medical inspection. Unknown date.
The New York Public Library @ unsplash.com
Preventive Regulations
In November 2021, the U.S. Department of Defense’s innovation unit published a whitepaper outlining guidelines to avoid “unintended consequences” in AI systems. As described by VentureBeat, the paper includes “worksheets for system planning, development, and deployment that are based on DoD ethics principles adopted by the Secretary of Defense and was written in collaboration with researchers at Carnegie Mellon University’s Software Engineering Institute.” In the meantime, NATO is also working on an AI strategy based on the following principles:
Lawfulness: AI applications will be developed and used in accordance with national and international law, including international humanitarian law and human rights law, as applicable.
Responsibility and Accountability: AI applications will be developed and used with appropriate levels of judgment and care; clear human responsibility shall apply in order to ensure accountability.
Explainability and Traceability: AI applications will be appropriately understandable and transparent, including through the use of review methodologies, sources, and procedures. This includes verification, assessment and validation mechanisms at either a NATO and/or national level.
Reliability: AI applications will have explicit, well-defined use cases. The safety, security, and robustness of such capabilities will be subject to testing and assurance within those use cases across their entire life cycle, including through established NATO and/or national certification procedures.
Governability: AI applications will be developed and used according to their intended functions and will allow for: appropriate human-machine interaction; the ability to detect and avoid unintended consequences; and the ability to take steps, such as disengagement or deactivation of systems, when such systems demonstrate unintended behavior.
Bias Mitigation: Proactive steps will be taken to minimize any unintended bias in the development and use of AI applications and in data sets.
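The “Governability” principle above implies a concrete engineering pattern: a deployed system wrapped in a monitor that can disengage or deactivate it when it behaves outside an agreed envelope. The sketch below is only a schematic illustration of that pattern; the envelope check, thresholds, and class names are hypothetical and not drawn from any NATO specification.

```python
class GovernedSystem:
    """Wrap a decision-making component and disengage it on repeated unintended behavior."""

    def __init__(self, component, within_envelope, max_violations: int = 3):
        self.component = component              # the underlying model or controller
        self.within_envelope = within_envelope   # hypothetical policy check: (inputs, output) -> bool
        self.max_violations = max_violations
        self.violations = 0
        self.active = True

    def decide(self, inputs):
        if not self.active:
            raise RuntimeError("System disengaged; human review required.")
        output = self.component(inputs)
        if not self.within_envelope(inputs, output):
            self.violations += 1
            if self.violations >= self.max_violations:
                self.active = False  # the deactivation step the Governability principle calls for
        return output
```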
In Europe, the United Kingdom and Germany have been working together on anticipating the perils of warfare in the Fourth Industrial Revolution. The same goes for the Swiss army, which has been investing in comprehensive research on technological innovation, with a heavy focus on AI. In 2018, Jean-Marc Rickli (Geneva Centre for Security Policy, GCSP) published an article reflecting on the dangers and promises of autonomous weapon systems (AWS), which could ultimately be used for both defensive and offensive strategies. While such systems do not yet exist, their potential impacts on international security are already being investigated, and the findings suggest that all companies dealing with AI should be “very vigilant about not granting too much power and autonomy to weaponized robots and algorithms.”
One AI to Rule Us All?
Meanwhile, the Swedish philosopher Nick Bostrom has been working on the concept of an AI Singleton, which stands for a single artificially intelligent entity capable of making decisions at the highest level. According to him, this hypothetical use of AI for governance and decision-making would result in an entity with the ability to (1) prevent any threats (internal or external) to its own existence and supremacy, and (2) exert effective control over major features of its domain (including taxation and territorial allocation).
While the Singleton has both positive and negative sides, Bostrom believes that, historically, we, as a society, have developed along a trend of consolidation, from hunter-gatherer bands to chiefdoms, city-states, and even multinational organizations. A Singleton would therefore simply be an extrapolation of this societal behavior, guided by other technological trends such as the improvement of surveillance, mind-control technologies, communication technologies, and artificial intelligence. However, cryptographic systems like blockchain could make a Singleton less likely, since their premise is precisely to decentralize decision-making. The very concept of a DAO (decentralized autonomous organization) is to build a form of governance that has no leaders or central decision-makers, with technology instead facilitating the process in a more collective way.
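Stripped of its on-chain machinery, the DAO idea reduces to a simple rule: a decision takes effect only when a distributed quorum approves it, and no single member can decide alone. The toy sketch below illustrates that rule off-chain; the member names, quorum, and approval thresholds are hypothetical and do not describe any particular DAO.

```python
def proposal_passes(votes: dict, total_members: int,
                    quorum: float = 0.5, approval: float = 0.6) -> bool:
    """votes maps a member id to True/False; no privileged decision-maker exists."""
    turnout = len(votes) / total_members
    if turnout < quorum:
        return False  # not enough participation to act at all
    return sum(votes.values()) / len(votes) >= approval

members = ["alice", "bob", "carol", "dave", "erin"]        # hypothetical members
votes = {"alice": True, "bob": True, "carol": False, "dave": True}
print(proposal_passes(votes, total_members=len(members)))  # True: 80% turnout, 75% approval
```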
Some aspects of this discussion are already taking place, while many other facets remain in the realm of speculation about the future. However, while some applications of blockchain and AI might not yet exist in the field of military and warfare, Rickli argues that even specialists like Sergey Brin, Google’s co-founder, did not see the effects of AI coming. For that reason, Rickli argues that watching the effects of these emerging technologies at the market level is a means to understand and anticipate possible outcomes, thus avoiding further consequences that could lead us to a dystopian scenario.
International Multiplicity
Migrant agricultural worker's family. 1936.
Dorothea Lange @ The New York Public Library
Technologies are tools, and once they are invented, their applications can differ from the original intention. While inventors and developers have a first impression of what their work could be useful for, once these technologies are deployed they are subject to regulations, laws, ethics, and the ruling economic system. As an example, Jamie Metzl, former director for multilateral and humanitarian affairs at the United States National Security Council, suggests in his book “Hacking Darwin” that, when it comes to gene editing, fertility tourism could exploit the gaps left by the international variation of policies. That is, some countries could allow certain kinds of procedures while others forbid them.
This same premise could be applied to migration policies and the way AI and blockchain-based solutions could be integrated into those systems. In Canada, for instance, DNA tests are used to investigate migrants’ backgrounds and origins. The government applied this measure after trialing other methodologies that subsequently failed; nonetheless, the decision has been judged unethical by anthropologists and lawyers.
As we have seen, the dual use of technologies depends heavily on the objectives and goals adopted by each country and institution. Since cultures vary and judgments are relative to specific circumstances, it might be hard to develop a single, global rule of law that contemplates all the diversity of our societies. Maybe it is the very ambiguity of technology that could help us deal with multiple scenarios; possibly, it is the dual use of the tools we handle that could provide insights to build our collective hopes for the future and avoid the disasters of the past.