Open Systems Interconnection (OSI) is a reference model for understanding and developing compatible networking and communication practices, proposed in 1984 by the International Organization for Standardization (ISO) as a common framework for networking technology. The OSI model illustrates the relationships between the components of a network and provides a framework in which each layer serves a distinct purpose and operates independently of the others according to its separate function. Froehlich (2021) argues that the OSI model is “theoretical in nature and should only be used as a general guide,” but it serves an important purpose in creating a clear framework for understanding how a telecommunication system should work. The distinct terminology and formats of the different OSI layers help delineate the specific roles and tasks that take place throughout a network and aid in its planning, design, and troubleshooting.
The OSI model uses conceptual abstraction to separate networking functions into compartmentalized layers that can be evaluated one piece at a time (Froehlich, 2021). The seven layers of the OSI model work in a cascading hierarchy, each layer serving the layer above it and being served by the layer below it. The highest layer is the application layer, followed by the presentation, session, transport, network, and data-link layers, and finally the physical layer. The OSI model benefits from its adaptability, security, and flexibility, and has become a standard model in networking.
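The cascading service relationship described above can be sketched in a few lines of code. This is only an illustration of the layering concept, not a real protocol stack: the bracketed "headers" are placeholders standing in for the real headers each layer would add, and the function names are my own.

```python
# Minimal sketch of OSI-style encapsulation: on the way down the stack,
# each layer wraps the data handed to it by the layer above with its own
# header; on the way up, each layer strips its own header and passes the
# rest to the layer above. Headers here are illustrative placeholders.

LAYERS = [
    "application", "presentation", "session",
    "transport", "network", "data-link", "physical",
]

def send(payload: str) -> str:
    """Pass a payload down the stack, layer 7 to layer 1."""
    for layer in LAYERS:
        payload = f"[{layer}]{payload}"  # each layer serves the one above it
    return payload  # the fully wrapped frame that goes "on the wire"

def receive(frame: str) -> str:
    """Pass a received frame up the stack, layer 1 to layer 7."""
    for layer in reversed(LAYERS):
        prefix = f"[{layer}]"
        if not frame.startswith(prefix):
            raise ValueError(f"malformed {layer} header")
        frame = frame[len(prefix):]  # each layer strips its own header
    return frame  # the original payload, restored for the application

wire = send("hello")
print(wire)           # headers nested in layer order, physical outermost
print(receive(wire))  # original payload restored
```

Because each layer only touches its own header, any one layer could be swapped out without the others changing, which is exactly the compartmentalization the model is meant to provide.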
The disadvantages of the OSI model include that its layers cannot work in parallel (each must wait for an adjacent layer to pass data to it), that the session and presentation layers are not as useful as the other layers, and that, as a theoretical model, it faces physical restrictions on its practical implementation (Froehlich, 2021).
Bryne (2016) introduces the concept and history of the Locomotive Act of 1865, also known as the Red Flag Act, passed by the U.K. parliament to increase safety and awareness surrounding self-propelled vehicles. The Red Flag Act states that when a vehicle has several carriages attached, a pedestrian carrying a red flag must lead the vehicle from at least 60 yards in front of it. The author explains that the Red Flag Act illustrates the idea that there should be a warning when a large, dangerous machine is approaching. The same concept is applied to artificial intelligence and computer interactions, as opposed to human-to-human interactions: a machine would be clearly labeled as a machine so as not to mislead the human user. This concept has been called the Turing Red Flag Law by Toby Walsh, an Australian artificial intelligence professor.
Potential Benefits for Society
Artificial intelligence is an increasingly powerful tool that continues to benefit society in new ways. As it grows in complexity, artificial intelligence becomes more difficult to distinguish from human computer users. A Turing Red Flag Law would create a responsibility to maintain accountability and transparency for all artificial intelligence, which would reduce uncertainty among computer users and might even positively influence the evolution of artificial intelligence itself, further benefiting society as a whole.
Enforcing transparency through the computer architecture itself mitigates the potentially harmful consequences and risks of human and artificial intelligence users being indistinguishable. If artificial intelligence continues to be used in an increasing number of applications, its recommendations will likely be trusted more than an anonymous human user’s recommendations because of the regulation enforced within its architecture. Flagging artificial intelligence could also benefit users’ security and transparency by making it more difficult for a human threat actor to intercept their interactions. With additional development, the human-to-AI authentication methods available to an artificial intelligence model could easily surpass those available to most human users.
Threat actors will inevitably continue to use artificial intelligence as a tool for malice, and a Turing Red Flag Law would bring additional transparency to the harmful actions of an artificial intelligence so that they could be more easily detected and stopped. Artificial intelligence systems have a history of exhibiting bias, and with a Turing Red Flag Law in effect this behavior could be more easily identified as bias by a human user.
Potential Consequences for Society
One of the technology industry’s primary concerns, should a Turing Red Flag Law be put into effect, is that the continued development of artificial intelligence will be stifled or stunted by the additional limitations placed on the software. Such a limitation would likely not be adopted globally, which could lead to advanced rogue artificial intelligence use by non-participating organizations. Technology creators might fear legal repercussions in developing their artificial intelligence systems to their full potential. It is vital that a Turing Red Flag Law not stand in opposition to the creativity and freedom of experimentation necessary for the continuous development of these important technologies.
Bryne quotes Toby Walsh in his discussion of the use of artificial intelligence in self-driving cars and how the Turing Red Flag Law would apply within that specific example. Walsh states that human drivers often make far more mistakes on the road than artificial intelligence and are a more dangerous threat than a self-driving system. There would therefore be a potential benefit in being able to distinguish between human drivers and artificial intelligence systems on the road, because the human driver could be recognized as less predictable, more dangerous, and requiring more attention. Simultaneously, the artificial intelligence drivers could be expected to drive predictably, follow traffic rules appropriately, and fare better in low-visibility conditions. This entire concept adds to the discussion of the value inherent in knowing whether a user is human or computer.
Reflecting on the Author’s Proposal
The development of artificial intelligence systems requires balancing the forces of regulation and rapid development so that the technology continues to progress appropriately. The added transparency and accountability of a Turing Red Flag Law could be powerful benefits of adoption, while the slowing of development and the attention regulation demands present considerable challenges to implementation. Regardless of whether a Turing Red Flag Law is adopted in its current conceptualization, society must develop a way for artificial intelligence algorithms to provide accountability and transparency, as well as detection of, and intervention in, artificial intelligence’s potentially harmful behavior.
Armed with artificial intelligence and machine learning tools, threat actors will make many attempts to disguise their artificial intelligence systems in hopes of masquerading as human users; I think it is extremely likely that tools to detect artificial intelligence algorithms for authentication will be brought into use. I think that defensive artificial intelligence models will quickly become more adept at detecting other artificial intelligence algorithms and labeling them as such, with more proficiency than is possible for humans. Due to the evolutionary nature of cyber security and artificial intelligence tools, the tools used to detect artificial intelligence will likely be an AI trained on data models of all known AIs. This technology would function similarly to anti-virus software in that it takes signatures of an algorithm and matches them against a collection of signatures in a database of known AI algorithms.
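The anti-virus analogy can be sketched as a simple signature-matching routine. This is a toy illustration of the lookup mechanism only: the database, model names, and hash-based "signatures" below are all hypothetical stand-ins for whatever fingerprinting a real detector would use.

```python
import hashlib

# Hypothetical database mapping signatures (here, hashes of characteristic
# output patterns) to known AI model names -- illustrative only.
KNOWN_AI_SIGNATURES = {
    hashlib.sha256(b"pattern-from-model-a").hexdigest(): "model-a",
    hashlib.sha256(b"pattern-from-model-b").hexdigest(): "model-b",
}

def classify(sample: bytes) -> str:
    """Match a sample's signature against the database of known AIs,
    the way anti-virus software matches file signatures against its
    collection of known malware signatures."""
    signature = hashlib.sha256(sample).hexdigest()
    return KNOWN_AI_SIGNATURES.get(signature, "unknown / possibly human")

print(classify(b"pattern-from-model-a"))  # matches a database entry
print(classify(b"something novel"))       # no match in the database
```

As with anti-virus software, the obvious weakness of this design is that it only recognizes what is already in the database, which is why the arms-race dynamic described above would demand constant signature updates.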
Over-enforcement of a law like a Turing Red Flag Law would likely put small businesses and individuals at a disadvantage in developing artificial intelligence systems as rapidly as large corporations, because of the regulatory attention that must be paid and the severe penalties for potential artificial intelligence mistakes. These regulations would lead to longer testing cycles and potentially greater ethical attention when working with artificial intelligence, which I think is a positive and necessary change.
AI models are trained on data created by humans, and each system inherits all the bias, prejudice, and hypocrisy portrayed in its training data. Attention to developing artificial intelligence regulation that corrects the inherent problems in training data will hopefully cause us, as a society, to address those same issues as they exist in people. I hope that, as we make societal decisions surrounding these issues, we are brought into a global conversation covering all aspects of them, so that we can begin to shape the architecture of the next generation of artificial intelligence tools. Without transparency, accountability, and due diligence given to ethics, the benefits of artificial intelligence tools will not optimally serve the common goals of humanity. I do not look forward to a future with many different personalities of artificial intelligence developed with various intents. The potential of artificial intelligence as a unifying technology for global communication and understanding is unprecedented; its significance could be exponential, similar in effect to the internet and the telephone.
Government and Business Applications of Facial Recognition
The federal government commonly uses facial recognition software at federal points of interest such as border crossings, and within its own records for FBI investigations. Policies are in place that restrict the FBI’s ability to use photographs as positive identification within a case; however, the organization is free to use photographs as clues toward possible identifying evidence. The FBI maintains a large repository of photographic data called the Interstate Photo System (IPS) and acts upon this data with its Next Generation Identification (NGI) system. This repository links individuals’ fingerprint data with face photos and other identifying information for FBI access. According to Brandom (2021), at the state level, a few states such as Massachusetts and Maine have passed legislation intended to limit state governments’ ability to use facial recognition software.
ACLU and Criticism of the Technology
The American Civil Liberties Union (ACLU) has issued critical statements regarding the privacy issues surrounding facial recognition software and has stated that it is handling the issue with seriousness and concern. Balli (2021) stated that Facebook’s parent company, Meta, decided to stop using facial recognition software on its platform in response to the declared privacy concerns of its users and of governments. Brandom (2021) raised concerns about an innate racial bias within facial recognition technology, which adds further significant issues to the topic.
Facial Recognition and Individual Privacy
Facial recognition technology stands in direct opposition to individual privacy when installed in public spaces. The technology could be used as a biometric authentication method for retail purchases or entrance keys, or even to enforce criminal penalties. According to PrivacyRights.org (2011), prevalent use of facial recognition software in public spaces would lead to a complete loss of individual anonymity while improving commercial demographic targeting in marketing and sales. I believe that our individual anonymity is a large part of what makes us feel comfortable and safe in public environments, and I think that with broad application of facial recognition software we would lose that sense of comfortable anonymity within society’s public spaces.
Private Businesses’ Potential Abuse of the Technology
The capabilities of facial recognition technology are not yet common knowledge, and the ethical considerations of its possible applications are still being weighed by the technology community. Facial recognition is often used in military applications for targeting individuals, but the same basic technology is also available for commercial use by private businesses. The scalability of facial recognition software can easily create an overarching and controlling architecture, with a high risk of important personal data being leaked in the event of a breach.
Advocacy for Facial Recognition
There is a market understanding that many customers will push back against a privacy overreach if it becomes too concerning. However, the commercial world will use any tool at its disposal if it aids in reaching more customers, making more sales, or increasing profit margins. Facial recognition allows businesses to give customers personalized experiences as they enter a store, or to target them with coupons after they are recognized at a corresponding location. The ethics of facial recognition technology will polarize technology leaders: some will embrace the technology, while others will choose not to use it, citing ethical concerns.
IT Manager’s Perspective
As an IT manager, I would not encourage the use of facial recognition software in commercial applications. On the government level, I think that facial recognition software has valuable and viable applications in crime prevention and international security. The policies I would support as an IT manager would prioritize the individual privacy rights of my organization’s customers over leveraging technology with possible ethical ramifications. As a business, I would not want the legal responsibility of housing and protecting sensitive personal data from possible data breaches, even with the use of a third-party partner. I stand for proper authentication methods and a minimalist, need-to-know style of personal data sharing.