Category: artificial intelligence

  • Opinion: Turing Red Flag Law

    Byrne (2016) introduces the history of the U.K. Locomotive Act of 1865, also known as the Red Flag Act, which was passed by Parliament to increase safety and awareness surrounding self-propelled vehicles. The Red Flag Act required that when a vehicle had several carriages attached, a pedestrian carrying a red flag walk at least 60 yards in front of the train. The author explains that the Red Flag Act illustrates the idea that there should be a warning when a large, dangerous machine is approaching. The same concept can be applied to artificial intelligence and computer interactions, as opposed to human-to-human interactions: a machine would be clearly labeled as a machine so as not to mislead the human user. This concept has been named the Turing Red Flag Law by Toby Walsh, an Australian artificial intelligence professor.

    Potential Benefits for Society

    Artificial intelligence is an increasingly powerful tool that continues to benefit society in new ways. As it grows in complexity, artificial intelligence becomes more difficult to distinguish from human computer users. A Turing Red Flag Law would create a responsibility to maintain accountability and transparency for all artificial intelligence, which would reduce uncertainty for computer users and might even positively influence the evolution of artificial intelligence itself, further benefiting society as a whole.

    Enforcing transparency through the computer architecture itself mitigates the potential harms and risks of indistinguishability between human and artificial intelligence users. If artificial intelligence continues to be used in a growing number of applications, users will likely trust its recommendations more than an anonymous human user's recommendations because of the regulation enforced within its architecture. Flagging artificial intelligence could also improve users' security and transparency by making it more difficult for a human threat actor to intercept their interactions. With additional development, the human-to-AI authentication methods available to an artificial intelligence model could easily surpass those available to most human users.

    Threat actors will inevitably continue to use artificial intelligence as a tool for malice, and a Turing Red Flag Law would bring additional transparency to an artificial intelligence's harmful actions so that they could be more easily detected and stopped. Artificial intelligence systems have a history of exhibiting bias, and with a Turing Red Flag Law in effect that behavior could be more easily identified as bias by a human user.

    Potential Consequences for Society

    One of the technology industry's primary concerns, should a Turing Red Flag Law take effect, is that the continued development of artificial intelligence would be stifled or stunted by the additional limitations placed on the software. Such a limitation would likely not be adopted globally, opening the potential for advanced rogue artificial intelligence use in non-agreeing organizations. Technology creators might also fear legal repercussions when developing their artificial intelligence systems to their full potential. It is vital that a Turing Red Flag Law not oppose the creativity and freedom of experimentation necessary for the continuous development of these important technologies.

    Byrne quotes Toby Walsh in his discussion of artificial intelligence in self-driving cars and how the Turing Red Flag Law would apply in that specific example. Walsh states that human drivers make far more mistakes on the road than artificial intelligence and are a more dangerous threat than a self-driving system. There would therefore be a benefit to distinguishing between human drivers and artificial intelligence systems on the road, because the human driver could be recognized as less predictable, more dangerous, and requiring more attention. At the same time, artificial intelligence drivers could be expected to drive predictably, follow traffic rules appropriately, and fare better in low-visibility conditions. This example adds to the discussion about the value inherent in knowing whether a user is human or computer.

    Reflecting on the Author’s Proposal

    The development of artificial intelligence systems requires balancing regulation against rapid development so that the technology continues to progress appropriately. The added transparency and accountability of a Turing Red Flag Law could be powerful benefits of adoption, while the slowing of development and the attention regulation demands present considerable challenges to implementation. Regardless of whether a Turing Red Flag Law is adopted in its current conceptualization, society must develop a way for artificial intelligence algorithms to provide accountability and transparency, as well as detection of and intervention against artificial intelligence's potentially harmful behavior.

    Armed with artificial intelligence and machine learning tools, threat actors will make many attempts to disguise their artificial intelligence systems in hopes of masquerading as human users; I think it is extremely likely that tools to detect artificial intelligence algorithms for authentication will be brought to use. I think that defensive artificial intelligence models will quickly become more adept at detecting other artificial intelligence algorithms and labeling them as such, with more proficiency than is possible for humans. Given the evolutionary nature of cybersecurity and artificial intelligence tools, the tools used to detect artificial intelligence will likely be an AI trained on data models of all known AIs. This technology would function similarly to anti-virus software in that it takes signatures of an algorithm and matches them against a database of known AI algorithm signatures.
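    A minimal sketch of how such signature matching might work, assuming a hypothetical fingerprint database and a simple content hash as the signature (a real detector would fingerprint statistical properties of a model's output rather than exact bytes):

```python
import hashlib

# Hypothetical database mapping fingerprints to known AI models,
# analogous to an anti-virus signature database.
KNOWN_AI_SIGNATURES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08": "example-model-a",
}

def fingerprint(sample: bytes) -> str:
    """Reduce an output sample to a fixed-size signature."""
    return hashlib.sha256(sample).hexdigest()

def classify(sample: bytes) -> str:
    """Label a sample as a known AI model or 'unknown'."""
    return KNOWN_AI_SIGNATURES.get(fingerprint(sample), "unknown")

print(classify(b"test"))   # matches the database entry above
print(classify(b"other"))  # prints "unknown"
```

    As with anti-virus tools, the weakness of this approach is that a slightly altered algorithm produces a new signature, so the database must evolve as quickly as the systems it tracks.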

    Over-enforcement of a law like a Turing Red Flag Law would likely disadvantage small businesses and individuals, preventing them from developing artificial intelligence systems as rapidly as large corporations because of the regulatory attention that must be paid and the severe penalties for potential artificial intelligence mistakes. These regulations would lead to longer testing cycles and potentially greater ethical consideration when working with artificial intelligence, which I think is a positive and necessary change.

    AI models are trained on data created by humans, and a system inherits all the bias, prejudice, and hypocrisy portrayed in its training data. Attention to developing artificial intelligence regulation that corrects the inherent problems in training data will hopefully cause us, as a society, to address those same issues as they exist in people. I hope that, as we make societal decisions surrounding these issues, we are brought into a global conversation that considers all aspects of them, so that we can begin to shape the architecture of the next generation of artificial intelligence tools. Without transparency, accountability, and due diligence given to ethics, the benefits of artificial intelligence tools will not optimally serve the common goals of humanity. I do not look forward to a future with many different personalities of artificial intelligence developed with varying intent. The potential of artificial intelligence as a unifying technology for global communication and understanding is unprecedented, similar in significance and effect to the internet and the telephone.

    References

    Byrne, M. (2016). AI Professor Proposes ‘Turing Red Flag Law’. Vice Media. http://motherboard.vice.com/read/ai-professor-proposes-turing-red-flag-law

  • Ethics of Facial Recognition Technology

    Government and Business Applications of Facial Recognition

    The federal government commonly uses facial recognition software at federal points of interest such as border crossings and within its own records for FBI investigations. Policies restrict the FBI's ability to use photographs as positive identification within a case; however, the organization is free to use photographs as clues to possible identifying evidence. The FBI maintains a large repository of photographic data called the Interstate Photo System (IPS) and acts on that data with the Next Generation Identification (NGI) system, which corroborates individuals' fingerprint data with face photos and other identifying information for FBI access. According to Brandom (2021), at the state level a few states such as Massachusetts and Maine have passed legislation intended to limit state governments' ability to use facial recognition software.

    ACLU and Criticism of the Technology

    The American Civil Liberties Union (ACLU) has criticized the privacy issues surrounding facial recognition software and has stated that it is handling the issue with seriousness and concern. Balli (2021) states that Meta, the company behind Facebook, decided to stop using facial recognition software on its platform in response to the declared privacy concerns of its users and of government. Brandom (2021) raises concerns about an innate racial bias within facial recognition technology, which adds another significant issue to the topic.

    Facial Recognition and Individual Privacy

    Facial recognition technology stands in direct opposition to individual privacy when installed in public spaces. The technology could be implemented as a biometric authentication method for retail purchases or entrance keys, or even used for criminal penalties. According to PrivacyRights.org (2011), prevalent use of facial recognition software in public spaces would lead to a complete loss of individual anonymity while improving the commercial ability to target demographics in marketing and sales. I believe that our individual anonymity is a large part of what makes us feel comfortable and safe in public environments, and that with broad application of facial recognition software we would lose that sense of comfortable anonymity in society's public spaces.

    Private Businesses’ Potential Abuse of the Technology

    The capabilities of facial recognition technology are not yet common knowledge, and the ethical considerations of its possible applications are still being weighed by the technology community. Facial recognition is often used in military applications for targeting individuals, but the same basic technology is also available for commercial use by private businesses. The scalability of facial recognition software can easily create an over-arching, controlling architecture that carries a high risk of important personal data being leaked in the event of a breach.

    Advocacy for Facial Recognition

    There is a market understanding that many customers will push back against a privacy over-reach if it becomes too concerning. However, the commercial world will use any tool at its disposal if it aids in reaching more customers, making more sales, or increasing profit margins. Facial recognition allows businesses to give customers personalized experiences as they enter a store, or to target them with coupons after they are recognized at a corresponding location. The ethics of facial recognition technology will polarize technology leaders: some will embrace the technology, while others will choose not to use it, citing ethical concerns.

    IT Manager’s Perspective

    As an IT manager, I would not encourage the use of facial recognition software in commercial applications. On the government level, I think that facial recognition software has valuable and viable applications in crime prevention and international security. The policies I would support as an IT manager would prioritize the individual privacy rights of my organization’s customers over leveraging technology with possible ethical ramifications. As a business, I would not want the legal responsibility of housing and protecting sensitive personal data from possible data breaches, even with the use of a third-party partner. I stand for proper authentication methods and a minimalist, need-to-know style of personal data sharing.

    References

    Balli, E. (2021). The ethical implications of facial recognition technology. Arizona State University. https://news.asu.edu/20211117-solutions-ethical-implications-facial-recognition-technology

    Brandom, R. (2021). Most US government agencies are using facial recognition. The Verge. https://www.theverge.com/2021/8/25/22641216/facial-recognition-gao-report-agency-dhs-cbp-fbi

    Greco, K. (2019). Facial Recognition Technology: Ensuring Transparency in Government Use. FBI.gov. https://www.fbi.gov/news/testimony/facial-recognition-technology-ensuring-transparency-in-government-use

    PrivacyRights.org. (2011). Facial Recognition is a Threat to Your Privacy. https://privacyrights.org/resources/facial-recognition-threat-your-privacy

  • Quantum Computing & Cybersecurity

    What is quantum computing?

    Quantum computing represents the third era of computing hardware, emerging after analog and digital computers, and applies the laws of quantum mechanics to computer science. Instead of using a digital bit to store a single binary state, a quantum computer uses a quantum bit (qubit), whose subatomic particle can hold a superposition of both binary states at once. Quantum computers exploit quantum mechanical phenomena such as entanglement, using the probability of entangled particles being in a certain state at a specific moment in time to quickly solve complex problems that have many possible solutions (Smith, 2021).
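    The probabilistic behavior of a measured qubit can be illustrated with a short classical simulation. The amplitudes below are illustrative values chosen for the example, not data from real hardware; the key property is that the squared magnitudes of the amplitudes give the measurement probabilities and must sum to 1:

```python
import random

# A single qubit in superposition: amplitudes for |0> and |1>.
# Measurement probabilities are the squared magnitudes: 0.36 and 0.64.
alpha, beta = 0.6, 0.8  # 0.6**2 + 0.8**2 == 1.0

def measure() -> int:
    """Collapse the superposition: return 0 or 1 with the qubit's probabilities."""
    return 0 if random.random() < alpha ** 2 else 1

random.seed(42)  # fixed seed so the simulation is repeatable
counts = {0: 0, 1: 0}
for _ in range(10_000):
    counts[measure()] += 1
print(counts)  # roughly 36% zeros and 64% ones
```

    A real qubit holds both outcomes at once until measured; this simulation only reproduces the statistics of measurement, which is where the computational power of superposition ultimately surfaces.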

    Does quantum computing present a cybersecurity threat? If yes, why? If no, why not?

    The capabilities of a fully developed quantum computer would theoretically pose a massive cybersecurity threat to our current infrastructure. The quantum mechanical properties of the sub-atomic particles within a quantum computer allow many possible solutions to a problem to be considered simultaneously, which leads to solving some types of complex problems much faster than is possible with classical computers. One of the most discussed ramifications of a fully functional quantum computer is the ability to quickly determine the two prime factors of large numbers, because recovering those factors would crack the public key encryption systems currently used across the world wide web (Denning, 2019). Once a quantum computer can reliably surpass the performance of classical supercomputers, current methods of encryption will begin to prove obsolete against it. Essentially, all current encryption algorithms can be broken by a computer given enough time, but keys that take classical computers years to crack could potentially be solved by quantum computers in a fraction of the time. Researchers are currently working to create new algorithms and forms of cryptography that can resist the potential attacks of quantum computers, as well as new forms of key exchange based on quantum hardware.
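    Why factoring breaks public key encryption can be seen in miniature with the classic textbook RSA modulus n = 3233. Trial division stands in for the quantum factoring algorithm here and is only feasible because the number is tiny; for real key sizes this search is intractable classically, which is exactly the assumption a quantum computer would undermine:

```python
from math import isqrt

# Toy RSA: the public key is (n, e); the security rests entirely on
# the difficulty of recovering the two secret primes from n.
n, e = 3233, 17  # textbook example: n = 53 * 61

def factor(n: int) -> tuple[int, int]:
    """Recover the two prime factors of n by trial division."""
    for p in range(2, isqrt(n) + 1):
        if n % p == 0:
            return p, n // p
    raise ValueError("no factors found")

p, q = factor(n)
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)  # the private exponent follows immediately from p and q
print(p, q, d)       # 53 61 2753

# With d in hand, any ciphertext decrypts: the scheme is fully broken.
message = 65
ciphertext = pow(message, e, n)
assert pow(ciphertext, d, n) == message
```

    Everything after the call to `factor` is simple arithmetic, which is why the entire security of RSA-style systems reduces to the hardness of that one step.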

    What role would quantum computing have on cryptography?

    The role quantum computing plays in cryptography involves its ability to consider the many possible solutions to a problem in parallel instead of one at a time (Evans, 2019). In a brute-force attack, considering all possible solutions simultaneously would theoretically provide a solution exponentially faster. These game-changing effects of quantum computers on offensive cybersecurity create a pre-emptive need for quantum-resistant encryption algorithms to combat the inevitable emergence of quantum-powered brute-force attacks in the coming quantum era of computing.

    One defensive solution that provides some peace of mind against quantum attacks is to simply use longer keys (Denning, 2019). Denning writes in American Scientist that a 128-bit key has the same protection against a classical computing attack as a 256-bit key has against a quantum computing attack utilizing Grover’s algorithm.
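    Denning's comparison follows from the fact that Grover's algorithm searches a space of 2^k keys in roughly 2^(k/2) steps, effectively halving a symmetric key's bit strength. A quick back-of-the-envelope calculation makes the doubling advice concrete:

```python
def effective_bits_under_grover(key_bits: int) -> int:
    """Grover's algorithm searches 2**k keys in about 2**(k/2) steps,
    so a k-bit symmetric key offers roughly k/2 bits of quantum security."""
    return key_bits // 2

for k in (128, 256):
    print(f"{k}-bit key -> ~{effective_bits_under_grover(k)} bits "
          f"against a Grover-equipped attacker")
# A 256-bit key retains ~128 bits of security, matching Denning's comparison.
```

    This rough halving applies to symmetric ciphers and brute-force key search; public key systems based on factoring face Shor's algorithm, where the speedup is far more severe and longer keys alone do not help.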

    What country is winning the quantum computing arms race?

    According to Smith (2021), the United States and China are headlining a race to fully develop the capabilities of quantum computing and become the first nation able to bypass information security as we know it. Each of these superpowers is supported by several companies pushing the leading edge of quantum computing by developing a variety of quantum computing solutions and hardware. China has already achieved major milestones, such as the first cloud-native quantum computing platform, obtaining a solution in a tiny fraction of the time the world's fastest supercomputer would require, and combining quantum computing with artificial intelligence. The key to winning the quantum computing arms race likely resides in the degree of collaboration and funding between government organizations and private companies. Regardless of which nation wins, there is an expectation that developing nations will be allowed to access the power of quantum computing through a cloud service, providing a global benefit.

    What national security implications would quantum computing present to the US if China beats them?

    If China can beat the United States in the race to quantum supremacy, all US intellectual property as well as possibly some classified government level data could potentially be quickly compromised and leveraged toward the disadvantage of the United States’ government, businesses, and citizens (Schappert, 2023). The winner of the quantum computer race would also have the earliest access to further applications of quantum computing such as developments in medicine, physics, artificial intelligence, and machine learning.

    References

    Denning, D. (2019). Is Quantum Computing a Cybersecurity Threat? American Scientist. https://www.americanscientist.org/article/is-quantum-computing-a-cybersecurity-threat

    Evans, A. (2019). Managing Cyber Risk. Taylor & Francis. https://online.vitalsource.com/books/9780429614262

    Schappert, S. (2023). Quantum computing race explained: fast and furious. Cybernews. https://cybernews.com/editorial/quantum-computing-race-explained/

    Smith, C. (2021). Competing Visions Underpin China’s Quantum Computer Race: Alibaba builds their own qubits, Baidu remains quantum hardware-agnostic. IEEE Spectrum. https://spectrum.ieee.org/alibaba-baidu-quantum-computer-race