Blog

  • OSI in a Nutshell (Open Systems Interconnection)

    Open Systems Interconnection (OSI) is a reference model for understanding and developing compatible networking and communication practices. Proposed by the International Organization for Standardization (ISO) in 1984 as a common framework for networking technology, the OSI model illustrates the relationships between the components of a network and provides a framework in which each layer serves a distinct purpose and operates independently of the others. Froehlich (2021) argues that the OSI model is “theoretical in nature and should only be used as a general guide,” but it serves an important purpose in providing a clear framework for understanding how a telecommunication system should work. The distinct terminology and formats of each OSI layer help delineate the specific roles and tasks that take place throughout a network and aid in its planning, design, and troubleshooting.

    The OSI model uses conceptual abstraction to separate a network into compartmentalized slices, based on function, that can be evaluated one piece at a time (Froehlich, 2021). The seven layers of the OSI model work in a cascading hierarchy, each layer serving the layer above it and being served by the layer below it. The highest layer is the application layer, followed by the presentation, session, transport, network, data-link, and finally physical layers. The OSI model benefits from its adaptability, security, and flexibility, and has become a standard model in networking.
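    The cascading hierarchy described above can be sketched as a toy encapsulation pipeline. The layer names come from the model itself, but the bracketed "headers" and the function names below are illustrative assumptions, not a real protocol stack:

```python
# Toy illustration of OSI-style encapsulation: each layer wraps the
# data handed down from the layer above it, so the application header
# ends up innermost and the physical header outermost.

OSI_LAYERS = [
    "application",
    "presentation",
    "session",
    "transport",
    "network",
    "data-link",
    "physical",
]

def encapsulate(payload: str) -> str:
    """Wrap the payload in one hypothetical header per layer,
    from the top (application) down to the bottom (physical)."""
    message = payload
    for layer in OSI_LAYERS:
        message = f"[{layer}]{message}"
    return message

def decapsulate(message: str) -> str:
    """Strip the headers back off in reverse order (bottom to top),
    as a receiving host conceptually would."""
    for layer in reversed(OSI_LAYERS):
        prefix = f"[{layer}]"
        if not message.startswith(prefix):
            raise ValueError(f"expected {layer} header")
        message = message[len(prefix):]
    return message
```

    Running `decapsulate(encapsulate("hello"))` returns the original payload, mirroring how each layer on the receiving side serves the layer above it.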

    The OSI model also has disadvantages: the layers cannot work in parallel, since each must wait for the adjacent layer to pass data to it; the session and presentation layers are not as useful as the others; and, as a theoretical model, it faces physical restrictions on its practical implementation (Froehlich, 2021).

    References

    Froehlich, A. (2021). OSI model (Open Systems Interconnection). TechTarget. https://www.techtarget.com/searchnetworking/definition/OSI

  • Opinion: Turing Red Flag Law

    Byrne (2016) introduces the concept and history of the Locomotive Act of 1865, also known as the Red Flag Act, which was passed by the U.K. parliament to increase safety and awareness surrounding self-propelled vehicles. The Red Flag Act states that when a vehicle has several carriages attached, a pedestrian carrying a red flag must lead the vehicle from at least 60 yards in front of the train. The author explains that the Red Flag Act illustrates the idea that there should be a warning when a large, dangerous machine is approaching. The same concept can be applied to artificial intelligence and computer interactions as opposed to human-to-human interactions: a machine would be clearly labeled a machine so as not to mislead the human user. This concept has been called the Turing Red Flag Law by Toby Walsh, an Australian artificial intelligence professor.

    Potential Benefits for Society

    Artificial intelligence is an increasingly powerful tool that continues to benefit society in new ways. As it grows in complexity, artificial intelligence becomes more difficult to distinguish from human computer users. A Turing Red Flag Law would create a responsibility to maintain accountability and transparency for all artificial intelligence, which would reduce uncertainty among computer users and might even positively influence the evolution of artificial intelligence itself, further benefiting society as a whole.

    Enforcing transparency through the computer architecture itself mitigates the potential harms and risks of indistinguishability between human and artificial intelligence users. If artificial intelligence continues to be used in a growing number of applications, its recommendations will likely come to be trusted more than an anonymous human user’s recommendations, because of the regulation enforced within its architecture. Flagging artificial intelligence could also benefit the user’s security and transparency by making it more difficult for a human threat actor to intercept the user’s interactions. With additional development, the human-to-AI authentication methods available to an artificial intelligence model could easily surpass those available to most human users.

    Threat actors will inevitably continue to utilize artificial intelligence as a tool for malice, and a Turing Red Flag Law would bring additional transparency to the harmful actions of an artificial intelligence so that they could be more easily detected and stopped. Artificial intelligence systems have a history of exhibiting bias, and with a Turing Red Flag Law in effect this behavior could be more easily identified as bias by a human user.

    Potential Consequences for Society

    One of the technology industry’s primary concerns, should a Turing Red Flag Law take effect, is that the continued development of artificial intelligence would be stifled or stunted by the additional limitations placed on the software. Such a limitation would likely not be adopted globally, raising the potential for advanced rogue artificial intelligence use in non-agreeing organizations. Technology creators might fear legal repercussions for developing their artificial intelligence systems to their full potential. It is vital that a Turing Red Flag Law not stand in opposition to the creativity and freedom of experimentation necessary for the continuous development of these important technologies.

    Byrne quotes Toby Walsh in a discussion of the use of artificial intelligence in self-driving cars and how the Turing Red Flag Law would apply in that specific example. Walsh states that human drivers often make far more mistakes on the road than artificial intelligence and are a more dangerous threat than a self-driving system. There would therefore be a potential benefit in being able to distinguish between human drivers and artificial intelligence systems on the road, because the human driver could be recognized as less predictable, more dangerous, and requiring more attention. At the same time, artificial intelligence drivers could be expected to drive predictably, follow traffic rules appropriately, and fare better in low-visibility conditions. This adds to the discussion of the value inherent in knowing whether a user is human or computer.

    Reflecting on the Author’s Proposal

    The development of artificial intelligence systems requires balancing regulation against rapid development so that the technology continues to progress appropriately. The added transparency and accountability from a Turing Red Flag Law could be powerful benefits of adoption, while slowed development and the attention demanded by regulation present considerable challenges to implementation. Regardless of whether a Turing Red Flag Law is adopted in its current conceptualization, society must develop a way for artificial intelligence algorithms to provide accountability and transparency, as well as detection of and intervention in artificial intelligence’s potentially harmful behavior.

    Armed with artificial intelligence and machine learning tools, threat actors will make many attempts to disguise their artificial intelligence systems in hopes of masquerading as human users; I think it is extremely likely that tools to detect artificial intelligence algorithms for authentication will be brought into use. I think that defensive artificial intelligence models will quickly become more adept at detecting other artificial intelligence algorithms, labeling them as artificial intelligence with more proficiency than is possible for humans. Given the evolutionary nature of cyber security and artificial intelligence tools, the tools used to detect artificial intelligence will likely be an AI trained on data models of all known AIs. This technology would function similarly to anti-virus software in that it takes signatures of the algorithm and matches them against a collection of signatures in a database of known AI algorithms.
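    The anti-virus analogy above can be sketched in a few lines. This is a toy illustration under loose assumptions: the class and method names are invented, the "signatures" are simple SHA-256 digests of captured samples, and a real AI-detection tool would rely on statistical fingerprints rather than exact hash matches:

```python
# Minimal sketch of signature-based detection, analogous to how
# anti-virus software matches samples against a database of known
# signatures. All names and sample data here are hypothetical.
import hashlib

def fingerprint(sample: bytes) -> str:
    """Reduce a captured sample to a fixed-length signature."""
    return hashlib.sha256(sample).hexdigest()

class SignatureDatabase:
    """Maps known signatures to a label identifying the AI system."""

    def __init__(self) -> None:
        self._known = {}  # signature (str) -> label (str)

    def register(self, label: str, sample: bytes) -> None:
        """Record a sample as belonging to a known AI system."""
        self._known[fingerprint(sample)] = label

    def match(self, sample: bytes):
        """Return the label of a known AI signature, or None."""
        return self._known.get(fingerprint(sample))
```

    A caller would register samples attributed to known systems and then check new traffic against the database; an exact-hash scheme like this is trivially evaded, which is exactly why the text above anticipates detection models that are themselves AI.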

    Over-enforcement of a law like the Turing Red Flag Law would likely disadvantage small businesses and individuals, preventing them from developing artificial intelligence systems as rapidly as large corporations because of the regulatory attention that must be paid and the severe penalties for potential artificial intelligence mistakes. These regulations would lead to longer testing cycles and potentially greater ethical scrutiny when working with artificial intelligence, which I think is a positive and necessary change.

    AI models are trained on data created by humans, and the system inherits all the bias, prejudice, and hypocrisy present in that training data. Attention to developing artificial intelligence regulation that corrects the inherent problems in the training data will hopefully push society to address those same issues as they exist in people. I hope that, as we make societal decisions surrounding these issues, we are brought into a global conversation covering all of their aspects so that we can begin to shape the architecture of the next generation of artificial intelligence tools. Without transparency, accountability, and due diligence given to ethics, the benefits of artificial intelligence tools will not optimally serve the common goals of humanity. I do not look forward to a future with many different personalities of artificial intelligence developed with varying intent. The potential of artificial intelligence as a unifying element and a technology for global communication and understanding is unprecedented; its significance could grow exponentially, with an effect similar to that of the internet and the telephone.

    References

    Byrne, M. (2016). AI Professor Proposes ‘Turing Red Flag Law’. Vice Media. http://motherboard.vice.com/read/ai-professor-proposes-turing-red-flag-law

  • RIAA Lawsuits (Recording Industry Association of America)

    Upwards of 30,000 individuals have been targeted with lawsuits by the Recording Industry Association of America (RIAA) since its campaign against peer-to-peer file sharing began in 2003 (Electronic Frontier Foundation, 2008). Despite the years of effort the RIAA put into combatting peer-to-peer file sharing, its campaign only served to increase the popularity of peer-to-peer file sharing and the total number of users. Because such a small group of individuals was targeted with lawsuits, it is speculated that the RIAA was attempting to make an example of a few in order to coerce the many into voluntary compliance.

    Supporting Data from Big Champagne

    Over the period in which the RIAA was targeting people with lawsuits (between September 2003 and June 2005), a data analytics organization called Big Champagne reported that the number of active users in peer-to-peer sharing had doubled. American users comprised the majority of all peer-to-peer file sharers, and the number of users continued to increase at a rate of 20% per year after 2005. Given the correlation in time between the increase in usage and the notoriety of the lawsuits, I speculate that the rise in peer-to-peer file sharing usage was directly due to the publicity of the lawsuits.

    2007–2008

    Throughout the two years following the lawsuits, the popularity of peer-to-peer file sharing services and networks continued to rise (Electronic Frontier Foundation, 2008). Teenage users became a new majority of peer-to-peer users because they could transfer the downloaded songs onto trending media players like MP3 players and iPods. Though downloading to digital media devices was a very popular method for file sharers of the time, the number of users was dwarfed by the number of people simply making copies of compact discs. The New York Times reported the results of a poll from that period stating that 69% of teens assumed it was completely legal to make copies of compact discs and distribute them to friends. The RIAA admitted that the copying of compact discs was such a problem that it accounted for 37% of all file sharing. Other, smaller avenues of file sharing have revealed themselves over time, such as file transfer over messaging applications or invite-only networks.

    Enforcement and Public Education

    It is interesting to ask whether the RIAA intended the lawsuits against individuals to draw attention to a policy that the public would otherwise overlook due to a lack of interest and education. A lawyer for the RIAA, Cary Sherman, reportedly said, “Enforcement is a tough love form of education,” which leads me to believe that the RIAA was using the lawsuits to spread awareness of its intentions (Electronic Frontier Foundation, 2008). The RIAA explained some of its intentions when it stated that it was unable to pursue the peer-to-peer file sharing networks themselves, because they had been protected by a previous court ruling, and thus decided to bring charges against individual users of the services. Although I can understand why the organization ultimately took the actions that it did, I think it could have done a better job of creating awareness around the issue before the lawsuits, and perhaps developed alternative methods of holding all users in violation of its policies accountable. A survey taken years after the lawsuits began indicated that awareness of the possible consequences of illegal file sharing had increased. However, the rising popularity of peer-to-peer file sharing did not seem to wane after this rise in awareness of its illegality. Another survey suggested that most young computer users were more worried about receiving a computer virus when downloading than about receiving an RIAA lawsuit. After 2006, the RIAA stopped publicizing its enforcement attempts and, as a result, public awareness decreased.

    The Electronic Frontier Foundation (2008) reported that legal mainstream services experienced increasing popularity in the wake of the RIAA lawsuits. The largest U.S. retailer of music was stated to be Apple’s iTunes, which comprises about 30% of the market revenue in the nation. Other approaches to legitimate file sharing have been tested in the market, such as DRM-free works, but none of these methods have slowed the use of peer-to-peer file sharing networks, which remain the most likely way listeners obtain music.

    Domestic Leeching and Offshore Uploading

    On a peer-to-peer file sharing network, the act of downloading files without sharing them is known as leeching. Because the RIAA chose to target only people who were uploading files to the file sharing networks, there was an assumption among users that leechers would not be targeted, which made this downloading strategy popular. If the user sharing the files is uploading from outside the country, enforcement also becomes more difficult for the RIAA, which operates nationally. New solutions have since been created, such as software that further obfuscates the identity of a user and private social network groups used to continue sharing files with little fear of RIAA enforcement.

    The Perspective of the Electronic Frontier Foundation

    According to the authors at the Electronic Frontier Foundation (2008), the RIAA’s continued efforts to end the prevalent use of peer-to-peer file sharing applications and networks among listeners in the United States have utterly failed. Technological creativity and advancement will continue to pressure the music industry in new and exciting ways, and the music industry should develop channels that appeal to its customer base and the technologies they like to use. The EFF describes the normalcy of copyright disputes between copyright holders and the makers of popular new technological platforms, and the level of adaptation required to maintain high levels of interaction with listeners. They advocate for a music service that has a free tier for listening but also allows users to pay for supplementary features. The prospective service could use engagement analytics to determine the amount of revenue distributed to the owners of the content.

    Conclusion

    The Electronic Frontier Foundation (2008) reported only increases in peer-to-peer file sharing usage throughout the entire RIAA campaign against it. The continuous rise in the popularity of file sharing services, both during the period of the RIAA lawsuits and since, has proven to the market that peer-to-peer file sharing is a valuable technology that the music industry must appropriately utilize rather than fight. My own conclusion is that the RIAA did nothing substantial to reduce the revenue lost to peer-to-peer file sharing, because its own actions increased the popularity of the undesired activity. I think the most helpful campaign the RIAA could prospectively undertake is to co-develop a music sharing platform with its user base that conversationally addresses the needs of the listeners, the artists, and the regulating bodies, based on concrete feedback and continuous revision.

    References

    Electronic Frontier Foundation. (2008). RIAA v. The People: Five years later. https://www.eff.org/wp/riaa-v-people-five-years-later#7