

Article

Communication privacy management (CPM) theory argues that disclosure is the process by which we give or receive private information. Private information is what people reveal. Generally, CPM theory argues that individuals believe they own their private information and have the right to control it. Management of private information is not necessary until others are involved. CPM does not limit an understanding of disclosure by framing it as only about the self. Instead, CPM theory points out that when management is needed, others are given co-ownership status, thereby expanding the notion of disclosing information; the theory uses the metaphor of a privacy boundary to illustrate where private information is located and how the boundary expands to accommodate multiple owners of private information. Thus, individuals can disclose not only their own information but also information that belongs to others or is owned by collectives such as families. Making decisions to disclose or protect private information often creates a tension in which individuals vacillate between sharing and concealing it. Within the purview of health issues, these decisions have the potential to increase or decrease risk. The choice of disclosing health matters to a friend, for example, can garner social support for coping with health problems. At the same time, the individual may have concerns that his or her friend might tell someone else about the health problem, thus causing more difficulties. Understanding the owner’s tension between disclosing and protecting private health information is only one side of the coin. Because disclosure creates authorized co-owners, these co-owners (e.g., families, friends, and partners) often feel they have a right to know about the owner’s health conditions. Individuals thus have personal privacy boundaries around health information that expand to include others, referred to as “authorized co-owners.” Once others are given this status, withholding some part of the private information in order to protect it can risk relationships and interfere with health needs. Within the scheme of health, disclosure risks and privacy predicaments are not experienced exclusively by the individual with an illness. Rather, these risks prevail for a number of individuals connected to a patient, such as providers, the patient’s family, and supportive friends. Everyone involved has a dual role. For example, a clinician both co-owns a patient’s private health information and holds information within his or her own privacy boundary, such as worry over whether he or she diagnosed the symptoms correctly. Thus, there are a number of circumstances that can lead to health risks where privacy management and decisions to reveal or conceal health information are concerned. CPM theory has been applied in eleven countries and in numerous contexts where privacy management occurs, such as health, families, organizations, interpersonal relationships, and social media. The theory is unique in offering a comprehensive way to understand the relationship between the notion of disclosure and that of privacy. The landscape of health-related risks where privacy management plays a significant role is both large and complex. The situations of HIV/AIDS, cancer care, and managing patient and provider disclosure of private information help to elucidate the ways decisions about privacy potentially lead to health risks.

Article

Sandra Petronio and Rachael Hernandez

Have you ever wondered why a complete stranger sitting next to you on a plane would tell you about a recent cancer diagnosis? Or why your parents never disclosed that you were adopted, leaving you shocked when you accidentally found out as an adult? These and many other actions reflect decisions individuals make about managing their private information. Being aware of how individuals navigate decisions to disclose or protect their private information provides useful insights that aid in the development and sustainability of relationships with others. Given that privacy plays an integral role in everyone’s life, knowing more about privacy management is critical. Communication privacy management (CPM) theory was first introduced by Sandra Petronio in 2002. CPM is evidence-based and accordingly provides a dependable understanding of how decisions are made to disclose and protect private information. The theory uses plain language to explain privacy management in everyday life. CPM focuses on the relationships people have with each other in communicative contexts, such as face-to-face interactions, on social media, and in dyads or groups. CPM theory is based on a communicative-social behavioral perspective and not necessarily a legal point of view. CPM theory illustrates that privacy is not paradoxical but is sustainable through a privacy management system used in everyday life. CPM theory has been employed in a number of contexts, shedding light on antecedents, mechanisms, and outcomes of private information management. In addition, researchers across multiple countries, such as the Netherlands, the United Kingdom, Japan, Kenya, South Korea, and the United States, have used CPM theory in their investigations. Learning more about the system of private information management allows for a better understanding of how people navigate managing their private information when others are involved. The literature illustrates patterns of privacy management and demonstrates the challenges, as well as the positive outcomes, of the ways individuals regulate their private information.

Article

Bill D. Herman

The volume of information on the Internet is incomprehensibly vast and growing exponentially. With such an ocean of information available, search engines have become an indispensable tool for virtually all users. Yet much of what is available online is potentially objectionable, controversial, or harmful. This leaves search engines in a potentially precarious position, simultaneously wanting to maximize the usefulness of results for end users while also minimizing political, regulatory, civil, and even criminal difficulties in the jurisdictions where they operate. Conversely, the substantial logistical and legal obstacles to regulating Internet content also leave policymakers in an unenviable position, and content that the public or policymakers may well want regulated—even that which is patently illegal—can remain virtually impossible to stamp out. The policies that may affect online search are incredibly varied, including contract law, laws that affect expression and media producers more generally, copyright, fraud, privacy, and antitrust. For the most part, the applicable law was developed in offline contexts and will continue to apply to them as well. Even so, Internet search is an area filled with its own vexing policy questions. In many cases, these are questions of secondary liability—whether the search provider is liable for search results that link to websites beyond its control. In other areas, though, the behavior of search providers itself comes under specific scrutiny. While many of these questions could be or actually are asked in countries around the world, this article focuses primarily on the legal regimes in the United States and the European Union.

Article

Jenny Crowley

Self-disclosure, or revealing information about the self to others, plays an integral role in interpersonal experiences and relationships. It has captivated the interest of scholars of interpersonal communication for decades, to the extent that some have positioned self-disclosure as the elixir of social life. Sharing personal information is the means by which relationships are built and maintained, because effective disclosures contribute to greater intimacy, trust, and closeness in a relationship. Self-disclosure also confers personal benefits, including reduced stress and improved physical and psychological health. Furthermore, disclosing private thoughts and feelings is often a necessary precondition for reaping the benefits of other types of communication, such as supportive communication. Despite these apparent advantages for personal and relational well-being, self-disclosure is not a panacea. Revealing intimate information can be risky and awkward and can incite judgment from close others. People make concerted efforts to avoid self-disclosure when information has the potential to cause harm to themselves, others, or their relationships. Research on self-disclosure has primarily focused on dyadic interactions; however, online technologies enable people to share personal information with a large audience and are challenging taken-for-granted understandings about the role of self-disclosure in relating. As social networking sites become indispensable tools for maintaining a large and robust personal network, people are adapting their self-disclosure practices to the features and affordances of these technologies. Taken together, this body of research helps illuminate what is at stake when communicating interpersonally.

Article

Edward L. Carter

The right to be forgotten is an emerging legal concept that allows individuals to exert control over their online identities by demanding that Internet search engines remove certain results. The right has been supported by the European Court of Justice, some judges in Argentina, and data-protection regulators in several European countries, among others. The right is primarily grounded in notions of privacy and data protection but also relates to intellectual property, reputation, and the right of publicity. Scholars and courts cite, as an intellectual if not legal root for the right to be forgotten, the legal principle that convicted criminals whose sentences have been completed should not continually be publicly linked with their crimes. Critics contend that the right to be forgotten stands in conflict with freedom of expression and can lead to revisionist history. Scholars and others in the southern cone of South America, in particular, have decried the right to be forgotten because it could allow perpetrators of mass human rights abuses to cover up or obscure their atrocities. On the other hand, those in favor of the right to be forgotten say that digital technology preserves memory unnaturally and can impede forgiveness and individual progress. The right to be forgotten debate is far from resolved and poses difficult questions about access to, and control of, large amounts of digital information across national borders. Given the global nature of the Internet and the ubiquity of certain powerful search engines, the questions at issue are universal, but solutions thus far have been piecemeal. Although a 2014 decision by the Court of Justice of the European Union (EU) garnered much attention, the right to be forgotten has been largely shaped by the 1995 European Union Directive on Data Protection. In 2016, the EU adopted a new General Data Protection Regulation that will take effect in 2018 and could have a major impact because it contains an explicit right to be forgotten (also called the right to erasure). The new regulation does not focus on the theoretical or philosophical justification for a right to be forgotten, and it appears likely that the debate over the right in the EU and beyond will not be resolved even when the new rule takes effect.

Article

Digital technologies are frequently said to have converged. This claim may be made with respect to the technologies themselves or to the restructuring of the media industry over time. Innovations that are associated with digitalization (representing analogue signals by binary digits) often emerge in ways that cross the boundaries of earlier industries. When this occurs, technologies may be configured in new ways, and the knowledge that supports the development of services and applications becomes complex. In the media industries, the convergence phenomenon has been very rapid, and empirical evidence suggests that the (de)convergence of technologies and industries also needs to be taken into account to understand change in this area. There is a very large literature that seeks to explain why convergence and (de)convergence phenomena occur. Some of this literature looks for economic and market-based explanations on the supply side of the industry, whereas other approaches explore the cultural, social, and political demand-side factors that are important in shaping innovation in the digital media sector and the often unexpected pathways that it takes. Developments in digital media are crucially important because they are becoming a cornerstone of contemporary information societies. The benefits of digital media are often heralded in terms of improved productivity, opportunities to construct multiple identities through social media, new connections between close and distant others, and a new foundation for democracy and political mobilization. The risks associated with these technologies are equally of concern, in part because the spread of digital media gives rise to major challenges. Policymakers are tasked with governing these technologies; issues of privacy protection, surveillance, and commercial security need to be addressed, as does ensuring that the skills base is appropriate to the digital media ecology. The complexity of the converged landscape makes it difficult to provide straightforward answers to policy problems. Policy responses also need to be compatible with the cultural, social, political, and economic environments in different countries and regions of the world. This means that these developments must be examined from a variety of disciplinary perspectives and need to be understood in their historical context so as to take both continuities and discontinuities in the media industry landscape into account.

Article

Michelle Miller-Day

Families shape individuals throughout their lives, and family communication is the foundation of family life and functioning. It is through communication that families are defined and members learn how to organize meanings. When individuals come together to form family relationships, they create a system that is larger and more complex than the sum of its individual members. It is within this system that families communicatively navigate cohesion and adaptability; create family images, themes, stories, rituals, rules, and roles; manage power, intimacy, and boundaries; and participate in an interactive process of meaning-making, producing mental models of family life that endure over time and across generations.

Article

Tamara Shepherd

Privacy rights are controversial in communication processes that entail varying levels of disclosure of sensitive personal information. What constitutes such personal information, and how it should be accessed and used by various actors in a particular communicative exchange, tends to depend on the situation at hand. And yet many would argue that a baseline level of privacy should be expected by individuals as part of maintaining human integrity and personal control over information disclosure. Different frameworks exist for thinking about privacy as a right, and these frameworks further suggest different mechanisms for the control of information and the protection of privacy rights in changing communication environments. For example, the main shift in communication processes from the pre-Internet era to a networked world has brought with it renewed debates over the regulation of privacy rights. How would privacy rights be invoked in the face of rapidly changing technologies for networked surveillance, biometric identification, and geolocation? Moreover, how would these rights be applied differently to distinct populations based on class, nationality, race, gender, and age? These questions form the core of what is at stake in conceptions of privacy rights in contemporary communication.

Article

Patrick Lee Plaisance

News workers—writers, editors, videographers, bloggers, photographers, designers—regularly confront questions of potential harms and conflicting values in the course of their work, and the field of journalism ethics concerns itself with standards of behavior and the quality of justifications used to defend controversial journalistic decisions. While journalism ethics, as with the philosophy of ethics in general, is less concerned with pronouncements of the “rightness” or “wrongness” of certain acts, it relies on longstanding notions of the public-service mission of journalism. However, informing the public and serving a “watchdog” function regularly require journalists to negotiate questions of privacy, autonomy, community engagement, and the potentially damaging consequences of providing information that individuals and governments would rather withhold. As news organizations continue to search for successful business models to support journalistic work, ethics questions over conflicts of interest and content transparency (e.g., native advertising) have gained prominence. Media technology platforms that have served to democratize and decentralize the dissemination of news have underscored the debate about who, or what type of content, should be subjected to journalism ethics standards. Media ethics scholars, most of whom are from Western democracies, also are struggling to articulate the features of a “global” journalism ethics framework that emphasizes broad internationalist ideals yet accommodates cultural pluralism. This is particularly challenging given that the very idea of “press freedom” remains an alien one in many countries of the world, and the notion is explicitly included in the constitutions of only a few of the world’s democratic societies. The global trend toward recognizing and promoting press freedom is clear, but it is occurring at different rates in different countries. Other work in the field explores the factors on the individual, organizational, and societal levels that help or hinder journalists seeking to ensure that their work is defined by widely accepted virtues and ethical principles.

Article

Internet freedom is a process. It takes place through a myriad of practices, such as technology development, media production, and policy work, through which various actors, existing within historical, cultural, economic, and political contexts, continuously seek to determine its meaning. Some of these practices take place within traditional Internet governance structures, yet others take place outside of them. Crypto-discourse refers to a partially fixed instance of this meaning-making process, one that mainly takes place outside of traditional Internet governance structures. It describes a process in which specific communities of crypto-advocates (groups of cryptographers, hackers, online privacy advocates, and technology journalists) attempt to define Internet freedom through community practices such as technological development and descriptive portrayals of encryption, carried out within interconnected communities that seek to develop and define encryption software, as well as through the dissemination of these developments and portrayals within and outside of those communities. The discursive work of the cypherpunks, interrelated discourse communities, and related technology journalism is at the core of crypto-discourse. Through crypto-discourse, crypto-advocates employ encryption software as an arena of negotiation. The representation of encryption software serves as a battlefield in a larger discursive struggle to define the meaning of Internet freedom. Crypto-discourse illustrates how social practices have normative implications for Internet governance debates regarding Internet freedom, and in particular for expectations that state authorities uphold online rights. The relationship between freedom and the state that these crypto-advocates articulate in response to specific events excludes other possible positive notions of Internet freedom, in which the state has an obligation to ensure the protection of online rights.

Article

Intermediary liability is at the center of the debate over free expression, free speech, and an open Internet. The underlying policies form the network regulation that governs the extent to which websites, search engines, and Internet service providers that host user content are legally responsible for what their users post or upload. Levels of intermediary liability are commonly categorized as providing broad immunity, limited liability, or strict liability. In the United States, intermediaries are given broad immunity through Section 230 of the Communications Decency Act. In practice, this means that a search engine cannot be held liable for the speech of individuals appearing in its search results, and a news site is not responsible for what people type in its comment section. Immunity is important to the existence of free expression because it ensures that intermediaries do not have incentives to censor content out of fear of the law. The millions of users continuously generating content through Facebook and YouTube, for instance, would not be able to do so if those intermediaries were fearful of legal consequences due to the actions of any given user. Privacy policy online is most evidently showcased by the European Union’s Right to be Forgotten policy, which forces search engines to delist information about an individual that is deemed harmful to his or her reputation. Hateful and harmful speech is also regulated online through intermediary liability, although social media services often decide when and how to remove this type of content based on company policy.

Article

Charles Ess

Since the early 2000s, Digital Media Ethics (DME) has emerged as a relatively stable subdomain of applied ethics. DME seeks nothing less than to address the ethical issues evoked by computing technologies and digital media more broadly, such as cameras, mobile phones and smartphones, GPS navigation systems, biometric health monitoring devices, and, eventually, “the Internet of things,” as these have developed and diffused into more or less every corner of our lives in the (so-called) developed countries. DME can be characterized as demotic—of the people—in three important ways. One, in contrast with specialist domains such as Information and Computing Ethics (ICE), it is intended as an ethics for the rest of us—namely, all of us who use digital media technologies in our everyday lives. Two, these manifold contexts of use dramatically expand the range of ethical issues computing technologies evoke, well beyond the comparatively narrow circle of issues confronting professionals working in ICE. Three, while drawing on the expertise of philosophers and applied ethicists, DME likewise relies on the ethical insights and sensibilities of additional communities, including (a) the multiple communities of those whose technical expertise comes into play in the design, development, and deployment of information and communication technology (ICT); and (b) the people and communities who use digital media in their everyday lives. DME further employs both ancient ethical philosophies, such as virtue ethics, and modern frameworks of utilitarianism and deontology, as well as feminist ethics and ethics of care; DME may also take up, for example, Confucian and Buddhist approaches, as well as norms and customs from relevant indigenous traditions where appropriate. The global distribution and interconnection of these devices mean, finally, that DME must also take on board often profound differences between basic ethical norms, practices, and related assumptions as these shift from culture to culture. What counts as “privacy” or “pornography,” to begin with, varies widely—as do the more fundamental assumptions regarding the nature of the person that we take up as a moral agent and patient, rights-holder, and so on. Of first importance here is how far we emphasize the more individual vis-à-vis the more relational dimensions of selfhood—with the further complication that these emphases appear to be changing locally and globally. Nonetheless, DME can now map out clear approaches to early concerns with privacy, copyright, and pornography that help establish a relatively stable and accepted set of ethical responses and practices. By comparison, violent content (e.g., in games) and violent behavior (cyber-bullying, hate speech) are less well resolved. Even so, as with the somewhat more recent issues of online friendship and citizen journalism, an emerging body of literature and analysis points to initial guidelines and resolutions that may become relatively stable. Such resolutions must be pluralistic, allowing for diverse application and interpretations in different cultural settings, so as to preserve and foster cultural identity and difference. Of course, still more recent issues and challenges are in the earliest stages of analysis and efforts at forging resolutions. Primary issues include “death online” (including suicide websites and online memorial sites, evoking questions of censorship, the right to be forgotten, and so on); “Big Data” issues such as pre-emptive policing and “ethical hacking” as counter-responses; and autonomous vehicles and robots, ranging from Lethal Autonomous Weapons to carebots and sexbots. Clearly, not every ethical issue will be quickly or easily resolved. But the emergence of relatively stable and widespread resolutions to the early challenges of privacy, copyright, and pornography, coupled with developing analyses and emerging resolutions vis-à-vis more recent topics, can ground cautious optimism that, in the long run, DME will be able to take up the ethical challenges of digital media in ways reasonably accessible and applicable for the rest of us.