
Corporate or Product Diversification  

Margarethe F. Wiersema and Joseph B. Beck

Corporate or product diversification represents a strategic decision. Specifically, it addresses the strategic question of which businesses the firm will compete in. A single-business company that expands its strategic scope by adding new businesses becomes a diversified, multibusiness company. A company expands its strategic scope by acquiring businesses, investing in the development of new businesses, or both. Similarly, an already diversified firm can reduce its strategic scope by divesting or closing businesses. There are two fundamentally different types of corporate diversification strategy, depending on the interrelatedness of the businesses in the company’s portfolio: related diversification and unrelated diversification. Related diversification occurs when the businesses in the company’s portfolio share strategic assets or resources, such as technology, a brand name, or distribution channels. Unrelated diversification occurs when a company’s businesses do not share strategic assets or resources and have no interrelationships of strategic importance. Companies can pursue both types of diversification simultaneously and thus hold a portfolio of both related and unrelated businesses. In addition to variations in the type of diversification, companies can vary in the extent of their diversification, ranging from business portfolios with very limited diversification to highly diversified portfolios. Decisions regarding the diversification strategy of a firm represent major strategic scope decisions, since they determine the markets and industries in which the company will compete. Companies can increase or reduce their level of diversification for a variety of reasons. Economic motives, for example, include the pursuit of economies of multiproduct scale and scope, whereby per-unit costs may be lowered through increased sales volume or other fixed-cost-reducing benefits associated with growth through diversification. 
In addition, companies may diversify for strategic reasons, such as enhancement of capabilities or superior competitive positioning through entry into new product markets. Similarly, economic and strategic reasons can motivate the firm to refocus and reduce its level of diversification when the strategic and economic rationales for being in a particular business are no longer justified. The performance consequences of corporate diversification can vary, depending on both the extent of the firm’s diversification and the type of diversification. In general, research indicates that high levels of diversification are value-destroying due to the integrative and complexity-associated costs that administering an extremely diversified portfolio imposes on management. Nevertheless, related diversification, where the company shares underlying resources across its business portfolio (e.g., brand, technology, and distribution channels), can lead to higher levels of performance than can unrelated diversification, due to the potential for enhanced profitability from leveraging shared resources. Corporate diversification was a major U.S. business trend in the 1960s. During the 1980s, however, pressure from the capital market for shareholder wealth maximization led to the adoption of strategies whereby many companies refocused their business portfolios and thus reduced their levels of corporate diversification by divesting unrelated businesses in order to concentrate on their predominant or core business.



Prioritarianism  

Nils Holtug

Prioritarianism is a principle of distributive justice. Roughly, it states that we should give priority to the worse off in the distribution of advantages. This principle has received a great deal of attention in political theory since Derek Parfit first introduced the distinction between egalitarianism and prioritarianism in his Lindley Lecture, published in 1991. In the present article, prioritarianism is defined in terms of a number of structural features of the principle. These structural features are also used to distinguish between this principle and other distributive principles such as utilitarianism, egalitarianism, and leximin. Prioritarianism is mostly discussed as an axiological principle that orders outcomes with respect to their (moral) value, but it is also clarified how it can be incorporated in a criterion of right actions, choices, or policies. Furthermore, different aspects of the principle that need to be further specified to arrive at a full-fledged distributive theory are discussed, including the weights that give priority to the worse off, currency (what kind of advantages should be distributed), temporal unit (the temporal span in which one has to be worse off in order to be entitled to priority), scope (whether the principle applies globally or only domestically, and whether, for example, future generations and non-human animals are covered by the principle), and risk. For each aspect, different possible views are distinguished and discussed. Finally, it is discussed how prioritarianism may be justified, for example, by outlining and discussing the argument that, unlike certain other distribution-sensitive principles such as egalitarianism, prioritarianism is not vulnerable to the so-called “leveling down objection.”
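The axiological version of the principle discussed above is often formalized as a concave transform of individual well-being, so that a unit of benefit counts for more the worse off its recipient is. A minimal sketch (the notation is illustrative, not drawn from the article):

```latex
V(o) \;=\; \sum_{i} f\bigl(w_i(o)\bigr), \qquad f' > 0,\; f'' < 0
```

Here $w_i(o)$ is person $i$'s well-being in outcome $o$. With $f$ linear, the principle collapses into utilitarianism; with $f$ strictly concave, shifting a fixed benefit to a worse-off person raises $V$, yet lowering someone's well-being while benefiting no one can never improve an outcome, which is the formal sense in which prioritarianism escapes the leveling down objection.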


Negation in Morphology  

Karen De Clercq

Negative markers are not a uniform category. They come in various types and, depending on their type, they take scope over a clause, a phrase, or just a word. Low scope negative markers (LSN) like de-, dis-, un-, iN-, non-, and -less are bound morphemes and have therefore been studied mainly within morphology, with a focus on the semantics of these markers (contradiction vs. contrariety), issues related to their productivity, and their combinability with certain categories. Wide scope negative markers (WSN), like not, are often free morphemes and are usually treated within syntax. Thus there is a morphology-syntax divide when it comes to the treatment of negative markers. However, there are reasons to give up this divide and to treat negative markers uniformly within one module of the grammar. First, from a typological point of view, the bound-free divide of negative markers does not correlate with their scope. For instance, agglutinative languages have WSN markers that are bound morphemes attaching to the verbal base. Second, morphological processes, like suppletion or other types of allomorphy, can be observed in markers that show properties of WSN markers. Third, independent negative particles, like the Dutch free morpheme weinig ‘little, few’, share stacking properties with LSN markers like un- and iN-. Fourth, both LSN and WSN markers are subject to the same constraint against stacking scopally identical negative markers. Fifth, syncretisms have been found across languages between WSN and LSN markers, allowing negative markers to be ordered in such a way that no ABA patterns arise, which suggests that the morphology of negative markers reflects the natural scope of negation and that there is a continuum between LSN and WSN markers.


Bare Nominals  

Bert Le Bruyn, Henriëtte de Swart, and Joost Zwarts

Bare nominals (also called “bare nouns”) are nominal structures without an overt article or other determiner. The distinction between a bare noun and a noun that is part of a larger nominal structure must be made in context: Milk is a bare nominal in I bought milk, but not in I bought the milk. Bare nouns have a limited distribution: In subject or object position, English allows bare mass nouns and bare plurals, but not bare singular count nouns (*I bought table). Bare singular count nouns only appear in special configurations, such as coordination (I bought table and chairs for £182). From a semantic perspective, it is noteworthy that bare nouns achieve reference without the support of a determiner. A full noun phrase like the cookies refers to the maximal sum of cookies in the context, because of the definite article the. English bare plurals have two main interpretations: In generic sentences they refer to the kind (Cookies are sweet), in episodic sentences they refer to some exemplars of the kind (Cookies are in the cabinet). Bare nouns typically take narrow scope with respect to other scope-bearing operators like negation. The typology of bare nouns reveals substantial variation, and bare nouns in languages other than English may have different distributions and meanings. But genericity and narrow scope are recurring features in the cross-linguistic study of bare nominals.
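The narrow-scope behavior of bare plurals under negation can be made explicit in a standard first-order paraphrase (the example sentence and notation are illustrative, not taken from the article):

```latex
\text{I didn't buy cookies:} \quad \neg\,\exists x\,[\mathit{cookies}(x) \wedge \mathit{buy}(\mathit{i}, x)]
```

The wide-scope reading $\exists x\,[\mathit{cookies}(x) \wedge \neg\,\mathit{buy}(\mathit{i}, x)]$ ("there are cookies that I didn't buy") is unavailable for the bare plural, although it is possible for an overt indefinite such as "some cookies."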


Justice Between Generations  

Tim Meijers

A wide range of issues in moral, political, and legal philosophy fall under the heading of “intergenerational justice,” such as questions of justice between the young and the old, obligations to more-or-less distant past and future generations, generational sovereignty, and the boundaries of democratic decision-making. These issues deserve our attention, first, because they are of great social importance. Solving the challenges raised by aging, stable pension funding, and increasing healthcare costs, for example, requires a view on what justice between age groups demands. Climate change, resource depletion, environmental degradation, population growth, and the like raise serious concerns about the conditions under which future people will have to live. What kind of world should we bequeath to future generations? Second, this debate has theoretical significance. Questions of intergenerational justice force reconsideration of the fundamental commitments (on scope, pattern, site, and currency) of existing moral and political theories. The age-group debate has led to fundamental questions about the pattern of distributive justice: Should we care about people’s lives, considered as wholes, being equally good? This has implausible implications. Can existing accounts be modified to avoid such problematic consequences? Justice between nonoverlapping generations raises a different set of questions. One important worry concerns the pattern of intergenerational justice: are future generations owed equality, or should intergenerational justice be cast in terms of sufficiency? Another issue is the currency of intergenerational justice: what kind of goods should be transferred? Perhaps the most puzzling worry resulting from this debate translates into a worry about scope: do obligations of justice extend to future people? 
Most conventional views on the scope of justice—those that focus on shared coercive institutions, a common culture, a cooperative scheme for mutual advantage—cannot easily be extended to include future generations. Even humanity-based views, which seem most hospitable to the inclusion of future generations, are confronted with what Parfit called the nonidentity problem, which results from the fact that future people are mostly possible people: because of the lack of a fixed identity of future people, it is often impossible to harm them in the comparative sense.


Ecological Rhetoric  

Chris Ingraham

As the problems wrought by anthropogenic global warming have become more urgent, scholars of rhetoric have turned more than ever before toward environmental topics and ecological perspectives. These interests have influenced the contemporary study of rhetoric enough that it is now possible to identify distinct yet overlapping strains of research at the nexus of ecology and rhetoric. Doing so, however, is not without ongoing contestation, including over the nature of ecological thought, expanding systems of rhetoric, environmentalisms, ecofeminisms, and critical eco-futures. Despite these challenges, rhetoric and ecology may pair so well together because each is a capacious figure of thought, capable of accommodating others. As a way of thinking about interconnectedness in particular, “ecology” has been taken up by many scholars in diverse fields and disciplines. As a result, the ways the concept is mobilized in studies of rhetoric reflect an unruly assortment of approaches to, and understandings of, ecology, the influence of which cannot be traced to any pure or universal version of the term, because, as with “rhetoric,” no such common meaning exists. Grappling with the complex convergence of both terms might help scholars to constellate a semi-stable image of what it can mean and involve to study these topics together.


Lexical Integrity in Morphology  

Ignacio Bosque

The Lexical Integrity Hypothesis (LIH) holds that words are syntactic atoms, implying that syntactic processes and principles do not have access to word segments. Interestingly, when this widespread “negative characterization” is turned into its positive version, a standard picture of the Morphology-Syntax borderline is obtained. The LIH is both a fundamental principle of Morphology and a test bench for morphological theories. As a matter of fact, the LIH is problematic for both lexicalist and anti-lexicalist frameworks, which differ radically in accepting or rejecting Morphology as a component of grammar distinct from Syntax. Lexicalist theories predict no exceptions to the LIH, contrary to fact. From anti-lexicalist theories one might expect a large set of counterexamples to this hypothesis, but the truth is that attested potential exceptions are restricted, as well as confined to very specific grammatical areas. Most of the phenomena taken to be crucial for evaluating the LIH are briefly addressed in this article: argument structure, scope, prefixes, compounds, pronouns, elliptical segments, bracketing paradoxes, and coordinated structures. It is argued that both lexicalist and anti-lexicalist positions crucially depend on the specific interpretations that their proponents are willing to attribute to the very notion of Syntax: a broad one, which basically encompasses constituent structure, binary branching, scope, and compositionality, and a narrow one, which also covers movement, recursion, deletion, coordination, and other aspects of phrase structure. The objective differences between these conceptions of Syntax are shown to be decisive in the evaluation of the LIH’s predictions.


Derivational Economy in Syntax and Semantics  

Željko Bošković and Troy Messick

Economy considerations have always played an important role in the generative theory of grammar. They are particularly prominent in the most recent instantiation of this approach, the Minimalist Program, which explores the possibility that Universal Grammar is an optimal way of satisfying requirements imposed on the language faculty by the external systems that interface with it, and that the language faculty is itself characterized by optimal, computationally efficient design. In this respect, the operations of the computational system that produce linguistic expressions must be optimal in that they must satisfy general considerations of simplicity and efficient design. Simply put, the guiding principles here are (a) do something only if you need to, and (b) if you do need to, do it in the most economical/efficient way. These considerations ban superfluous steps in derivations and superfluous symbols in representations. Under economy guidelines, movement takes place only when there is a need for it (with both syntactic and semantic considerations playing a role here), and when it does take place, it takes place in the most economical way: it is as short as possible and carries as little material as possible. Furthermore, economy is evaluated locally, on the basis of immediately available structure. The locality of syntactic dependencies is also enforced by minimal search and by limiting the number of syntactic objects and the amount of structure accessible in the derivation. This is achieved by transferring parts of syntactic structure to the interfaces during the derivation, the transferred parts no longer being accessible for further syntactic operations.


State of the Art of Contingent Valuation  

Tim Haab, Lynne Y. Lewis, and John Whitehead

The contingent valuation method (CVM) is a stated preference approach to the valuation of nonmarket goods. It has a 50+-year history beginning with a clever suggestion to simply ask people for their consumer surplus. The first study was conducted in the 1960s and over 10,000 studies have been conducted to date. The CVM is used to estimate the use and non-use values of changes in the environment. It is one of the more flexible valuation methods, having been applied in a large number of contexts and policies. The CVM requires construction of a hypothetical scenario that makes clear what will be received in exchange for payment. The scenario must be realistic and consequential. Economists prefer revealed preference methods for environmental valuation due to their reliance on actual behavior data. In unguarded moments, economists are quick to condemn stated preference methods due to their reliance on hypothetical behavior data. Stated preference methods should be seen as approaches to providing estimates of the value of certain changes in the allocation of environmental and natural resources for which no other method can be used. The CVM has a tortured history, having suffered slings and arrows from industry-funded critics following the Exxon Valdez and British Petroleum (BP)–Deepwater Horizon oil spills. The critics have harped on studies that fail certain tests of hypothetical bias and scope, among others. Nonetheless, CVM proponents have found that it produces similar value estimates to those estimated from revealed preference methods such as the travel cost and hedonic methods. The CVM has produced willingness to pay (WTP) estimates that exhibit internal validity. CVM research teams must have a range of capabilities. A CVM study involves survey design so that the elicited WTP estimates have face validity. Questionnaire development and data collection are skills that must be mastered. 
Welfare economic theory is used to guide empirical tests of theory such as the scope test. Limited dependent variable econometric methods are often used with panel data to test value models and develop estimates of WTP. The popularity of the CVM is on the wane; indeed, another name for this article could be “the rise and fall of the CVM.” This is not because the CVM is any less useful than other valuation methods, but because best practice in the CVM is merging with discrete choice experiments, and researchers seem to prefer to call their approach discrete choice experiments. Nevertheless, the problems that plague discrete choice experiments are the same as those that plague contingent valuation. Discrete choice experiment–contingent valuation–stated preference researchers should continue down the same familiar path of methods development.


Morpheme Ordering  

Patrik Bye

Morpheme ordering is largely explainable in terms of syntactic/semantic scope, or the Mirror Principle, although there is a significant residue of cases that resist an explanation in these terms. In this article, we look at some key examples of (apparently) deviant ordering and review the main ways that linguists have attempted to account for them. Approaches to the phenomenon fall into two broad types. The first relies on mechanisms we can term “morphological,” while the second looks instead to the resources of the ‘narrow’ syntax or phonology. One morphological approach involves a template that associates each class of morphemes in the word with a particular position. A well-known example is the Bantu CARP (Causative-Applicative-Reciprocal-Passive) template, which requires particular orders between morphemes to obtain irrespective of scope. A second approach builds on the intuition that the boundary or join between a morpheme and the base to which it attaches can vary in closeness or strength, where ‘strength’ can be interpreted in gradient or discrete terms. Under the gradient interpretation, affixes differ in parsability, or separability from the base; understood discretely, as in Lexical Morphology and Phonology, morphemes (or classes of morphemes) may attach at a deeper morphological layer to stems (the stronger join), or to words (the weaker join), which are closer to the surface. Deviant orderings may then arise where an affix attaches at a morphological layer deeper than its scope would lead us to expect. An example is the marking of case and possession in Finnish nouns: case takes scope over possession, but the case suffix precedes the possessive suffix. Another morphological approach is represented by Distributed Morphology, which permits certain local reorderings once all syntactic operations have taken place. Such operations may target specific morphemes, or morphosyntactic features characterizing a class of morphemes. 
Agreement marking is an interesting case, since agreement features are bundled as syntactically unitary heads but may in certain languages be split morphologically into separate affixes. This means that in the case of split agreement marking, the relative order must be attributed to post-syntactic principles. Besides these morphological approaches, other researchers have emphasized the resources of the narrow syntax, in particular phrasal movement, as a means for dealing with many challenging cases of morpheme ordering. Still other cases of apparently deviant ordering may be analyzed as epiphenomena of phonological processes and constraint interaction as they apply to prespecified and/or underspecified lexical representations.


Nominal Reference  

Donka Farkas

Nominal reference is central to both linguistic semantics and philosophy of language. On the theoretical side, both philosophers and linguists wrestle with the problem of how the link between nominal expressions and their referents is to be characterized, and what formal tools are most appropriate to deal with this issue. The problem is complex because nominal expressions come in a large variety of forms, from simple proper names, pronouns, or bare nouns (Jennifer, they, books) to complex expressions involving determiners and various quantifiers (the/every/no/their answer). While the reference of such expressions is varied, their basic syntactic distribution as subjects or objects of various types, for instance, is homogeneous. Important advances in understanding this tension were made with the advent of the work of R. Montague and that of his successors. The problems involved in understanding the relationship between pronouns and their antecedents in discourse have led to another fundamental theoretical development, namely that of dynamic semantics. On the empirical side, issues at the center of both linguistic and philosophical investigations concern how to best characterize the difference between definite and indefinite nominals, and, more generally, how to understand the large variety of determiner types found both within a language and cross-linguistically. These considerations led to refining the definite/indefinite contrast to include fine-grained specificity distinctions that have been shown to be relevant to various morphosyntactic phenomena across the world’s languages. Considerations concerning nominal reference are thus relevant not only to semantics but also to morphology and syntax. Some questions within the domain of nominal reference have grown into rich subfields of inquiry. This is the case with generic reference, the study of pronominal reference, the study of quantifiers, and the study of the semantics of nominal number marking.


Blended Learning in Teacher Education  

Harrison Hao Yang and Jason MacLeod

Practices of blended learning are being wholeheartedly accepted and implemented into the mainstream processes of educational delivery throughout the world. This trend follows a large body of research suggesting that blended learning approaches can be more effective than both traditional face-to-face instruction and entirely computer-mediated instructional approaches. However, in teacher education two important factors influence the outcomes of blended learning: first, the articulation of differences between instructional approaches, and second, the understanding of key pedagogical strategies that support student success. Research on blended learning in teacher education should include both preservice and in-service teacher participants. Preservice teachers are individuals in the preparation and training stages, prior to assuming full responsibility for a professional teaching role. In-service teachers are individuals already practicing as teachers, typically still completing their early-career induction training to the profession. Both historical utilization and future research trends are evident through a critical analysis of the last three decades of highly cited scholarship on blended learning in teacher education. Historical utilization trends show an emergence of online and blended learning approaches, which reached nearly 30% of postsecondary education students in 2016. Future research trends include evidence-based practices, preparing for active learning classrooms, building capacity for practical training, collaborative teaching opportunities, leveraging blended learning to improve education equity, and cultivating mixed-reality blended learning environments. Researchers, practitioners, administrators, and policymakers should continue to stay informed on this topic and continuously find ways to improve the application of blended learning in teacher education.


Scope Marking at the Syntax-Semantics Interface  

Veneeta Dayal and Deepak Alok

Natural language allows questioning into embedded clauses. One strategy for doing so involves structures like the following: [CP-1 whi [TP DP V [CP-2 … ti …]]], where a wh-phrase that thematically belongs to the embedded clause appears in the matrix scope position. A possible answer to such a question must specify values for the fronted wh-phrase. This is the extraction strategy seen in languages like English. An alternative strategy involves a structure in which there is a distinct wh-phrase in the matrix clause. It is manifested in two types of structures. One is a close analog of extraction, but for the extra wh-phrase: [CP-1 whi [TP DP V [CP-2 whj [TP … tj …]]]]. The other simply juxtaposes two questions, rather than syntactically subordinating the second one: [CP-3 [CP-1 whi [TP …]] [CP-2 whj [TP …]]]. In both versions of the second strategy, the wh-phrase in CP-1 is invariant, typically corresponding to the wh-phrase used to question propositional arguments. There is no restriction on the type or number of wh-phrases in CP-2. Possible answers must specify values for all the wh-phrases in CP-2. This strategy is variously known as scope marking, partial wh-movement, or expletive wh questions. Both strategies can occur in the same language. German, for example, instantiates all three possibilities: extraction, subordinated, as well as sequential scope marking. The scope marking strategy is also manifested in in-situ languages. Scope marking has been subjected to 30 years of research and much is known at this time about its syntactic and semantic properties. Its pragmatic properties, however, are relatively under-studied. The acquisition of scope marking, in relation to extraction, is another area of ongoing research. One of the reasons why scope marking has intrigued linguists is that it seems to defy central tenets about the nature of wh scope taking. 
For example, it presents an apparent mismatch between the number of wh expressions in the question and the number of expressions whose values are specified in the answer. It poses a challenge for our understanding of how syntactic structure feeds semantic interpretation and how alternative strategies with similar functions relate to each other.