TOK Friends: Group items tagged "arbitrary"

Javier E

MacIntyre | Internet Encyclopedia of Philosophy

  • For MacIntyre, “rationality” comprises all the intellectual resources, both formal and substantive, that we use to judge truth and falsity in propositions, and to determine choice-worthiness in courses of action
  • Rationality in this sense is not universal; it differs from community to community and from person to person, and may both develop and regress over the course of a person’s life or a community’s history.
  • So rationality itself, whether theoretical or practical, is a concept with a history: indeed, since there are also a diversity of traditions of enquiry, with histories, there are, so it will turn out, rationalities rather than rationality, just as it will also turn out that there are justices rather than justice
  • Rationality is the collection of theories, beliefs, principles, and facts that the human subject uses to judge the world, and a person’s rationality is, to a large extent, the product of that person’s education and moral formation.
  • To the extent that a person accepts what is handed down from the moral and intellectual traditions of her or his community in learning to judge truth and falsity, good and evil, that person’s rationality is “tradition-constituted.” Tradition-constituted rationality provides the schemata by which we interpret, understand, and judge the world we live in
  • The apparent problem of relativism in MacIntyre’s theory of rationality is much like the problem of relativism in the philosophy of science. Scientific claims develop within larger theoretical frameworks, so that the apparent truth of a scientific claim depends on one’s judgment of the larger framework. The resolution of the problem of relativism therefore appears to hang on the possibility of judging frameworks or rationalities, or judging between frameworks or rationalities from a position that does not presuppose the truth of the framework or rationality, but no such theoretical standpoint is humanly possible.
  • MacIntyre finds that the world itself provides the criterion for the testing of rationalities, and he finds that there is no criterion except the world itself that can stand as the measure of the truth of any philosophical theory.
  • MacIntyre’s philosophy is indebted to the philosophy of science, which recognizes the historicism of scientific enquiry even as it seeks a truthful understanding of the world. MacIntyre’s philosophy does not offer a priori certainty about any theory or principle; it examines the ways in which reflection upon experience supports, challenges, or falsifies theories that have appeared to be the best theories so far to the people who have accepted them so far. MacIntyre’s ideal enquirers remain Hamlets, not Emmas.
  • history shows us that individuals, communities, and even whole nations may commit themselves militantly over long periods of their histories to doctrines that their ideological adversaries find irrational. This qualified relativism of appearances has troublesome implications for anyone who believes that philosophical enquiry can easily provide certain knowledge of the world
  • According to MacIntyre, theories govern the ways that we interpret the world and no theory is ever more than “the best standards so far” (3RV, p. 65). Our theories always remain open to improvement, and when our theories change, the appearances of our world—the apparent truths of claims judged within those theoretical frameworks—change with them.
  • From the subjective standpoint of the human enquirer, MacIntyre finds that theories, concepts, and facts all have histories, and they are all liable to change—for better or for worse.
  • MacIntyre holds that the rationality of individuals is not only tradition-constituted, it is also tradition-constitutive, as individuals make their own contributions to their own rationality, and to the rationalities of their communities. Rationality is not fixed, within either the history of a community or the life of a person
  • The modern account of first principles justifies an approach to philosophy that rejects tradition. The modern liberal individualist approach is anti-traditional. It denies that our understanding is tradition-constituted and it denies that different cultures may differ in their standards of rationality and justice:
  • Modernity does not see tradition as the key that unlocks moral and political understanding, but as a superfluous accumulation of opinions that tend to prejudice moral and political reasoning.
  • Although modernity rejects tradition as a method of moral and political enquiry, MacIntyre finds that it nevertheless bears all the characteristics of a moral and political tradition.
  • If historical narratives are only projections of the interests of historians, then it is difficult to see how this historical narrative can claim to be truthful
  • For these post-modern theorists, “if the Enlightenment conceptions of truth and rationality cannot be sustained,” either relativism or perspectivism “is the only possible alternative” (p. 353). MacIntyre rejects both challenges by developing his theory of tradition-constituted and tradition-constitutive rationality on pp. 354-369
  • How, then, is one to settle challenges between two traditions? It depends on whether the adherents of either take the challenges of the other tradition seriously. It depends on whether the adherents of either tradition, on seeing a failure in their own tradition, are willing to consider an answer offered by their rival (p. 355)
  • how a person with no traditional affiliation is to deal with the conflicting claims of rival traditions: “The initial answer is: that will depend upon who you are and how you understand yourself. This is not the kind of answer which we have been educated to expect in philosophy”
  • MacIntyre focuses the critique of modernity on the question of rational justification. Modern epistemology stands or falls on the possibility of Cartesian epistemological first principles. MacIntyre’s history exposes that notion of first principle as a fiction, and at the same time demonstrates that rational enquiry advances (or declines) only through tradition
  • MacIntyre cites Foucault’s 1966 book, Les Mots et les choses (The Order of Things, 1970) as an example of the self-subverting character of Genealogical enquiry
  • Foucault’s book reduces history to a procession of “incommensurable ordered schemes of classification and representation” none of which has any greater claim to truth than any other, yet this book “is itself organized as a scheme of classification and representation.”
  • From MacIntyre’s perspective, there is no question of deciding whether or not to work within a tradition; everyone who struggles with practical, moral, and political questions simply does. “There is no standing ground, no place for enquiry . . . apart from that which is provided by some particular tradition or other”
  • Three Rival Versions of Moral Enquiry (1990). The central idea of the Gifford Lectures is that philosophers make progress by addressing the shortcomings of traditional narratives about the world, shortcomings that become visible either through the failure of traditional narratives to make sense of experience, or through the introduction of contradictory narratives that prove impossible to dismiss
  • MacIntyre compares three traditions exemplified by three literary works published near the end of Adam Gifford’s life (1820–1887)
  • The Ninth Edition of the Encyclopaedia Britannica (1875–1889) represents the modern tradition of trying to understand the world objectively without the influence of tradition.
  • The Genealogy of Morals (1887), by Friedrich Nietzsche, embodies the post-modern tradition of interpreting all traditions as arbitrary impositions of power.
  • The encyclical letter Aeterni Patris (1879) of Pope Leo XIII exemplifies the approach of acknowledging one’s predecessors within one’s own tradition of enquiry and working to advance or improve that tradition in the pursuit of objective truth. 
  • Of the three versions of moral enquiry treated in 3RV, only tradition, exemplified in 3RV by the Aristotelian, Thomistic tradition, understands itself as a tradition that looks backward to predecessors in order to understand present questions and move forward
  • Encyclopaedia obscures the role of tradition by presenting the most current conclusions and convictions of a tradition as if they had no history, and as if they represented the final discovery of unalterable truth
  • Encyclopaedists focus on the present and ignore the past.
  • Genealogists, on the other hand, focus on the past in order to undermine the claims of the present.
  • In short, Genealogy denies the teleology of human enquiry by denying (1) that historical enquiry has been fruitful, (2) that the enquiring person has a real identity, and (3) that enquiry has a real goal. MacIntyre finds this mode of enquiry incoherent.
  • Genealogy is self-deceiving insofar as it ignores the traditional and teleological character of its enquiry.
  • Genealogical moral enquiry must make similar exceptions to its treatments of the unity of the enquiring subject and the teleology of moral enquiry; thus “it seems to be the case that the intelligibility of genealogy requires beliefs and allegiances of a kind precluded by the genealogical stance” (3RV, pp. 54-55)
  • MacIntyre uses Thomism because it applies the traditional mode of enquiry in a self-conscious manner. Thomistic students learn the work of philosophical enquiry as apprentices in a craft (3RV, p. 61), and maintain the principles of the tradition in their work to extend the understanding of the tradition, even as they remain open to the criticism of those principles.
  • 3RV uses Thomism as its example of tradition, but this use should not suggest that MacIntyre identifies “tradition” with Thomism or Thomism-as-a-name-for-the-Western-tradition. As noted above, WJWR distinguished four traditions of enquiry within the Western European world alone
  • MacIntyre’s emphasis on the temporality of rationality in traditional enquiry makes tradition incompatible with the epistemological projects of modern philosophy
  • Tradition is not merely conservative; it remains open to improvement,
  • Tradition differs from both encyclopaedia and genealogy in the way it understands the place of its theories in the history of human enquiry. The adherent of a tradition must understand that “the rationality of a craft is justified by its history so far,” thus it “is inseparable from the tradition through which it was achieved”
  • MacIntyre uses Thomas Aquinas to illustrate the revolutionary potential of traditional enquiry. Thomas was educated in Augustinian theology and Aristotelian philosophy, and through this education he began to see not only the contradictions between the two traditions, but also the strengths and weaknesses that each tradition revealed in the other. His education also helped him to discover a host of questions and problems that had to be answered and solved. Many of Thomas Aquinas’ responses to these concerns took the form of disputed questions. “Yet to each question the answer produced by Aquinas as a conclusion is no more than and, given Aquinas’s method, cannot but be no more than, the best answer reached so far. And hence derives the essential incompleteness”
  • argue that the virtues are essential to the practice of independent practical reason. The book is relentlessly practical; its arguments appeal only to experience and to purposes, and to the logic of practical reasoning.
  • Like other intelligent animals, human beings enter life vulnerable, weak, untrained, and unknowing, and face the likelihood of infirmity in sickness and in old age. Like other social animals, humans flourish in groups. We learn to regulate our passions, and to act effectively alone and in concert with others through an education provided within a community. MacIntyre’s position allows him to look to the animal world to find analogies to the role of social relationships in the moral formation of human beings
  • The task for the human child is to make “the transition from the infantile exercise of animal intelligence to the exercise of independent practical reasoning” (DRA, p. 87). For a child to make this transition is “to redirect and transform her or his desires, and subsequently to direct them consistently towards the goods of different stages of her or his life” (DRA, p. 87). The development of independent practical reason in the human agent requires the moral virtues in at least three ways.
  • DRA presents moral knowledge as a “knowing how,” rather than as a “knowing that.” Knowledge of moral rules is not sufficient for a moral life; prudence is required to enable the agent to apply the rules well.
  • “Knowing how to act virtuously always involves more than rule-following” (DRA, p. 93). The prudent person can judge what must be done in the absence of a rule and can also judge when general norms cannot be applied to particular cases.
  • Flourishing as an independent practical reasoner requires the virtues in a second way, simply because sometimes we need our friends to tell us who we really are. Independent practical reasoning also requires self-knowledge, but self-knowledge is impossible without the input of others whose judgment provides a reliable touchstone to test our beliefs about ourselves. Self-knowledge therefore requires the virtues that enable an agent to sustain formative relationships and to accept the criticism of trusted friends
  • Human flourishing requires the virtues in a third way, by making it possible to participate in social and political action. They enable us to “protect ourselves and others against neglect, defective sympathies, stupidity, acquisitiveness, and malice” (DRA, p. 98) by enabling us to form and sustain social relationships through which we may care for one another in our infirmities, and pursue common goods with and for the other members of our societies.
  • MacIntyre argues that it is impossible to find an external standpoint, because rational enquiry is an essentially social work (DRA, pp. 156-7). Because it is social, shared rational enquiry requires moral commitment to, and practice of, the virtues to prevent the more complacent members of communities from closing off critical reflection upon “shared politically effective beliefs and concepts”
  • MacIntyre finds himself compelled to answer what may be called the question of moral provincialism: If one is to seek the truth about morality and justice, it seems necessary to “find a standpoint that is sufficiently external to the evaluative attitudes and practices that are to be put to the question.” If it is impossible for the agent to take such an external standpoint, if the agent’s commitments preclude radical criticism of the virtues of the community, does that leave the agent “a prisoner of shared prejudices” (DRA, p. 154)?
  • The book moves from MacIntyre’s assessment of human needs for the virtues to the political implications of that assessment. Social and political institutions that form and enable independent practical reasoning must “satisfy three conditions.” (1) They must enable their members to participate in shared deliberations about the communities’ actions. (2) They must establish norms of justice “consistent with exercise of” the virtue of justice. (3) They must enable the strong “to stand proxy” as advocates for the needs of the weak and the disabled.
  • The social and political institutions that MacIntyre recommends cannot be identified with the modern nation state or the modern nuclear family
  • The political structures necessary for human flourishing are essentially local
  • Yet local communities support human flourishing only when they actively support “the virtues of just generosity and shared deliberation”
  • MacIntyre rejects individualism and insists that we view human beings as members of communities who bear specific debts and responsibilities because of our social identities. The responsibilities one may inherit as a member of a community include debts to one’s forebears that one can only repay to people in the present and future
  • The constructive argument of the second half of the book begins with traditional accounts of the excellences or virtues of practical reasoning and practical rationality rather than virtues of moral reasoning or morality. These traditional accounts define virtue as aretē, as excellence
  • Practices are supported by institutions like chess clubs, hospitals, universities, industrial corporations, sports leagues, and political organizations.
  • Practices exist in tension with these institutions, since the institutions tend to be oriented to goods external to practices. Universities, hospitals, and scholarly societies may value prestige, profitability, or relations with political interest groups above excellence in the practices they are said to support.
  • Personal desires and institutional pressures to pursue external goods may threaten to derail practitioners’ pursuits of the goods internal to practices. MacIntyre defines virtue initially as the quality of character that enables an agent to overcome these temptations:
  • “A virtue is an acquired human quality the possession and exercise of which tends to enable us to achieve those goods which are internal to practices
  • Excellence as a human agent cannot be reduced to excellence in a particular practice (See AV, pp. 204–
  • The virtues therefore are to be understood as those dispositions which will not only sustain practices and enable us to achieve the goods internal to practices, but which will also sustain us in the relevant kind of quest for the good, by enabling us to overcome the harms, dangers, temptations, and distractions which we encounter, and which will furnish us with increasing self-knowledge and increasing knowledge of the good (AV, p. 219).
  • The excellent human agent has the moral qualities to seek what is good and best both in practices and in life as a whole.
  • The virtues find their point and purpose not only in sustaining those relationships necessary if the variety of goods internal to practices are to be achieved and not only in sustaining the form of an individual life in which that individual may seek out his or her good as the good of his or her whole life, but also in sustaining those traditions which provide both practices and individual lives with their necessary historical context (AV, p. 223)
  • Since “goods, and with them the only grounds for the authority of laws and virtues, can only be discovered by entering into those relationships which constitute communities whose central bond is a shared vision of and understanding of goods” (AV, p. 258), any hope for the transformation and renewal of society depends on the development and maintenance of such communities.
  • MacIntyre’s Aristotelian approach to ethics as a study of human action distinguishes him from post-Kantian moral philosophers who approach ethics as a means of determining the demands of objective, impersonal, universal morality
  • This modern approach may be described as moral epistemology. Modern moral philosophy pretends to free the individual to determine for her- or himself what she or he must do in a given situation, irrespective of her or his own desires; it pretends to give knowledge of universal moral laws
  • Aristotelian metaphysicians, particularly Thomists who define virtue in terms of the perfection of nature, rejected MacIntyre’s contention that an adequate Aristotelian account of virtue as excellence in practical reasoning and human action need not appeal to Aristotelian metaphysics
  • one group of critics rejects MacIntyre’s Aristotelianism because they hold that any Aristotelian account of the virtues must first account for the truth about virtue in terms of Aristotle’s philosophy of nature, which MacIntyre had dismissed in AV as “metaphysical biology”
  • Many of those who rejected MacIntyre’s turn to Aristotle define “virtue” primarily along moral lines, as obedience to law or adherence to some kind of natural norm. For these critics, “virtuous” appears synonymous with “morally correct;” their resistance to MacIntyre’s appeal to virtue stems from their difficulties either with what they take to be the shortcomings of MacIntyre’s account of moral correctness or with the notion of moral correctness altogether
  • MacIntyre continues to argue from the experience of practical reasoning to the demands of moral education.
  • Descartes and his successors, by contrast, along with certain “notable Thomists of the last hundred years” (p. 175), have proposed that philosophy begins from knowledge of some “set of necessarily true first principles which any truly rational person is able to evaluate as true” (p. 175). Thus for the moderns, philosophy is a technical rather than moral endeavor
  • MacIntyre distinguishes two related challenges to his position, the “relativist challenge” and the “perspectivist challenge.” These two challenges both acknowledge that the goals of the Enlightenment cannot be met and that, “the only available standards of rationality are those made available by and within traditions” (p. 252); they conclude that nothing can be known to be true or false
  • MacIntyre follows the progress of the Western tradition through “three distinct traditions:” from Homer and Aristotle to Thomas Aquinas, from Augustine to Thomas Aquinas and from Augustine through Calvin to Hume
  • Chapter 17 examines the modern liberal denial of tradition, and the ironic transformation of liberalism into the fourth tradition to be treated in the book.
  • MacIntyre credits John Stuart Mill and Thomas Aquinas as “two philosophers of the kind who by their writing send us beyond philosophy into immediate encounter with the ends of life
  • First, both were engaged by questions about the ends of life as questioning human beings and not just as philosophers. . . .
  • Secondly, both Mill and Aquinas understood their speaking and writing as contributing to an ongoing philosophical conversation. . . .
  • Thirdly, it matters that both the end of the conversation and the good of those who participate in it is truth and that the nature of truth, of good, of rational justification, and of meaning therefore have to be central topics of that conversation (Tasks, pp. 130-1).
  • Without these three characteristics, philosophy is first reduced to “the exercise of a set of analytic and argumentative skills. . . . Secondly, philosophy may thereby become a diversion from asking questions about the ends of life with any seriousness”
  • Neither Rosenzweig nor Lukács made philosophical progress because both failed to relate “their questions about the ends of life to the ends of their philosophical writing”
  • MacIntyre’s recognition of the connection between an author’s pursuit of the ends of life and the same author’s work as a philosophical writer prompts him to finish the essay by demanding three things of philosophical historians and biographers.
  • First, any adequate philosophical history or biography must determine whether the authors studied remain engaged with the questions that philosophy studies, or set the questions aside in favor of the answers. Second, any adequate philosophical history or biography must determine whether the authors studied insulated themselves from contact with conflicting worldviews or remained open to learning from every available philosophical approach. Third, any adequate philosophical history or biography must place the authors studied into a broader context that shows what traditions they come from and “whose projects” they are “carrying forward”
  • Philosophy is not just a study; it is a practice. Excellence in this practice demands that an author bring her or his struggles with the questions of the ends of philosophy into dialogue with historic and contemporary texts and authors in the hope of making progress in answering those questions
  • MacIntyre defends Thomistic realism as rational enquiry directed to the discovery of truth.
  • The three Thomistic essays in this book challenge those caricatures by presenting Thomism in a way that people outside of contemporary Thomistic scholarship may find surprisingly flexible and open
  • To be a moral agent, (1) one must understand one’s individual identity as transcending all the roles that one fills; (2) one must see oneself as a practically rational individual who can judge and reject unjust social standards; and (3) one must understand oneself “as accountable to others in respect of the human virtues and not just in respect of [one’s] role-performances”
  • MacIntyre considers “the case of J” (J, for jemand, the German word for “someone”), a train controller who learned, as a standard for his social role, to take no interest in what his trains carried, even during war time when they carried “munitions and . . . Jews on their way to extermination camps”
  • J had learned to do his work for the railroad according to one set of standards and to live other parts of his life according to other standards, so that this compliant participant in “the final solution” could contend, “You cannot charge me with moral failure” (E&P, p. 187).
  • J is guilty because he complacently accepted social structures that he should have questioned, structures that undermined his moral agency. This essay shows that MacIntyre’s ethics of human agency is not just a descriptive narrative about the manner of moral education; it is a standard-laden account of the demands of moral agency.
  • The epistemological theories of modern moral philosophy were supposed to provide rational justification for rules, policies, and practical determinations according to abstract universal standards, but MacIntyre has dismissed those theories
  • Modern metaethics is supposed to enable its practitioners to step away from the conflicting demands of contending moral traditions and to judge those conflicts from a neutral position, but MacIntyre has rejected this project as well
  • In his ethical writings, MacIntyre seeks only to understand how to liberate the human agent from blindness and stupidity, to prepare the human agent to recognize what is good and best to do in the concrete circumstances of that agent’s own life, and to strengthen the agent to follow through on that judgment.
  • In his political writings, MacIntyre investigates the role of communities in the formation of effective rational agents, and the impact of political institutions on the lives of communities. This kind of ethics and politics is appropriately named the ethics of human agency.
  • The purpose of the modern moral philosophy of authors like Kant and Mill was to determine, rationally and universally, what kinds of behavior ought to be performed—not in terms of the agent’s desires or goals, but in terms of universal, rational duties. Those theories purported to let agents know what they ought to do by providing knowledge of duties and obligations, thus they could be described as theories of moral epistemology.
  • Contemporary virtue ethics purports to let agents know what qualities human beings ought to have, and the reasons that we ought to have them, not in terms of our fitness for human agency, but in the same universal, disinterested, non-teleological terms that it inherits from Kant and Mill.
  • For MacIntyre, moral knowledge remains a “knowing how” rather than a “knowing that;” MacIntyre seeks to identify those moral and intellectual excellences that make human beings more effective in our pursuit of the human good.
  • MacIntyre’s purpose in his ethics of human agency is to consider what it means to seek one’s good, what it takes to pursue one’s good, and what kind of a person one must become if one wants to pursue that good effectively as a human agent.
  • As a philosophy of human agency, MacIntyre’s work belongs to the traditions of Aristotle and Thomas Aquinas.
  • in keeping with the insight of Marx’s third thesis on Feuerbach, it maintained the common condition of theorists and people as peers in the pursuit of the good life.
  • He holds that the human good plays a role in our practical reasoning whether we recognize it or not, so that some people may do well without understanding why (E&P, p. 25). He also reads Aristotle as teaching that knowledge of the good can make us better agents
  • AV defines virtue in terms of the practical requirements for excellence in human agency, in an agent’s participation in practices (AV, ch. 14), in an agent’s whole life, and in an agent’s involvement in the life of her or his community
  • MacIntyre’s Aristotelian concept of “human action” opposes the notion of “human behavior” that prevailed among mid-twentieth-century determinist social scientists. Human actions, as MacIntyre understands them, are acts freely chosen by human agents in order to accomplish goals that those agents pursue
  • Human behavior, according to mid-twentieth-century determinist social scientists, is the outward activity of a subject, which is said to be caused entirely by environmental influences beyond the control of the subject.
  • Rejecting crude determinism in social science, and approaches to government and public policy rooted in determinism, MacIntyre sees the renewal of human agency and the liberation of the human agent as central goals for ethics and politics.
  • MacIntyre’s Aristotelian account of “human action” examines the habits that an agent must develop in order to judge and act most effectively in the pursuit of truly choice-worthy ends
  • MacIntyre seeks to understand what it takes for the human person to become the kind of agent who has the practical wisdom to recognize what is good and best to do and the moral freedom to act on her or his best judgment.
  • MacIntyre rejected the determinism of modern social science early in his career (“Determinism,” 1957), yet he recognizes that the ability to judge well and act freely is not simply given; excellence in judgment and action must be developed, and it is the task of moral philosophy to discover how these excellences or virtues of the human agent are established, maintained, and strengthened
  • MacIntyre’s Aristotelian philosophy investigates the conditions that support free and deliberate human action in order to propose a path to the liberation of the human agent through participation in the life of a political community that seeks its common goods through the shared deliberation and action of its members
  • As a classics major at Queen Mary College in the University of London (1945-1949), MacIntyre read the Greek texts of Plato and Aristotle, but his studies were not limited to the grammars of ancient languages. He also examined the ethical theories of Immanuel Kant and John Stuart Mill. He attended the lectures of analytic philosopher A. J. Ayer and of philosopher of science Karl Popper. He read Ludwig Wittgenstein’s Tractatus Logico-Philosophicus, Jean-Paul Sartre’s L'existentialisme est un humanisme, and Marx’s Eighteenth Brumaire of Napoleon Bonaparte (What happened, pp. 17-18). MacIntyre met the sociologist Franz Steiner, who helped direct him toward approaching moralities substantively
  • Alasdair MacIntyre’s philosophy builds on an unusual foundation. His early life was shaped by two conflicting systems of values. One was “a Gaelic oral culture of farmers and fishermen, poets and storytellers.” The other was modernity: “The modern world was a culture of theories rather than stories” (MacIntyre Reader, p. 255). MacIntyre embraced both value systems
  • From Marxism, MacIntyre learned to see liberalism as a destructive ideology that undermines communities in the name of individual liberty and consequently undermines the moral formation of human agents
  • For MacIntyre, Marx’s way of seeing through the empty justifications of arbitrary choices to consider the real goals and consequences of political actions in economic and social terms would remain the principal insight of Marxism
  • Since his retirement from teaching, MacIntyre has continued his work of promoting a renewal of human agency through an examination of the virtues demanded by practices, integrated human lives, and responsible engagement with community life. He is currently affiliated with the Centre for Contemporary Aristotelian Studies in Ethics and Politics (CASEP) at London Metropolitan University.
  • The second half of AV proposes a conception of practice and practical reasoning and the notion of excellence as a human agent as an alternative to modern moral philosophy
  • AV rejects the view of “modern liberal individualism” in which autonomous individuals use abstract moral principles to determine what they ought to do. The critique of modern normative ethics in the first half of AV rejects modern moral reasoning for its failure to justify its premises, and criticizes the frequent use of the rhetoric of objective morality and scientific necessity to manipulate people to accept arbitrary decisions
  • MacIntyre uses “modern liberal individualism” to name a much broader category that includes both liberals and conservatives in contemporary American political parlance, as well as some Marxists and anarchists (See ASIA, pp. 280-284). Conservatism, liberalism, Marxism, and anarchism all present the autonomous individual as the unit of civil society
  • The sources of modern liberal individualism—Hobbes, Locke, and Rousseau—assert that human life is solitary by nature and social by habituation and convention. MacIntyre’s Aristotelian tradition holds, on the contrary, that human life is social by nature.
  • MacIntyre identifies moral excellence with effective human agency, and seeks a political environment that will help to liberate human agents to recognize and seek their own goods, as components of the common goods of their communities, more effectively. For MacIntyre therefore, ethics and politics are bound together.
  • For MacIntyre ethics is not an application of principles to facts, but a study of moral action. Moral action, free human action, involves decisions to do things in pursuit of goals, and it involves the understanding of the implications of one’s actions for the whole variety of goals that human agents seek
  • In this sense, “To act morally is to know how to act” (SMJ, p. 56). “Morality is not a ‘knowing that’ but a ‘knowing how’”
  • If human action is a ‘knowing how,’ then ethics must also consider how one learns ‘how.’ Like other forms of ‘knowing how,’ MacIntyre finds that one learns how to act morally within a community whose language and shared standards shape our judgment
  • MacIntyre had concluded that ethics is not an abstract exercise in the assessment of facts; it is a study of free human action and of the conditions that enable rational human agency.
  • MacIntyre gives Marx credit for concluding in the third of the Theses on Feuerbach, that the only way to change society is to change ourselves, and that “The coincidence of the changing of circumstances and of human activity or self-changing can only be comprehended and rationally understood as revolutionary practice”
  • MacIntyre distinguishes “religion which is an opiate for the people from religion which is not” (MI, p. 83). He condemns forms of religion that justify social inequities and encourage passivity. He argues that authentic Christian teaching criticizes social structures and encourages action
  • Where “moral philosophy textbooks” discuss the kinds of maxims that should guide “promise-keeping, truth-telling, and the like,” moral maxims do not guide real agents in real life at all. “They do not guide us because we do not need to be guided. We know what to do” (ASIA, p. 106). Sometimes we do this without any maxims at all, or even against all the maxims we know. MacIntyre illustrates his point with Huckleberry Finn’s decision to help Jim, Miss Watson’s escaped slave, to make his way to freedom
  • MacIntyre develops the ideas that morality emerges from history, and that morality organizes the common life of a community
  • The book concludes that the concepts of morality are neither timeless nor ahistorical, and that understanding the historical development of ethical concepts can liberate us “from any false absolutist claims” (SHE, p. 269). Yet this conclusion need not imply that morality is essentially arbitrary or that one could achieve freedom by liberating oneself from the morality of one’s society.
  • From this “Aristotelian point of view,” “modern morality” begins to go awry when moral norms are separated from the pursuit of human goods and moral behavior is treated as an end in itself. This separation characterizes Christian divine command ethics since the fourteenth century and has remained essential to secularized modern morality since the eighteenth century
  • From MacIntyre’s “Aristotelian point of view,” the autonomy granted to the human agent by modern moral philosophy breaks down natural human communities and isolates the individual from the kinds of formative relationships that are necessary to shape the agent into an independent practical reasoner.
  • the 1977 essay “Epistemological Crises, Dramatic Narrative, and the Philosophy of Science” (hereafter EC). This essay, MacIntyre reports, “marks a major turning-point in my thought in the 1970s” (The Tasks of Philosophy, p. vii). EC may be described fairly as MacIntyre’s discourse on method
  • it presents three general points on the method for philosophy.
  • First, philosophy makes progress through the resolution of problems. These problems arise when the theories, histories, doctrines and other narratives that help us to organize our experience of the world fail us, leaving us in “epistemological crises.” Epistemological crises are the aftermath of events that undermine the ways that we interpret our world
  • To live in an epistemological crisis is to be aware that one does not know what one thought one knew about some particular subject and to be anxious to recover certainty about that subject.
  • To resolve an epistemological crisis it is not enough to impose some new way of interpreting our experience; we also need to understand why we were wrong before: “When an epistemological crisis is resolved, it is by the construction of a new narrative which enables the agent to understand both how he or she could intelligibly have held his or her original beliefs and how he or she could have been so drastically misled by them.”
  • MacIntyre notes, “Philosophers have customarily been Emmas and not Hamlets” (p. 6); that is, philosophers have treated their conclusions as accomplished truths, rather than as “more adequate narratives” (p. 7) that remain open to further improvement.
  • To illustrate his position on the open-endedness of enquiry, MacIntyre compares the title characters of Shakespeare’s Hamlet and Jane Austen’s Emma. When Emma finds that she is deeply misled in her beliefs about the other characters in her story, Mr. Knightly helps her to learn the truth and the story comes to a happy ending (p. 6). Hamlet, by contrast, finds no pat answers to his questions; rival interpretations remain throughout the play, so that directors who would stage the play have to impose their own interpretations on the script
  • Another approach to education is the method of Descartes, who begins by rejecting everything that is not clearly and distinctly true as unreliable and false in order to rebuild his understanding of the world on a foundation of undeniable truth.
  • Descartes presents himself as willfully rejecting everything he had believed, and ignores his obvious debts to the Scholastic tradition, even as he argues his case in French and Latin. For MacIntyre, seeking epistemological certainty through universal doubt as a precondition for enquiry is a mistake: “it is an invitation not to philosophy but to mental breakdown, or rather to philosophy as a means of mental breakdown.”
  • MacIntyre contrasts Descartes’ descent into mythical isolation with Galileo, who was able to make progress in astronomy and physics by struggling with the apparently insoluble questions of late medieval astronomy and physics, and radically reinterpreting the issues that constituted those questions
  • To make progress in philosophy one must sort through the narratives that inform one’s understanding, struggle with the questions that those narratives raise, and on occasion, reject, replace, or reinterpret portions of those narratives and propose those changes to the rest of one’s community for assessment. Human enquiry is always situated within the history and life of a community.
  • The third point of EC is that we can learn about progress in philosophy from the philosophy of science
  • Kuhn’s “paradigm shifts,” however, are unlike MacIntyre’s resolutions of epistemological crises in two ways.
  • First, they are not rational responses to specific problems. Kuhn compares paradigm shifts to religious conversions (pp. 150, 151, 158), stressing that they are not guided by rational norms, and he claims that the “mopping up” phase of a paradigm shift is a matter of convention in the training of new scientists and attrition among the holdouts of the previous paradigm
  • Second, the new paradigm is treated as a closed system of belief that regulates a new period of “normal science”; Kuhn’s revolutionary scientists are Emmas, not Hamlets
  • MacIntyre proposes elements of Imre Lakatos’ philosophy of science as correctives to Kuhn’s. While Lakatos has his own shortcomings, his general account of the methodologies of scientific research programs recognizes the role of reason in the transitions between theories and between research programs (Lakatos’ analog to Kuhn’s paradigms or disciplinary matrices). Lakatos presents science as an open-ended enquiry, in which every theory may eventually be replaced by more adequate theories. For Lakatos, unlike Kuhn, rational scientific progress occurs when a new theory can account both for the apparent promise and for the actual failure of the theory it replaces.
  • The third conclusion of MacIntyre’s essay is that decisions to support some theories over others may be justified rationally to the extent that those theories allow us to understand our experience and our history, including the history of the failures of inadequate theories
  • For Aristotle, moral philosophy is a study of practical reasoning, and the excellences or virtues that Aristotle recommends in the Nicomachean Ethics are the intellectual and moral excellences that make a moral agent effective as an independent practical reasoner.
  • examining the current condition of secular moral and political discourse. MacIntyre finds contending parties defending their decisions by appealing to abstract moral principles, but he finds their appeals eclectic, inconsistent, and incoherent.
  • MacIntyre also finds that the contending parties have little interest in the rational justification of the principles they use. The language of moral philosophy has become a kind of moral rhetoric to be used to manipulate others in defense of the arbitrary choices of its users
  • The secular moral philosophers of the eighteenth and nineteenth centuries shared strong and extensive agreements about the content of morality (AV, p. 51) and believed that their moral philosophy could justify the demands of their morality rationally, free from religious authority.
  • MacIntyre traces the lineage of the culture of emotivism to the secularized Protestant cultures of northern Europe
  • Modern moral philosophy had thus set for itself an incoherent goal. It was to vindicate both the moral autonomy of the individual and the objectivity, necessity, and categorical character of the rules of morality
  • MacIntyre turns to an apparent alternative, the pragmatic expertise of professional managers. Managers are expected to appeal to the facts to make their decisions on the objective basis of effectiveness, and their authority to do this is based on their knowledge of the social sciences
  • An examination of the social sciences reveals, however, that many of the facts to which managers appeal depend on sociological theories that lack scientific status. Thus, the predictions and demands of bureaucratic managers are no less liable to ideological manipulation than the determinations of modern moral philosophers.
  • Modern moral philosophy separates moral reasoning about duties and obligations from practical reasoning about ends and practical deliberation about the means to one’s ends, and in doing so it separates morality from practice.
  • Many Europeans also lost the practical justifications for their moral norms as they approached modernity; for these Europeans, claiming that certain practices are “immoral,” and invoking Kant’s categorical imperative or Mill’s principle of utility to explain why those practices are immoral, seems no more adequate than the Polynesian appeal to taboo.
  • MacIntyre sifts these definitions and then gives his own definition of virtue, as excellence in human agency, in terms of practices, whole human lives, and traditions in chapters 14 and 15 of AV.
  • In the most often quoted sentence of AV, MacIntyre defines a practice as (1) a complex social activity that (2) enables participants to gain goods internal to the practice. (3) Participants achieve excellence in practices by gaining the internal goods. When participants achieve excellence, (4) the social understandings of excellence in the practice, of the goods of the practice, and of the possibility of achieving excellence in the practice “are systematically extended”
  • Practices, like chess, medicine, architecture, mechanical engineering, football, or politics, offer their practitioners a variety of goods both internal and external to these practices. The goods internal to practices include forms of understanding or physical abilities that can be acquired only by pursuing excellence in the associated practice
  • Goods external to practices include wealth, fame, prestige, and power; there are many ways to gain these external goods. They can be earned or purchased, either honestly or through deception; thus the pursuit of these external goods may conflict with the pursuit of the goods internal to practices.
  • An intelligent child is given the opportunity to win candy by learning to play chess. As long as the child plays chess only to win candy, he has every reason to cheat if by doing so he can win more candy. If the child begins to desire and pursue the goods internal to chess, however, cheating becomes irrational, because it is impossible to gain the goods internal to chess or any other practice except through an honest pursuit of excellence. Goods external to practices may nevertheless remain tempting to the practitioner.
  • Since MacIntyre finds social identity necessary for the individual, MacIntyre’s definition of the excellence or virtue of the human agent needs a social dimension:
  • These responsibilities also include debts incurred by the unjust actions of one’s predecessors.
  • The enslavement and oppression of black Americans, the subjugation of Ireland, and the genocide of the Jews in Europe remained quite relevant to the responsibilities of citizens of the United States, England, and Germany in 1981, as they still do today.
  • Thus an American who said “I never owned any slaves,” “the Englishman who says ‘I never did any wrong to Ireland,’” or “the young German who believes that being born after 1945 means that what Nazis did to Jews has no moral relevance to his relationship to his Jewish contemporaries” all exhibit a kind of intellectual and moral failure.
  • “I am born with a past, and to cut myself off from that past in the individualist mode, is to deform my present relationships” (p. 221).  For MacIntyre, there is no moral identity for the abstract individual; “The self has to find its moral identity in and through its membership in communities” (p. 221).
Javier E

One of Us - Lapham's Quarterly

  • On what seems like a monthly basis, scientific teams announce the results of new experiments, adding to a preponderance of evidence that we’ve been underestimating animal minds, even those of us who have rated them fairly highly
  • an international group of prominent neuroscientists meeting at the University of Cambridge issued “The Cambridge Declaration on Consciousness in Non-Human Animals,” a document stating that “humans are not unique in possessing the neurological substrates that generate consciousness.” It goes further to conclude that numerous documented animal behaviors must be considered “consistent with experienced feeling states.”
  • Only with the Greeks does there enter the notion of a formal divide between our species, our animal, and every other on earth.
  • there’s that exquisite verse, one of the most beautiful in the Bible, the one that says if God cares deeply about sparrows, don’t you think He cares about you? One is so accustomed to dwelling on the second, human, half of the equation, the comforting part, but when you put your hand over that and consider only the first, it’s a little startling: God cares deeply about the sparrows. Not just that, He cares about them individually. “Are not five sparrows sold for two pennies?” Jesus says. “Yet not one of them is forgotten in God’s sight.”
  • The modern conversation on animal consciousness proceeds, with the rest of the Enlightenment, from the mind of René Descartes, whose take on animals was vividly (and approvingly) paraphrased by the French philosopher Nicolas Malebranche: they “eat without pleasure, cry without pain, grow without knowing it; they desire nothing, fear nothing, know nothing.” Descartes’ term for them was automata
  • In On the Origin of Species, Charles Darwin made the intriguing claim that among the naturalists he knew it was consistently the case that the better a researcher got to know a certain species, the more each individual animal’s actions appeared attributable to “reason and the less to unlearnt instinct.” The more you knew, the more you suspected that they were rational. That marks an important pivot, that thought, insofar as it took place in the mind of someone devoted to extremely close and meticulous study of living animals, a mind that had trained itself not to sentimentalize.
  • The sheer number and variety of experiments carried out in the twentieth century—and with, if anything, a renewed intensity in the twenty-first—exceeds summary. Reasoning, language, neurology, the science of emotions—every chamber where “consciousness” is thought to hide has been probed. Birds and chimps and dolphins have been made to look at themselves in mirrors—to observe whether, on the basis of what they see, they groom or preen (a measure, if somewhat arbitrary, of self-awareness). Dolphins have been found to grieve. Primates have learned symbolic or sign languages and then been interrogated with them. Their answers show thinking but have proved stubbornly open to interpretation on the issue of “consciousness,” with critics warning, as always, about the dangers of anthropomorphism, animal-rights bias, etc.
  • If we put aside the self-awareness standard—and really, how arbitrary and arrogant is that, to take the attribute of consciousness we happen to possess over all creatures and set it atop the hierarchy, proclaiming it the very definition of consciousness (Georg Christoph Lichtenberg wrote something wise in his notebooks, to the effect of: only a man can draw a self-portrait, but only a man wants to)—it becomes possible to say at least the following: the overwhelming tendency of all this scientific work, of its results, has been toward more consciousness. More species having it, and species having more of it than assumed.
  • The animal kingdom is symphonic with mental activity, and of its millions of wavelengths, we’re born able to understand the minutest sliver. The least we can do is have a proper respect for our ignorance.
  • The philosopher Thomas Nagel wrote an essay in 1974 titled, “What Is It Like To Be a Bat?”, in which he put forward perhaps the least overweening, most useful definition of “animal consciousness” ever written, one that channels Spinoza’s phrase about “that nature belonging to him wherein he has his being.” Animal consciousness occurs, Nagel wrote, when “there is something that it is to be that organism—something it is like for the organism.” The strangeness of his syntax carries the genuine texture of the problem. We’ll probably never be able to step far enough outside of our species-reality to say much about what is going on with them, beyond saying how like or unlike us they are. Many things are conscious on the earth, and we are one, and our consciousness feels like this; one of the things it causes us to do is doubt the existence of the consciousness of the other millions of species. But it also allows us to imagine a time when we might stop doing that.
anonymous

Symmetry in the universe: Physics says you shouldn't exist.

  • You, me, and even the most calming manatee are nothing but impurities in an otherwise beautifully simple universe.
  • Your existence wasn’t just predicated on amorousness and luck of your ancestors, but on an almost absurdly finely tuned universe. Had the universe opted to turn up the strength of the electromagnetic force by even a small factor, poof
  • if the universe were only minutely denser than the one we inhabit, it would have collapsed before it began.
  • Worse still, the laws of physics themselves seem to be working against us. Ours isn’t just a randomly hostile universe, it's an actively hostile universe
  • The history of physics, in fact, is a marvel of using simple symmetry principles to construct complicated laws of the universe
  • if the entire universe were made symmetric, then all of the good features (e.g., you) are decidedly asymmetric lumps that ruin the otherwise perfect beauty of the cosmos
  • it would be a mistake to be comforted by the symmetries of the universe. In truth, they are your worst enemies. Everything we know about those rational, predictable arrangements dictates that you shouldn't be here at all.
  • How hostile is the universe to your fundamental existence? Very. Even the simplest assumptions about our place in the universe seem to lead inexorably to devastating results
  • The symmetry of the universe would bake us in no time at all, but an asymmetry rescues us
  • In literally every experiment and observation that we’ve ever done, matter and antimatter get created (or annihilated) in perfect concert. That is, every experiment except for one: us.
  • Matter and antimatter should have completely annihilated one another in the first nanoseconds after the Big Bang. You should not even exist. But you do, and there’s lots more matter where you came from. (The measured size of this imbalance is sketched in the note after this list.)
  • if the perfect symmetry between matter and antimatter remained perfect, you wouldn’t be here to think about it.
  • The flow of time (as near as we can tell) is completely arbitrary. Does entropy increase with time or does it make time? Are our memories the thing that ultimately breaks the symmetry of time?
  • It seems only a matter of luck (and some fairly arbitrary-looking math) that a symmetric universe would end up being remotely hospitable to complex creatures like us
  • Without electrons binding to protons, there would be no chemistry, no molecules, and nothing more complicated than a cloud of charged gas. And you’re not a sentient cloud of gas, are you?
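
A quantitative footnote on the matter-antimatter bullets above: the article quotes no number for the imbalance, so the following is a minimal sketch using the commonly cited cosmological value (an assumption of this note, not a figure from the article). The imbalance is conventionally expressed as the baryon-to-photon ratio:

    % Baryon-to-photon ratio: the standard measure of the
    % matter-antimatter asymmetry (commonly quoted value, assumed here)
    \[
      \eta \;\equiv\; \frac{n_{B} - n_{\bar{B}}}{n_{\gamma}} \;\approx\; 6 \times 10^{-10}
    \]

Read this as: for roughly every billion matter-antimatter pairs that annihilated into photons in the early universe, about one unpaired particle of matter survived. Everything made of matter today, you included, is that one-in-a-billion leftover.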
Javier E

Reasons for Reason - NYTimes.com

  • Rick Perry’s recent vocal dismissals of evolution and his confident assertion that “God is how we got here” reflect an obvious divide in our culture.
  • underneath this divide is a deeper one. Really divisive disagreements are typically not just over the facts. They are also about the best way to support our views of the facts. Call this a disagreement in epistemic principle. Our epistemic principles tell us what is rational to believe, what sources of information to trust.
  • I suspect that for most people, scientific evidence (or its lack) has nothing to do with it. Their belief in creationism is instead a reflection of a deeply held epistemic principle: that, at least on some topics, scripture is a more reliable source of information than science.  For others, including myself, this is never the case.
  • ...17 more annotations...
  • appealing to another method won’t help either — for unless that method can be shown to be reliable, using it to determine the reliability of the first method answers nothing.
  • Every one of our beliefs is produced by some method or source, be it humble (like memory) or complex (like technologically assisted science). But why think our methods, whatever they are, are trustworthy or reliable for getting at the truth? If I challenge one of your methods, you can’t just appeal to the same method to show that it is reliable. That would be circular
  • How do we rationally defend our most fundamental epistemic principles? Like many of the best philosophical mysteries, this is a problem that can seem both unanswerable and yet extremely important to solve.
  • it seems to suggest that in the end, all “rational” explanations end up grounding out on something arbitrary. It all just comes down to what you happen to believe, what you feel in your gut, your faith. Human beings have historically found this to be a very seductive idea.
  • this is precisely the situation we seem to be headed towards in the United States. We live isolated in our separate bubbles of information culled from sources that only reinforce our prejudices and never challenge our basic assumptions. No wonder that — as the debates over evolution, or over what to include in textbooks, illustrate — we so often fail to reach agreement over the history and physical structure of the world itself. No wonder joint action grinds to a halt. When you can’t agree on your principles of evidence and rationality, you can’t agree on the facts. And if you can’t agree on the facts, you can hardly agree on what to do in the face of the facts.
  • We can’t decide on what counts as a legitimate reason to doubt my epistemic principles unless we’ve already settled on our principles—and that is the very issue in question.
  • The problem that skepticism about reason raises is not about whether I have good evidence by my principles for my principles. Presumably I do.[1] The problem is whether I can give a more objective defense of them. That is, whether I can give reasons for them that can be appreciated from what Hume called a “common point of view” — reasons that can “move some universal principle of the human frame, and touch a string, to which all mankind have an accord and symphony.”[2]
  • Any way you go, it seems you must admit you can give no reason for trusting your methods, and hence can give no reason to defend your most fundamental epistemic principles.
  • So one reason we should take the project of defending our epistemic principles seriously is that the ideal of civility demands it.
  • there is also another, even deeper, reason. We need to justify our epistemic principles from a common point of view because we need shared epistemic principles in order to even have a common point of view. Without a common background of standards against which we measure what counts as a reliable source of information, or a reliable method of inquiry, and what doesn’t, we won’t be able to agree on the facts, let alone values.
  • democracies aren’t simply organizing a struggle for power between competing interests; democratic politics isn’t war by other means. Democracies are, or should be, spaces of reasons.
  • we need an epistemic common currency because we often have to decide, jointly, what to do in the face of disagreement.
  • Sometimes we can accomplish this, in a democratic society, by voting. But we can’t decide every issue that way
  • We need some forms of common currency before we get to the voting booth.
  • Even if, as the skeptic says, we can’t defend the truth of our principles without circularity, we might still be able to show that some are better than others. Observation and experiment, for example, aren’t just good because they are reliable means to the truth. They are valuable because almost everyone can appeal to them. They have roots in our natural instincts, as Hume might have said.
  • that is one reason we need to resist skepticism about reason: we need to be able to give reasons for why some standards of reasons — some epistemic principles — should be part of that currency and some not.
  • Reasons for Reason, by Michael P. Lynch
kortanekev

Democracies Always Fail - Decline of the Empire - 0 views

  • The U.S. was never intended to be a democracy; it was designed as a Republic. The Rule of Law (the U.S. Constitution) was crafted to "bind down from mischief" the Rule of Man. Nevertheless, two centuries later the Rule of Law has been subverted completely, the Constitution is now meaningless, and we find ourselves living in a Fascist dictatorship run chiefly (though not solely) by Wall Street. A quiet coup d'etat has taken place over the span of many years and decades, and ours is an occupied country now. Even so, American fascism will ultimately fail and collapse. It is inevitable. Too much debt, too much misery, too many structural problems, and more Black Swans waiting in the wings than AmeriKa can possibly hope to survive. It's hard to say what the next chapter will look like, but it will be Hell on wheels, I'm sure.
  • The section highlighted above is a comment written in 2012 in response to this article. The commenter refers to the Rule of Law, the restriction of the arbitrary exercise of power by subordinating it to well-defined and established law, and claims that it has been completely subverted, so much so that the Constitution is meaningless. This claim seems quite radical, but how far away are we really from a state such as this? (Evie Kortanek 3/29/17)
tongoscar

Vagueness | The First Amendment Encyclopedia - 0 views

shared by tongoscar on 03 Nov 19
  • A law that defines a crime in vague terms is likely to raise due-process issues.
  • Vague laws raise problems with due process
  • a law is unconstitutionally vague when people “of common intelligence must necessarily guess at its meaning.”
  • ...7 more annotations...
  • Thus, in overturning a California loitering law that required persons who wander or loiter on the streets to provide “credible and reliable” identification in Kolender v. Lawson (1983), the Supreme Court explained that “the void-for-vagueness doctrine requires that a penal statute define the criminal offense with sufficient definiteness that ordinary people can understand what conduct is prohibited and in a manner that does not encourage arbitrary and discriminatory enforcement.”
  • the requirement that every law clearly define and articulate “the right to be observed, and the wrongs to be eschewed. . . .”
  • These examples undoubtedly were known to early American commentators and jurists, who often reiterated the importance of clarity in criminal statutes. James Madison in Federalist No. 62 warns of the “calamitous” results if laws are “so incoherent that they cannot be understood. . . .” In an early federal court case, United States v. Sharp (1815), the Court argued that laws that “create crimes, ought to be so explicit in themselves, or by reference to some other standard, that all men, subject to their penalties, may know what acts it is their duty to avoid.”
  • The Court has given three reasons vague statutes are unconstitutional
  • First, due process requires that a law provide fair warning, giving “persons of ordinary intelligence a reasonable opportunity to know what is prohibited, so that he may act accordingly.”
  • Second, the law must provide “explicit standards” to law enforcement officials, judges, and juries so as to avoid “arbitrary and discriminatory application.”
  • Third, a vague statute can “inhibit the exercise” of First Amendment freedoms and may cause speakers to “steer far wider of the unlawful zone . . . than if the boundaries of the forbidden areas were clearly marked.”
Javier E

A Note on Sentimentality « Kenyon Review Blog - 0 views

  • One of the underdiscussed aspects of literary taste is the principle of exclusion: Not what is welcomed in a work, but what is disallowed.
  • In the world of contemporary fiction, one of the biggest no-no’s is what goes by the name “sentimentality.” It’s hard to perceive it for what it is, when you’re on the inside, but this is actually an arbitrary allergy on our part; it’s analogous to the Victorian one against “immorality” in literature.
  • You can’t assert on a factual basis that sentimentality is wrong or false, that it somehow misrepresents the human experience—because mushy-gushy moments are an actual part of real lived life
  • ...2 more annotations...
  • “Good” artists instinctively exclude elements of what they know to be part of real life if they feel it may be “bad,” artistically speaking. The prudish Victorians regarded sexual language in fiction that way. Charles Dickens, obviously, knew that people have sex, but he would never spell such a thing out in a novel, even though he knew dirty language was part of real life (especially among the lowlifes he sometimes wrote about, like Fagin and company). It was “bad.” It may have been “bad” in a different sense—as in indecent, improper—but it was “bad” artistically as well, in that his sense of his audience kept him from being too graphic or explicit, either in scenes or dialogue. Meanwhile, Dickens was at liberty to engineer a scene in which, say, a tuberculosis-stricken orphan switches places at the guillotine with a virginal seamstress. Today, you can put all sorts of explicit sexual references in fiction, and the average critic won’t chide you for immorality or indecency; sentimentality will get you panned every time.
  • The critical temperament of an age shapes an age’s creativity not just in the supply-demand way, motivating writers to produce what is praised and valued by critics and readers. The critical temperament actually blocks off areas of life to create a portrayal of the world that fits its idea of the world. So a prudish era like the Victorian will target immorality—and a cynical or ironic era like ours will target sentimentality.
Javier E

The American Scholar: Hardwired for Talk? - Jessica Love - 0 views

  • during the last decade, the pendulum of scientific thought has begun its inevitable swing in the other direction. These days, general cognitive mechanisms, not language-specific ones, are all the rage. We humans are really smart. We’re fantastic at recognizing patterns in our environments—patterns that may have nothing to do with language. Who says that the same abilities that allow us to play the violin aren’t also sufficient for learning subject-verb agreement? Perhaps speech isn’t genetically privileged so much as babies are just really motivated to learn to communicate.
  • If the brain did evolve for language, how did it do so? An idea favored by some scholars is that better communicators may also have been more reproductively successful. Gradually, as the prevalence of these smooth talkers’ offspring increased in the population, the concentration of genes favorable to linguistic communication may have increased as well.
  • two recent articles, one published in 2009 in the Proceedings of the National Academy of Sciences and a 2012 follow-up in PLOS ONE (freely available), rebut this approach
  • ...4 more annotations...
  • Over the course of many generations, the gene pool thickens with helpful alleles until—voilà!—the overwhelming number of these alleles are helpful and learners’ guesses are so uncannily accurate as to seem instinctual. Makes sense, no? But now consider that languages change. (And in the real world they do—quickly.) If the language’s principles switch often, many of those helpfully biased alleles are suddenly not so helpful at all. For fast-changing languages, the model finds, neutral alleles win out. [A toy simulation of this model is sketched after these annotations.]
  • when the language is programmed to hardly mutate at all, the genes have a chance to adapt to the new language. The two populations become genetically distinct, their alleles heavily biased toward the idiosyncrasies of their local language—precisely what we don’t see in the real world
  • when the language is programmed to change quickly, neutral alleles are again favored.
  • maybe our brains couldn’t have evolved to handle language’s more arbitrary properties, because languages never stay the same and, as far as we know, they never have. What goes unspoken here is that the simulations seem to suggest that truly universal properties—such as language’s hierarchical nature—could have been encoded in our brains.
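
The model described in these annotations can be captured in a short simulation. The sketch below is a toy version, not the published model: the population size, mutation rate, and fitness scoring are all illustrative assumptions. Each learner carries one gene per binary principle of the language; an allele either biases the learner's guess about that principle or stays neutral, and fitness rewards guessing the current language correctly, i.e., learning it quickly.

    # Toy gene-language coevolution in the spirit of the simulations discussed
    # above (after Chater, Reali & Christiansen). All parameter values are
    # illustrative assumptions, not the published settings.
    import random

    N_PRINCIPLES = 10   # binary "principles" of the language (e.g., head order)
    POP_SIZE = 200      # learners per generation
    GENERATIONS = 500
    MUTATION = 0.01     # per-gene chance a child's allele is redrawn at random

    def run(lang_change_rate):
        """Return the final share of biased (non-neutral) alleles."""
        language = [random.randint(0, 1) for _ in range(N_PRINCIPLES)]
        # Alleles: +1/-1 bias the learner toward guessing 1/0 for the
        # matching principle; 0 is neutral (coin-flip guess).
        pop = [[random.choice([-1, 0, 1]) for _ in range(N_PRINCIPLES)]
               for _ in range(POP_SIZE)]

        def fitness(genome):
            score = 0.01  # small floor keeps every sampling weight positive
            for gene, principle in zip(genome, language):
                if gene == 0:
                    score += 0.5  # neutral guess: right half the time
                else:
                    # Biased guess: full credit if it matches the current
                    # language, nothing if it points the wrong way.
                    score += 1.0 if (gene == 1) == (principle == 1) else 0.0
            return score

        for _ in range(GENERATIONS):
            # The language itself drifts, slowly or quickly.
            language = [1 - p if random.random() < lang_change_rate else p
                        for p in language]
            weights = [fitness(g) for g in pop]
            parents = random.choices(pop, weights=weights, k=POP_SIZE)
            pop = [[random.choice([-1, 0, 1]) if random.random() < MUTATION
                    else gene for gene in parent] for parent in parents]

        biased = sum(1 for genome in pop for gene in genome if gene != 0)
        return biased / (POP_SIZE * N_PRINCIPLES)

    random.seed(1)
    print("slow-changing language, biased-allele share:", run(0.0005))
    print("fast-changing language, biased-allele share:", run(0.05))

With toy settings the numbers are noisy, but the mechanism is the one the article describes: when the language flips often, a biased allele is as likely to point the wrong way as the right way, so bias stops paying for itself and neutral, general-purpose learning holds its ground.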
Javier E

Moral Puzzles That Tots Struggle With | Mind & Matter - WSJ.com - 2 views

  • children are "intuitive sociologists" trying to make sense of the social world. We already know that very young children make up theories about everyday physics, psychology and biology. Dr. Rhodes thinks that they have theories about social groups, too.
  • children aren't just biased against other racial groups: They also assume that everybody else will be biased against other groups. And this extends beyond race, gender and religion to the arbitrary realm of Zazes and Flurps.
  • intuitive social theory may even influence how children develop moral distinctions
  • ...4 more annotations...
  • Back in the 1980s, Judith Smetana and colleagues discovered that very young kids could discriminate between genuinely moral principles and mere social conventions. First, the researchers asked about everyday rules—a rule that you can't be mean to other children, for instance, or that you have to hang up your clothes. The children said that, of course, breaking the rules was wrong. But then the researchers asked another question: What would you think if teachers and parents changed the rules to say that being mean and dropping clothes were OK? Children as young as 2 said that, in that case, it would be OK to drop your clothes, but not to be mean. No matter what the authorities decreed, hurting others, even just hurting their feelings, was always wrong. It's a strikingly robust result—true for children from Brazil to Korea.
  • in the new study, Dr. Rhodes asked similar moral questions about the Zazes and Flurps. The 4-year-olds said it would always be wrong for Zazes to hurt the feelings of others in their group. But if teachers decided that Zazes could hurt Flurps' feelings, then it would be OK to do so. Intrinsic moral obligations only extended to members of their own group.
  • The 4-year-olds demonstrate the deep roots of an ethical tension that has divided philosophers for centuries. We feel that our moral principles should be universal, but we simultaneously feel that there is something special about our obligations to our own group, whether it's a family, clan or country.
  • you don't have to be taught to prefer your own group—you can pick that up fine by yourself. But we do have to teach our children how to widen the moral circle, and to extend their natural compassion and care even to the Flurps.
Emily Freilich

Free Will Is Not Going Away : 13.7: Cosmos And Culture : NPR - 0 views

  • recent work in neuroscience is forcing a reconsideration of free will, to the point of questioning our freedom to choose. Many neuroscientists, and some philosophers, consider free will to be an illusion.
  • our brains decide a course of action before we know it.
  • The motor region responsible for making a motion in response to a question fired up before the subject was aware of it. The brain seems to be deciding before the mind knows about it.
  • ...3 more annotations...
  • imagine that in the future scientists will be able to map and decode all your mental states with arbitrary precision. They could then predict what you will do before you are aware of your choice.
  • the experiments in question here are limited to decisions that are far removed from the truly complex choices we make in our lives, those that involve a lot of back and forth thinking, prompt confusion, result in pondering, require talking to other people and generally take time to arrive at a conclusion
  • When it comes to the choices we make in life, there is a spectrum of complexity and this is reflected in the issue of free will. Some do indeed happen before conscious awareness, and others don't.
Javier E

Joshua Foer: John Quijada and Ithkuil, the Language He Invented : The New Yorker - 2 views

  • Languages are something of a mess. They evolve over centuries through an unplanned, democratic process that leaves them teeming with irregularities, quirks, and words like “knight.” No one who set out to design a form of communication would ever end up with anything like English, Mandarin, or any of the more than six thousand languages spoken today. “Natural languages are adequate, but that doesn’t mean they’re optimal,” John Quijada, a fifty-four-year-old former employee of the California State Department of Motor Vehicles, told me. In 2004, he published a monograph on the Internet that was titled “Ithkuil: A Philosophical Design for a Hypothetical Language.” Written like a linguistics textbook, the fourteen-page Web site ran to almost a hundred and sixty thousand words. It documented the grammar, syntax, and lexicon of a language that Quijada had spent three decades inventing in his spare time. Ithkuil had never been spoken by anyone other than Quijada, and he assumed that it never would be.
  • his “greater goal” was “to attempt the creation of what human beings, left to their own devices, would never create naturally, but rather only by conscious intellectual effort: an idealized language whose aim is the highest possible degree of logic, efficiency, detail, and accuracy in cognitive expression via spoken human language, while minimizing the ambiguity, vagueness, illogic, redundancy, polysemy (multiple meanings) and overall arbitrariness that is seemingly ubiquitous in natural human language.”
  • Ithkuil, one Web site declared, “is a monument to human ingenuity and design.” It may be the most complete realization of a quixotic dream that has entranced philosophers for centuries: the creation of a more perfect language.
  • ...25 more annotations...
  • Since at least the Middle Ages, philosophers and philologists have dreamed of curing natural languages of their flaws by constructing entirely new idioms according to orderly, logical principles.
  • What if, they wondered, you could create a universal written language that could be understood by anyone, a set of “real characters,” just as the creation of Arabic numerals had done for counting? “This writing will be a kind of general algebra and calculus of reason, so that, instead of disputing, we can say that ‘we calculate,’ ” Leibniz wrote, in 1679.
  • Inventing new forms of speech is an almost cosmic urge that stems from what the linguist Marina Yaguello, the author of “Lunatic Lovers of Language,” calls “an ambivalent love-hate relationship.” Language creation is pursued by people who are so in love with what language can do that they hate what it doesn’t. “I don’t believe any other fantasy has ever been pursued with so much ardor by the human spirit, apart perhaps from the philosopher’s stone or the proof of the existence of God; or that any other utopia has caused so much ink to flow, apart perhaps from socialism,”
  • Quijada began wondering, “What if there were one single language that combined the coolest features from all the world’s languages?”
  • Solresol, the creation of a French musician named Jean-François Sudre, was among the first of these universal languages to gain popular attention. It had only seven syllables: Do, Re, Mi, Fa, So, La, and Si. Words could be sung, or performed on a violin. Or, since the language could also be translated into the seven colors of the rainbow, sentences could be woven into a textile as a stream of colors.
  • “I had this realization that every individual language does at least one thing better than every other language,” he said. For example, the Australian Aboriginal language Guugu Yimithirr doesn’t use egocentric coördinates like “left,” “right,” “in front of,” or “behind.” Instead, speakers use only the cardinal directions. They don’t have left and right legs but north and south legs, which become east and west legs upon turning ninety degrees
  • Among the Wakashan Indians of the Pacific Northwest, a grammatically correct sentence can’t be formed without providing what linguists refer to as “evidentiality,” inflecting the verb to indicate whether you are speaking from direct experience, inference, conjecture, or hearsay.
  • In his “Essay Towards a Real Character, and a Philosophical Language,” from 1668, Wilkins laid out a sprawling taxonomic tree that was intended to represent a rational classification of every concept, thing, and action in the universe. Each branch along the tree corresponded to a letter or a syllable, so that assembling a word was simply a matter of tracing a set of forking limbs
  • he started scribbling notes on an entirely new grammar that would eventually incorporate not only Wakashan evidentiality and Guugu Yimithirr coördinates but also Niger-Kordofanian aspectual systems, the nominal cases of Basque, the fourth-person referent found in several nearly extinct Native American languages, and a dozen other wild ways of forming sentences.
  • he discovered “Metaphors We Live By,” a seminal book, published in 1980, by the cognitive linguists George Lakoff and Mark Johnson, which argues that the way we think is structured by conceptual systems that are largely metaphorical in nature. Life is a journey. Time is money. Argument is war. For better or worse, these figures of speech are profoundly embedded in how we think.
  • I asked him if he could come up with an entirely new concept on the spot, one for which there was no word in any existing language. He thought about it for a moment. “Well, no language, as far as I know, has a single word for that chin-stroking moment you get, often accompanied by a frown on your face, when someone expresses an idea that you’ve never thought of and you have a moment of suddenly seeing possibilities you never saw before.” He paused, as if leafing through a mental dictionary. “In Ithkuil, it’s ašţal.”
  • Neither Sapir nor Whorf formulated a definitive version of the hypothesis that bears their names, but in general the theory argues that the language we speak actually shapes our experience of reality. Speakers of different languages think differently. Stronger versions of the hypothesis go even further than this, to suggest that language constrains the set of possible thoughts that we can have. In 1955, a sociologist and science-fiction writer named James Cooke Brown decided he would test the Sapir-Whorf hypothesis by creating a “culturally neutral” “model language” that might recondition its speakers’ brains.
  • most conlangers come to their craft by way of fantasy and science fiction. J. R. R. Tolkien, who called conlanging his “secret vice,” maintained that he created the “Lord of the Rings” trilogy for the primary purpose of giving his invented languages, Quenya, Sindarin, and Khuzdul, a universe in which they could be spoken. And arguably the most commercially successful invented language of all time is Klingon, which has its own translation of “Hamlet” and a dictionary that has sold more than three hundred thousand copies.
  • He imagined that Ithkuil might be able to do what Lakoff and Johnson said natural languages could not: force its speakers to precisely identify what they mean to say. No hemming, no hawing, no hiding true meaning behind jargon and metaphor. By requiring speakers to carefully consider the meaning of their words, he hoped that his analytical language would force many of the subterranean quirks of human cognition to the surface, and free people from the bugs that infect their thinking.
  • Brown based the grammar for his ten-thousand-word language, called Loglan, on the rules of formal predicate logic used by analytical philosophers. He hoped that, by training research subjects to speak Loglan, he might turn them into more logical thinkers. If we could change how we think by changing how we speak, then the radical possibility existed of creating a new human condition.
  • today the stronger versions of the Sapir-Whorf hypothesis have “sunk into . . . disrepute among respectable linguists,” as Guy Deutscher writes, in “Through the Looking Glass: Why the World Looks Different in Other Languages.” But, as Deutscher points out, there is evidence to support the less radical assertion that the particular language we speak influences how we perceive the world. For example, speakers of gendered languages, like Spanish, in which all nouns are either masculine or feminine, actually seem to think about objects differently depending on whether the language treats them as masculine or feminine
  • The final version of Ithkuil, which Quijada published in 2011, has twenty-two grammatical categories for verbs, compared with the six—tense, aspect, person, number, mood, and voice—that exist in English. Eighteen hundred distinct suffixes further refine a speaker’s intent. Through a process of laborious conjugation that would befuddle even the most competent Latin grammarian, Ithkuil requires a speaker to home in on the exact idea he means to express, and attempts to remove any possibility for vagueness.
  • Every language has its own phonemic inventory, or library of sounds, from which a speaker can string together words. Consonant-poor Hawaiian has just thirteen phonemes. English has around forty-two, depending on dialect. In order to pack as much meaning as possible into each word, Ithkuil has fifty-eight phonemes. The original version of the language included a repertoire of grunts, wheezes, and hacks that are borrowed from some of the world’s most obscure tongues. One particular hard-to-make clicklike sound, a voiceless uvular ejective affricate, has been found in only a few other languages, including the Caucasian language Ubykh, whose last native speaker died in 1992.
  • Human interactions are governed by a set of implicit codes that can sometimes seem frustratingly opaque, and whose misreading can quickly put you on the outside looking in. Irony, metaphor, ambiguity: these are the ingenious instruments that allow us to mean more than we say. But in Ithkuil ambiguity is quashed in the interest of making all that is implicit explicit. An ironic statement is tagged with the verbal affix ’kçç. Hyperbolic statements are inflected by the letter ’m.
  • “I wanted to use Ithkuil to show how you would discuss philosophy and emotional states transparently,” Quijada said. To attempt to translate a thought into Ithkuil requires investigating a spectrum of subtle variations in meaning that are not recorded in any natural language. You cannot express a thought without first considering all the neighboring thoughts that it is not. Though words in Ithkuil may sound like a hacking cough, they have an inherent and unavoidable depth. “It’s the ideal language for political and philosophical debate—any forum where people hide their intent or obfuscate behind language,” Quijada added.
  • In Ithkuil, the difference between glimpsing, glancing, and gawking is the mere flick of a vowel. Each of these distinctions is expressed simply as a conjugation of the root word for vision. Hunched over the dining-room table, Quijada showed me how he would translate “gawk” into Ithkuil. First, though, since words in Ithkuil are assembled from individual atoms of meaning, he had to engage in some introspection about what exactly he meant to say. For fifteen minutes, he flipped backward and forward through his thick spiral-bound manuscript, scratching his head, pondering each of the word’s aspects, as he packed the verb with all of gawking’s many connotations. As he assembled the evolving word from its constituent meanings, he scribbled its pieces on a notepad. He added the “second degree of the affix for expectation of outcome” to suggest an element of surprise that is more than mere unpreparedness but less than outright shock, and the “third degree of the affix for contextual appropriateness” to suggest an element of impropriety that is less than scandalous but more than simply eyebrow-raising. As he rapped his pen against the notepad, he paged through his manuscript in search of the third pattern of the first stem of the root for “shock” to suggest a “non-volitional physiological response,” and then, after several moments of contemplation, he decided that gawking required the use of the “resultative format” to suggest “an event which occurs in conjunction with the conflated sense but is also caused by it.” He eventually emerged with a tiny word that hardly rolled off the tongue: apq’uxasiu. He spoke the first clacking syllable aloud a couple of times before deciding that he had the pronunciation right, and then wrote it down in the script he had invented for printed Ithkuil.
  • “You can make up words by the millions to describe concepts that have never existed in any language before,” he said.
  • Many conlanging projects begin with a simple premise that violates the inherited conventions of linguistics in some new way. Aeo uses only vowels. Kēlen has no verbs. Toki Pona, a language inspired by Taoist ideals, was designed to test how simple a language could be. It has just a hundred and twenty-three words and fourteen basic sound units. Brithenig is an answer to the question of what English might have sounded like as a Romance language, if vulgar Latin had taken root on the British Isles. Láadan, a feminist language developed in the early nineteen-eighties, includes words like radíidin, defined as a “non-holiday, a time allegedly a holiday but actually so much a burden because of work and preparations that it is a dreaded occasion; especially when there are too many guests and none of them help.”
  • “We think that when a person learns Ithkuil his brain works faster,” Vishneva told him, in Russian. She spoke through a translator, as neither she nor Quijada was yet fluent in their shared language. “With Ithkuil, you always have to be reflecting on yourself. Using Ithkuil, we can see things that exist but don’t have names, in the same way that Mendeleyev’s periodic table showed gaps where we knew elements should be that had yet to be discovered.”
  • Lakoff, who is seventy-one, bearded, and, like Quijada, broadly built, seemed to have read a fair portion of the Ithkuil manuscript and familiarized himself with the language’s nuances. “There are a whole lot of questions I have about this,” he told Quijada, and then explained how he felt Quijada had misread his work on metaphor. “Metaphors don’t just show up in language,” he said. “The metaphor isn’t in the word, it’s in the idea,” and it can’t be wished away with grammar. “For me, as a linguist looking at this, I have to say, ‘O.K., this isn’t going to be used.’ It has an assumption of efficiency that really isn’t efficient, given how the brain works. It misses the metaphor stuff. But the parts that are successful are really nontrivial. This may be an impossible language,” he said. “But if you think of it as a conceptual-art project I think it’s fascinating.”
sgardner35

Edward Snowden: The World Says No to Surveillance - NYTimes.com - 0 views

  • MOSCOW — TWO years ago today, three journalists and I worked nervously in a Hong Kong hotel room, waiting to see how the world would react to the revelation that the National Security Agency had been making records of nearly every phone call in the United States. In the days that followed, those journalists and others published documents revealing that democratic governments had been monitoring the private activities of ordinary citizens who had done nothing wrong.
  • Privately, there were moments when I worried that we might have put our privileged lives at risk for nothing — that the public would react with indifference, or practiced cynicism, to the revelations.
  • Since 2013, institutions across Europe have ruled similar laws and operations illegal and imposed new restrictions on future activities. The United Nations declared mass surveillance an unambiguous violation of human rights. In Latin America, the efforts of citizens in Brazil led to the Marco Civil, an Internet Bill of Rights. Recognizing the critical role of informed citizens in correcting the excesses of government, the Council of Europe called for new laws to protect whistle-blowers.
  • ...2 more annotations...
  • are now enabled by default in the products of pioneering companies like Apple, ensuring that even if your phone is stolen, your private life remains private. Such structural technological changes can ensure access to basic privacies beyond borders, insulating ordinary citizens from the arbitrary passage of anti-privacy laws, such as those now descending upon Russia.
  • Spymasters in Australia, Canada and France have exploited recent tragedies to seek intrusive new powers despite evidence such programs would not have prevented attacks. Prime Minister David Cameron of Britain recently mused, “Do we want to allow a means of communication between people which we cannot read?” He soon found his answer, proclaiming that “for too long, we have been a passively tolerant society, saying to our citizens: As long as you obey the law, we will leave you alone.”
Javier E

Enlightenment's Evil Twin - The Atlantic - 0 views

  • The first time I can remember feeling like I didn’t exist, I was 15. I was sitting on a train and all of a sudden I felt like I’d been dropped into someone else’s body. My memories, experiences, and feelings—the things that make up my intrinsic sense of “me-ness”—projected across my mind like phantasmagoria, but I felt like they belonged to someone else. Like I was experiencing life in the third person.
  • It’s characterized by a pervasive and disturbing sense of unreality in both the experience of self (called “depersonalization”) and one’s surroundings (known as “derealization”); accounts of it are surreal, obscure, shrouded in terms like “unreality” and “dream,” but they’re often conveyed with an almost incongruous lucidity.
  • It’s not a psychotic condition; the sufferers are aware that what they’re perceiving is unusual. “We call it an ‘as if’ disorder. People say they feel as if they’re in a movie, as if they’re a robot,” Medford says.
  • ...13 more annotations...
  • Studies carried out with college students have found that brief episodes are common in young people, with a prevalence ranging from 30 to 70 percent. It can happen when you’re jet-lagged, hungover, or stressed. But for roughly 1 to 2 percent of the population, it becomes persistent, and distressing
  • Research suggests that areas of the brain that are key to emotional and physical sensations, such as the amygdala and the insula, appear to be less responsive in chronic depersonalization sufferers. You might become less empathetic; your pain threshold might increase. These numbing effects mean that it’s commonly conceived as a defense mechanism; Hunter calls it a “psychological trip switch” which can be triggered in times of stress.
  • Have you ever played that game when you repeat a word over and over again until it loses all meaning? It’s called semantic satiation. Like words, can a sense of self be broken down into arbitrary, socially-constructed components?
  • That question may be why the phenomenon has attracted a lot of interest from philosophers. In a sense, the experience presupposes certain notions of how the self is meant to feel. We think of a self as an essential thing—a soul or an ego that everyone has and is aware of—but scientists and philosophers have been telling us for a while now that the self isn’t quite as it seems
  • there is no center in the brain where the self is generated. “What we experience is a powerful depiction generated by our brains for our benefit,” he writes. Brains make sense of data that would otherwise be overwhelming. “Experiences are fragmented episodes unless they are woven together in a meaningful narrative,” he writes, with the self being the story that “pulls it all together.”
  • “The unity [of self that] we experience, which allows us legitimately to talk of ‘I,’ is a result of the Ego Trick—the remarkable way in which a complicated bundle of mental events, made possible by the brain, creates a singular self, without there being a singular thing underlying it,”
  • depersonalization is both a burden, a horrible burden—but it’s in some strange way a blessing, to reach some depths, some meaning which somehow comes only in the broken mirror,” Bezzubova says. “It’s a Dostoyevsky style illumination—where clarity cannot be distinguished from pain.”
  • for her, the experience is pleasant. “It’s helped me in my life,” she says. Over the past few years, she has learned to interpret her experiences in a Buddhist context, and she describes depersonalization as a “deconditioning” of sorts: “The significance I place on the world is all in my mind,”
  • “I believe I am on the path to enlightenment,” she says.
  • The crossover between dark mental states and Buddhist practices is being investigated
  • Mindfulness has become increasingly popular in the West over the past few years, but as Britton told The Atlantic, the practice in its original form isn’t just about relaxation: It’s about the often painstaking process of coming to terms with three specific insights of the Theravadin Buddhist tradition, which are anicca, or impermanence; dukkha, or dissatisfaction; and anatta, or not-self.
  • depersonalization must cause the patient distress and have an impact on her daily functioning for it to be classified as clinically significant. In this sense, it seems inappropriate to call Alice’s experiences pathological. “We have ways of measuring disorders, but you have to ask if it’s meaningful. It’s an open question,”
  • “I think calling it a loss of self is maybe a convenient shorthand for something that’s hard to capture,” he says. “I prefer to talk about experience—because that’s what’s important in psychiatry.”
Javier E

Why Disruptors Are Always White Guys -- NYMag - 0 views

  • There are three pernicious and interrelated phenomena at work here. First, founders are disproportionately white dudes. Second, white dudes are disproportionately encouraged to become founders. Third, white dudes are disproportionately recognized as founders.
  • Ultimately, this phenomenon can lead to the erasure of women and minorities in leadership roles from the picture
  • The research is broad and deep on the “glass walls” that keep women from becoming entrepreneurs, or the profound discrimination that many people of color face when starting their own businesses. Investors like a pitch coming from a man better than coming from a woman — even when the pitch is the same. Minority-owned firms get rejected for loans at twice the rate of white-owned firms, and often pay higher interest rates when they get them, too. 
  • ...5 more annotations...
  • There’s a single discriminatory phenomenon underpinning all three of these issues, sometimes called “think manager, think male.”
  • “Probably the single most important hurdle for women in management in all industrialized countries is the persistent stereotype that associates management with being male,” one study reads
  • Other studies find distinctly similar stereotypes related to people of color and leadership.
  • In today’s media economy, we’re facing the “think entrepreneur, think white dude” problem.
  • it need not be so. As a very preliminary step, if publications insisted on putting women and minorities on their stupid, arbitrary lists, it would elevate those entrepreneurs and founders. It might help break down the deep stereotypes that help to discourage women and minorities from becoming entrepreneurs in the first place.
Javier E

The Agency Moment - NYTimes.com - 0 views

  • I’ve been thinking about moments of agency of this sort because often you see people who lack full agency. Sometimes you see lack of agency among the disadvantaged. Their lives can be so blown about by economic disruption, arbitrary bosses and general disorder that they lose faith in the idea that input leads to predictable output.
  • Among the privileged, especially the privileged young, you see people who have been raised to be approval-seeking machines. They act active, busy and sleepless, but inside they often feel passive and not in control. Their lives are directed by other people’s expectations, external criteria and definitions of success that don’t actually fit them.
  • So many people are struggling for agency. They are searching for the solid criteria that will help them make their own judgments. They are hoping to light an inner fire that will fuel relentless action in the same direction.
  • ...1 more annotation...
  • Agency is not automatic. It has to be given birth to, with pushing and effort. It’s not just the confidence and drive to act. It’s having engraved inner criteria to guide action. The agency moment can happen at any age, or never. I guess that’s when adulthood starts.
Javier E

How to Raise a University's Profile: Pricing and Packaging - NYTimes.com - 0 views

  • I talked to a half-dozen of Hugh Moren’s fellow students. A highly indebted senior who was terrified of the weak job market described George Washington, where he had invested considerable time getting and doing internships, as “the world’s most expensive trade school.” Another mentioned the abundance of rich students whose parents were giving them a fancy-sounding diploma the way they might a new car. There are serious students here, he acknowledged, but: “You can go to G.W. and essentially buy a degree.”
  • A recent study from the Organization for Economic Cooperation and Development found that, on average, American college graduates score well below college graduates from most other industrialized countries in mathematics. In literacy (“understanding, evaluating, using and engaging with written text”), scores are just average. This comes on the heels of Richard Arum and Josipa Roksa’s “Academically Adrift,” a study that found “limited or no learning” among many college students.
  • Instead of focusing on undergraduate learning, colleges have been engaged in the kind of building spree I saw at George Washington. Recreation centers with world-class workout facilities and lazy rivers rise out of construction pits even as students and parents are handed staggeringly large tuition bills. Colleges compete to hire famous professors even as undergraduates wander through academic programs that often lack rigor or coherence. Campuses vie to become the next Harvard — or at least the next George Washington — while ignoring the growing cost and suspect quality of undergraduate education.
  • ...58 more annotations...
  • Mr. Trachtenberg understood the centrality of the university as a physical place. New structures were a visceral sign of progress. They told visitors, donors and civic leaders that the institution was, like beams and scaffolding rising from the earth, ascending. He added new programs, recruited more students, and followed the dictate of constant expansion.
  • the American research university had evolved into a complicated and somewhat peculiar organization. It was built to be all things to all people: to teach undergraduates, produce knowledge, socialize young men and women, train workers for jobs, anchor local economies, even put on weekend sports events. And excellence was defined by similarity to old, elite institutions. Universities were judged by the quality of their scholars, the size of their endowments, the beauty of their buildings and the test scores of their incoming students.
  • John Silber embarked on a huge building campaign while bringing luminaries like Saul Bellow and Elie Wiesel on board to teach and lend their prestige to the B.U. name, creating a bigger, more famous and much more costly institution. He had helped write a game plan for the aspiring college president.
  • GWU is, for all intents and purposes, a for-profit organization. Best example: study abroad. Their top program, a partnering with Sciences Po, costs each student (30 of them, on a program with 'prestige' status?) a full semester's tuition. It costs GW, according to Sciences Po website, €1000. A neat $20,000 profit per student (who is digging her/himself deeper and deeper in debt.) Moreover, the school takes a $500 admin fee for the study abroad application! With no guarantee that all credits transfer. Students often lose a partial semester, GW profits again. Nor does GW offer help with an antiquated, one-shot/no transfers, tricky registration process. It's tough luck in gay Paris. Just one of many examples. Dorms with extreme mold, off-campus housing impossible for freshmen and sophomores. Required meal plan: Chick-o-Filet etc. Classes with over 300 students (required). This is not Harvard, but costs same. Emotional problems? Counselors too few. Suicides continue and are not appropriately addressed. Caring environment? Extension so and so, please hold. It's an impressive campus, I'm an alum. If you apply, make sure the DC experience is worth the price: good are internships, a few colleges like Elliot School, post-grad. GWU uses undergrad $$ directly for building projects, like the medical center to which students have NO access. (Student health facility is underfunded, outsourced.) Outstanding professors still make a difference. But is that enough?
  • Mr. Trachtenberg, however, understood something crucial about the modern university. It had come to inhabit a market for luxury goods. People don’t buy Gucci bags merely for their beauty and functionality. They buy them because other people will know they can afford the price of purchase. The great virtue of a luxury good, from the manufacturer’s standpoint, isn’t just that people will pay extra money for the feeling associated with a name brand. It’s that the high price is, in and of itself, a crucial part of what people are buying.
  • Mr. Trachtenberg convinced people that George Washington was worth a lot more money by charging a lot more money. Unlike most college presidents, he was surprisingly candid about his strategy. College is like vodka, he liked to explain.
  • The Absolut Rolex plan worked. The number of applicants surged from some 6,000 to 20,000, the average SAT score of students rose by nearly 200 points, and the endowment jumped from $200 million to almost $1 billion.
  • The university became a magnet for the children of new money who didn’t quite have the SATs or family connections required for admission to Stanford or Yale. It also aggressively recruited international students, rich families from Asia and the Middle East who believed, as nearly everyone did, that American universities were the best in the world.
  • U.S. News & World Report now ranks the university at No. 54 nationwide, just outside the “first tier.”
  • The watch and vodka analogies are correct. Personally, I used car analogies when discussing college choices with my kids. We were in the fortunate position of being able to comfortably send our kids to any college in the country and have them leave debt free. Notwithstanding, I told them that they would be going to a state school unless they were able to get into one of about 40 schools that I felt, in whatever arbitrary manner I decided, was worth the extra cost. They both ended up going to state schools. College is by and large a commodity and you get out of it what you put into it. Both of my kids worked hard in college and were involved in school life. They both left the schools better people and the schools better schools for them being there. They are both now successful adults. I believe too many people look for the prestige of a named school and that is not what college should be primarily about.
  • In 2013, only 14 percent of the university’s 10,000 undergraduates received a grant — a figure on a par with elite schools but far below the national average. The average undergraduate borrower leaves with about $30,800 in debt.
  • When I talk to the best high school students in my state I always stress the benefits of the honors college experience at an affordable public university. For students who won't qualify for a public honors college, the regular public university experience is far preferable to the huge debt of places like GW.
  • Carey would do well to look beyond high ticket private universities (which after all are still private enterprises) and what he describes as the Olympian heights of higher education (which for some reason seems also to embitter him) and look at the system overall . The withdrawal of public support was never a policy choice; it was a political choice, "packaged and branded" as some tax cutting palaver all wrapped up in the argument that a free-market should decide how much college should cost and how many seats we need. In such an environment, trustees at private universities are no more solely responsible for turning their degrees into commodities than the administrations of state universities are for raising the number of out-of-state students in order to offset the loss of support from their legislatures. No doubt, we will hear more about market based solutions and technology from Mr. Carey
  • I went to GW back in the 60s. It was affordable and it got me away from home in New York. While I was there, Newsweek famously published an article about the DC Universities - GW, Georgetown, American and Catholic - dubbing them the Pony league, the schools for the children of wealthy middle class New Yorkers who couldn't get into the Ivy League. Nobody really complained. But that wasn't me. I went because I wanted to be where the action was in the 60s, and as we used to say - "GW was literally a stone's throw from the White House. And we could prove it." Back then, the two biggest alumni names were Jackie Kennedy, who'd taken some classes there, and J. Edgar Hoover. Now, according to the glossy magazine they send me each month, it's the actress Kerry Washington. There's some sort of progress there, but I'm a GW alum and not properly trained to understand it.
  • This explains a lot of the modern, emerging mentality. It encompasses the culture of enforced grade inflation, cheating and anti-intellectualism in much of higher education. It is consistent with our culture of misleading statistics and information, cronyism and fake quality, the "best and the brightest" being only schemers and glad handers. The wisdom and creativity engendered by an honest, rigorous academic education are replaced by the disingenuous quick fix, the winner-take-all mentality that neglects the common good.
  • I attended nearby Georgetown University and graduated in 1985. Relative to state schools and elite schools, it was expensive then. I took out loans. I had Pell grants. I had work-study and GSL. I paid my debt of $15,000 off in ten years. Would I have done it differently? Yes: I would have continued on to graduate school and not worried about paying off those big loans right after college. My career worked out and I am grateful for the education I received and paid for. But I would not recommend to my nieces and nephews debts north of $100,000 for a BA in liberal arts. Go community. Then go state. Then punch your ticket to Harvard, Yale or Stanford — if you are good enough.
  • American universities appear to have more and more drifted away from educating individuals and citizens to becoming high priced trade schools and purveyors of occupational licenses. Lost in the process is the concept of expanding a student's ability to appreciate broadly and deeply, as well as the belief that a republican democracy needs an educated citizenry, not a trained citizenry, to function well.Both the Heisman Trophy winner and the producer of a successful tech I.P.O. likely have much in common, a college education whose rewards are limited to the financial. I don't know if I find this more sad on the individual level or more worrisome for the future of America.
  • This is now a consumer world for everything, including institutions once thought to float above the Shakespearean briars of the work-a-day world such as higher education, law and medicine. Students get this. Parents get this. Everything is negotiable: financial aid, a spot in the nicest dorm, tix to the big game. But through all this, there are faculty - lots of 'em - who work away from the fluff to link the ambitions of the students with the reality and rigor of the 21st century. The job of the student is to get beyond the visible hype of the surroundings and find those faculty members. They will make sure your investment is worth it
  • My experience in managing or working with GW alumni in their 20's or 30's has not been good. Virtually all have been mentally lazy and/or had a stunning sense of entitlement. Basically they've been all talk and no results. That's been quite a contrast to the graduates from VA/MD state universities.
  • More and more, I notice what my debt-financed contributions to the revenue streams of my vendors earn them, not me. My banks earned enough to pay ridiculous bonuses to employees for reckless risk-taking. My satellite tv operator earned enough to overpay ESPN for sports programming that I never watch--and that, in turn, overpays these idiotic pro athletes and college sports administrators. My health insurer earned enough to defeat one-payor insurance; to enable the opaque, inefficient billing practices of hospitals and other providers; and to feed the behemoth pharmaceutical industry. My church earned enough to buy the silence of sex abuse victims and oppose progressive political candidates. And my govt earned enough to continue ag subsidies, inefficient defense spending, and obsolete transportation and energy policies.
  • as the parent of a GWU freshman I am grateful for every opportunity afforded her. She has a generous merit scholarship, is in the honors program with some small classes, and has access to internships that can be done while at school. GWU also gave her AP credits to advance her to sophomore status. Had she attended the state flagship school (where she was accepted into that exclusive honors program) she would have a great education but little else. It's not possible to do a foreign-affairs-related internship far from D.C. or Manhattan. She went to a very competitive high school where for the one or two ivy league schools in which she was interested, she didn't have the same level of connections or wealth as many of her peers. Whether because of the Common Application or other factors, getting into a good school with financial help is difficult for a middle class student like my daughter who had a 4.0 GPA and 2300 on the SAT. She also worked after school. The bottom line - GWU offered more money than perceived "higher tier" universities, and brought tuition to almost that of our state school system. And by the way, I think she is also getting a very good education.
  • This article reinforces something I have learned during my daughter's college application process. Most students choose a school based on emotion (reputation) and not value. This luxury good analogy holds up.
  • The entire education problem can be solved by MOOCs: lots and lots of them, plus a few closely monitored tests and personal interviews with people. Of course many, many people make MONEY off of our entirely inefficient way of "educating" -- are we even really doing that? -- getting a degree does NOT mean one is actually educated
  • As a first-generation college graduate I entered GW ambitious but left saddled with debt, and crestfallen at the hard-hitting realization that my four undergraduate years were an aberration from what life is actually like post-college: not as simple as getting an [unpaid] internship with a fancy titled institution, as most Colonials do. I knew how to get into college, but what do you do after the recess of life ends? I learned more about networking, resume plumping (designated responses to constituents...errr....replied to emails), and elevator pitches than actual theory, economic principles, strong writing skills, critical thinking, analysis, and philosophy. While relatively easy to get a job after graduating (for many with a GW degree this is sadly not the case) sustaining one and excelling in it is much harder. It's never enough just to be able to open a new door, you also need to be prepared to navigate your way through that next opportunity.
  • this is a very telling article. Aimless and directionless high school graduates are matched only by aimless and directionless institutes of higher learning. Each child and each parent should start with a goal - before handing over their hard earned tuition dollars, and/or leaving a trail of broken debt in the aftermath of a substandard, unfocused education.
  • It is no longer the most expensive university in America. It is the 46th. Others have been implementing the Absolut Rolex Plan. John Sexton turned New York University into a global higher-education player by selling the dream of downtown living to students raised on “Sex and the City.” Northeastern followed Boston University up the ladder. Under Steven B. Sample, the University of Southern California became a U.S. News top-25 university. Washington University in St. Louis did the same.
  • I currently attend GW, and I have to say, this article completely misrepresents the situation. I have yet to meet a single person who is paying the full $60k tuition - I myself am paying $30k, because the school gave me $30k in grants. As for the quality of education, Foreign Policy rated GW the #8 best school in the world for undergraduate education in international affairs, Princeton Review ranks it as one of the best schools for political science, and U.S. News ranks the law school #20. The author also ignores the role that an expanding research profile plays in growing a university's prestige and educational power.
  • And in hundreds of regional universities and community colleges, presidents and deans and department chairmen have watched this spectacle of ascension and said to themselves, “That could be me.” Agricultural schools and technical institutes are lobbying state legislatures for tuition increases and Ph.D. programs, fitness centers and arenas for sport. Presidents and boards are drawing up plans to raise tuition, recruit “better” students and add academic programs. They all want to go in one direction — up! — and they are all moving with a single vision of what they want to be.
  • This is the same playbook used by hospitals for the past 30 years or so. It is how Hackensack Hospital became Hackensack Medical Center and McComb Hospital became Southwest Mississippi Regional Medical Center. No wonder the results have been the same in healthcare and higher education; both have priced themselves out of reach for average Americans.
  • A world where a college is rated not by the quality of its output but, instead, by the quality of its inputs. A world where there is practically no work to be done by the administration because the college's reputation is made before the first class even begins! This is insanity! But this is the swill that the mammoth college marketing departments nationwide have shoved down America's throat. Colleges are ranked not by the quality of their graduates but rather by the test scores of their incoming students!
  • The Pew Foundation has been doing surveys on what students learn, how much homework they do, how much time they spend with professors, etc. All good stuff to know before a student chooses a school. It is called the National Survey of Student Engagement (NSSE, pronounced "Nessie"). It turns out that the higher-ranked schools do NOT allow their information to be released to the public. It is SECRET. Why do you think that is?
  • The article blames the fact that "the standard university organizational model left teaching responsibilities to autonomous academic departments and individual faculty members, each of which taught and tested in its own way." This is the view of someone who has never taught at a university, nor thought much about how education there actually happens. Once undergraduates get beyond the general requirements, their educations _have_ to depend on "autonomous departments," because only those departments know what the requirements for a given degree can be, and only they can grant the necessary accreditation of a given student. The idea that some administrator could know what's necessary for degrees in everything from engineering to fiction writing is nonsense, except that's what the people who only know the theory of education (but not its practice) actually seem to think. In the classroom itself, you have tremendously talented people who nevertheless have their own particular strengths and approaches. Don't you think it's a good idea to let them do what they do best rather than trying to make everyone teach the same way? Don't you think supervision of young teachers by older colleagues, who actually know their field and its pedagogy, rather than by some administrator who knows nothing of the subject, is a good idea?
  • It makes me very sad to see how expensive some public schools have become. It used to be that you could work your way through a public school without loans, but not any more. Like you, I had the advantage of a largely scholarship-paid undergraduate education at a top private college. However, I was also offered a virtually free spot in my state university's (then new) honors college.
  • My daughter attended a good community college for a couple of classes during her senior year of high school, and I could immediately see how such places are laboratories for failure. They seem like high schools in atmosphere and appearance. Students rush in by car and rush out again when the class is over. The four-year residential college creates a completely different feel. On arrival, you get the sense that you are engaging in something important, something apart, and something that will require your full attention. I don't say this is for everyone or that the model is not flawed in some ways (students actually only spend 2 1/2 yrs. on campus to get the four-yr. degree). College is supposed to be a 60-hour-per-week job. Anything less than that and the student is shortchanging himself or herself.
  • This. Is. STUNNING. I have always wondered, especially as my kids have approached college age, why American colleges have felt justified in raising tuition at a rate that has well exceeded inflation, year after year after year. (Nobody needs a dorm with luxury suites and a lazy-river pool at college!) And as it turns out, they did it to become luxury brands. Just that simple. Incredible. I don't even blame this guy at GWU for doing what he did. He wasn't made responsible for all of American higher ed. But I do think we all need to realize what happened, and why. This is front-page stuff.
  • I agree with you, but, unfortunately, given the choice between low tuition, primitive dorms, and no athletic center versus expensive and luxurious facilities, the customers (and their parents) are choosing the latter. As long as this is the case, there is little incentive to provide a bare-bones, cheap education.
  • Wesleyan University in CT is one school that is moving down the rankings. Syracuse University is another. Reed College is a third. Why? Because these schools try hard to stay out of the marketing game. (With its new president, Syracuse has jumped back into the game.) Bryn Mawr College, outside Philadelphia, hasn't fared well over the past few decades in the rankings, which is true of practically every women's college. Wellesley is by far the highest-ranked women's college, but even there the acceptance rate is significantly higher than one finds at comparable coed liberal arts colleges like Amherst & Williams. The University of Chicago is another fascinating case for Mr. Carey to study (I'm sure he does in his forthcoming book, which I look forward to reading). Although it has always enjoyed an illustrious academic reputation, until recently Chicago's undergraduate reputation paled in comparison to peer institutions on the two coasts. A few years ago, Chicago changed its game plan to more closely resemble Harvard and Stanford in undergraduate amenities, and lo and behold, its rankings shot up. It was a very cynical move on the president's part to reassemble the football team, but it was a shrewd move, because athletics draw more money than academics ever can (except at engineering schools like Caltech & MIT), and more money draws richer students from fancier secondary schools with higher test scores, which leads to higher rankings - and the beat goes on.
  • College INDUSTRY is out of control. Sorry, NYU, GW, and BU are not worth the price. Are state schools any better? We have the University of Michigan, which is really not a state school but a university that gives a discount to people who live in Michigan. Why do I say that? When you have an undergraduate body that is 40+% out-of-state and pays tuition of over $50K/year, you tell me. Perhaps the solution is two years of community college followed by two at places like U of M or Michigan State - get the same diploma at the end for much less and beat the system.
  • In one recent year, the majority of undergrad professors at Harvard, according to Boston.com, were adjuncts. That means low pay, no benefits, no office - temp workers. At Harvard. Easily available student loans fueled this arms race of amenities and frills in which colleges now engage. They moved the cost of education onto the backs of people - kids - who don't understand what they are doing. Students in colleges these days are customers, and the customers must be able to get through. If it requires dumbing things down, so be it. On top of tuition, GWU is known by its students as the land of added fees on top of added fees. The joke around campus was that they would soon be installing pay toilets in the student union. No one was laughing.
  • You could have written the same story about my alma mater, American University. The place reeked of ambition and upward mobility decades ago and still does. Whoever's running it now must look at its measly half-billion-dollar endowment, compare it to GWU's $1.5 billion, and seethe with envy, while GWU's president sets his sights on an Ivy League-size endowment. And both get back to their real jobs: 24/7 fundraising, which is what university presidents are all about these days. Money - including million-dollar salaries for themselves (GWU's president made more than Harvard's in 2011) - pride, cachet, power, a mansion, first-class all the way. They should just be honest about it and change their university's motto to Ostende mihi pecuniam! (please excuse my questionable Latin). Whether the students are actually learning anything is up to them, I guess - if they do, it's thanks to the professors, adjuncts, and administrative staff, who do the actual work of educating and keep the school running.
  • When I was in HS (the '70s), many of my richer friends went to GW, and I was then of the impression that GW was a 'good' school. As I age, I have come to realize that this place is just another façade over the emptiness that has become America. All too often we are faced with a dilemma: damned if we do, damned if we don't. Yep, 'education' has become a trap for all too many of our citizens.
  • I transferred to GWU from a state school. I am forever grateful that I did. I wanted to get a good, rigorous education and go to one of the best International Affairs schools in the world. Even though the state school I went to was dirt-cheap, the education and the faculty were awful. I transferred to GW and was amazed at the professors at that university. An ambassador or a prominent IA scholar taught every class. GW is an expensive school, but that is the free market. If you want a good education you need to be willing to pay for it or join the military. I did the latter, and my school was completely free with no debt, and I received an amazing education. If young people aren't willing to make some sort of sacrifice to get ahead, or just expect everything to be given to them, then our country is in a sad state. We need to stop blaming universities like GWU that strive to attract better students, better professors, and better infrastructure. They are doing what is expected in America: bettering oneself.
  • "Whether the students are actually learning anything is up to them, I guess." How could it possibly be otherwise??? I am glad that you are willing to give credit to teachers and administrators, but it is not they who "do the actual work of educating." From this fallacy comes its corollary, that we should blame teachers first for "under-performing schools". This long-running show of scapegoating may suit the wallets and vanity of American parents, but it is utterly senseless. When, if ever, American culture stops reeking of arrogance, greed and anti-intellectualism, things may improve, and we may resume the habit of bothering to learn. Until then, nothing doing.
  • Universities sell knowledge and grade students on how much they have learned. Fundamentally, there is a conflict of interest in this setup. Moreover, students who are poorly educated, even if they know this, will not criticize their school, because doing so would make it harder for them to have a career. As such, many problems with higher education remain unexposed to the public.
  • I've lectured and taught in at least five different countries on three continents, and the shortest perusal of what goes on abroad would totally undermine most of these speculations. For one thing, American universities are unique in their dedication to a broad-based, liberal-arts-type education. In France, Italy, or Germany, for example, you select a major like mathematics or physics, and then in your four years you will not take even one course in another subject. The amount of work you do that is critically evaluated by an instructor is a tiny fraction of what is done in an American university. While half-educated critics, on the basis of profoundly incomplete research, write criticism like this, universities in Germany, Italy, the Netherlands, South Korea, and Japan, as well as France, have appointed committees and made studies to explain why the American system of higher education so drastically outperforms their own. Elsewhere students do get a rather nice dose of general education, but it ends in secondary school, and it has the narrowness and formulaic quality that we would normally associate with that. The character who wrote this article probably never set foot on a "campus" of the University of Paris or Rome.
  • The university is part of a complex economic system, and it is responding to the demands of that system. For example, students and parents choose universities that have beautiful campuses and buildings, so universities build beautiful campuses. State support of universities has greatly declined, and this decline in funding is the greatest cause of increased tuition. Therefore universities must compete for dollars and must build to attract students and parents. Also, universities are not ranked on how well they educate students -- that's difficult to measure, so it is not measured. Instead, universities are ranked on research publications. So while universities certainly put much effort into teaching, research has to have priority in order for the university to survive. Also, universities do not force students and parents to attend high-priced institutions. Reasonably priced state institutions and community colleges are available to every student. Community colleges have an advantage because they are funded by property taxes. Finally, learning requires good teaching, but it also requires students who come to the university funded, prepared, and engaged. This often does not happen. Conclusion: universities have to participate in profile-raising actions in order to survive. The day that funding is provided for college, rankings are based on education, and students choose campuses with simple buildings, things will change at the university.
  • This is the inevitable result of privatizing higher education. In the not-so-distant past, we paid for great state universities through our taxes, not tuition. Then the states shifted funding to prisons, and the Federal government radically cut research support and the GI Bill. Today, instead, we expect universities to support themselves through tuition, and to the extent that we offer students support, it is through non-dischargeable loans. To make matters worse, the interest rates on those loans are far above the government's cost of funds -- so in effect the loans are an excise tax on education (most of which is used to support a handful of for-profit institutions that account for the most student defaults). This "consumer sovereignty" privatized model of funding education works no better than privatizing California's electrical system did in the era of Enron, or our privatized funding of medical services, or our increasingly privatized prison system: it drives up costs at the same time that it replaces quality with marketing.
  • There are data in some instances on student learning, but the deeper problem, as I suspect the author already knows, is that there is nothing like a consensus on how to measure that learning, or even on what the proper end point to emphasize is (a lot of what I teach -- I know this from what students have told me -- tends to come into sharp focus years after graduation).
  • Michael (Baltimore) has hit the nail on the head. Universities are increasingly corporatized institutions in the credentialing business. Knowledge, for those few who care about it (often not those paying for the credentials), is available freely because there's no profit in it. Like many corporate entities, they are increasingly run by highly paid administrators, not faculty.
  • GWU has not defined itself in any unique way; it has merely embraced the bland, but very expensive, accoutrements of American private education: luxury dorms, food courts, spa-like gyms, endless extracurricular activities, etc. But the real culprit for this bloat that students have to bear financially is the college ranking system of US News, Princeton Review, etc. -- an ultimately meaningless exercise in competition that has nevertheless pushed colleges and universities to be more like one another. A sad state of affairs, and an extremely expensive one for students.
  • It is long past time to recognize the failure of the Reaganomics-neoliberal program of private profits over public good. In education, we need to return to public institutions publicly funded. Just as we need to recognize that Medicare, Social Security, the post office, public utilities, fire departments, the interstate highway system, Veterans Administration hospitals, and the GI Bill are models to be improved and expanded, not destroyed.
  • George Washington is actually not a Rolex watch; it is a counterfeit Rolex. The real Rolexes of higher education -- places like Hopkins, Georgetown, Duke, the Ivies, etc. -- have real endowments and real financial aid. No middle-class kid is required to borrow $100,000 to get a degree from those schools, because they offer generous need-based financial aid in the form of grants, not loans. The tuition at the real Rolexes is really a sticker price that only the wealthy pay -- everybody else is on a sliding scale. For middle-class kids who are fortunate enough to get in, Penn actually ends up costing considerably less than a state university. The fake Rolexes -- BU, NYU, Drexel in Philadelphia -- don't have the sliding scale. They bury middle-class students in debt. And really, though it is foolish to borrow $100,000 or $120,000 for an undergraduate degree, I don't find the transaction morally wrong. What is morally wrong is our federal government making that loan non-dischargeable in bankruptcy, so many of these kids will be having their wages garnished for the REST OF THEIR LIVES. There is a very simple solution to this, by the way: cap the amount of non-dischargeable student loan debt at, say, $50,000.
  • The slant of this article is critical of the growth of research universities. Couldn't disagree more. Modern research universities are incredible engines of economic opportunity, not only for the students (who pay the bills) but also for the community, via the creation of blue- and white-collar jobs. Large research universities employ tens of thousands of locals, from custodial and food service workers right up to high-level administrators and specialists in finance, computer services, buildings and facilities management, etc. Johns Hopkins University and the University of Maryland system employ more people than any other industry in Maryland -- including the government. Research universities typically have hospitals providing cutting-edge medical care to the community. Local businesses (from cafes to property rental companies) benefit from a built-in, long-term client base as well as an educated workforce. And of course they are the foundry of new knowledge, which is critical for the future growth of our country. Check out the work of famed economist Dr. Julia Lane on modeling the economic value of the research university. In a nutshell, there are few better investments America can make in herself than research universities. We are the envy of the world in that regard -- and with good reason. How many *industries* (let alone jobs) has Stanford University alone catalyzed?
  • What universities have a monopoly on is the credential. Anyone can learn -- from books, from free lectures on the internet, from this newspaper, etc. But only universities can endow you with the cherished degree. For some reason, people are willing to pay more for one of these pieces of paper with a certain name on it -- Ivy League, Stanford, even GW -- than another -- Generic State U -- though there is no evidence one is actually worth more in the marketplace of reality than the other. But by the laws of economics, these places are actually underpriced: after all, something like 20 times more people are trying to buy a Harvard education than are allowed to purchase one. Usually that means you raise your price.
  • Overall a good article, except for this: "This comes on the heels of Richard Arum and Josipa Roksa’s “Academically Adrift,” a study that found “limited or no learning” among many college students." The measure of learning you report was a general thinking-skills exam. That's not a good measure of college gains. Most psychologists and cognitive scientists worth their salt would tell you that improvement in critical thinking skills is going to be limited to specific areas. In other words, learning critical thinking skills in math will make little change in critical thinking about political science or biology. Thus we should not expect huge improvements in general critical thinking skills, but rather improvements in a student's major and other areas of focus, such as a minor. Although who has time for a minor when it is universally acknowledged that the purpose of a university is to please and profit an employer or, if one is lucky, an investor? Finally, improved critical thinking skills are not the be-all and end-all of a college education, even given this profit-centered perspective. Learning and mastering the cumulative knowledge of past generations is arguably the most important thing to be gained, and most universities still tend to excel at that, even with the increasing mandate to run education like a business and to cultivate and cull the college "consumer".
  • As for community colleges, there was an article in the Times several years ago that said it much better than I could have said it myself: community colleges are places where dreams are put on hold. Without making the full commitment to study, without leaving the home environment, many, if not most, community college students are caught betwixt and between, trying to balance work responsibilities, caring for a young child or baby and attending classes. For males, the classic "end of the road" in community college is to get a car, a job and a girlfriend, one who is not in college, and that is the end of the dream. Some can make it, but most cannot.
  • As a scientist, I disagree with the claim that undergrad tuition subsidizes basic research. Nearly all lab equipment and research personnel (grad students, technicians, anyone with the title "research scientist" or similar) on campus are paid for through federal grants. Professors often spend all their time outside teaching and administration writing grant proposals, as the limited federal grant funds mean ~85% of proposals must be rejected. What is more, out of each successful grant the university levies a "tax," called "overhead," of 30-40%, nominally to pay for basic operations (utilities, office space, administrators). So in fact one might say research helps fund the university rather than the other way around.
  • It's certainly overrated as a research and graduate-level university. Whether it is good for getting an undergraduate education is unclear, but a big part of the appeal is getting to live in D.C. while attending college instead of living in some small college town in the cornfields.
Javier E

Narcissism Is Increasing. So You're Not So Special. - The New York Times - 1 views

  • A 2010 study in the journal Social Psychological and Personality Science found that the percentage of college students exhibiting narcissistic personality traits, based on their scores on the Narcissistic Personality Inventory, a widely used diagnostic test, has increased by more than half since the early 1980s, to 30 percent. In their book “The Narcissism Epidemic,” the psychology professors Jean M. Twenge and W. Keith Campbell show that narcissism has increased as quickly as obesity has since the 1980s. Even our egos are getting fat.
  • This is a costly problem. While full-blown narcissists often report high levels of personal satisfaction, they create havoc and misery around them. There is overwhelming evidence linking narcissism with lower honesty and raised aggression.
  • narcissism isn’t an either-or characteristic. It’s more of a set of progressive symptoms (like alcoholism) than an identifiable state (like diabetes). Millions of Americans exhibit symptoms, but still have a conscience and a hunger for moral improvement. At the very least, they really don’t want to be terrible people.
  • ...13 more annotations...
  • Rousseau wrote about “amour-propre,” a kind of self-love based on the opinions of others. He considered it unnatural and unhealthy, and believed that arbitrary social comparison led to people wasting their lives trying to look and sound attractive to others.
  • Narcissus falls in love not with himself, but with his reflection. In the modern version, Narcissus would fall in love with his own Instagram feed, and starve himself to death while compulsively counting his followers.
  • If our egos are obese with amour-propre, social media can indeed serve up the empty emotional carbs we crave. Instagram and the like don't create a narcissist, but studies suggest they act as an accelerant — a near ideal platform to facilitate what psychologists call “grandiose exhibitionism.”
  • No doubt you have seen this in others, and maybe even a little of it in yourself as you posted a flattering selfie — and then checked back 20 times for “likes.”
  • A healthy self-love that leads to true happiness is what Rousseau called “amour de soi.” It builds up one’s intrinsic well-being, as opposed to feeding shallow cravings to be admired.
  • First, take the Narcissistic Personality Inventory test.
  • Here is an individual self-improvement strategy that combines a healthy self-love (for Valentine’s Day) with a small sacrifice (possibly for Lent).
  • Cultivating amour de soi requires being fully alive at this moment, as opposed to being virtually alive while wondering what others think. The soulful connection with another person, the enjoyment of a beautiful hike alone (not shared on Facebook) or a prayer of thanks over your sleeping child (absent a #blessed tweet) could be considered expressions of amour de soi.
  • Second, get rid of the emotional junk food that is feeding any unhealthy self-obsession. Make a list of opinions to disregard — especially those of flatterers and critics — and review the list each day. Resolve not to waste a moment trying to impress others,
  • Third, go on a social media fast. Post to communicate, praise and learn — never to self-promote.
  • As for clinically significant narcissism—along with greed, invidious prejudice, and habitual lying—it is simply another one of our anti-social behaviors that mutated from our basic genetic drives…in this case the drive to survive. The opposite of narcissism is empathy, a brain-wiring that evolved much later and in parallel with our increased reliance on social interaction as a means to improve the chances of sending our genes down the line (the drive to reproduce). There is thus a certain irony in the fact that the misnamed “social” media are encouraging a decline in empathy. Your thoughts?
  • Sure you're not confusing narcissism with vanity? If you've ever had the misfortune of having someone with narcissistic personality disorder in your life, you would know it's about more than selfies and seeking constant approval. They are truly sick individuals who destroy the lives of those they claim to love. I would say people's addiction to social media "likes" and posting selfies is vanity.
  • Perhaps we need to distinguish between Narcissistic Personality Disorder (NPD) and the adjective "narcissistic." We all know lots of people with way too much self-regard. NPD, on the other hand, ruins lives and certainly families. People who have NPD are way beyond self-centered. They see the world as black and white, and all the people they interact with become reflections. People with NPD go to extreme lengths to control those around them and will lie, cheat, and steal to do that. They are never wrong; the other person is always wrong. I have worked for narcissists and lived with one. Let's not throw around this term without defining it, please.
Javier E

Guns, Germs, and The Future of Us - Wyatt Edward Gates - Medium - 0 views

  • Jared Diamond’s seminal work Guns, Germs, and Steel has many flaws, but it provides some useful anecdotes about how narrative and consciousness shape the way human organization progresses
  • Past critical transformations of thought can help us see how we need to transform ourselves now in order to survive the future.
  • something both ancient and immediate: the way we define who is in our tribe plays a critical role in what kind of social organization we can build and maintain
  • ...25 more annotations...
  • You can’t have a blood family of 300 million, nor even a large enough one to do things like build an agrarian society
  • In order to have large cities built on agrarianism it was necessary not only to innovate technology, but to transform our very consciousness as it related to how we defined what a person was, both ourselves and others
  • Instead of needing to have real, flowing blood with common DNA from birth, it was merely necessary to be among the same abstract family organized under a king of some kind — a kind of stand in for the father or patriarch. We developed law and law enforcement as abstract disembodied voices of the father. This allowed total strangers without any family ties to interact in the same society in a constructive and organized way. Thus: civilization as we know it
  • Those ancient polities have developed finally into the Nation, a kind of tribe so fully abstracted that you can be of any blood and language and religion and still function within it.
  • So, too, are all other forms of human separation — and the opposition and conflicts they spawn — illusory in nature. We moved beyond blood, but then it was language or religion or fealty that made it impossible to work together, and we warred over that
  • we’re told these borders mean everything, that they are real and urgent and demand constant sacrifice to maintain.
  • why is that border there? Why borders?
  • We’re stuck in a mode of thinking that’s no longer sensible. There isn’t a reason for borders. There never really was, but now more than ever we have no utility for them, no need for them
  • What humanity has to do is wake up to the reality of post-tribalism. This means seeing through all these invented borders to the truth that we are all people, we are all fundamentally the same, and we can all learn to live with one another.
  • It was the idea of necessary conflict based on blood that preceded the fights that appeared to justify the belief in that blood-based conflict.
  • Nations have saturated the entire globe. There are no more frontiers. It’s all Nations butting up against one another.
  • We are all people of a similar nature and we do have the option to relate to one another as people for the sake of saving our shared homes and futures. We all hunger and thirst and become lonely, we all laugh and weep in the same language. Stripped of confounding symbols we are undivided.
  • There are a lot of people upset about the illusion of borders. They want a different reality, one in which there are Good Tribes (their tribe) and Bad Tribes (all the other ones).
  • but the world is already so mixed together they can’t draw those borders anymore. Hence: fascism.
  • There are no firm foundations for defining this tribe, however, so he’s left to cobble together some kind of ad hoc notion of in- and out-group. Like a magpie he collects ways of dividing people as appeals to his caprice: race, sex, Nation, etc., but there’s no greater sense to it, so it’s all arbitrary, all a mess.
  • No amount of magical thinking from conservatives can change the reality of globalism, however; what one Nation does to pollute will affect us all, and that is according to the laws of physics. No political movement can change those physics. We have to adapt or perish.
  • a key part of it is a simple lack of imagination. He just doesn’t realize there’s an option to not have borders, because his entire consciousness is married to the idea of of-me and not-of-me, Us and Them, and if there is no Them there can’t be an Us, and therefore life stops making sense
  • What has to be true if there are no tribes? We have no need to discriminate among who we may love. Loving and caring for all people as if they were blood family is the path forward
  • There needs to be a new story for us to share. It’s not enough to stop believing in the old way of borders, we have to actively seek out a new way of thinking and speaking and living that reflects the world as it is and as it can be.
  • there are others who have more tangible investments in borders: Those who have grown fat off the conflicts driven by these invented borders don’t want us to see how pointless it all is. These billionaires and presidents and kings want us to keep fighting against one another over the borders they so lazily define because it gives them a means of power and control.
  • We have to be ready for their opposition, however. They’ll do what they can to force us to act as if their borders are real. We don’t need to listen, though we do need to be ready to sacrifice.
  • Without a globally-coordinated response we can’t resolve a globally-driven problem such as climate change. If we can grant the humanity of all people we can start to imagine ways of relating to one another that aren’t opposed and antagonistic, but which are cooperative and aimed at harmony.
  • This transformation of consciousness must happen in our own hearts and minds before it can happen in concert.
  • the Nation has already been shown to be unnecessary because of social globalism. Pick a major city on earth and you’ll find every kind of person living together in peace! Not perfect peace, but not constant and unavoidable war, and that is what counts.
  • We can’t keep pretending as if borders matter when we can so clearly see that they don’t, but we can’t just have no story at all, there must be a way of contextualizing a future without borders. I don’t know what that story is, exactly, but I believe it is something like love writ large. Once we’re ready to start telling it we can start living it.
lucieperloff

Model Of Red Flag Law For States Shared By Justice Department : NPR - 0 views

  • part of the Biden administration's ongoing effort to curb U.S. gun violence.
  • "We welcome the opportunity to work with communities in the weeks and months ahead in our shared commitment to end gun violence."
  • combines two distinct approaches adopted by states where red flag laws already exist.
  • ...9 more annotations...
  • gives law enforcement an immediate vehicle with which they can seek to seize a person's guns once a court has determined that the individual presents a danger to themselves or others.
  • provides for longer-term prohibition of a person's access to firearms.
  • combines the two approaches and would authorize the courts to more quickly issue a prohibition on a person's right to possess firearms
  • The department said it was not endorsing any specific implementation of an ERPO law, but the Monday guidelines were instead meant to provide a framework
  • they enable a person's loved ones to reach out to law enforcement and report dangerous behavior before a gun owner is able to endanger themselves or others.
  • the rules are too arbitrary and can be weaponized against gun owners during personal disputes.
  • "By allowing family members or law enforcement to intervene and to petition for these orders before warning signs turn into tragedy, 'extreme risk protection orders' can save lives,"
  • a proposal that would strengthen restrictions on individuals' ability to convert pistols into "short-barreled rifles,"
  • In April, Biden unveiled a series of executive actions, aimed at what he called the nation's "international embarrassment" of gun violence.
Javier E

How Does Science Really Work? | The New Yorker - 1 views

  • I wanted to be a scientist. So why did I find the actual work of science so boring? In college science courses, I had occasional bursts of mind-expanding insight. For the most part, though, I was tortured by drudgery.
  • I’d found that science was two-faced: simultaneously thrilling and tedious, all-encompassing and narrow. And yet this was clearly an asset, not a flaw. Something about that combination had changed the world completely.
  • “Science is an alien thought form,” he writes; that’s why so many civilizations rose and fell before it was invented. In his view, we downplay its weirdness, perhaps because its success is so fundamental to our continued existence.
  • ...50 more annotations...
  • In school, one learns about “the scientific method”—usually a straightforward set of steps, along the lines of “ask a question, propose a hypothesis, perform an experiment, analyze the results.”
  • That method works in the classroom, where students are basically told what questions to pursue. But real scientists must come up with their own questions, finding new routes through a much vaster landscape.
  • Since science began, there has been disagreement about how those routes are charted. Two twentieth-century philosophers of science, Karl Popper and Thomas Kuhn, are widely held to have offered the best accounts of this process.
  • For Popper, Strevens writes, “scientific inquiry is essentially a process of disproof, and scientists are the disprovers, the debunkers, the destroyers.” Kuhn’s scientists, by contrast, are faddish true believers who promulgate received wisdom until they are forced to attempt a “paradigm shift”—a painful rethinking of their basic assumptions.
  • Working scientists tend to prefer Popper to Kuhn. But Strevens thinks that both theorists failed to capture what makes science historically distinctive and singularly effective.
  • Sometimes they seek to falsify theories, sometimes to prove them; sometimes they’re informed by preëxisting or contextual views, and at other times they try to rule narrowly, based on the evidence at hand.
  • Why do scientists agree to this scheme? Why do some of the world’s most intelligent people sign on for a lifetime of pipetting?
  • Strevens thinks that they do it because they have no choice. They are constrained by a central regulation that governs science, which he calls the “iron rule of explanation.” The rule is simple: it tells scientists that, “if they are to participate in the scientific enterprise, they must uncover or generate new evidence to argue with”; from there, they must “conduct all disputes with reference to empirical evidence alone.”
  • It is “the key to science’s success,” because it “channels hope, anger, envy, ambition, resentment—all the fires fuming in the human heart—to one end: the production of empirical evidence.”
  • Strevens arrives at the idea of the iron rule in a Popperian way: by disproving the other theories about how scientific knowledge is created.
  • The problem isn’t that Popper and Kuhn are completely wrong. It’s that scientists, as a group, don’t pursue any single intellectual strategy consistently.
  • Exploring a number of case studies—including the controversies over continental drift, spontaneous generation, and the theory of relativity—Strevens shows scientists exerting themselves intellectually in a variety of ways, as smart, ambitious people usually do.
  • “Science is boring,” Strevens writes. “Readers of popular science see the 1 percent: the intriguing phenomena, the provocative theories, the dramatic experimental refutations or verifications.” But, he says, behind these achievements . . . are long hours, days, months of tedious laboratory labor. The single greatest obstacle to successful science is the difficulty of persuading brilliant minds to give up the intellectual pleasures of continual speculation and debate, theorizing and arguing, and to turn instead to a life consisting almost entirely of the production of experimental data.
  • Ultimately, in fact, it was good that the geologists had a “splendid variety” of somewhat arbitrary opinions: progress in science requires partisans, because only they have “the motivation to perform years or even decades of necessary experimental work.” It’s just that these partisans must channel their energies into empirical observation. The iron rule, Strevens writes, “has a valuable by-product, and that by-product is data.”
  • Science is often described as “self-correcting”: it’s said that bad data and wrong conclusions are rooted out by other scientists, who present contrary findings. But Strevens thinks that the iron rule is often more important than overt correction.
  • Eddington was never really refuted. Other astronomers, driven by the iron rule, were already planning their own studies, and “the great preponderance of the resulting measurements fit Einsteinian physics better than Newtonian physics.” It’s partly by generating data on such a vast scale, Strevens argues, that the iron rule can power science’s knowledge machine: “Opinions converge not because bad data is corrected but because it is swamped.”
  • Why did the iron rule emerge when it did? Strevens takes us back to the Thirty Years’ War, which concluded with the Peace of Westphalia, in 1648. The war weakened religious loyalties and strengthened national ones.
  • Two regimes arose: in the spiritual realm, the will of God held sway, while in the civic one the decrees of the state were paramount. As Isaac Newton wrote, “The laws of God & the laws of man are to be kept distinct.” These new, “nonoverlapping spheres of obligation,” Strevens argues, were what made it possible to imagine the iron rule. The rule simply proposed the creation of a third sphere: in addition to God and state, there would now be science.
  • Strevens imagines how, to someone in Descartes’s time, the iron rule would have seemed “unreasonably closed-minded.” Since ancient Greece, it had been obvious that the best thinking was cross-disciplinary, capable of knitting together “poetry, music, drama, philosophy, democracy, mathematics,” and other elevating human disciplines.
  • We’re still accustomed to the idea that a truly flourishing intellect is a well-rounded one. And, by this standard, Strevens says, the iron rule looks like “an irrational way to inquire into the underlying structure of things”; it seems to demand the upsetting “suppression of human nature.”
  • Descartes, in short, would have had good reasons for resisting a law that narrowed the grounds of disputation, or that encouraged what Strevens describes as “doing rather than thinking.”
  • In fact, the iron rule offered scientists a more supple vision of progress. Before its arrival, intellectual life was conducted in grand gestures.
  • Descartes’s book was meant to be a complete overhaul of what had preceded it; its fate, had science not arisen, would have been replacement by some equally expansive system. The iron rule broke that pattern.
  • by authorizing what Strevens calls “shallow explanation,” the iron rule offered an empirical bridge across a conceptual chasm. Work could continue, and understanding could be acquired on the other side. In this way, shallowness was actually more powerful than depth.
  • it also changed what counted as progress. In the past, a theory about the world was deemed valid when it was complete—when God, light, muscles, plants, and the planets cohered. The iron rule allowed scientists to step away from the quest for completeness.
  • The consequences of this shift would become apparent only with time
  • In 1713, Isaac Newton appended a postscript to the second edition of his “Principia,” the treatise in which he first laid out the three laws of motion and the theory of universal gravitation. “I have not as yet been able to deduce from phenomena the reason for these properties of gravity, and I do not feign hypotheses,” he wrote. “It is enough that gravity really exists and acts according to the laws that we have set forth.”
  • What mattered, to Newton and his contemporaries, was his theory’s empirical, predictive power—that it was “sufficient to explain all the motions of the heavenly bodies and of our sea.”
  • Descartes would have found this attitude ridiculous. He had been playing a deep game—trying to explain, at a fundamental level, how the universe fit together. Newton, by those lights, had failed to explain anything: he himself admitted that he had no sense of how gravity did its work
  • Strevens sees its earliest expression in Francis Bacon’s “The New Organon,” a foundational text of the Scientific Revolution, published in 1620. Bacon argued that thinkers must set aside their “idols,” relying, instead, only on evidence they could verify. This dictum gave scientists a new way of responding to one another’s work: gathering data.
  • Quantum theory—which tells us that subatomic particles can be “entangled” across vast distances, and in multiple places at the same time—makes intuitive sense to pretty much nobody.
  • Without the iron rule, Strevens writes, physicists confronted with such a theory would have found themselves at an impasse. They would have argued endlessly about quantum metaphysics.
  • Following the iron rule, they can make progress empirically even though they are uncertain conceptually. Individual researchers still passionately disagree about what quantum theory means. But that hasn’t stopped them from using it for practical purposes—computer chips, MRI machines, G.P.S. networks, and other technologies rely on quantum physics.
  • One group of theorists, the rationalists, has argued that science is a new way of thinking, and that the scientist is a new kind of thinker—dispassionate to an uncommon degree.
  • As evidence against this view, another group, the subjectivists, points out that scientists are as hopelessly biased as the rest of us. To this group, the aloofness of science is a smoke screen behind which the inevitable emotions and ideologies hide.
  • At least in science, Strevens tells us, “the appearance of objectivity” has turned out to be “as important as the real thing.”
  • The subjectivists are right, he admits, inasmuch as scientists are regular people with a “need to win” and a “determination to come out on top.”
  • But they are wrong to think that subjectivity compromises the scientific enterprise. On the contrary, once subjectivity is channelled by the iron rule, it becomes a vital component of the knowledge machine. It’s this redirected subjectivity—to come out on top, you must follow the iron rule!—that solves science’s “problem of motivation,” giving scientists no choice but “to pursue a single experiment relentlessly, to the last measurable digit, when that digit might be quite meaningless.”
  • If it really was a speech code that instigated “the extraordinary attention to process and detail that makes science the supreme discriminator and destroyer of false ideas,” then the peculiar rigidity of scientific writing—Strevens describes it as “sterilized”—isn’t a symptom of the scientific mind-set but its cause.
  • The iron rule—“a kind of speech code”—simply created a new way of communicating, and it’s this new way of communicating that created science.
  • Other theorists have explained science by charting a sweeping revolution in the human mind; inevitably, they’ve become mired in a long-running debate about how objective scientists really are
  • In “The Knowledge Machine: How Irrationality Created Modern Science” (Liveright), Michael Strevens, a philosopher at New York University, aims to identify that special something. Strevens is a philosopher of science
  • Compared with the theories proposed by Popper and Kuhn, Strevens’s rule can feel obvious and underpowered. That’s because it isn’t intellectual but procedural. “The iron rule is focused not on what scientists think,” he writes, “but on what arguments they can make in their official communications.”
  • Like everybody else, scientists view questions through the lenses of taste, personality, affiliation, and experience
  • geologists had a professional obligation to take sides. Europeans, Strevens reports, tended to back Wegener, who was German, while scholars in the United States often preferred Simpson, who was American. Outsiders to the field were often more receptive to the concept of continental drift than established scientists, who considered its incompleteness a fatal flaw.
  • Strevens’s point isn’t that these scientists were doing anything wrong. If they had biases and perspectives, he writes, “that’s how human thinking works.”
  • Eddington’s observations were expected to either confirm or falsify Einstein’s theory of general relativity, which predicted that the sun’s gravity would bend the path of light, subtly shifting the stellar pattern. For reasons having to do with weather and equipment, the evidence collected by Eddington—and by his colleague Frank Dyson, who had taken similar photographs in Sobral, Brazil—was inconclusive; some of their images were blurry, and so failed to resolve the matter definitively.
  • it was only natural for intelligent people who were free of the rule’s strictures to attempt a kind of holistic, systematic inquiry that was, in many ways, more demanding. It never occurred to them to ask if they might illuminate more collectively by thinking about less individually.
  • In the single-sphered, pre-scientific world, thinkers tended to inquire into everything at once. Often, they arrived at conclusions about nature that were fascinating, visionary, and wrong.
  • How Does Science Really Work? Science is objective. Scientists are not. Can an “iron rule” explain how they’ve changed the world anyway? By Joshua Rothman, September 28, 2020