Honesty Is The Best Policy Even Though It Doesn’t Pay!

“Famed Duke expert on human dishonesty suspected of fraud. Manipulated data in study of truth and behaviour threatens career of popular TED Talk star Dan Ariely” (https://www.timeshighereducation.com/news/famed-duke-expert-human-dishonesty-suspected-fraud). This news headline hit me like a bolt from the blue. Dan Ariely (https://danariely.com/) is a Duke University professor of psychology and behavioural economics and the author of best-selling books including “The Honest Truth about Dishonesty.” Another headline screamed at me: Israeli-American academic Ariely is under renewed scrutiny for his role in research found to be based on falsified data (https://www.timesofisrael.com/claims-swirl-around-academic-ariely-after-honesty-study-found-to-be-dishonest/). With age, I have learnt to expect the unexpected, but this news hit me hard, both because I am an ardent fan of Dan’s work and because he is married to one Sumedha Gupta, a person of Indian extraction who is an accomplished academic in her own field.

From settling traffic-rule violations on the roadside with the police to CAT-score scandals in admissions to the IIMs, cheating and dishonesty are ever-present parts of our national news cycle and unavoidable parts of the human condition. Call it rational or call it irrational, the honest truth is that dishonesty is entirely human. We are dishonestly honest or even honestly dishonest.

Our teachers, parents and family elders always taught us to be honest. Later, during our higher education, we were persuaded by economists, ethicists and business sages that honesty is the best policy. But when we hit the streets and lived our lives, to our surprise, our pet theories failed to stand up. Treachery, we found, can pay. There is no compelling economic reason to tell the truth or keep one’s word; punishment for the treacherous in the real world is neither swift nor sure.

Economists tell us that trust is enforced in the marketplace through retaliation and reputation. If you violate a trust, your victim is apt to seek revenge and others are likely to stop doing business with you, at least on favourable terms. It sounds logical but is unreal. Cases that apparently demonstrate the awful consequences of abusing trust turn out to be few and weak, while evidence that treachery can pay seems compelling. Compared with the few ambiguous tales of treachery punished, we can find numerous stories in which deceit was unquestionably rewarded.

What do professional athletes and football players do? They sign a long-term contract and, after one good year, threaten to quit unless the contract is renegotiated. The stupidity of it all is that they get their way.

Does treachery eventually get punished in the long term? Nothing in the record suggests it does. Men seldom rise from low condition to high rank without employing either force or fraud. Power can be an effective substitute for trust. Power, the ability to do others great harm or great good, can induce widespread amnesia. Switching loyalties to contrary political ideologies is one way of working around power. Sometimes the powerful leave no other choice. Babus and bureaucrats have to play ball with the politicians in power, no matter how badly they were treated in the past or expect to be treated in the future. Usually, though, power is not that absolute, and some degree of trust is a necessary ingredient in their working relationships.

Powerful people and business people do not stand on principle when it comes to dealing with abusers of power and trust. When the expected reward is substantial and the greed becomes really strong, reference checking goes out the window. In the eyes of people blinded by greed, the most tarnished reputations shine brightly. Even with a fully disclosed public record of bad faith, hard-nosed people will still try to find reasons to trust. Trust breakers are not only unhindered by bad reputations, they are also usually spared retaliation by the parties they injure. The difference between right and wrong evaporates as ‘MIGHT’ becomes ‘RIGHT.’

Mistrust can be a self-fulfilling prophecy. People aren’t exclusively saints or sinners; few adhere to an absolute moral code. Most respond to circumstances, and their integrity and trustworthiness can depend as much on how they are treated as on their basic character. Initiating a relationship assuming that the other party is going to try to get you may induce him or her to do exactly that.

By and large, most people are intrinsically honest. It’s just the tails, the ends of the bell-shaped curve, which are dishonest in any industry, in any area. So it’s just a question of tolerating them.

Honesty is, in fact, primarily a moral choice; there is little factual or logical basis for the conviction that it pays. Without values, without a basic preference for right over wrong, trust based on such self-delusion would crumble in the face of temptation. It is also true, however, that by and large most people are neither powerful nor power-seekers. For such people, honesty is the most logical policy for leading a contented life.

Most of us choose virtue because we want to believe in ourselves and have others respect and believe in us. And for this, we should be happy. We can be proud of a system in which people are honest because they want to be, not because they have to be. Materially, too, trust based on morality provides great advantages. It allows us to join in great and exciting enterprises that we could never undertake if we relied on economic incentives alone.

Dishonesty and mistrust are as rational or irrational as greed or deceit. Forgiving past lapses can make righteous and godly sense. People do change. After all, Ratnakar, a dacoit, did become Maharshi Valmiki; so we are told!

*****

First published 25 Aug 2021

*****

“Likes” “Follows” “Shares” and “Comments” are welcome.

We hope to see energetic, constructive and thought provoking conversations. To ensure the quality of the discussion, we may edit the comments for clarity, length, and relevance. Kindly do not force us to delete your comments by making them overly promotional, mean-spirited, or off-topic.

***

Business of Education

Education dates back to the very first humans to inhabit the Earth. Why? To survive, every generation has found it necessary to pass on its accumulated knowledge, skills, values and traditions to the next generation. How can it do this? Education! Each subsequent generation must be taught these things. Stretching the idea wider, even animals educate their offspring, in their own ways, in matters of safety, food-gathering and survival.

Education is a human right, and ‘education in human rights’ is itself a fundamental human right. The Universal Declaration of Human Rights affirms that education is a fundamental human right for everyone, and this right was further detailed in the Convention against Discrimination in Education. The right to education entails:

  1. Primary education that is free, compulsory and universal
  2. Secondary education, including technical and vocational, that is generally available, accessible to all and progressively free and
  3. Higher education, accessible to all on the basis of individual capacity and progressively free.

The Right to Education Act, 2009 describes the modalities of free and compulsory education for children aged 6 to 14 years in India under Article 21A of the Constitution of India. Compulsory means no child can refuse to be educated.

This act has made education a fundamental right for every child. Delivery of a fundamental right would not be a business even if the government were to entrust it to any of its instrumentalities, agencies or authorities.

Any business has customers who have the right to accept or reject the products or services offered to them by the business entity. By this definition, at least education for children aged 6 to 14 years cannot be a business.

The history of formal education extends at least as far back as the first written records recovered from ancient civilizations. India had the good fortune of having institutions of higher education: Takshashila even before the 5th century B.C. and, centuries later, Nalanda. Education in India was always focussed on careers: scriptures for Brahmins, battle-science and governance for Kshatriyas, and crafts for others. The Muslim invaders and the Christian missionaries influenced the education system to a large extent, the former using force and the latter demonstration. Macaulay nearly destroyed the system, though Swami Dayanand and his contemporaries tried to preserve it.

Horace Mann, credited with laying the foundation of the modern American public education system, saw that the industrializing world demanded different skills than its agricultural predecessor. He prioritized certain aspects over others: for example, lumping students into groups rather than treating them as individuals. This made “education” much easier, even if it did nothing for the individual student who did not adapt well to the new system. It is worth reminding ourselves of the key characteristics of the industrial era, and how they are manifested in the education system that continues to be emulated in India to this day:

  • Schools focus on respecting authority
  • Schools focus on punctuality
  • Schools focus on measurement
  • Schools focus on basic literacy
  • Schools focus on basic arithmetic

Notice how these reinforce each other. You enter the system one way, and are crammed through an extended moulding process. The result? A “good enough” cog to jam into an industrial machine.

Higher education institutions are plagued by the erosion of academic integrity, the corrosion of curricular standards, the oversimplification of admission standards without regard for true preparation for higher education, and the rise of economic self-interest in both institutions and faculty, which pushes the teaching of classes much lower down their list of priorities.

School education is equally diseased. Government schools carry a social burden placed on them by poverty and hopelessness. Troubled children bring the ills of their homes and neighbourhoods into their classrooms every day. In many schools, teachers must feed the bodies and souls of their students before they can even begin to feed their minds. These schools face inflexible bureaucracies, inane regulations and incompetent administrators, and their teachers are called upon to run every chore for the government outside the school other than teaching inside it. Add to this high drop-out rates and students whose performance on maths and science tests puts them at or near the very bottom of their cohorts elsewhere in the world.

It is this set of facts that has provided legitimacy to the private enterprise in education and has sparked business-of-education initiatives.

The business-of-education thrives on this logic: if you can compete, you will be hired for a job. If you are hired, your virtuous habits will eventually lead to your promotion. As promotions accumulate, your pay increases and eventually you reach financial comfort. Or perhaps even significant wealth!

Is this logic responsible for accelerating the acceptance of education as business?

*****

First published 02 Aug 2021

***


Who Failed Afghanistan? Who will help it to succeed?

The International Security Assistance Force (ISAF) was a multinational military mission in Afghanistan from 2001 to 2014. It was established by United Nations Security Council Resolution 1386 pursuant to the Bonn Agreement, which outlined the establishment of a permanent Afghan government following the U.S. invasion in October 2001. ISAF’s primary goal was to train the Afghan National Security Forces (ANSF) and assist Afghanistan in rebuilding key government institutions, though it gradually took part in the broader war in Afghanistan against the Taliban insurgency.

ISAF’s initial mandate was to secure the Afghan capital of Kabul and its surrounding area against opposition forces to facilitate the formation of the Afghan Transitional Administration headed by Hamid Karzai. In 2003, NATO took command of the mission at the request of the UN and Afghan government, marking its first deployment outside Europe and North America. Shortly thereafter the UN Security Council expanded ISAF’s mission to provide and maintain security beyond the capital region. It gradually broadened its operations in four stages, and by 2006 took responsibility for the entire country; ISAF subsequently engaged in more intensive combat in southern and eastern Afghanistan.

From 2006 until 2014, NATO debate on ISAF centred around means instead of ends: how the burden of fighting should be equally distributed among the member states; what operational concepts like the “comprehensive approach” or “counterinsurgency”—often wrongly termed “strategies”—should be followed, or how to “transition” to Afghan responsibility. Pursuant to its ultimate aim of transitioning security responsibilities to Afghan forces, ISAF ceased combat operations and was disbanded in December 2014. A number of troops remained to serve a supporting and advisory role as part of its successor organization, the Resolute Support Mission.

The decision to launch a follow-on, NATO-led non-combat mission to continue supporting the development of the Afghan security forces after the end of ISAF’s mission in December 2014 was jointly agreed between Allies and partners with the Afghan government at the NATO Summit in Chicago in 2012. This commitment was reaffirmed at the Wales Summit in 2014.

Resolute Support was a NATO-led, non-combat mission. The mission was established at the invitation of the Afghan government and in accordance with United Nations (UN) Security Council Resolution 2189 of 2014. Its purpose was to help the Afghan security forces and institutions develop the capacity to defend Afghanistan and protect its citizens in the long term. 38 countries (Albania, Armenia, Australia, Austria, Azerbaijan, Belgium, Bosnia-Herzegovina, Bulgaria, Croatia, Czech Republic, Denmark, Estonia, Finland, Georgia, Germany, Greece, Hungary, Iceland, Italy, Latvia, Lithuania, Luxembourg, Mongolia, Netherlands, New Zealand, North Macedonia, Norway, Poland, Portugal, Romania, Slovakia, Slovenia, Spain, Sweden, Turkey, Ukraine, United Kingdom and United States) had posted their personnel to the mission in Afghanistan at different points in time.

In February 2020, the United States and the Taliban signed an agreement on the withdrawal of international forces from Afghanistan by May 2021.

On 14 April 2021, recognising that there is no military solution to the challenges Afghanistan faces, the Allies decided to start the withdrawal of RSM forces by 1 May 2021.

NATO’s assumption of ISAF command on the one hand, and ISAF’s expansion on the other, did not go hand in hand with a total revision of the position of the DOD (US Department of Defence). It was not only the sentiment of the “unilateralist” US but also the emotions of the non-Muslim world after 9/11 that pushed NATO to engage in Afghanistan as intensely as possible, even without clearly defined political goals. This was not a conscious project but an unintended result of the colluding interests of the political masters in NATO countries and those of their administrative cadres. The UN was made the accidental front.

The Afghans have now suffered generation after generation of not just continuous warfare but humanitarian crises, one after the other, and the world has to remember that this is not a civil war that the Afghans started among themselves and that the rest of the world got sucked into. This situation was triggered by an outside invasion, initially by the Soviet Union during the Cold War, and since then the country has been a battleground for regional and global powers seeking their own security by intervening militarily in Afghanistan, whether it be the United States after 2001, the C.I.A. in the nineteen-eighties, Pakistan through its support first for the Mujahedeen and later the Taliban, or Iran and its clients. To blame Afghans for not getting their act together in light of that history is just wrong.

In the nineteen-nineties, there were only three governments in the world that recognized the Taliban: Pakistan, Saudi Arabia and the United Arab Emirates. This time around, too, Pakistan will be one of them. It isn’t the nineties, but Pakistan is still in the same awkward place it was in last time around. The Saudis and the Emiratis have a new geopolitical outlook. But China is not the same country that it was in the nineties. How China will support Pakistan in trying to manage a second Taliban regime, especially one that may attract sanctions or other pressure from the United States and its allies, is something to be watched. Flirting with the Taliban will blow back on Pakistan in one way or another, be it in the form of international pressure or instability.

The Biden Administration is unlikely to change its policy. The US cannot reverse the Taliban’s momentum without bombing Afghanistan to shards. The US can, however, take responsibility for the lion’s share of the response to this unfolding humanitarian crisis and arrest the onset of another massive refugee flow, which could certainly have political consequences.

The US does what it likes, be it in Korea, Vietnam, the Persian Gulf, Iraq or Afghanistan; the rest of the countries either support it or keep quiet, and the few feeble voices of dissent are barely audible.

*****

First posted on 28 Aug 2021

***


B-Schools Gone Adrift

Harvard Business School defines its mission as “(t)o educate leaders who make a difference in the world.” Tuck School of Business states its mission as, “Tuck develops wise, decisive leaders who better the world through business.” Stanford Graduate School of Business aims to “(c)reate ideas that deepen and advance our understanding of management and with those ideas to develop innovative, principled, and insightful leaders who change the world,” and MIT’s Sloan School of Management says “(t)o develop principled, innovative leaders who improve the world and to generate ideas that advance management practice.”  INSEAD, the business school for the world, has the mission: “We bring together people, cultures and ideas to develop responsible leaders who transform business and society”

IIMA aims “(t)o continue to be recognized as a premier global management school operating at the frontiers of management education and practice while creating a progressive and sustainable impact on society.” IIMB has an elaborate mission statement, “Nurture innovative global business leaders, entrepreneurs, policy-makers and social change agents through holistic and transformative education – Provide thought leadership that is contextually embedded and socially relevant and makes positive impact – Pursue excellence in education and thought leadership simultaneously without making any trade-offs.” IIM Kolkata states its mission as, “(t)o develop innovative and ethical future leaders capable of managing change and transformation in a globally competitive environment and to advance the theory and practice of management.”

It is clearly discernible from the above that the top-ranked business schools of the world are now about ‘leaders’ and ‘leadership’, where ‘management’ is sometimes mentioned in passing and ‘business’ is practically absent. Some scholars argue that although management and leadership overlap, the two activities are not synonymous; furthermore, the degree of overlap is a point of disagreement. In fact, some individuals see them as extreme opposites and believe that a good leader cannot be a good manager, and vice versa.

It is also clear that business schools have substituted the leadership paradigm for the managerial one in stating their mission or purpose. The consequential question that emerges is whether the leadership paradigm constitutes an adequate foundation for a professional business school. One of the central features of a bona fide profession is possession of a coherent body of expert knowledge erected on a well-developed theoretical foundation.

Despite tens of thousands of studies and writings on leadership since the days of the Ohio State Leadership Studies, several scholarly reviews of the literature on leadership have found little progress in the field. Most studies failed to even define the terms ‘leader’ and ‘leadership.’ There are almost as many definitions of leadership as there are persons who have attempted to define the concept. Competing theories abound. We find great men theories, trait theories, environmental theories, person-situation theories, interaction-expectation theories, humanistic theories, exchange theories, behavioural theories and perceptual and cognitive theories. The dynamics of leadership have remained very much a puzzle. We still know little about what makes a good leader. Research that aims to decipher intrapsychic thought processes and resulting actions involves the study of “psycho-political drama” that relates managerial personality both to role behaviour and to the administrative setting.

In business schools in India, leadership is possibly being taught using an ad hoc, convenience-based amalgam of three distinctly different approaches, each of which possesses a certain validity but none of which, in whole or in part, adds up to a genuinely professional education. The first approach focuses on content and the transmission of explicit knowledge derived from academic theories in psychology, sociology and economics. A second approach focuses on the development of interpersonal skills and their application to small-group situations; here, leadership is conceptualized as tacit knowledge that must be mastered through hands-on practice rather than as a matter of explicit knowledge or content. A third approach associates leadership with personal growth and self-discovery and focuses on giving students opportunities for personal development. It gives students a great deal of freedom to explore personal values and uses a variety of exercises and self-assessments, such as the MBTI, in an attempt to help students integrate discoveries about themselves into their career choices and professional lives.

Thoughts of leaders and leadership bring a wide array of images to mind, often conveying emotional reactions. Some leaders elicit thoughts of strength, power, and care; others recall the forces of terror, malevolence, and destructiveness. Our pervasive judgments of a leader’s degree of goodness or evil are reflected in epithets such as Ashoka the Great, Alexander the Great or Akbar the Great yet not everyone agrees that they were all great. For some, they were ‘the terrible.’

Our most secret desire, the one that inspires all our deeds and designs, is “to be praised.” Yet we never confess this, because to announce such a pitiful and humiliating weakness, arising from a sense of loneliness and insecurity that afflicts the fortunate and the unfortunate with equal intensity, seems dishonourable. We are not sure of who we are or what we do. Full as we may be of our own worth, we are distressed by anxiety and long to receive approval from no matter where or no matter whom. Evidence shows that because narcissistic personalities are often driven by intense needs for power and prestige to assume positions of authority and leadership, individuals with such characteristics are found rather frequently in top leadership positions.

All people show signs of narcissistic behaviour, albeit of differing magnitudes. Among individuals who possess only limited narcissistic tendencies, there are those who are very talented and capable of making great contributions to society. Those who incline toward the extremes, however, give narcissism its negative reputation. Excesses of rigidity, narrowness, resistance, and discomfort in dealing with the external environment are very evident in those cases. The managerial implications of narcissism can be both dramatic and crucial.

Leaders may thus be seen to occupy different positions on a spectrum ranging from healthy narcissism to pathology. These are not distinct categories; they are factors that distinguish between health and dysfunction in the leader. To understand the different narcissistic orientations, beginning with the most unreasonable and proceeding toward the more functional, it is easier to look at three sets (black, grey and white), which could be referred to as reactive, self-deceptive and constructive (adaptive). In practice, however, the distinction may be more difficult to make. The influence of each of these configurations on interpersonal relations and decision making in a managerial context is different. Does an MBA degree programme add to or mellow down the degree of narcissism amongst its graduates?

Not having answers to the dilemmas described above does not prevent me from raising some questions, though I am not sure whether they should be addressed to leaders or to managers, of business or of business schools:

  • Is it time to stop referring to MBA schools as Business Schools and rechristen them as LEADERSHIP SCHOOLS?
  • Since the focus has shifted to Leadership, is it leadership in any specific walk of life, some limited facets of life or in every walk of life? Are politics, diplomacy, government, security included?
  • Is crisis leadership (response to emergencies and unforeseen situations like the Mumbai terrorist attacks or shut-downs due to COVID-19) included, or is it immaterial, being no different?
  • Since the words ‘business’ and ‘administration’ are now out of the mission statements of these schools, should an MBA degree be rechristened as an ML (Leadership) in its most expansive form, an MBL (Business Leadership) in its most narrow orientation, or something in between?

*****

First published 11 Aug 21

***


Entrepreneurship Development – But Where Are the Potential Entrepreneurs?

The underlying metaphor of so much of our thinking, though we rarely think of it as a metaphor, is our much celebrated idea of ‘the individual’ and, in business especially, ‘the self-made man or woman.’ But even a genius has to be sufficiently steeped in the culture that makes his or her invention possible. We will never understand the world of business, or for that matter any other human world, unless we begin with human interrelations and how people fit into cultures, organizations, and institutions.

One business hero of particular interest, especially in light of current corporate uncertainty, is the entrepreneur. The entrepreneur, according to the familiar John Wayne imagery (John Wayne was a legendary American cowboy hero of numerous epic Western films), is the lone frontiersman who single-handedly sets up an industry or perhaps establishes a whole new world. The myth is part and parcel of a much older American myth, the myth of individualism, the myth of the solitary hero. The entrepreneur simply brings John Wayne up-to-date and puts him firmly at the centre of the business world.

Loosely put, the world of business is made possible through an established set of practices, in which implicit rules, tacit knowledge, and collective values, needs, and understandings are the principal structure. It is not the individual motives and attributes or individual personalities that make the world of business. It may be true that behind every successful business is some entrepreneur, that is, one of those relatively rare individuals who is both creative and business-minded, who is willing to take considerable risks and work single-mindedly to turn a dream into a marketable reality. But corporations, once formed, do not operate on the same risk-prone, creative principles that motivated the originator of the business, and the corporate world could not possibly function if, as we so often hear, everyone were to aspire to be an entrepreneur.

Who is an entrepreneur and what is entrepreneurship? We probably think that the answer is obvious, but like the buzz-words and other fads being doled out in the quest for ‘intellectualisation of the domain of management,’ expressions like strategy, business-model and entrepreneurship are all pliant. Managers describe entrepreneurship with such terms as innovative, flexible, dynamic, risk taking, creative and growth oriented. The popular press, on the other hand, often defines the term as starting and operating new ventures. For some, it refers to venture-capital-backed start-ups and their kin; for others, to any small business. For some, corporate entrepreneurship is a rallying cry; others consider it an oxymoron. Some people think of entrepreneurship as a specific stage in an organization’s life cycle (i.e., start-up), a specific role for an individual (i.e., founder), or a constellation of personality attributes (e.g., a predisposition for risk taking; a preference for independence). People have different perceptions of an entrepreneur: an inventor, a discoverer, an innovator, or an upstart in business clamouring and struggling for the survival of his start-up and dreaming of growth, of scaling up and scoping up into a successful and stable enterprise, or simply of selling off the start-up at a premium.

The continuing corporate obsession with the almost mythological character called the entrepreneur is both unrealistic and, if taken seriously, counterproductive. Most people are not entrepreneurial. Cheapening the word by taking any initiative or innovation as entrepreneurship only fogs our understanding about what this phenomenon really is. Entrepreneurship is itself a social practice, and it consists, in part, of appreciating marginal or neglected aspects of more general social practices.

The history of the word “entrepreneurship” is fascinating. Without getting into those details and controversies, whether entrepreneurship is an inborn personality-trait or a learned behaviour, let us focus on the definition formulated by Professor Howard Stevenson, the Godfather of entrepreneurship studies. For Stevenson, entrepreneurship is the pursuit of opportunity beyond resources controlled. Entrepreneurship is thus a distinctive approach to managing.

To simplify this understanding, it is useful to view managerial behaviour in terms of extremes. At one extreme is what might be called the promoter type of manager, who feels confident of his or her ability to seize opportunity. This manager expects surprises and expects not only to adjust to change but also to capitalise on it and make things happen. At the other extreme is the trustee type, who feels threatened by change and the unknown and whose inclination is to rely on the status quo. To the trustee type, predictability fosters effective management of existing resources while unpredictability endangers them. Most people, of course, fall somewhere between the extremes. But it’s safe to say that as managers move closer to the promoter end of the scale they become more entrepreneurial, and as they move toward the trustee end of the scale they become less so (or, perhaps, more administrative).

Pursued with relentless focus and a sense of urgency, the break or opportunity may entail:

  1. Pioneering a truly innovative product;
  2. Devising a new business model;
  3. Creating a better or cheaper version of an existing product; or
  4. Targeting an existing product to new sets of customers.

These opportunity types are not mutually exclusive. For example, a new venture might employ a new business model for an innovative product. Likewise, the list above is not the collectively exhaustive set of opportunities available to organizations.

Many profit-improvement opportunities are not novel, and thus not entrepreneurial: for example, raising the price of a product, or hiring more field sales reps once a firm has a scalable sales strategy.

Before there can be entrepreneurship there must be the potential for entrepreneurship, whether in a community seeking to develop or in a large organization seeking to innovate. Entrepreneurial potential, however, requires potential entrepreneurs. Opportunities are seized by those who are prepared to seize them. Entrepreneurial activity does not occur in a vacuum. Instead, it is deeply embedded in a cultural and social context, often amid a web of human networks that are both social and economic. A group, an organization or a community could be entrepreneurial without necessarily having any discernible entrepreneurs per se. The group, organization, or community need not be already rich in entrepreneurs, but should have the potential for increasing entrepreneurial activity. Such potential exists in economically self-renewing communities and organizations. Regardless of the existing level of entrepreneurial activity, such “seedbeds” establish fertile ground for potential entrepreneurs when and where they perceive a personally viable opportunity.

Any agenda for developing Entrepreneurship and birthing Entrepreneurs rests on a basic understanding of the following minimum requirements:

  • Identifying and establishing policies that increase both the perceived feasibility and the perceived desirability of entrepreneurship:
      ◦ creating social perceptions that entrepreneurial activity is both desirable and feasible;
      ◦ remembering that entrepreneurs prefer being seen as benefiting their communities, not as exploiting them.

  • Providing a “nutrient-rich” environment for potential entrepreneurs:
      ◦ credible information and credible role models, along with emotional and psychological support as well as more tangible resources;
      ◦ opportunities to attempt innovative things at relatively low risk, i.e., where trying and failing can be OK;
      ◦ training interested people in critical competencies, raising their self-efficacy at key entrepreneurial tasks, and making resources both available and visible;
      ◦ increasing the diversity of possible opportunities.

  • Developing Intrapreneurship and Intrapreneurs (Entrepreneurship and Entrepreneurs within corporate ventures):
      ◦ increasing perceptions of positive outcomes for internal venturing, including intrinsic rewards such as a supportive culture;
      ◦ providing opportunities for managers to run an independent project or any of the existing entrepreneurial vehicles for channelling innovation and entrepreneurship.

Innovation in most organizations is inherently “illegitimate” as it unavoidably disrupts the status quo. Downsizing typically leads to less innovation. Stability, not innovation alone, makes companies and their people secure and successful.

Educators can help to increase perceptions of feasibility of entrepreneurship and desirability, not just for prospective entrepreneurs but also for community and its institutional leaders.

Globalisation has erased the line between business and international business; opportunities anywhere on the globe can now be pursued from anywhere in the world. India is parroting the western practice of introducing entrepreneurship-related courses in business schools, launching skills universities, promoting incubation centres (new enterprise development centres) and financing entrepreneurship development centres (training). Surely, there can be no single universal prescription for such large initiatives. There is no visible evidence, however, that these efforts have even considered the very basics of segmenting the target beneficiaries or customers of such relentless efforts. To illustrate the point, there are no noticeable signs of entrepreneurship-education providers segmenting their market using even simple segmentation bases (there are many more) to target their efforts:

DEMOGRAPHICS

  • Gender: women
  • Age: youth/young age
  • Minorities

DELIVERY PREFERENCES

  • Classroom, on-line, interactive

INDUSTRIES

  • Technology
  • Music/Leisure
  • Medical services
  • Others

STAGE OF ENTREPRENEUR/BUSINESS LIFECYCLE

  • Pre-start-up decision entrepreneurs
  • Nascent/intention entrepreneurs
  • Start-up entrepreneurs
  • Early growth: consolidation
  • Growth entrepreneurs
  • Corporate entrepreneurs
  • Cashed out entrepreneurs
  • Serial entrepreneurs

PSYCHOGRAPHICS

  • Activities, interests, attitudes, beliefs, opinions

The intent is honourable. One does not know, though, the depth and breadth of thought going into designing and executing the effort. Developing Entrepreneurship in India requires Entrepreneurs, not Administrators.

*****

First Published 07 Aug 21

***


COVID-19 – Lessons so far

The coronavirus pandemic has taught us a complex and contradictory set of lessons. On the positive side, the pandemic confirmed the importance of droplet and contact infection. It travelled as fast as modern transportation could carry it around the world, confirming that it was human bodies that spread it.

On the negative side of the lessons from the pandemic is that it is exceedingly difficult to get an urban population to stay at home. People need to work so they can eat; parents want their children to go to school; businesses dependent on customers, whether department stores or movie theatre operators, do not want to close down.

Hence, the most practical strategy for dealing with COVID-19 has been to move quickly to isolate the acutely ill in hospital wards or at home under professional care, and to roll out an intensive public education effort about personal hygiene for everyone else.

We have learned that it is not easy to get the public to practice the rules of modern nose/mouth/hand hygiene. Even at the height of the pandemic, educated and well-informed people broke the rules. COVID-19 has been a ‘simple to understand, but difficult to control’ pandemic. Perhaps the most demonstrably useful methods of protection are certain forms of quarantine and isolation, but under conditions of modern life these are not readily applicable. Difficult to apply and uncertain of success as it may be, minimizing contact seems at present to offer the best chance we have of controlling the ravages of COVID-19. Our response to the next wave of the pandemic will likely confirm these lessons.

This odd combination of futility and certainty will continue to characterize summaries of the ‘lessons learned’ from the pandemic. In the field of prevention, little real progress has been made. It is therefore justifiable to increase the emphasis already placed on the COVID-19 patient as a definite focus of infection and to adopt reasonable measures to reduce crowding and direct contact to a minimum during a period of epidemic prevalence.

The opportunities for self-protection by individuals lie along the same lines: avoidance of crowds and of direct contact with people suffering from the infection; rigorous avoidance of common drinking glasses, common towels and the like; and scrupulous hand washing before eating. Techniques of safe coughing and sneezing should be taught to people: a careless sneeze from an infected person without a mask or face-cover can be a super-spreading event.

Vaccination is a saviour but not a licence to break the discipline of personal and social hygiene.

*****

First published 14 Aug 21

***


Politics of Commotion: Superficial Dialogue through Digital and Social Media

Over the last several years we have been witnessing, though perhaps not taking it seriously, that political discourse in India is getting confined to TV and social media and is commandeered by the scheduling considerations of these media.

To enable TV editors to gather participants for the debates and encapsulate content for prime-time viewing, the messages are created no later than 5:00 pm. Likewise, to ensure proper rest for media persons and the message sources, political activities, agitations, rallies, sloganeering and press conferences are all usually held after 10:00 am but before 2:00 pm.

The use and proliferation of digital and social media have radically changed both the way we use language and the way we ‘do politics’ these days. Virtual space has now become the ‘natural habitat’ of an increasing number of individuals around the world: a space where they engage in discussions, work, shop, bank, hang out, relax, vote, find love partners, conduct their day-to-day activities, and so forth. A large proportion of day-to-day verbal and visual communication has migrated to various participatory web platforms. Social media have been hailed as emancipatory tools contributing to a more participatory democracy, creating instant awareness about different social issues, a new public space of sorts (the ‘Arab Spring’ and the ‘Occupy’ movement are two widely cited examples).

A public sphere is a space of political communication, together with access to the resources that allow citizens to participate in it. In this sense, given their exclusionary and commodified character, digital and social media cannot be considered public spheres, nor should they raise our hopes that the revolution will be tweeted. Social and digital media are dominated by corporations that make money by exploiting and commodifying users, and this is why they can never be truly participatory. On serious consideration, digital and social media are just another tool of control and containment, a profoundly depoliticising arena that fetishizes technology and leads to a denial of a more fundamental political disempowerment.

One can realize the magnitude and impact of the medium by considering that, in the famous ‘Russia meddling’ episode, posts from a Russian company reached the newsfeeds of 126 million users on Facebook during the 2016 US election, and hundreds of thousands of bots posted political messages during the election on Twitter alone.

Digital and social media are a new kind of effective political instrument that, in the context of advanced capitalism, both dehumanizes politics and struggles and absolves people from the guilt of inertia in the face of major social and economic crises. They serve as an escape from the stress of intelligence, from the pain and tension that accompany autonomous mental activity. Social media are, in effect, an anaesthetic against the mind in its socially disturbing, critical functions, knocking out mental agitation. As tools for producing and consuming different kinds of texts, social media promote a one-dimensional discourse. Consider the characteristics of Twitter’s one-dimensional discourse:

The language used on Twitter is short, fragmented and decontextualized: it is a language that tends to express and promote the immediate identification of reason and fact, truth and established truth, essence and existence, the thing and its function, leaving no room for dialogue and counter-reason. Twitter demands simplicity, promotes impulsivity, and fosters incivility.

Digital media stand on the pedestal of instrumental and technological rationality and reduce audiences to the status of commodities and consumers of advertisements. The audience commodities that media consumers themselves become are then sold as an audience to the advertising clients of the media.

Facebook, Twitter and other sites serve as an escape from the mechanised work process, a breather in which to muster strength for the next round of work. This allows social media to be marketed as entertainment: an entertainment that is accessible on demand, any time and every time. For this entertainment to remain a pleasure, it must not demand any effort of independent thinking from the audience. This constructs an involvement through inertia that creates a false sense of participation, security, homogeneity and consensus. Everyone is presumed to be a producer as well as a consumer of content, and the meaning of the messages gets lost.

While there is around-the-clock exposure, constant access and immediacy (all content is immediately available for reading and commenting), the message in digital and social media is often decontextualized. The context is always that of the moment, limiting broader interpretations, connections and the exploration of ramifications. Such content has a planned obsolescence, as the next programme or tweet will draw even more attention, commentary, visibility and currency. The content’s history is the here and now, an ongoing critique of reality; meaning loses history.

It comes, then, as no surprise that digital and social media have been serving as the ideal medium for populist parties and their leaders promoting the Politics of Commotion.  Digital and social media constitute an alternative to the mainstream media. Political campaigns started using social media as early as 2009, but it was with the 2019 General Elections that their use was taken to the next level.

Today, most political figures and parties use digital and social media platforms to disseminate their agendas, and this has largely changed the way politics is conducted. This is a time when politics is ‘branded’ through social media. While democracies need the liberation of individuals from politics over which they have no effective control, it seems that digital and social media have a firm grip on a large percentage of our population, while people, in turn, have no control over digital and social media.

*****

First published 05 Aug 21

***


Peer-Reviewed and Impact-Factor Dilemma in Research

Peer review (as the publishers claim) is designed to assess the validity, quality and often the originality of articles for publication. Its ultimate purpose is to maintain the integrity of science by filtering out invalid or poor quality articles.

From a publisher’s perspective, peer review functions as a filter for content, directing better quality articles to better quality journals and so creating journal brands.

Different journals use different types of peer review. At least four main types of peer review processes are in vogue:

Single-Blind: the reviewers know the names of the authors, but the authors do not know who reviewed their manuscript unless the reviewer chooses to sign their report.

Double-Blind: the reviewers do not know the names of the authors, and the authors do not know who reviewed their manuscript.

Open Peer: authors know who the reviewers are, and the reviewers know who the authors are. If the manuscript is accepted, the named reviewer reports are published alongside the article and the authors’ response to the reviewer.

Transparent Peer: the reviewers know the names of the authors, but the authors do not know who reviewed their manuscript unless the reviewer chooses to sign their report. If the manuscript is accepted, the anonymous reviewer reports are published alongside the article and the authors’ response to the reviewer.

The peer review system is not without criticism. Studies show that even after peer review, some articles still contain inaccuracies and demonstrate that most rejected papers will go on to be published somewhere else.

Despite criticisms, peer review is still the only widely accepted method for research validation and has continued with relatively minor changes for some 350 years.

The impact factor (IF) is a measure of the frequency with which the average article in a journal has been cited in a particular year. It is used to gauge the importance or rank of a journal by counting the times its articles are cited.

The calculation of the impact factor is based on a two-year period and involves dividing the number of times articles were cited by the number of articles that are citable. To illustrate, the calculation of a journal’s 2010 IF:

  • A = the number of times articles published in 2008 and 2009 were cited by indexed journals during 2010.
  • B = the total number of “citable items” published in 2008 and 2009.
  • A/B = 2010 impact factor 

If one publishes an article in a journal with an impact factor of 2.0, the article must live up to the average citation performance of articles in that journal; it must therefore be cited at least twice in each of the two succeeding years, or else it will contribute towards lowering the journal’s IF.
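
As a minimal sketch of this arithmetic (the function and the figures below are hypothetical, purely for illustration, and are not drawn from any real journal):

    def impact_factor(citations_in_target_year: int, citable_items_prev_two_years: int) -> float:
        """Impact factor for a target year: citations received in that year to
        articles published in the two preceding years, divided by the number of
        citable items published in those two years (A/B in the list above)."""
        if citable_items_prev_two_years == 0:
            raise ValueError("Impact factor is undefined without citable items.")
        return citations_in_target_year / citable_items_prev_two_years

    # Hypothetical journal: 150 citable items published in 2008-2009, cited
    # 300 times by indexed journals during 2010, gives a 2010 IF of 2.0.
    print(impact_factor(citations_in_target_year=300, citable_items_prev_two_years=150))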

The belief that peer-reviewed publications should be the metric for research success seems to be rooted in the following assertions that are accepted as truths:

  • A peer-reviewed publication conveys more research authority because its findings have been vetted and then accepted by other members of the discipline.
  • The peer-review process is anonymous, more competitive and (supposedly) more objective in its selection process.
  • Peer-reviewed publications are more prestigious and convey the expertise of the researcher.

These assertions are oversimplifications that erase the real nuance of peer-reviewed publication. The peer-review process leaves the fate of someone’s research findings subject to the whims of two or three people who, like all of us, are influenced by variables including their own natural preferences for certain kinds of work. It is a generalization to claim that peer-reviewed publications are always more selective or rigorous. (Admittedly, however, a peer-reviewed publication will almost always take longer to appear in print, which, for some people, adds to the genre’s perceived rigour.)

Peer-reviewed publications are simply not the only place where intellectual conversations are happening and where a researcher might want to share their ideas.

Overemphasis on peer-reviewed, high-IF journals for publishing has resulted in more replicative and conformist research. Within my limited experience, I have found that most people who are creative types produce work that would not make sense in many of the “top” academic journals. They would therefore do better to follow the path of Influencing without High-IF Publishing, through usable, useful and timely preprints, and so overcome the limitations inherent in High-IF Publishing without Influencing.

A great research paper is not enough by itself; it requires development, mobilisation and exposure. A preprint is a version of a scientific manuscript posted on a public server prior to any formal peer review. Once posted, the preprint becomes a permanent part of the scientific record and is citable with its own unique DOI. By sharing one’s research early, one can accelerate the speed at which science moves forward.

One Rewarding Story:

I am gratified that, instead of waiting to publish my research on MODELLING SPREAD OF CORONA VIRUS USING ADAPTED BASS MODEL (ResearchGate DOI: 10.13140/RG.2.2.30944.43522) in esoteric journals with high impact factors, which would have taken their own sweet time to convey rejection, seek revision, or accept and publish it, I chose to publish it as a non-peer-reviewed preprint under the Creative Commons licence CC BY-NC. This allows others to remix, adapt and build upon my work non-commercially; although their new works must acknowledge me and remain non-commercial, they do not have to license their derivative works on the same terms.
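
For readers unfamiliar with the underlying model, here is a minimal sketch of the classic Bass diffusion equation that the preprint adapts; the code and the parameter values are illustrative assumptions only and do not reproduce the specific adaptation described in the cited preprint:

    # Classic Bass diffusion: dN/dt = (p + q*N/M) * (M - N), where M is the
    # eventual total (market potential, here read as total cases), p is the
    # coefficient of innovation (external influence) and q the coefficient of
    # imitation (internal, person-to-person influence).
    def bass_cumulative(M: float, p: float, q: float, days: int, dt: float = 1.0) -> list:
        """Integrate the Bass equation with a simple Euler step and return
        the cumulative trajectory. All parameters here are hypothetical."""
        N, trajectory = 0.0, []
        for _ in range(days):
            N += dt * (p + q * N / M) * (M - N)
            trajectory.append(N)
        return trajectory

    # Illustrative run with assumed parameters (not fitted to any real data).
    print(bass_cumulative(M=100_000, p=0.001, q=0.25, days=7))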

The benefit of giving others free access to my applied managerial research is that it has been pursued, acknowledged, utilised and improved upon quickly and proactively. Here are the citations that I am now aware of (in chronological order of appearance, not conforming to any citation style):

  • Zeny L Maureal, Jovelin Lapates, Madelaine S Dumandan, Vanda Kay B. Bicar and Derren Gaylo (of Bukidnon State University, Philippines) (August 2020) “Adapted Bass Diffusion Model for the Spread of COVID-19 in the Philippines: Implications to Interventions and Flattening the Curve” International Journal of Innovation, Creativity and Change.  www.ijicc.net Volume 14, Issue 3, 2020
  • Ted G Lewis (of Naval Postgraduate School, Monterey, CA, United States) and Waleed I. Al (of Bahrain Defence Ministry, Manama, Bahrain) (March 2021) “Predicting the Size and Duration of the COVID-19 Pandemic” Frontiers in Applied Mathematics and Statistics.   Vol.6. DOI: 10.3389/fams.2020.611854
  • Ted G Lewis (of Naval Postgraduate School, Monterey, CA, United States) (July 2021) “Emergence of Contagion Networks from Random Populations” Research Gate, Pre-print.

The above story has enthused me to do more creative, out-of-the-box research and not to care too much about getting it endorsed and accepted by two or three unknown peers. I would rather share my research with the world without delay and leave it to a more inclusive, extensive and democratic review by fellow researchers and audiences. I have also been actively delivering keynote addresses at non-academic conferences and writing hard-hitting op-ed pieces that shape public policy. These have often been contrarian approaches to conformist thinking.

The views of pure-applied research which I wish to lay before you have sprung from the soil of abstraction of observations into model-building and therein lies their strength. They are radical. Henceforth management-theory by itself, and management-practice by itself, are doomed to fade away into mere shadows, and only a kind of union of the two will preserve an independent reality of managerial wisdom.

**

First Published 19 July 2021

**


I Have Tested Positive. Am I Going To Die?

I am not insensitive to the grief of the many around me who have already lost someone close to this terrible disease. I feel and share their grief and anger, having lost not just one but many from amongst my family and friends over the last few days. While they were gasping for life, all of them repeatedly asked me this question: “Am I going to die?” Many others, who were by their side, attended by the same medical teams, also asked this question recurrently. Of them, many survived but a few could not.

Our pain is unique to us, our relationship to the person we lost is unique, and the emotional processing can feel different to each person. It is acceptable for us to take the time we need and remove any expectation of how we should be performing as we process our grief.

When we lose a loved one, the pain we experience can feel unbearable. Understandably, grief is complicated and we sometimes wonder if the pain will ever end. We go through a variety of emotional experiences such as anger, confusion, and sadness.

This post reflects my concern for those who are battling for life and for their family and friends who are equally anxious.

“I have tested positive. Am I going to die?” is a straightforward question that most people would like answered, yet this simple question is hard to answer. Ask it of someone who has seen a dear one succumb to this disease and the frank answer would be, “to be true and forthright, yes, you are going to die, unless some miracle happens.” Ask the same question of someone who has seen a dear one survive this disease and the likely answer would be, “it is going to be a long, painful and apprehensive battle, but don’t worry, everything will be fine.”

This forthright question, “I have tested positive. Am I going to die?”, is remarkably challenging for a bystander to the agony of the raging pandemic to answer, for such a bystander can only look at numbers and statistics to support an answer.

When the risk of death from COVID-19 is discussed, the Case Fatality Rate, sometimes called the Case Fatality Risk or Case Fatality Ratio, or CFR, is often used. The CFR is very easy to calculate: it is the number of people who have died of the disease divided by the total number of people diagnosed with it.
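
To make that arithmetic concrete, here is a minimal sketch in Python; the figures in it are purely illustrative assumptions and not drawn from any real surveillance data.

```python
# A minimal sketch of the CFR arithmetic described above.
# All numbers are hypothetical, used only for illustration.

def case_fatality_rate(confirmed_deaths: int, confirmed_cases: int) -> float:
    """CFR = confirmed deaths / confirmed cases, expressed as a percentage."""
    return 100.0 * confirmed_deaths / confirmed_cases

# Hypothetical example: 2,000 deaths among 100,000 confirmed cases.
print(f"CFR = {case_fatality_rate(2_000, 100_000):.2f}%")  # prints: CFR = 2.00%
```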

CFR is the ratio between the number of confirmed deaths from the disease and the number of confirmed cases, not total cases. That means that it is not the same as the risk of death for an infected person and, in early stages of fast-changing situations like that of COVID-19, probably not even very close to the true risk for an infected person.

Recall the question we asked at the beginning: if someone is infected with COVID-19, how likely is it that they will die? What we want to know is not the Case Fatality Rate; it is the Infection Fatality Rate (IFR). The CFR is not the answer to the question, for two reasons. First, the CFR relies on the number of confirmed cases, and many cases are never confirmed; secondly, the CFR relies on the total number of deaths, and with COVID-19, some people who are sick now and may die soon are not counted in the total number of deaths until they actually die. The first reason inflates the CFR while the second one deflates it.
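
The two distortions pull in opposite directions, which a rough illustration can show. The sketch below assumes a case-detection rate and a fatality rate among still-unresolved cases; both numbers are hypothetical and are used only to show the direction of each effect, not to estimate anything real.

```python
# Hypothetical illustration of how the two distortions move the CFR away
# from the true infection fatality rate (IFR). Every number is an assumption.

confirmed_cases  = 100_000
confirmed_deaths = 2_000
detection_rate   = 0.25     # assume only 1 in 4 infections is ever confirmed
unresolved_cases = 10_000   # confirmed cases whose outcome is not yet known
assumed_fatality_among_unresolved = 0.02  # assumed, for illustration only

naive_cfr = confirmed_deaths / confirmed_cases

# Reason 1: undetected infections enlarge the true denominator,
# so the true fatality risk sits below the naive CFR.
total_infections = confirmed_cases / detection_rate

# Reason 2: some currently sick patients will still die, enlarging the
# eventual numerator and pushing the naive CFR up over time.
eventual_deaths = (confirmed_deaths
                   + unresolved_cases * assumed_fatality_among_unresolved)

rough_ifr = eventual_deaths / total_infections

print(f"Naive CFR : {100 * naive_cfr:.2f}%")   # 2.00%
print(f"Rough IFR : {100 * rough_ifr:.2f}%")   # 0.55%
```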

With the COVID-19 outbreak, it can take between two and eight weeks for people to go from first symptoms to death, according to data from early cases. With CFR data available for the 67 weeks that this pandemic has been raging, it is seen that the CFR for a country no longer fluctuates as wildly as it did in the first 40 weeks, and the CFRs of many countries, including India, have not deviated much from a stable trend line over the last 18 weeks.

It is exceptionally important, however, to note that the CFRs for cases under home isolation, under medical care and under critical care are different. Further, these CFRs vary across states and locations within India. The national CFR is an aggregated mean of all of these CFRs. Cases under critical care are overwhelming the healthcare system at this time, and for them the CFR is logically and expectedly much higher.
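
The aggregation itself is just a case-weighted mean of the setting-level CFRs, which the sketch below illustrates with made-up numbers for the three settings; none of the rates or case counts are actual Indian statistics.

```python
# National CFR as a case-weighted mean of setting-specific CFRs.
# The setting names come from the text; every number is a made-up illustration.

settings = {
    # setting:          (cases,   deaths)
    "home isolation":   (800_000,  1_600),   # illustrative CFR ~0.2%
    "medical care":     (150_000,  4_500),   # illustrative CFR ~3%
    "critical care":    ( 50_000,  7_500),   # illustrative CFR ~15%
}

total_cases  = sum(cases for cases, _ in settings.values())
total_deaths = sum(deaths for _, deaths in settings.values())

for name, (cases, deaths) in settings.items():
    print(f"{name:>15}: CFR = {100 * deaths / cases:.2f}%")

# The national figure is pulled toward whichever setting holds the most cases.
print(f"{'national':>15}: CFR = {100 * total_deaths / total_cases:.2f}%")  # 1.36%
```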

With the IFR not available, the CFR is being used, albeit quite cautiously, to answer the question, “I have COVID-19. Am I going to die?”, and the tremendously relieving answer, with a very high chance of being true, at least for patients under home isolation and those kept in quarantine, is a very loud NO. I hope the COVID-19 survivors, who constitute over 98% of the confirmed cases of COVID-19 infection, will join the chorus.

**

First published 11 May 2021

**

“Likes” “Follows” “Shares” and “Comments” are welcome.

We hope to see energetic, constructive and thought-provoking conversations. To ensure the quality of the discussion, we may edit the comments for clarity, length, and relevance. Kindly do not force us to delete your comments by making them overly promotional, mean-spirited, or off-topic.

COVID Confusions

COVID-19 is the acronym coined for the coronavirus disease of 2019. The year 2020 made some old words and phrases suddenly very fashionable and buzzing with new meanings, and injected them into people’s active vocabulary. Corona, a word hitherto associated with the Sun, with novelty and with SARS-Coronavirus-1, was not much in use but suddenly became a dreaded word linked to COVID-19. Positivity, a word that had generally been used for the practice of being, or the tendency to be, positive or optimistic in attitude, took on the other meaning of the presence rather than absence of a certain substance, condition, or feature, and is now a measure of the incidence of disease.

Check out some of these words and phrases for yourself, because your inability to use them in conversation may be mistaken for ignorance – animal-human interface, asymptomatic, carrier, clinical trials, community spread, contact tracing, contagious, droplets, epidemic, flatten the curve, herd immunity, HRCT scan, incubation period, isolation, mask, mRNA vaccines, mutant, outbreak, oxygen concentrator, oximeter, pandemic, pathogen, patient zero, PCR test, personal protective equipment (PPE), plasma, quarantine, rapid antigen test, Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2), screening, self-isolate, social distancing, super-spreader, symptomatic, transmission, vax, ventilator, viral vector vaccines, zoonotic – and the list goes on.

Some proper nouns also made their way into the active vocabulary – Wuhan, AstraZeneca, Covax, Covaxin, Covishield, Sputnik V, Pfizer-BioNTech, Moderna, Johnson & Johnson’s Janssen, Novavax, Coronil, CoviSelf, Remdesivir, 2-DG, and so on; but the most conspicuous proper noun is FAUCI.

Anthony Stephen FAUCI (born December 24, 1940) is an American physician-scientist and immunologist who serves as the director of the U.S. National Institute of Allergy and Infectious Diseases (NIAID) and, since his official appointment in 2021, as the chief medical advisor to President Joe Biden. He has acted as an advisor to every U.S. president since Ronald Reagan. From 1983 to 2002, Fauci was one of the world’s most frequently cited scientists across all scientific journals. In the early stages of the COVID-19 pandemic, The New Yorker and The New York Times described Fauci as one of the most trusted medical figures in the United States.

After initially declaring in April of last year that the virus was “not a major threat to the people of the United States” and “not something the citizens of the United States right now should be worried about,” Fauci repeatedly urged Americans not to wear masks early in the pandemic. Later, Fauci admitted that he had believed all along that masks were effective but said he had wanted to ensure that supplies would be reserved for medical professionals. In other words, he asserted that he had the right to lie to the public for what he believed to be their own benefit. If Fauci is correct that masks effectively contain the spread, then the cost of his misinformation as the pandemic worsened may be incalculably large for the US community. (https://www.delcotimes.com/opinion/chris-freind-dr-fauci-needs-a-dose-of-reality/article_9bce984e-7641-11eb-8c87-4f0114a8a7a2.html)

After repeatedly dismissing the theory that the COVID-19 virus escaped from the Wuhan Institute of Virology in China, Fauci now says he cannot rule out the theory.

Fauci has also backtracked on his comments about the National Institutes of Health (NIH) funding for the Chinese lab under his leadership, namely his insistence that the funding was not for “gain of function” research, a laboratory technique that intentionally makes pathogens more dangerous and more transmissible. Gain-of-function research in Wuhan was indeed funded through one of Fauci’s grants.

Late last week, the COVID policy was updated to say that fully vaccinated individuals no longer need to wear masks indoors or outdoors. Defending the policy, Fauci declared that the abolition of mask mandates was not a contradiction of previous policy but instead followed “evolving science” on the virus, although no examples of this supposedly new scientific evidence were forthcoming. Fauci then added to the confusion by declaring, apparently on his own authority, that young children would still be required to wear masks in school. Then, just a day later, Fauci suggested that it was “reasonable” for businesses to maintain mask mandates even for vaccinated Americans, in blatant defiance of the CDC’s recent guidance. Whichever way one looks at it, Fauci has become a key player in the current controversy, which completes his transformation, at the age of 80, from an independent doctor into a political football.

Fauci has also steadily moved the goalposts on the percentage of the population that will need to be vaccinated to achieve herd immunity. Earlier this year, he said herd immunity would be achieved when 60% were vaccinated; in recent interviews, he has spewed out numbers as high as 85%. At the very least, the top infectious-diseases expert of the US and chief medical adviser to Biden is loose with the facts and prone to changing his mind. To be fair, the pandemic caught a lot of people unawares, but the thing about Fauci is that he is always so sure of himself. (https://nypost.com/2021/01/24/dr-fauci-needs-to-be-held-responsible-for-mistakes-devine/)

India has done well in vaccinating armed forces personnel, with 90% of them having already received both doses of the vaccine. India also did not follow the US (CDC) guidelines on reopening schools, guidelines now being associated with the untold misery that followed in Texas.

Luckily, Indian policy-makers do listen to Dr. Anthony Fauci but do not blindly subscribe to all his utterances. It is good, is it not, that while remaining open to all the information, suggestions, knowledge and advice coming from everywhere, we have a mind of our own. When it comes to inconsistent and improvisational COVID messaging, no one can surpass Dr. Anthony Fauci.

**

First published 24 May 2021

**

“Likes” “Follows” “Shares” and “Comments” are welcome.

We hope to see energetic, constructive and thought-provoking conversations. To ensure the quality of the discussion, we may edit the comments for clarity, length, and relevance. Kindly do not force us to delete your comments by making them overly promotional, mean-spirited, or off-topic.

**