A Population History of India: From the First Modern People to the Present Day

Tim Dyson

Print publication date: 2018

Print ISBN-13: 9780198829058

Published to Oxford Scholarship Online: October 2018

DOI: 10.1093/oso/9780198829058.001.0001



9 From Anxiety to Unconcern on Population

c.1971 to c.2016

Tim Dyson

Oxford University Press

Abstract and Keywords

India experienced substantial mortality decline in the decades after 1971. By 2016 life expectation probably averaged about 68 years. However, much of the mortality decline resulted from narrow technical developments (e.g. immunizations) and was not always matched by commensurate advances in the state of the population’s health. The Emergency of 1975–77 led to sterilization excesses, and there was a backlash against the family planning programme. Subsequently, political leaders generally avoided talking about family planning and population growth. Partly as a result, the pace of fertility decline during 1971–2016 was slow. By 2016 the average level of fertility was about 2.4 births per woman. It was not until the 2001–11 intercensal decade that there was an appreciable fall in the rate of population growth. Consequently, between 1971 and 2016 the population grew from 548 million to more than 1.3 billion.

Keywords:   mortality decline, 1975–77 Emergency, slow fertility decline, north‒south contrast, demographic dividend, rhetoric and reality

India’s prospects did not look promising at the start of the 1970s. Two decades of state direction under the aegis of the Planning Commission had been associated with only sluggish economic growth. Poverty remained widespread. There were mounting food shortages. And, following Mrs Gandhi’s election victory in 1971, even the integrity of the country’s democratic institutions appeared to be under some threat.

However, after an easing of the exchange rate in the late 1970s, there emerged a period of substantial economic growth. This gained impetus from reforms introduced in the early 1990s by the then finance minister, Manmohan Singh. The improved economic performance also involved the country’s re-engagement with the world economy.1

Whereas between 1947 and 1971 the level of per capita income grew at an average annual rate of 1.5 per cent, between 1971 and 2011 it grew at about 3.4 per cent.2 Indeed, by 2016 the economy was growing at around 6 per cent per year. Yet while living standards rose, they rose much more for some people than for others. In short, the economic growth saw rising income inequality. In addition, during 1971‒2016 the population more than doubled. Estimating the number of people living in poverty has always been difficult and contentious. Nevertheless, by some estimates the absolute number of poor people in India in 2016 was not much lower than it was in the early 1950s—indeed, it may have been slightly greater.3

As India re-engaged with the world economy, so it also became more aware of international experience about the ways in which fertility and mortality could be reduced, and people’s health could be improved. In this context, the realization in the 1980s that China had been exceptionally successful in raising life expectation came as something of a shock. Also, from the 1990s onwards there was an increasing—and rather awkward—recognition that Bangladesh was being surprisingly effective in promoting contraception and reducing fertility.

Broadly similar developments occurred with respect to demographic data collection. The Sample Registration System (SRS) operated during 1971‒2016 and, of course, decennial censuses were taken. But while in the 1970s and 1980s many countries held demographic household surveys as part of major international survey programmes, India remained somewhat apart—preferring to hold its own rather basic sample surveys.4 With technical and financial assistance from the United States, however, this situation changed in the 1990s with the launch of the National Family Health Survey (NFHS). The state-level NFHS surveys were managed by the International Institute for Population Sciences (IIPS) in Mumbai, and associated with the worldwide Demographic and Health Survey programme. NFHS surveys were conducted in every major state, and they gathered systematic information on many subjects for the first time.* Four NFHS survey rounds were conducted during the period under review—in 1992‒93, 1998‒99, 2005‒06, and 2015‒16. Studies associated with other major international survey programmes—for example dealing with the health and conditions of older people—were also undertaken.

In what follows, no attempt is made to provide a definitive set of demographic estimates for the country—or its states—on the basis of the available census, SRS, and NFHS data. Rather, what is attempted is a broad description of trends, drawing on estimates from several sources. The following account, then, is only a sketch—there being insufficient reason (or space) here for a more detailed treatment.5

Looking at India as a whole, the decline in fertility during the period 1971‒2016 was slow. Therefore there was much greater population growth than there might otherwise have been. In this context, the north and south of the country experienced rather different demographic trajectories. In particular, fertility decline occurred appreciably later in the populous north—and therefore the north experienced much more population growth. The improvement in the economy also contributed to a change in the attitudes of politicians towards population issues. Views about the birth rate once characterized by anxiety were eventually replaced by views of unconcern.

The chapter starts by considering population trends. It then examines relevant history of the 1970s and 1980s, and then of the period 1991‒2016. This is followed by discussion of the urban sector and migration. The chapter ends by addressing several implications of the country’s population growth which are sometimes overlooked. As background to the discussion, Map 9.1 shows India’s main states as they were in 2016.


Map 9.1 India and adjacent countries in 2016

Population Trends

The population grew from 548 to 1,210 million between the 1971 and 2011 censuses—a rise of 662 million or about 120 per cent (see Table 9.1). While the growth rate fell slightly between the 1961‒71 and 1991‒2001 intercensal decades, it nevertheless remained close to 2 per cent. This was because the birth and death rates fell by similar amounts—leaving the growth rate little changed. It was not until 2001‒11 that there was an appreciable fall in the growth rate—to around 1.6 per cent. The fall occurred because the birth rate fell by more than the death rate. It is worth noting, however, that although the growth rate decreased between 1991‒2001 and 2001‒11, the absolute addition to the country’s population was almost the same in both decades, at about 182 million.
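As an illustrative check (a standard exponential-growth identity, not a calculation made in the text), the average annual growth rate over the forty intercensal years follows directly from the two census totals:

```latex
r \;=\; \frac{1}{t}\,\ln\!\left(\frac{P_{2011}}{P_{1971}}\right)
  \;=\; \frac{1}{40}\,\ln\!\left(\frac{1{,}210}{548}\right)
  \;\approx\; \frac{0.792}{40}
  \;\approx\; 0.020 ,
```

i.e. close to 2 per cent per year, consistent with the decade-by-decade rates quoted above.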

Despite worries that the pace of mortality decline might have slackened in the late 1960s, the estimates in Table 9.1 suggest that the substantial decline of the 1950s and 1960s continued into the 1970s, the 1980s, and, quite possibly, the early 1990s. The mortality decline of these years was especially pronounced for infants and young children. However, the rise in life expectation appears to have slowed somewhat by the 2001‒11 decade (perhaps slightly earlier).6

Table 9.1 Demographic estimates for India, 1971‒2011


Columns: Growth rate (% per year) | Sex ratio | Life expectation at birth (years) | Total fertility (per woman) | Female marriage age | Crude death rate (per 1,000) | Crude birth rate (per 1,000)

Notes: As in Tables 7.1 and 8.1, the life expectation, total fertility, and death and birth rate estimates refer to the intercensal decades preceding the years against which they are shown. The figures for 1971‒91 are based on Bhat’s (1998) ‘integrated’ estimates. The figures for 1991‒2001 and 2001‒11 actually relate to the periods 1990‒2000 and 2000‒2010 and are the estimates of the United Nations (2015a). As in Table 8.1, the marriage ages are singulate mean ages at marriage (SMAMs). They provide only a rough indication of age at marriage. For 1971‒2001 the SMAMs are taken from a compilation by Kulkarni (2014) and the estimate for 2011 is from Srinivasan and James (2015).

Sources: Bhat (1998: 51); Census of India (2011a); Kulkarni (2014: 58); Srinivasan and James (2015: 40‒1); United Nations (2015a).

It is worth noting here that economic growth is not a guarantor of mortality decline. The effects of preventive public health measures and technologies, and rising levels of education, can be much more potent.7 It is not necessarily surprising, then, that mortality decline was relatively fast during a period of fairly slow per capita income growth (i.e. from the 1950s to the 1980s) and that it appears to have slowed a little in a time of faster economic growth (i.e. during and after the 1990s). The slight slow-down in the pace of mortality decline may partly have reflected a rise in income inequality. And, despite a lot of policymaking and rhetoric, it may also have reflected a decrease in the concern of politicians with issues of public health. Certainly, government spending on health fell to low levels by international standards.8

Sex Differentials

Female life expectancy rose by more than that of males during the period under review (see Table 9.1). Whereas in the 1960s and 1970s males probably experienced slightly higher life expectation, by 2001‒11 female life expectation clearly exceeded that of males.9 In this context, death rates recorded by the SRS at ages 20‒49 show an appreciably faster fall for women than for men. One reason for this was fertility decline. The estimates in Table 9.1 suggest that between 1971‒81 and 2001‒11 the total fertility rate fell from 5.4 to about 3.0 births per woman. In earlier times, when women frequently had five or six live births each, their experience of repeated pregnancy and childbirth undoubtedly harmed their health. However, fertility decline almost certainly benefited the—albeit very unsatisfactory—health status of women, and reduced their risk of dying from causes linked to pregnancy and childbirth. Moreover, the fact that smoking, drinking, and chewing paan (i.e. mixtures of betel leaf, areca nut, etc.) were more frequent among men probably acted as an impediment to the improvement of adult male mortality.10

Yet while females experienced a larger rise in life expectancy, the population sex ratios from the censuses show no clear trend (see Table 9.1). As already noted, these ratios are difficult to interpret—partly because they are affected by changes in census coverage.11 However, it is worth mentioning that—continuing an earlier trend—the masculinity of the child population increased during 1971‒2011. Indeed, the sex ratio (m/f) of children aged 0‒6 years rose from 1.04 in 1971 to 1.09 in 2011. From the 1980s onwards, this rise was increasingly a reflection of sex-selective abortion. Indeed, by 2001‒11 the national sex ratio at birth appears to have risen to about 1.08 compared to an expected figure—i.e. without sex-selective abortion—of around 1.05.12

Sex-selective abortion requires access to prenatal sex determination techniques (e.g. ultrasound) and it may also have been facilitated by the greater availability of abortion resulting from the 1971 Medical Termination of Pregnancy (MTP) Act. To try to restrict the spread of sex-selective abortion, in 1994 the government passed the Pre-natal Diagnostic Techniques (Regulation and Prevention of Misuse) Act. This enabled the prosecution of people who were determining the sex of foetuses so that female foetuses could be aborted. In fact, however, prosecutions under this Act (and subsequent legislation) were rare. And the spread of sex-selective abortion, out from its initial ‘core’ area in Punjab and Haryana, proved very difficult to restrict. Indeed, the practice became more prevalent between 1991 and 2011—especially, but not only, in much of the north and the west.13 However, despite public anxiety on the issue it should be noted that only a small fraction of the abortions performed each year—probably more than 7 million by 2011—were undertaken for the purpose of sex-selection.14

Two ways in which different population trends seem to have interacted during 1971‒2016 deserve mention here. First, given the very strong desire of most married couples to have a son, fertility decline probably helped to fuel the spread of sex-selective abortion. This was because women were less likely to give birth to a son if they had only one or two births—as opposed to, say, five or six. Thus, with prenatal sex determination services becoming increasingly—if discreetly—available, the fact that women were having fewer births raised the demand for such services to help ensure the birth of a son. This was especially the case among better-off, more educated couples—people who most wanted to have only one or two children and who could also afford the associated private medical costs.15

The second interaction was that the spread of sex-selective abortion may have contributed to the faster rise of female life expectation (compared to that of males). This is because sex-selective abortion was used to avoid the development of foetuses which, if they were not aborted, would often have become unwanted daughters (e.g. second- or third-born girls). Therefore, the spread of sex-selective abortion presumably worked to raise the proportion of female children who were wanted—and so better treated—and reduce the proportion who were unwanted—and so at greater risk of neglect and early death.16

In contrast to the period before 1971, mortality improved at a slower rate in urban than in rural areas during 1971‒2016. Thus, according to the SRS, in 1970‒75 life expectation in urban areas (58.9 years) exceeded that in rural areas (48.0) by 10.9 years. However, by 2011 the corresponding estimates (71.0 and 66.3 years) suggest that the gap had narrowed to about 4.7 years. Relatedly, in 1971 the SRS urban and rural infant mortality rates were respectively 82 and 138 infant deaths per 1,000 live births. However, by 2011 the figures were 29 and 48 per 1,000—a much smaller absolute differential, though still distinctly to the advantage of the urban population.17 India’s infant mortality rate fell from 129 to 44 per 1,000 between 1971 and 2011. By 2011, about half of all deaths under the age of five happened during the neonatal period, i.e. the first month of life.18

Another feature of the mortality decline was a major reduction in the share of deaths that were caused by infectious and parasitic diseases, and a corresponding rise in the share caused by non-communicable diseases. As might be expected, as infectious and parasitic diseases were brought under varying degrees of control, so a ‘core’ of ailments, which were more degenerative in nature, and which tend to afflict adults, became more prominent.

Despite efforts to collect cause-of-death data by so-called ‘verbal autopsy’ methods, evidence on the topic remained unreliable. Nevertheless, data collected by the Registrar General for rural areas suggest that, due to progress against diseases such as malaria, smallpox, and cholera in the 1950s and 1960s, by 1971 only about half of deaths were due to communicable diseases. Moreover, by the early 1990s the fraction was only about a quarter. Diseases which, in relative terms, became less prominent included tetanus, pneumonia, dysentery, kala-azar, and measles. But the fraction of rural deaths from typhoid, malaria, and—at a distinctly higher level—tuberculosis (TB) remained stubbornly high. Non-communicable ailments which became more prominent included cancers, heart disease, bronchitis, asthma, and stroke. Road traffic deaths also increased in importance. In addition, environmental and lifestyle factors featured more in the country’s mortality and health profiles—for example, in relation to diabetes, hypertension, obesity, and ailments linked to the consumption of tobacco, alcohol, and paan.19 From the 1990s onwards progress was made against malaria and TB. Nevertheless, though much improved, the population’s general health status in 2016 left much to be desired.

Many of the developments behind mortality decline during 1971‒2016 were similar to those of the 1950s and 1960s. Thus, according to the census, between 1981 and 2011 the literacy rate of the population aged 7 years and over rose from 43 to 73 per cent. Also, attitudes towards dealing with illness continued to become more secular and practical. There were significant advances in the provision of clean water. But progress with respect to sanitation was more modest. In 2011 only 60 and 24 per cent of urban and rural households respectively were estimated to have access to ‘improved sanitation facilities’.20

Health Care Provision

The use of private health care services increased greatly. The lacklustre service provided at many government health centres contributed to this. Moreover, especially from the 1990s onwards, successive governments found it opportune to favour the expansion of the private health care sector. In this context, in 2005 a World Bank report commented that:

In the poorest states, such as Bihar and Uttar Pradesh, the public sector is completely dysfunctional, and no effective alternatives to the private sector exist. At the other end of the spectrum, in the richest states, such as Punjab and Maharashtra, much of the population can afford and prefers private services.21

Back in 1946 the Bhore Committee estimated that fewer than 10 per cent of the country’s medical institutions were supported wholly by private organizations and individuals. The figure was probably not vastly greater in 1971. However, towards the end of the period under review private spending was estimated to account for about three-quarters of all expenditure on health.22 This represented a major change compared to the 1950s and 1960s.

Nevertheless, there was substantial progress in overall health care provision. For instance, according to the 2015‒16 NFHS about half of women who had given birth in the previous 5 years had received at least four antenatal health care visits. And about 79 per cent of births in the previous 5 years had occurred in an institutional setting (e.g. a primary health centre (PHC), municipal hospital, or other modern health facility).23 That said, the fact that such figures represented progress reflected the dismal circumstances of the early 1970s. There is no doubt that government health care services often remained of very poor quality—especially in the rural areas of some northern states.

As in the 1950s and 1960s, much mortality decline during 1971‒2016 resulted from low-cost health interventions which cut death rates largely irrespective of socio-economic conditions. Tetanus provides a good illustration. According to the SRS, the infant mortality rate in Uttar Pradesh (UP) in the early 1970s was extremely high—at roughly 180 deaths per 1,000 live births. Research suggests that about 60 per cent of infant deaths in UP at the time occurred in the first month of life, and that about two-thirds of these neonatal deaths were due to tetanus.24 Therefore tetanus was responsible for perhaps 40 per cent of all infant deaths in UP, and the figure was about 15 per cent for India as a whole.25 The prevalence of tetanus reflected the very unhygienic conditions in which many women gave birth—cow-dung, for example, often being used to dress the umbilical stump. However, tetanus mortality could be reduced relatively easily by educating village midwives, providing them with sterile equipment (e.g. clean razors and dressings) and, above all, by injecting pregnant women with tetanus vaccine—which transfers a high degree of immunity to the unborn child. Towards the end of the period under review, most pregnant women were being vaccinated, as were most new-born children.26 Deaths from tetanus were not eliminated, but they were greatly reduced.

The NFHS surveys supplied evidence on the use of sterile razors and tetanus vaccine. Data collected during 2005‒06, for example, indicated that clean razors were being used in about 90 per cent of home-delivered births.27 The surveys also produced useful information on the progress of other low-cost interventions, such as the provision of vitamin A supplements to children (e.g. to reduce cases of blindness); the use of antibiotics to combat pneumonia; knowledge of parents regarding the treatment of diarrhoea (e.g. through oral rehydration); the use of iodized salt in cooking (to reduce cases of iodine deficiency); the administration of folic acid/iron tablets to pregnant women to combat anaemia; and the provision of antenatal and postnatal health care services.28 The NFHS also furnished data on the most powerful low-cost health intervention—the child immunization campaigns (see the section on immunizations and health care below).

Fertility, Marriage, and Sterilization

India’s total fertility rate probably more than halved between 1971 and 2016 (see Figure 9.1). Fertility decline in the 15‒19 age group was chiefly due to the rise in the age of women at marriage. According to rough measures derived from the 1971 and 2011 censuses, the average age of females at marriage rose from about 17 to around 22 years (see Table 9.1). Relatedly, whereas 56 per cent of women aged 15‒19 were classed as ‘married’ in 1971, by 2011 the figure was just 12 per cent.


Figure 9.1 Age-specific fertility rates for India, 1971 and 2011

Sources: Registrar General, India (1999, 2013).

Echoing the title of Sarda’s legislation of 1929, the Child Marriage Restraint Act of 1978 raised the legal age at marriage for women and men to 18 and 21 years respectively. Nevertheless, girls younger than 18 continued to be married—especially in much of the north—and this led to the strengthening of legislation in the Prohibition of Child Marriage Act of 2006.29 These Acts may have influenced public opinion, slightly.30 But there were other reasons why women were being married later. In particular, the promotion of education for girls meant that they remained in school longer (i.e. to higher ages) and perhaps also experienced a widening of their personal horizons. In addition, the increase in the age of women at marriage may have reflected a ‘marriage squeeze’ effect. In short, mortality decline meant that there was a slight rise in the ratio of (younger) marriageable women to (older) marriageable men. This presumably made it slightly harder for young women to find partners. The fact that the age at marriage for men increased more slowly—from about 23 years in 1971 to 25 years in 2011—can be seen as consistent with this explanation.31

However, marriage remained a resilient institution in India. There were no signs of any significant separation of childbearing from marriage. Most marriages continued to be arranged. The costs associated with divorce and separation remained extremely high. Moreover, inter-religious and inter-caste unions were infrequent and widely criticized. The fate of almost all young girls was to be dependent on their parents before being married off, and then to be reliant on their husband, or if they became widowed reliant on a son. Young boys were socialized for lives in which their family duties—including their responsibilities to their parents—were expected to be paramount.32 Towards the end of the period 1971‒2016 there were indications that some young urban women were becoming more active in choosing their husbands—usually in combination with their parents—and that inter-caste marriages were increasing. There was also evidence that the degree to which spouses first met each other on their wedding day was falling. Nevertheless, these changes were relatively modest.33

Most fertility decline during 1971‒2016 was due to increased use of birth control by women in their late twenties and older ages (see Figure 9.1). The largest absolute falls in fertility occurred in the age groups 25‒29, 30‒34, and 35‒39—although women aged in their forties experienced an even greater fall in percentage terms. The fact that fertility decline was especially sizeable at later reproductive ages is as expected—because it was at these ages that women had higher order (e.g. fourth, fifth, sixth, etc.) births. But the way in which contraceptive methods were used to achieve the decline in fertility was unexpected.

As discussed in Chapter 8, in the early 1970s the family planning programme was mainly concerned with performing sterilizations. Indeed, most of the little contraceptive ‘protection’ that was afforded to people at that time came from providing vasectomies. Nevertheless, it was widely thought that future fertility decline would require much greater provision of reversible forms of contraception. For instance, writing in the 1970s, the distinguished analyst Dorothy Nortman remarked of India’s prospective requirement for birth control that it would:

imply a surge of demand by younger couples for nonpermanent methods of birth control—a demand by couples who wish to postpone and space rather than terminate births. In programmatic terms this means a demand for IUDs, the pill, for condoms, and for abortion…It is plainly obvious however that, given its heavy emphasis on sterilization, the program is now ill prepared for such a change.34

Yet while government officials often commented on the necessity of providing people with a choice of contraceptive methods, in practice no major change in the orientation of the family planning programme took place. Instead, the fall in fertility during 1971‒2016 was achieved overwhelmingly through sterilization.35

But whereas in the early 1970s vasectomy was the main method offered, most of the subsequent reduction in fertility came from the provision of tubectomy (i.e. tubal ligation). Indeed, Indian doctors pioneered the development of less invasive forms of female sterilization—especially ‘mini-lap’ (i.e. mini-laparotomy) procedures, which employed small-incision surgical techniques. By the 2001‒11 decade female sterilization accounted for about three-quarters of all modern contraceptive use in India. Moreover, NFHS data suggested that many women did not want to use reversible contraceptives—partly due to concerns about their side effects on health.36

One outcome of the reliance on female sterilization was the emergence of an exceptional pattern of family formation as fertility reached low levels in some areas (especially in the south). In brief, women would marry at a young age—e.g. while in their late teens—have two births in quick succession, and then get sterilized. By 2001‒11, for example, the median age at sterilization of currently married women in Andhra Pradesh (AP) and Karnataka was put at 25 years.37 A related change was the increasing concentration of childbearing in the 20‒24 age group (see also Figure 9.1). Thus in AP, Karnataka, and Maharashtra in 2011, more than half of the overall total fertility rate was accounted for by women in this single five-year age group.38

This brings us to geographical aspects of India’s fertility decline. Research by Guilmoto and Irudaya Rajan on district-level data was mentioned in Chapter 8.39 The work suggests that fertility decline originated in coastal areas of the south and, a little later, in Punjab in the north. Drawing on their research Figure 9.2 shows the evolution of the decline during 1966‒91 using a proxy indicator of fertility. It is clear that fertility was falling in much of peninsular India by 1966‒71—a development which may have begun in a few coastal districts in the late 1950s. By 1976‒81 fertility decline had spread both inwards and northwards throughout much of the south. In addition, relatively low fertility levels had also been achieved in Punjab and neighbouring areas (e.g. in Himachal Pradesh). In 1976‒81 the remaining high fertility area formed a distinctive northern belt. It stretched from areas of Rajasthan in the west, through Madhya Pradesh (MP) and UP, to parts of Bihar in the east. By 1986‒91 the belt had contracted further. Indeed, by this time fertility was falling—albeit often slowly—in most of the country’s districts, and was approaching two births per woman in the far south.


Figure 9.2 Fertility indices based on district-level census data, 1966‒71, 1976‒81, and 1986‒91

Source: The maps are based on coloured maps presented in Guilmoto and Irudaya Rajan (2001a: 103‒4).

Using data from the 2001 census, Guilmoto and Irudaya Rajan estimate that in 1996‒2001 there were still districts in Rajasthan, MP, UP, and Bihar (including the then new state of Jharkhand) where total fertility exceeded five births.40 Estimates made using data from the 2011 census confirm that fertility had fallen further by 2006‒11. Nevertheless, remnants of the high fertility belt persisted. In particular, there were still quite a few districts in Bihar, Jharkhand, and UP where total fertility exceeded four births and some elsewhere too—notably in Rajasthan. At the other end of the spectrum, however, fertility was in the vicinity of 1.5 births in many southern districts—especially in Kerala and Tamil Nadu. Looking at the country as a whole, Guilmoto and Irudaya Rajan refer to the overall process of decline as being ‘continuous but extremely slow’.41

Regional, Religious, and Age-Structural Dimensions

Using estimates for the major states, Table 9.2 presents a summary of the country’s regional demography around 2011. According to the SRS, total fertility was less than two births in almost half the states. The main contrast was between the four southern states in Table 9.2—where fertility decline had begun early, and where by 2011 fertility had been low for some time—and the four northern states of Rajasthan, MP, UP, and Bihar—where, from higher initial levels, fertility had really only begun to decline in the late 1970s. In these northern states total fertility was still three births per woman, or more, in 2011. Elsewhere, it was below two births in Punjab, Maharashtra, and West Bengal, and only moderately higher in Haryana, Gujarat, Odisha, and Assam.

Table 9.2 Demographic estimates for India’s major states around 2011 arranged by broad regional groupings

Columns: Population (millions) | Sex ratio (m/f) | Per cent urban | Annual rate of growth over 1971‒2011 (%) | Life expectation at birth (years) | Total fertility (per woman) | Death rate (per 1,000) | Birth rate (per 1,000) | Rate of natural increase (per 1,000)

Rows: the major states, arranged by broad regional groupings (including Uttar Pradesh, Madhya Pradesh, Andhra Pradesh, Tamil Nadu, and West Bengal).
Notes: The 2011 census populations for Uttar Pradesh, Bihar, and Madhya Pradesh shown in column (i) are inclusive of the populations of the new states created from them in 2000 of Uttarakhand (10.1m), Jharkhand (33.0m), and Chhattisgarh (25.5m) respectively. The populations shown are therefore comparable to the figures in Table 8.6. However, all other figures for Uttar Pradesh, Bihar, and Madhya Pradesh refer to the populations of the state territories as they were in 2011 (i.e. excluding the new states). The demographic measures in columns (v) to (x) are SRS estimates.

Sources: Census of India (2011a); Registrar General, India (2013).

By 2011 life expectancy in every state was much higher than it had been in the early 1970s (see Tables 9.2 and 8.6). Moreover, in every state—including now those in the north—female life expectancy exceeded that of males. The very masculine population sex ratios shown in Table 9.2 for the northern states—especially Punjab and Haryana—reflected the effect of past excess female mortality and the newer influence of sex-selective abortion. Life expectancy for both sexes combined in 2011 appears to have been about 75 years in Kerala, and it probably exceeded 70 years in Punjab, Tamil Nadu, and Maharashtra. On the other hand, the SRS figures suggest that life expectancy was 65 years or less in UP, MP, Odisha, and Assam.

Because fertility fell significantly earlier in the south—and from a lower level—populations in the south grew much less during 1971‒2011.42 Thus, taken together, the four southern states grew at an average annual rate of 1.5 per cent, while the four large northern states grew at 2.2 per cent. The effect of these differences can be illustrated by the fact that during 1971‒2011 the populations of Tamil Nadu and Uttar Pradesh grew by about 75 per cent and 140 per cent respectively. Comparison of the state-level growth rates for 1971‒2011 and the SRS rates of natural increase for 2011 given in Table 9.2 suggests strongly that by 2011 population growth rates were falling everywhere. Nevertheless, the rates of natural increase indicated for Rajasthan, MP, UP, and Bihar in 2011 were still around 2 per cent per year. The higher rates of natural increase in these northern states reflected their higher birth rates. There was little difference in the death rates of the country’s states—which fell in the (narrow) range of 6.2‒8.5 per 1,000. In contrast, birth rates varied between 15.2 and 27.8 per 1,000 (see Table 9.2).
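The compound-growth arithmetic linking these annual rates to total growth over the forty-year period can be checked directly. The sketch below is purely illustrative; the rates are those quoted above, and `total_growth` is a hypothetical helper, not anything from the source:

```python
def total_growth(annual_rate, years):
    """Percentage increase implied by a constant annual growth rate."""
    return ((1 + annual_rate) ** years - 1) * 100

# ~1.5% per year over 1971-2011 (the four southern states combined)
print(round(total_growth(0.015, 40)))  # 81

# ~2.2% per year (the four large northern states) -- consistent with the
# roughly 140 per cent growth cited for Uttar Pradesh
print(round(total_growth(0.022, 40)))  # 139
```

Note that the four-state southern average of 1.5 per cent implies somewhat more than the 75 per cent growth cited for Tamil Nadu alone, whose rate was evidently a little below that average.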

One result of the different population growth trajectories was a shift in the regional balance of India’s population. Thus in 1971 the four large northern states contained 38.7 per cent of the population. But by 2011 the same territory (i.e. inclusive of the new states of Uttarakhand, Jharkhand, and Chhattisgarh) held 42.4 per cent of the population. Conversely, in 1971 the four southern states contained 24.7 per cent of the population. However, by 2011 the share was 20.8 per cent. The northern states had also grown faster before 1971, and they would do so after 2011.43 So the shift in the regional composition of the population during 1971‒2011 formed part of a significantly larger picture.

Fertility differentials also produced changes in the country’s religious composition. The share of Muslims in the population rose from 11.2 to 14.2 per cent between the 1971 and 2011 censuses.44 The main reason was that, in general, fertility decline among the country’s Muslims began somewhat later, and from a slightly higher initial level. Therefore the birth rate of Muslims was higher than that of Hindus. Part of the explanation for the slower fertility decline of the Muslim population can be ascribed to their geographical distribution and relatively deprived socio-economic status (although, interestingly, they seem to have experienced slightly more favourable mortality levels than the rest of the population). However, residential and socio-economic factors explain only part of the slower fertility decline. In this context, it has been suggested that feelings of social exclusion and insecurity led some Muslims to be slower in adopting birth control.45 It may also be significant that some Muslims place particular stress on religious forms of education. Certainly, by most conventional criteria the educational status of Muslims in India remained particularly deficient.46 Concerns about such issues led to the appointment of an investigatory Committee, chaired by the former judge Rajinder Sachar. The Committee submitted its report in 2006. Among other things, the Sachar Committee thought that replacement-level fertility might be reached about 10 years later among Muslims than among the general population, and that the share of Muslims might eventually stabilize at around 19 per cent (it was about 10 per cent after Independence in 1947).47

The decline of fertility also changed the country’s age composition. Figure 9.3 shows the age/sex structure of the population, in absolute terms, according to the 1971 and 2011 censuses. It illustrates both the considerable growth of the population and the change in age structure. In 1971 about 42 per cent of the population was aged 0‒14 years, but by 2011 the figure had fallen to around 31 per cent. Relatedly, the median (i.e. central) age of the population rose from about 19.4 to 25.1 years between 1971 and 2011.48 Fertility decline meant that by 2011 the base of the country’s age pyramid was beginning to be undercut. Indeed, comparing estimates for 2000‒05 and 2010‒15 suggests that there was almost a 7 per cent fall in the number of births.49


Figure 9.3 The population of India by age and sex, 1971 and 2011

Sources: Registrar General, India (1975, 2011a).

Due to their later fertility declines, states like Rajasthan, MP, UP, and Bihar had much younger populations in 2011. Indeed, the median ages of these populations were close to 20 years. On the other hand, challenges arising from population ageing were beginning to be encountered in more advanced states like Kerala and Tamil Nadu—where the median ages in 2011 were about 30 years and rising briskly.

In concluding this section, it is worth underlining that the occurrence of fertility decline in India does not seem to have been closely tied to conventional indicators of socio-economic development. It was undoubtedly the case that fertility fell somewhat earlier in urban than in rural areas, among better-educated couples, and among people who had access to modern forms of communication (e.g. radio, television, newspapers). So socio-economic factors did have some influence on the timing of fertility decline within states. That said, it is important to note that in states where fertility was low in 2011‒16 it was low in all sections of the population—largely irrespective, for example, of whether people lived in urban or rural areas, were rich or poor, or were Hindu or Muslim. The essential point is underscored by research which found that although the education of women was thought to be a potent cause of fertility decline, in fact most of the fertility decline that occurred during the 1980s and 1990s happened among women with little or no education.50

We have already remarked on an apparent disconnect between mortality decline and the pace of economic growth. Something broadly similar seems to have occurred with respect to fertility. Thus K. S. James notes that fertility decline often occurred largely independently of substantial socio-economic changes. And Guilmoto and Irudaya Rajan observe that the effect of economic growth on fertility decline seems to have been minimal.51 As we shall see, however, the different demographic growth trajectories experienced by the north and the south of India during 1971‒2016 probably did have a significant impact on the economic growth trajectories of the north and the south.

Having reviewed population trends during 1971‒2016, we now consider relevant history of the period. This takes us back to the unpromising 1970s.

The 1970s and 1980s

After her election victory in 1971, Mrs Gandhi’s leadership proved populist and ineffectual.52 The early 1970s were very difficult years. Large areas of the country were afflicted by severe drought. There were food shortages and rising levels of unemployment. Then in 1973 the international price of oil shot up—leading to inflation and government cutbacks which, among other things, affected spending on both the family planning programme and the SRS.

The droughts of the early 1970s were part of what was then called a ‘world food crisis’. This affected many countries, and was related to the occurrence of a strong El Niño-Southern Oscillation (ENSO) during 1972‒73. Although in economic terms it was comparatively developed, Maharashtra was badly hit. The state suffered three years of drought, culminating in the monsoon failure of 1972—which resulted in a huge loss of crops and great hardship during much of 1973. The worst affected districts lay in the ‘rain shadow’ area behind the Western Ghats.53 The crisis eased with better rainfall in the second half of 1973.

One response of cultivators to the crisis was migration to urban areas in search of work and food. The chief destinations were Bombay (i.e. Mumbai) and neighbouring Thana, although Pune and Nagpur received migrants too. The birth rate in Maharashtra also fell slightly—presumably due to a decline in the frequency of sexual intercourse among married couples.54

Opinions differ as to the adequacy of the government of Maharashtra’s response. Given previous experience, the main strategy was to provide employment on public relief works at a subsistence cash wage. In Drèze’s view this ‘was eminently successful…in drawing food into deficit areas through the generation of purchasing power in the right hands, at the right time and in the right places’.55 Other observers were more critical.56 Nevertheless, it does seem that the relief efforts were quite well-targeted. For instance, much higher proportions of people received assistance in those districts where food production was most reduced. The evidence suggests that during the peak period of privation—i.e. January to September 1973—around 10 per cent of the population of the rain shadow area were on public relief works, and in some districts—e.g. Bhir, Osmanabad, and Sholapur—the figure approached 20 per cent.57

There was certainly an increase in mortality. Analysis of the relatively reliable data for Maharashtra suggests tentatively that in 1972 and 1973 there may have been about 130,000 excess deaths.58 However, some of these deaths occurred to migrants from other states—notably Madhya Pradesh. Moreover, in interpreting the figure of 130,000 it should be remembered that it refers to a two-year period and a state with around 50 million people. To reiterate, Maharashtra was not the only state affected by severe drought. However, partly because the operation of the SRS was disrupted in the second half of 1973 (due to the government spending cuts) it is difficult to assess excess mortality for the entire country. For 1972 and 1973 it may have been a few hundred thousand.59 But there is no doubt that because of crucial political, epidemiological, policy, and other changes the number of deaths was small compared to many similar events in earlier times.

The 1971 census results suggested that the rate of population growth had increased slightly between 1951‒61 and 1961‒71. And the family planning programme was in a dismal condition in the early 1970s. As noted in Chapter 8, the introduction of mass sterilization ‘camps’ had raised the number of sterilizations to 3.1 million in 1972‒73. But the rise was short-lived. In 1973‒74 the figure fell back to 0.9 million, and in 1974‒75 it was only 1.3 million. Especially in parts of the north, the family planning programme was also affected by bogus claims that the use of contraceptives by Hindus would lead to their becoming outnumbered by Muslims. Moreover, on a wider canvas, by 1974 there was resigned commentary in some newspapers that India’s people were not yet ready for birth control. Relatedly, politicians avoided the issue. Indeed, while supportive in private, Mrs Gandhi rarely endorsed the family planning programme in public at this time.60

With this as background, in 1974 the United Nations convened a World Population Conference in Bucharest. This saw a highly charged debate between, on the one hand, some western countries—notably the United States—who argued that family planning was a way of reducing population growth and raising living standards and, on the other hand, many developing countries (supported by the former Soviet Union) who rejected this position. India inclined to the latter camp. Karan Singh, the country’s Health and Family Planning Minister, led the national delegation to the conference and helped to generate the catch-phrase ‘development is the best contraceptive’—a remark that was widely interpreted as being lukewarm (at best) about family planning.61

The Emergency and its Aftermath

High food and energy prices contributed to increasingly chaotic conditions in India in 1974 and the first half of 1975. Then, shaken by a court ruling which challenged her election in 1971, in June of 1975 Mrs Gandhi declared a national state of Emergency—a situation which continued until March 1977. The Emergency occurred in the course of the Fifth Plan period.62 During the Emergency the press was censored, political opponents of Mrs Gandhi were imprisoned, and constitutional and civil rights were suppressed. The Emergency saw the Prime Minister’s impetuous younger son, Sanjay, instigate several autocratic campaigns. One entailed large-scale slum clearance—often with scant attention as to how many families were to be rehoused. However, the most notorious campaign involved a drive to reduce the birth rate. Sanjay Gandhi declared that family planning efforts must be afforded ‘the utmost attention and importance because all our industrial, economic, and agricultural progress would be of no use if the population continued to rise at the present rate’.63

During the Emergency many politicians suddenly became very interested in family planning issues.64 And the central government exerted heightened sway over the implementation of the family planning programme down to the grass-roots level (which previously had been more the concern of the states). In addition, the government placed great stress on setting ‘target’ numbers of ‘acceptors’—for example, at the state, district, and PHC levels—and on the achievement of the targets. Addressing a conference of the Association of Physicians of India in January 1976 Mrs Gandhi declared:

We must now act decisively, and bring down the birth rate speedily to prevent the doubling of our population in a mere 28 years. We should not hesitate to take steps which might be described as drastic. Some personal rights have to be kept in abeyance, for the human rights of the nation, the right to live, the right to progress…65
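The ‘doubling … in a mere 28 years’ in this statement corresponds, by the standard doubling-time formula, to an annual growth rate of about 2.5 per cent. A quick illustrative check (the function name is our own, not from the source):

```python
import math

def doubling_time(annual_rate):
    """Years for a population growing at a constant annual rate to double."""
    return math.log(2) / math.log(1 + annual_rate)

# A doubling time of about 28 years corresponds to an annual growth
# rate of roughly 2.5 per cent.
print(round(doubling_time(0.025)))  # 28
```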

Ten months into the Emergency, in April 1976, a National Population Policy was approved by Parliament in Delhi. This involved measures such as (i) linking the size of central government financial allocations to the states according to their family planning performance; (ii) raising the cash payments made to individuals who were sterilized; (iii) changing employment rules so that government workers with fewer children received favoured treatment (e.g. in housing); (iv) requiring that states paid greater attention to improving levels of female literacy; and (v) allowing states to introduce legislation to enable the compulsory sterilization of people with large numbers of children (legislation that was not actually enacted in any state).66 Also in 1976, the country’s constitution was changed so that the number of representatives each state had in the Lok Sabha (i.e. the Lower House of Parliament) was fixed according to the size of the state’s population enumerated in the 1971 census. The rationale for this change—which was to be revoked after the 2001 census—was that states which were more successful in lowering their birth rates should not be penalized in terms of their political representation in Delhi.67

The Indian Youth Congress (part of Mrs Gandhi’s ruling party) was used by Sanjay Gandhi to spread the family planning message. And because it was known that he had his mother’s backing, politicians in most states were eager to instigate measures—both incentives and disincentives—consistent with Sanjay’s aims. For instance, in Andhra Pradesh the government declared that those of its employees who were sterilized would receive a pay rise. In Bihar married couples with three or more children received lower public food rations. And in Uttar Pradesh the state government withheld pay from family planning workers who failed to meet their sterilization targets. Indeed, state and local authorities often went well beyond any proposals emanating from Delhi. Chiefly in parts of the north, there was sometimes ruthlessness and coercion in the ‘recruitment’ of sterilization acceptors.68

In 1975‒76 there were 2.7 million sterilizations—slightly above the target of 2.5 million. Then, with the Emergency fully underway, the national target for 1976‒77 was set at 4.3 million. Nearly all states declared that they would surpass their target. According to the official statistics, the number of sterilizations during 1976‒77 was 8.3 million—6.2 million vasectomies and 2.1 million tubectomies (see Figure 9.4). These numbers may be overstatements—because family planning workers may have exaggerated their performance in order to meet, or exceed, their targets. Nevertheless, with the huge spike in vasectomies, there is no doubt that there was a sharp rise in the proportion of women who were ‘protected’ against pregnancy. Indeed, in the two-year period from 1974‒75 to 1976‒77 the estimated ‘couple protection rate’—i.e. the proportion of married couples estimated to be effectively protected against pregnancy—rose from 14.8 to 23.5 per cent. The core of this increase, deriving specifically from sterilizations, was a rise from 12.6 to 20.7 per cent.69


Figure 9.4 Annual male and female sterilizations, India, 1969‒70 to 2010‒11

Sources: Visaria and Visaria (1981: 40‒1); Government of India (2011: Tab. B6).

Because of problems in interpreting the statistics, it is difficult to relate the rise in the couple protection rate to a change in fertility. Nevertheless, according to the SRS the total fertility rate fell from 4.9 births in 1975 to 4.5 births in 1977. The eminent demographer Jack Caldwell suggested that the policies of the Emergency might, if sustained, have cut India’s total fertility rate to around two births per woman by sometime in the 1980s.70

In January 1977 Mrs Gandhi suddenly announced that an election would be held in March. The economy had performed quite well during the Emergency, and she presumably felt that her Congress Party government would be re-elected. In fact, however, she suffered a rout at the hands of the Janata (i.e. People’s) Party—a loose coalition led by Morarji Desai. The aggressive nature of the family planning campaign undoubtedly contributed to Mrs Gandhi’s defeat. She lost the votes of many poor people, and Muslims—sections of society which previously had tended to support her. Moreover, the rout was greatest in the north, where the worst excesses of the campaign had happened.71

The new Janata government quickly established a Commission to investigate the Emergency. It was chaired by J. C. Shah, a former Chief Justice. The Shah Commission’s ‘Third and Final Report’ was submitted in August 1978. However, with Mrs Gandhi’s return to power in the election of January 1980, the Commission’s report was suppressed. Nevertheless, the Commission recorded that it was informed of 1,774 cases where people might have died as a result of poorly performed sterilizations, and 548 cases where unmarried people were sterilized.72 The exact numbers will never be known. But an informed observer, Davidson Gwatkin, commented:

[B]y the time the intensive family planning drive came to an end, millions had suffered harassment at the hands of government officials bent on implementing it; many, perhaps hundreds, had died from it; the political leaders who had willed it were out of power and in disrepute; and the [family planning] program itself was in total disarray.73

The number of vasectomies would never recover. The backlash against the Emergency meant that only around 1 million sterilizations were performed in 1977‒78 (see Figure 9.4). In a patriarchal society, where husbands usually held sway, tubectomy soon became—and henceforth remained—the main birth control method. Moreover, the family planning programme’s image was badly tarnished.

Three comments on the Emergency are worth making here.74 First, it is clear that the unrestrained operation of ‘political will’ over a short period did not achieve a sustained rise in family planning performance. Indeed, the events of 1975‒77 were counterproductive. Second, the country’s diversity complicated the task of raising family planning acceptance. This included, but went beyond, matters of religion. Finally, the fact that India was ultimately a democracy was crucial in influencing what could—and could not—be accomplished.

The Janata government was fractious and brief, but it maintained the basic structure of the family planning programme. Nevertheless, the name of the programme was changed from ‘Family Planning’ to ‘Family Welfare’. The change was made to emphasize that acceptance of contraception—including sterilization—was purely voluntary. After the Emergency, every government was careful to stress this. Indeed, in general, little resolve was shown in relation to family planning activities. Political leaders approached the subject of birth control with great caution. This applied to Mrs Gandhi herself when, early in 1980, she became Prime Minister again (Sanjay Gandhi died in a plane crash later that year).

For the first year of her new premiership Mrs Gandhi said little about family planning. However, the 1981 census results caused considerable dismay in revealing a larger population than was expected. Apparently, and despite the Emergency, there had been virtually no change in the population growth rate (Table 9.1).75 A statement titled the New Delhi Declaration of Parliamentarians referred to the ‘phenomenal rise in the country’s population over the last 10 years’ and called for a major increase in funding for family planning activities. Mrs Gandhi stated that she was ‘shocked’ by the census results and that the time had come to ‘revamp and revitalize’ what was now called the ‘family welfare’ programme.76 With much stress on its voluntary nature, Mrs Gandhi’s return to power probably helped to restore some confidence in the programme.77 Between 1980 and 1984—the year of her assassination—the number of sterilizations (primarily tubectomies) more than doubled (see Figure 9.4). Mrs Gandhi was succeeded by her son Rajiv, who in 1984 led the Congress Party to a major election victory.

The Sixth and Seventh Five-Year Plans covered the 1980s and maintained a ‘low key’ approach to family planning.78 The share of public spending on ‘family welfare’ remained lower than during 1969‒74 (i.e. the Fourth Plan). Both plans involved unrealistic expectations as to how rapidly—i.e. easily—the birth rate could be reduced.79 During the Sixth Plan period, 1980‒85, it became clear that demand for vasectomy had largely disappeared, and efforts were made to promote the availability of reversible contraceptives (i.e. condoms, IUDs, oral pills). Attempts were also made to improve the relationship between family planning and maternal and child health care services. The Sixth Plan saw several international agencies—e.g. the United Nations Population Fund (UNFPA), the United States Agency for International Development (USAID), the World Bank—become involved in providing technical and financial assistance for family planning activities in specific parts of the country.80 Indeed, about 15 per cent of all family planning expenditure came from external sources. In the Seventh Plan period, 1985‒90, the number of female sterilizations stagnated (see Figure 9.4). There was a rise in the reported use of reversible methods, but this seems to have been partly fictitious—family welfare workers exaggerating levels of acceptance in order to meet their targets. Nevertheless, by 1990 the official ‘couple protection rate’ exceeded 40 per cent.81

Immunizations and Health Care

Most of the substantial rise in life expectation of the 1970s and 1980s occurred after the adversities of 1972 and 1973. Mortality decline was very pronounced at young ages. Thus the death rate recorded by the SRS for the 0‒4 age group almost halved between 1976 and 1991—falling from 51.0 to 26.5 deaths per 1,000.82 This fall was underpinned by increased immunization coverage and it made a large contribution to the rise in life expectation.

In the early 1970s, immunization coverage levels against major infectious diseases were very low. Thus, averaging across the five years from 1970‒71 to 1974‒75, official statistics indicate that each year only about 340,000 pregnant women were being immunized against tetanus, and only about 900,000 children received the combined DPT vaccine—which confers protection against diphtheria, pertussis (i.e. whooping cough), and tetanus. While the immunity bestowed by these immunizations was long-lasting—and so had a cumulative effect—the numbers were tiny given that there were around 22 million births each year.83

Immunization coverage began to improve from around 1975‒78. From low levels, these years saw rises in both tetanus vaccine provision to pregnant women and the number of children receiving DPT. The initial rises were followed by progress in subsequent years. There was a similar trend in the distribution of folic acid/iron tablets to women and children—a measure introduced to reduce the huge problem of nutritional anaemia.84 The year 1978 was especially important because—influenced by the World Health Organization (WHO) and the United Nations Children’s Fund (UNICEF)—it saw the start of the Expanded Programme of Immunization. This was followed by the more ambitious Universal Immunization Programme in 1985‒86.

By 1991 the Universal Immunization Programme was operating in virtually every district in India.85 As well as DPT, it gave vaccinations for TB (through BCG), polio, and measles. Figure 9.5 illustrates the great progress that was made in the late 1970s and 1980s. Comparing official estimates for 1981 and 1990‒91, the proportion of infants immunized against TB rose from 12 to 92 per cent; for DPT the coverage level rose from 31 to 89 per cent, while for polio the figures were 7 and 89 per cent. Moreover, whereas in 1981 only 24 per cent of pregnant women were receiving tetanus vaccine, by 1990‒91 the figure was 80 per cent. Measles immunization also rose rapidly from its introduction around 1986‒87. By 1990‒91 about 86 per cent of infants were being immunized against measles.86


Figure 9.5 Annual tetanus, polio, DPT, and measles immunizations, India, 1970‒71 to 2010‒11

Sources: Jain et al. (1985: 362); Government of India (2011: Tab. B1).

There were also developments in basic health care provision. In 1978 WHO and UNICEF convened an International Conference on Primary Health Care in Alma Ata (in the former Soviet Union). India was a signatory to the resulting Declaration—which aimed to make primary health care services universally available by the year 2000. Partly as a result, Mrs Gandhi’s government introduced a National Health Policy in 1981, and specific efforts were made to improve rural health care facilities. Although it was difficult to recruit suitably qualified medical personnel to work in rural areas, so-called ‘Multi-Purpose Health Workers’ and ‘Community Health Volunteers’ (the latter role influenced by China’s experience with ‘barefoot doctors’) were trained to take maternal and child health (MCH) and family welfare services out to the rural population. By 1991 there were about 20,000 PHCs and 130,000 rural health sub-centres.87 In urban areas, more and more people were turning to private health services. Nevertheless, MCH and family planning services were also accessible through a network of public hospitals, dispensaries, and urban family welfare clinics.

One consideration behind the expansion of rural MCH services, and their closer integration with family welfare activities, was the belief that falls in infant and child mortality would help to persuade people to use birth control. It should be stressed, however, that the overall level of health service provision remained very unsatisfactory. For instance, the NFHS surveys conducted during 1992‒93 found that only about 16 per cent of rural births had occurred in a modern health care facility.88

The 1970s and 1980s saw little progress in reducing tuberculosis. It is thought that by the late 1980s between 2 and 4 million new cases of the disease developed each year, and that there were several hundred thousand tuberculosis deaths. Indeed, it is likely that TB was responsible for more deaths than any other communicable disease. The death toll from malaria was certainly much smaller than for TB, perhaps being measured in the tens of thousands. The estimated number of malaria cases briefly exceeded 6 million in the mid-1970s, but the annual number then fell back to around 2‒3 million for most of the 1980s.89

Cases of HIV/AIDS were first recognized in the United States in 1981. And HIV testing began in a few places in India in 1985. Instances of infection were soon detected among female commercial sex workers (CSWs) living in the cities of Vellore (in Tamil Nadu), Pune, and Bombay. It is likely that HIV entered the country in the early 1980s by way of a major city—and then spread through networks of CSWs and their clients. Two ‘full blown’ cases of AIDS were identified in Bombay as early as 1986. They were traced to infected blood transfusions performed in the United States. The overseas origin of these cases fitted the then widespread view in India that HIV/AIDS would not become a serious problem. This rested partly on an idealized representation of Hindu marriages—in which men were sexually faithful to their wives. However, cases of HIV/AIDS infection were soon detected among people with no foreign contacts. A network of HIV testing centres began to be created, with many collecting and analysing blood samples taken from ‘high risk’ people—for example, individuals suffering from sexually transmitted diseases (STDs). By 1990‒91 around 864,000 people had been tested, with 5.5 per cent being found positive. Those tested were unrepresentative of the country’s population, but there was clearly cause for concern.90

Rajiv Gandhi’s period as Prime Minister ended with the Congress Party’s defeat in 1989. Two events of this period merit mention. The first was the poison gas leak from the Union Carbide factory in Bhopal, Madhya Pradesh, in 1984. This was perhaps the worst industrial accident ever recorded. While the exact numbers will never be known, several thousand people died immediately from the gas leak, thousands more died later because of gas inhalation, and many tens of thousands—perhaps several hundred thousand—were afflicted by long-lasting health problems such as blindness and breathing difficulties.91 The second occurrence was India’s involvement in the civil war between the Sri Lankan government and the so-called ‘Tamil Tigers’. Among other things, the war led to the migration of tens of thousands of Tamil refugees to Tamil Nadu. It also led to Rajiv sending Indian troops to Sri Lanka during 1987‒90.

Although there had been progress in reducing the nation’s birth and death rates in the 1980s, the provisional census results released in March 1991 led to much public debate because, once more, the population growth rate had hardly changed (see Table 9.1). Two months later Rajiv Gandhi was killed by a Tamil Tiger suicide bomber.

From 1991 to c.2016

The economy grew exceptionally fast during 1991‒2016. As elsewhere in the world, this period saw a retreat of the state and increasing reliance placed on the private sector. The Five-Year Plans—from the Eighth Plan (1992‒97) to the Twelfth Plan (starting in 2012)—became much less influential. And politics also acquired a more sectarian bent—one feature of this being the rise of the Hindu nationalist Bharatiya Janata Party (BJP). In a phenomenon described as ‘Saffron Demography’ some politicians and writers exaggerated the degree to (p.242) which the share of Muslims in the population was rising.92 Recall in this context that the Sachar Committee submitted its report on the country’s Muslim population in 2006.

The more stable governments of the period were those led by (i) Narasimha Rao, of Congress, during 1991‒96; (ii) Atal Bihari Vajpayee, of the BJP, during 1999‒2004; and (iii) Manmohan Singh, of Congress, during 2004‒14. The election in 2014 produced the BJP government of Narendra Modi. Reflecting the huge changes in perspective since 1947, one of his first moves was to announce the abolition of the Planning Commission.

Changing Views on Family Planning

Increasing political fragmentation during 1991‒2016 meant that the central government’s interest in population and family planning matters declined still further. And the ability of state governments to influence such matters also decreased.93 Rajiv Gandhi had fostered a shift towards democratic decentralization. Accordingly, in the 1990s village panchayats (i.e. councils)—one-third of their members being women—began to take greater responsibility for the provision of health and family planning services through PHCs and health sub-centres.

The period also saw the rise of many non-governmental organizations (NGOs) concerned with women’s rights. These NGOs, both national and international, became increasingly vocal. Consequently, the family welfare programme was questioned more and more—especially with regard to its use of targets, incentives and, above all, its reliance on female sterilization. With justification, NGOs argued that the family welfare programme imposed very disproportionate demands on women. It was also clear that the programme’s setting of targets was compromising the integrity of the official statistics on contraceptive use. Prompted by these and other concerns, in 1993 the government established a Committee to draft a new National Population Policy. The Committee, chaired by the distinguished scientist M. S. Swaminathan, submitted its report in 1994. But it would be six years before a policy was introduced.94

Pressure to reform the family welfare programme also arose from the International Conference on Population and Development (ICPD) held by the United Nations in Cairo in 1994. With women’s organizations in the lead, this conference saw a marked shift in international attitudes towards family planning. Henceforth, the priorities became serving the reproductive health needs of individual people, and improving the ‘quality of care’ that they received. The use of targets and incentives was viewed as objectionable by many of the conference delegates. Moreover, aggregate issues—such as whether population growth impeded economic growth—were seen as being (p.243) of secondary significance. The ICPD ‘Programme of Action’, to which India was a signatory, emphasized the importance of women’s rights, reproductive health, and gender equity.

Against this background, in 1996 Narasimha Rao’s government initiated the Reproductive and Child Health (RCH) Programme. This involved abandoning the use of targets at the national level. In phrases redolent of the time, the so-called ‘tyranny of the targets’ was—at least in theory—replaced by a ‘target-free’ strategy. The RCH Programme entailed the full integration of family welfare and maternal and child health services. But it also had even wider objectives. They included: the provision of reproductive health education for adolescents; the detection and treatment of STDs; and the screening of near-menopausal women for cervical and uterine cancer.95 The strongly integrative philosophy of the RCH Programme influenced many later policies. However, concern was also expressed that basic family planning services might receive reduced attention.96

On 11 May 2000 the government used the birth of a baby girl in Delhi’s Safdarjang Hospital to mark India’s population reaching one billion. The date was spuriously precise, and there were at least 60,000 other births in the country on that day. Some commentators took pride in what they saw as a national achievement. Nevertheless, the event was staged partly to highlight the challenges arising from population growth. Less than a year later, the provisional results of the 2001 census revealed a mixed picture. For the first time since 1947 there was evidence that the rate of population growth was probably falling (see Table 9.1). But the census count of 1,028 million was about 16 million higher than the Registrar General’s projection. Analysis suggested that the population had probably surpassed one billion people in 1999, or possibly even late 1998.97

Several policies relevant to population issues were initiated, or revamped, during Vajpayee’s BJP government. They included: the National Policy for the Empowerment of Women (2001); the National AIDS Prevention and Control Policy (2002); the National Health Policy (2002); and, the National Youth Policy (2003). These policies were all consistent with the RCH approach and stressed the importance of serving the needs of the poorest people. Furthermore, the objectives of these policies were reiterated in the Tenth, Eleventh, and Twelfth Five-Year Plans.98 However, the most notable policy was the National Population Policy that was introduced in the year 2000.

The National Population Policy (NPP) 2000 was a fairly comprehensive document. It began, of course, by emphasizing the government’s commitment to ‘voluntary and informed choice and consent of citizens while availing of reproductive health care services, and the continuation of the target free approach’. However, the document set some rather unrealistic goals. Thus a total fertility rate of 2.1 births was to be reached by 2010, with zero population (p.244) growth being attained by 2045. Other unlikely goals for 2010 included: achieving universal access to contraceptive services in relation to a ‘wide basket of choices’; reducing the infant mortality rate to 30 per 1,000; attaining complete child immunization against vaccine preventable diseases; and, quite unrealistically, the registration of all births, deaths, marriages, and pregnancies.99

HIV/AIDS, Tuberculosis, and Other Diseases

Another stated aim of the NPP 2000 was to restrict the spread of AIDS. Despite considerable denial of the problem, by 1991 it was clear to many experts that the country could face a significant threat from the disease. For instance: most people had no knowledge of HIV/AIDS; condom use was very low; STDs—potential co-factors for HIV transmission—were common (especially in urban areas); there was a large and unregulated market in blood and blood products; and unsterilized needles/syringes were often reused by registered and, still more, unregistered medical staff. Moreover, tuberculosis was rife, and it was known that HIV/AIDS interacted synergistically with TB, each disease bolstering the other.100

Due to such concerns, and with mounting evidence of substantial HIV infection in some parts of the population, in 1992 the National AIDS Control Organization (NACO) was created. NACO’s remit was to implement a National AIDS Control Programme. The programme developed in stages (e.g. in 1999 and 2007) with associated increases in funding.101 Building on the pre-existing network of HIV testing centres, a ‘sentinel’ surveillance system for HIV infection was established. But because the sentinel testing sites collected their blood samples mainly from pregnant women who were visiting antenatal clinics and people (mostly men) attending STD treatment clinics, analysis of the blood samples produced estimates of HIV prevalence that were potentially biased. Nevertheless, it became clear that infection levels were generally greater in the south—notably in Andhra Pradesh, Karnataka, Maharashtra, and Tamil Nadu. Moreover, in some of the small north-eastern states—such as Manipur and Mizoram—there were very high levels of HIV infection related to widespread injecting drug use. In addition, infection levels were greater in urban than in rural areas, and among men than among women. It was also obvious that, having acquired HIV by engaging in unprotected commercial sex, husbands often passed the virus on to their wives.

Estimates made on the basis of blood samples collected by the sentinel surveillance system in the early 2000s suggested that there were more people infected with HIV in India than in any other country. The level of infection among adults was put at about 0.6 per cent.102 However, the 2005‒06 round of the NFHS took blood from a nationally representative sample of people, and (p.245) analysis of the results led to a major downward revision of the scale of the epidemic. The estimated number of individuals with HIV in India in 2006 was reduced from 5.2 to 2.5 million. Relatedly, the NFHS data suggested that 0.28 per cent of people aged 15‒49 (0.36 per cent of males, and 0.22 per cent of females) were infected.103

In general, then, India experienced a low-level HIV/AIDS epidemic—especially in rural areas. Moreover, there seem to have been falls in both HIV prevalence, and the annual number of new HIV infections, from the early 2000s onwards. These falls may have reflected the ‘natural course’ of the epidemic—for example, more susceptible people being infected and dying first. But they probably also reflected NACO’s targeted interventions—for instance, to protect the supply of blood, raise awareness of HIV/AIDS among ‘high risk’ groups (e.g. lorry drivers, prisoners, and hijra (i.e. transgender people)), and encourage condom use among CSWs and their clients. In addition, especially after 2004, there seem to have been fewer AIDS-related deaths because of the government’s support for the distribution of low-cost anti-retroviral drugs, of which by then India was a leading producer.104

Tuberculosis remained the most important specific communicable disease during 1991‒2016. Indeed, there were probably more cases of TB in India than in any other country. Due to India’s dismal record of tackling the disease, in 1997 the Revised National Tuberculosis Control Programme (RNTCP) was introduced. The RNTCP focused on curing persons who were newly identified as having TB, and by 2006 it covered most of the country. The RNTCP incorporated the DOTS approach (i.e. directly observed treatment—short course) endorsed by WHO. In the application of DOTS a trained observer tried to ensure that patients followed their prescribed drug regimes. However, among other things, DOTS required that people with TB were diagnosed correctly and monitored closely, and that they had continuous access to the prescribed drugs. In short, for success, DOTS required a high degree of administrative support.105

In 2010 the remit of the RNTCP was expanded to cover everyone with tuberculosis—including people who had previously received unsuccessful treatment. And in 2012 TB was made a notifiable disease (i.e. one involving a legal requirement to report cases). Tuberculosis certainly remained a major health problem in 2016. Many infected people were undiagnosed, and many people who were diagnosed turned to the private sector for help. In the private sector, physicians and medical facilities often demanded payment for drugs which, at least in theory, were freely available at government health centres. Moreover, there was a huge amount of defective and intermittent treatment—which contributed to the growing problem of multi-drug-resistant forms of TB.106 That said, there are reasons to think that there was some progress. At any rate, (p.246) estimates made by the WHO suggest that there was a modest fall in the tuberculosis death rate after the introduction of the RNTCP.107

Progress continued to be made against other infectious and parasitic diseases; and technical and financial assistance from organizations like UNICEF, USAID, WHO, and the World Bank often helped. Mortality from both kala-azar and leprosy was largely eliminated by the early 2000s.108 Deaths from tetanus, measles, diphtheria, and pertussis fell greatly as levels of immunization coverage reached about 90 per cent by 2010 (see also Figure 9.5). To general surprise and alarm, plague broke out in Surat in 1994. This led to mass flight from the city and tarnished the country’s image. Nevertheless, the use of insecticides and antibiotics meant that the epidemic was quickly stamped out and the number of deaths was small. With the prospect of eradication, and as part of a global effort, in 1995 the central government strengthened its drive against polio with the ‘Pulse Polio’ campaign. This involved giving oral vaccine to young children. In 2014 WHO declared India free of polio.

The malaria death rate, already much reduced, fell further during 1991‒2016. In this context, relevant changes included the use of new insecticides, the expansion of diagnostic testing, the introduction of combined anti-malarial medicines (which included artemisinin), and the promotion of insecticide-treated bed-nets. Malaria remained a significant problem in specific places (e.g. rural areas of Odisha). And there was dispute about the degree to which malaria mortality had declined.109 Nevertheless, WHO estimated that in 2013 malaria was responsible for only about 1 per cent of deaths among young children, and the disease did not feature in WHO’s list of India’s ten leading causes of death. These causes—which together accounted for nearly 60 per cent of all deaths—were: ischaemic heart disease (12 per cent), chronic obstructive pulmonary disease (11 per cent), stroke (9 per cent), diarrhoeal diseases (6 per cent), lower respiratory infections (5 per cent), pre-term birth complications (4 per cent), TB (3 per cent), self-harm (3 per cent), falls (3 per cent), and road traffic injuries (2 per cent).110 It is clear that non-communicable diseases and the behavioural and environmental factors affecting them—such as smoking and household air pollution—had grown hugely in prominence.

Rhetoric and Reality

During 1991‒2016 there was increasing divergence between the central government’s often rather loftily worded policies on the one hand, and what actually happened on the ground on the other. Developments which interacted and contributed to the decline of the government’s sway included the growth of the private sector, political fragmentation, and the increasing (p.247) influence of international factors. Population growth may also have contributed to the weakening of central government control (see the chapter’s conclusion). And, as elsewhere in the world, politics arguably became even more concerned with appearance than with substance.

A disparity between policies and outcomes during the period—sometimes termed a contrast between ‘rhetoric’ and ‘reality’—can be illustrated in various ways. For example, national family planning targets may have been scrapped in 1996, but many of the country’s states continued to use targets. Relatedly, although they were often disparaged, states continued to provide cash incentives for people—mainly women—who were sterilized.111 Again, the government’s RCH programme might proclaim that cervical cancer and STDs would be diagnosed and treated at government health centres, but the allocation of funding to meet these stated objectives was derisory.112 Further, despite all of the talk of a ‘wide basket of choices’ in relation to contraception, in fact family planning services stayed very sterilization-based. Table 9.3 shows trends in contraceptive use as indicated by the NFHS. Between 1992‒93 and 2015‒16 the use of ‘modern methods’ rose from about 36 to 48 per cent. But the rise was mainly due to female sterilization. The use of reversible methods remained limited. With strong opposition from women’s organizations, the use of injectable contraceptives was negligible. The proportion of women whose husbands were sterilized declined. In addition, there appeared to be little if any progress between 2005‒06 and 2015‒16 (see Table 9.3).

Table 9.3 Contraceptive use, in percentages, reported by women aged 15‒49 in India, 1992‒93, 1998‒99, 2005‒06, and 2015‒16 [percentage values not reproduced in this copy]

Methods listed: female sterilization; male sterilization; injectable contraceptives; any modern method; any method (including traditional).

Notes: Traditional methods are mainly designated as ‘periodic abstinence’, ‘rhythm’, and ‘withdrawal’. All estimates come from NFHS surveys.

Sources: IIPS (1995: 137, 2017: 2); IIPS and Macro International (2007a: 122); IIPS and ORC Macro (2000: 132).

As noted, while life expectation rose during 1991‒2016, it appears to have done so at a somewhat slower pace. Moreover, the general biomedical and health status of India’s population remained extremely poor by international standards. Thus, using data from the 2015‒16 NFHS, 36 per cent of young children (aged 0‒4) were classed as being underweight, with 38 per cent being classed as stunted. Almost 60 per cent of children aged 6‒59 months were (p.248) anaemic. The average heights of men and women aged 20‒29 years, though increasing slowly, were still only 1.56 and 1.52 metres respectively.113

As regards fertility during 1991‒2016, it became ever more difficult to reconcile trends in the SRS total fertility rate with changes in the official statistics on contraception (e.g. the couple protection rate). This was probably partly because the official statistics continued to exaggerate contraceptive use. But it was also because an increasing number of couples were turning to the private sector for contraception and abortion.114 Interest in promoting family planning also waned as an increasing number of states reached low levels of fertility. With respect to both fertility and family planning, and mortality and health, by far the greatest challenges lay in the populous northern states—where political leaders were often ineffectual and silent on these matters.

In 2011 the fourteenth synchronous census enumerated 1,211 million people (see Table 9.1 and Figure 9.3). The intercensal growth rate had fallen to 1.6 per cent per year, and the age structure had become a little older. Yet again, however, the enumerated population exceeded the Registrar General’s projection (by 18 million). Commenting on the state-level results, demographers noted that the ‘centre of population gravity’ within the country would move further northward in the coming decades—with accompanying social, economic, and political implications.115
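As an aside, the intercensal growth rate can be checked from the two census counts themselves. The sketch below assumes simple exponential growth over the roughly ten-year interval between the 2001 count of 1,028 million and the 2011 count of 1,211 million; it is an illustrative calculation, not the Registrar General’s procedure, and the function name is mine.

```python
import math

def intercensal_growth_rate(p0, p1, years):
    """Average annual exponential growth rate between two census counts."""
    return math.log(p1 / p0) / years

# Census counts in millions (2001 and 2011), as quoted in the text.
r = intercensal_growth_rate(1028, 1211, 10)
print(f"{100 * r:.1f} per cent per year")  # -> 1.6 per cent per year
```

The result, about 1.6 per cent per year, matches the intercensal figure cited in the text.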

The Urban Sector

There were huge changes in the urban sector during 1971‒2016. Whereas in 1971 about 109 million people were classed as living in urban areas, by 2016 the number probably exceeded 425 million. While average living standards in urban areas unquestionably improved, very large numbers of people continued to live in slums.116 Moreover, just as the country’s fertility decline was slow so—at least according to the official statistics—was its rate of urbanization. Table 9.4 shows that between 1971 and 2011 the level of urbanization only rose from about 20 to 31 per cent.
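The levels of urbanization quoted here are simple ratios of urban to total population. As a back-of-the-envelope check, using the total population figures of roughly 548 million (1971) and about 1,327 million (2016) quoted elsewhere in this chapter:

```python
def per_cent_urban(urban_millions, total_millions):
    """Share of the population classed as urban, in per cent."""
    return 100 * urban_millions / total_millions

# Urban and total populations in millions, as quoted in the chapter.
print(round(per_cent_urban(109, 548)))   # 1971 -> 20
print(round(per_cent_urban(425, 1327)))  # 2016 -> 32
```

The 2016 figure of roughly 32 per cent is consistent with the census-based level of 31 per cent recorded in 2011.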

Table 9.4 Measures of urbanization and estimates of international migration, India, 1971‒2011 [values not reproduced in this copy]

Columns: (i) per cent urban; (ii) urban population (millions); (iii) number of towns; (iv) and (v) average annual rate of population growth (% per year), urban and rural respectively; (vi) net international migration (000s).

Notes: The figures in columns (i) to (v) are based on census results. The growth rates refer to the intercensal decades preceding the years against which they are shown. With the exception of the first figure, which is taken from Table 8.4, the estimates of migration in column (vi) are those of the United Nations and refer to periods which differ slightly from the intercensal periods (e.g. 2000‒10, as opposed to 2001‒11). However, the difference is of negligible consequence.

Sources: Chandramouli (2013); United Nations (2013); Visaria (1997: 267).

As noted in this chapter’s section on population trends, the mortality advantage experienced by the urban population diminished during 1971‒2016. However, at the end of the period urban life expectancy probably still exceeded that in rural areas by 4‒5 years. As regards fertility, whereas during 1971‒73 the rural and urban total fertility rates from the SRS were 5.3 and 4.0 births respectively, by 2011‒13 the figures were just 2.6 and 1.8 births. Therefore, in 2016 urban fertility was below the so-called ‘replacement’ level (of about two births per woman). The birth and death rates recorded by the SRS for the rural population during 2011‒13 were 23.1 and 7.6 per 1,000 respectively—implying a rate of natural increase of about 1.6 per cent per (p.249) year. In contrast, the birth and death rates for the urban population were 17.4 and 5.6—implying a rate of only 1.2 per cent per year.117 Of course, rural to urban migration was the reason for the higher growth rate of the urban population in each intercensal decade (see Table 9.4). However, given that the urban population grew on average at about 2.8 per cent per year during 2001‒11, while the urban rate of natural increase was about 1.2 per cent, it appears that by 2001‒11 migration, rather than natural increase, had become the main driver of urban population growth.
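The rates of natural increase quoted above follow directly from the SRS birth and death rates, and the migration component of urban growth is the residual. A minimal sketch of this arithmetic (the function name is mine, not from the source):

```python
def natural_increase_pct(cbr_per_1000, cdr_per_1000):
    """Rate of natural increase (% per year) from crude birth and
    death rates expressed per 1,000 population."""
    return (cbr_per_1000 - cdr_per_1000) / 10

# SRS rates for 2011-13, as quoted in the text.
rural = natural_increase_pct(23.1, 7.6)  # ~1.55, i.e. 'about 1.6' per cent
urban = natural_increase_pct(17.4, 5.6)  # ~1.18, i.e. 'about 1.2' per cent

# Urban population growth averaged ~2.8% per year during 2001-11; the
# residual over urban natural increase is attributable mainly to migration
# (and to the reclassification of places as urban).
migration_residual = 2.8 - urban
print(round(rural, 2), round(urban, 2), round(migration_residual, 2))
# -> 1.55 1.18 1.62
```

The residual of about 1.6 percentage points exceeds the urban rate of natural increase, which is the sense in which migration had become the main driver of urban growth by 2001‒11.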

We saw in Chapter 8 that, due to the relatively fixed way in which places were defined as ‘urban’, the rates of urban growth and urbanization during 1921‒71 were probably understated. A similar point applies to 1971‒2016.118 If India had employed a less rigid approach to classifying places as ‘urban’ then the level of urbanization in 2011 would doubtless have been higher than 31 per cent.119 In fact, a few states did take steps to facilitate reclassification. Thus between 1991 and 2001 the level of urbanization in Tamil Nadu rose from 34 to 44 per cent, while between 2001 and 2011 the level in Kerala rose from 26 to 48 per cent. The sizeable nature of both these changes reflected the adoption of a more realistic definition. Another sign of pressure for a more flexible approach was the large rise in the number of towns during 2001‒11 (see Table 9.4). There was, in particular, an increased propensity to classify some places as ‘census towns’—i.e. places defined as ‘urban’ because of their demographic and economic characteristics.*

(p.250) With the exception of Kerala—where the diffuse nature of settlements complicated the task of distinguishing between ‘rural’ and ‘urban’ areas—regional variation in the level of urbanization remained largely unchanged (see Tables 8.6 and 9.2). Thus states in the west and south exhibited higher levels. The most urbanized states according to the 2011 census were Tamil Nadu (48.4 per cent), Kerala (47.7), Maharashtra (45.2), and Gujarat (42.6). In contrast, Uttar Pradesh (22.3), and, further east, Odisha (16.7), Assam (14.1), and Bihar (11.3) were largely rural (see Table 9.2).

Between 1971 and 2011 the number of urban agglomerations (UAs) with more than 1 million people rose from eight to fifty-three.120 Table 9.5 shows the UAs with more than 2 million people according to the 2011 census. Greater Mumbai was the largest, with 18.4 million inhabitants. However, whereas Mumbai’s growth rate declined between 1971 and 2011, Delhi’s was both higher and comparatively constant. Indeed, just adding Ghaziabad’s population to that of (adjacent) Delhi in 2011 gives a total of 18.7 million. While still ranking third, Kolkata’s growth trajectory clearly lacked the drive not only of Mumbai and Delhi, but also of places like Ahmedabad, Bangalore (Bengaluru), Chennai, Hyderabad, and Pune. Overall there were two especially large and dynamic urban networks in the country—one centred on the (p.251) National Capital Territory (NCT) of Delhi, the other focused on Mumbai and embracing Ahmedabad, Pune, and Surat.

Table 9.5 Urban agglomerations with populations of 2 million or more in 2011 [population figures not reproduced in this copy]

Columns: urban agglomeration (UA); state; population in 2011 (millions); approximate population in 1971 (millions).

1. Greater Mumbai; 2. Delhi (NCT of Delhi); 3. Kolkata (West Bengal); 4. Chennai (Tamil Nadu); 5. Bangalore; 6. Hyderabad (Andhra Pradesh); 7. Ahmedabad; 8. Pune; 9. Surat; 10. Jaipur; 11. Kanpur; 12. Lucknow; 13. Nagpur; 14. Ghaziabad; 15. Indore (Madhya Pradesh); 16. Coimbatore (Tamil Nadu); 17. Kochi; 18. Patna; 19. Kozhikode (Calicut).

Note: Jaipur was classified as a Municipal Corporation. NCT stands for National Capital Territory.

Sources: Registrar General, India (1975, 2011b).


With improvements in transport—from auto-rickshaws and scooters, to cars, buses, and aircraft—India’s population undoubtedly became more mobile during 1971‒2016. But that is not to say that migration as gauged by the census increased greatly. Thus, despite significant economic disparities, there does not seem to have been a marked increase in out-migration rates from poor states, or in-migration rates to better-off ones.121 In addition, an important by-product of the transport improvements was the growth of commuting—which meant, for example, that workers did not necessarily have to change their place of residence when they changed their place of work. Nevertheless, with the expansion of the population, the number of people in most migration flows certainly increased. Moreover, different communities inevitably came into closer contact. An outcome of such developments was the emergence of disputes between people who already inhabited an area—sometimes referred to as ‘sons of the soil’—and newcomers.122

Of course, there were strong similarities with earlier migration patterns during 1971‒2016. According to the 2001 census, about 30 per cent of the country’s people were ‘lifetime’ migrants—meaning that they were living in a place other than their place of birth.123 Due to the movement of women at marriage, about 44 per cent of females were classified as being lifetime migrants—compared to just 17 per cent of males. Almost two-thirds of female migrants had moved for reasons of marriage. But among male migrants the largest share (38 per cent) moved for reasons of employment. The 2001 census indicated that about 10 per cent of India’s population had changed their place of residence in the previous ten years—a very low figure by international standards.124 Most moves were rural to rural, and did not involve a change of state. Apart from marriage migration, men were much more likely than women to migrate, to do so over longer distances, and to move for reasons of work.125

As the overall level of urbanization in the country increased so a rising share of migration involved the urban sector. Thus, according to the 1971 census about 27 per cent of interstate movements in the previous ten years entailed a change of residence from a rural to an urban place. In contrast, the 2001 census indicated that 38 per cent of interstate movements in the previous ten years had been from a rural to an urban place.126

(p.252) The basic pattern of interstate migration also remained largely unchanged. Maharashtra was the most important destination. Between 1971 and 2011 Maharashtra was probably a net attractor of people from every other state and, within it, Mumbai was the country’s most cosmopolitan large city. Gujarat too was a major destination. The period also saw ever-increasing migration to the NCT of Delhi and neighbouring areas of Uttar Pradesh and Haryana. West Bengal and Kolkata lost much of their previously strong attracting force. As was the case before 1971, UP and Bihar exported the largest numbers of people.127

Estimates of international migration are little more than impressionistic. However, assessments made by the United Nations suggest that India was a net receiver of people in the 1970s and 1980s, before becoming a net exporter from the 1990s onwards (see Table 9.4).

Although the numbers were always relatively modest, there was net immigration from Nepal throughout the period 1971‒2016. The number of refugees who moved from Sri Lanka to Tamil Nadu between 1983 and 2009 was tiny by comparison. Moreover, some of the refugees eventually returned to Sri Lanka.

Bangladesh was the main source of immigrants to India. The 1971 war that led to the creation of Bangladesh may have caused the temporary movement of several million people—many of them Hindus—into neighbouring West Bengal.128 Most of these people returned home after the war. But the event was something of a precedent for Hindus living in a sometimes unstable Bangladesh.

More importantly, however, the flow of Muslim farming households from densely settled Bangladesh into Assam and its adjoining north-eastern states continued. This led to conflict—which, among other things, prevented the holding of the census in Assam in 1981. By 2001 an estimated one-sixth of Assam’s population were either people who had migrated to Assam from East Pakistan/Bangladesh since 1951 or their descendants.129 The migration was a source of friction between the governments of India and Bangladesh. To address the issue, the government of India built a fence—more than 3,000 kilometres long—along the international border. The fence was patrolled to limit smuggling, but also to curb migration. Because of the illegal nature of the migration (which also involved the border with West Bengal) assessing its scale is difficult. However, the number of Bangladeshis residing in India has variously been put in the range of 12‒20 million.130

The period 1971‒2016 also saw major rises in migration from India to the Middle East, North America, and Europe. Whereas in 1971 there may have been fewer than one million Indian nationals residing overseas, by 2016 the number was perhaps more than ten times greater. Indeed, the ‘diaspora’—not all of whom had ever lived in India—was put at 20‒25 million. By 2016 there may have been 4‒5 million Indian migrants living in the Gulf—notably in (p.253) Saudi Arabia and the United Arab Emirates.131 They were mostly men, working in low-skilled jobs, on short-term contracts, and often experiencing very difficult conditions. These workers came mostly from the south—especially Kerala, Tamil Nadu, and Andhra Pradesh. In 1983 Mrs Gandhi’s government passed an Emigration Act to help defend their interests. But there was always tension between providing protection to the workers—protection which was not always wanted—and enabling them to get relatively well-paid employment which also produced foreign exchange earnings for India.

The United States became the leading destination for better-educated Indians during 1971‒2016. Indeed, towards the end of the period there may have been at least 2‒3 million Indian migrants living in the United States. Canada and Britain each contained sizeable numbers, but in both countries the figure was probably under 1 million. There may have been roughly two hundred thousand Indians living in Australia. And there were smaller numbers residing in the Netherlands, Germany, Italy, and Scandinavia.132 It is likely that the number of people leaving India for these countries—especially the United States—was increasing in the years before 2016. Factors which influenced this migration included the demand for postgraduate university education—which could lead to opportunities for employment and permanent residence—and the demand for skilled workers in fields such as medicine and information technology. India’s re-emergence in the 1990s as a net exporter of people was certainly influenced by the emigration of skilled young people.


Perhaps the most important feature of India’s population history during the period 1971‒2016 was that the country’s fertility decline was fairly slow. No state experienced a rapid decline in fertility. The process occurred much earlier in the south. It was in the north, covering much of the Ganges basin, that the decline was tardy. As a result, there was very substantial population growth. And unfavourable comparisons were drawn between India’s performance in reducing fertility and that of neighbouring countries.133

In the dark days of 1976 Mrs Gandhi had, largely for effect, mentioned the unlikely possibility that the population might grow by 100 per cent by 2004. In fact, it grew by about 76 per cent.134 Despite her experience with the Emergency, Mrs Gandhi would probably have seen this amount of growth as both avoidable to a significant degree and undesirable. By 2016 the country’s population was around 1,327 million and rising at about 15 million per year.135 The population growth had implications for many aspects of development—such as providing people with access to safe water. It affected (p.254) progress in fields such as education, employment, food production, and the environment. These issues cannot be addressed here.136 But a few implications which are often neglected are briefly discussed.

The growth of the population from 548 to about 1,327 million in 45 years had major administrative and political effects. For example, between the 1971 and 2011 censuses the number of districts in the country increased from 357 to 640. Population growth was a key factor behind this development. All the same, there were still many districts in 2011 with more than 3 million people. Indeed, there were eight districts in West Bengal with populations exceeding 5 million.137

Just as ‘scale issues’ produced challenges for the administration of districts, so similar issues arose for states. Population growth contributed to the creation of new states. Examples include the carving out, in 2000, of Jharkhand from Bihar, Chhattisgarh from Madhya Pradesh, and Uttaranchal (later Uttarakhand) from Uttar Pradesh. In addition, in 2014 Telangana was formed from Andhra Pradesh. The former states of Bihar, MP, UP, and AP were all very populous. Therefore, partly due to their size, they faced particular issues of cohesion and administration. Indeed, one argument for the division of states was that it helped to strengthen the relationship between people and their governments. Moreover, population growth made it more likely that cultural and linguistic minorities would reach a size that was sufficient to justify—or help to bring about—the creation of a new state.

Another implication of the growth of the population relates to the ‘freezing’, in 1976, of state-level representation in the Lok Sabha. Recall that the number of parliamentary seats for the states was fixed with reference to their population size as gauged by the 1971 census. The freeze was seen as temporary. The 2001 census results would provide the basis for a fresh allocation of seats—founded, naturally, on the principle that political representation should be in proportion to population. In fact, however, the issue proved extremely difficult, and in 2003 the 91st Constitutional Amendment extended the ‘freeze’ to 2026. Therefore, from the 1970s onwards, representation in Parliament was based on population data that were increasingly out-of-date. Hence, the ‘value’ of a vote in the large northern states fell relative to the ‘value’ of a vote in the south. The freeze was extended partly ‘to encourage accelerated demographic transition in the large Hindi-speaking states’.138 Much the same comment might have been made in 1976. However, the extension helped to preserve national integrity—ending the freeze would have produced resentment in much of the south.

Different demographic trajectories—and related changes in age structure—also had important economic implications. In their seminal study of 1958 Coale and Hoover had used the example of India to argue that fertility decline, the consequent fall in the ratio of children to working-age adults, and the resulting slowing of population growth, would help to promote higher levels (p.255) of savings and investment, and therefore contribute to higher living standards. This was sometimes seen as being a neo-Malthusian thesis. The argument was neglected until the 1990s—when it was revived by scholars trying to explain the rapid economic progress that was then underway in places like China and South Korea.139 The idea that fertility decline, and consequent changes in age structure and population growth, might exert a strong positive effect on economic growth became subsumed under the term ‘demographic dividend’. However, it took some time for the argument to be reassessed in relation to the country for which it was originally developed.

That said, research suggests that the areas of India that experienced earlier fertility decline did indeed benefit economically as a result. Thus, towards the end of the period 1971‒2016, there was a strong positive relationship across states between the ratio of working-age adults to children, and the level of per capita income. In short, those states—predominantly in the south—which reduced their fertility earlier, and therefore had higher worker/child ratios, had markedly higher living standards.140 In addition, a state-level study of demographic and economic changes during 1971‒2001 found that the independent effects on the economy of both greater age-structural change and slower population growth were distinctly positive. The study concluded that Bihar, Madhya Pradesh, Rajasthan, and Uttar Pradesh had not yet reduced their fertility levels enough to really gain from a ‘demographic dividend’.141 Further, the later fertility declines in these major northern states might mean that they eventually experienced weaker ‘dividends’ and ended up containing even larger numbers of poor people. Broadly analogous economic implications may also derive from the slower fertility decline of the country’s Muslim population.

Issues of family planning and fertility decline ranked low on the political agenda at the end of the period 1971‒2016. A common view was that quite a few states had already achieved below-replacement fertility and that the economy was growing fairly rapidly anyway. Moreover, the birth rates of the major northern states were declining. It was widely believed too, almost on a laissez-faire basis, that the large northern states would inevitably experience a demographic dividend—eventually. The fact that making the most of the potential opportunity required intervention in areas such as family planning and education was often overlooked.

The circumstances underlying fertility and mortality remained unsettling in 2016. There was progress. But informed observers noted the ‘considerable disconnect’ between the promises of successive governments and conditions on the ground.142 Circumstances were bleakest in the northern heartland. There remained considerable ‘unmet need’ for contraception, and very strong son preference—something which, like sex-selective abortion, was perhaps still spreading. The country’s contraceptive services remained highly skewed (p.256) towards female sterilization;143 nor were circumstances much better with respect to health. For example, while slowly getting taller—men more than women—Indians remained among the shortest and most undernourished people in the world.144 So, while fertility and mortality were falling, progress with respect to underlying conditions did not always seem to be commensurate. On the fact that fertility levels and population growth were much higher in the north, and the serious political and socio-economic ramifications of this fact, the eminent demographer, K. Srinivasan, remarked that ‘the apathy of the [political] leadership to this fundamental problem is appalling’.145

Nevertheless, in 2016 average life expectancy was about 68 years, and total fertility was about 2.4 births per woman. The population was still fairly young—the median age being about 27 years—but it was on a trajectory that would steadily make it older.146 As regards mortality and fertility, most southern states were 2‒3 decades ahead of Bihar and UP. And urbanization was also significantly higher in the south. According to the official statistics, one-third of the country’s people resided in urban areas in 2016. Adopting a more conventional classificatory approach, however, would put the figure closer to 40 per cent.147 According to the United Nations, in 2016 Delhi and Mumbai were respectively the second and fourth largest urban agglomerations in the world.148 It was certain that India would become the world’s most populous country—overtaking China within about 10 years. Indeed, the UN’s central projection was that India’s population would exceed 1.7 billion by 2050.149


(1.) Roy (2012: 237–42).

(2.) These and subsequent growth rates are based on per capita income estimates expressed in constant currency units. For 1947–71 the source is Sivasubramonian (2000) and for 1971–2011 it is World Bank (2014).

(3.) World Bank (2014) puts the poverty headcount ratio in 2011 at 25 per cent—which would imply approximately 300 million poor people. Also, Cassen and McNay (2005: 181) state that ‘[w]hile the percentage of the poor has fallen, the numbers in poverty have risen with the increase in population: around 300 million today, compared with some 200 million in the early 1950s’. See also Ahluwalia (2011: 89). That said, the problems of making meaningful comparisons should be stressed.

(4.) The international demographic household surveys referred to involved collecting detailed ‘birth histories’ from women, from which it was possible to estimate levels and trends of fertility and child mortality. Such surveys were pioneered by the World Fertility Survey during 1973–84 and, from 1984 onwards, by the Demographic and Health Survey. The demographic surveys conducted in India before the NFHS rarely collected birth histories.

(*) Subjects addressed by NFHS surveys included fertility, mortality, morbidity, fertility preferences, contraceptive use, knowledge of family planning, child immunization status, nutritional status, prevalence of anaemia, knowledge of HIV/AIDS, antenatal problems and care, and quality of health care provision.

(5.) But see Kulkarni (2014) and Visaria and Visaria (2003).

(p.257) (6.) The estimates of the United Nations (2015a) for the whole period 1971–2011 suggest that the slowdown in life expectation may have occurred in the 1990s.

(7.) See, for example, Easterlin (2012); Preston (1975).

(8.) Rowlatt (2015).

(9.) See also the estimates of the United Nations (2015a) for the period 1971–2011.

(10.) See Visaria (2005a: 41–2). The greater rise in female life expectation may also have reflected the inherent mortality advantage that females seem to have, relative to males. Research suggests that as the level of mortality improves in a population so females tend to experience greater life expectation gains. See Dyson (2012: 447–51).

(11.) That a deterioration in census coverage produces a rise in the sex ratio (m/f) of the enumerated population was discussed in Chapter 8 apropos 1941 and 1971. Something similar may have happened in 1991; see Dyson (1994: 3237–8). The low ratio for 2011 in Table 9.1 probably reflects special efforts made in the census to improve the enumeration of females, see Navaneetham and Dharmalingam (2011: 15).

(12.) See Kulkarni (2014: 43).

(13.) See Jha et al. (2011: 1925).

(14.) Kulkarni (2007: 16) says there were approximately 10 million sex-selective abortions during the entire period 1981–2006. See also Santhya and Jejeebhoy (2014: 25–6); Stillman et al. (2014: 12–18).

(15.) Das Gupta and Bhat (1998); Guilmoto (2009); Jha et al. (2011).

(16.) See Das Gupta and Bhat (1998: 90); Goodkind (1996).

(17.) For the estimates, see Registrar General, India (1999, 2013).

(18.) Garbero and Pamuk (2014: 300).

(19.) See Visaria (2005a: 42–53).

(20.) See World Bank (2014). The word ‘sanitation’ here refers specifically to toilet facilities.

(21.) See Radwan (2005: 15).

(22.) See Radwan (2005: 12); Ramasubban (2008: 102). See also Jeffery and Jeffery (2006: 113, 131).

(23.) IIPS (2017: 3).

(24.) See Registrar General, India (1999: 230) and Smucker et al. (1980: 321).

(25.) Jain et al. (1985: 362).

(26.) Around 85 per cent of expectant mothers were being vaccinated. See Government of India (2011).

(27.) IIPS and Macro International (2007a: xxxvi).

(28.) See IIPS (1995); IIPS and Macro International (2007a, 2007b); IIPS and ORC Macro (2000).

(29.) Santhya and Jejeebhoy (2014: 19).

(30.) The Acts may also have made people more cautious about reporting young women as ‘married’ in the census.

(31.) See Kulkarni (2014: 57–8) for these singulate mean ages at marriage (SMAMs).

(32.) See Srinivasan and James (2015: 43).

(33.) See Allendorf and Pandian (2016).

(p.258) (34.) Nortman (1978: 300).

(35.) See Kulkarni (2014: 58–60) and Government of India (2011).

(36.) See Haub and Sharma (2006: 15); Khan and Hazra (2012: 12); Visaria (2005b: 65).

(37.) See James (2011: 577); Visaria (2005b: 65).

(38.) Registrar General, India (2013).

(39.) Guilmoto and Irudaya Rajan (2001a, 2001b).

(40.) The new states of Chhattisgarh (formed from MP) and Uttarakhand (formed from UP) were not areas of particularly high fertility in 2001. See Guilmoto and Irudaya Rajan (2002: 671).

(41.) Guilmoto and Irudaya Rajan (2013: 59).

(42.) Also, the fact that fertility was initially slightly higher in the northern states meant that they had slightly younger age distributions—with greater growth potential.

(43.) These statements are supported by census and SRS data. For example, the vital rates in Table 8.6 suggest that in the 1960s there was faster growth in the north than in the south, and those in Table 9.2 imply the same for after 2011.

(44.) Census of India (2011b).

(45.) Jeffery and Jeffery (2006: 123–32).

(46.) Saxena and Hussein (2016: 418–19).

(47.) Saxena and Hussein (2016: 417). See also Bhat and Francis Zavier (2005: 399).

(48.) The estimates of median age strictly refer to 1970 and 2010 and are taken from United Nations (2015a).

(49.) A reduction of 6.8 per cent is suggested by United Nations (2015a).

(50.) See, for example, Bhat (2002c) and McNay et al. (2003).

(51.) Guilmoto and Irudaya Rajan (2013: 69); James (2011: 576).

(52.) Metcalf and Metcalf (2011: 253–4).

(53.) They included Sholapur and Satara which, as noted in Chapter 8, are particularly drought-prone.

(54.) Dyson and Maharatna (1992: 1327–9).

(55.) Drèze (1988: 101).

(56.) See, for example, Oughton (1982).

(57.) Dyson and Maharatna (1992: 1329–30).

(58.) Dyson and Maharatna (1992: 1328).

(59.) Recall from Chapter 8 that the SRS was only launched in the late 1960s and it was exceptionally deficient in some states (e.g. Bihar). Nevertheless, according to one set of unadjusted figures (see Bhat et al. 1984: 70) the national death rate in 1972 (a more difficult year than 1973 in many areas) was 1.3 points higher than the average figure for 1970 and 1971.

(60.) This paragraph draws on Cassen (1978: 171–4).

(61.) However, in the very different circumstances which applied by October 1975 Singh had changed his views, see Pai Panandiker and Umashankar (1994: 89). Indeed, he later remarked that ‘contraception is the best development’, see Bloom (2011a: 565).

(62.) The Fifth Plan ran from 1974–75 to 1977–78 and was succeeded by Annual Plans in 1978–79 and 1979–80.

(63.) For this quote, see Visaria and Visaria (1981: 38).

(64.) Maharatna (2002: 973).

(p.259) (65.) For the quote, see Sezhiyan (2010: 76).

(66.) Only Maharashtra passed legislation to enable compulsory sterilization (of couples with three or more children), but the legislation was not enacted because it failed to receive the necessary Presidential assent.

(67.) This paragraph draws on Visaria and Visaria (1981: 38–9) and Maharatna (2002: 972–3).

(68.) See Gwatkin (1979: 37–46); Visaria and Visaria (1981: 38–9).

(69.) See Srikantan and Balasubramanian (1989: 82); Visaria and Visaria (1981: 38–9).

(70.) See Caldwell (1993: 311). For the SRS estimates, see Registrar General, India (1999: 3).

(71.) Gwatkin (1979: 29).

(72.) These numbers appear on page 167 of the ‘Third and Final Report’. For this, and the Commission’s two ‘Interim’ Reports, see Sezhiyan (2010).

(73.) Gwatkin (1979: 29).

(74.) These comments draw from Gwatkin (1979: 52) and Pai Panandiker and Umashankar (1994: 92–8).

(75.) Ironically, the growth rate indicated for 1971–81 was probably inflated slightly due to relatively poor census coverage in 1971 related to Mrs Gandhi’s calling of an election (see Chapter 8).

(76.) See New Delhi Declaration of Parliamentarians, First National Conference of the Indian Association of Parliamentarians for Population and Development, New Delhi, 25 May 1981, pamphlet issued by the Mass Mailing Unit, Department of Family Welfare, Government of India, 1981.

(77.) Maharatna (2002: 973); Visaria and Visaria (1981: 42).

(78.) Srinivasan (2001: 22).

(79.) For example, in the Sixth Plan a Planning Commission working group set as a goal the achievement of a birth rate of 21 per 1,000 in all states by the year 2000. See Srinivasan (2001: 22).

(80.) The involvement began in the late 1970s. The other agencies were the British Overseas Development Administration, the Danish International Development Agency, and the Swedish International Development Authority.

(81.) See IIPS (1995: 19); Maharatna (2002: 973–4).

(82.) Registrar General, India (1999: 12).

(83.) Jain et al. (1985: 362).

(84.) Jain et al. (1985: 361).

(85.) Ministry of Health and Family Welfare (2014: 3).

(86.) UNICEF (1992: 76, 1993: 72).

(87.) IIPS (1995: 19–20).

(88.) See IIPS (1995: 238); Visaria and Visaria (1981: 32).

(89.) Visaria (2005a: 46).

(90.) Nag (1996: 5–8).

(91.) Keay (2010: 583–4); Metcalf and Metcalf (2011: 261–2).

(92.) On so-called ‘Saffron Demography’ see Jeffery and Jeffery (2006). For an illustration of the phenomenon, see Joshi et al. (2003).

(93.) Srinivasan (2001: 24).

(94.) Haub and Sharma (2006: 14); Srinivasan (2001: 24–5).

(p.260) (95.) IIPS and ORC Macro (2000: 315); Srinivasan (2001: 24–6).

(96.) Srinivasan (2001: 25–7).

(97.) See Dyson (2001b: 352–3).

(98.) Santhya and Jejeebhoy (2014: 11).

(99.) See Government of India (2000: 2–3); Santhya and Jejeebhoy (2014: 6–7).

(100.) Ramasubban (1992: 175–7).

(101.) Government of India (2013).

(102.) UNAIDS (2005: 33).

(103.) IIPS and Macro International (2007a: xiv).

(104.) Government of India (2013).

(105.) Aspects of DOTS reflected measures introduced in urban areas in the 1960s and 1970s. See also TB Facts (2015); Visaria (2005a: 46).

(106.) This was perhaps the most important example of the worrying proliferation of drug-resistant infections in India because of the uncontrolled use of antibiotics. Such developments did not bode well for the future.

(107.) TB Facts (2015); WHO (2015).

(108.) Visaria (2005a: 44–6).

(109.) See, for example, Dhingra et al. (2010); Shah et al. (2011).

(110.) See WHO (2015). As a broad category cancers would also feature.

(111.) Haub and Sharma (2006: 14).

(112.) Srinivasan (2001: 26).

(113.) See Deaton (2008: 470–1); IIPS (2017: 3–6). These heights, in metres, correspond to 61.4 and 59.8 inches. Because they reflect different geographical coverage and relate to different age groups, these figures cannot be compared directly with those for the nineteenth-century indentured labour recruits given in Table 6.2. Nonetheless, it is notable that the figure for women in 2000–05 is similar to the figures in Table 6.2. The figure for men is lower than the figures in Table 6.2. This may suggest that male labourers were recruited partly on the basis of characteristics associated with height (although other explanations are possible).

(114.) Srinivasan (2001: 25–7).

(115.) Navaneetham and Dharmalingam (2011: 16).

(116.) Haub and Sharma (2006: 9–11).

(117.) Registrar General, India (2013).

(118.) See also Kulkarni (2014: 63).

(119.) In this context, urbanization in sub-Saharan Africa in 2010 was estimated at 35.4 per cent. See United Nations (2015b: 205).

(*) To be classed as a ‘census town’ a place had to (a) contain at least 5,000 people; (b) have at least 75 per cent of the male working population engaged in non-agricultural employment; and (c) have a density of at least 400 people per square kilometre. Given these criteria, which themselves are fairly stringent, the number of census towns increased from 1,350 to 3,894 between 2001 and 2011 while the number of statutory towns rose from only 3,811 to 4,041. See Chandramouli (2013).

(120.) These figures include a small number of cities.

(121.) Kundu and Gupta (2000).

(122.) For a pioneering study, see Weiner (1978).

(123.) Data on migration from the 2011 census were not available at the time of writing.

(124.) In this context, see Bell et al. (2015: 44–5).

(125.) Kulkarni (2014: 61–2).

(126.) See Dyson and Visaria (2005: 110–11) and Kulkarni (2014: 62).

(127.) See Chapman and Pathak (2000); Dyson and Visaria (2005: 112–14); Skeldon (1984: 5–7).

(p.261) (128.) Naujoks (2009).

(129.) Saikia et al. (2016: 36).

(130.) Naujoks (2009).

(131.) The numbers mentioned are taken from Naujoks (2009).

(132.) Naujoks (2009).

(133.) For example, Guilmoto and Irudaya Rajan (2013: 69) state that ‘[n]eighbouring countries with higher fertility rates, such as Pakistan, Nepal and even Afghanistan, have reported during the last 10 years a fertility decline faster than India’s. Fertility rates in Bangladesh, Sri Lanka, Bhutan, and the Maldives are already below India’s’.

(134.) Populations for 1976 and 2004 of about 612 million and 1,080 million are implied by the figures in Table 9.1.

(135.) These figures draw on estimates in United Nations (2015a).

(136.) However, see for example the contributions in Dyson et al. (2005).

(137.) Both North and South 24 Parganas districts contained more than 8 million people, making them similar in population size to Tunisia and Sweden.

(138.) Srinivasan (2001: 27).

(139.) See Bloom and Williamson (1998); Coale and Hoover (1958).

(140.) See Bloom (2011b).

(141.) James (2008).

(142.) Santhya and Jejeebhoy (2014: 23–4).

(143.) IIPS and Macro International (2007a: 90–113); Santhya and Jejeebhoy (2014: 23–7).

(144.) Deaton (2008: 471).

(145.) For this quote, see Srinivasan (2001: 27).

(146.) The figures cited are based on estimates in United Nations (2015a).

(147.) Dyson (2011: 47–50).

(148.) United Nations (2015b: 92–3).

(149.) The precise figure projected by the United Nations (2015a) for 2050 was 1,705 million. The figure was subsequently revised downwards slightly. For a lower, perhaps more realistic, projection of 1,579 million in 2051, see Dyson (2005: 101–5).