Are you a hipster? Probably not, at least if you accept the definition of the word that Norman Mailer made famous in the 1950s. To be a hipster, Mailer wrote, is “to live with death as immediate danger, to divorce oneself from society, to exist without roots, to set out on that uncharted journey into the rebellious imperatives of the self.”
No doubt there are a few people like that still hanging around our cities, some of them perhaps even contemporaries of Mailer. But that isn’t what we mean today when we say a neighborhood is full of hipsters. Nowadays, one can qualify as a hipster merely by being under 40 years old, living single or childless in the center of a city, drinking pour-over coffee and craft beer, and listening to the music of obscure homegrown indie rock bands.
There is a growing body of criticism and even scholarship devoted to the “morphing hipster” question. But why would we take a once-powerful word and dilute it to the point of absurdity? In part, it’s because our cities are changing at a speed few of us can easily comprehend, and we are searching for ways to describe what’s going on. Recycling an old word in watered-down form is one small attempt to do that.
But the word at the heart of the uncertainty isn’t “hipster.” It’s “gentrification,” an idea that some of us applaud and some abhor, but one whose relevance to 21st-century urban life is impossible to deny. It has all happened rather fast.
A couple of decades ago, gentrification was a concept that applied to a select group of large American cities aspiring to global status: Chicago, New York, San Francisco, Seattle. We might do well to call this the first phase of gentrification. It began with the transformation of working-class and light-industrial neighborhoods -- Wicker Park and Bucktown in Chicago; SoHo, Tribeca and later Williamsburg in New York -- into residential enclaves favored by the avant-garde young. The street crime that would have scared them off a few years earlier had begun to recede. At first, the colonizers were largely gays, artists and even hipsters by Mailer’s definition.
In the beginning, the housing in these early gentrification sites was more or less affordable to groups of young people with at least one steady income. It was possible for restless teenagers in small Midwestern towns to dream of making it in New York in dance or theater or haute cuisine and then to find a place in Lower Manhattan where they could give it a try.
As the years went by, the gentrifying population of these neighborhoods was less prominently gay or artistic and more frequently just singles and couples in their 20s and 30s who felt drawn to the amenities that city life provided. There was a great deal of turnover among the settlers. But affordable apartments were still available to newcomers, if sometimes on the slightly more dangerous fringes of the original gentrified neighborhood.
This first wave of gentrification did not displace many people. Most of the areas that became part of that wave were either blue-collar territory that the previous residents had largely abandoned or old factory districts where few people had been living at all. In some cases, long-vacant downtown office buildings were converted to residences with the help of subsidies from city government. There was scarcely any gentrification of neighborhoods housing the poor.
That’s what gentrification looked like in the 1990s, in the few American cities that had begun to experience it in a significant way. It is not what gentrification looks like in those cities anymore. They have passed through the first phase of the process and embarked on a new version that is quite a bit different and in some ways disturbing.
In this second phase of gentrification, demand for living space in the reclaimed neighborhoods far outstrips the supply of condos and apartments available. Wicker Park, Tribeca, the South End in Boston, downtown Seattle and virtually all of San Francisco become unaffordable even to a couple with two decent incomes. They emerge as the province of the extremely rich. Luxury high-rise towers sprout up wherever there is room for them. Much of this property becomes the domain of speculators, many of them from foreign countries and most of them absent for a large part of the year.
The soaring property values in the center of America’s hottest cities create a whole series of other consequences. With a large enough percentage of absentee owners and super-rich tenants, the community cohesion that nearly always marks first-phase gentrification begins to break down. Commercial rents rise so much that local shopkeepers can’t afford them. The quirky coffee shops and mom-and-pop retail storefronts that attracted the original gentrifiers to the neighborhood begin to disappear, replaced by banks, chain drug stores and high-end boutiques.
As the second phase of gentrification takes hold, several other important things begin to happen. With the demand for housing in the original gentrified neighborhoods exceeding the supply, middle-class singles and couples, priced out of the original renewal area, decide to settle in neighborhoods farther from the center that had never been considered candidates for gentrification: African-American neighborhoods like Fort Greene and Bedford-Stuyvesant in Brooklyn; Hispanic enclaves like Logan Square and Humboldt Park in Chicago.
As this happens, tensions between the existing residents of color and the mostly white new arrivals inevitably heat up. (It was in Fort Greene in 2014 that the filmmaker Spike Lee accused white newcomers of being part of a “Christopher Columbus syndrome” -- although his exact words were more colorful.) The demographics of the central city as a whole begin to tilt in a white direction. It took only a few years for Washington, D.C., and Atlanta, both two-thirds black at one time, to lose their African-American majorities.
Sudden and important changes begin to take place in the suburbs during the second phase as well. Poorer suburban communities take in minorities who either are forced to leave or wish to leave the gentrifying neighborhoods closer in. More affluent suburbs, most of them car-dependent communities built in the 1970s and 1980s, look for ways to attract the gentrification overflow. They convert failing shopping malls into outdoor lifestyle centers that attempt to recreate an urban shopping experience. They reconfigure some of their main streets, rush to create bike lanes and tout themselves, wherever possible, as pedestrian-friendly.
This is a very rough picture of what has happened in the past 20 years in the cities and metro areas generally rated as the most successful in the country. The details will be familiar to anyone who has lived in one of these cities during those years. Not all of it is pretty. Even the most ardent supporter of gentrification will find it difficult to defend the invasion of the super-rich and the decline of small-scale commerce that has occurred in Lower Manhattan. Or to dismiss the reality that, as the process moves farther out beyond the city centers, some displacement of the poor does take place.
Still, there are realities on the other side of the ledger as well. Crime-ridden and physically deteriorating neighborhoods like Bedford-Stuyvesant and Humboldt Park become safer and more attractive, and they acquire the grocery stores and other shops that have been missing for decades. Much of the time, public schools improve. The pros and cons of the gentrification process can be debated ad infinitum, but it’s impossible to deny that both positives and negatives exist.
But there is one positive effect that is sometimes underestimated. At the very moment that the first phase is coming to a close in such places as Chicago and New York, it is just taking root in second-tier cities: Cincinnati, Indianapolis and Milwaukee, among others. In these cities, even more than in the first group, the changes are focused downtown. The downtown streets no longer empty out at the end of the workday. First to arrive are restaurants, more inviting than anything the city has had before, and nightlife that has not existed there for a long time. The millennials who patronize these businesses express a desire to live downtown. It is easy to build new residential towers because there is plenty of property there that has been unused or underused for decades. Displacement isn’t really an issue, because hardly anyone lived in the vicinity.
The interesting question for these cities is whether they will move on to the second phase, with all of its accompanying negatives. My guess is probably not. Indianapolis, Milwaukee and their counterparts aren’t global cities; they are not likely to attract the foreigners and speculators who have plagued New York and San Francisco. The demand for central city living may exceed the supply for a while, but there is enough undeveloped land in most of these city centers to meet the demand and prevent prices from becoming exorbitant in the near future. In the end, these second-tier cities may be better able to sustain the urbanist vision of community than the ones where the process began decades ago.
Many seemingly contradictory things are going on in American cities at the same time. They are enough to confuse anyone, even a serious student of urban life. Who exactly is living in these reclaimed neighborhoods? Why are they there? How many of these people will want to stay for life? You can make a reasonable case for almost any scenario. Or you can just call them hipsters and leave it at that.