
Life in a Quantified Society

Julia Angwin

What is the quantified society?

The quantified society describes the widespread collection of information—or big data—about individuals, groups, and even whole societies, and the use of that information by public and private actors to make inferences and decisions about many aspects of our lives. The use of this data can have real-life consequences, affecting people’s access to credit, housing, jobs, and more.

What is big data?

Big data is information about individuals and groups that has been extracted from larger data sets using a variety of analytical techniques. These data sets are collected by public and private actors from sources that play a big part in our lives today: social media, credit scores, online search histories, phone apps, consumer loyalty clubs, online purchase histories, and even the products you own, such as a car that tracks your driving habits or a mattress that records your body temperature and sleep movements. Together, these connected personal items make up what is known as the “internet of things.”

Who is collecting this data, and how?

Data is an extremely valuable asset, so much so that a “data broker” industry—defined by the U.S. Federal Trade Commission as “companies that collect consumers’ personal information and resell or share that information with others”—is growing globally.

Though early data brokerage predates the internet, the enormous increase in the volume, velocity, and variety of digital data, along with advances in “data mining”—extracting useful information from the data—means that data brokerage today is big business. Data brokers are not the only ones collecting data; more and more industry sectors, such as health care and retail, are now driven by data and data collection.

What are algorithms and how do they affect people’s lives?

Big data analysis uses sophisticated algorithms to search, aggregate, and cross-reference individual data sets, revealing different aspects of our lives and societies.

Algorithms process large volumes of data to make split-second decisions about who we are, what we want, what we do, and what we might do in the future. Because algorithms are generally proprietary, their workings are usually secret. This type of automated decision making was introduced into financial systems in the 1990s through high-frequency trading and is now used in ways that affect the lives of more and more people.
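
To make this concrete, here is a minimal sketch, in Python, of a hypothetical scoring rule of the kind such systems might apply. The inputs, weights, and cutoff are invented purely for illustration and do not correspond to any real lender’s system; the point is that the decision is produced instantly and mechanically from recorded data points.

```python
# A hypothetical, deliberately simplified automated decision rule.
# All inputs, weights, and the cutoff are invented for illustration only.

def loan_decision(income, late_payments, years_at_address):
    """Return an approve/deny decision from a simple weighted score."""
    score = (
        0.5 * min(income / 10_000, 10)   # cap the income contribution
        - 2.0 * late_payments            # penalize past late payments
        + 0.3 * years_at_address         # reward residential stability
    )
    return ("approve" if score >= 3.0 else "deny"), round(score, 2)

# A split-second decision made entirely from recorded data points.
print(loan_decision(income=42_000, late_payments=1, years_at_address=4))
```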

The use of algorithmic decision making varies around the world. Algorithms can be used to determine your credit score, whether you get access to a loan or other financial services, how many police patrol your neighborhood, what news you read through Google and Facebook, whether you will be stopped for a security check at an airport, whether you will be granted bail, whether you get hired for a particular job, which job listings you see on an employment website, what results appear when you use a search engine, which products companies will market to you, and the price you pay for some goods and services online. Political campaigns use algorithms, too—for example, to influence which articles appear when you enter a candidate’s name into a search engine.

The quantified society sounds efficient. Why is it a problem?

Despite the appearance of impartial objectivity, the quantified society allows for bias, inaccuracies, surveillance, and prejudice.

For one thing, the quantified society is based on data, but data can be wrong. It can be out of date, inaccurate, relate only to a small sample of the population, or lack vital context. Big data is also based on correlations between behaviors and activities, but it cannot authoritatively provide causal links to explain why people or groups think or behave in certain ways. This distinction is often lost when findings from big data are presented as “truth.”

Algorithms, too, can contain bias. A White House report cited research on big data that showed how web searches involving black-identifying names (e.g., Jermaine) were more likely to display ads with the word “arrest” in them than searches with white-identifying names (e.g., Geoffrey).

People are often unaware of how their data is used; they may agree to its use for one purpose without knowing the other purposes it serves. New forms of discrimination are possible, too. For example, some companies are now looking to offer credit scores based on people’s social networks and browsing histories.

All of these things make the quantified society problematic. As personal data becomes more available and algorithms more sophisticated, decisions that affect us can be made based not on us as individuals, but on what type of person we are presumed to be. In other words, if an algorithm determines that you fit into a large data set of people who are in some way similar to you, it might, based on that data set, simply reject your job application instead of considering you as an individual applicant.
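
The sketch below, with entirely invented data, illustrates that distinction between judging the group and judging the individual: the applicant is screened using statistics about “people like them” rather than anything in their own application.

```python
# Hypothetical illustration: a decision based on the presumed group, not the person.
# The profile keys and historical rates below are invented for illustration only.

HISTORICAL_REJECTION_RATE = {
    # (postcode area, age band, employment gaps) -> share of similar past
    # applicants who were rejected
    ("north", "18-25", True): 0.72,
    ("north", "18-25", False): 0.35,
    ("south", "26-40", False): 0.12,
}

def screen_applicant(postcode_area, age_band, has_employment_gaps):
    """Reject if people 'like' the applicant were usually rejected before."""
    group = (postcode_area, age_band, has_employment_gaps)
    rate = HISTORICAL_REJECTION_RATE.get(group, 0.5)  # unknown groups get a default
    # Note: nothing about this individual's skills or record is examined.
    return "reject" if rate > 0.5 else "advance to interview"

print(screen_applicant("north", "18-25", True))  # rejected on group statistics alone
```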

These kinds of decisions, driven by big data rather than individual consideration, are likely to become more common as ever more objects are connected to the internet. From phones to cars to household appliances, the number of devices that are able to capture information about our behaviors—information that is then stored and could be shared with others—is rapidly rising.

More pressing still, there is a serious power imbalance between the data holders and the data subjects, between the profiler and the person being profiled. We are being profiled repeatedly by countless organizations, which then share and sell data to others, often without our knowledge or consent.

How is algorithmic decision-making affecting real people’s lives today?

Real-life examples of the quantified society are proliferating and entering public debate, mostly in the United States.

Algorithms are now commonly used in the U.S. justice system. They produce scores, known as risk assessments, that estimate the likelihood that someone will reoffend; these scores inform decisions about who can be set free at every stage of the criminal justice system, from assigning bond amounts to even more fundamental decisions about defendants’ freedom. In Arizona, Colorado, Delaware, Kentucky, Louisiana, Oklahoma, Virginia, Washington, and Wisconsin, the results of such assessments are given to judges during criminal sentencing.
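
By way of illustration only, the sketch below shows a hypothetical point-based score built from questionnaire-style items. It is not the formula of any real risk-assessment instrument, whose methods are typically proprietary; it simply shows how a handful of recorded attributes can be converted into a score and a coarse risk band.

```python
# Purely hypothetical point-based risk score, loosely modeled on the general
# idea of questionnaire-driven risk assessments; items and weights are invented.

RISK_ITEMS = {
    "prior_arrests": 2,      # points per prior arrest (capped below)
    "age_under_25": 3,       # points if the defendant is under 25
    "unstable_housing": 2,   # points if housing is unstable
}

def risk_score(prior_arrests, age_under_25, unstable_housing):
    """Map questionnaire answers to a point total and a coarse risk band."""
    points = min(prior_arrests, 3) * RISK_ITEMS["prior_arrests"]
    points += RISK_ITEMS["age_under_25"] if age_under_25 else 0
    points += RISK_ITEMS["unstable_housing"] if unstable_housing else 0
    band = "high" if points >= 7 else "medium" if points >= 4 else "low"
    return points, band

# A judge might see only the band, not how proxies like age drove the score.
print(risk_score(prior_arrests=2, age_under_25=True, unstable_housing=False))
```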

In 2014, then U.S. Attorney General Eric Holder warned that the risk scores might be injecting bias into the courts. He called for the U.S. Sentencing Commission to study their use. “Although these measures were crafted with the best of intentions, I am concerned that they inadvertently undermine our efforts to ensure individualized and equal justice,” he said, adding, “They may exacerbate unwarranted and unjust disparities that are already far too common in our criminal justice system and in our society.”

Algorithms are also an increasingly important factor in determining the news people see, as more people rely on the internet and social networks for their media consumption. According to Reuters research across 26 countries, social media now “outstrips TV” as a news source for young people; similarly, Pew research in 2016 found that 35 percent of 18-to-29-year-olds said social media was the “most helpful” source of information about the U.S. presidential campaign. A small change to a secret algorithm can dramatically change what appears in a person’s news feed, potentially altering that person’s perspective on the world.

Why is this an open society issue?

An open society is built around open access to information. We expect to understand how decisions are made about us, and that someone will be accountable for those decisions and provide redress if something goes wrong.

When proprietary algorithms are the decision makers, this accountability and opportunity for redress is much harder to achieve. For one thing, you don’t know on what basis the decision was made or by whom.

There is also growing evidence that algorithms are often discriminatory themselves, due to bad data or selection bias. This creates the danger of perpetuating existing societal discrimination, or even creating new forms of it.

What are the Open Society Foundations doing?

In 2015, the Open Society Foundations supported a number of organizations in the Global South and Europe to document different examples of the quantified society.

For example, we supported the initiative Bits of Freedom in the Netherlands to investigate data brokers and profiling in that country, Cardiff University to report on the use of social media for policing domestic extremism and disorder in the UK, and Asociacion por los Derechos Civiles in Argentina to consider state practices and use of big data technology.

We support investigative journalism efforts to probe the broader consequences of bias and discrimination in a quantified society. We fund research on the need for a clear regulatory agenda for data brokers, looking at the broader issues of automated, data-driven decision making by large institutions that shapes people’s lives and impacts their rights.

We have specific work focused on documenting the harms that are taking place and on developing ways to ensure that companies and institutions continue to be held to account as we move towards a world where more decisions are determined by algorithms.
