Data Monopoly

Data monopoly in different businesses [1]

A data monopoly arises when a single company or entity controls a significant portion of a specific type of data or data market. This can happen in a number of ways, such as through acquisitions of smaller companies, exclusive partnerships or agreements, or simply by having a large and dominant market share. In a data monopoly, the company or entity in control significantly influences market prices, terms of service, and data access. This can reduce competition and innovation and raise privacy concerns. Additionally, a data monopoly can make it difficult for other companies to enter the market, which can stifle economic growth.[2] Big Tech companies have used their unchecked access to private personal information to create in-depth profiles about nearly all Americans and to protect their market position against competition from startups. Data monopolies can be a concern in many areas, including data-driven industries such as advertising, finance, healthcare, and transportation. The antitrust laws of many countries are in place to prevent such monopolies.

History

The history of the concept of a data monopoly can be traced back to the early days of the internet, when a few large tech companies, such as Google and Facebook, emerged as the dominant players in the digital landscape.[3] These companies built massive databases of user data through their online services, which they used to personalize advertisements and other content for their users. Over time, these companies became some of the largest and most valuable companies in the world, in part due to their control over vast amounts of data.

In recent years, the concept of a data monopoly has become a topic of concern for many people, as these companies have been accused of using their control over data to stifle competition, manipulate the market, and undermine privacy and civil liberties. In response, there has been a growing movement calling for more regulation of these companies and for the creation of alternative data ecosystems that are more transparent, accountable, and accessible.

The Monopoly of Attention

Mechanics of the attention marketplace

We pay for goods and services in the fiscal marketplace with legitimate currencies, while in the attention marketplace, we pay with our attention. The key performance indicators (KPIs) of attention-based companies often include the following core metrics: monthly active users (MAUs), time on site, and a handful of engagement metrics unique to the platform, such as likes, shares, views, and comments. These metrics are used to estimate how much attention a platform receives. For example, more MAUs (people using the site) mean more potential eyeballs to engage. Time on site tells companies how long those MAUs are paying attention to their website or app, although a long session can also simply mean someone left a tab open. More MAUs spending more time on site compounds into more total attention paid. From there, other engagement metrics (clicks, swipes, shares, comments, and so on) can be used to identify the most common activities, which of them drive further engagement, and why.
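To make these metrics concrete, here is a minimal sketch of how KPIs such as MAUs, time on site, and engagement counts might be computed from a raw event log. The schema, field names, sample events, and 30-day window are illustrative assumptions for this sketch, not any platform's actual pipeline.

```python
from datetime import datetime, timedelta

# Illustrative event log: each record is (user_id, timestamp, action).
# The schema and actions are assumptions, not a real platform's data model.
events = [
    ("u1", datetime(2023, 1, 3, 9, 0), "view"),
    ("u1", datetime(2023, 1, 3, 9, 12), "like"),
    ("u2", datetime(2023, 1, 5, 20, 0), "view"),
    ("u2", datetime(2023, 1, 5, 20, 45), "comment"),
    ("u3", datetime(2022, 12, 1, 8, 0), "view"),  # falls outside the 30-day window below
]

def monthly_active_users(events, as_of):
    """Count distinct users with at least one event in the 30 days before `as_of`."""
    window_start = as_of - timedelta(days=30)
    return len({user for user, ts, _ in events if window_start <= ts <= as_of})

def time_on_site(events, user):
    """Rough time-on-site proxy: span between a user's first and last event."""
    stamps = sorted(ts for u, ts, _ in events if u == user)
    return stamps[-1] - stamps[0] if len(stamps) > 1 else timedelta(0)

def engagement_counts(events):
    """Tally engagement actions (views, likes, comments, shares, ...)."""
    counts = {}
    for _, _, action in events:
        counts[action] = counts.get(action, 0) + 1
    return counts

as_of = datetime(2023, 1, 10)
print(monthly_active_users(events, as_of))  # 2 (u3 is outside the window)
print(time_on_site(events, "u2"))           # 0:45:00
print(engagement_counts(events))            # {'view': 3, 'like': 1, 'comment': 1}
```

Real platforms run these aggregations over event streams at far larger scale, but the underlying calculations are the same kind of distinct-user counts, session spans, and action tallies.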

Nature of the attention economy

Although referred to as the "attention economy," companies are not primarily seeking attention but rather data.[4] This is why they are defensive about being labeled as addictive services. While they may not intend to harm society or create addictions, when users pay attention, their engagement on the platform generates valuable data. Data is now considered one of the most valuable assets in the world due to its non-rivalrous nature: it can be replicated at essentially no cost with little to no loss in quality and used simultaneously by any number of people, a departure from traditional rivalrous goods. Rivalrous goods, such as books, cars, or oil, can only be used by one person or entity at a time. Photocopying a book, for example, takes time and money, and the copy may deteriorate; data has no such limitation. Data can be utilized in various ways, such as conducting research, developing products, training AI and ML systems, and through data brokering operations or targeted advertising, all resulting in financial gain. Additionally, one set of data can be used by multiple companies, creating a network effect where all parties become increasingly reliant on the original source.

Network effect

In short, network effects are mechanisms in a product or business where every new user makes the product, service, or experience more valuable for every other user. As a network becomes larger and denser, its value to the user grows exponentially.[5] Network effects are often cited as a key factor in the concentration of markets in the technology industry: they give companies with a large user base an advantage that makes it difficult for new companies to enter the market and compete. Companies in dominant positions, such as Google and Amazon, can attract many buyers, sellers, viewers, and content providers, creating a self-reinforcing cycle. As more people use these platforms, new buyers, sellers, viewers, and content creators are more likely to keep using the dominant platforms rather than trying new or small-scale competitors. This further solidifies the positions of these big companies and makes it harder for new entrants to break into the market.
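A common back-of-the-envelope way to quantify this self-reinforcing dynamic is Metcalfe's law, which values a network by its number of possible pairwise connections. The sketch below is purely illustrative; the choice of Metcalfe's law and the per-connection value are modeling assumptions, not figures from the cited source. It shows why doubling a user base roughly quadruples the estimated network value, and why an incumbent's lead compounds.

```python
def metcalfe_value(n_users, value_per_connection=1.0):
    """Metcalfe's-law-style estimate: network value scales with the number
    of possible pairwise connections, n * (n - 1) / 2."""
    return value_per_connection * n_users * (n_users - 1) / 2

# Doubling the user base roughly quadruples the estimated value, which is
# why a large incumbent's lead is hard for new entrants to close.
for n in (1_000, 2_000, 4_000):
    print(f"{n:>5} users -> estimated value {metcalfe_value(n):>12,.0f}")
```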

Data Misuse and Data Breaches

The Cambridge Analytica scandal [6]

The Cambridge Analytica scandal was a political and data privacy scandal that took place in early 2018. It involved the unauthorized harvesting of millions of Facebook users' personal data by the political consulting firm Cambridge Analytica. The data was allegedly used to influence voter opinion and decision-making during the 2016 US Presidential election and the Brexit referendum in the UK.

Cambridge Analytica was hired by the campaign of US President Donald Trump and by Leave.EU[7], a pro-Brexit group, to target voters with personalized political advertisements. The firm obtained the data of tens of millions of Facebook users through an app that paid users to take a personality test and requested access to their Facebook profiles and their friends' profiles. This violated Facebook's rules, and the company claimed it was unaware that the data had been misused. However, Christopher Wylie, a whistleblower who had worked at Cambridge Analytica, stated that Facebook was fully aware of the situation and had ignored warnings about it. The scandal raised questions about the responsibility of social media companies to protect their users' data and about the ethical use of such data in political campaigns.

The data harvested by Cambridge Analytica was later reported to have been accessed by Russian agents and used to spread disinformation during the 2016 US Presidential election.[7] This added another layer to the scandal, raising concerns about the role of foreign interference in democratic processes. In the aftermath of the scandal, Facebook faced numerous legal challenges and fines, including a $5 billion settlement with the Federal Trade Commission (FTC) in July 2019. The company also faced criticism for its role in spreading false information and amplifying divisive political messages. As a result of the scandal, there have been calls for increased regulation of technology companies, particularly with regard to their handling of personal data. The European Union's General Data Protection Regulation (GDPR), which strengthens data protection rules for EU citizens, took effect in 2018, and several other countries, including the UK and the US, have considered similar regulations.[7]

Overall, the Cambridge Analytica scandal serves as a cautionary tale about the potential misuse of personal data and the need for increased transparency and accountability in the technology industry. It has sparked a global conversation about the responsible use of data in politics and the role of technology companies in protecting the privacy of their users.

Google Street View program [8]

Google Street View is a feature of Google Maps that allows users to see panoramic images of streets and landmarks around the world. However, the program also became embroiled in a data privacy controversy.

In 2010, it was discovered that Google's Street View vehicles had been collecting data from unsecured Wi-Fi networks while they were driving around taking photographs for the Street View feature.[8] This data included personal information such as emails, passwords, and other sensitive information. Google initially claimed that the data collection was a mistake and that the information was not used. However, it was later revealed that the data was actually collected on purpose and that some of the information was used for internal research projects. The incident sparked widespread concern about the privacy implications of the Street View program and led to multiple investigations by privacy regulators around the world. Google faced significant criticism for data misuse and was eventually fined by several countries for its actions.

The Google Street View program incident serves as a reminder of the importance of privacy and the need for companies to be transparent about their data practices. It highlights the need for robust privacy protections to ensure that personal information is not misused by companies for their own purposes. The incident has also helped to raise awareness about the privacy risks associated with new technologies and has led to increased efforts to address these risks.

Overall, the Google Street View incident is an example of the challenges and controversies that can arise in the collection and use of data, and it underscores the need for companies to be mindful of the privacy implications of their actions.

Ethical Concerns

Lack of Informed Consent

One of the biggest ethical concerns is that individuals are often not fully informed about the extent to which their data is being collected or how it will be used. This can lead to a violation of their right to privacy and autonomy. Many tech companies collect vast amounts of data from individuals without providing them with a clear understanding of what data is being collected or how it will be used.[9] For example, companies often have lengthy and complex privacy policies that are difficult for individuals to understand, and the consent process can be opaque and misleading.

Discrimination

There is a risk that the data collected by tech companies could be used to discriminate against certain groups of people or reinforce existing biases and discrimination. For example, companies may use algorithms that rely on data to make decisions about employment, housing, or credit, and these algorithms may be biased against certain groups of people. This can result in unequal treatment and harm individuals who are already marginalized.
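One concrete way to surface this kind of bias is to compare outcome rates across groups. The sketch below computes approval rates by group and a disparate-impact ratio against the common "four-fifths" rule of thumb. The data, group labels, and threshold are hypothetical assumptions for illustration, not drawn from the cited sources.

```python
# Hypothetical loan-approval decisions, one (group, approved) pair per applicant.
# The groups, outcomes, and the 0.8 ("four-fifths") threshold are illustrative
# assumptions, not data from the cited sources.
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", False), ("group_b", True), ("group_b", False), ("group_b", False),
]

def approval_rates(decisions):
    """Approval rate per group."""
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + int(ok)
    return {g: approved[g] / totals[g] for g in totals}

rates = approval_rates(decisions)
ratio = min(rates.values()) / max(rates.values())

print(rates)                                   # {'group_a': 0.75, 'group_b': 0.25}
print(f"disparate impact ratio: {ratio:.2f}")  # 0.33, well below the 0.8 rule of thumb
```

A low ratio like this does not by itself prove discrimination, but it flags a disparity that warrants auditing the data and the decision process behind it.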

Privacy and Responsibility

It is the tech companies' responsibility to protect the privacy and security of the personal data they collect from individuals. This responsibility extends to ensuring that the data is used in a manner that is consistent with the expectations of individuals and in compliance with relevant privacy and data protection laws. When companies fail to adequately protect the personal data they collect, the consequences can be serious.

Mitigations

Antitrust Law

Antitrust laws are regulations that encourage competition by limiting the market power of any particular firm. This often involves ensuring that mergers and acquisitions don’t overly concentrate market power or form monopolies, as well as breaking up firms that have become monopolies.[10]

For example, antitrust laws can be used to break up companies that have become too dominant in a market and are abusing that dominance, for instance by acquiring potential competitors, setting prices in an anti-competitive manner, or using their market power to discriminate against smaller competitors.

Additionally, antitrust laws can also be used to regulate the collection and use of personal data by big tech companies. For example, antitrust authorities can require companies to provide consumers with more control over their personal data and to be more transparent about their data collection and use practices.

Furthermore, antitrust laws can also be used to regulate the acquisition of smaller companies by big tech firms to prevent the consolidation of power in the hands of a few companies. This can help to prevent the misuse of data by ensuring that there is a competitive and dynamic marketplace in which companies are incentivized to act in the best interests of consumers.

In conclusion, while antitrust law alone may not completely solve the problem of data misuse by big tech companies, it can play an important role in promoting competition, protecting consumers, and mitigating the risks associated with the collection and use of personal data by these companies.

References

  1. Amanda Lotz (2018). ‘Big Tech’ isn’t one big monopoly – it’s 5 companies all in different businesses. Retrieved from: https://theconversation.com/big-tech-isnt-one-big-monopoly-its-5-companies-all-in-different-businesses-92791
  2. Richard Blumenthal (2021). Letter to FTC chair Lina Khan. https://www.blumenthal.senate.gov/imo/media/doc/2021.09.20%20-%20FTC%20-%20Privacy%20Rulemaking.pdf
  3. We Need to Talk About Data: How Digital Monopolies Arise and Why They Have Power and Influence. https://scholarship.law.ufl.edu/cgi/viewcontent.cgi?article=1188&context=jtlp
  4. Michael H. Goldhaber (1997). The Attention Economy and the Net. First Monday. https://firstmonday.org/article/view/519/440
  5. James Currier. Network Effects: The Hidden Force Behind 70% Of Value In Tech. https://www.forbes.com/sites/forbesbusinesscouncil/2022/10/11/network-effects-the-hidden-force-behind-70-of-value-in-tech/?sh=f6fbd2630edf
  6. Alvin Chang (2018). The Facebook and Cambridge Analytica scandal, explained with a simple diagram. https://www.vox.com/policy-and-politics/2018/3/23/17151916/facebook-cambridge-analytica-trump-diagram
  7. Matthew Rosenberg (2018). Cambridge Analytica and Facebook: The Scandal and the Fallout So Far. https://www.nytimes.com/2018/04/04/us/politics/cambridge-analytica-scandal-fallout.html
  8. Jemima Kiss (2010). Google admits collecting Wi-Fi data through Street View cars. https://www.theguardian.com/technology/2010/may/15/google-admits-storing-private-data
  9. Turilli, Matteo, and Luciano Floridi (2009). "The Ethics of Information Transparency." Ethics and Information Technology. 11(2): 105-112. doi:10.1007/s10676-009-9187-9.
  10. Antitrust Laws: What They Are, How They Work, Major Examples. https://www.investopedia.com/terms/a/antitrust.asp#:~:text=Antitrust%20laws%20regulate%20the%20concentration,foster%20innovation%20through%20increased%20competition.