{{Nav-Bar|Topics#A}}<br>
 
'''Parental controls''' allow parents to control which websites their children can see, determine at what times their children have access to the internet, track their children's internet history, limit accessible content, and block outgoing content, among other capabilities. <ref>J. D. Biersdorfer. [https://www.nytimes.com/2018/03/02/technology/personaltech/setting-up-parental-controls-on-pcs-and-macs.html "Tools to Keep Your Children in Line When They’re Online"] (2018, March 02). ''New York Times''. Retrieved April 01, 2021.</ref> <ref name=FTC>Federal Trade Commission. (2018, March 13). Parental controls. Retrieved April 06, 2021, from https://www.consumer.ftc.gov/articles/0029-parental-controls</ref> The introduction of parental control software has raised ethical concerns over the last decade. <ref name=who>Thierer, Adam. (2019). Parental Controls & Online Child Protection. The Progress & Freedom Foundation, 45-51. https://poseidon01.ssrn.com/delivery.php?ID=396031102089030101016075082091103088058033095009026094104027126098086093111106075097029031099028051096054088127017025008122107111073000085023006114113101117080113006077037031064068080020002099009101067004068104007107105022107078007111120115083101094&EXT=pdf&INDEX=TRUE</ref> Parental controls are most likely to be used for children between the ages of 7 and 16, as parents with “very young children or older teens often have very little need for parental control technologies.” <ref name="who"/> Ethical concerns include loss of trust between parents and children <ref name=techden>Managing Screen Time and Privacy | Could Parental Control Apps Do More Harm than Good? https://techden.com/blog/screen-time-privacy-parental-control-apps/</ref> and a decreased sense of autonomy that reduces opportunities for self-directed learning. <ref name=brain>Controlling Parents – The Signs And Why They Are Harmful https://www.parentingforbrain.com/controlling-parents/</ref> Additionally, various monitoring tools may be used with or without the child's consent. <ref name="FTC"/>
  
==Overview==
[[File:Phone.png|400px|thumbnail|Apple's built-in parental controls. <ref>amaysim. (2020, November 3). How to set up parental controls on your iPhone, iPad or Android device. amaysim. https://www.amaysim.com.au/blog/world-of-mobile/set-up-parental-controls-apple-android</ref>]]
  
Parental control software typically allows parents to customize internet permissions on their child's devices or accounts on shared devices. <ref>The Business Insider. (2020, September 18). The best internet parental control systems. Newstex LLC. https://go-gale-com.proxy.lib.umich.edu/ps/i.do?p=STND&u=umuser&id=GALE%7CA635821966&v=2.1&it=r&sid=summon</ref> The account administrator, typically a parent, can change internet permissions for the entire household, while accounts under the administrator do not have that capability. This allows parents to establish rules for their children without having to physically enforce them.
  
Parental control software has become more prevalent in recent years. For example, basic parental controls now ship with standard operating systems such as Windows. <ref>Microsoft. (n.d.). Parental consent and Microsoft child accounts. Microsoft. https://support.microsoft.com/en-us/account-billing/parental-consent-and-microsoft-child-accounts-c6951746-8ee5-8461-0809-fbd755cd902e</ref>
  
There are three main ways that parental control software functions (a minimal code sketch following this list illustrates all three):
  
* '''Complete disablement''' of the internet allows parents to cut off their child's wifi connection entirely. This can range from disabling wifi during a scheduled interval, such as at bedtime, to turning off internet access indefinitely, such as in instances of punishment.
  
* '''Content blocking''' focuses on filtering the content that children can see; different accounts can have different age-appropriate preferences set. For larger households with several devices, this allows each child to view age-appropriate content as determined by the account administrator.

* '''Monitoring''' gives parents access to their children's complete browsing history at any time. This allows parents to observe how their children navigate the internet without setting hard boundaries.
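Below is a minimal sketch, in Python, of how these three modes could fit together in a single policy check. All names here (ChildAccount, BLOCKED_CATEGORIES, the bedtime window) are invented for illustration and do not correspond to any real parental control product's API.

<syntaxhighlight lang="python">
from datetime import time

# Invented, illustrative names; not any real parental-control product's API.
BLOCKED_CATEGORIES = {"violence", "gambling"}   # set by the account administrator

class ChildAccount:
    def __init__(self, name, bedtime_start=time(21, 0), bedtime_end=time(7, 0)):
        self.name = name
        self.bedtime_start = bedtime_start   # complete disablement: scheduled window
        self.bedtime_end = bedtime_end
        self.internet_enabled = True         # complete disablement: indefinite cutoff
        self.history = []                    # monitoring: log of requested URLs

    def may_visit(self, url, category, now):
        self.history.append(url)             # monitoring records every request
        if not self.internet_enabled:        # indefinite cutoff (e.g., a punishment)
            return False
        # The bedtime window crosses midnight, so check both sides of it.
        if now >= self.bedtime_start or now < self.bedtime_end:
            return False                     # scheduled disablement
        if category in BLOCKED_CATEGORIES:   # content blocking
            return False
        return True

child = ChildAccount("alex")
print(child.may_visit("https://example.com", "news", now=time(15, 0)))      # True
print(child.may_visit("https://example.com", "gambling", now=time(15, 0)))  # False: blocked category
print(child.may_visit("https://example.com", "news", now=time(22, 30)))     # False: bedtime
</syntaxhighlight>

In this toy model, the administrator account would simply hold references to each ChildAccount and edit its settings, mirroring the household structure described above.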
  
 
==History==
 
Parental controls first emerged in the form of content filters. <ref>Clark, N. Content Filter Technology. Retrieved 1 April 2021, from https://www.britannica.com/technology/content-filter</ref> The term content filter is used interchangeably with internet filter: software that assesses and blocks online content containing specific words or images. Although the Internet was created to make information more accessible, open access to all kinds of information was deemed problematic in instances where children or younger age groups could view obscene or offensive material. Content filters restrict what users may view on their computers by screening web pages and e-mail messages for category-specific content. Such filters can be used by individuals, businesses, or even countries to regulate Internet usage. <ref>Web Content Filtering. Retrieved 1 April 2021, from https://www.webroot.com/us/en/resources/glossary/what-is-web-content-filtering</ref>
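As a rough illustration of the screening idea described above, a keyword-based filter can be sketched in a few lines of Python. The word list and matching rule are invented for illustration; real filters maintain far larger per-category lists and also classify images.

<syntaxhighlight lang="python">
import re

# Invented word list for illustration only.
FLAGGED_WORDS = {"casino", "gore", "explicit"}

def is_blocked(page_text, threshold=1):
    """Block a page when it contains at least `threshold` flagged words."""
    words = re.findall(r"[a-z']+", page_text.lower())
    hits = sum(1 for word in words if word in FLAGGED_WORDS)
    return hits >= threshold

print(is_blocked("Welcome to our online casino!"))    # True: flagged word present
print(is_blocked("A history of family board games"))  # False: nothing flagged
</syntaxhighlight>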
  
===Uses for Parental Control Software===

====Protection from Harmful Content====
The vastness of the internet places a heavy burden on parents trying to protect their children from harmful content. Because children might not have the skills to navigate online environments successfully and safely, parental controls can be a helpful tool to guide them. While the internet is an integral part of children's schooling, it also makes available potentially traumatic content that children would not otherwise see. Parental control software offers parents the ability to control what content their children can access, even when the parents are not physically present to monitor them. There is evidence that parents who are involved in their children’s internet use in some way are more likely to encourage safe internet practices. <ref>Gallego, Francisco, A. (2020, August). Parental monitoring and children's internet use: The role of information, control, and cues. ScienceDirect. https://www-sciencedirect-com.proxy.lib.umich.edu/science/article/pii/S0047272720300724?via%3Dihub</ref>
====Recommendations for Controlling Content for Children====
There are various third-party parental control services, such as Bark, Qustodio, and Net Nanny, that allow parents to keep track of their children's devices. Prices for these services range anywhere from $50 to more than $100 if there are several children to monitor. These costs include 24/7 device monitoring and full visibility into how children are using their devices. <ref>Knorr, Caroline. (2021, March). Parents' Ultimate Guide to Parental Controls. https://www.commonsensemedia.org/blog/parents-ultimate-guide-to-parental-controls#What%20are%20the%20best%20parental%20controls%20for%20setting%20limits%20and%20monitoring%20kids?</ref> However, these parental controls can only monitor accounts that parents know their children are using, and in some instances the services need account passwords in order to monitor activity. <ref>Orchilles, Jorge. (2010, April). Parental Control. https://www.sciencedirect.com/topics/computer-science/parental-control</ref>
====Health Benefits====
Parental controls can also help children live healthier lifestyles. A study from 2016 found that about 59% of parents believed their children to be addicted to their cellular and/or electronic devices. <ref>Teenage Cellphone Addiction https://www.psycom.net/cell-phone-internet-addiction</ref> As children receive smartphones at younger and younger ages, the ability to limit device usage helps parents lower their children's chances of becoming addicted to their phones. Addiction to cellular and other electronic devices has several negative symptoms, ranging from psychological (anxiety and depression) to physical (eye strain and neck strain). <ref>Signs and Symptoms of Cell Phone Addiction https://www.psychguides.com/behavioral-disorders/cell-phone-addiction/signs-and-symptoms/</ref> Less time spent on phones leads to increased physical activity and more in-person social interaction, making for a more well-rounded lifestyle. The American Academy of Pediatrics found that limiting children's screen time improves their physical and mental health and helps them develop academically, creatively, and socially. <ref name=benefits>Miller, M. (2020, February 24). Benefits Of Limiting Screen Time For Children. Retrieved from https://web.magnushealth.com/insights/benefits-of-limiting-screen-time-for-children#:~:text=It%20is%20amazing%20to%20see,online%20and%20age%2Dinappropriate%20videos.</ref>
  
==Parental Control Apps==
  
[[File:Qustodio.jpeg|250px|thumbnail|An example of what information parents could view about their children's device usage with Qustodio. <ref name="spy"/>]]
  
Aside from standard operating systems’ built-in parental controls, parents can also download apps to set up restrictions and monitoring. Life360 is a popular app that gives parents access to their children's location at all times. The app also offers driving reports so parents can see whether their teenager is speeding. Life360 has been controversial, with some calling it the "Big Brother" of apps, and teens have said it has ruined their relationships with their parents. <ref>Lenore Skenazy, Life 360 Should Be Called “Life Sentence 360”</ref> Net Nanny is an app that can block inappropriate content online and “can also record instant messaging programs, monitor Internet activity, and send daily email reports to parents.” <ref>Kanable, Rebecca. (2004, November). Policing Online: From Internet Safety to Employee Management and Parolee Monitoring, Technology Can Help. U.S. Department of Justice. https://www.ojp.gov/ncjrs/virtual-library/abstracts/policing-online-internet-safety-employee-management-and-parolee</ref>
  
One app called Qustodio provides parents with activity summaries for each of their children. <ref name=spy>Teodosieva, Radina. (2015, October 16). Spy me, please! The University of Amsterdam. http://mastersofmedia.hum.uva.nl/blog/2015/10/16/spy-me-please-the-self-spying-app-that-you-need/</ref> These summaries include total screen time, a breakdown showing how much time was spent on different apps, a list of words that the child searched on the internet, and a tab alerting parents to possibly questionable activity. <ref name="spy"/>
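An activity summary of this kind can be approximated by aggregating a per-app usage log, as in the short Python sketch below. The event data and layout are hypothetical; Qustodio's actual report format is not documented here.

<syntaxhighlight lang="python">
from collections import defaultdict

# Hypothetical one-day usage log of (app, minutes) events; invented data.
events = [("YouTube", 42), ("Messages", 15), ("YouTube", 23), ("Safari", 30)]

def daily_summary(events):
    per_app = defaultdict(int)
    for app, minutes in events:
        per_app[app] += minutes
    total = sum(per_app.values())
    # Sort the breakdown so the most-used app is reported first.
    breakdown = sorted(per_app.items(), key=lambda item: item[1], reverse=True)
    return total, breakdown

total, breakdown = daily_summary(events)
print(f"Total screen time: {total} min")
for app, minutes in breakdown:
    print(f"  {app}: {minutes} min")
</syntaxhighlight>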
  
Another, more invasive, option is a keylogger service. A keylogger tracks all keystrokes made on a device and provides a file to the parent, usually via email, of everything typed on the child's device. These services can include access to contact lists and internet search histories. <ref name=key>Reporter, S. (2019, November 29). Parenting benefits of keylogger program. Retrieved April 08, 2021, from https://www.sciencetimes.com/articles/24367/20191129/parenting-benefits-of-keylogger-program.htm</ref>
  
About “16% of parents report using parental control apps to monitor and restrict their teens’ mobile online activities,” and some parents are more likely than others to download these apps. <ref name=safety>Ghosh, Arup K., et al. (2018, April 26). A Matter of Control or Safety? Examining Parental Use of Technical Monitoring Apps on Teens’ Mobile Devices. Association for Computing Machinery Digital Library. Association for Computing Machinery. https://dl.acm.org/doi/pdf/10.1145/3173574.3173768</ref> Two factors that correspond to higher rates of parental control app usage are parents who are “low autonomy granting” and children who have been “victimized online” or have had “peer problems.” <ref name="safety"/>
==Bypassing Parental Control==
[[File:Vpn.png|400px|thumbnail|How a VPN works as a middleman between the client and the host server. <ref>Namecheap. [https://www.namecheap.com/vpn/how-does-vpn-virtual-private-network-work/ "How does VPN work?"] Retrieved April 01, 2021.</ref>]]

While parental controls like Life360 are quite comprehensive, tech-savvy teens have found ways to bypass some of them. Most software has loopholes that users can exploit to evade the rules set for them. For example, on the iPhone, parents can set screen-time limits for certain apps, such as limiting iMessage to two hours per day. <ref>Lance Whitney. [https://www.pcmag.com/how-to/how-to-use-apples-screen-time-on-iphone-or-ipad#:~:text=Set%20App%20Limits,entire%20category%20or%20specific%20apps. "How to Use Apple's Screen Time on iPhone or iPad"] (January 3, 2020). ''PC Mag''. Retrieved April 01, 2021.</ref> However, kids have figured out that many apps include built-in share functionality through which messages can be sent, a loophole that works even when the iMessage app is locked. <ref>Jellies App. [https://jelliesapp.com/blog/kids-bypassing-screen-time "Are Your Kids or Teens Unlocking Apple Screen Time Limits?"] (January 3, 2020). Retrieved April 01, 2021.</ref> If done carefully, parents may remain unaware that their kids have effectively removed their time limits. Currently, parents cannot fully close this loophole themselves; a complete fix would need to come from Apple's developers.
  
Additionally, teens can download a VPN (virtual private network) to bypass restrictions and browse the internet freely. A VPN creates an encrypted connection between the user's computer and the server they are trying to reach. <ref>Mark Smirniotis. [https://www.nytimes.com/wirecutter/guides/what-is-a-vpn/ "What Is a VPN and What Can (and Can’t) It Do?"] (March 3, 2021). ''New York Times''. Retrieved April 01, 2021.</ref> In essence, even though a website is blocked by parental controls, the VPN can relay the request through another computer that is not subject to those controls and return the content, creating a middleman between the teenager and the blocked content. There is no complete way to stop someone from accessing a website once they have a VPN: the main use of VPNs is to protect people’s privacy, so they are built to be extremely secure. <ref>David Pierce. [https://www.wsj.com/articles/why-you-need-a-vpnand-how-to-choose-the-right-one-1537294244 "Why You Need a VPN—and How to Choose the Right One"] (September 18, 2018). ''Wall Street Journal''. Retrieved April 01, 2021.</ref> The only sure way to prevent VPN use is to stop a VPN from being downloaded in the first place, for example by monitoring the device's app store. Additionally, most good VPNs charge a monthly fee, so there is a financial barrier to entry. <ref>Yael Grauer. [https://www.nytimes.com/wirecutter/reviews/best-vpn-service/ "The Best VPN Service"] (February 25, 2021). ''New York Times''. Retrieved April 01, 2021.</ref>
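The reason domain-level blocking fails under a VPN can be shown with a toy model in Python: a home router can only act on the destination it can see, and with a VPN every connection appears to go to the VPN server. All hostnames below are invented placeholders.

<syntaxhighlight lang="python">
# Toy model: a router-level filter only sees each connection's outer
# destination. With a VPN, that destination is always the VPN server,
# so the real site never appears in the blocklist check.
BLOCKLIST = {"blockedsite.example"}
VPN_SERVER = "vpn.example"

def visible_destination(requested_host, using_vpn):
    # The router sees the VPN tunnel endpoint, not the real destination.
    return VPN_SERVER if using_vpn else requested_host

def filter_allows(requested_host, using_vpn):
    return visible_destination(requested_host, using_vpn) not in BLOCKLIST

print(filter_allows("blockedsite.example", using_vpn=False))  # False: blocked
print(filter_allows("blockedsite.example", using_vpn=True))   # True: filter bypassed
</syntaxhighlight>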
  
A recent TikTok trend revealed how teens escape punishment for their true whereabouts or activities by using creative methods to avoid setting off any triggers. One such tactic is leaving the phone in a friend's mailbox, so its reported location matches where they said they would be, and then traveling to the actual destination without it. Teens have also figured out various cellular and wifi settings that restrict which of their activities are visible to their parents through the app. <ref name=tiktok>Meisenzahl, M. (2019, November 08). Teens are finding sneaky and clever ways to outsmart their parents' location-tracking apps, and it's turning into a meme on TikTok. Retrieved April 08, 2021, from https://www.businessinsider.com/life360-location-tracker-teens-tiktok-memes-tips-2019-11#as-more-teens-and-their-parents-use-life360-the-community-on-tiktok-has-made-a-meme-out-of-it-videos-are-set-to-the-song-fly-by-still-lonely-which-has-the-lyrics-life-360-in-it-2</ref>
  
==Adoption==
  
About 32% of U.S. households have children, but that does not mean all of them utilize parental controls. <ref name="who"/> Parental controls are most likely used between the ages of 7 and 16, but parents with “very young children or older teens often have very little need for parental control technologies.” <ref name="who"/> Other factors that influence whether or not a family chooses to use parental controls include aversions to these technologies, beliefs that these technologies are ineffective, and “alternative methods of controlling media content and access in the home,” such as “household media rules.” <ref name="who"/> Sometimes, parents might elect not to deal with parental controls simply because they lack the energy. <ref name="who"/> However, parents who do choose to monitor their children’s technology use can do so in a variety of ways.
 
 
==Ethical Issues==
 
In academia, there is debate about whether parental control software leads to healthy outcomes. Some argue that greater parental involvement in children's device usage fosters better internet safety practices. Others contend that parental control software enables parental behaviors that negatively affect family dynamics and internet safety practices. However, there is a consensus that the obstacles parents face when trying to protect their children from harmful content are largely shaped by how much information is easily accessible on the internet.
  
===Trust===
While some parents can harm their children by failing to teach them safe internet practices, research has shown that excessive control can also cause harm. Because it provides parents with greater control over their children’s internet access, parental control software can enable parents who already struggle with being overcontrolling in their relationships with their children. This can break trust within families and leave children without firsthand experience practicing safe internet habits. A study from the University of Central Florida found that two-thirds of teens' relationships with their parents soured after the installation of a parental control application. <ref name="techden"/> It may be that some parents replace meaningful conversations about safe internet practices with parental control software.
  
Teens who bypass parental controls also shift the balance of trust between parents and children. <ref name="tiktok"/> Many children believe they are outsmarting their parents, and parents, trusting the app to do its job, become less alert to their children's safety. Because of some of these behaviors, such as traveling without their phones, teens may be unable to contact their parents or emergency services if something goes wrong. The result is an increase in the very safety risks that parents intended to reduce.
===Independence===
As children enter adulthood, some have trouble adjusting to having autonomy over their internet practices due to heavy supervision in the home. <ref>Cetinkaya, L. (2019). The Relationship between Perceived Parental Control and Internet Addiction: A Cross-sectional study among Adolescents. Contemporary Educational Technology, 10(1), 55–74. https://doi.org/10.30935/cet.512531</ref> An important ability for children to develop is learning from their mistakes and solving problems on their own. One study suggested that some children whose parents used parental controls were less likely to approach their parents about problems they had run into, both on- and offline, leaving them less able to solve problems through collaboration with others. <ref name="techden"/> Additionally, extensive studies have examined the effects that overbearing parents can have on children: children with controlling parents demonstrate lower self-esteem, act out more, and have lower academic performance. <ref name="brain"/> As parental controls can lead to controlling parenting, they need to be treated with great care.
===Psychological Effects on Moderators===
+
Content moderation can have significant negative effects on the individuals tasked with carrying out the motivation. Because most content must be reviewed by a human, professional content moderators spend hours every day reviewing disturbing images and videos, including pornography (sometimes involving children or animals) gore, executions, animal abuse, and hate speech. Viewing such content repeatedly, day after day can be stressful and traumatic, with moderators sometimes developing PTSD-like symptoms. Others, after continuous exposure to fringe ideas and conspiracy theories, develop intense paranoia and anxiety, and begin to accept those fringe ideas as true.<ref name="Secret History"></ref><ref name="Beheadings">Chen, Adrian (October 23, 2014). [https://www.wired.com/2014/10/content-moderation/ "The Laborers Who Keep Dick Pics and Beheadings Out of Your Facebook Feed"] ''Wired''. Retrieved March 26, 2019.</ref><ref name="Trauma">Newton, Casey (February 25, 2019). [https://www.theverge.com/2019/2/25/18229714/cognizant-facebook-content-moderator-interviews-trauma-working-conditions-arizona "The Trauma Floor: The Secret Lives of Facebook Moderators in America"] ''The Verge''. Retrieved March 26, 2019.</ref>
+
 
+
Further negative effects are brought on by the stress of applying the subjective and inconsistent rules regarding content moderation. Moderators are often called upon to make judgment calls regarding ambiguously-objectionable material or content that is offensive but breaks no rules. However, the performance of their moderation decisions is strictly monitored and measured against the subjective judgment calls of other moderators. A few mistakes are all it takes for a professional moderator to lose their job.<ref name="Trauma"></ref>
+
 
+
A report detailing the lives of [[Facebook]] content moderators explained the poor conditions these workers are subject to <ref name = "facebook">Simon, Scott, and Emma Bowman. “Propaganda, Hate Speech, Violence: The Working Lives Of Facebook's Content Moderators.” NPR, NPR, 2 Mar. 2019, www.npr.org/2019/03/02/699663284/the-working-lives-of-facebooks-content-moderators.</ref>. Even though content moderators have an emotionally intense, stressful job they are often underpaid. In addition, Facebook does provide forms of counseling to their employees, however, many are dissatisfied with the service <ref name = "facebook"/>. The employees review a significant amount of traumatizing information daily, but it is their responsibility to seek therapy if needed, which is difficult for many. They are also required to constantly oversee content and are only allotted two 15 minute breaks and a half an hour lunch break. In the cases where they review particularly horrifying content, they are only given a nine minute break to recover <ref name = "facebook"/>. [[Facebook]] is often criticized for the ethical treatment of their content moderator employees.
+
 
+
===Information Transparency===
+
 
+
Information transparency is the degree to which information about a system is visible to its users.<ref>Turilli, Matteo; Floridi, Luciano (March 10, 2009). [https://link.springer.com/article/10.1007/s10676-009-9187-9 "The ethics of information transparency"] ''Ethics and Information Technology''. '''11''' (2): 105-112. doi:[https://doi.org/10.1007/s10676-009-9187-9
+
10.1007/s10676-009-9187-9
+
]. Retrieved March 26, 2019.</ref> By this definition, content moderation is not transparent at any level. First, content moderation is often not transparent to the public, those it is trying to moderate. While a platform may have public rules regarding acceptable content, these are often vague and subjective, allowing the platform to enforce them as broadly or as narrowly as it chooses. Furthermore, such public documents are often supplemented by internal documents accessible only to the moderators themselves.<ref name="Secret History"></ref>
+
 
+
Content moderation is not transparent at the level of moderators either. The internal documents are often as vague as the public ones and contain significantly more internal inconsistencies and judgment calls that make them difficult to apply fairly. Furthermore, such internal documents are often contradicted by statements from higher-ups, which in turn may be contradicted by similar statements.<ref name="Trauma"></ref>
+
 
+
Finally, even at the corporate level where policy is set, moderation is not transparent. Moderation policies are often created by an ad-hoc, case-by-case process and applied in the same manner. Some content that would normally be removed by moderation rules will be accepted for special circumstances, such as "newsworthiness". For example, videos of violent government suppression could be displayed or not, depending on the whims of moderation policy-makers and moderation QAs at the time.<ref name="Secret History"></ref>
+
 
+
===Bias===
+
 
+
Due to its inherently subjective nature, content moderation can suffer from various kinds of bias. Algorithmic bias is possible when automated tools are used to remove content. For example, YouTube's automated Content ID tools may flag reviews of films or games that feature clips or gameplay as copyright violations, despite being Fair Use when used to criticize<ref name="Injustice"></ref>. When a youtube content is flagged they lose out on any ad revenue from that video during the time their content is flagged. Even if a content creator is able to fight the claim and has their video unflagged by Youtube they don't receive any of the revenue from their video while it was flagged. The algorithm bias thus serious financial effects for creators and especially for small channel who can't afford to fight the copyright claim <ref name = "Romano"> Romano, Aja. “YouTube's ‘Ad-Friendly’ Content Policy May Push One of Its Biggest Stars off the Website.” Vox, Vox, 2 Sept. 2016, www.vox.com/2016/9/2/12746450/youtube-monetization-phil-defranco-leaving-site.</ref>. Moderation may also suffer from cultural bias, when something considered objectionable by one group may be considered fine to another. For example, moderators tasked with removing content that depicts minors engaging in violence may disagree over what constitutes a minor. Classification of obscenity is also culturally biased, with different societies around the world having different standards of modesty.<ref name="Secret History"></ref><ref name="Gatekeepers"></ref> Moderation, both from the perspective of humans and automated systems, may be inherently flawed in that the subjective nature that comes along with deciding what is right versus what is wrong can be difficult to lay out in concrete terms. While there is no uniform solution to issues of bias in content moderation, some have suggested that approaching these issues with a utilitarian approach may serve as guiding ethical standard. <ref>Mandal, Jharna, et al. “Utilitarian and Deontological Ethics in Medicine.” Tropical Parasitology, Medknow Publications & Media Pvt Ltd, 2016, www.ncbi.nlm.nih.gov/pmc/articles/PMC4778182/.</ref>
+
 
+
===Free Speech and Censorship===
+
 
+
Content moderation often finds itself in conflict with the principles of free speech, especially when the content it moderates is of a political, social or controversial nature.<ref name="Gatekeepers"></ref>. One the one hand, internet platforms are private entities with full control over what they can allow their users to post. On the other hand, large, dominant social media platforms like Facebook and Twitter have significant influence over the public discourse and act as effective monopolies on audience engagement. The ethical dilemma comes in when discussing who has the right to control what the public has to say and what gives them this right. In this sense, centralized platforms act as a modern day ''agoras'', where [[Utilitarian_Philosophy#John_Stuart_Mill|John Stuart Mill's]] "marketplace of ideas" allows good ideas to be separated from the bad without top-down influence.<ref name="Garbage"></ref> When corporations are allowed to decide with impunity what is or isn't allowed to be discussed in such a space, they circumvent this process and stifle free speech on the primary channel individuals use to express themselves.<ref name="Impossible Choices"></ref>
+
 
+
===Anonymous Social Medias===
+
[[File:mic.png|600px|thumbnail|right|Microsoft is one of many companies who have grown to include auto-moderating bots as part of their service offerings.<ref name=microsoft></ref>]]
+
Social media sites created with the intention of keeping users anonymous so that they may post freely is an ethical concern. [https://en.wikipedia.org/wiki/Spring.me Formspring], which is now defunct, was a platform that allowed anonymous users to ask selected individuals their questions. [https://en.wikipedia.org/wiki/Ask.fm Ask.fm]] which is a similar site, has outlived its rival, Formspring. However, a handful of content submitted by anonymous users on these sites are hateful comments that contribute to cyberbullying. There have been two suicides linked to cyberbullying on Formspring.<ref>James, Susan Donaldson. “Jamey Rodemeyer Suicide: Police Consider Criminal Bullying Charges.” ABC News, ABC News Network, 22 Sept. 2011, abcnews.go.com/.</ref><ref>“Teenager in Rail Suicide Was Sent Abusive Message on Social Networking Site.” The Telegraph, Telegraph Media Group, 22 July 2011, www.telegraph.co.uk/.</ref>. In 2013, when Formspring shut down, Ask.fm began a more active approach at content moderation.
+
 
+
Other, similar anonymous apps include [[Yik Yak|Yik Yak]], [https://en.wikipedia.org/wiki/Secret_(app) Secret] (now defunct), and [https://en.wikipedia.org/wiki/Whisper_(app) Whisper]. Learning from their predecessors and competition, YikYak and Whisper have also taken a more active approach at Content Moderation and have not just employed people to moderate content, but also algorithms <ref>Deamicis, Carmel. “Meet the Anonymous App Police Fighting Bullies and Porn on Whisper, Yik Yak, and Potentially Secret.” Gigaom – Your Industry Partner in Emerging Technology Research, Gigaom, 8 Aug. 2014, gigaom.com/. </ref>.
+
 
+
=== Bots ===
+
Although a lot of content moderation cannot be dealt with using computer algorithms and must be outsourced to "virtual sweatshops", a lot of content is still moderated through the use of computer bots. The use of these computer bots naturally comes with many ethical concerns <ref>Bengani, Priyanjana. “Controlling the Conversation: The Ethics of Social Platforms and Content Moderation.” Columbia University in the City of New York, Apr. 2018, www.columbia.edu/content/.</ref>. The largest concern lies among academics, an increasing portion of whom are worried that auto-moderation cannot be effectively implemented on a global scale<ref>Newton, Casey. "Facebook’s content moderation efforts face increasing skepticism." The Verge. 24 August 2018. https://www.theverge.com/2018/8/24/17775788/facebook-content-moderation-motherboard-critics-skepticism</ref> UCLA Professor Assistant Professor Sarah Roberts said in an interview with ''Motherboard'' regarding Facebook's attempt at global auto-moderation, "it’s actually almost ridiculous when you think about it... What they’re trying to do is to resolve human nature fundamentally."<ref name=motherboard>Koebler, Jason. "The Impossible Job: Inside Facebook’s Struggle to Moderate Two Billion People." Motherboard. 23 August 2018. https://motherboard.vice.com/en_us/article/xwk9zd/how-facebook-content-moderation-works</ref> The article's objective of making clear that auto-moderation isn't feasible includes a report that Facebook CEO Mark Zuckerberg and COO Sheryl Sandberg often have to weigh in on content moderation themselves, a testament to how situational and subjective the job is.<ref name=motherboard></ref>
+
 
+
Tech companies have begun offering automated content moderation packages for purchase by other companies; Microsoft's Azure cloud service is one example.<ref name=microsoft>"Content Moderator." Microsoft Azure. https://azure.microsoft.com/en-us/services/cognitive-services/content-moderator/</ref> The Azure content moderator advertises image moderation; text moderation in over 100 languages that monitors for profanity and contextualized offensive language; video moderation, including recognition of "racy" content; and a human review tool for situations where the automated moderator is unsure of what to do.<ref name=microsoft></ref>
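From a client's perspective, using such a hosted moderation service amounts to posting text to a REST endpoint and inspecting the verdict. The sketch below is a hedged illustration only: the endpoint URL, header names, and response shape are placeholders, not Azure's actual documented API.

  import requests

  # Placeholder endpoint and key; a real service defines its own URL,
  # authentication header, and response schema.
  ENDPOINT = "https://moderation.example.com/v1/screen-text"
  API_KEY = "YOUR-API-KEY"

  def screen_text(text):
      """Send text to a hosted moderation service and return its verdict."""
      response = requests.post(
          ENDPOINT,
          headers={"Content-Type": "text/plain", "Api-Key": API_KEY},
          data=text.encode("utf-8"),
      )
      response.raise_for_status()
      return response.json()  # e.g. flagged terms, detected language

  print(screen_text("an example user comment"))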
+
 
+
==See Also==
+
{{resource|
+
*[[Virtual sweatshops]]
+
*[[Bias in Information]]
+
*[[Censorship]]
+
*[[Facebook newsfeed curation]]
+
*[[Censorship in China]]
+
*[[Punishments in Virtual Environments]]
+
*[[Reddit]]
+
*[[Wikipedia]]
+
*[[Yelp Reviewing]]
+
}}
+
  
 
==References==
  
 
<references/>
 
[[Category:2019New]]
 
[[Category:Concepts]]
 
[[Category:Censorship]]
 
[[Category:Media Content]]
 
[[Category:BlueStar2019]]
 

Latest revision as of 00:50, 9 April 2021


Parental controls allow parents to control which websites their children can see, determine at what times their children have access to the internet, track their children's internet history, limit accessible content, block outgoing content, and more. [1] [2] The introduction of parental control software has raised ethical concerns over the last decade. [3] Parental controls are most likely to be used for children between the ages of 7 and 16; parents with “very young children or older teens often have very little need for parental control technologies.” [3] Ethical concerns include loss of trust between parents and children [4] and a decreased sense of autonomy that leads to reduced opportunities for self-directed learning. [5] Additionally, various monitoring tools may be used with or without the child's consent. [2]

Apple's built-in parental controls. [6]


Overview

Parental control software typically allows parents to customize internet permissions on their child's devices or accounts on shared devices. [7] The account administrator, typically a parent, can change internet permissions for the entire household, while accounts under the administrator do not have that capability. This allows parents to establish rules for their children without having to physically enforce them.

Parental control software has become more prevalent in recent years. For example, basic parental controls now come standard with operating systems such as Windows. [8]

There are three main ways that parental control software functions, as illustrated in the sketch after this list:

  • Complete disablement of the internet allows parents to cut off their child's connection to wifi entirely during chosen time intervals. This can range from disabling wifi during a scheduled interval, such as at bedtime, to turning off the internet indefinitely, such as in instances of punishment.
  • Content blocking focuses on filtering the content that children can see; different accounts can have different age-appropriate preferences. For larger households with several devices, this allows each child to view age-appropriate content as determined by the account administrator.
  • Monitoring means that parents can have access to the complete browsing history of their children at any time. This allows parents to monitor how their children navigate the internet without hard boundaries.
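A minimal Python sketch of how these three modes might fit together in one policy object; the hours, hostnames, and function names are hypothetical, not drawn from any real product:

  from datetime import datetime, time

  # Hypothetical household policy; hours and hostnames are illustrative.
  POLICY = {
      "offline_hours": (time(21, 0), time(7, 0)),   # no internet 9pm-7am
      "blocklist": {"adult-site.example", "gambling.example"},
      "history": [],                                # visited hosts
  }

  def is_online_allowed(now):
      """Complete disablement: cut the connection during offline hours."""
      start, end = POLICY["offline_hours"]
      return not (now.time() >= start or now.time() < end)

  def is_site_allowed(host):
      """Content blocking: reject hosts on the administrator's blocklist."""
      return host not in POLICY["blocklist"]

  def log_visit(host, now):
      """Monitoring: record every visit for later parental review."""
      POLICY["history"].append((now.isoformat(), host))

  now = datetime.now()
  if is_online_allowed(now) and is_site_allowed("news.example"):
      log_visit("news.example", now)

Real products enforce these checks at the operating-system or router level, where a child account cannot simply close the program.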

History

Parental controls first emerged in the form of content filters. [9] The term content filter is interchangeable with internet filter: software that assesses and blocks online content containing specific words or images. Although the Internet was created to make information more accessible, open access to all kinds of information was deemed problematic in instances where children or younger age groups could view obscene or offensive materials. Content filters restrict what users may view on their computers by screening web pages and e-mail messages for category-specific content. Such filters can be used by individuals, businesses, or even countries to regulate Internet usage. [10]

Uses for Parental Control Software

Protection from Harmful Content

The vastness of the internet places a heavy burden on parents trying to protect their children from harmful content. Because children might not have the skills to successfully and safely navigate online environments, parental controls can be a helpful tool to guide them. While the internet is an integral part of children's schooling, the internet also makes available potentially traumatic content that these children would not otherwise see. Parental control software offers parents the ability to control what content their children have access to, even when they are not physically present to monitor them. There is evidence that parents who are involved in their children’s internet use in some way are more likely to encourage safe internet practices. [11]

Recommendations for Controlling Content for Children

There are various third-party parental control services, such as Bark, Qustodio, or Net Nanny, that allow parents to keep track of their children's devices. Prices for these services range anywhere from $50 to more than $100 if there are several children to monitor. These costs include 24/7 device monitoring and full visibility into how children are using their devices. [12] However, these services can only monitor accounts that they know the children are using, and in some instances they need account passwords in order to monitor activity. [13]

Health Benefits

Parental controls can also help children live healthier lifestyles. A study from 2016 found that about 59% of parents believed their children to be addicted to their cellular and/or electronic devices. [14] As children receive smartphones at younger and younger ages, parents may want to limit device usage to lower their children's chances of becoming addicted to their phones later on. Addiction to cellular and other electronic devices has several negative symptoms, ranging from psychological (anxiety and depression) to physical (eye strain and neck strain). [15] Less time spent on phones leads to increased physical activity and more in-person social interaction, making for a more well-rounded lifestyle. The American Academy of Pediatrics found that limiting children's screen time improves their physical and mental health and helps them develop academically, creatively, and socially. [16]

Parental Control Apps

An example of what information parents could view about their children's device usage with Qustodio. [17]

Aside from standard operating systems' built-in parental controls, parents can also download apps to set up these restrictions and monitoring systems. Life360 is a popular app that gives parents access to their children's location at all times. The app also offers driving reports so parents can see whether their teenager is speeding. Life360 has been controversial, with some people calling it the "Big Brother" of apps, and teens have said it has ruined their relationships with their parents. [18] Net Nanny is an app that can block inappropriate content online and “can also record instant messaging programs, monitor Internet activity, and send daily email reports to parents.” [19]

One app called Qustodio provides parents with activity summaries for each of their children. [17] These summaries include total screen time, a breakdown of the screen time that shows how much time was spent on different apps, a list of words that the child searched on the internet, and a tab alerting parents to possibly questionable activity. [17]
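A rough sketch of how such an activity summary could be computed from raw per-app usage events; the event format and field names are assumptions for illustration, since Qustodio's internal data model is not public.

  from collections import Counter

  # Hypothetical raw log: (app, minutes) events collected on the child's
  # device during one day.
  events = [("YouTube", 35), ("Safari", 10), ("YouTube", 20), ("Messages", 15)]

  def daily_summary(events):
      """Aggregate per-app minutes into the kind of report a parent sees."""
      per_app = Counter()
      for app, minutes in events:
          per_app[app] += minutes
      total = sum(per_app.values())
      return {"total_minutes": total, "by_app": per_app.most_common()}

  print(daily_summary(events))
  # {'total_minutes': 80, 'by_app': [('YouTube', 55), ('Messages', 15), ('Safari', 10)]}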

Another, more invasive option is keylogger services. A keylogger records every keystroke made on a device and provides the parent with a file, usually via email, of everything typed on the child's device. These services can also include access to contact lists and internet search histories. [20]

About “16% of parents report using parental control apps to monitor and restrict their teens’ mobile online activities,” and some parents are more likely than others to download these apps. [21] Two factors that correspond to higher rates of parental control app usage are parents who are “low autonomy granting” and children who have been “victimized online” or have had “peer problems.” [21]

Bypassing Parental Control

How VPN works as the middle man between the client and the host server. [22]

While parental controls like Life360 are quite comprehensive, tech-savvy teens have found ways to bypass some of them. Most software has bugs or loopholes that users can exploit to get around the rules set for them. For example, on the iPhone, parents can set screen time limits for certain apps, such as limiting their child to two hours of iMessage per day. [23] However, kids have figured out that many apps have built-in share functionality that can be used to send messages even when the iMessage app is locked. [24] If done carefully, parents may never realize that their kids have effectively removed the time limits. There is currently no way for parents to fully close this loophole; a complete solution would need to come from developers at Apple.

Additionally, teens can download a VPN (virtual private network) to browse the internet freely. A VPN creates an encrypted connection between the user's computer and the server they are trying to reach.[25] In essence, even if a website is blocked by parental controls, the VPN can forward the request through another computer that is not subject to those controls, and that computer returns the desired content, acting as a middle man between the teenager and the blocked content. There is no complete way to stop someone from accessing a website once they have a VPN; the main use of VPNs is to protect people's privacy, so they are built to be extremely secure. [26] The only sure way to prevent VPN use is to keep a VPN from being downloaded in the first place, for example by monitoring the app store. Additionally, most good VPNs charge a monthly fee, so there is a financial barrier to entry. [27]
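To make the middle-man idea concrete, the short sketch below routes an HTTP request through a proxy, a simpler relative of a VPN: the proxy address is a placeholder, and a real VPN additionally encrypts all of the device's traffic at the network level rather than per request.

  import requests

  # "proxy.example.com:8080" is a placeholder, not a real service.
  proxies = {
      "http": "http://proxy.example.com:8080",
      "https": "http://proxy.example.com:8080",
  }

  # Without the proxies argument, the request goes directly to the site,
  # so a local filter can see and block the destination. With it, the
  # filter only sees a connection to the intermediary, which fetches the
  # page on the client's behalf.
  response = requests.get("https://blocked.example.com", proxies=proxies)
  print(response.status_code)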

A recent TikTok trend revealed how teens escape punishment for their true whereabouts or activities by using creative methods to avoid setting off any triggers. One such tactic is leaving their phones in friends' mailboxes so that their reported location matches where they said they would be, then traveling without a phone to their actual destination. Teens have also figured out various cellular and wifi settings that allow them to restrict which of their activities are visible to their parents through the app. [28]

Adoption

About 32% of U.S. households have children, but that does not mean all of them utilize parental controls. [3] Parental controls are most likely used for children between the ages of 7 and 16, but parents with “very young children or older teens often have very little need for parental control technologies.” [3] Other factors that influence whether a family chooses to use parental controls include aversions to these technologies, beliefs that the technologies are ineffective, and “alternative methods of controlling media content and access in the home,” such as “household media rules.” [3] Sometimes parents elect not to deal with parental controls simply because they lack the energy. [3] However, parents who do choose to monitor their children's technology use can do so in a variety of ways.

Ethical Issues

In academia, there is a debate about whether or not parental control software leads to healthy outcomes. Some say that greater parental involvement in children's device usage allows for better internet safety practices. Others contend that parental control software enables parental behaviors that negatively affect family dynamics and internet safety practices. However, there is a consensus that the obstacles parents face when trying to protect their children from harmful content are largely shaped by how much information is easily accessible on the internet.

Trust

While some parents can harm their children by failing to teach them safe internet practices, research has shown that the reverse is also true. Because it provides parents with greater control over their children's internet access, parental control software can enable parents who already struggle with being overcontrolling in their relationships with their children. This can break trust within families and leave children without firsthand experience practicing safe internet habits. A study from the University of Central Florida found that two-thirds of teens' relationships with their parents soured after the installation of a parental control application. [29] One possibility is that parents replace meaningful conversations about safe internet practices with parental control software.

Teens who bypass parental controls also shift the balance of trust between parents and children. [28] Many children believe they are outsmarting their parents, while parents become less alert to their children's safety because they trust the app to do its job. Because of some of these behaviors, such as not keeping their phones with them, teens may be unable to contact their parents or emergency services if something goes wrong. The result is an increased safety risk from tools that parents adopted to reduce safety concerns.

Independence

As children enter adulthood, some have trouble adjusting to having autonomy over their internet use due to heavy supervision in the home. [30] An important ability for children to develop is learning from their mistakes and solving problems on their own. One study suggested that some children whose parents used parental controls were less likely to approach their parents about problems they ran into, both on- and offline, leaving them less able to solve problems through collaboration with others. [31] Additionally, extensive studies have examined the effects that overbearing parents can have on children: children with controlling parents demonstrate lower self-esteem, act out more, and have lower academic performance. [32] Because parental controls can lead to controlling parenting, they need to be treated with great care.

References

  1. J. D. Biersdorfer. "Tools to Keep Your Children in Line When They’re Online" (2018, March 02). New York Times. Retrieved April 01, 2021.
  2. 2.0 2.1 Federal Trade Commission. (2018, March 13). Parental controls. Retrieved April 06, 2021, from https://www.consumer.ftc.gov/articles/0029-parental-controls
  3. 3.0 3.1 3.2 3.3 3.4 3.5 Thierer, Adam. (2019). Parental Controls & Online Child Protection. The Progress & Freedom Foundation, 45-51. https://poseidon01.ssrn.com/delivery.php?ID=396031102089030101016075082091103088058033095009026094104027126098086093111106075097029031099028051096054088127017025008122107111073000085023006114113101117080113006077037031064068080020002099009101067004068104007107105022107078007111120115083101094&EXT=pdf&INDEX=TRUE
  4. Managing Screen Time and Privacy | Could Parental Control Apps Do More Harm than Good? https://techden.com/blog/screen-time-privacy-parental-control-apps/
  5. Controlling Parents – The Signs And Why They Are Harmful https://www.parentingforbrain.com/controlling-parents/
  6. ‌amaysim. (2020, November 3). How to set up parental controls on your iPhone, iPad or Android device. amaysim. https://www.amaysim.com.au/blog/world-of-mobile/set-up-parental-controls-apple-android
  7. ‌The Business Insider. (2020, September 18). The best internet parental control systems. Newstex LLC. https://go-gale-com.proxy.lib.umich.edu/ps/i.do?p=STND&u=umuser&id=GALE%7CA635821966&v=2.1&it=r&sid=summon
  8. ‌Microsoft. (n.d.). Parental consent and Microsoft child accounts. Microsoft. https://support.microsoft.com/en-us/account-billing/parental-consent-and-microsoft-child-accounts-c6951746-8ee5-8461-0809-fbd755cd902e
  9. Clark, N. Content Filter Technology. Retrieved 1 April 2021, from https://www.britannica.com/technology/content-filter
  10. Web Content Filtering. Retrieved 1 April 2021, from https://www.webroot.com/us/en/resources/glossary/what-is-web-content-filtering
  11. Gallego, Francisco, A. (2020, August). Parental monitoring and children's internet use: The role of information, control, and cues. ScienceDirect. https://www-sciencedirect-com.proxy.lib.umich.edu/science/article/pii/S0047272720300724?via%3Dihub
  12. Knorr, Caroline. (2021, March). Parents' Ultimate Guide to Parental Controls. https://www.commonsensemedia.org/blog/parents-ultimate-guide-to-parental-controls#What%20are%20the%20best%20parental%20controls%20for%20setting%20limits%20and%20monitoring%20kids?
  13. Orchilles, Jorge. (2010, April). Parental Control. https://www.sciencedirect.com/topics/computer-science/parental-control
  14. Teenage Cellphone Addiction https://www.psycom.net/cell-phone-internet-addiction
  15. Signs and Symptoms of Cell Phone Addiction https://www.psychguides.com/behavioral-disorders/cell-phone-addiction/signs-and-symptoms/
  16. Miller, M. (2020, February 24). Benefits Of Limiting Screen Time For Children. Retrieved from https://web.magnushealth.com/insights/benefits-of-limiting-screen-time-for-children#:~:text=It%20is%20amazing%20to%20see,online%20and%20age%2Dinappropriate%20videos.
  17. 17.0 17.1 17.2 ‌Teodosieva, Radina. (2015, October 16). Spy me, please! The University of Amsterdam. http://mastersofmedia.hum.uva.nl/blog/2015/10/16/spy-me-please-the-self-spying-app-that-you-need/
  18. Skenazy, Lenore. “Life 360 Should Be Called ‘Life Sentence 360.’”
  19. ‌Kanable, Rebecca. (2004, November). Policing Online: From Internet Safety to Employee Management and Parolee Monitoring, Technology Can Help. U.S. Department of Justice. https://www.ojp.gov/ncjrs/virtual-library/abstracts/policing-online-internet-safety-employee-management-and-parolee
  20. Reporter, S. (2019, November 29). Parenting benefits of keylogger program. Retrieved April 08, 2021, from https://www.sciencetimes.com/articles/24367/20191129/parenting-benefits-of-keylogger-program.htm
  21. 21.0 21.1 ‌Ghosh, Arup K, et al. (2018, April 26). A Matter of Control or Safety? Examining Parental Use of Technical Monitoring Apps on Teens’ Mobile Devices. Association for Computing Machinery Digital Library. Association for Computing Machinery. https://dl.acm.org/doi/pdf/10.1145/3173574.3173768
  22. Namecheap. "How does VPN work?" (January 3, 2020). Retrieved April 01, 2021.
  23. Lance Whitney. "How to Use Apple's Screen Time on iPhone or iPad" (January 3, 2020). PC Mag. Retrieved April 01, 2021.
  24. Jellies App. "Are Your Kids or Teens Unlocking Apple Screen Time Limits?" (January 3, 2020). Retrieved April 01, 2021.
  25. Mark Smirniotis. "What Is a VPN and What Can (and Can’t) It Do?" (March 3, 2021). New York Times. Retrieved April 01, 2021.
  26. David Pierce. "Why You Need a VPN—and How to Choose the Right One" (September 18, 2018). Wall Street Journal. Retrieved April 01, 2021.
  27. Yael Grauer. "The Best VPN Service" (February 25, 2021). New York Times. Retrieved April 01, 2021.
  28. 28.0 28.1 Meisenzahl, M. (2019, November 08). Teens are Finding sneaky and clever ways to outsmart their parents' location-tracking apps, and it's turning into a meme ON TIKTOK. Retrieved April 08, 2021, from https://www.businessinsider.com/life360-location-tracker-teens-tiktok-memes-tips-2019-11#as-more-teens-and-their-parents-use-life360-the-community-on-tiktok-has-made-a-meme-out-of-it-videos-are-set-to-the-song-fly-by-still-lonely-which-has-the-lyrics-life-360-in-it-2
  29. Managing Screen Time and Privacy | Could Parental Control Apps Do More Harm than Good? https://techden.com/blog/screen-time-privacy-parental-control-apps/
  30. Cetinkaya, L. (2019). The Relationship between Perceived Parental Control and Internet Addiction: A Cross-sectional study among Adolescents. Contemporary Educational Technology, 10(1), 55–74. https://doi.org/10.30935/cet.512531
  31. Managing Screen Time and Privacy | Could Parental Control Apps Do More Harm than Good? https://techden.com/blog/screen-time-privacy-parental-control-apps/
  32. Controlling Parents – The Signs And Why They Are Harmful https://www.parentingforbrain.com/controlling-parents/