Negative Effects of Social Media
Social media is a vast platform that draws us in with an endless variety of content. The amount of interaction a person can have with people online within a single day is staggering. It follows that platforms with so much influence over our lives should be properly understood, and this research seeks to educate people on the negative impact of social media on society. So why is social media bad? The saying that good does not exist without bad fits here: as much as social media is a virtual tool that spreads information at remarkable speed, it is also causing damage.
The visible damages are addiction, depression, anxiety, and the like. The less visible damages, however, are just as bad, and possibly worse. Social media creates a platform for trolls, and with the aid of advanced technology, these trolls have learned to profit from spreading information instantly. Because information travels so quickly, commercial interests have moved in, and content can take on the character of propaganda on a large scale. New technology is what potentially makes social media a dangerous platform. Artificial intelligence gives life to 'bots' that can create content and disperse information at a rapid pace, so it is clear that they have a hand in what appears on our timelines.
If the larger population has no idea that propaganda on a global scale can occur with the help of artificial intelligence (AI), it will be difficult to control the situation once it gets out of hand and the negative effects of social media become acutely apparent. We are talking about a platform that shapes people's views while much of its activity is carried out by AI that simply follows its operators' instructions. Rather than being merely an argumentative essay on the negative effects of social media, this paper seeks to show how artificial intelligence, through bots, affects the way information is controlled and distributed on social media platforms, and whether or not users are aware of it.
Literature Review
AI Interactions
Considerable importance has been placed on studying artificial intelligence, and since AI is such a complex system, its progression should ideally be examined from all angles. A study by Mou and Xu (2017) explored the dynamics between humans and AI by observing a sample group interacting with an artificial intelligence named "Little Ice." The study concluded that "users tended to be more open, more agreeable, more extroverted, more conscientious and self-disclosing when interacting with humans than with AI" (Mou & Xu, 2017). There were several reasons for this, one being the fact that "bot-to-bot interactions were poor," according to a study by Tsvetkova, García-Gavilanes, Floridi, and Yasseri (2017). That study tracked bot-to-bot interactions on Wikipedia pages over the course of ten years and observed that bots kept overwriting each other's edits. Their edits were far more numerous than human edits and were often repetitive.
Interaction between bots is poor because they run automated tasks: they do what they are programmed to do and nothing further. This hinders their ability to interact fluidly and independently, yet at the same time they are capable of making accurate judgments with the data they have. AI can be programmed to process any kind of data, even behavioral data. In one study, AI judgments "were better at predicting life outcomes and other behaviorally related traits than human judgments" (Youyou, Kosinski, & Stillwell, 2015). The authors go on to say that AI's advancement has been remarkable and should not be underestimated.
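To make the bot-on-bot pattern concrete, a minimal sketch of how repeated reverts between bot accounts could be counted from an edit log is shown below. The field names and sample edits are invented for illustration and are not drawn from the Tsvetkova et al. (2017) dataset.

```python
from collections import Counter

# Hypothetical edit log: (article, editor, reverted_editor); None means the edit
# did not undo anyone. A real analysis would parse Wikipedia revision histories.
edits = [
    ("Article_X", "BotA", None),
    ("Article_X", "BotB", "BotA"),
    ("Article_X", "BotA", "BotB"),
    ("Article_X", "BotB", "BotA"),
    ("Article_Y", "HumanC", "BotA"),
]

bots = {"BotA", "BotB"}

# Count reverts where both the reverting and the reverted editor are bots.
bot_vs_bot = Counter(
    (editor, reverted)
    for _, editor, reverted in edits
    if reverted is not None and editor in bots and reverted in bots
)

for (editor, reverted), n in bot_vs_bot.items():
    print(f"{editor} reverted {reverted} {n} time(s)")
```

Repeated pairs in such a tally are exactly the kind of reciprocal, mechanical overwriting the Wikipedia study describes.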
Present Situations
The technological age is progressing rapidly. Some scholars believe that the Web 4.0 era has arrived but has not yet fully emerged, an age signified by AI integrating itself into, and forming a relationship with, humans (Schroeder, 2018). The previous ages were "massive information availability and searchability (Web 1.0), social media and enormous amounts of user-generated content (Web 2.0), and increasingly intrinsic connections between data and knowledge (Web 3.0)" (Schroeder, 2018). According to Schroeder, artificial intelligence operates millions of accounts in cyberspace, and these accounts can be used to spread fake news. Such bots are more efficient than humans because they never stop working. The article "The Death of Advertising" discusses AI in a similar manner, describing knowledge bots, or "knowbots," that can analyze large amounts of data about consumers and the market. This information can then be used to improve the product.
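The "knowbot" idea essentially amounts to automated aggregation of consumer data. A minimal sketch of that kind of summary, using an invented dataset, might look like this:

```python
# Minimal sketch of "knowbot"-style analysis: aggregate consumer feedback by
# product and surface the weakest-rated one. The dataset is invented.
from collections import defaultdict
from statistics import mean

reviews = [
    ("phone_case", 4), ("phone_case", 5), ("phone_case", 4),
    ("charger",    2), ("charger",    3), ("charger",    1),
]

ratings = defaultdict(list)
for product, score in reviews:
    ratings[product].append(score)

summary = {product: mean(scores) for product, scores in ratings.items()}
weakest = min(summary, key=summary.get)

print("Average ratings:", summary)
print("Product most in need of improvement:", weakest)
```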
Social Media
Overall, AI is slowly integrating with humans. According to Clay Farris Naff, the internet is "infected" with bots that spread fake news and conduct propaganda through social media. Naff notes that last fall's US elections were amplified by a vast number of Russian bots that pushed hashtags such as #WarAgainstDemocrats into popularity, that these bots have instigated fake rallies and stoked tensions between ethnic groups, and that half of Trump's Twitter followers are bots. It takes only one human troll for his ideas to be spread by 20 million bots. Nor is it just the political system in crisis; the advertising world is affected as well. Branding is now done by bots, and adding a hyper-real CGI image to these bots makes their interactions seem more human. The real problem is that, while bots are not capable of complex interaction the way humans are, they can still be deceitful by misrepresenting themselves as humans and then spreading fake news.
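To put the amplification claim in rough numerical terms, here is a toy calculation assuming one troll's post is relayed by automated accounts. The follower counts are invented and audience overlap is ignored, so the result only illustrates the mechanism, not actual reach.

```python
# Toy amplification estimate: one troll's post relayed by automated accounts.
# All numbers except the quoted bot count are invented; audiences are assumed
# not to overlap, which overstates reach.
troll_followers = 5_000
bot_count = 20_000_000          # figure quoted in the text above
followers_per_bot = 30          # assumed average audience per bot account

direct_reach = troll_followers
amplified_reach = bot_count * followers_per_bot

print(f"Direct reach:         {direct_reach:,}")
print(f"Amplified reach:      {amplified_reach:,}")
print(f"Amplification factor: ~{amplified_reach / direct_reach:,.0f}x")
```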
In agreement with this, de Lima Salge and Berente (2017) write that bots can behave unethically on social media, from stealing data to breaching agreements. They argue that whether bots act ethically or not needs to be examined precisely because bots lurk on our platforms without giving away their identity (2017). They describe "Tay," a social bot created by Microsoft that interacted with humans and, within 24 hours, went from saying "Humans are super cool" to "Hitler was right; I hate Jews." Another social bot tweeted, "I seriously want to kill people." If these are the conclusions bots can reach on their own through the interactions they have, it may be hard to contain the damage they do on our platforms; a social bot that interacts with the wrong people might even be influenced to do something illegal (2017). While a bot's sense of ethics remains a gray area, it is clear that the public needs to be aware of the harm bots can do, how to recognize them, and how to push back against them. Their study explores this potential of bots (2017).
Social bots are more common than people often think; Twitter has approximately 23 million of them, accounting for 8.5% of its total users, and Facebook has an estimated 140 million social bots, which make up roughly 5% of its total users. Almost 27 million Instagram users (8.2%) are estimated to be social bots. LinkedIn and Tumblr also have significant social bot activity. (p. 1)
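As a rough consistency check on the figures quoted above, the implied total user base of each platform can be computed as the reported bot count divided by the reported bot share. The sketch below simply restates those numbers and should not be read as current platform statistics.

```python
# Back-of-envelope check: implied total accounts = reported bot count / reported bot share.
# Figures are the ones quoted above (circa 2017) and are illustrative only.
reported = {
    "Twitter":   (23_000_000, 0.085),
    "Facebook":  (140_000_000, 0.05),
    "Instagram": (27_000_000, 0.082),
}

for platform, (bots, share) in reported.items():
    implied_total = bots / share
    print(f"{platform}: ~{implied_total / 1e6:.0f} million total accounts implied")
```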
This article discusses the ethical atmosphere surrounding bots, notes that it is largely a gray area, and sets out their potential. It also raises another important point: bots can be deceptive even when they are not breaking laws. Most social media users do not recognize bots when they encounter them (2017), and the studies discussed previously make clear that bots have the potential to act outside the acceptable social realm. Several social media platforms have also reported bots violating their terms and conditions and breaching data protection (2017).
To take on bots, social media platforms have to confront them head on, or they will interfere with the user experience. According to Trend News Agency, "Instagram announced on Monday the latest step to purge inauthentic likes, follows, and comments from accounts that used third-party apps to boost their popularity" (2018). These bots and their activities violate Instagram's guidelines, and the platform is moving to remove them using a machine learning tool that detects bot accounts (2018).
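Instagram has not published the details of its detection system. Purely as an illustration of the general idea, a toy machine-learning classifier trained on simple behavioural features might look like the following; the features, labels, and data are invented, and real platform systems are far more sophisticated.

```python
# Illustrative only: a toy classifier separating bot-like from human-like accounts
# using simple behavioural features. Not Instagram's actual system.
from sklearn.linear_model import LogisticRegression

# Features per account: [posts_per_day, followers-to-following ratio, share of identical comments]
X = [
    [120.0, 0.01, 0.95],   # bot-like: very high activity, repetitive comments
    [300.0, 0.05, 0.90],
    [150.0, 0.02, 0.85],
    [2.0,   1.20, 0.05],   # human-like: modest activity, varied comments
    [1.0,   0.80, 0.10],
    [3.0,   1.50, 0.08],
]
y = [1, 1, 1, 0, 0, 0]     # 1 = labelled bot, 0 = labelled human

model = LogisticRegression().fit(X, y)

new_account = [[200.0, 0.03, 0.92]]
print("Predicted bot probability:", model.predict_proba(new_account)[0][1])
```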
Method
Participants
One hundred and fifty active social media users aged 21-35 will be surveyed online in exchange for $10 Amazon gift cards. These users will first be screened on their basic knowledge of artificial intelligence and its presence on social media through bots. The screening will be conducted online through survey websites, and the participants selected will be contacted. The screening process will require participants to answer simple questions tied to the eligibility requirements. The chosen candidates will then answer in-depth survey questions about what they know about AI, such as "Have you noticed any AI activity on the internet?" and "Can you recognize these bots?" Some participants may answer the questions without any real knowledge, but their answers will still help capture the perspective of people who view this issue from the sidelines. The sample will be drawn from all ethnicities, but a Bachelor's degree is required, and proof will be requested as an uploaded document. Gender and sexuality will not play a role in the screening.
Materials and procedure
Participants will complete online questionnaires emailed to them after they are selected in the screening process. All a participant needs is an email address, an internet-connected device (e.g., computer, phone, tablet), a Bachelor's degree, and active social media accounts. The online questionnaires will be easy to navigate and will only require short answers about what the users think. Even if an unsuitable participant slips through screening, the questions should prompt them to look into the topic, which supports the accuracy of the research. Outlying answers will still be considered, as they show how such participants feel about AI and what they know about its effects. Example questions include: How many bots do you think exist on social media? Can you roughly explain what these bots do? Do you think bots have a positive or a negative impact? Have you heard of any recent bot activity that was unethical? The questionnaire will also contain a list of basic instructions on how to take the survey. To gather informative answers, participants will be asked to be as specific as they can and to leave no question blank in order to receive compensation. The survey will have no time limit but can be completed in about 15 minutes. It will ask about a participant's encounters with bots in depth: how to tell whether an account is controlled by AI, what the process of reporting a bot involves, and whether and how it feels like bots are overpowering opinions. Participants will complete the survey individually and must give their own answers. Once the responses are received, the data will be categorized into the various effects of AI on social media, as sketched below.
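As a hypothetical illustration of that final categorization step, open-ended answers could be given a first-pass coding with simple keyword matching before deeper qualitative analysis. The categories and keyword lists below are assumptions for illustration, not part of the actual instrument.

```python
# Hypothetical first-pass coding of open-ended survey answers into effect categories.
# Keyword lists are illustrative; real qualitative coding would be done by researchers.
categories = {
    "misinformation": ["fake news", "propaganda", "misinformation"],
    "manipulation":   ["opinion", "influence", "election"],
    "awareness":      ["never noticed", "didn't know", "unaware"],
}

def code_answer(answer: str) -> list:
    """Return every category whose keywords appear in the answer."""
    text = answer.lower()
    matches = [cat for cat, keywords in categories.items()
               if any(k in text for k in keywords)]
    return matches or ["uncategorized"]

responses = [
    "I think bots mostly spread fake news during elections.",
    "Honestly, I never noticed any bot activity on my feed.",
]

for r in responses:
    print(code_answer(r), "<-", r)
```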
Discussion
As established in the literature review, artificial intelligence and its development are fairly recent, and much of its involvement is not known to the general population. Bot-to-bot and bot-to-human interactions have raised various ethical questions. Bots are largely constrained by their programming and by whatever data they can gather through interaction on social media. While humans can evaluate what they stumble upon, bots cannot, and so it becomes dangerous to let them interact freely. To work out whether it is acceptable for them to interact freely on social media, the general population needs to be involved and discussions need to be had; studying the effects brings us one step closer to understanding the situation.
A qualitative study can best showcase a range of opinions and help us reach a better understanding, and this is done with the help of a deeply analytical survey. In this study, it is crucial that participants know how to analyze problems in their complexity, so an education-level requirement is set in place. The other requirement is basic knowledge of AI. This lets the study move forward with participants who can contribute their prior knowledge; the screening process picks out these participants. Other criteria, such as gender, race, and nationality, do not play an important role. On the contrary, the further the surveys reach, the better, so that the study can take on a global perspective, since bots are a global phenomenon. The participants, as mentioned before, will be given enough incentive to give in-depth answers to the survey questions. Since the survey has no right or wrong answers and is based on opinions, the only variable to account for is participants who pass the screening process without actually knowing anything about AI. A problem would arise if there were many such participants, in which case the screening would be repeated for accuracy.
The literature from Naff (2018) helped steer this research when it came to choosing a method of gathering data. The article described bots as deeply rooted, shaping opinions on social media (2018). While there is quantitative data about bots, there is not enough opinion on how these bots should function and what effects they can have. A question of ethics was also raised, and answering it requires seeing the perspective of social media users themselves (2018). These individual opinions build a greater understanding. Future studies can use the categorized data to study the same topic quantitatively; this study draws out the pros and cons, so the effects it gathers will serve as a guide for collecting more data.
Conclusion
This study cannot gather an abundance of data, but it can start conversations about the place artificial intelligence should hold in our lives. AI is not fully sentient, and so, as a society, it is important for us to understand the extent of its reach in order to preserve the integrity of news media. Given the recent state of political events between the two world superpowers, the need for truth matters more than ever. Hence, if artificial intelligence is allowed to exist, how should it ideally exist?