Hi, I’m Brandon Smith. I live in the Kansas City area and am getting closer to finishing a bachelor’s degree in Informatics with a concentration in Cybersecurity. I’ve had some past coursework in statistics and social research that I found very interesting. I’m looking forward to the technology angle this course promises to bring to some of those same topics.
Whether it be politics, coronavirus, or celebrity news, our interests and lives have become increasingly entangled in the online world. Social media platforms, mass media outlets, and even small blogs have all become a significant part of how we consume and share information as well as how we interact with others. The ease and speed of communication have some downsides, however. Anyone can share false information and spread it just as quickly. This disinformation can be used to target people with specific opinions or worldviews and give them a false perception of reality. Some online platforms make efforts to moderate content on their sites. Such moderation can be difficult. Is the moderated content actually false, or just a matter of different and harmless opinions? Does the platform or moderator have a bias that influences what they take action against? Is protecting their users necessary, or is this excessive censorship? Social media sites and many online news outlets benefit most by attracting users and keeping them clicking. Does the goal of driving consumption come at the expense of quality content?
A lot of debate and gray area exists around what content is moderated and how it is done. With that in mind, how has user trust in these platforms been affected? Do users trust content on moderated platforms, or do they prefer content in places espousing a free-speech-for-all approach? Do users believe that platforms can be trusted to moderate content fairly and without bias? Identifying the factors users consider important in this debate, along with methods to authenticate and verify truthful sources, could form the basis of a framework that moderates content or alerts users to possible errors in the information they consume, and does so in a way that encourages accurate information and builds trust between the outlet and the consumer.
A lot of great questions there! I look forward to seeing what you find and how you narrow your topic down.
When administering a website with looser restrictions on its end users, I believe the approach can work, but at the cost of advertising revenue. The biggest thing social media platforms rely on for income is advertising and the marketing data gathered from their end users.
For example, a website such as National Geographic can attract a much larger pool of advertisers because of its safe-for-work user base and content. A website such as Reddit draws from a smaller pool because some subreddits contain “not safe for work” content. The morality of the situation aside, businesses generally prefer to generate more income, and a social media business will probably take the bigger advertising pool over the smaller one if given the chance.
Interesting topic, Brandon. Not only does technology make the spread of false information easier, but our own confirmation bias plays a large part in it as well. Many people tend to believe information that aligns with their belief structure regardless of whether it is true. They then use those sources to spread that information to friends, family, and the rest of the world. Now more than ever, we have to be diligent when researching topics.