

PROJECT PROPOSAL (CMP7200)
IDENTIFICATION AND EXTRACTION OF FAKE NEWS FROM TWITTER
Name:

Student ID:

Word Count:

Date:

Table of Contents
Table of Figures
Abstract
Introduction
Background
Aim/Objective
Research Questions
Ethical Concerns
Project Timeline
Literature Review
Fake News Detection on Social Media: A Data Mining Perspective
Fake News Detection on Social Media using Geometric Deep Learning
Methodology
Experimentation
Phase 1: Data
Phase 2: Algorithm
Conclusion
References
Table of Figures
Figure 1 – Gantt chart defining the project timeline
Figure 2 – Procedure for Literature Review
Figure 3 – Comparison of datasets based on the type of features that can be extracted
Figure 4 – Model Architecture (GC: Graph Convolution, MP: Mean Pooling, FC: Fully Connected, SM: SoftMax)
Figure 5 – Performance Metrics
Abstract
Social media has evolved from a ‘friends and family’ platform into a tool that can be used to deliver news, convey a message, spread an idea, capture a market and much more. This has been the case ever since the public realized its true potential. With unmatched speed of news dissemination, and the power to make anything go viral within minutes of its release, social media is becoming the center of attention for anyone with information to spread.
The issue is that social media never started out as a tool for any of the applications it has today. It started out as a platform for connecting people from all across the globe. Now, however, people use it as a way to get more views on their content. These views do not come easily; to get them, people go to extreme lengths, even posting fake news on the internet, i.e., content people want to hear, to lure in views.
This project presents research that utilizes Machine Learning to identify whether a certain tweet contains fake news or not. The project is a classification project and will therefore use classification approaches. Some of the more popular approaches for classification are Decision Trees, Artificial Neural Networks (ANNs) and Support Vector Machines (SVMs); k-Nearest Neighbors (KNN) is another popular instance-based classifier, while clustering algorithms such as k-means could be used if the project were taken into unsupervised learning. This research will experiment with binary classification using Machine Learning and demonstrate that Machine Learning can be useful in protecting people from fake news on the internet.
The output of this project’s research will be experimental data and an analysis of how Machine Learning detects fake news and how effectively it does so.
Keywords: Fake News, ANN, Decision Trees, Binary Classification, Logistic Regression
Introduction
According to Shu et al. (2017), fake news is a news article that is intentionally and verifiably false. This means that any news article deliberately written to be false for some nefarious reason is considered a fake news article. Shu et al. (2017) also mention that a variety of reasons, emerging from different backgrounds, can lie behind people creating fake news. These reasons can be psychological or social in nature. This is discussed further in the Literature Review.
Social media is nowadays one of the main news sources for millions of people around the world due to its low cost, easy access, and rapid dissemination (Monti, Frasca, Eynard and Mannion, n.d.). Social media systems have been significantly changing the way news is produced, disseminated, and consumed, opening unprecedented opportunities (Reis et al., 2019). The rapid growth in Information Technology (IT) over the last twenty years has led to a growth in the amount of information available on the World Wide Web. Social media is a new style of exchanging and sharing information: it refers to the means of interaction among people in which they create, share, and exchange information and ideas in virtual communities and networks (such as Twitter and Facebook) (Badieh Habib Morgan and van Keulen, 2014). This, however, comes at the cost of questionable trustworthiness and a significant risk of exposure to ‘fake news’, deliberately written to mislead readers (Monti, Frasca, Eynard and Mannion, n.d.).
Background
Social media is becoming increasingly popular amongst the masses. Advances in internet technology have brought on a massive surge of data generated on social media. People have since realized that social media is the new way of sharing information around the world and that they can profit from it. Social media is nowadays one of the primary news sources for large numbers of people around the world due to its low cost, easy access, and quick spread (Monti, Frasca, Eynard and Mannion, n.d.).
Shu et al. (2017) also characterize the foundations of fake news based on human psychology; this is discussed in the Literature Review section. According to the literature review done in this project, the research performed in this domain has always been classification-based. Popular classification techniques from the research papers include Decision Trees, Random Forests, SVM and CNN (Monti, Frasca, Eynard and Mannion, n.d.). This project will proceed with the same classification paradigm for the classification of fake news from Twitter. The specific algorithm that this research will employ will be decided with further research, but the choice will be between logistic regression and a Multi-Layer Perceptron (MLP).
Aim/Objective
This project is built with the purpose of classifying whether the information fetched from Twitter is fake or not. For that, the following steps are necessary:
Literature Review performed to grasp the development in this domain and to study the state-of-the-art approaches to this problem.
Finalizing a dataset consisting of tweets to train the ML algorithm/ensemble to classify fake and real tweets.
Creating a ML model for training and identification of fake news.
Algorithm and Performance Analysis.
Research Questions
Striving to answer the following question is what gives direction to this research project:
Can classification algorithms in Machine Learning help identify fake news and thus make the information on social media more reliable?
Ethical Concerns
The dataset used in this research was developed by Shu et al. (2017) to address the shortcomings of existing social media datasets for fake news detection. The Twitter policy must be taken into consideration while using the dataset. Although the dataset has been approved and made public, one can never be too careful when it comes to ethics. The Twitter developer policy states the following:
“Privacy and control are fundamental – Protecting and defending the privacy of people on Twitter is built into the core DNA of our company. As such, we prohibit the use of Twitter data in any way that would be inconsistent with people’s reasonable expectations of privacy. By building on the Twitter API or accessing Twitter Content, you have a special responsibility to uphold this commitment, first and foremost by respecting people’s privacy and providing them transparency and control over how their data is used.”
Other parts of the Twitter privacy policy address related matters such as account privacy and data sharing:
“Twitter is public and Tweets are immediately viewable and searchable by anyone around the world. We give you non-public ways to communicate on Twitter too, through protected Tweets and Direct Messages. You can also use Twitter under a pseudonym if you prefer not to use your name.
When you use Twitter, even if you’re just looking at Tweets, we receive some personal information from you like the type of device you’re using and your IP address. You can choose to share additional information with us like your email address, phone number, address book contacts, and a public profile. We use this information for things like keeping your account secure and showing you more relevant Tweets, people to follow, events, and ads.
We give you control through your settings to limit the data we collect from you and how we use it, and to control things like account security, marketing preferences, apps that can access your account, and address book contacts you’ve uploaded to Twitter. You can also download information you have shared on Twitter.
In addition to information you share with us, we use your Tweets, content you’ve read, Liked, or Retweeted, and other information to determine what topics you’re interested in, your age, the languages you speak, and other signals to show you more relevant content. We give you transparency into that information, and you can modify or correct it at any time.”
As long as there is no breach of the interests of Twitter’s users and the data is not used irresponsibly, data from Twitter can be used. Since the dataset was developed as part of the research in (Shu et al., 2017) and is available in a public repository on GitHub, its use raises no ethical issues.
Project Timeline
A timetable for managing the modules and submodules of the project with respect to time is called a project timeline. The project timeline helps divide the project into modules and submodules so that one can work on a single part for a specified time. This helps the developer remain focused and streamlined, and the work is finished more efficiently too.
A Gantt chart is the tool used for depicting a project timeline. The Gantt chart for this research project is shown below in Figure 1.
Figure 1 – Gantt chart defining the project timeline
Literature Review
A solid literature review (LR) provides strong support and a head start for the project. Research most relevant to what this project will do is gathered and analyzed to provide a basis for this work. This will also help establish the validity and integrity of this research. The LR will be performed as shown in Figure 2.
Figure 2 – Procedure for Literature Review
To maintain the quality and integrity of the LR, the following rules are applied while selecting papers:
The language of the selected papers must be English.
Papers should be published in journals with a high impact factor.
Complete and free access to the journal should be available.
All the evaluation datasets and code for the papers should be freely available.
Papers older than 10 years are not acceptable.
Fake News Detection on Social Media: A Data Mining Perspective
Introduction
This survey presents a comprehensive review of detecting fake news on social media, including characterizations of fake news based on psychology and social theories, existing algorithms from a data mining perspective, evaluation metrics, and representative datasets. Related research areas, open issues, and future research directions for fake news detection on social media are also discussed.
Approach
As an increasing proportion of our lives is spent interacting online through social media platforms, more and more people tend to seek out and consume news from social media rather than traditional news organizations. The reasons for this change in consumption behavior are inherent in the nature of these social media platforms:
It is often more convenient and cheaper to consume news through social media compared with traditional news media, such as newspapers or television.
It is easier to further share, comment on, and discuss the news with friends or other users via social media.
Since it is practically free to publish news online and much faster and easier to disseminate it through social media, large volumes of fake news, i.e., news articles with intentionally false information, are produced online for a variety of purposes, such as financial and political gain. To help mitigate the negative effects caused by fake news – both for the public and for the news ecosystem – it is important that we develop methods to automatically detect fake news on social media. The article presents an overview of fake news detection and discusses promising research directions.
Fake news detection is a two-fold process consisting of characterization and detection. The process is as shown below:
Characterization
Fake News on Traditional Media
Psychological Foundations: Humans are generally not very good at distinguishing between real and fake news. There are several psychological theories that can explain this phenomenon and the persuasive power of fake news. Traditional fake news mainly targets consumers by exploiting their individual vulnerabilities. There are two major factors which make consumers naturally vulnerable to fake news:
Naive Realism: consumers tend to believe that their perceptions of reality are the only accurate views, while others who disagree are regarded as uninformed, irrational, or biased.
Confirmation Bias: consumers prefer to receive information that confirms their existing views.
Due to these cognitive biases inherent in human nature, fake news can often be perceived as real by consumers.
Social Foundations: Prospect theory describes decision making as a process by which people make choices based on the relative gains and losses compared with their current state. This desire for maximizing the reward of a decision applies to social gains as well, for instance, continued acceptance by others in a user’s immediate social circle. This preference for social acceptance and affirmation is essential to a person’s identity and self-esteem, making users likely to choose ‘socially safe’ options when consuming and spreading news information, following the norms established in the community even if the news being shared is fake news.
Fake News on Social Media
Malicious Accounts: While many social media users are legitimate, some social media users may be malicious, and in some cases are not even real humans. The low cost of creating social media accounts also encourages malicious user accounts, such as social bots, cyborg users, and trolls. In short, these highly active and partisan malicious accounts on social media become powerful sources and amplifiers of fake news.
Echo Chamber:
The echo chamber effect facilitates the process by which people consume and believe fake news due to the following psychological factors:
Social credibility: meaning people are more likely to perceive a source as credible if others perceive the source as credible, especially when there is not enough information available to assess the truthfulness of the source.
Frequency heuristic: meaning that consumers may naturally favor information they hear frequently, even if it is fake news.
Detection
The detection phase of this research is divided separately into detecting fake news from news content and detecting fake news from social media. The mathematical definition of fake news is as follows:
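Paraphrasing the problem formulation given in (Shu et al., 2017), and stated here only as a sketch of that definition: given a news article a together with its social news engagements E, the task is to learn a prediction function F such that F(a) = 1 if a is a piece of fake news, and F(a) = 0 otherwise. Fake news detection is thus treated as a binary classification problem over news articles and their associated social context.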
Feature Extraction
There is a difference between approaching fake news articles coming from newspapers and news providers and those coming from social media. These are the features this research focuses on:
For News Context:
Article Source
Headline
Body Text
Image/Video


These features can be classified into linguistic-based and visual-based features. Linguistic-based features help target issues like ‘clickbait’ and opinion-biased news. The features that are extracted from linguistic data are lexical features and syntactic features. The detection of these features can be combined with a focus on domain-specific vocabulary, and lie-detection techniques can be applied to reinforce the features. Fake images that have been edited and are not verified can be caught using visual features; such images can be identified based on visual features like resolution, clarity, coherence scores and many more.
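As a small illustration of what lexical feature extraction could look like in practice, the sketch below builds word n-gram (TF-IDF) features from article text. It is only a sketch: it assumes Python with scikit-learn, and the two example texts are hypothetical placeholders, not taken from any dataset used in this project.

# Minimal sketch: lexical (word n-gram) features from article text using TF-IDF.
# Assumes Python with scikit-learn installed; `texts` is a hypothetical placeholder list.
from sklearn.feature_extraction.text import TfidfVectorizer

texts = [
    "Breaking: celebrity spotted living on the moon!",        # hypothetical clickbait-style text
    "The city council approved the new budget on Tuesday.",   # hypothetical ordinary news text
]

# Unigrams and bigrams, lower-cased, with English stop words removed.
vectorizer = TfidfVectorizer(ngram_range=(1, 2), stop_words="english")
features = vectorizer.fit_transform(texts)    # sparse matrix: documents x lexical features

print(features.shape)                         # (number of documents, number of extracted features)
print(vectorizer.get_feature_names_out()[:10])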
For Social Context: In addition to the features mentioned above, there are other features that can be derived from user-driven social interactions. The social features are extracted from the following:
User-based: Differentiating between a bot and a human account is essential to fake news detection since bots are mainly used to repeatedly post fake news to gain more views. Thus, capturing user-based features from user profiles and social interactions is mandatory. User-based features are further classified into two sets: individual features, e.g., number of followers/following and number of posts/tweets, and group-based features, which are captured by analyzing the group of users related to a single news article. A small sketch of deriving such features is shown below.
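The following sketch shows how simple individual user-based features could be derived with pandas. The table and column names here are purely illustrative assumptions, not the actual schema of any dataset used in this project.

# Minimal sketch: deriving individual user-based features from a hypothetical profile table.
# The column names are illustrative, not the real FakeNewsNet / Twitter schema.
import pandas as pd

users = pd.DataFrame({
    "user_id":   [1, 2, 3],
    "followers": [120, 40500, 8],
    "following": [300, 150, 950],
    "n_tweets":  [450, 12000, 3],
    "verified":  [False, True, False],
})

# Simple derived ratios that are often informative for bot/user profiling.
users["follower_following_ratio"] = users["followers"] / (users["following"] + 1)
users["tweets_per_follower"] = users["n_tweets"] / (users["followers"] + 1)

print(users[["user_id", "follower_following_ratio", "tweets_per_follower", "verified"]])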


Model Creation
Knowledge-based approaches aim to use external sources to fact-check proposed claims in news content. The goal of fact checking is to assign a truth value to a claim in a particular context. Existing knowledge-based approaches are as follows:
Expert-oriented fact checking relies heavily on human domain experts to investigate relevant data and documents in order to construct verdicts on claim veracity. The downside here is the amount of time it takes, which makes it unworkable at scale.
Crowdsourcing-oriented fact checking exploits the “wisdom of the crowd” to enable ordinary people to annotate news content; these annotations are then aggregated to produce an overall assessment of the news veracity.
Computational-oriented fact checking aims to provide an automatic, scalable system to classify true and false claims.
Style-based approaches try to detect fake news by capturing the manipulators in the writing style of news content. There are basically two typical categories of style-based methods:
Deception-oriented stylometric methods capture the deceptive statements or claims in news content. The motivation for deception detection originates from forensic psychology.
Objectivity-oriented approaches capture style signals that can indicate a decreased objectivity of news content and therefore the potential to mislead consumers, such as hyperpartisan styles and yellow journalism.
Stance-based approaches utilize users’ viewpoints from relevant post content to infer the veracity of the original news articles. The stance of users’ posts can be represented either explicitly or implicitly. Explicit stances are direct expressions of emotion or opinion, such as the “thumbs up” and “thumbs down” reactions expressed on Facebook. Implicit stances can be automatically extracted from social media posts. Stance detection is the task of automatically determining from a post whether the user is in favor of, neutral towards, or against some target entity, event, or idea.
Propagation-based approaches for fake news detection reason about the interrelations of relevant social media posts to predict news credibility. The basic assumption is that the credibility of a news event is highly related to the credibility of relevant social media posts. Both homogeneous and heterogeneous credibility networks can be built for the propagation process. Homogeneous credibility networks consist of a single type of entity, such as posts or events. Heterogeneous credibility networks involve different types of entities, such as posts, sub-events, and events.
Datasets
The following datasets are used in the reviewed research:
BuzzFeed News
LIAR
BS Detector
CREDBANK
Evaluation Metrics
Most existing approaches consider the fake news problem as a classification problem that predicts whether a news article is fake or not:
True Positive (TP)
True Negative (TN)
False Negative (FN)
False Positive (FP)
By formulating this as a classification problem, we can define the following metrics:
Precision = TP / (TP + FP)
Recall = TP / (TP + FN)
F1 = 2 × (Precision × Recall) / (Precision + Recall)
Accuracy = (TP + TN) / (TP + TN + FP + FN)
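As a quick illustration of how these metrics would be computed in practice (a sketch only, assuming Python with scikit-learn and hypothetical placeholder label arrays):

# Minimal sketch: computing the metrics above from true and predicted labels.
# y_true / y_pred are hypothetical placeholders (1 = fake, 0 = real).
from sklearn.metrics import precision_score, recall_score, f1_score, accuracy_score

y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

print("Precision:", precision_score(y_true, y_pred))
print("Recall:   ", recall_score(y_true, y_pred))
print("F1:       ", f1_score(y_true, y_pred))
print("Accuracy: ", accuracy_score(y_true, y_pred))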
Figure 3 – Comparison of datasets based on the type of features that can be extracted.
Fake News Detection on Social Media using Geometric Deep Learning
Introduction
In this paper, a novel automatic fake news detection model based on geometric deep learning is presented. The underlying core algorithms are a generalization of classical convolutional neural networks to graphs, allowing the fusion of heterogeneous data such as content, user profile and activity, social graph, and news propagation.
Methodology
This research proposes learning fake-news-specific propagation patterns by exploiting geometric deep learning, a novel class of deep learning methods designed to work on graph-structured data. Geometric deep learning naturally deals with heterogeneous data (such as user demographics and activity, social network structure, news propagation and content), thus carrying the potential of being a unifying framework for content-based, social-context-based, and propagation-based approaches.
The model proposed in the paper is trained in a supervised manner on a large set of annotated fake and true stories spread on Twitter in the period 2013-2018. Extensive testing of the model is performed in different settings, showing that it achieves high accuracy (nearly 93% ROC AUC), requires short news spread times (just a few hours of propagation), and performs well when the model is trained on data separated in time from the testing data.
Data
First, the overall list of fact-checking articles from such archives was gathered and, for simplicity, claims with ambiguous labels such as ‘mixed’ or ‘partially true/false’ were discarded. Second, for each of the extracted articles, potentially related URLs cited by the fact checkers were identified, filtering out all those not mentioned at least once on Twitter. Third, trained human annotators were used to determine whether the web pages associated with the collected URLs supported or denied the claim, or were simply unrelated to it. This gave a simple rule to propagate truth labels from fact-checking verdicts to URLs: if a URL supports a claim, it directly inherits the verdict; if it denies a claim, it inherits the opposite of the verdict (e.g., URLs supporting a true claim are labeled as true, URLs denying a true claim are labeled as false). The final part of the data collection process consisted of the retrieval of Twitter data related to the spread of news associated with a particular URL.
Features
The following features were extracted describing news, users, and their activity, grouped into four categories:
User profile (geo-localization and profile settings, language, word embeddings of the user profile self-description, date of account creation, and whether it has been verified)
User activity (number of favorites, lists, and statuses)
Network and spreading (social connections between the users, number of followers and friends, cascade spreading tree, retweet timestamps and source device, number of replies, quotes, favorites and retweets for the source tweet)
Content (word embeddings of the tweet textual content and included hashtags).
Model – Geometric Deep Learning
State-of-the-art deep learning methods rely on signal processing theory with the base assumption that the data is Euclidean in nature. Recently, deep learning for non-Euclidean data has started gaining traction. Geometric deep learning is the term used to describe these deep learning methods.
Graph CNNs replace the classical convolution operation on grids with a local, permutation-invariant aggregation over the neighborhood of a vertex in a graph. The authors used a four-layer Graph CNN with two convolutional layers (64-dimensional output feature maps in each) and two fully connected layers (producing 32- and 2-dimensional output features, respectively) to predict the fake/real class probabilities.
Figure 4 – Model Architecture (GC: Graph Convolution, MP: Mean Pooling, FC: Fully Connected, SM: SoftMax)
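To make the architecture above concrete, the sketch below shows a comparable network with two graph convolutions, mean pooling, and two fully connected layers. It is only an approximation under stated assumptions: it uses PyTorch Geometric's GCNConv as a stand-in for the paper's graph convolution operator and is not the authors' published code.

# Sketch of a GC -> GC -> MP -> FC -> FC/SM network similar to Figure 4.
# Assumes PyTorch and PyTorch Geometric; GCNConv is a stand-in for the paper's convolution.
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv, global_mean_pool

class PropagationGraphCNN(torch.nn.Module):
    def __init__(self, num_node_features):
        super().__init__()
        self.gc1 = GCNConv(num_node_features, 64)  # first graph convolution (64-dim output)
        self.gc2 = GCNConv(64, 64)                 # second graph convolution (64-dim output)
        self.fc1 = torch.nn.Linear(64, 32)         # fully connected layer (32-dim output)
        self.fc2 = torch.nn.Linear(32, 2)          # fake/real class scores

    def forward(self, x, edge_index, batch):
        x = F.relu(self.gc1(x, edge_index))
        x = F.relu(self.gc2(x, edge_index))
        x = global_mean_pool(x, batch)             # mean pooling over each propagation graph
        x = F.relu(self.fc1(x))
        return F.log_softmax(self.fc2(x), dim=-1)  # class log-probabilities (SoftMax layer)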
Results
Overall, the training, test, and validation sets contained 677, 226, and 226 URLs, respectively, with 83.26% true and 16.74% false labels (±0.06% and 0.15% for the training and validation/test sets, respectively). The ROC AUC is 92.70 ± 1.80%. Below is a graphical representation of the results of the experiment.
Figure 5 – Performance Metrics
Methodology
The different components that are needed to procedurally steer the project towards completion make up the methodology. Following this framework provides a set of rules that makes development more streamlined and focused on the main objectives.
This research will use Machine Learning to conduct experiments that demonstrate that automating the identification of genuine content is valuable. To accomplish this, the phases described below must be followed:
Experimentation
Machine Learning provides ample ground for experimenting with data. However, data is a basic requirement for any Machine Learning project. The dataset will supply all the data consumed by the Machine Learning algorithms.
Phase 1: Data
The dataset selected will be from an already existing project based on the detection of fake news from social media, i.e., FakeNewsNet. The dataset will be taken from its open-source GitHub repo, KaiDMML/FakeNewsNet. The description of the dataset is as follows:
The complete dataset cannot be distributed because of Twitter privacy policies and news publisher copyrights. Social engagements and user information are not disclosed because of Twitter policy. The code repository can be used to download news articles from the published websites and the relevant social media data from Twitter. The minimal version of the latest dataset provided in the repo (located in the dataset folder) includes the following files:
politifact_fake.csv – Samples related to fake news collected from PolitiFact
politifact_real.csv – Samples related to real news collected from PolitiFact
gossipcop_fake.csv – Samples related to fake news collected from GossipCop
gossipcop_real.csv – Samples related to real news collected from GossipCop
Each of the above CSV files is a comma-separated file with the following columns (a minimal loading sketch is given after the column list):
id – Unique identifier for each news
url – URL of the article from web that published that news
title – Title of the news article
tweet_ids – Tweet ids of tweets sharing the news. This field is a list of tweet ids separated by tabs.
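The sketch below shows one way the files above could be loaded and given binary labels. It assumes Python with pandas and local copies of the CSVs downloaded from the repository; the file paths and the derived label column are illustrative choices, not part of the dataset itself.

# Minimal sketch: loading the FakeNewsNet CSV files and attaching binary labels.
# Assumes the CSVs from KaiDMML/FakeNewsNet have been downloaded to the working directory.
import pandas as pd

fake = pd.read_csv("politifact_fake.csv")
real = pd.read_csv("politifact_real.csv")

fake["label"] = 1   # 1 = fake news
real["label"] = 0   # 0 = real news

data = pd.concat([fake, real], ignore_index=True)

# tweet_ids is a tab-separated list of tweet ids sharing the article.
data["tweet_ids"] = data["tweet_ids"].fillna("").str.split("\t")

print(data[["id", "title", "label"]].head())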
Phase 2: Algorithm
The nature of the problem defines the kind of paradigm that needs to be followed for the selection of algorithms. From the problem statement and project rationale, it can be determined that the paradigm to be used is classification. The algorithm used for this project will be logistic regression. Logistic regression is a simple linear model, usually trained with gradient descent, that specializes in binary classification. Binary classification is used for this project since we need to differentiate between fake and non-fake news, i.e., two output classes.
The reason behind the employment of logistic regression is the scope of the research and the usage of the algorithm by researchers in the reviewed literature. The scope of the project is defined by the paradigm followed, i.e., binary classification. Logistic regression employs the sigmoid function, σ(z) = 1 / (1 + e^(-z)), at its core, which specializes in binary classification. Although logistic regression is the current choice, there is a possibility of switching to neural networks, which use the same sigmoid at the core of each node. The reason for this change might be the large number of features with varying data types. Further research into this will shine light on the final method.
On observing the dataset, it can be seen that fake and real news tweets related to politics are available. The logistic regression algorithm will be trained on these tweets. The possibility of using a Multi-Layer Perceptron is also on the table; the final algorithm will be decided after further testing. The dataset will be split with a 70-30 ratio, and cross-validation will be used for performance measurement. Other performance metrics like confusion matrices will also be employed. A minimal end-to-end sketch of this phase is shown below.
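As an illustration of this phase, the sketch below uses TF-IDF features built from the article titles, a 70-30 train/test split, logistic regression, and a confusion matrix. It continues from the loading sketch in Phase 1 (the `data` DataFrame), and the choice of title text as the input feature is an assumption for illustration, not the final feature set.

# Minimal sketch: 70-30 split, TF-IDF features from article titles, logistic regression,
# and a confusion matrix. `data` is the labeled DataFrame built in the Phase 1 sketch.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix, classification_report

X_train, X_test, y_train, y_test = train_test_split(
    data["title"].fillna(""), data["label"], test_size=0.3, random_state=42)

vectorizer = TfidfVectorizer(stop_words="english")
X_train_vec = vectorizer.fit_transform(X_train)
X_test_vec = vectorizer.transform(X_test)

model = LogisticRegression(max_iter=1000)
model.fit(X_train_vec, y_train)

y_pred = model.predict(X_test_vec)
print(confusion_matrix(y_test, y_pred))
print(classification_report(y_test, y_pred))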
Conclusion
The ever-increasing popularity of social media is becoming a double-edged sword. The power of social media to make a post go viral all around the world effortlessly has driven people to post fake news and articles on social media to attract views and likes. This undermines the capability of social media as a reliable source of news. This research aims to counter this problem by using logistic regression (the final algorithm will be decided with further research and testing) to classify tweets into fake or non-fake news tweets. The algorithm will be trained to identify the difference between fake and real tweets and will then be evaluated using cross-validation and performance metrics such as confusion matrices.
References
Monti, F., Frasca, F., Eynard, D. and Mannion, D., n.d. Fake News Detection on Social Media using Geometric Deep Learning. ResearchGate.
Reis, J., Correia, A., Murai, F., Veloso, A., Benevenuto, F. and Cambria, E., 2019. Supervised Learning for Fake News Detection. IEEE Intelligent Systems, 34(2), pp.76-81.
Badieh Habib Morgan, M. and van Keulen, M., 2014. Information Extraction for Social Media. Proceedings of the Third Workshop on Semantic Web and Information Extraction.
Shu, K., Sliva, A., Wang, S., Tang, J. and Liu, H., 2017. Fake News Detection on Social Media: A Data Mining Perspective. ACM SIGKDD Explorations Newsletter, 19(1), pp.22-36.
