Many network scientists have investigated how to mitigate or remove false information propagated in social networks. False information falls into two main categories: disinformation, which is false information knowingly shared with malicious intent, and misinformation, which agents share unwittingly, without malicious intent. Many existing works mitigate or remove false information by selecting a set of seeding nodes (or agents) based on their network characteristics (e.g., centrality features). However, little work has focused on the role of uncertainty in the formation of agents' opinions. Uncertainty-aware agents can form different opinions and eventual beliefs about true or false information. In this work, we leverage an opinion model called Subjective Logic (SL), which explicitly deals with the level of uncertainty in an opinion: an opinion is defined as a combination of belief, disbelief, and uncertainty, where the level of uncertainty is easily interpreted as a person's confidence in a given belief or disbelief. However, SL considers only the uncertainty derived from a lack of information (i.e., ignorance), not from other causes such as conflicting evidence. In the era of Big Data, where we are flooded with information, conflicting information can increase uncertainty (or ambiguity) and have a greater effect on opinions than a lack of information (or ignorance). To enhance SL's capability to deal with ambiguity as well as ignorance, we propose an SL-based opinion model that includes a level of uncertainty derived from both causes. By developing a variant of the SIR (Susceptible-Infected-Recovered) model that changes an agent's status based on the state of its opinion, we capture the evolution of agents' opinions over time.
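The ideas above can be illustrated with a minimal sketch. It shows a standard SL binomial opinion built from evidence counts, a hypothetical conflict measure separating ambiguity-derived uncertainty from ignorance, and an illustrative mapping from an opinion to an SIR-like status; the function names, the conflict measure, and the threshold are assumptions for illustration, not the paper's exact formulation.

```python
# Illustrative sketch (not the paper's exact model): an SL binomial
# opinion from evidence, a hypothetical ambiguity (conflict) measure,
# and an assumed opinion-to-SIR-status mapping.

def sl_opinion(r, s, W=2.0):
    """Standard SL binomial opinion from r positive and s negative
    pieces of evidence; W is the non-informative prior weight.
    Returns (belief, disbelief, uncertainty), which sum to 1."""
    total = r + s + W
    b = r / total          # belief
    d = s / total          # disbelief
    u = W / total          # uncertainty from lack of evidence (ignorance)
    return b, d, u

def ambiguity(b, d):
    """Hypothetical conflict measure: maximal when belief and disbelief
    are equal (conflicting evidence), zero when one side dominates."""
    return 2.0 * min(b, d)

def status(b, d, u, threshold=0.5):
    """Assumed mapping from an opinion to an SIR-like agent status:
    highly uncertain agents stay Susceptible, believers of the
    propagated information are Infected, disbelievers Recovered."""
    if u >= threshold:
        return "S"
    return "I" if b > d else "R"

# No evidence: pure ignorance, agent remains susceptible.
print(status(*sl_opinion(0, 0)))        # "S"
# Strong one-sided evidence: agent believes the information.
print(status(*sl_opinion(8, 0)))        # "I"
# Equal conflicting evidence: ignorance is low but ambiguity is high.
b, d, u = sl_opinion(4, 4)
print(u, ambiguity(b, d))               # 0.2 0.8
```

Note the last case: with much but conflicting evidence, the ignorance component `u` shrinks while the conflict measure stays high, which is the distinction the proposed model is designed to capture.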
We present an analysis and discussion of critical changes under varying values of key design parameters, including the frequency ratio of true to false information propagation, the centrality metrics used to select seeding false informers and true informers, an opinion decay factor, the degree of agents' prior belief, and the percentage of true informers. We validated the proposed opinion model in both synthetic network environments and realistic network environments based on a real network topology, user behaviors, and the quality of news articles. The proposed opinion model and the corresponding strategies to deal with false information can be applied to combat the spread of fake news on various social media platforms (e.g., Facebook).