This is a topic which is still very much being debated. About a year ago, people used to draw a distinction between "fake news" (described as being completely removed from reality, with absolutely zero factual basis) and "biased news" (news which has an institutional bias; you can see a graph of bias in the American context here: Media Bias Chart, 3.1 Minor Updates Based on Constructive Feedback - ad fontes media).
Recently, a parliamentary committee in the UK actually recommended that news agencies abandon the term "fake news" and instead just call news "factually false" or "biased". They have also recommended that all government agencies stop using the term. The reason is that many anti-democratic forces around the world are co-opting the original meaning of the term to describe any media which is critical of them. If the word stops being used, news can go back to being either fact-based reporting or opinion reporting.
To your questions:
Not sure in what capacity you are wanting to stop the spread of factually false news (media, government, institution, personal), but it will depend on the platform on which it was originally spread. For Facebook, Twitter, and other social platforms, you can simply contact the platform. But if a far-left or far-right news organization is pushing a story that contains falsehoods, they may simply refuse to take it down. Other websites that are platforms for discussion are again more difficult. If people use the platform to say "John has three eyes on his face" when he in fact only has two, you could contact the publisher to discuss the falsehood, but they might just reply "oh, this is not a serious website, we are just having fun being silly about John's eyes". So it depends on the original platform.
From a policy perspective, the general consensus is that both the source and the platform need to be addressed. Most sources of "fake news" (in the original sense) are profiteers or ideologues, using it for profit or for propagandistic purposes. These people need to be identified and have their platform removed. I'm not sure whether any real "fake news" websites have actually been taken down by governments. In terms of addressing the platforms themselves, Facebook and Twitter are now going through a lot of (forced) self-regulation, but there is no truly comprehensive policy response anywhere in the world yet. Countries with stable and independent media environments are all taking the problem seriously. (Ironically, many countries actively support fake news - North Korea, for example.)
In terms of disproving fake news, you have to understand your audience: who are you wanting to show the truth to, and through what platform?
I will give you an interesting example from Cambodia. During the 2018 election, a paid election monitor from Romania came to Cambodia to give legitimacy to a completely ridiculous sham of an election. This monitor's face was put on local media, and many (understandably) frustrated opposition supporters were disgusted with the ethics of this "zombie election monitor". But one opposition supporter decided to create some fake news of his own. He put up a photo of the amoral election monitor next to a photo of ANOTHER Romanian who had been arrested in England a few weeks after the Cambodian election for a series of terrible crimes - drug dealing, rape, extortion, etc. The Cambodian opposition supporter claimed this was the SAME person. It clearly wasn't (different name, looked nothing alike, and they were in different countries around the time of the alleged offending). Yet the story that they were the same person was shared on Facebook TWO THOUSAND TIMES - an awful spread of fake news. Many people shared it without knowing it was fake. There were about 500 comments, and every 50 comments or so, someone would point out "hey, they are actually different people". A lot of sharing damage was done before Facebook finally realised it was fake news and took it down. Part of the problem is that Facebook has very few Cambodian-language MODERATORS, who are crucial for ensuring that information shared on FB is factual. The other problem is that even if Facebook had 1000 moderators working at once, it would still take a report from a user to flag the potential "fake news" content. It can take many, many shares before someone actually suspects it could be false.
So, in short, the problem is very complicated, and there is no consensus on the most effective and efficient way to deal with it. I hope this has given you some insight into the broader problem and the attempts currently afoot to address it.