Having a Chat With JShaw, the Founder of Permafacts
Last week, the Permafacts alpha became open to the general public. They are trying to tame a gargantuan beast: the truth itself. They are at the beginning of a long and tricky road, paved with subtle perils. Will they manage to create a protocol that can accurately weigh the truth while avoiding all the threats that lie ahead? Check out their alpha and the exclusive interview we had with Permafacts founder JShaw, and find out for yourself. Our subjective opinion is that they are on the right track, but you don’t have to believe us: just submit this assertion on their platform and see how it is opposed or supported.
Q: So, first things first: how did you end up searching for truth? Was there a defining moment that made you say, “this is wrong, I have to do something”, or was it more like a general need you saw in the market?
A: I hadn’t been interested in the news until I started noticing a pattern recently. There were outrageous headlines that didn’t really match the articles beneath them, and I was confused. I started seeing this more often. Eventually, I realised that the news isn’t just biased on both sides of the political spectrum; the industry distorts the truth. This pattern seems to be creating negative outcomes in the real world. Click culture is promoting this, and I believe there should be a new model for journalism. Permafacts is that model.
Q: Before getting into the actual mechanics of your solution: why Arweave?
A: Every piece of the application can be put on-chain with Arweave, and there are smart contracts built into the platform. So version 1 will take advantage of the built-in smart contract platform, and version 2 will use AI to generate on-chain metadata about the content on the platform. Ultimately, I also save a ton of money building with Arweave as infrastructure.
Q: Can you please describe the underlying features and mechanics of what Permafacts will be? Seeking and enforcing the truth is not an easy task…
A: Absolutely. Ultimately, it’s about letting the market decide and providing feedback based on that. Permafacts gives users a way to create assertions; think of something like a news article. That assertion is then published, and other users can use burnt AR to either support or oppose it. A user could also take positions in both support and opposition if they believe there will be a high volume of engagement. This data will be used to display categorised feeds of assertions, and there will be all sorts of metrics surrounding the assertions to help users make decisions.
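The mechanic JShaw describes, assertions backed by burnt-AR positions in two opposing pools, could be sketched roughly as follows. This is a hypothetical model for illustration only; the class, field names, and amounts are our assumptions, not Permafacts’ actual implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Assertion:
    """A published claim with two burnt-AR pools: support and opposition."""
    title: str
    support: float = 0.0      # total AR burnt in support
    opposition: float = 0.0   # total AR burnt in opposition
    positions: list = field(default_factory=list)

    def take_position(self, user: str, side: str, amount: float) -> None:
        # Burning AR is irreversible, so a position is a permanent record.
        if side == "support":
            self.support += amount
        elif side == "oppose":
            self.opposition += amount
        else:
            raise ValueError("side must be 'support' or 'oppose'")
        self.positions.append((user, side, amount))

    @property
    def volume(self) -> float:
        """Total engagement across both pools, one metric for feed ranking."""
        return self.support + self.opposition

a = Assertion("Protocol X is at risk of collapse")
a.take_position("alice", "support", 2.0)
a.take_position("bob", "oppose", 0.5)
a.take_position("alice", "oppose", 0.3)  # a user may hold both sides
print(a.support, a.opposition, a.volume)  # 2.0 0.8 2.8
```

Note how taking both sides, as mentioned in the answer, falls out naturally: positions are just records against either pool.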
Q: Let me be the devil’s advocate :) Let’s say we are close to the Terra-Luna collapse, and there are already articles arguing that the Luna protocol is faulty and presents great risks. However, the price is still immensely high, and the “lunatics” are defending their investments by downvoting those unfavorable articles and branding them as FUD. How can Permafacts overcome this potential scenario?
A: So there are a few things going on there.
- The protocol would make that costly, so the expense itself would disincentivise the investor from doing it.
- These actions create an opportunity for others to exit their position, so ultimately there’s a good chance of the market stabilising.
- The assertion would probably drop down the news feed for a bit; I’m interested in seeing that play out and how easy it is to detect. You would think a sudden increase in Support or Opposition could easily get shared, and the market could see an opportunity and capitalise on it.
Interesting data about which users have “asserted” well in the past and users who have taken good “positions” in the past could make it easier to understand which strategies are best as well.
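The point about a “sudden increase in Support or Opposition” being easy to detect could be illustrated with a simple heuristic. This is our own sketch of one possible detector, not anything described by the Permafacts protocol; the window size and threshold factor are arbitrary.

```python
def spike_detected(volumes, window=3, factor=3.0):
    """Flag a sudden jump in position volume: the latest interval's burnt AR
    exceeds `factor` times the average of the preceding `window` intervals."""
    if len(volumes) < window + 1:
        return False  # not enough history to establish a baseline
    baseline = sum(volumes[-window - 1:-1]) / window
    return baseline > 0 and volumes[-1] > factor * baseline

# Burnt-AR volume per interval on one assertion (illustrative numbers):
print(spike_detected([1.0, 1.2, 0.9, 10.0]))  # True: 10 > 3 x ~1.03
print(spike_detected([1.0, 1.2, 0.9, 2.0]))   # False: within normal range
```

A flagged spike would not prove manipulation; as JShaw notes, it simply surfaces an opportunity for the rest of the market to look closer and capitalise.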
Q: (I’m not yet out of the devil’s advocate role.) So, from what I understand, the machine will initially weigh the truth value of an article and set a “bar” that will translate into different costs for agreeing or disagreeing with the article for the human reader. In our case, the machine will consider those articles rather true, so to contest the articles’ validity, Luna supporters will have to pay a high price from the beginning. Am I correct?
A: Both support and opposition will start at the same place. Initially, that will probably be zero. Then there will be a pool for support and a pool for opposition. As more people take positions, either could take the lead. Volume, as well as the amount of support and opposition, will be metrics used to provide relevant content to readers in whichever category they’re interested in. Ultimately, the market will decide with its positions, and manipulation comes at a cost, a cost that gives other users both an incentive and an opportunity to correct the market.
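The metrics mentioned here, volume plus the balance of the two pools, could feed a ranking function along these lines. Again, this is a speculative sketch; the scoring and the “contested” ratio are our invention, used only to make the idea concrete.

```python
def rank_feed(assertions):
    """Order assertions by total burnt-AR volume, a proxy for relevance.
    Also report how contested each one is (closer to 0.5 = more disputed)."""
    def contested(a):
        total = a["support"] + a["oppose"]
        return min(a["support"], a["oppose"]) / total if total else 0.0
    ranked = sorted(assertions, key=lambda a: a["support"] + a["oppose"], reverse=True)
    return [(a["title"], a["support"] + a["oppose"], round(contested(a), 2))
            for a in ranked]

feed = [
    {"title": "A", "support": 10.0, "oppose": 9.0},   # high volume, hotly disputed
    {"title": "B", "support": 50.0, "oppose": 1.0},   # high volume, near-consensus
    {"title": "C", "support": 2.0, "oppose": 2.0},    # low volume, evenly split
]
print(rank_feed(feed))
# [('B', 51.0, 0.02), ('A', 19.0, 0.47), ('C', 4.0, 0.5)]
```

Separating volume (how much is at stake) from contestedness (how split the pools are) mirrors the distinction in the answer: both could drive what surfaces in a category feed.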
It’s interesting, though, because what if that backfires? What if you filter it by vouched users and people start to notice who’s doing it? Also, what if multiple people make similar assertions that rise up the feed? The cost of manipulating the feed gets even greater, and no one can go in and take things down. Manipulation comes at a cost, and that cost will be used to combat manipulation.
I am placing my bet that, over time, this could do a few things:
- Completely reframe the way people interpret the news. For example, rather than just clicking retweet or like because you want something to be true, you’d have to, at the very least, ask yourself what others would think. You may re-read the headline and ask yourself if it’s a bit too outrageous or “on the nose”.
- Tell us what people REALLY believe is relevant, because they’ve placed skin in the game with the position they took.
- Tell us, over time, which assertions are less risky to support. I say this because some assertions will be very relevant in the moment but will keep being shared even after evidence directly says otherwise. Evidence may even suggest the asserter knew. If it became relevant enough in the future that the initial asserter was lying, that would be a good signal for the market to correct the problem by taking an opposing position. The people in opposition would then be incentivised to make others aware of it. It will be very interesting to see whether relevant stories could remain relevant LONGER because of this mechanism, compared to the current 24-hour cycle that moves on to the next outrage topic the next day.
Q: In the case of point 1, “you’d have to ask yourself what others may think”: don’t you think that is cool, in that it makes people more aware of the potential general opinion, but at the same time a perfect mechanism for falling in line with the potentially “winning” narrative? In some cases, the truth is not at all a “trendy” asset.
A: Absolutely! I like how you said that. This is the beauty of the mechanism. My vision is to build a mechanism that could help incentivise people to move beyond that. People are held back from spreading the truth currently — and it’s gotten worse. If you could incentivise the first people to speak out about what they believe is true, you’re one step closer to the principle of social proof.
What if certain topics are held back because the cost of speaking up is too high — and what if that could be balanced out? A way to incentivise people to push back against a closing Overton window.
Q: In the case of point 2: “because they’ve placed skin in the game with the position they took” don’t you think this mechanism will tend to somehow exclude the “poor” from the proposed game of truth? We, as a society, have been trying for centuries to become more inclusive and to offer at least equal chances for access to certain amenities. How could one without money make their voice heard in a market where money talks?
A: Interesting question, really. I guess the problem I have been targeting is pretty narrowly focused at the moment, though honestly it touches a lot of things that need fixing. There’s a lot of censorship, and important stories are under-reported because of how the news works nowadays. Articles are suppressed from platforms. There are certain stories I cannot believe aren’t reported on, the Uyghurs being one example. If the news is reframed, it has the potential to change how people vote. That could change circumstances for the poor, which could grant them the means to have a larger voice!
But the journalists need a different model.
Q: Ok, I think my role as devil’s advocate can stop for now 🙂 Can Permafacts be used both for checking the integrity of individual news items and for checking the integrity of large data feeds? Arweave itself is, in a way, a huge aggregator of data. Can Permafacts be used in a more technical capacity, like organizing and creating accurate feeds for various industries from the content stored on the Blockweave?
Let’s say, bringing some order to the content and creating certain categories.
A: One of my favorite things about Arweave is the potential AI use cases that can be built on top of such a large dataset that you can rely on not to change. The initial thought that sparked Permafacts was about taking off-chain data, like news articles, and deploying them on-chain. From there, train AI to spot persuasion and propaganda techniques to give users a likelihood percentage that the person is potentially spreading a narrative. That will be version 2, providing even more data for users to make informed decisions. You could also use the economic data from past assertions to try and spot patterns for predicting future outcomes more accurately. At that point, you’re composing data points to curate info that’s better at providing useful metrics. As the application evolves, tagging strategies for composing better feeds will emerge from the data, and I think Arweave’s tagging system and the ease of querying data with GraphQL will make that evolution faster.
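The tagging-and-GraphQL workflow JShaw mentions is worth making concrete. Arweave gateways expose a GraphQL endpoint (e.g. `arweave.net/graphql`) that can filter transactions by tag. Below is a small helper that builds such a query; the `App-Name: Permafacts` tag is purely an illustrative assumption on our part, not a confirmed convention of the platform.

```python
import json

def build_tag_query(tag_name: str, values: list, first: int = 10) -> str:
    """Build a GraphQL query for an Arweave gateway that finds
    transactions carrying a given tag (name + allowed values)."""
    values_json = json.dumps(values)  # safely quote the value list
    return f'''
{{
  transactions(first: {first}, tags: [{{ name: "{tag_name}", values: {values_json} }}]) {{
    edges {{
      node {{
        id
        tags {{ name value }}
      }}
    }}
  }}
}}'''

# "App-Name"/"Permafacts" is a hypothetical tag used for illustration.
query = build_tag_query("App-Name", ["Permafacts"], first=5)
print(query)
# To run it: POST {"query": query} as JSON to https://arweave.net/graphql
```

Queries like this are how a categorised feed could be assembled directly from on-chain data, without any server of the application itself holding the content.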