Wednesday 6 September 2017

How to trust?

Trust is an enormously important part of security. How and why we trust things is the basis of security both in principle and in practice. Ultimately, we have to trust our password managers, for example, and - more importantly - the services we sign up for. I know that Amazon is using my data for dubious and potentially harmful purposes. I trust that my VPN provider is not.

That trust might well be misguided. It's based partly on reputation and partly on a vague belief that businesses probably won't harm their long-term future to make a fast buck. That is not a safe assumption, even in the VPN business. Thanks to the mistrust most citizens currently have for their governments, VPN companies have sprung up all over the place. Quite a lot of them are fraudulent, either not actually providing a proper VPN service or hoovering up and exploiting your data willy-nilly.

We live in an age where the US president has shown us that he can break the rules everyone thought were there to limit the power of presidents... and nobody will do a thing about it.

We live in an age where trusted retail pharmacists such as Boots in the UK sell quack medicines alongside actual medicines with no way to tell the two apart.

So trust is important, all the more so because it can be so easily subverted when people just don't play by the rules we expect them to play by. We need to establish better bases for trust, and one of those is reputation. We look at the reviews. We're all pretty savvy, right? We look at the negative reviews first and use them to establish a kind of context for the positive ones. If the negative reviews are all about late delivery, the occasional breakage, or a buyer who clearly ordered the wrong thing in the first place, we are more comfortable accepting the positive reviews.

We do that because glowing reviews are easy to manufacture and bad ones are harder to fake. And we do it because we tend to think we can recognise fake reviews of either type. We're probably kidding ourselves.

Cory Doctorow writes:
One of the reasons that online review sites still have some utility is that "crowdturfing" attacks (in which reviewers are paid to write convincing fake reviews to artificially raise or lower a business or product's ranking) are expensive to do well, and cheap attacks are pretty easy to spot and nuke. 
But in a new paper, a group of University of Chicago computer scientists show that they were able to train a Recurrent Neural Network (RNN) to write fake reviews that test subjects could not distinguish from real reviews -- and moreover, subjects were likely to rate these as "helpful" reviews.
The bad news is that neural nets can write convincing reviews. The good news is that other neural nets are even better at spotting fake reviews. The worse news is that those detectors need our metadata to do it, the most obvious example being our social graphs. Presumably anonymous reviews will do little to train these neural nets to recognise fakes, further reducing any basis we have for trust.
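To get a feel for why generating plausible review text is so cheap, here is a toy sketch. It is not the RNN from the Chicago paper - just a word-level Markov chain trained on a few invented review fragments - but it shows the basic trick all generative text models share: learn which words tend to follow which, then sample. All the corpus text and function names below are made up for illustration.

```python
import random
from collections import defaultdict

def train_markov(corpus, order=2):
    """Map each n-gram of words to the words observed to follow it."""
    model = defaultdict(list)
    for review in corpus:
        words = review.split()
        for i in range(len(words) - order):
            key = tuple(words[i:i + order])
            model[key].append(words[i + order])
    return model

def generate(model, order=2, length=12, seed=None):
    """Start from a random n-gram and repeatedly sample a plausible next word."""
    rng = random.Random(seed)
    out = list(rng.choice(list(model.keys())))
    for _ in range(length):
        choices = model.get(tuple(out[-order:]))
        if not choices:
            break
        out.append(rng.choice(choices))
    return " ".join(out)

# Tiny invented training set; a real attacker would scrape thousands of reviews.
corpus = [
    "great product fast shipping would buy again",
    "great product works as described would recommend",
    "fast shipping works as described five stars",
]
model = train_markov(corpus)
fake = generate(model, seed=1)
print(fake)
```

A Markov chain this small produces obviously stitched-together text; the point of the paper is that an RNN trained on a large corpus closes that quality gap far enough to fool human readers.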

Those of us who work to put metadata back into our own hands might be inadvertently harming everyone by making trust ever harder to come by.
