Monday 31 July 2017

How we screw friends, families and strangers by being careless

If you know me, you'll know that my eyes are constantly aching from rolling at the phrase "if you've nothing to hide, you've nothing to fear." One of the guiding principles and main motive forces of privacy work is that most people believe this. It's the fact that most people believe it that makes the accidental or deliberate shedding of personal data so valuable.

I won't go into the reasons the phrase is wrong here, largely because I have learned through experience that if I start, I'm unlikely ever to stop.  But I will go into some of the reasons why it's difficult to convince people that their privacy matters and then give some suggestions about how to do it.

It's easy enough to understand why people use that damnable phrase. I don't actually blame anyone for believing it, although my aching eye muscles could do with a break for at least a few minutes every day.  The scale and malevolence of how we're all being screwed by the people who have our data is - deliberately - largely hidden.  We're not told who our data will be sold to or what it will be used for. Fine print tells us that it "might" (or the even more insidious "may") be 'shared' with 'partners' but without any indication of exactly what data is being shared with whom or why.  We have a tendency to think that if people aren't telling us things in capital letters then the things are probably not very important.

It's also hard to connect the consequences of sharing data with any negative outcome. It's unlikely that we'll ever connect an instance of identity theft with the box we ticked on a website nine years ago, for example. Plus, of course, we often give away data to get cool stuff (10th sub free! apps that anticipate our needs etc) and we don't want to give that up, especially since we don't always understand why giving away that data might be bad. Nor should we, necessarily. The benefits might indeed outweigh the harm for some people in some cases. The problem is that we're not equipped to make that decision, because of the deliberate machinations of the companies who make money from our data and the complexity of the landscape.

So people like me need to come up with convincing examples. How can ticking this particular box harm you in the future? This is hard, not because examples don't exist but because they have to cover that distance in time and place between the ticking of the box and the stealing of the identity. They have to show that it's in the aggregation of data over a period of years that the greatest danger lies. We humans are not very good at internalising knowledge of that sort or at practising the regimen needed to do anything about it.

The examples I've had the most success with tend to be ones that show how poor privacy habits can screw our friends and family. I find this confusing - I hate my friends and family - but it seems to work for lots of people.

It's a very important point and one I harp on enough to contribute to the eye-rolling muscle strain of my friends and family (good - I told you I hate them): privacy is a group exercise. It would be good if we tried not to inadvertently screw each other the whole time through our own carelessness.

There are ways we can screw the people in our own networks through complacency and other ways we can screw complete strangers. Please don't take this as a manual of how to screw people, by the way; treat it instead as a way to be mindful of how our actions can harm others whether we mean to or not.

1. Harming friends
I frequently talk about the Amazon gift service because it is such a perfect example. You're an Amazon customer, you buy someone a gift to be sent directly to them. You've just given away an enormous amount of information about that other, innocent (well, not if they're one of my friends or family) person. Their address, their possible birth date or other significant date, the sort of things they like (or that you think they like) and so on. If Amazon already knows their address it can start building a social network of their friends and family and make inferences about them too. 

Why is this harmful rather than delightful? 

For one thing, your friends never asked you to hand Amazon their data. There might be all sorts of reasons they don't want that to happen. Even if (perhaps especially if) you don't know what reasons they might have for not wanting Amazon to have this data, you should at least ask them first and not press the issue if they say no. They might have things to fear regardless of whether they have anything to hide. And they might have things to hide.

Second, harm may come from a variety of sources, malignant, benign or indifferent.  Couples or families might be hurt if one member receives targeted adverts based on a gift. It's not hard to imagine how trouble might be caused if one member of a couple received the gift of a sex toy in the post. It might also be problematic if the adverts someone were served while browsing were informed by a gift, wanted or otherwise. Inducting someone into a social network operated in secret by people who wish to sell us things is not a kind thing to do.

Third, the companies who buy this data collate lists of people they deem 'vulnerable', by which they mean vulnerable to being sold things they don't want or need. The information you shed about them contributes to aggressive targeting and other borderline con-artistry as well as out-and-out conning by less scrupulous firms.

Fourth, this data will certainly be stolen at some point. Hackers will use this data to do bad things to our friends. They'll steal their identity, which is very much easier if they know trivial facts about people such as where they shop and eat. They'll create digests of information about certain types of people and sell them to bad guys who specialise in screwing that type of person. For example, helpful gifts might indicate that the recipient is elderly. An unscrupulous company might (rightly or wrongly) conclude that the elderly person is especially vulnerable and target them for scams that match the gifts they've received.

Fifth, spam. You're putting people on lists that are sold to spammers - email, real world, knocking on our doors - I don't think anyone wants that.

2. Screwing strangers
There's a very real sense in which customers are becoming less customers and more sheep to be shorn, bags of organs to be harvested. Our gleeful introduction of others into this practice completes the analogy.  We're all the Judas Goat for faceless corporations, dragging our friends into dangers they didn't sign up to.

But it's worse even than that because those companies are also screwing their own employees based on our privacy choices.

Here's one of the most obvious examples: you know when you visit a restaurant and they ask you to rate the service on a card or - increasingly - on a touch screen? What on Earth do you think that's for other than to generate an excuse to deprive servers of their tips? A simple scale of dissatisfaction isn't going to help the restaurant improve its business, is it? With the card-based version, companies might be angling to seem like they care about customers (while still changing nothing and punishing servers) but with the computerised version, we can be sure that servers will be screwed more. What kind of servers generate the most dissatisfaction? Can companies find ways to incorporate these results into their existing racist or sexist hiring and firing policies? Can they generate brand new racist or sexist policies?

Well of course they fucking can. And will.
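I have no inside knowledge of any chain's systems, and every name and number below is invented, but the data-crunching involved is trivial. A few lines of Python are enough to rank servers worst-first from the scores customers tap in:

```python
from collections import defaultdict

def rank_servers(ratings):
    """ratings: (server, score) pairs, where score is the 1-5
    satisfaction rating tapped into the touch screen.
    Returns servers sorted worst-first by average score."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for server, score in ratings:
        sums[server] += score
        counts[server] += 1
    # Lowest average satisfaction comes first
    return sorted(sums, key=lambda s: sums[s] / counts[s])

ratings = [("alice", 5), ("alice", 4), ("bob", 2), ("bob", 3), ("carol", 4)]
print(rank_servers(ratings))  # ['bob', 'carol', 'alice']
```

Join that output to a staff database and you have a firing list, with all the biases of the customers baked straight in.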

But look also at the wider picture. Much of restaurant technology is aimed at either getting people back out through the door as quickly as possible or selling them more stuff. To achieve this they (especially chains) do all kinds of worrying stuff. They greet you by name. They remind you of what you ordered last time you visited (even if it's a different location). 

The servers and lower and middle management are easy to punish if this does not go according to plan and customers sit around enjoying their meals instead of hurrying and/or ordering stuff they didn't want.

By gleefully shedding data we turn ourselves into sheep to be shorn. But we turn other people into sheep, too. And we turn the former farmers into serfs, serving at the whim of their owners to achieve goals unrelated to their jobs and subject to punishments that have nothing to do with how well they do them.

That's the harm. Don't make me roll my eyes.

A terrible idea

http://www.bbc.co.uk/news/av/technology-40676084/how-facial-recognition-could-replace-train-tickets

The URL says most of what you need to know.

Wednesday 26 July 2017

Tuesday 25 July 2017

Some solid advice

https://media.boingboing.net/wp-content/uploads/2017/07/upsstore_100729948_medium.jpg
Caesars Palace in Las Vegas is hosting this year's Defcon, a conference about hacking and security. There are good reasons to believe that scoundrels will be attempting to hack everything in sight and even better reasons to believe they have the skills to pull it off.

For this reason, the UPS business centre in the hotel has decided only to accept print jobs that come as an email attachment, not on a USB stick or via a link. This is a reasonable precaution and probably the best compromise they can make while still doing business. Email attachments aren't at all safe either, of course, but people will need to print stuff, I guess. In general, reducing the number of attack vectors is worthwhile but at a conference like this it might just goad people into getting creative...

Cory Doctorow reports at Boing Boing (from where I borrowed the photo for this post), also noting that Andy Thompson (aka @R41nM4kr) has offered a list of security essentials for attendees.  They are pretty sensible. I follow an almost identical list of rules whenever I am forced to leave the house.

Here's the part of Thompson's list concerned with internet access and connectivity:
  1. Unless absolutely necessary for a job function, disable WiFi.
  2. Disable Bluetooth on your computer and phone.
  3. Disable NFC connectivity on your phone and computer.
  4. If WiFi is absolutely required, ONLY use your own provided WiFi. I use a JetPack/MiFi and connect ONLY to that device.
  5. Always use a VPN as soon as you obtain WiFi access.
  6. Do NOT plug any network cable into the laptop.
  7. Do not plug any USB storage devices (hard drives, sticks, network adapters, Raspberry Pi’s, etc) into the laptop or phone. 
The importance of not connecting to public WiFi unless you really need to, and then only doing so over a VPN, cannot be overstated. I'd love to know more about the psychology behind our willingness to connect to random networks just because they happen to be there. We generally have no idea whether they are secure, whether they have been compromised or whether the operators have malicious intent. We don't even know if the network is legit: we tend to assume that if there's a WiFi signal with the same name as the venue, then it's operated by that venue.

It's frighteningly easy to intercept traffic on unencrypted wireless networks. It's almost as easy to write scripts to scan for things that might be passwords flying about the place.  So if you do need to use public or commercial WiFi, be sure to use a VPN.
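To give a flavour of how low the bar is: capturing frames on an open network takes nothing more than a wireless card in monitor mode and a tool like tcpdump, and once you have the raw bytes, spotting password-shaped fields in unencrypted HTTP traffic is a one-regex job. A sketch (the packet is invented; only ever try this on your own traffic):

```python
import re

# Password-like form fields in plaintext payloads. Unencrypted HTTP sends
# these in the clear, so anyone on the same open WiFi can read them.
CRED_RE = re.compile(rb"(?:passw(?:or)?d|pwd|token|secret)=([^&\s]+)", re.I)

def find_credentials(payload: bytes):
    """Return the values of any password-like fields in a captured payload."""
    return [m.decode(errors="replace") for m in CRED_RE.findall(payload)]

packet = b"POST /login HTTP/1.1\r\n\r\nuser=alice&password=hunter2"
print(find_credentials(packet))  # ['hunter2']
```

A VPN (or HTTPS) means the eavesdropper sees only ciphertext, which is the whole point.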

I use my phone as a mobile hotspot with a VPN rather than use other people's WiFi.  I only make an exception when there's no mobile signal. Something tells me this won't be a problem in Vegas.

My list, if I happen to be leaving the country (especially to the US) has some additions:
  1. Log out of social media, email and messaging accounts on your laptop and phone. Remove any cookies that store passwords.
  2. Use a hardware token (I use a Yubikey Neo) to protect access to your password manager (you're using a password manager, right?)
  3. Send the hardware token in your checked luggage, don't carry it with you.
That way, nobody can force you to reveal your passwords. Of course, they might refuse you entry to the country and it will be quite inconvenient when your luggage is inevitably lost, but if these prices seem like they are worth paying, go for it. Also, you'll feel kind of like a spy.
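Incidentally, the one-time-password scheme that tokens like the YubiKey Neo can speak (OATH HOTP, RFC 4226) is simple enough to sketch with nothing but the Python standard library. The token keeps the secret and a counter in tamper-resistant hardware; the server runs the same sum:

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HMAC-based one-time password (RFC 4226)."""
    # HMAC-SHA1 over the big-endian 8-byte counter
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                        # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# First test vector from RFC 4226, Appendix D
print(hotp(b"12345678901234567890", 0))  # 755224
```

The security comes entirely from the secret never leaving the hardware, which is why posting the token in your checked luggage works: there's nothing on your person to seize.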

Monday 24 July 2017

Age verification

The UK government is threatening to implement age verification on porn sites because won't someone think of the children. This means that porn site users will have to prove they are 18 before they can feast upon the porn within.

I have to admit, I have some concerns about porn which can be summarised as:
  1. Lots of performers (especially women) are hurt by the porn industry. There are questions of whether consent is really possible when one's income relies on saying yes. Sex work is not necessarily just another job and there are certainly porn companies that take advantage of performers and their plight, if they have one. I have nothing at all against consensual performance and am entirely in favour of sex workers being allowed to work without criticism or harassment. But we usually have no way of telling what pressures the performers face and therefore what consent, if any, they are really capable of giving. I think we - as consumers of porn - need to be very careful.
  2. The messages children are likely to glean from porn are not positive. They could be, I reckon. Hangups about sex and sexuality from previous generations and religious nonsense are terrible things and being positive and cool and non-judgmental about sex and sexuality is surely good. But it's clear that the vast majority of porn doesn't encapsulate great messages about agency and consent and equality. If a child's introduction to sex is mainstream porn, it seems likely that they'll have fucked-up ideas about how to treat other people, especially women. I would rather they learned sex-positive lessons from places other than porn.
The second item is most germane to the government's goal of age verification of porn sites but there are some problems. I'll stick with two:
  1. Literally everyone on the planet knows it won't work. It's the equivalent of - in the 70s and 80s - putting porn on the top shelves of newsagents, where children supposedly couldn't reach with their short arms. It's like children buying booze and tobacco by the very easy means of asking someone older. Refusing to sell young people cigarette papers probably won't pose an insurmountable barrier to their smoking a bit of whatever takes their fancy.
  2. Putting age verification in the hands of the people who sell the porn is open season for blackmail.
And this is the thing. Make up your own mind about porn: it's not illegal to make it (for the most part) or consume it (usually) in the UK. But if we have to register our consumption of porn, we're at the mercy of laws that will certainly change for the worse. 

Being a registered porn consumer will automatically put you in the frame for sex crimes, for example, regardless of any other suspicion. The register of casual porn users will become a list of automatic suspects. 

And porn companies, who have our credit card details, would be in an excellent position to threaten us, fake our browsing or chat behaviour or otherwise fuck us over.

And of course that's all before worrying about how the whole registration and access business might work, which is nightmarish in itself just from an engineering perspective.

TL;DR: It's complicated. Age verification won't protect anyone and it'll certainly expose people who haven't done anything wrong to undue and improper scrutiny.

And above all, it won't protect the people who need the most protection: the performers.


Wednesday 19 July 2017

Evil Thoughts: be the fox in your own hen house

https://s-media-cache-ak0.pinimg.com/736x/c2/e1/99/c2e199a8d5bc3ac170e6c0455788e9ff.jpg
A lot of people feel that we only really wake up to security when we’re stung by an attack. I’m not sure this is true. For example, we might learn less about security when our house is burgled than we do when we lock ourselves out. We always manage to get back in eventually, after all. We might find inventive ways to gain entry or call a locksmith who will have the door open in about five seconds. Either way, we learn something about our house’s vulnerabilities and how secure it really is.

We might remember that one slightly dodgy window latch we’ve been meaning to fix and wonder if we might be able to wiggle it open from outside. We might use an improvised device to see if we can open a door from the inside through the letterbox. We wonder whether we could use that rock in the garden to smash a window. We worry about setting off the alarm, but then remember that nobody takes any notice of alarms anyway.

Whatever – and regardless of whether we succeed – we’ve suddenly thought a lot more about home security than we ever have before. In contrast, when we’re burgled we tend to assume that the burglars have secret knowledge or skills because, well, that’s what burglars do. We expect burglars to be able to gain entry if they try long and hard enough, but we assume this is because of their ninja skills, not because our houses are all fundamentally insecure.

It’s only when we try to break in ourselves that we realise the truth.

This is why penetration testing (aka pen testing) exists. A pen test is an authorised attack on a system designed to expose its vulnerabilities so that they can be fixed. It’s the equivalent of the desperate householder trying to break in to their own home. There are many pen testing specialists out there and the field seems to be growing. This is because to take security seriously, you must see the system from the outside, and tech companies are increasingly recognising this.

This is also true of our own personal systems: our networks of computers, tablets, phones, ebook readers, digital assistants, smart devices, connected lightbulbs, software, services (such as social networking, online purchasing etc) and – importantly – our friends and family. We need to think about those things as if we were trying to gain illicit access to our own stuff if we are to protect our privacy and safety.

A trivial example: we might not feel a need to lock our computers when we leave the house, because the house itself is locked and anyway, it’s annoying to have to type in passwords every time the screen locks. But we’ve just seen how easy it is to break into a house. It’s not unreasonable to expect that – increasingly – burglars will enter our homes to gain access to our devices for the information they contain as much as for the value of the hardware. Leaving aside for now the standard (and incorrect) defence that “there’s nothing interesting on my devices anyway” (which I’ll talk about a lot more in weeks to come), our devices are very useful to people with ill intent. They might not have any particular grudge against us, but might use the data on our devices to steal our identities, creating new credit accounts in our name, draining them and saddling us with the debt and the damage to our credit ratings.

We need to think about the things a bad guy might do if they had physical access to our devices and implement safeguards which will stop them doing harm or at least make it too difficult to bother. We need to think like the burglar rather than like the complacent homeowner.

A more complicated example: a security setup is only as good as its weakest link. Sometimes the weakest link is a person or our relationship with that person. Our friends and family might be leaking information about us that could be useful to an attacker. Which means, of course, that we are probably doing the same to them. Here is one way we can weaken other people’s security without necessarily knowing it:

When we use Amazon to buy a gift for someone (to be sent to them directly), we’re telling Amazon an awful lot about that person. We’re telling Amazon that they are associated with us in some way, that perhaps it is their birthday or anniversary, the kind of things they like (or at least the things we think they like) and so on. If our friend also has an Amazon account – which is very likely – then Amazon will know even more. It will know about the people they buy gifts for, the other people we buy gifts for and might be able to track which of these other people also buy gifts for each other. They’ll be able to infer how good we all are at gift buying, based on the differences between what we buy for other people and what they buy for themselves. They can infer the strength or quality of relationships based on the money we all spend on each other and even on how late we leave it before ordering something, whether we look at their wish lists and so on. We’ve given away a lot of potentially exploitable information about people who didn’t give us permission to do so and probably don’t know that it has happened. And chances are they’re doing the same to us.
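To make this concrete: I have no idea what Amazon's systems actually look like, but here (with every name and record invented) is how trivially any retailer could turn gift orders into a social graph:

```python
from collections import defaultdict

def build_gift_graph(orders):
    """orders: (buyer, recipient, item, date) records - roughly what a
    retailer sees for every 'send as a gift' purchase.
    Returns an adjacency map of inferred relationships."""
    graph = defaultdict(set)
    for buyer, recipient, _item, _date in orders:
        # A gift implies a relationship in both directions
        graph[buyer].add(recipient)
        graph[recipient].add(buyer)
    return graph

orders = [
    ("alice", "bob", "flowers", "2017-02-14"),
    ("bob", "alice", "chocolates", "2017-02-14"),
    ("alice", "carol", "mug", "2017-07-01"),
]
graph = build_gift_graph(orders)
print(sorted(graph["alice"]))  # ['bob', 'carol']
```

Two mutual gifts on the same date already suggest a couple and an anniversary; and note that carol never bought anything, yet she's in the graph anyway.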

All this information could be available to criminals whenever Amazon is hacked, which will certainly happen quite often.

This is why we need to think like burglars rather than householders. We need to act like we’re locked out and have to find interesting ways to get back in through improvised means. We need to be the fox in our own hen-houses.

But while I think this is sound advice, it isn’t very practical yet. I’ll get around to more practical advice in the coming weeks. In the meantime, here is an example to get you thinking about the criminal mindset you’ll need to keep you and your friends safe.

When you last changed a password because you forgot the old one, did you do something like open a new message in your email client to temporarily store it before you could memorise it or store it somewhere more secure (I’ve seen people do this)? Do you know whether the email client saved that message as a draft? Draft emails are often a rich source of useful information, partly because we all tend to forget they exist.

Be sneaky! Tell me about your sneaky ideas in the comments.

Tuesday 18 July 2017

DRM needs to protect people other than the rights holders

I'm all for people being able to protect the content they've created from being abused, but DRM (Digital Rights Management) is frequently used for less noble purposes. I'll go into this in this week's Wednesday post.

The World Wide Web Consortium (W3C) and particularly its director Tim Berners-Lee (yes, that Tim Berners-Lee) recently decided to ignore numerous objections by W3C members and the internet-using public in general to go ahead with its plan to incorporate DRM into the web's body of standards.

There are numerous problems, which I'll talk about tomorrow. For now, read the text of the EFF's appeal against the decision here to get a sense of who and what we're fighting.

Creators deserve protection but publishers shouldn't get to decide how consumers use the content they've bought, how researchers investigate the security of DRM systems or which innovations are allowed to succeed. This is the battleground. I'll write more about it tomorrow.

Breaking encryption

Breaking encryption is a really bad idea. There's no such thing as a back door that 'good' people (such as governments) can use and bad people such as criminals cannot. That hasn't stopped virtually every government from pledging to force technology companies to implement encryption back doors in the false name of security against terrorist attacks. This won't work because terrorists do not have much incentive to obey the law. It rather reminds me of the little green visa forms you had to fill in when flying to the US. You had to tick a box to say you hadn't committed any genocides, as though lying on the visa form were the greater offence.

Australia's government is the latest to adopt this pre-beaten dead horse of a stupid idea. They're copying the UK, which makes me feel guilty. I feel I must apologise for the conduct of our nation. Sorry, Australia.

The article I quoted goes over the usual stuff but I found the following amusing (emphasis mine):
But some experts, as well as Facebook, warned that weakening end-to-end encryption services so that police could eavesdrop would leave communications vulnerable to hackers.
The quote from Australian Prime Minister Malcolm Turnbull is exactly as terrifying as it is hilarious:
The laws of mathematics are very commendable but the only law that applies in Australia is the law of Australia.
I look forward to the anti-gravity bill. 

Relaunch

https://www.etsy.com/uk/listing/189665986/gothic-art-tattooed-tattoo-evil
Absolutely not what this blog was named after.
Welcome to the relaunched evilwednesday. There will be a few changes around here.

The biggest change is that I'll be posting more often and more briefly. I'll try to limit myself to a few sentences on each post unless I don't. It is my blog :) I'll also write some commentary every Wednesday about various topical things. That's generally where I'll be more expansive.

As always the topics will relate to privacy and open rights. Other topics I'm interested in, such as human rights, social justice, atheism, skepticism and cats, will appear on another blog (URL to follow when I've decided where to put it) and those posts will be cross-linked here without comment so you can more easily ignore them.

Finally, I'll be trying to publicise this content more widely and generate some interest in privacy/open rights activism.

If you have anything to contribute, the comments are your playground.