Facial Recognition - can of worms

I hope you are right, but there are an awful lot of suppositions, "I think"s, "I doubt"s etc. contained within your post, Andy, and we can never tell who will be in power in 10 or 20 years and how they might maliciously use technology that was created with the best of intentions.

I am firmly against the whole 'if you have nothing to hide you have nothing to worry about' shtick when it comes to civil liberties.
We don't know who will be in power, but we know there are enough Tories who won't want to be watched, and Labour probably wouldn't use it for "dodgy" reasons. I just don't think it would ever get out of hand; the people don't want that, and most of the MPs won't want that. It would likely be used for things most approve of, and not for things most disapprove of.

I think the UK will gradually shift left over time, so this would end up being used more for protection rather than restriction or dodgy purposes.
 
Study finds gender and skin-type bias in commercial artificial-intelligence systems
Examination of facial-analysis software shows error rate of 0.8 percent for light-skinned men, 34.7 percent for dark-skinned women.

Massachusetts Institute of Technology study
 
I used to be involved in data analytics, first building credit-scoring models for major banks and then doing marketing analytics for charities. You would be amazed if you saw and understood the amount of data that is held and used about you and your postcode. This has risen massively over the last ten years with the rise of social media.
In my time, the data protected people from making the wrong financial decisions, which could have consequences for the organisation providing the lending. If a customer really wished to apply for credit that was unaffordable, they could use another institution. However, we all remember the credit crunch of 2008.
I have no problem with my data being used to screen and filter out undesirables, so long as there is an opportunity to challenge and appeal in case of error. No model is 100% reliable; a good result would be 80%.
Equally, I have no problem being tracked in terms of my movements. If this could be used to identify criminal activity, that would be a good outcome; again, a process to challenge and override is needed.
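The "80% reliable" point has a concrete consequence worth spelling out: when the behaviour being screened for is rare, even a decent model flags far more innocent people than guilty ones, which is exactly why a challenge-and-appeal route matters. A rough sketch, with made-up numbers:

```python
# Hypothetical figures: how an 80%-accurate screening model behaves
# when the flagged behaviour is rare. All numbers are illustrative.

population = 100_000      # people screened
prevalence = 0.01         # 1% are genuine targets (assumed)
sensitivity = 0.80        # model catches 80% of true cases
specificity = 0.80        # model clears 80% of innocent people

true_cases = population * prevalence
innocent = population - true_cases

true_flags = true_cases * sensitivity        # correctly flagged
false_flags = innocent * (1 - specificity)   # innocent but flagged

precision = true_flags / (true_flags + false_flags)
print(f"flagged in error: {false_flags:.0f}")                 # 19800
print(f"chance a flagged person is a true case: {precision:.1%}")  # 3.9%
```

So an "80% good" model here flags nearly 20,000 innocent people to catch 800 real cases, which is the argument for an appeal process in one calculation.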
Data is here to stay, and I doubt anything can be done to stop it going forward. Indeed, I would rather be monitored by data than by a human trying to make a subjective decision, which has been proven to be very unreliable.
 
What if you are wearing a clown's nose or false big ears or sunglasses.........🙄

Then they know you are a cabinet member


I hope you are right, but there are an awful lot of suppositions, "I think"s, "I doubt"s etc. contained within your post, Andy, and we can never tell who will be in power in 10 or 20 years and how they might maliciously use technology that was created with the best of intentions.

I am firmly against the whole 'if you have nothing to hide you have nothing to worry about' shtick when it comes to civil liberties.

The 'if you have nothing to hide don't worry' narrative is ok for some.

You only have to look at stop and search to see that certain communities are targeted more than others. Innocent individuals targeted, mainly on skin colour, time and time again.
 
Legislation around proper use is essential.
One day its use will avert a disaster such as a terrorist attack, assuming it hasn't already.
Personally I'd like to know if a terrorist enters a venue my family are in. And I don't mind the authorities (whoever they are) knowing my movements.
 
I'm generally OK with it on private property where I can effectively opt out. However, I think it should be subject to GDPR 'no sharing' type restrictions. I'm strongly opposed to it in public spaces and I'm also opposed to data sharing. It's a slippery slope to a China-style social credit system.
 
For football it doesn't need authorisation. East Ham Council used its own database of known troublemakers. Other clubs could do the same.

It's private property and entry can be refused for any reason.

The issue comes when the technology is used beyond the scope of recognition and moves into assumptions about uncommitted acts that are acted upon in a public space.
If it's a public body, authorisation should be sought; that's what the law says. For private use it's not required.
 
Study finds gender and skin-type bias in commercial artificial-intelligence systems
Examination of facial-analysis software shows error rate of 0.8 percent for light-skinned men, 34.7 percent for dark-skinned women.

Massachusetts Institute of Technology study
That's not really bias, it's just physics.

Lighter tones reflect more light, so it's easier to make out detail, while darker tones absorb it. It's the same as any other photography (which is basically what this relies on): light is the photographer's friend. Sure, you can bring up the exposure to brighten anything and make it easier to see, but the darker it is, the more noise (error) you're going to get.

To get something which works effectively in lower light, and which is fast enough not to blur, requires an extremely expensive lens.
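The noise point can be made concrete. Photon (shot) noise grows as the square root of the light collected, so a pixel that receives less light has a worse signal-to-noise ratio, and brightening it in software scales the noise by exactly the same factor. A toy calculation, with illustrative numbers:

```python
import math

# Toy model of why brightening a dark image doesn't recover detail:
# shot noise scales as sqrt(signal), so the signal-to-noise ratio
# falls as less light reaches the sensor. Numbers are illustrative.

def snr(photons, read_noise=5.0):
    """SNR for a pixel collecting `photons`, with fixed sensor read noise."""
    shot_noise = math.sqrt(photons)
    total_noise = math.sqrt(shot_noise**2 + read_noise**2)
    return photons / total_noise

bright = snr(10_000)   # well-lit subject reflecting plenty of light
dark = snr(400)        # same scene, far fewer reflected photons

print(f"bright pixel SNR: {bright:.1f}")   # ~99.9
print(f"dark pixel SNR:   {dark:.1f}")     # ~19.4
# Raising exposure in software multiplies signal and noise by the
# same factor, so the SNR of the dark pixel does not improve.
```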
 
To get something which works effectively in lower light, and which is fast enough not to blur, requires an extremely expensive lens.
So does that mean it can be done, just that it costs more?
 
I'd normally not be bothered about this stuff, but given that we're in a country now where people can be arrested for holding up a blank piece of paper or simply shouting disapproval at the monarchy then i'd be pretty cautious about the motive for implementation at large public events.
 
To get something which works effectively in lower light, and which is fast enough not to blur, requires an extremely expensive lens.
So does that mean it can be done, just that it costs more?
It can be made better, but not fully fixed; it's constrained by physics. The cost/benefit with photography gear isn't great: small changes cost a lot of money. I've not bought a new lens or camera in a few years, but they hadn't really moved on a great deal in the 10 years when I was doing more of that.

A half-decent lens which works in OK light might be £500-£1,000 if it can zoom, but one which does the same with less light might be £3-5k, and that's for dedicated lenses on real DSLR cameras. Real cameras also have large sensors ready to take that light in, which is also very important.

Prime lenses are much cheaper, but you can't zoom with them, which won't be great with moving targets. They would work where people stop to get a picture taken, but those areas would be well lit anyway. Basically like a photo booth.

Most CCTV or facial recognition kit will be about the same quality as a phone lens/sensor, probably even worse, and they're nowhere near as good as a real camera, especially in low light; the sensor just isn't big enough. A good sensor and a good lens alone is probably £3-5k, never mind the other CCTV aspects. If it's captured from video it's going to be worse too, as they're probably shooting at 30-60 frames a second, which doesn't leave the shutter open long enough to let much light in.

Increasing the megapixels can help, but if you've not fixed the other constraints, all you're getting is more pixels of noise. This is why phone companies advertise higher megapixel counts: it creates the illusion that you're getting something a lot better, but you really aren't; how they handle light (aperture, sensor size etc.) is more important.
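To put rough numbers on the lens point: the light a camera gathers per frame is proportional to the aperture area times the shutter time, so big glass beats more megapixels. A back-of-envelope comparison (the focal lengths, f-numbers and shutter speed are assumed for illustration, and sensor size is ignored for simplicity):

```python
import math

# Light gathered per frame is proportional to aperture area x shutter
# time. Focal lengths and f-numbers below are illustrative only.

def light_per_frame(focal_mm, f_number, shutter_s):
    aperture_diameter = focal_mm / f_number      # in mm
    aperture_area = math.pi * (aperture_diameter / 2) ** 2
    return aperture_area * shutter_s

# Small CCTV-style lens, video locked to 1/60 s per frame
cctv = light_per_frame(focal_mm=4, f_number=2.0, shutter_s=1/60)

# Fast telephoto on a DSLR, same shutter for a fair comparison
dslr = light_per_frame(focal_mm=70, f_number=2.8, shutter_s=1/60)

print(f"DSLR gathers ~{dslr / cctv:.0f}x the light per frame")  # ~156x
```

That factor is before you account for the larger sensor, which widens the gap further.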
 
I am not sure that is the problem. Darker skin tones hide contours; it's essentially why black people tend to look younger, as the lines are not so obvious. Most facial recognition works primarily on face geometry: it has to pick up features such as the eyes, nose and mouth. As an interesting aside, facial recognition can match a child's face to the adult face, because the first thing it does is normalise the size.

Perhaps it is exactly the same thing you mean.
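The "normalise the size first" idea can be sketched in a few lines: scale a set of facial landmarks by the distance between the eyes, then compare the layouts, so the same geometry matches at any photographed size. The landmark coordinates here are invented for illustration:

```python
import math

# Minimal sketch of geometry-based matching: scale (x, y) landmarks so
# the interocular distance is 1, then compare the normalised layouts.
# Landmark positions are invented for illustration.

def normalise(landmarks):
    """Scale landmark coordinates by the distance between the eyes."""
    scale = math.dist(landmarks["left_eye"], landmarks["right_eye"])
    return {name: (x / scale, y / scale) for name, (x, y) in landmarks.items()}

def geometry_distance(a, b):
    """Mean distance between corresponding normalised landmarks."""
    na, nb = normalise(a), normalise(b)
    return sum(math.dist(na[k], nb[k]) for k in na) / len(na)

child = {"left_eye": (30, 40), "right_eye": (60, 40),
         "nose": (45, 60), "mouth": (45, 75)}
# Same face geometry, photographed 2.5x larger (closer up, or grown)
adult = {k: (x * 2.5, y * 2.5) for k, (x, y) in child.items()}

print(f"distance after normalising: {geometry_distance(child, adult):.3f}")  # 0.000
```

Real systems do far more than this (rotation, pose, ageing models), but the scaling step is the part that makes face size irrelevant.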
 
I am not sure that is the problem. Darker skin tones hide contours; it's essentially why black people tend to look younger, as the lines are not so obvious. Most facial recognition works primarily on face geometry: it has to pick up features such as the eyes, nose and mouth. As an interesting aside, facial recognition can match a child's face to the adult face, because the first thing it does is normalise the size.

Perhaps it is exactly the same thing you mean.

It's definitely a problem, it can't not be, but yeah the problem you describe is effectively the same thing.

It's hard for a human to see the detail in anything darker, and you basically don't see as much of the visible wrinkles or face contours/geometry because the colours don't let you. An AI system is looking for the same things: colour contrasts, highlights, shade etc., and the geometry is figured out from that.

It works the same as trying to identify a white person in sunlight, shade, heavy shade and then darkness: it gets harder the darker you go. Skin tones replicate the same thing in a basic sense.

The child-to-adult thing is fairly simple for AI, I expect. It's basically just scaling up the geometry and then adjusting with an ageing algorithm/filter, which has probably been figured out by analysing hundreds of thousands of known child-to-adult pictures of the same people. Of course, the best chance of a good result is an excellent starting picture of the child and knowing their age; other things like weight and height might help too. If you know the adult's age, weight and height, it can scale accordingly. For the parts you don't know, it will just use assumed parameters and give a range.
 
I hope you are right, but there are an awful lot of suppositions, "I think"s, "I doubt"s etc. contained within your post, Andy, and we can never tell who will be in power in 10 or 20 years and how they might maliciously use technology that was created with the best of intentions.

I am firmly against the whole 'if you have nothing to hide you have nothing to worry about' shtick when it comes to civil liberties.
I have nothing to hide, but I have still lost the freedom to go about my life unwatched and unmonitored. A liberty, infringed.
 