Facebook has come under fire in the UK from the Child Exploitation and Online Protection Centre (CEOP) for being unwilling to add a ‘panic’ button to its service.
CEOP’s thinking is that such a button would deter pedophiles and other criminal behavior. Citing more than 297 cases where UK users reported abuse to Facebook, received no response, and then turned to CEOP, Jim Gamble, head of the agency, said its investigations found sexual grooming, bullying, and hacking, and that some suspects have been arrested.
While CEOP has identified a real problem, its solution is flawed.
In this particular case I agree with Facebook. Placing a ‘panic’ button that routes abuse reports directly to CEOP is a poor choice for three reasons:
- Little of what is reported will actually be a police matter, and handing that information to the police is highly inappropriate – it’s none of their business. You don’t want your child investigated by the police simply because someone chooses to report them; that’s a form of bullying in itself.
- It is a poor use of police time to sift through a flood of ‘panic reports’ that don’t merit their attention. Shifting a burden of responsibility and cost that properly belongs to the company onto the public is a misuse of tax funds.
- A ‘panic’ button circumvents the service’s own abuse-reporting system, which means the service cannot effectively track what’s happening on its platform or determine how to take remedial action – including blocking users.
The real solution is for users, parents of users, and governments to hold service providers accountable for providing strong abuse detection capabilities, combined with appropriate real-time response by moderators to any abuse that’s reported.
THIS is where Facebook and other companies providing online services to consumers need to step up.