Techdirt Daily Newsletter for Thursday, 8 October, 2020

 
From: "Techdirt Daily Newsletter" <newsletters@techdirt.com>
Subject: Techdirt Daily Newsletter for Thursday, 8 October, 2020
Date: October 8th, 2020

Are you interested in receiving a shorter, easy-to-scan email of post excerpts? Check out our new

Techdirt Daily Newsbrief


Stories from Wednesday, October 7th, 2020

 

16K COVID-19 Cases Go Missing In UK Due To Government's Use Of Excel CSVs For Tracking

from the excel? dept

by Timothy Geigner - October 7th @ 7:46pm

Yes, yes, you're sick of hearing about COVID-19. Me too. But the dominant force of 2020 continues to provide news, often with a technology focus. This mismanaged pandemic has already given us an explosion of esports, students gaming remote learning systems, and enough dystopia to make George Orwell vomit in his grave.

But to really get your anger bubbles gurgling, you need only turn to the myriad of ways far too many governments have taken a moment that requires real leadership and forethought, and pissed it all down their legs. America appears to be trying to lead the charge in this, with our shining city on the hill mostly being illuminated by headlights of cars carrying sick passengers looking to get tested for this disease. Still, we're not alone when it comes to sheer asshattery. The UK recently managed to lose thousands of COVID-19 cases... because it was tracking them in Excel CSVs.

The issue was caused by the way the agency brought together logs produced by commercial firms paid to analyse swab tests of the public, to discover who has the virus. They filed their results in the form of text-based lists - known as CSV files - without issue.

PHE had set up an automatic process to pull this data together into Excel templates so that it could then be uploaded to a central system and made available to the NHS Test and Trace team, as well as other government computer dashboards.

Public Health England (PHE) decided to put all of this information into a file using the XLS format. XLS was first introduced in 1987 and was replaced by the XLSX format over a decade ago. Putting aside for a moment the use of Excel to monitor positive COVID-19 cases in a major industrialized nation, the antiquated format alone cost PHE over sixteen thousand positive cases.

How? Well, the XLS format imposes a hard cap of 65,536 rows per worksheet.

As a consequence, each template could handle only about 65,000 rows of data rather than the one million-plus rows that Excel is actually capable of. And since each test result created several rows of data, in practice it meant that each template was limited to about 1,400 cases.

When that total was reached, further cases were simply left off.

Which means the people who had COVID-19 weren't tracked for contact tracing. The government and its people didn't have a complete picture of either the total case count for the disease or its positivity rate. In other words, the agency in charge of national health failed to keep the nation informed as to its risk exposure because it didn't know how to properly use a common office application that it repurposed to record COVID-19 data.
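To make the failure mode concrete, here is a minimal Python sketch of what happens when rows silently stop at the XLS cap versus when the data is split into batches below it. The function names, the rows-per-case figure, and the case counts are hypothetical illustrations, not PHE's actual pipeline.

```python
# The legacy XLS format stores at most 65,536 rows per worksheet
# (a 16-bit row index); anything written beyond that is simply lost.
XLS_ROW_CAP = 65_536
ROWS_PER_CASE = 4  # hypothetical: each positive test expands to several rows

def naive_template(cases):
    """Mimic the failure: one template, rows beyond the cap are dropped."""
    rows = [(case_id, i) for case_id in cases for i in range(ROWS_PER_CASE)]
    kept = rows[:XLS_ROW_CAP]  # the worksheet silently stops here
    return {case_id for case_id, _ in kept}

def batched_templates(cases):
    """The interim fix: split rows across many templates, each below the cap."""
    rows = [(case_id, i) for case_id in cases for i in range(ROWS_PER_CASE)]
    templates = [rows[start:start + XLS_ROW_CAP]
                 for start in range(0, len(rows), XLS_ROW_CAP)]
    return {case_id for template in templates for case_id, _ in template}

cases = [f"case-{n}" for n in range(32_000)]  # 32,000 positives -> 128,000 rows
lost = set(cases) - naive_template(cases)
print(len(lost))                      # cases silently dropped by one template
print(len(batched_templates(cases)))  # batching keeps every case
```

With four rows per case, a single template tops out at 16,384 cases; everything after that vanishes without any error, which is exactly why the missing cases went unnoticed.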

Labour's shadow health secretary Jonathan Ashworth said lives had still been put at risk because the contact-tracing process had been delayed.

"Thousands of people [were] blissfully unaware they've been exposed to Covid, potentially spreading this deadly virus at a time when hospital admissions are increasing," he told the House of Commons. "This isn't just a shambles. It's so much worse."

The UK's Health Secretary told the House of Commons that PHE had decided to replace the use of Excel, or what he called a "legacy system", two months ago. But apparently PHE hadn't gotten around to it yet.

And still hasn't, actually. In fact, PHE's plan to temporarily fix all of this is... more Excel!

To handle the problem, PHE is now breaking down the test result data into smaller batches to create a larger number of Excel templates. That should ensure none hit their cap.

But insiders acknowledge that the current clunky system needs to be replaced by something more advanced that excludes Excel, as soon as possible.

When you hear complaints that governments are not taking this pandemic seriously, this is what they mean.

6 Comments »

Content Moderation Case Study: Suppressing Content To Try To Stop Bullying (2019)

from the not-a-good-solution dept

by Copia Institute - October 7th @ 3:33pm

Summary: TikTok, like many social apps that are mainly used by a younger generation, has long faced issues around how to deal with bullying done via the platform. According to leaked documents revealed by the German site Netzpolitik, one way that the site chose to deal with the problem was through content suppression -- but specifically by suppressing the content of those the company felt were more prone to being victims of bullying.

The internal documents showed different ways in which the short video content that TikTok is famous for would be rated for visibility. This could include content that was chosen to be “featured” (i.e., seen by more people) but also content that was deemed “Auto R” for a form of suppression. Content rated as such was excluded from the “for you” feed on TikTok after reaching a certain number of views. The “for you” feed is how most people view TikTok videos, so this rating would effectively put a cap on views. The end result was that the “reach” of content categorized as Auto R was significantly limited, and such content was completely prevented from going “viral” and amassing a large audience or following.

What was somewhat surprising was that TikTok’s policies explicitly suggested putting those who might be bullied in the “Auto R” category -- even saying that those who were disabled, autistic, or had Down Syndrome should be put in this category to minimize bullying.

According to Netzpolitik, employees at TikTok repeatedly pointed out the problematic nature of this decision, and how it was discriminatory itself and punishing people not for any bad behavior, but because of the belief that their differences might possibly lead to them being bullied. However, they claimed that they were prevented from changing the policies by TikTok’s corporate parent, ByteDance, which dictated the company’s content moderation policies.

Decisions to be made by TikTok:

  • What are the best ways to deal with and prevent bullying done on the platform?
  • What are the real world impacts of suppressing the viral reach of any content based on the type of person making the content?
  • Is it appropriate to effectively prevent those you think will be bullied from getting full access to your platform to prevent the possibility of bullying?
  • What data points are being assessed to justify the assumptions being made about “Auto R” being an effective anti-bullying tool?

Questions and policy implications to consider:

  • When there are strong pushes from policymakers to platforms that they need to “stop bullying” will it lead to unintended consequences like the effective minimization of access to these platforms by potential victims of bullying, rather than dealing with the root causes of bullying?
  • Will efforts to prevent a bad behavior be used to really sweep that activity under the rug, rather than looking at how to actually make a platform safer?
  • What is the role of technology intermediaries in preventing bad behavior?

Resolution: TikTok admitted that these rules were a “blunt instrument” that were put in place rapidly to try to minimize bullying on the platform -- but that the company had realized it was the “wrong” approach and had implemented more nuanced policies:

"Early on, in response to an increase in bullying on the app, we implemented a blunt and temporary policy," a TikTok spokesperson told the BBC.

"This was never designed to be a long-term solution, and while the intention was good, it became clear that the approach was wrong.

"We have long since removed the policy in favour of more nuanced anti-bullying policies."

However, the Netzpolitik report suggested that this policy had been in place at least until September of 2019, just three months before its reporting came out in December of 2019. It is unclear exactly when the “more nuanced” anti-bullying policies were put in place, but it is possible that they came about due to the public exposure and pressure from the reporting on this issue.

5 Comments »

Facebook Internal Memo Reveals Challenges Social Media Companies Face In Protecting Democracy

from the democracy-and-social-media dept

by Gary Shapiro - October 7th @ 1:29pm

Is social media good or bad for democracy?

A recent internal memo from a departing Facebook employee may force us to do a deep dive on this issue. And it should – but not for the memo’s allegations that Facebook focuses primarily on protecting democracy and removing fake accounts from Western countries.

Rather, the memo sets out to explain how Facebook – and presumably other social media companies – have become self-directed global state departments trying to triage fraud by the wealthy, politically powerful or simply evil groups creating fictitious accounts trying to sway public opinion and stomp out groups with other views. These bad actors weaponize Facebook and other social media to ridicule those who challenge incumbents – thus twisting the concept of democracy and the value of the media as a town square for dialogue.

All of this was exposed in detail by data scientist Sophie Zhang in her internal memo posted on her last day of work. In it she described her job as tracking down fraudulent accounts and said she was fired for wanting to spend more time on protecting democracies in non-Western countries. The Buzzfeed article that reported the memo quotes a former colleague who lauded Zhang’s integrity and passion for her job of tracking down bots attempting to influence elections.

The fallout from the Zhang piece remains to be seen. She will likely be sought out by the State Department, government investigators, and private and political organizations, given her unarguably deep experience, moral judgment and strong skill set in investigating fraud using social media.

But her revealing memo raises the bigger issue of the huge expectations and complex job social media companies now face. Facebook must monitor some 2.6 billion users, along with sophisticated efforts by governments around the world to misuse the platform. For me, the surprising thing isn’t that Facebook failed to remove all disinformation in Honduras or Bolivia or Azerbaijan, it’s that one company is now expected to moderate political discourse across the entire globe, accurately determining in real time which statements are valid and which are not. Even with Facebook’s reach and resources, that is simply not a reasonable expectation.

An even bigger point for Americans is that we are lucky to have Facebook and other major social media companies based in our country. Our cultural affinity and history favoring diversity and different viewpoints, our First Amendment, our melting pot of people and ideas, and our Constitution and history favoring choice in elections should require that we protect and help Facebook and other social media companies as they do the best they can to preserve and expand American – and even global – democracy.

President Trump's focus on TikTok's Chinese ownership is the other side of this coin. The Chinese are everything we are not with their Uighur detention centers, social monitoring and rating of every citizen and total control of speech. They have proven that totalitarianism may be effective at controlling the aftermath of a pandemic – although their restrictions on speech and crackdowns on dissent allowed COVID-19 to spread in the first place. But I choose individual liberty and want a choice besides the one communist candidate China offers for each position.


As Americans, we face a quandary. How do we recognize the value to democracy and support our top social media companies which now dominate the world? For one, we should agree that democracy is a foundational principle and fraudulent accounts should be rooted out. More, we need transparency by social media companies in what they expect and allow – along with what they won't tolerate. This does not necessarily require government action. Eli Lehrer, president of the R Street Institute – a nonprofit, nonpartisan, public policy research organization – suggests big tech companies follow the comic book industry of the 1950s and develop a voluntary code. Another idea is for a multilateral, democracy-loving advisory board where each country gets input – but not control.

Although Buzzfeed headlines Zhang as a whistleblower, she has not spoken publicly yet and only circulated an internal memo. She came across as seriously diligent and concerned with her job of ferreting out fraudulent misuse of Facebook’s platform to subvert democracy. She noted that Facebook had made efforts to control misinformation: she removed 672,000 fake accounts spreading disinformation about the pandemic, and took down 10.5 million fake reactions and fans from high-profile politicians in Brazil and the U.S. in the 2018 election. She said she was overwhelmed by her inability to address government-organized misuse occurring on Facebook in smaller, non-Western countries.

Democracy is the cornerstone of our culture and nation. And social media companies give voice to those with different ideas. Think about the reach President Trump has on Facebook and Twitter!

We should spend less time trying to cut down the size of these American crown jewel companies or removing their legal protections for user-generated content – and more time figuring out how they can operate within principles protecting democracy and the free flow of ideas by real citizens.

Gary Shapiro is president and CEO of the Consumer Technology Association (CTA)®, the U.S. trade association representing more than 2,000 consumer technology companies, and a New York Times best-selling author. He is the author of the book, Ninja Future: Secrets to Success in the New World of Innovation. His views are his own.

22 Comments »


Our New Monetization Experiment: Coil & The Web Monetization Protocol

from the check-it-out dept

by Mike Masnick - October 7th @ 12:00pm

As some of you know there have been no ads on Techdirt for the past few months or so, due to some issues with the way in which Google's AdSense program was run. We've been sorting through some possible options for some better advertising solutions and we may begin some new experiments shortly. In the meantime though, I want to thank everyone who has stepped up to support us in one way or another. I won't deny that losing the revenue from ads sucks, because it does, but we've always relied on a variety of other business models beyond ads as well.

The ad situation has caused a bunch of companies and projects to reach out to us about some other monetization ideas and options. Most of them were... not great. Or they were intrusive or annoying for you. But one that was quite intriguing was the idea of experimenting with Coil. Coil is working on creating an actual standard for web monetization, called the Web Monetization standard. It's being proposed as a W3C standard, and if that happens, it wouldn't be tied just to Coil any more. There are various ideas that have some similarities to what Coil is doing, but Coil is the only one I know of that is built around a standard that they hope will be adopted more widely and independently of Coil itself. It's also using the Interledger open protocol and you know how I feel about protocols.

So, how does it work? Well, as of last week, Techdirt is now "web monetized" with Coil. If you have a Coil account, which runs you $5/month, and the Coil browser extension, when you browse Techdirt, Coil will automatically deposit money into a wallet for us. It will also do that for a few other sites that are using Coil... including Imgur, Hackernoon and a bunch of Conde Nast sites like Wired (though, bizarrely, not Ars Technica). The more time you spend on Techdirt, the more we get, but it doesn't change how much you pay. And, also, it's not like we're going to try to keep you here any longer than usual.
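A toy Python sketch of the payment model described above: the subscriber's flat fee never changes, while each monetized site accrues money in proportion to time spent browsing it. The per-second rate, function name, and browsing log here are made up for illustration; Coil's real streaming rate and Interledger wallet mechanics differ.

```python
# Toy model of Coil-style web monetization: the subscriber pays a flat
# monthly fee, and while they browse a monetized site the extension
# streams micropayments to that site's wallet at a fixed rate.
RATE_PER_SECOND = 0.0001  # hypothetical: dollars streamed per second browsed

def stream_payouts(browsing_log):
    """browsing_log maps site -> seconds spent with the extension active.
    Returns dollars accrued per site; the subscriber's bill is unchanged."""
    return {site: round(seconds * RATE_PER_SECOND, 4)
            for site, seconds in browsing_log.items()}

log = {"techdirt.com": 1800, "imgur.com": 600}  # 30 minutes and 10 minutes
print(stream_payouts(log))
```

The design point is that payouts track attention rather than clicks or ad impressions, which is why more time spent on a site means more revenue for it without costing the reader anything extra.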

Being realistic here: we don't expect this to be huge. In fact, we barely expect it to be small. But I still believe in the promise of the early web, built on open protocols and standards. Marc Andreessen, creator of the original graphical web browser and now the successful venture capitalist, has said multiple times that the original sin in creating the browser was the failure to build in monetization. Coil and the various other projects appear to be an attempt to rectify that -- and to me, that's worth supporting and experimenting with. I don't know where it will go, but if we can somehow help make it more widely adopted (and get a bit of support back in exchange) that seems like a good thing.

We're also brainstorming some other ideas around how we might use Coil in fun ways, so feel free to make some suggestions in the comments as well if you have any creative ideas. If you want to test it out, head on over to Coil and sign up. And let us know what you think.

20 Comments »

Texas Grand Jury Indicts Netflix For 'Lewd Exhibition' Of Children In Its Movie 'Cuties'

from the child-beauty-pageants-expected-to-remain-unaffected dept

by Tim Cushing - October 7th @ 10:44am

It seems impossible that 2020 could get any stupider. But here we are, watching in bemusement as a showboating prosecutor talks a grand jury in a tiny Texas county into indicting an online streaming service for… let's check the record… "promotion of lewd visual material depicting child."

Here's "liberty loving conservative" (and state rep) Matt Schaefer's tweet, which contains a snapshot of the indictment.

Here's what the tweet says above the indictment photo:

Netflix, Inc. indicted by grand jury in Tyler Co., Tx for promoting material in Cuties film which depicts lewd exhibition of pubic area of a clothed or partially clothed child who was younger than 18 yrs of age which appeals to the prurient interest in sex

Go ahead and jump to the replies if you enjoy watching a bunch of people who don't understand the First Amendment or state law cheer on this showy act of futility.

The indictment [PDF] states that Netflix broke the law by distributing the film "Cuties" via its streaming service. Jurisdiction is presumably proper because even Tyler County residents can subscribe to the service. If you're not familiar with "Cuties," it's a coming-of-age film dealing with a Senegalese preteen who begins to emulate the sexualization of other females while growing up as a Muslim in Paris, France. It won awards at the Sundance Film Festival and flew under the radar until Netflix began its promotion of the film, which centered on the more questionable depictions of underage girls engaging in hyper-sexualized behavior.

All hell broke loose for a few weeks last month. Calls to boycott Netflix filled social media services and a number of US politicians decided this was the thing they should be spending their time on as thousands died from COVID, businesses closed forever and unemployment remained high.

Here's just some of the legislative-level furor that followed Netflix's release of the film.

Senator Bob Hall (R-Canton) swore to file a bill that would outlaw pedophilia in the state constitution, Texas Attorney General Ken Paxton joined two other state attorney generals in a letter asking Netflix to remove the film, and U.S. Senator Ted Cruz (R-TX) asked U.S. Attorney General William Barr to investigate the company for “the filming of minors engaging in sexually explicit conduct.” State Rep. James White (R-Tyler) likewise wrote Paxton asking for an investigation into the film.

Now that we're all caught up, let's look at the indictment and see if we can find the fatal flaw:

"Knowingly promote[d] visual material which depicts the lewd exhibition of the genitals or pubic area of a clothed or partially clothed child who was younger than 18 years of age at the time the visual material was created, which appeals to the prurient interest in sex, and has no serious literary, artistic, political, or scientific value…"

Anyone can get an indictment. This much is known about grand juries. But securing a conviction is going to be a hell of a lot more difficult. The prosecutor is going to have to convince a judge (and possibly a jury) that a film that won awards at an international film festival contains no "serious literary or artistic value." That's even harder to argue under the Miller test established by the Supreme Court, which says the work as a whole has to be considered in terms of artistic merits, not just the cringier parts that prompted backlash all over the internet. That should be enough to nullify the criminal case Tyler County DA Lucas Babin is bringing against Netflix.

There's some pandering going on here, but it originates in the prosecutor's office. This will score points with the kind of people willing to award points for performative wastes of public funds. No one else will be affected. "Cuties" will enjoy another few days of internet infamy, along with its US distributor. But no one's going to jail because Netflix distributed this movie, no matter how much one prosecutor wants it to happen.

Read More | 40 Comments »

Daily Deal: CharbyEdge Pro 6-in-1 Universal Cable

from the good-deals-on-cool-stuff dept

by Daily Deal - October 7th @ 10:39am

CharbyEdge Pro 6-in-1 Universal Cable is the perfect everyday cable for anyone who owns many devices. Bring it to charge your MacBook Pro/Air, iPhone, iPad, Android Phones, or even wireless earphones. And rest assured your time is saved -- with universal fast charging support. It's also remarkably durable and long (6.5 ft.). Get one for $25 or two for $45.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.

Comment »

Donald Trump Now Wants To Repeal Section 230, Which Will Actually Make The Stuff He Complains About Worse

from the our-president-is-a-fool dept

by Mike Masnick - October 7th @ 9:36am

We've already discussed how the President has been urging Congress to make "complaining about the internet" a key election issue for Republicans. This is why Congress has introduced 17 different bills about Section 230 this year, combined with two separate proposals from the White House itself. But apparently, even that is not enough for our completely clueless President. On Tuesday, after facing some mild moderation of dangerous lies about COVID that he had posted, he announced that he wanted to "repeal" 230 entirely.

This was in response to both Facebook and Twitter taking action after Trump's accounts falsely claimed that the flu is more deadly than COVID-19 (a truly incredible bit of disinformation coming directly from the President, a man still sick with COVID-19). Facebook removed the post while Twitter put a misinformation warning on it. And this is why Trump is upset and wants to "repeal 230."

To be fair, this is the same thing that his opponent in next month's election has been saying, with his equally ridiculous and foolish calls to "revoke" Section 230. Both of them are wrong. Neither of them seems to understand what Section 230 does or why repealing or revoking it won't help with whatever they think they're doing.

But since the President is the latest to spout this nonsense, it should be pointed out that if he got his wish, it would create the exact opposite of what he thinks will happen. Rather than putting pressure on social media companies to leave his nonsense, lies, propaganda, and disinformation alone, it will make them more likely to pull it down to avoid being taken to court and having to deal with questions of liability.

It's an ironic statement since, without the existence of Section 230, Trump very well might not be able to tweet it. If Congress were to remove social media platforms' liability protection, then companies like Twitter and Facebook would have no choice but to remove users' ability to post content at-will. Instead, moderators would have to vet and approve content to make sure that it wasn't potentially libelous.

This would exacerbate the very problem that many conservatives have with social media—namely, that Twitter (and to a lesser extent, Facebook) sometimes takes aggressive action against provocative right-wing speech, by labeling the content as misleading or removing it outright.

For whatever reason, both Democrats and Republicans seem to think that "Section 230" is Facebook/Google/Twitter. And if they don't like a move made by any of those companies they think the "solution" is to harm 230. This is wrong and shows a fundamental lack of understanding about how 230 works, what it does, and what would happen if it were changed or removed. It is ridiculous that the President calls for this in response to those sites trying to limit the damage caused by the President himself, and it's just as ridiculous that this seems to be one area that the President and his opponent actually agree on: that the open internet should go away.

49 Comments »

Stop Pretending The Trump GOP Genuinely Cares About Monopoly Power

from the gullible-and-adorable dept

by Karl Bode - October 7th @ 6:36am

Over the last year or two, a constant drumbeat has permeated tech news coverage. It goes something like this: the GOP is embracing a "populist" agenda by standing up to "big tech." The modern Trump GOP (with heroic consumer champions like Josh Hawley and Marsha Blackburn in the lead) we're told, have become stalwart opponents of monopolization, especially in tech. They're just super concerned about what this power means for free speech, especially given that conservative voices are routinely "censored" on the internet.

One problem. It's all bullshit. And there's a long line of journalists and experts who still somehow haven't quite figured that out yet. Or have figured it out but are too afraid of upsetting readers or advertisers to be honest about it.

Case in point: the New York Times, which this week explored how the GOP's interest in "reining in big tech" has stalled because "solutions" to modern tech problems could hurt revenues or don't include adequate hand-wringing over "conservative censorship":

"The Republicans’ chief objections to the report are that some of the legislative proposals against the tech giants could hamper other businesses and impede economic growth, said four people with knowledge of the situation. Several Republicans were also frustrated that the report didn’t address claims of anti-conservative bias from the tech platforms. Mr. Buck said in “The Third Way” that some of the recommendations were “a nonstarter for conservatives."

The Times, like most big outlets, proceeds from the assumption that the Trump GOP genuinely cares about reining in "monopoly power" in technology. But that gives the GOP way more credit than it has earned or deserves, and helps prop up bad faith bullshit as legitimate grievance.

For one, the GOP's breathless concerns about "monopolization" aren't apparent anywhere else. As the GOP freaks out over "big tech," for example, "big telecom" has been allowed to effectively guard the chicken house and eat the lion's share of the chickens. The GOP-controlled FCC effectively neutered itself at AT&T's and Comcast's request. Terrible, job- and competition-killing telecom mergers have been repeatedly rubber-stamped by the GOP. Similarly, there's zero evidence of any serious attempt to rein in other monopolized sectors, from banking and airlines to pharmaceuticals and energy.

There's also still no evidence that "conservatives are being censored." As in, none. Oddly, the New York Times can't be bothered to mention this. The lion's share of those being kicked offline are being kicked offline because they're simply...behaving like assholes on the internet. And in fact, there's far more evidence that platforms like Facebook are ignoring their own rules to protect right-wing speech because it's more profitable to let inflammatory bullshit bumble around the information ecosystem (see: Breitbart being a trusted news ally and nobody giving a damn that Ben Shapiro games Facebook systems to inflate traffic).

Here's the thing. This steady flood of shitty Section 230 bills isn't about policing monopoly power. And folks like Marsha Blackburn and Josh Hawley, who've never had a single bad thing to say about telecom monopolies, couldn't give any less of a shit about monopoly power. They do, however, care about political power. And the overarching goal right now is to apply enough pressure on Silicon Valley giants that they don't start adequately policing political disinformation. Because if they do, many of the cornerstones of the modern Trump GOP (race baiting, divisive bullshit, inflammatory garbage, rampant disinformation) fall apart.

Yes, Democrats have plenty of bad ideas and frequently can be found doing nothing or making things even worse. Congress needs a reboot and fresh blood on a tragic scale. But that doesn't change the fact that the entire, multi-year GOP quest to tackle "big tech's monopoly power" has never actually been about monopoly power. It's about political power and money. It's about trying to shovel more ad revenue to accountability-immune telecom monopolies or Rupert Murdoch. It's also about preventing anybody from tackling the mountains of hateful internet bullshit that has become the cornerstone of Trumpism.

And despite countless gullible news reports and experts eager to portray GOP "big tech" pearl clutching as a good faith, earnest examination of the very real problems popping up in tech, it's simply not. And those who continue to pretend otherwise are part of the problem.

58 Comments »

After Years Of Claiming It Doesn't Use Facial Recognition Software, The LAPD Admits It Has Used It 30,000 Times Since 2009

from the we-regret-the-repeated-errors dept

by Tim Cushing - October 7th @ 3:30am

The Los Angeles Police Department apparently loves using facial recognition tech. It doesn't like talking about its love for this tech, though. It told the Georgetown Law Center on Privacy and Technology it had nothing to hand over when the Center requested its facial recognition documents.

The Los Angeles Police Department has repeatedly announced new face recognition initiatives—including a “smart car” equipped with face recognition and real-time face recognition cameras—yet the agency claimed to have “no records responsive” to our document request.

The LAPD flatly denied using the tech as recently as 2019.

"We actually do not use facial recognition in the Department," Rubenstein told the LA Times in 2019, adding an exception of "a few limited instances" where outside agencies used it during joint investigations.

Here's what the LA Times has discovered, thanks to public records that the LAPD finally decided to stop withholding.

The Los Angeles Police Department has used facial recognition software nearly 30,000 times since 2009, with hundreds of officers running images of suspects from surveillance cameras and other sources against a massive database of mug shots taken by law enforcement.

The new figures, released to The Times, reveal for the first time how commonly facial recognition is used in the department, which for years has provided vague and contradictory information about how and whether it uses the technology.

There's some technically true stuff in the LAPD's obfuscation. The LAPD does not have its own software. This makes it easier to claim it does not use the tech "in the Department." But the Department definitely uses the tech. The LAPD has direct access to the software owned by the Los Angeles County Sheriff's Department, something it uses thousands of times every year.

Despite this disclosure, the LAPD refuses to admit it misled the public in the past about its reliance on the tech.

LAPD Assistant Chief Horace Frank said “it is no secret” that the LAPD uses facial recognition, that he personally testified to that fact before the Police Commission a couple years ago, and that the more recent denials — including two since last year, one to The Times — were just mistakes.

When a citizen misleads a cop, it's obstruction. When cops mislead the public, it's just an honest mistake. But the reality of the situation is this: more than 300 officers have access to the database which contains 9 million photos. And the software the Sheriff's Department uses is run by Dataworks, which made news recently for being instrumental in two false arrests predicated on facial recognition searches run on its platform.

Now that the LAPD can't continue pretending it doesn't use facial recognition tech, it's begun issuing statements correcting its earlier "mistakes." The group of people receiving corrections include public records requesters who were previously issued flat denials about the LAPD's use of the tech. Now that the PD's reliance on facial recognition is out in the open, maybe Los Angeles legislators can finally get around to regulating government use of the unreliable tech.

8 Comments »


Visit Techdirt for today's stories.

Forward to a Friend
 
 
  • This mailing list is a public mailing list - anyone may join or leave, at any time.
  • This mailing list is announce-only.

Techdirt's original daily email. Once a day, Techdirt will email the full-length version of the previous day's stories from Techdirt.com (based on Pacific time).

Privacy Policy:

Floor64 will not share your email address with third parties.

Go back to Techdirt