Techdirt Daily Newsletter for Sunday, 12 September, 2021

 
From: "Techdirt Daily Newsletter" <newsletters@techdirt.com>
Subject: Techdirt Daily Newsletter for Sunday, 12 September, 2021
Date: August 13th 2020

Are you interested in receiving a shorter, easy-to-scan email of post excerpts? Check out our new Techdirt Daily Newsbrief.

Stories from Wednesday, August 12th, 2020

 

Las Vegas Police Are Running Lots Of Low Quality Images Through Their Facial Recognition System

from the that's-going-to-end-badly dept

by Tim Cushing - August 12th @ 7:42pm

Even when facial recognition software works well, it still performs pretty poorly. When algorithms aren't generating false positives, they're acting on the biases programmed into them, making it far more likely for minorities to be misidentified by the software.

The better the image quality, the better the search results. The use of a low-quality image pulled from a store security camera resulted in the arrest of the wrong person in Detroit, Michigan. The use of another image with the same software -- one that didn't show the distinctive arm tattoos of the non-perp hauled in by Detroit police -- resulted in another bogus arrest by the same department.

In both cases, the department swore the facial recognition software was only part of the equation. The software used by Michigan law enforcement warns investigators search results should not be used as sole probable cause for someone's arrest, but the additional steps taken by investigators (which were minimal) still didn't prevent the arrests from happening.

That's the same claim made by Las Vegas law enforcement: facial recognition search results are merely leads, rather than probable cause. As is the case everywhere law enforcement uses this tech, low-quality input images are common. Investigating crimes means utilizing security camera footage, which utilizes cameras far less powerful than the multi-megapixel cameras found on everyone's phones. The Las Vegas Metro Police Department relied on low-quality images for many of its facial recognition searches, documents obtained by Motherboard show.

In 2019, the LVMPD conducted 924 facial recognition searches using the system it purchased from Vigilant Solutions, according to data obtained by Motherboard through a public records request. Vigilant Solutions—which also leases its massive license plate reader database to federal agencies—was bought last year by Motorola Solutions for $445 million.

Of those searches, 471 were done using images the department deemed “suitable,” and they resulted in matches with at least one “likely positive candidate” 67% of the time. But 451 searches, nearly half, were run on “non-suitable” probe images. Those searches returned likely positive matches—which could mean anywhere from one to 20 or more mugshots, all with varying confidence scores assigned by the system—only 18% of the time.

Fortunately, low-quality images rarely seem to return anything investigators can use (although that 18% still amounts to 82 "likely positive matches..."). If they did, we'd be seeing far more bogus arrests than we've seen to this point. Of course, prosecutors and police aren't letting suspects know facial recognition software contributed to their arrests, so courtroom challenges have been pretty much nonexistent.
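The figures quoted above hang together arithmetically. A quick back-of-the-envelope check (our own sketch, not from the LVMPD documents):

```python
# Figures reported by Motherboard for LVMPD's 2019 facial recognition searches.
total_searches = 924
suitable = 471       # searches run on "suitable" probe images
non_suitable = 451   # searches run on "non-suitable" probe images

# "nearly half" of all searches used non-suitable images
share_non_suitable = non_suitable / total_searches   # ~0.49

# the 82 likely-positive matches cited in the text correspond to the 18% figure
hits_from_bad_images = 82
hit_rate = hits_from_bad_images / non_suitable       # ~0.18

print(f"{share_non_suitable:.0%} non-suitable, {hit_rate:.0%} hit rate")
# → 49% non-suitable, 18% hit rate
```

(Note that 471 + 451 = 922, so two of the 924 searches are unaccounted for in the breakdown as reported.)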

Although most of the information in the documents is redacted -- making it difficult to verify LVMPD claims about the software's contribution to arrests and prosecutions -- enough details remained to provide a suspect facing murder charges with information the LVMPD had never turned over to him or admitted to in court.

Clark Patrick, the Las Vegas attorney representing [Alexander] Buzz, told Motherboard that neither the LVMPD nor the Clark County District Attorney’s office ever informed him that investigators identified Buzz as a suspect using, at least in part, facial recognition technology. The Clark County District Attorney’s office did not respond to an interview request or written questions.

Had this information been given to Buzz and his attorney at the beginning of the trial, he likely would not have waived his right to a preliminary evidentiary hearing. If this had taken place -- along with knowledge of a private company's contribution to the investigation -- prosecutors may have had to produce information about the tech and the surveillance footage it pulled images from.

The documents don't appear to show a reliance on low-quality images to make arrests, but they do show investigators will run nearly any image through the software to see if it generates some hits. The precautions taken after this matter most. If investigators are only considering matches to be leads, it will head off most false arrests. But if investigators take shortcuts -- as appears to have happened in Detroit -- the outcome is disastrous for those falsely arrested. A person's rights and freedoms shouldn't be at the mercy of software that performs poorly even when given good images to work with. The use of this software is never going to go away completely, but agencies can mitigate the damage by refusing to treat matches as probable cause.

1 Comment »

Creating Family Friendly Chat More Difficult Than Imagined (1996)

from the the-kids-will-find-a-way dept

by Copia Institute - August 12th @ 3:40pm

Summary: Creating family friendly environments on the internet presents some interesting challenges that highlight the trade-offs in content moderation. One of the founders of Electric Communities, a pioneer in early online communities, gave a detailed overview of the difficulties in trying to build such a virtual world for Disney that included chat functionality. He described being brought in by Disney alongside someone from a kids’ software company, Knowledge Adventure, who had built an online community in the mid-90s called “KA-Worlds.” Disney wanted to build a virtual community space, HercWorld, to go along with the movie Hercules. After reviewing Disney’s requirements for an online community, they realized chat would be next to impossible:

Even in 1996, we knew that text-filters are no good at solving this kind of problem, so I asked for a clarification: "I’m confused. What standard should we use to decide if a message would be a problem for Disney?"

The response was one I will never forget: "Disney’s standard is quite clear:

No kid will be harassed, even if they don’t know they are being harassed."...

"OK. That means Chat Is Out of HercWorld, there is absolutely no way to meet your standard without exorbitantly high moderation costs," we replied.

One of their guys piped up: "Couldn’t we do some kind of sentence constructor, with a limited vocabulary of safe words?"

Before we could give it any serious thought, their own project manager interrupted, "That won’t work. We tried it for KA-Worlds."

"We spent several weeks building a UI that used pop-downs to construct sentences, and only had completely harmless words – the standard parts of grammar and safe nouns like cars, animals, and objects in the world."

"We thought it was the perfect solution, until we set our first 14-year old boy down in front of it. Within minutes he’d created the following sentence:

I want to stick my long-necked Giraffe up your fluffy white bunny.

In that initial 1996 project, chat was abandoned, but as they continued to develop HercWorld, they quickly realized that they still had to worry about chat, even without a chat feature:

It was standard fare: Collect stuff, ride stuff, shoot at stuff, build stuff… Oops, what was that last thing again?

"…kids can push around Roman columns and blocks to solve puzzles, make custom shapes, and buildings," one of the designers said.

I couldn’t resist, "Umm. Doesn’t that violate the Disney standard? In this chat-free world, people will push the stones around until they spell Hi! or F-U-C-K or their phone number or whatever. You’ve just invented Block-ChatTM. If you can put down objects, you’ve got chat. We learned this in Habitat and WorldsAway, where people would turn 100 Afro-Heads into a waterbed." We all laughed, but it was that kind of awkward laugh that you know means that we’re all probably just wasting our time.

Decisions for family-friendly community designers:

  • Is there a way to build a chat that will not be abused by clever kids to reference forbidden content (e.g., swearing, innuendo, harassment, abuse)?
  • Can you build a chat that does not require universal moderation and pre-approval of everything that users will say?
  • Are there ways in which kids will still be able to communicate with others even without an actual chat feature?
  • How much of a “community” do you have with no chat or extremely limited chat?

Questions and policy implications to consider:

  • Is it possible to create an online family friendly environment that will work?
    • If so, how do you prevent abuse?
    • If not, how do you handle the fact that kids will get online whether they are allowed to or not?
  • How do you incentivize companies to create spaces that actually remain as child-friendly as possible?
  • If “the kids will always find a way” to get around limitations, does it make sense to hold the companies themselves responsible?
  • Should family friendly environments require full-time monitoring, or pre-vetting of any usage?

Resolution: Disney eventually abandoned the idea of HercWorld due to all of the issues raised. However, the interview highlights the fact that they tried again a couple of years later, with an online chat where users could only pull from a pre-selected list of sentences, but it did not have much success:

"The Disney Standard" (now a legend amongst our employees) still held. No harassment, detectable or not, and no heavy moderation overhead.

Brian had an idea though: Fully pre-constructed sentences – dozens of them, easy to access. Specialize them for the activities available in the world. Vaz Douglas, our project manager working with Zoog, liked to call this feature "Chatless Chat." So, we built and launched it for them. Disney was still very tentative about the genre, so they only ran it for about six months; I doubt it was ever very popular.

The same interview notes that Disney tried once again in 2002 with a new world called “ToonTown”, with pulldown menus that allowed you to construct very narrowly tailored speech within the chat to try to avoid anything that violated the rules.

As the story goes, Disney still had problems with this. To make sure people were only communicating with people they knew in real life, one of the restrictions in this new world was that you had to have a secret code from any user you wished to chat with. The thinking was that parents would print these out for kids who could then share them with their friends in real life, and they could link up and “chat” in the online world.

And yet, once again, people figured out how to get around the restrictions:

Sure enough, chatters figured out a few simple protocols to pass their secret code; several variants take this general form:

User A:"Please be my friend."
User A:"Come to my house?"
User B:"Okay."
A:[Move the picture frames on your wall, or move your furniture on the floor to make the number 4.]
A:"Okay"
B:[Writes down 4 on a piece of paper and says] "Okay."
A:[Move objects to make the next letter/number in the code] "Okay"
B:[Writes…] "Okay"
A:[Remove objects to represent a "space" in the code] "Okay"
[Repeat steps as needed, until…]
A:"Okay"
B:[Enters secret code into Toontown software.]
B:"There, that worked. Hi! I’m Jim 15/M/CA, what’s your A/S/L?"

Incredibly, there was an entire Wiki page on the Disney Online Worlds domain that included a variety of other descriptions on how to exchange your secret number within the game, even as users were not supposed to be doing so:

For example, let's say you have a secret code (1hh 5rj) which you would like to give to a toon named Bob.

First, you should make clear that you want to become their SF.
You: Please be my friend!
You: (random SF chat)
You: I can't understand you
You: Let's work on that
Bob: Yes
Now, start the secret.
You: (Jump 1 time and say OK. Jump 1 time because that is the first thing in your code. Say OK to confirm that was part of your secret.)
Bob: OK (Wait for this, as this means he has written down or otherwise recorded the 1)
You: Hello! OK (Say hello because the first letter of hello is h, which is the second part of your secret.)
Bob: OK (again, wait for confirmation)
Repeat above step, as you have the same letter for the third part of your secret.
Bob: OK (by now you should know to wait for this)
You: (Jump 5 times and say OK. Jump 5 times as this is the 4th part of your secret)
Bob: OK
You: Run! OK (The 5th part of your secret is r, and "Run!" starts with r)
Bob: OK
You: Jump! OK (Say this because j is the last part of your secret.)
Bob: OK
At this point, you have successfully transmitted the code to Bob.
Most likely, Bob will understand, and within seconds, you will be Secret Friends!
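The exchange above is, in effect, a hand-rolled covert channel: each character of the secret code is encoded as an in-game action (jumps for digits, the first letter of an allowed phrase for letters, removing objects for spaces), with "OK" as an acknowledgment. A minimal sketch of that encoding, with illustrative helper names of our own (not from the wiki):

```python
# Sketch of the ToonTown "secret friend" side channel described above.
# Digits are sent as jump counts, letters via the first letter of an
# allowed phrase, and spaces by removing an object. Names are illustrative.

ALLOWED_PHRASES = {"h": "Hello!", "r": "Run!", "j": "Jump!"}

def encode_code(secret: str) -> list[str]:
    """Translate a secret code into the sequence of in-game actions."""
    actions = []
    for ch in secret:
        if ch.isdigit():
            actions.append(f"jump {ch} times, say OK")
        elif ch == " ":
            actions.append("remove an object (space), say OK")
        else:
            phrase = ALLOWED_PHRASES.get(ch, f"say a word starting with '{ch}'")
            actions.append(f"{phrase} OK")
    return actions

# The wiki's example code:
for step in encode_code("1hh 5rj"):
    print(step)
```

The point of the sketch is the one the Electric Communities folks made about Block-Chat: any mechanism that lets users place objects or choose among canned actions is a usable alphabet, so "chatless" worlds still have chat.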

So even though Disney eventually did enable a very limited chat, with strict rules to keep people safe, it still left open many challenges for early trust & safety work.

Images from HabitChronicles

12 Comments »

Paulding County High School Un-Suspends Student But Can't Un-Infect Students Who Got COVID-19

from the think-of-the-children dept

by Timothy Geigner - August 12th @ 1:33pm

Well, that didn't take long. We had just been discussing how the Paulding County School District had suspended a student for taking a photo of packed hallways filled with kids not wearing masks on the first day back to school a week or so ago. While the school mumbled something about the suspension being for using a phone without permission at school, the school also said the quiet part out loud over the intercom when it informed students that any social media activity that made the school look bad would result in "consequences." In case it wasn't already clear, that is blatantly unconstitutional, violating the students' First Amendment rights.

In the least shocking news ever, the district has since reversed that suspension.

A Georgia high school has reversed course and lifted the suspension of two students who were punished after posting photos of the school's packed hallways when classes resumed earlier this week. North Paulding High School in Dallas, Georgia, faced national criticism over the viral photos showing students shoulder-to-shoulder, with fewer than half wearing masks.

"This morning my school called and they have deleted my suspension," she wrote.

"To be 100% clear, I can go back to school on Monday. I couldn't have done this without all the support, thank you."

It appears that someone taught school administrators how the country's governing document works and caused them to run as fast as they could from the original decision to suspend anyone over free speech. Now, onto that bit about being able to go back to school on Monday. The problem with that is shortly after those controversial pictures were taken, a whole bunch of kids at the school got COVID-19. Again, this is not surprising.

Just days after a photo of crowded hallways at North Paulding High School went viral, parents were informed Saturday of nine confirmed cases of the coronavirus at the school.

Channel 2 Investigative Reporter Nicole Carr got a copy of the letter. Principal Gabe Carmona wrote that six students and three staff members who were in school last week have since reported positive tests for COVID-19.

And, as a result, North Paulding High School has fully shut down due to the spike in cases.

So, to recap, Paulding County suspended a student for essentially showing the world why she was fearful of attending her own school, due to state and local officials being absolutely callous with student safety, then unsuspended her, and are now dealing with a COVID-19 outbreak among students and staff. Hannah Watters, one of the students initially suspended for the photos, probably doesn't feel all that much vindication, however, so busy is she dealing with threats she's getting.

Watters said she's faced threats since she and another student shared pictures that captured national attention showing students shoulder-to-shoulder in a crowded hallway, many without masks.

"I know I'm doing the right thing and it's not going to stop them from doing it, but it is concerning, especially since it's a lot of the people that I go to school with," she said. "People I've known for years now that are threatening me now."

If one were trying to advertise to the public why they should not want to move their family to Paulding County, I can't imagine any agency coming up with anything better than precisely what has occurred there in this story.

27 Comments »


Digital Technology As Accelerant: Growth And Genocide In Myanmar

from the broader,-collaborative-view dept

by Aye Min Thant - August 12th @ 12:00pm

Every person in Myanmar above the age of 10 has lived part, if not most, of their life under a military dictatorship characterized by an obsession with achieving autonomy from international influences. Before the economic and political reforms of the past decade, Myanmar was one of the most isolated nations in the world. The digital revolution that has reshaped nearly every aspect of human life over the past half-century was something the average Myanmar person had no personal experience with.

Recent reforms brought an explosion of high hopes and technological access, and Myanmar underwent a digital leapfrog, with internet access jumping from nearly zero percent in 2015 to over 40 percent in 2020. At 27 years old, I remember living in a Yangon where having a refrigerator was considered high tech, and now, there are 10-year-olds making videos on TikTok.

Everyone was excited for Myanmar's digital revolution to spur the economic and social changes needed to transform the country from a pariah state into the next economic frontier. Tourists, development aid, and economic investment poured into the country. The cost of SIM cards dropped from around 1,000 US dollars in 2013 to a little over 1 dollar today.

This dramatic price drop was paired with a glut of relatively affordable smartphones and phone carriers that provided data packages that made social media platforms like Facebook free, or nearly free, to use. This led to the current situation where about 21 million out of the 22 million people using the internet are on Facebook. Facebook became the main conduit through which people accessed the internet, and now is used for nearly every online activity from selling livestock, watching porn, reading the news, to discussing politics.

Then, following the exodus of over 700,000 Rohingya people from Myanmar’s war-torn Rakhine State, Facebook was accused of enabling a genocide.

The ongoing civil wars in the country and the state violence against the Rohingya, characterized by the UN as ethnic cleansing with genocidal intent, put a spotlight on the potential for harm brought on by digital connectivity. Given its market dominance, Facebook has faced great scrutiny in Myanmar for the role social media has played in normalizing, promoting, and facilitating violence against minority groups.

Facebook was, and continues to be, the favored tool for disseminating hate speech and misinformation against the Rohingya people, Muslims in general, and other marginalized communities. Despite repeated warnings from civil society organizations in the country, Facebook failed to address the new challenges with the urgency and level of resources needed during the Rohingya crisis, and failed to even enforce its own community standards in many cases.

To be sure, there have been improvements in recent years, with the social media giant appointing a Myanmar focused team, expanding their number of Myanmar language content reviewers, adding minority language content reviewers, establishing more regular contact with civil society, and devoting resources and tools focused on limiting disinformation during Myanmar’s upcoming election. The company also removed the accounts of Myanmar military officials and dozens of pages on Facebook and Instagram linked to the military for engaging in "coordinated inauthentic behavior." The company defines "inauthentic behavior" as "engag[ing] in behaviors designed to enable other violations under our Community Standards," through tactics such as the use of fake accounts and bots.

Recognizing the seriousness of this issue, everyone from the EU to telecommunications companies to civil society organizations have poured resources into digital literacy programs, anti-hate-speech campaigns, social media monitoring, and advocacy to try and address this issue. Overall, the focus of much of this programming is on what Myanmar and the people of Myanmar lack—rule of law, laws protecting free speech, digital literacy, knowledge of what constitutes hate speech, and resources to fund and execute the programming that is needed.

In the frenzy of the desperate firefighting by organizations on the ground, less attention has been given to larger systemic issues that are contributing to the fire.

There is a need to pay greater attention to those coordinated groups that are working to spread conspiracy theories, false information, and hatred to understand who they are, who is funding them, and how their work can be disrupted—and, if necessary, penalized.

There is a need to reevaluate how social media platforms are designed in a way that incentivizes and rewards bad behavior.

There is also a need to question how much blame we want to assign to social media companies, and whether it is to the overall good to give them the responsibility, and therefore power, to determine what is and isn't acceptable speech.

Finally, there is a need to ask ourselves about alternatives we can build, when many governments have proven themselves more than willing to surveil and prosecute netizens under the guise of health, security, and penalizing hate speech.

It is dangerous to give private, profit-driven multinational corporations the power to draw the line between hate speech and free speech, just as it is dangerous to give that same power to governments, especially in this time of rising ethno-nationalistic sentiment around the globe and the increasing willingness of governments to overtly and covertly gather as much data as possible to use against those they govern. We can see from the ongoing legal proceedings against Myanmar in international courts regarding the Rohingya and other ethnic minorities, and from statements by UN investigative bodies on Myanmar that Facebook has failed to release evidence of serious international crimes to them, that neither company policies nor national laws are enough to ensure safety, justice, and dignity for vulnerable populations.

The solution to all this, as unsexy as it sounds, is a multifaceted, multi-stakeholder, long-term effort to build strong legal and cultural institutions that disperses the power and the responsibility to create and maintain safe and inclusive online spaces between governments, individuals, the private sector, and civil society.  

Aye Min Thant is the Tech for Peace Manager at Phandeeyar, an innovation lab which promotes safer and more inclusive digital spaces in Myanmar. Formerly, she was a Pulitzer Prize winning journalist who covered business, politics, and ethno-religious conflicts in Myanmar for Reuters. You can follow her on Twitter @ma_ayeminthant.

This article was developed as part of a series of papers by the Wikimedia/Yale Law School Initiative on Intermediaries and Information to capture perspectives on the global impacts of online platforms’ content moderation decisions. You can read all of the articles in the series here, or on their Twitter feed @YaleISP_WIII.

1 Comment »

Judge Tosses Out Genius' Laughable Lawsuit Against Google Over Licensed Lyric Copying

from the what-a-dumb-lawsuit dept

by Mike Masnick - August 12th @ 10:45am

Last year we wrote about what we called the "dumbest gotcha story of the week," involving the music annotation site Genius claiming that Google had "stolen" its lyrics. The only interesting thing about the story is that Genius had tried to effectively watermark its version of the lyrics by using a mix of smart apostrophes and regular apostrophes. However, as we noted, the evidence that Google "copied" Genius just wasn't supported by the facts -- and even if Google had copied Genius, it's unclear how that would violate any law. You can read that post for more details, but the simple fact is that a bunch of sites all license lyrics and have permission for them -- and many use a third party such as LyricFind to supply the lyrics. But how those lyrics are created is... however possible. Even as sites "license" lyrics from publishing companies, those publishing companies don't have the lyrics themselves. So basically lyric databases are created however possible -- including having people jot down what they think the lyrics are... or by copying other sites that are doing the same. And there's nothing illegal about any of that.

And yet, for reasons that are beyond me, last December, Genius sued both Google and LyricFind over this. As we noted at the time, it was one of the dumbest lawsuits we'd seen in a while, and it would easily fail. And that is exactly what has happened. The lawsuit was removed from NY state court to federal court, and while Genius tried to send it back, the judge not only rejected that request, but she dismissed the entire lawsuit for failure to state a claim (that's legal talk for "wtf are you even suing over, that doesn't violate any law, go home.")

There were a bunch of issues that Genius tried to raise, but all of them were pretend issues. As we noted all along, Genius has no copyright interest in the lyrics (indeed, it has to license them too -- and, amusingly, in its early days, songwriters accused Genius of being a "pirate" site for not licensing those lyrics...). And so Genius tried to make a bunch of claims without arguing any copyright interest, but these were all really attempted copyright claims in disguise, and the court rightly pointed out that copyright pre-empts all of them.

Breach of contract? Nah, copyright pre-empts that:

Plaintiff’s breach of contract claims are nothing more than claims seeking to enforce the copyright owners’ exclusive rights to protection from unauthorized reproduction of the lyrics and are therefore preempted. The parties agree that Plaintiff is not the owner of the copyrights to any of the lyrics it transcribes, and Plaintiff concedes that it licenses lyrics from the copyright owners.... Although Plaintiff describes the rights it seeks to enforce as “broader and different than the exclusive right existing under the Copyright Act,” based on “the substantial investment of time and labor by [Plaintiff] in a competitive market,” ... and asserts breach of contract claims based on alleged violations of Plaintiff’s Terms of Service, Plaintiff’s own ability to transcribe and display the lyrics on its website arises from the licensing rights Plaintiff has in the lyrics....

[....]

Plaintiff’s argument is, in essence, that it has created a derivative work of the original lyrics in applying its own labor and resources to transcribe the lyrics, and thus, retains some ownership over and has rights in the transcriptions distinct from the exclusive rights of the copyright owners.... This argument is consistent with the treatment of derivative works under federal copyright law....

Plaintiff likely makes this argument without explicitly referring to the lyrics transcriptions as derivative works because the case law is clear that only the original copyright owner has exclusive rights to authorize derivative works....

Even accepting the argument that Plaintiff has added a separate and distinct value to the lyrics by transcribing them such that the lyrics are essentially derivative works, because Plaintiff does not allege that it received an assignment of the copyright owners’ rights in the lyrics displayed on its website, Plaintiff’s claim is preempted by the Copyright Act because, at its core, it is a claim that Defendants created an unauthorized reproduction of Plaintiff’s derivative work, which is itself conduct that violates an exclusive right of the copyright owner under federal copyright law.

Unjust enrichment? Yup. Pre-empted by copyright law. In that case, Genius had pointed to one case that showed an unjust enrichment claim avoided pre-emption, but the court points out that that case was quite different.

While the court in CVD Equipment Corp. listed deception as an extra element sufficient to avoid preemption, the Court finds, based both on the facts in that case and the Second Circuit decisions cited in support, that the decision in CVD Equipment Corp. was based on the defendant’s alleged abuse of a fiduciary relationship, which is not present in this case. The factual allegations in CVD Equipment Corp., described above, clearly supported a claim that the defendants had unjustly enriched themselves by abusing a fiduciary relationship.... Moreover, the two Second Circuit cases the district court relied on in making its ruling further support the conclusion that the basis for the court’s holding was not that the plaintiffs had alleged “deception,” but rather, that they had alleged the abuse of fiduciary relationships. In Kregos, cited by the court in CVD Equipment Corp., in finding that the plaintiff’s unfair competition claim was preempted, the Second Circuit stated that “unfair-competition claims based upon breaches of confidential relationships, breaches of fiduciary duties and trade secrets have been held to satisfy the extra-element test and avoid § 301 preclusion.” ... Similarly, in Computer Associates International, Inc., also cited by the court in CVD Equipment Corp., the Second Circuit noted that the “state law rights that . . . satisfy the extra element test, and thus are not preempted by section 301 . . . include unfair competition claims based upon breaches of confidential relationships, breaches of fiduciary duties and trade secrets.”... In contrast, in this case, Plaintiff has not alleged that Defendants abused a confidential or fiduciary relationship.

Unfair competition? Sorry, nope. Pre-empted by copyright.

Plaintiff’s unfair competition claims are preempted by the Copyright Act. Plaintiff alleges that Defendants “misappropriated content from [Plaintiff’s] website,”... in “an unjustifiable attempt to profit from [Plaintiff’s] expenditure of time, labor and talent in maintaining its service,”... Plaintiff has not alleged that Defendants breached any fiduciary duty or confidential relationship, or that Defendants misappropriated Plaintiff’s trade secrets. Instead, Plaintiff’s claims are precisely the type of misappropriation claims that courts have consistently held are preempted by the Copyright Act....

Plaintiff’s claims are essentially “reverse passing off” claims, as Plaintiff alleges that Defendants copied Plaintiff’s work product — song lyrics displayed on its website — and attempted to pass them off as either, in LyricFind’s case, its own work product or, in Google’s case, either its own work product or work product it was licensed to display.... Unfair competition claims involving allegations of reverse passing off are preempted by the Copyright Act.

How about "bad faith" claims under NY state law? Here, we see the zombie of the never ending SCO v. IBM case, which Genius sought to use in support. But, there's a problem. That case was in the 10th Circuit. This case is in the 2nd.

The Tenth Circuit’s decision in SCO Group, Inc. is directly contradicted by caselaw in this Circuit, discussed above, finding that New York unfair competition claims alleging misappropriation of copyrightable works are preempted by the Copyright Act. Regardless of how the Tenth Circuit interpreted the “bad faith” element of New York unfair competition claims, in this Circuit, “bad faith” on its own is not sufficient to avoid preemption — if it were, unfair competition claims under New York law would never be preempted.

Unfairness under California law? Pre-empted. Easily.

The Second Circuit has held that “[n]o matter how ‘unfair’” a defendant’s alleged conduct is, “such unfairness alone is immaterial to a determination whether a cause of action has been preempted by the Copyright Act.”

Deceptive, unethical, and immoral conduct? By this point you can feel the judge getting bored of having to repeat herself.

Courts in this Circuit have found that deception is not an extra element that saves an unfair competition claim from preemption.

And thus, the case is tossed completely.

Given that the Court finds that all of Plaintiff’s state law claims are preempted by the Copyright Act, and Plaintiff has not asserted any federal law claims, the Court dismisses the Complaint for failure to state a claim.

Don't try to pretend that you have a pseudo-copyright in content you have no copyright over.

Read More | 13 Comments »

Daily Deal: The 2020 Work From Anywhere Bundle

from the good-deals-on-cool-stuff dept

by Daily Deal - August 12th @ 10:38am

Working from home can be amazingly convenient but really hard at the same time. To successfully work remotely you need key skills: focus, self-motivation, communication, collaboration, and more. The 2020 Work From Anywhere Bundle can help you make the transition from working in an office to working remotely or for yourself. It's on sale for $30.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.

Comment »

Why Are There Currently No Ads On Techdirt? Apparently Google Thinks We're Dangerous

from the content-moderation-at-scale dept

by Mike Masnick - August 12th @ 9:37am

You probably didn't notice it, but there are currently no third-party ads on Techdirt. We pulled them down late last week, after it became impossible to keep them on the site, thanks to some content moderation choices by Google. In some ways, this is yet another example of the impossibility of content moderation at scale. If we didn't know and understand how impossible content moderation at scale is to do well, we might be like The Federalist and pretend that Google's content moderation decisions were based on disagreement with our ideology. That would have allowed us to push a fake story like the one that is still getting news cycles, thanks to idiots in Congress insisting that Google defunded the Federalist because of its ideological viewpoints.

The truth is that Google's AdSense (its third-party ad platform) content moderation just sucks. In those earlier posts about The Federalist's situation, we mentioned that tons of websites deal with those "policy violation" notices from Google all the time. Two weeks ago, it went into overdrive for us: we started receiving policy violation notices at least once a day, and frequently multiple times per day. Every time, the message was the same, telling us we had violated their policies (they don't say which ones) and we had to log in to our "AdSense Policy Center" to find out what the problem was. Every day for the ensuing week and a half (until we pulled the ads down), we would get more of these notices, and every time we'd log in to the Policy Center, we'd get an ever rotating list of "violations." But there was never much info to explain what the violation was. Sometimes it was "URL not found" (which seems to say more about AdSense's shit crawler than us). Sometimes it was "dangerous and derogatory content." Sometimes it was "shocking content."

But that would be about it. One difference, however, was that in the past Google would say that we didn't need to fix those flagged URLs and that they would just stop showing ads on those pages. Which is fine. They don't want their ads appearing there, no problem. But, many of these new "policy violations" said they were "must fix" issues. But what that "fix" should be was never explained. Incredibly, this included a non-existent URL (a malformed URL that would just take you to the front page of Techdirt). That was deemed "must fix." Also, somewhat amusingly, the tag page for Google was deemed "dangerous or derogatory" and "must fix":

Same with the tag page for "content moderation." I only wish I were joking:

Again, what you see there was basically all of the information given to us. How do we "fix" that? Who the fuck knows? Again, I do not think that this is Google targeting us for our views (even when we have been critical of Google or Google's content moderation practices). It just seems to be that content moderation is impossible to do well, and Google is a prime example.

Incredibly, this list of problematic URLs would just keep changing. Some would drop off the list with no explanation (even the "must fix" ones). Some new ones would be added. Some would switch between "must fix" and "don't need to fix." No explanation. No record of the "fixes." As an example, on Friday July 31st, I logged in and saw 25 URLs deemed to be policy violations. On Saturday morning I logged in and it was down to 18. No reason. Sunday morning it was at 22. But Sunday evening it was 27.

I tried to reach out to people at AdSense to figure out what the hell we should do and did not get back anything useful.

Three other things happened around this time as well. First, on the same day we started receiving these daily (or multiple times daily) policy violation emails, Google also started claiming that our daily emails (which are just snapshots of the blog itself) were phishing attempts, and automatically deleting them from any G Suite user's email account:

For users of Gmail (not G Suite) it just moved our newsletters to spam, still claiming they were phishing attempts:

Again, the emails don't ask users to do anything or to log in to anything. They're not phishing. They're just an email version of the day's blog posts. We didn't see how these two things (the AdSense violations and the accusations of "phishing") could possibly be connected, so it might just be a coincidence that they started the exact same day -- but, again, who knows?

The next thing that happened was that the company we work with to manage the ad flow on our website (and to bring in other sources, beyond Google ads) told us that Google had reached out to them (not us) to say that because of all of the ongoing unfixed "policy" violations, we would be kicked out of AdSense by the end of August. Also, Google told them that we were engaging in "clickspam" by hiding our ads to make them look like regular content, and that needed to be fixed immediately. The problem is -- we don't do that and have never done that. Our ads were always in the right hand column and clearly called out as ads. Indeed, we pay attention to what other sites do, and we are way, way, way, way more careful than basically every other website on the planet when it comes to not shoving our ads where they might be mistaken as organic content.

Finally, we started receiving reports from multiple Techdirt visitors (including those who told us they had purposefully whitelisted Techdirt from their ad blockers) that ads being delivered by Google were causing their computers to run hot. Multiple reports of ads on Techdirt failing to load properly, and causing Techdirt to fail to load properly. And also causing fans to turn on. And, to be honest, that's the last straw for us. We would try to work with Google to understand why our content is so problematic for it, but when Google's products start harming our users and causing a nuisance for them, that's when they've got to go.

Given all this, we just decided that we're pulling the ads off the site entirely for the time being -- at least until we can figure out a better situation. This (obviously) represents a revenue hit for us, but the situation had become impossible to deal with. I was wasting so much time the past few weeks trying to figure out what the hell we were supposed to do, as opposed to doing the work I needed to be doing. So, that's it for now. We're looking at other providers out there, but so far, so many of the ones we talk to appear to be sketchy, and we're not doing that either. If anyone knows of any non-sketchy, non-awful advertising partners, please let us know. Or, if you happen to have some excess money and want to just sponsor stuff so we don't even have to worry about regular ads, let us know. Assuming most of you are not in that position, we do have a page of various ways individuals can support us. We know that times are tough for many, many people right now, but if you happen to be doing okay, and can help us replace at least a little of what money we made from ads, that would be greatly appreciated.

85 Comments »

The Idea That Banning TikTok Thwarts Chinese Intelligence In Any Way Is Ridiculous

from the sitting-at-the-kid's-table dept

by Karl Bode - August 12th @ 6:12am

As we've noted a few times, not much about the Trump administration's ban of TikTok makes coherent sense. Most of the biggest TikTok pearl clutchers in the GOP and Trump administration have actively opposed things like basic US privacy laws or even improving election security, and were utterly absent from efforts to shore up other privacy and security problems, be it the abuse of cellular location data or our poorly secured telecom infrastructure. It's a bunch of xenophobia, pearl clutching, and performative politics dressed up as serious adult policy that doesn't even get close to fixing any actual problems.

And yet, many reporters and internet experts keep parroting the idea that banning TikTok somehow "protects U.S. consumers" or "prevents the Chinese government from obtaining U.S. consumer data." You're to ignore that Americans install millions of Chinese-made "smart" TVs, fridges, and poorly secured IOT gadgets on home and business networks with reckless abandon. Or that international corporations not only sell access to consumer data to any nitwit with a nickel, they often leave it unencrypted in the cloud. Or that the U.S. has no privacy law for the internet era, and corporations routinely see performative wrist slaps for privacy and security incompetence.

The idea that Chinese intelligence, with zero scruples and an unlimited budget, "needs" TikTok access to spy on Americans' data in this environment is just silly nonsense. And yet, here we are.

It's all even more absurd when you consider the scope and complexity of global adtech markets. As Gizmodo's Shoshana Wodinsky recently explored, international adtech is a complex, unaccountable monster. This orgy of consumer tracking, behavioral data, and "anonymized" (read: not actually anonymous at all) datasets is so complex, even folks that cover the sector have a hard time understanding it. Thinking we can control what data the Chinese government is gleaning from this tangled web -- or that even selling TikTok to Microsoft somehow "fixes" anything -- is an act of hubris in full context:

"Over time, what’s become very, very clear is that while, say, Google and Facebook and TikTok are ultimately at the whims of local regulators, the same can’t be said about the digital Rube Goldberg machine of platforms, subsidiaries and shady third-parties these companies use to churn our data into massive profits. Our current digital economy, to a certain degree, depends on global ties that are built to run far deeper than any ban or buyout could ever hope to touch.

Or, to put it more bluntly: If Trump’s real concern is keeping the data of our squeaky clean American phones out of the clutches of that dirty, no-good communist adversary China, then Microsoft buying TikTok won’t do shit—our data is making its way from U.S. companies to servers in China constantly, regardless of who owns TikTok."

Thinking a ban of one Chinese teen dancing app addresses any of this is just absurd. Similarly, the idea that our defunded, understaffed, and kneecapped privacy regulators at the FTC can actively track or manage any of this (without serious reform, more staff, and a bigger budget) is equally silly.

As Wodinsky notes, if this market is so large that journalists, experts, and privacy and security regulators can't ferret out where your data winds up and who is actively cashing in on access to it, the idea that banning or selling TikTok is some mystical foil for one of the most powerful intelligence-gathering governments on the planet is a bizarre pipe dream:

"The wonderful world of digital advertising is built on black boxes inside of black boxes inside of black boxes. This means that there’s a good chunk of people who work at these companies who likely can’t tell you exactly where your name, your phone number, your precise location, or other personal data might actually end up."

And yet if you read numerous press reports and analyst insights into the TikTok fracas, they'll fail utterly to include any of this as essential context, which in turn lets the Trump administration pretend that this political tap dance is actually helpful internet policy.

27 Comments »

Georgia Governor Passes Law Granting Cops Protected Status For 'Bias-Based' Crimes

from the saving-the-protectors-from-the-protected dept

by Tim Cushing - August 12th @ 3:13am

Georgia governor Brian Kemp -- last seen here trying to turn his own election security problems into a Democrat-led conspiracy -- has just proven he's unable to read the room. The governor can't read the room in his own state, much less the current state of the nation. Less than a month after the killing of George Floyd in Minneapolis triggered nationwide protests against police violence, officers in Atlanta were involved in a controversial killing of a Black man in a fast food restaurant parking lot.

The state of the nation is pretty much the same as it is in Georgia. Now is not the time to be offering police officers even more legal protections, considering how much they've abused the ones they already have. Idiotic bills touted by legislators saying stupid things like "blue lives matter" come and go. Mostly they go, since they're either redundant or unworkable. These laws try to turn a person's career choice into an immutable characteristic, converting some of the most powerful people in the nation into a class that deserves protection from the public these officers are sworn to serve.

It's now possible to commit a hate crime against a cop in Georgia, thanks to Kemp and his party-line voters.

Gov. Brian Kemp signed a proposal into law Wednesday that Republicans pushed to grant police new protections despite stiff opposition from critics who said it creates a messy tangle of legal problems.
[...]

In a statement, Kemp said he took action because he has attended the funerals of too many law enforcement officers killed in the line of duty, and he called the measure a “step forward as we work to protect those who are risking their lives to protect us.”

“While some vilify, target and attack our men and women in uniform for personal or political gain, this legislation is a clear reminder that Georgia is a state that unapologetically backs the blue,” Kemp said.

The standalone bill was a concession to state Republicans, who refused to help pass an actual hate crime bill without being able to give more protections to already-very well-protected police officers. Not only does the law make it a crime to engage in "bias motivated intimidation" of police officers and first responders, it gives them a way to exact revenge on anyone they believe has wronged them. From the bill [PDF]:

A peace officer shall have the right to bring a civil suit against any person, group of persons, organization, or corporation, or the head of an organization or corporation, for damages, either pecuniary or otherwise, suffered during the officer's performance of official duties, for abridgment of the officer's civil rights arising out of the officer's performance of official duties, or for filing a complaint against the officer which the person knew was false when it was filed.

Critics of the bill believe this addition to state law would give officers a way to sue anti-police protesters for whatever harms officers feel they've suffered while policing demonstrations. And it would affect more than protesters. Anyone interacting with a police officer runs the risk of being sued because "damages suffered" is limited only by the officer's imagination and the court's tolerance. Even if the suit is baseless, the defendant still has to show up and defend themselves, using their own money while officers play litigation roulette with the taxpayers' bankroll.

Then there's the heart of the law, which makes certain acts hate crimes:

A person commits the offense of bias motivated intimidation when such person maliciously and with the specific intent to intimidate, harass, or terrorize another person because of that person's actual or perceived employment as a first responder:

(1) Causes death or serious bodily harm to another person; or

(2) Causes damage to or destroys any real or personal property of a person because of actual or perceived employment as a first responder without permission and the amount of the damage exceeds $500.00 or the value of the property destroyed exceeds $500.00.

Those acts are punishable by five years and/or a $5,000 fine. And the acts described are already crimes. Doubling down on crimes to make cops feel special may actually make things more ridiculous, as the ACLU has explained.

According to the bill, anyone found guilty of the death, serious bodily harm or destruction of more than $500 worth of property of a first responder, specifically because of his or her occupation, would face between one and five years in prison and/or a fine up to $5,000.

Currently, the punishment for murder includes death, life in prison without the possibility of parole or life in prison.

Since the targeted killing of a police officer could be considered “bias motivated intimidation” of a first responder, the ACLU says a legal argument called the “rule of lenity” requires courts to pursue the charge that is the most favorable to a defendant.

And the most lenient charge is the new law, which calls for only a five-year sentence (maximum) for killing a cop if the crime appears to have been motivated by anti-cop bias. Prosecutors who want to do the most damage to cop-killers won't be pursuing bias charges. They'll ignore the new law completely. Legislators were apparently made aware of this conflict prior to the bill's passage but figured it would sort itself out once it became law.

Senate Judiciary Committee Chairman Jesse Stone, a Waynesboro Republican and attorney, said he was made aware of a potential problem with the legislation late Friday.

“I think if they were charged with bias motivated (intimidation), that might be a concern,” said Stone, who voted for HB 838 and is retiring this year. “I haven’t studied it, but I think it’s something that should be looked into.”

Yes, the best time to look into potential problems with legislative proposals is after they've become law. Everything about this new law is terrible, including its path to the governor's desk. It passed with one vote, divided entirely along party lines. And it shows one party is far more concerned with pandering to its powerful law enforcement voter base than protecting citizens from their public servants.

Read More | 43 Comments »


  • This mailing list is a public mailing list - anyone may join or leave, at any time.
  • This mailing list is announce-only.

Techdirt's original daily email. Once a day, Techdirt will email the full-length version of the previous day's stories from Techdirt.com (based on Pacific time).

Privacy Policy:

Floor64 will not share your email address with third parties.
