Stories from Wednesday, February 3rd, 2021
Annual Reminder: You Can Probably Just Call The Super Bowl The Super Bowl
from the touchdown dept
by Timothy Geigner - February 3rd @ 7:36pm
It's that special time of year again, when we here at Techdirt need to remind you that, no, the NFL cannot keep you from referring to The Super Bowl as The Super Bowl, full stop. While the NFL stomps around the entire country every year, slapping down bars and churches for hosting Super Bowl parties, all while an extremely unhelpful media plays along, the truth is that most of the bullying the NFL does isn't over actual trademark infringement. Sure, if some business advertised an association with, or endorsement by, the NFL or the game itself, that would be trademark infringement. But a church simply hosting a Super Bowl party is not trademark infringement.
And, of course, the silliest output of this confusion is people and companies using half-baked euphemisms to refer to the Super Bowl instead. Everyone knows what they're talking about and, yet, this somehow isn't infringing. So, were there any confusion, it would still exist, and yet the NFL relents. The most common of these has been "The Big Game", of course, and its use continues to this day.
Restaurants have taken to calling it the Big Game because the NFL trademarked the name "Super Bowl" and jealously defends its use. But whatever you call it, Dallas restaurants are offering a superabundance of specials and takeout options for Sunday's game. We'll just call them Super Bowl specials because we can.
And so can everyone else. Really. Go ahead. This "the Big Game" nonsense is modernity's "fire in a crowded theater." But, because trademark bullying works, and everyone is so terrified of the NFL, instead you get this...
Not to be tripped up by trademark hassles, GAPCo got creative in naming their game-day deal. The Superb Owl Sampler includes 12 garlic knots, 12 toasted ravioli (six cheese, six beef), 12 pizza poppers with large ranch and sauces for dipping. The sampler ($55) feeds up to 10 people.
How the hell do you even parody something like that?
But if you really want to get yourself irritated, actual United States government agencies are getting in on this euphemistic bullshit. The US Consumer Product Safety Commission made it all even sillier with its own messaging on Twitter.
It's Super Bowl Week, also known as, Large Football Game week. If you're getting a new TV this week, make sure you're getting TV and furniture tip-over straps too #AnchorIt pic.twitter.com/SAxQo93Qjb
— US Consumer Product Safety Commission (@USCPSC) February 1, 2021
Why in the name of Tom Brady's sweaty jock strap would you put out a tweet that names the Super Bowl and then put out an image that uses a euphemism for it? And, related: "the Large Football Game"? I'm frankly tempted to see that graphic as an attempt to poke fun at the NFL for its protectionist nonsense, but somehow I don't think the CPSC has that much of a sense of humor.
Stop. STOP. Stop giving the NFL a power it doesn't actually have. Stop acting like the league can somehow gatekeep reality. It can't. Just call the Super Bowl by its damned name. It's not Voldemort, after all.
from the word-filters dept
by Copia Institute - February 3rd @ 3:48pm
Summary: GitHub solidified its position as the world's foremost host of open source software not long after its founding in 2008. Twelve years later, GitHub hosts 190 million repositories and 40 million users.
Even though its third-party content is software code, GitHub still polices that content for violations of its terms of service. Some violations are relatively overt, like possible copyright infringement. Others are tougher to track down.
A GitHub user found themselves targeted by a GitHub demand to remove certain comments from their code. The user's code contained the word "retard" -- a term that, while offensive in certain contexts, isn't offensive when used as a verb to describe an intentional delay in progress or development. But rather than simply inform the user of this violation, GitHub chose to remove the entire repository, causing users who had forked this code to lose access to their repositories as well.
It wasn't until the user demanded an explanation that GitHub finally provided one. In an email sent to the user, GitHub said the code contained content the site viewed as "unlawful, offensive, threatening, libelous, defamatory, pornographic, obscene, or otherwise objectionable." More specifically, GitHub told the user to remove the words "retard" and "retarded," restoring the repository for 24 hours to allow this change to be made.
Decisions for GitHub:
Unfortunately for GitHub, this drew attention to its less-than-consistent approach to terms of service violations. Searches for words considered "offensive" by GitHub turned up dozens of other potential violations -- none of which appeared to have been targeted for removal despite the inclusion of far more offensive terms/code/notes.
And the original offending code was modified with a tweak that replaced the word "retard" with the word "git" -- terms that are pretty much interchangeable in some parts of the world. The not-so-subtle dig at GitHub and its inability to detect nuance may have pushed the platform towards reinstating content it had perhaps pulled too hastily.
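The GitHub episode is a textbook instance of the so-called "Scunthorpe problem": keyword matching that ignores context can't tell an offensive use of a word from a legitimate technical one. Here is a minimal, hypothetical sketch of why such filters misfire -- the `BLOCKLIST` and `naive_flag` names are purely illustrative, not GitHub's actual moderation logic:

```python
# Minimal sketch of why naive keyword filtering misfires on code.
# Illustrative only -- this is NOT GitHub's actual moderation system.

import re

BLOCKLIST = {"retard"}  # hypothetical list of disallowed words

def naive_flag(text: str) -> bool:
    """Flag text if any blocklisted word appears, ignoring all context."""
    words = re.findall(r"[a-z]+", text.lower())
    return any(w in BLOCKLIST for w in words)

# A legitimate technical comment: "retard" used as a verb meaning
# "to delay," much like the usage in the repository GitHub took down.
comment = "# Retard the spark timing to prevent engine knock."
print(naive_flag(comment))  # True -- flagged despite the benign usage
```

A context-blind filter flags this comment exactly as it would a slur, which is why moderation of code repositories tends to require human review rather than simple word lists.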
Originally posted on the Trust & Safety Foundation website.
Federal Court Orders Destruction Of Illegally-Obtained Sex Trafficking Sting Recordings
from the no-sex-traffickers-were-harmed-during-the-course-of-this-investigation dept
by Tim Cushing - February 3rd @ 1:44pm
The expiring breaths of a sensationalistic failure are emanating from a Florida sex trafficking investigation's soon-to-be corpse. A massive sting operation -- built on surreptitious recordings of massage parlor employees and their customers -- ended with nothing more than a bunch of solicitation charges. The alleged massive sex trafficking operation was actually just a bunch of consensual activity, with massage parlor employees free to come and go as they pleased.
It still made headlines, mainly because New England Patriots owner Robert Kraft was one of those caught on camera. But nearly every attempted prosecution has been thwarted by the actions of law enforcement officers, whose recordings illegally intruded into private spaces, violating the Fourth Amendment. A Florida appeals court tossed the allegedly incriminating recordings, finding them unconstitutional.
For some reason, the agencies that made the surreptitious, illegal recordings are still holding onto them. The state attorney's office has allowed the retention of the videos, claiming they might be useful to plaintiffs suing law enforcement officers and agencies over violated rights.
On the face of it, this seems like a reasonable assertion. There is at least one federal lawsuit involving this sting operation underway. But the state attorney -- David Aronberg -- thinks immunity (qualified or absolute) will allow him and several law enforcement agencies to escape unscathed. In the meantime, Aronberg wants the recordings to remain intact until this litigation concludes, claiming his office can't "legally or ethically" order the destruction of potential evidence against him.
But his arguments aren't working. As Elizabeth Nolan Brown reports for Reason, a federal judge has ruled against the state attorney.
In his January 22 order, Ruiz granted John Doe's motion to compel destruction of the massage room video. Ruiz ruled that the defendants "shall destroy the videos unlawfully obtained through the surveillance of the Orchids of Asia Day Spa […] from January 18, 2019 to January 22, 2019, including any body camera footage obtained during associated traffic stops as well as any copies thereof."
The motion to compel destruction was unopposed, and Ruiz noted that the destruction is "pursuant to the terms of the parties' settlement agreement."
So, let's sort this all out. The state attorney claimed the footage needed to be retained because these plaintiffs might want to use it as evidence in their lawsuit. But the plaintiffs actually wanted the footage destroyed and had to get the court to order the destruction the state attorney claimed wasn't "legal or ethical."
Retaining the footage plaintiffs wanted destroyed was, at the very least, unethical. And this order makes any further retention illegal. It would have seemed apparent destruction was the right way to go unless the plaintiffs requested otherwise, given that the state appeals court ruled last year that the recordings were illegally obtained and could not be used as evidence in the state's prosecutions.
This about wraps up this sordid little law enforcement escapade. And another sex trafficking sting resulting in the arrest of zero sex traffickers is par for the course for law enforcement agencies which appear to be looking for any excuse to engage in titillating wastes of taxpayers' time and money.
Facebook Oversight Board's First Decisions... Seem To Confirm Everyone's Opinions Of The Board
from the take-a-deep-breath dept
by Mike Masnick - February 3rd @ 12:09pm
Last week, the Oversight Board -- the official name that the body formerly known as the Facebook Oversight Board now goes by -- announced decisions on the first five cases it has heard. It overturned four Facebook content moderation decisions and upheld one. Following the announcement, Facebook announced that (as it had promised) it followed all of the Oversight Board's decisions and reinstated the content in the overturned cases (in one case, involving a breast cancer ad taken down for violating the "no nudity" policy, Facebook actually reinstated the content last year, after the Board announced it was reviewing that decision). If you don't want to wade into the details, NPR's write-up of the decisions and policy recommendations is quite well done and easily digestible.
If you want a more detailed and thoughtful analysis of the decisions and what this all means, I highly recommend Evelyn Douek's detailed analysis of the key takeaways from the rulings.
What I'm going to discuss, however, is how the decisions seem to have only reinforced... absolutely everyone's opinions of the Oversight Board. I've said before that I think the Oversight Board is a worthwhile experiment, and one worth watching, but it is just one experiment. And, as such, it is bound to make mistakes and adapt over time. I can understand the reasoning behind each of the five decisions, though I'm not sure I would have ruled the same way.
What's more interesting to me, though, is how so many people are completely locked in to their original view of the board, and how insistent they are that the first decisions only confirm their position. It's no secret that many people absolutely hate Facebook and view absolutely everything the company does as unquestionably evil. I'm certainly not a fan of many of the company's practices, and don't think that the Oversight Board is as important as some make it out to be, but that doesn't mean it's not worth paying attention to.
But I tended to see a few different responses to the first rulings, which struck me as amusing, since the positions are simply not disprovable:
1. The Oversight Board is just here to rubberstamp Facebook's decisions and make it look like there's some level of review.
This narrative is slightly contradicted by the fact that the Oversight Board overturned four decisions. However, people who believe this view retort that "well, of course the initial decisions have to do this to pretend to be independent." Which... I guess? But seems like a lot of effort for no real purpose. To me, at least, the first five decisions are not enough to make a judgment call on this point either way. Let's see what happens over a longer time frame.
2. The Oversight Board is just a way for Facebook and Zuckerberg not to take real responsibility
I don't see how this one is supportable. It's kind of a no-win situation either way. Every other company in the world that does content moderation has a final say on their decisions, because it's their website. Facebook is basically the first and only site so far to hand off those decisions to a 3rd party -- and it did so after a ton of people whined that Facebook had too much power. And the fact that this body is now pushing back on Facebook's decisions suggests that there's at least some initial evidence that the Board might force Zuckerberg to take more responsibility. Indeed, the policy recommendations (not just the decisions directly on content moderation) suggest that the Board is taking its role as being an independent watchdog over how Facebook operates somewhat seriously. But, again, it's perhaps too early to tell, and this will be a point worth watching.
3. The Oversight Board has no real power, so it doesn't matter what they do.
The thing is, while this may be technically true, I'm not sure it matters. If Facebook actually does follow through and abide by the Board's rulings, and the Board continues on the initial path it has set of being fairly critical of Facebook's practices, then for all intents and purposes it does have real power. Sometimes, the power comes just from the fact that Facebook may feel generally committed to following through, rather than from any kind of actual enforcement mechanism.
4. The Oversight Board is only reviewing a tiny number of cases, so who cares?
This is clearly true, but again, the question is how it will matter in the long run. At least from the initial set of decisions, it's clear that the Oversight Board is not just taking a look at the specific cases in front of it, but thinking through the larger principles at stake, and making recommendations back to Facebook about how to implement better policies. That could have a very big impact on how Facebook operates over time.
As for my take on all of this? As mentioned up top, I think this is a worthwhile experiment, though I've long doubted it would have that big of an impact on Facebook itself. I see no reason to change my opinion on that yet, but I am surprised at the thoroughness of these initial decisions and how far they go in pushing back on certain Facebook policies. I guess I'd update my opinion to say I've moved from thinking the Oversight Board had a 20% chance of having a meaningful impact, to now it being maybe 25 to 30% likely. Some will cynically argue that this is all for show, and the first cases had to be like that. And perhaps that's true. I guess that's why no one is forced to set their opinion in stone just yet, and we'll have plenty of time to adjust as more decisions come out.
14 States Are Now Considering 'Right to Repair' Legislation
from the this-train-is-rolling dept
by Karl Bode - February 3rd @ 10:53am
Five years or so ago, frustration at John Deere's draconian tractor DRM culminated in a grassroots tech movement dubbed "right to repair." The company's crackdown on "unauthorized repairs" turned countless ordinary citizens into technology policy activists, after DRM (and the company's EULA) prohibited the lion's share of repair or modification of tractors customers thought they owned. These restrictions only worked to drive up costs for owners, who faced either paying significantly more money for "authorized" repair, or toying around with pirated firmware just to ensure the products they owned actually worked.
Of course the problem isn't just restricted to John Deere. Apple, Microsoft, Sony, and countless other tech giants eager to monopolize repair have spent years bullying independent repair shops and demonizing consumers who simply want to reduce waste and repair devices they own.
Fast forward to 2021, and fourteen different states are now considering right to repair legislation that would put power back in the hands of consumers and independent repair shops. Some states, like Montana, are considering several types of legislation that would cover both consumer hardware and agricultural equipment.
COVID is also pouring some gasoline on this fire, highlighting how manufacturers frequently enjoy a stranglehold over tools, documentation, and replacement parts, which can literally put human lives at risk by causing repair delays:
"Covid has changed our relationship with technology and it's obvious that laws need to catch up," Proctor said. "We need devices to work and learn, but manufacturers won't provide tools or information even when their stores are closed."
Throughout this whole movement, companies have tried to cling tightly to nonsense in a bid to derail momentum. Usually this involves hallucinating nonexistent harms that threaten public safety and security.
Such as when Apple insisted that passing a right to repair law in Nebraska would turn the state into a "mecca for hackers." Or more recently, when the auto industry tried to claim that expanding Massachusetts' existing consumer tech law, to make sure that independent garages could access tools and diagnostic gear, would result in a "boom in sexual predators." The multi-sector quest to demonize the right to repair movement is relentless, and almost always involves making up bogus harms related to security and safety.
The problem is nobody believes them, in large part because their motivations couldn't be more obvious. And the more outlandish attacks giants like Apple make on this genuine grass roots coalition, the more attention -- and momentum -- it receives.
Daily Deal: The Cooking and Baking Master Class Bundle
from the good-deals-on-cool-stuff dept
by Daily Deal - February 3rd @ 10:48am
The Cooking and Baking Master Class Bundle has 7 courses to help you elevate your culinary skills. You'll learn about sourdough starters, different flours, and how to bake anything from brownies to tarts to rolls and more. You'll also learn superfoods nutrition, meal planning, vegan cooking, and pizza making. To top it all off, there's a course on food photography so you can take the best photos of your new creations. The bundle is on sale for $30.
Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
from the oh-shut-up dept
by Mike Masnick - February 3rd @ 9:40am
Former Senator Joe Lieberman was a ridiculous censorial problem when he was a Senator. Back in the early days of social media, when questions of content moderation were first gaining attention, Lieberman was perhaps the original moral panic Senator, demanding censorship of 1st Amendment protected content. It started back in 2008, when he sent an angry letter to YouTube, saying that they had to take down "terrorist content." YouTube reviewed a bunch of the links he sent, and removed only the ones that violated YouTube's policies. That made Lieberman mad and he sent a second letter demanding that the company take down "terrorist" videos. He also did the same thing to Twitter. Because of the political pressure, these companies became more aggressive, leading them to... take down a human rights watchdog that was documenting war crimes. Because sometimes "terrorist videos" are actually... "documenting war crimes."
A smarter person might step back and realize that there's a lot of nuance here, and what seems "easy" may be a bit more complex. But not old Joe Lieberman. Instead, he ramped up his desire to censor. He demanded Amazon stop hosting Wikileaks, and pressured Google to add a "this blog is run by a terrorist" button to all Blogger blogs. He also tried to expand the Espionage Act to cover journalists who publish leaked information.
So perhaps it's not surprising that when CBS News asked Lieberman to come on with Major Garrett and discuss Section 230 and content moderation, Lieberman immediately jumped to "they should get rid of 230" and censor more nonsense. Major Garrett kicks the conversation off with... a total misrepresentation of Section 230.
Garrett: You referred to Section 230 of the Communications Decency Act, and it does give essentially a carte blanche... there is no liability for... platforms for dissemination and placing essentially a foundation beneath things that are disinformation. You said it should be changed.
First of all, that awkward statement is... not accurate. Disinformation is protected by the 1st Amendment. Platforms have no liability for disinformation because of the 1st Amendment not because of Section 230. You'd think that someone on CBS News, which relies heavily on the 1st Amendment, would know that.
I mean, if providing a platform for disinformation wasn't protected, why, we could all sue CBS News for the disinformation spewed in this interview by Major Garrett and Joe Lieberman... about Section 230. Anyway, Lieberman does his Lieberman thing:
Lieberman: So, the social media companies. The internet platforms have such impact in so many ways. Just think about it. A lot of the activity leading up to January 6th occurred on the internet.
Yeah. But it also occurred on TV. A lot of it occurred coming out of the President's mouth. A lot of it occurred on Fox News. So why are you blaming the internet?
The terrorists communicate. The Islamic terrorists on jihadist websites.
Um. So are you saying people shouldn't communicate? Do we take away phone service from people who might be terrorists? And, Joe, if they're communicating on Jihadist websites that means they're not communicating on social media. If you force them off social media, then they communicate elsewhere where it's harder for the intelligence community to track them.
Finally, Lieberman does recognize that maybe the solution isn't easy, but then he immediately jumps to a "nerd harder" kind of thinking:
You know, it's not easy, for the internet companies. But, to give them total immunity from liability encourages them to be irresponsible. Not responsible at all for what's on the internet.
Except that's wrong. Facebook has hired 30,000 content moderators and invested heavily in technology to help as well. If they were "not responsible at all," why would they do that? All of the big social media companies have large policy and trust & safety teams that take all of these issues very seriously. And just because you don't have legal liability doesn't mean there aren't tremendous other reasons and incentives for the companies to be responsible. Without good moderation practices, sites fill up with spam, abuse, and harassment, and that drives away users. It also drives away advertisers. So these companies have a strong self-interest in doing the best job they can when it comes to moderation.
So, look. One simple answer, maybe not the best, but it would do it, would be to just repeal that exemption from liability, that Section 230 gives these companies. And what does that mean? It means that they could be SUUUUEED by victims of whatever is on their platform. They'll probably look for something less than that. But that is a simple clean answer, and it ultimately leaves it to our system of law and our courts.
Yes, he drags out the "sued." But... Joe... sued for what? Again, so far everything you've described is protected speech under the 1st Amendment. So what are they going to be sued for? For making you upset? That's not how any of this works.
The reason I said it's not easy for the companies is this: there's obviously some things that just shouldn't be on there, and they should just take them off. I once had a conversation with somebody at YouTube, and I was complaining about the incendiary, violent, stimulating, sermons by a particular Islamic cleric who was operating out of Yemen. Eventually we killed him with a drone. His name was Awlaki. And the woman I was speaking to at YouTube said, 'you know what, we understand what you're saying, but how can we tell the difference between just a regular sermon he's giving and when he's actually inciting violence?' And I said, it's not easy, but you can do it. [Laughs] I mean, I can tell the difference between whether, when a priest or a minister or a rabbi... of course, none of them that I know are incentivizing violence [Laughs]... But when they're over the line, or they're just talking to a religious base. And they can do it too.
This is so ignorant it's depressing. He totally missed the point of what the person from YouTube was trying to tell him. Determining which content is okay and which is not when you're dealing with 500 hours of video uploaded to YouTube every minute, in a variety of languages around the world, and some of it using euphemisms or coded speech, is not the same as Lieberman himself sitting there deciding "oh this video is bad." What the person from YouTube was trying to tell him is the same thing that we've been saying here for years. It's not that it's not easy, it's that it's impossible to do it well, because humanity and society is messy. And we can see that in the way that Lieberman's earlier demands to censor "terrorism" resulted in the disappearance of war crimes documentation from human rights groups.
Lieberman may laugh that off, but it's only because he's a very foolish man. Repealing Section 230 fixes none of what he's talking about.
A good reporter might have pointed that out. But this is CBS News, which just recently did an entire misleading 60 Minutes episode that got almost everything about Section 230 wrong.
You really want to believe that major media companies -- who failed to embrace the internet -- aren't purposefully lying about Section 230 to help out their own corporations, but it happens so incredibly often that it really makes you wonder.
After Years Of Litigation, AT&T Customers Get A Measly $22 For Being Lied To Over 'Throttling'
from the what-accountability-doesn't-look-like dept
by Karl Bode - February 3rd @ 5:39am
Way back in 2014, the FTC sued AT&T for selling "unlimited" wireless data plans with very real and annoying limits. The lawsuit noted that, starting in 2011, AT&T began selling "unlimited" plans that actually throttled your downstream speeds by upwards of 90 percent after you used just two or three gigabytes of data. AT&T spent years trying to wiggle out of the lawsuit via a variety of legal gymnastics, including at one point trying to claim that the very same net neutrality and FCC Title II rules AT&T was attempting to kill prevented the FTC from holding it accountable.
In late 2019, AT&T agreed to a $60 million settlement with the FTC without actually admitting any wrongdoing. Consumers who were lied to and ripped off for years nabbed somewhere around $12 each. Another, separate California class action recently came to a close with AT&T agreeing to a $12 million settlement. There too, consumers are expected to get somewhere around $10 to $11 each, an amount plaintiffs accepted because they likely would have seen even less after a full trial:
"The new class-action settlement says that, on average, settlement-class members exceeded the throttling thresholds during 7.5 monthly billing periods. This means customers paid AT&T an average of $225 for unlimited data in months they were throttled. Despite that, plaintiffs concluded that the $10 or $11 settlement payments are a good deal compared to what they would likely get at trial."
Years of litigation, two major settlements, and $22 per consumer was the end result. Though lawyers wound up doing okay, grabbing their fair share of the $12 million:
"Here's how the numbers in the new settlement shake out. Of the $12 million, $462,000 is set aside for administrative costs and up to $3 million for attorneys' fees and expenses. The remainder would be distributed to the class, which is defined as California residents who bought unlimited mobile data from AT&T and exceeded the data usage threshold "for one or more monthly billing cycles such that the user would have been eligible for data usage slowing or deprioritization by AT&T in those billing cycles under AT&T's network management policies."
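The quoted figures can be sanity-checked with a little arithmetic. A minimal sketch, using only the numbers stated above -- note that the implied class size at the end is an inference from the per-member payout, not a figure from the settlement itself:

```python
# Sanity-checking the California settlement arithmetic from the
# quoted figures. Illustrative only; the class-size estimate at the
# end is an inference, not a number stated in the settlement.

settlement = 12_000_000   # total California settlement
admin_costs = 462_000     # set aside for administrative costs
attorney_fees = 3_000_000 # cap on attorneys' fees and expenses

# What's left to distribute to class members:
class_fund = settlement - admin_costs - attorney_fees
print(class_fund)         # 8538000

# Members averaged 7.5 throttled billing periods and paid an average
# of $225 for "unlimited" data in those months -- i.e., about $30/month:
print(225 / 7.5)          # 30.0

# A $10-11 per-member payout implies a class size of very roughly:
print(round(class_fund / 10.50))
```

So each class member paid AT&T roughly $30 a month while being throttled, and gets back about a third of one month's bill.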
And this of course could have been worse had AT&T succeeded in flinging these folks toward binding arbitration, a system advertised as more effective than class actions despite being demonstrably even more lopsided and pathetic.
Wireless carriers have been advertising "unlimited" plans and then lying about their very real limits for the better part of fifteen years now. Many are still doing it and will continue to do it. Why? The penalty is always a tiny, tiny fraction of the money earned by being misleading. The only real lesson here for AT&T is that stalling and litigation can easily blunt accountability for misleading or predatory business practices.