Techdirt Daily Newsletter for Saturday, September 12th, 2020

 
From: "Techdirt Daily Newsletter" <newsletters@techdirt.com>
Subject: Techdirt Daily Newsletter for Saturday, September 12th, 2020
Date: September 12th 2020

Are you interested in receiving a shorter, easy-to-scan email of post excerpts? Check out our new

Techdirt Daily Newsbrief

Stories from Friday, September 11th, 2020

 

The Next Generation Of Video Game Consoles Could Be The Beginning Of GameStop's Death

from the stop-discing-around dept

by Timothy Geigner - September 11th @ 7:39pm

Predictions about the death of video game retailer GameStop have been with us for at least a decade. There have been many reasons for such predictions, ranging from the emergence of digitally downloaded games gobbling up market share to declines in retail stores generally. But there are two recent headwinds that might frankly be the end of this once ubiquitous franchise as we know it.

The first headwind is one common to all kinds of retailers right now: the COVID-19 pandemic. The pandemic is almost certainly worse for GameStop than for retailers in other industries. As noted above, sales in the gaming industry have long been trending toward digital downloads. Yes, there are still those out there who insist on buying games on physical media, and in many cases there are good reasons for doing so, but the truth is that that market has been shrinking steadily for a long, long time. With the pandemic both shuttering many retail stores and keeping scared consumers out of those that remain open, digital's share of the gaming market has grown quickly. Whether anyone will want to go back to buying physical copies of games, new or used, is an open question.

All of which might not ultimately matter, as the other headwind is that the next generation of consoles is being released with options that have no built-in disc drive at all.

The latest quarterly earnings report from GameStop doesn't show much sign of a turnaround for the long-troubled game retailer. Sales were down 26.7 percent year over year for the April through June quarter. Even accounting for permanent store closures and COVID-related reduced operating hours, so-called comparable "same-store" sales were still down 12.7 percent year over year. GameStop's already depressed stock is down nearly 8 percent on the news, as of this writing.

GameStop still publicly sees an "opportunity to capitalize" on the upcoming release of new Sony and Microsoft consoles, which could help turn its business around in the short term. But there's some reason to believe the coming generation of consoles could actually make GameStop's long-term prospects worse, thanks to console options that get rid of disc drives entirely.

During a recent earnings call, CEO George Sherman tried to spin this in the opposite direction, pointing out that the new consoles include an option for a disc drive as a reason for optimism. A huge chunk of GameStop's money is made reselling used games that are marked up considerably. If the best a cheerleader for the company can muster is pointing out that, at least for this generation, some of the consoles will still have drives... well, that isn't great.

Especially when you put this all in context. Both Microsoft's and Sony's forthcoming Xbox and PlayStation consoles have discless options priced significantly below their disc-drive counterparts. That represents yet another reason why some gamers, who might not have gone all digital otherwise, will be jumping ship. Between the virus pushing more gamers to download games digitally, lower-priced consoles in the middle of an economic downturn, and the general trends that pre-date the pandemic, the analogies some are drawing for GameStop's future aren't pretty.

Sherman confirmed in the earnings call that GameStop will sell these disc-drive-free consoles in its stores, a move akin to a world where Tower Records decided to sell iPods as its physical album sales cratered.

Yikes.

Now, none of this suggests that every gamer everywhere is ready to give up discs. Nor should this be taken to indicate that retail game stores are going to become fully extinct. In fact, I don't think the Tower Records analogy is the best that can be drawn, even if we stay in the music space. Instead, it is beginning to feel inevitable that GameStop, or other companies, will become like modern-day record stores: there to cater to the niche market of those who want CDs and vinyl, with all of the nostalgia that's as important to buyers as the product itself.

But it sure as hell won't be the GameStop of the last two decades.

4 Comments »

Content Moderation Case Study: Pinterest's Moderation Efforts Still Leave Potentially Illegal Content Where Users Can Find It (July 2020)

from the pin-this dept

by Copia Institute - September 11th @ 3:35pm

Summary: Researchers at OneZero have been following and monitoring Pinterest's content moderation efforts for several months. The "inspiration board" website hosts millions of images and other content uploaded by users.

Pinterest's moderation efforts are unusual. Very little content is actually removed, even when it might violate the site's guidelines. Instead, as OneZero's researchers discovered, Pinterest has chosen to prevent the content from surfacing by blocking certain keywords from generating search results.

The problem, as OneZero noted, is that hiding content and blocking keywords doesn't actually prevent users from finding questionable content. Some of this content includes images that sexually exploit children.

While normal users may never see this using Pinterest's built-in search tools, users more familiar with how search functions work can still access content Pinterest feels violates its guidelines, but hasn't actually removed from its platform. By navigating to a user's page, logged-out users can perform searches that seem to bypass Pinterest's keyword-blocking. Using Google to search the site -- instead of the site's own search engine -- can also surface content hidden by Pinterest.
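OneZero's finding boils down to an architectural point: the suppression lives in the search layer, while the content stays in the storage layer. A minimal sketch makes the gap concrete (the blocklist, the post store, and every function name here are hypothetical illustrations, not Pinterest's actual code):

```python
# Illustrative sketch only: suppressing results at the search layer leaves the
# underlying content fully retrievable by any path that skips that layer.

BLOCKED_KEYWORDS = {"blockedterm"}  # hypothetical keyword blocklist

POSTS = {
    "post-1": "an ordinary recipe pin",
    "post-2": "content mentioning blockedterm",
}

def site_search(query):
    """The site's own search: returns nothing for blocked keywords."""
    if any(word in BLOCKED_KEYWORDS for word in query.lower().split()):
        return []
    return [pid for pid, text in POSTS.items() if query.lower() in text]

def fetch_by_id(post_id):
    """Direct navigation (or an external crawler) bypasses the search layer."""
    return POSTS.get(post_id)
```

Here, fetching by ID stands in for navigating straight to a user's page or following a Google-indexed URL: because the keyword filter only gates the site's own search results, the hidden post is still served to anyone who arrives by another route.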

Pinterest's content moderation policy appears to be mostly hands-off. Users can upload nearly anything they want, with the company deleting (and reporting) only clearly illegal content. For everything else that's questionable (or potentially harmful to other users), Pinterest opts for suppression rather than deletion.

“Generally speaking, we limit the distribution of or remove hateful content and content and accounts that promote hateful activities, false or misleading content that may harm Pinterest users or the public’s well-being, safety or trust, and content and accounts that encourage, praise, promote, or provide aid to dangerous actors or groups and their activities,” Pinterest’s spokesperson said of the company’s guidelines.

Unfortunately, users who manage to bypass keyword filters or otherwise stumble across buried content will likely find themselves directed to other buried content. Pinterest's algorithms surface content related to whatever users are currently viewing, potentially leading users even deeper into the site's "hidden" content.

Decisions to be made by Pinterest:

  • Is hiding content effective in steering users away from subject matter/content Pinterest would rather they didn't access?
  • Would deletion -- rather than hiding -- result in affected users leaving the platform?
  • Is questionable content a severe enough problem the company should rethink its moderation protocols?
  • Should "related content" algorithms be altered to prevent the surfacing of hidden content?

Questions and policy implications to consider:

  • Does hiding -- rather than removing -- content potentially encourage users to use this invisibility to engage in surreptitious distribution of questionable or illegal content?
  • Does the possibility of hidden content resurfacing steer ad buyers away from the platform?
  • Will this approach to moderation -- hidden vs. deletion -- remain feasible as pressure for sites to aggressively police misinformation and "fake news" continues to mount?
Resolution: Pinterest's content moderation strategy remains mostly unchanged. As the site's spokesperson stated, the site appears to feel the hiding of content addresses most raised concerns, even if it does allow more determined site users to locate content the site would rather they never saw.

Comment »

The First Hard Case: Zeran V. AOL And What It Can Teach Us About Today's Hard Cases

from the congress-and-the-courts-got-it-right dept

by Cathy Gellis - September 11th @ 1:35pm

A version of this post appeared in The Recorder a few years ago as part of a series of articles looking back at the foundational Section 230 case Zeran v. America Online. To my unwelcome surprise, it is now behind a paywall, but it's still as relevant as ever, so I'm re-posting it here.

They say that bad facts make bad law. What makes Zeran v. America Online stand as a seminal case in Section 230 jurisprudence is that its bad facts didn’t. The Fourth Circuit wisely refused to be driven from its principled statutory conclusion, even in the face of a compelling reason to do otherwise, and thus the greater good was served.

Mr. Zeran’s was not the last hard case to pass through the courts. Over the years there have been many worthy victims who have sought redress for legally cognizable injuries caused by others’ use of online services. And many, like Mr. Zeran, have been unlikely to easily obtain it from the party who actually did them the harm. In these cases courts have been left with an apparently stark choice: compel the Internet service provider to compensate for the harm caused to the plaintiff by others’ use of their services, or leave the plaintiff with potentially no remedy at all. It can be tremendously tempting to want to make someone, anyone, pay for harm caused to the person before them. But Zeran provided early guidance that it was possible for courts to resist the temptation to ignore Section 230’s liability limitations – and early evidence that they were right to so resist.

Section 230 is a law that itself counsels a light touch. In order to get the most good content on the Internet and the least bad, Congress codified a policy that is essentially all carrot and no stick. By taking the proverbial gun away from an online service provider’s proverbial head, Congress created the incentive for service providers to be partners in achieving these dual policy goals. It did so in two complementary ways: First, it encouraged the most beneficial content by insulating providers from liability arising from how other people used their services. Second, Congress also sought to ensure there would be the least amount of bad content online by insulating providers from liability if they did indeed act to remove it.

By removing the threat of potentially ruinous liability, or even just the immense cost of being on the receiving end of legal threats based on how others have used their services, more and more service providers have been able to come into existence and enable more and more uses of their systems. It has let these providers resist unduly censoring legitimate uses of their systems in order to minimize their legal risk. And by being safe to choose which uses to allow or disallow, service providers have been free to allocate their resources toward policing the most undesirable uses of their systems and services, rather than being forced by the threat of liability to divert those resources in ways that might not be appropriate for their platforms, optimal, or even useful at all.

Congress could of course have addressed the developing Internet with an alternative policy, one that was more stick than carrot and that threatened penalties instead of offering liability limitations, but such a law would not have met its twin goals of encouraging the most good content and the least bad nearly as well as Section 230 actually has. In fact, it likely would have had the opposite effect, eliminating more good content from the Internet and leaving up more of the bad. The wisdom of Congress, and of the Zeran court, was in realizing that restraint was a better option.

The challenge we are faced with now is keeping courts, and Section 230’s critics, similarly aware. The problem is that the Section 230 policy balance is one that works well in general, but not always in ways people readily recognize, especially in specific cases with particularly bad facts. The reality is that people sometimes do use Internet services in bad ways, and these uses can often be extremely visible. What tends to be less obvious, however, is how many good uses of the Internet Section 230 has enabled to be developed, far eclipsing the unfortunate ones. In the 20-plus years since Zeran people have moved on from AOL to countless new Internet services, which now serve nearly 90 percent of all Americans and billions of users worldwide. Internet access has gone from slow modem-driven dial-up to seamless always-on broadband. We email, we tweet, we buy things, we date, we comment, we argue, we read, we research, we share what we know, all thanks to the services made possible by Section 230, but often without awareness of how much we owe to it and the early Zeran decision upholding its tenets. We even complain about Section 230 using services that Section 230 has enabled, and often without any recognition of the irony.

In a sense, Section 230 is potentially in jeopardy of becoming a victim of its own success. It’s easy to see when things go wrong online, but Section 230 has done so well creating a new normalcy that it’s much harder to see just how much it has allowed to go right. Which means that when things do go wrong – as they inevitably will, because, while Section 230 tries to minimize the bad uses of online services, it’s impossible to eliminate them all—we are always at risk of letting our outrage at the specific injustice cause us to be tempted to kill the golden goose by upending something that on the whole has enabled so much good.

When bad things happen there is a natural urge to do something, to clamp down, to try to seize control over a situation where it feels like there is none. When bad things happen the hands-off approach of Section 230 can seem like the wrong one, but Zeran has shown how it is still very much the right one.

In many ways the Zeran court was ahead of its time: unlike later courts that have been able to point to the success of the Internet to underpin their decisions upholding Section 230, the Zeran court had to take a leap of faith that the policy goals behind the statute would be borne out as Congress intended. It turned out to be a faith that was not misplaced. Today it is hard to imagine a world without all the benefit that Section 230 has ushered in. But if we fail to heed the lessons of Zeran and exercise the same restraint the court did then, such a world may well be what comes to pass. As we mark more than two decades since the Zeran court affirmed Section 230 we need to continue to carry its lessons forward in order to ensure that we are not also marking its sunset and closing the door on all the other good Section 230 might yet bring.

Comment »


Apparently The New Litmus Test For Trump's FCC: Do You Promise To Police Speech Online?

from the snowflake-central dept

by Mike Masnick - September 11th @ 12:00pm

Last month we wrote about how President Trump withdrew the renomination of FCC Commissioner Mike O'Rielly just days after O'Rielly dared to [checks notes] reiterate his support for the 1st Amendment in a way that hinted at the fact that he knew Trump's executive order was blatantly unconstitutional. Some people argued the renomination was pulled for other reasons, but lots of people in DC said it was 100% about his unwillingness to turn the FCC into a speech police for the internet.

While it seems quite unlikely that Trump can get someone new through the nomination process before the election, the administration is apparently thinking of nominating someone eager to do the exact opposite: Nathan Simington, who wants the FCC to be the internet speech police so badly that he helped draft the obviously unconstitutional executive order in response to the President's freak-out over being fact-checked.

Three sources close to the matter say Nathan Simington, a senior advisor at the NTIA within the commerce department, has emerged as a leading candidate to take over Republican Commissioner Mike O’Rielly’s seat at the FCC.

Simington is said to have helped draft the administration’s social media executive order, and his nomination would be a victory for Republicans who want to see the FCC take a larger role in regulating social networks.

You can see the Trumpian logic here: "O'Rielly gently pushed back the tiniest bit on our plan to ignore the 1st Amendment and compel social media companies to host the propaganda and disinformation we spew, so let's replace him with someone who supports that singularly stupid argument. How about the guy who drafted the executive order!"

The idea that "will you support the FCC being the speech police" is now the Republican litmus test for being an FCC Commissioner is a freakish 180 from the history of Republican FCC Commissioners who have spent decades arguing against that on the things they actually have authority over (with the notable exception of obscenity, which GOP Commissioners have, at times, wanted to police). Either way, this seems like yet another example of the Republican party not having any core principles other than punishing the companies and people that Trump doesn't like.

21 Comments »

Florida Sheriff's Predictive Policing Program Is Protecting Residents From Unkempt Lawns, Missing Mailbox Numbers

from the if-you-can't-do-the-time-in-perpetuity,-don't-commit-the-crime-even-once dept

by Tim Cushing - September 11th @ 10:45am

Defenders of "predictive policing" claim it's a way to work smarter, not harder. Just round up a bunch of data submitted by cops engaged in biased policing and allow the algorithm to work its magic. The end result isn't smarter policing. It's just more of the same policing we've seen for years that disproportionately targets minorities and those in lower income brackets.

Supposedly, this will allow officers to prevent more criminal activity. The dirty data sends cops into neighborhoods to target everyone who lives there, just because they have the misfortune of living in an area where crime is prevalent. If the software was any "smarter," it would just send cops to prisons where criminal activity is the highest.

The Pasco County Sheriff's Office thinks it's going to drive crime down by engaging in predictive policing. But no one's crippling massive criminal organizations or liberating oppressed communities from the criminal activity that plagues their everyday lives. Instead of smart policing that maximizes limited resources, Pasco County residents are getting this:

First the Sheriff’s Office generates lists of people it considers likely to break the law, based on arrest histories, unspecified intelligence and arbitrary decisions by police analysts.

Then it sends deputies to find and interrogate anyone whose name appears, often without probable cause, a search warrant or evidence of a specific crime.

They swarm homes in the middle of the night, waking families and embarrassing people in front of their neighbors. They write tickets for missing mailbox numbers and overgrown grass, saddling residents with court dates and fines. They come again and again, making arrests for any reason they can.

One former deputy described the directive like this: “Make their lives miserable until they move or sue.”

Those are the options given to residents. The Sheriff wants residents to fund their own harassment. If they don't like being hassled by officers, move or sue. Both are costly. Both disrupt people's lives. And it's happening because people live in the "wrong" areas or have committed criminal acts in the past, the latter of which law enforcement isn't willing to forgive or forget long after these residents have repaid their debt to society.

In one case, a 15-year-old boy on probation (and overseen by a probation officer) for stealing motorized bikes was "visited" by deputies 21 times in six months. They went to his mother's employer, his friend's house, and the gym he frequented. Past mistakes are the impetus for months or years of hassling by deputies, thanks to the Sheriff's software.

Since September 2015, the Sheriff’s Office has sent deputies on checks like those more than 12,500 times, dispatch logs show.

The Sheriff's Office says this is a smarter way to fight crime. When deputies fine someone $2,500 for having chickens in their yard or arrest a father because a 17-year-old was spotted smoking cigarettes on his property, it's just better police work all around. The Sheriff's Office has become the county's unofficial Homeowners Association, hassling residents for uncut grass, missing mailbox numbers, and having unpopular pets on the premises. But the Pasco County Sheriff thinks this is a good thing and has the stats to back it up.

The Sheriff’s Office said its program was designed to reduce bias in policing by using objective data. And it provided statistics showing a decline in burglaries, larcenies and auto thefts since the program began in 2011.

Or does it?

But Pasco’s drop in property crimes was similar to the decline in the seven-largest nearby police jurisdictions. Over the same time period, violent crime increased only in Pasco.

All the data generated by the Office's 12,500 hasslings goes back into the system, laying the foundation for the next 12,500 useless insertions of law enforcement into people's lives.

The program employs 30 people and costs residents $2.8 million a year. It's headed by a former senior counterterrorism expert. The second-in-command is a former Army intelligence officer. But for all that supposed expertise, it's only county residents being terrorized.

The system assigns points to people to see if they can make the top 100 "offenders" list, which is where the Office focuses its efforts. Points are given to people if they're accused of any criminal act, even if the charges are dropped or they're only considered a suspect. Their scores are enhanced if they appear in police reports, even as a witness or a victim.
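As a rough illustration of how this kind of scoring compounds, here is a minimal sketch. The weights, role names, and functions below are all assumptions for illustration; the Sheriff's Office hasn't published its actual formula:

```python
from collections import Counter

# Hypothetical point weights; the real program's scoring is not public.
WEIGHTS = {
    "accused": 10,   # counted even if charges were dropped
    "suspect": 5,
    "witness": 3,    # merely appearing in a report raises the score
    "victim": 3,
}

def score(contacts):
    """Sum points across a person's police contacts, regardless of outcome."""
    return sum(WEIGHTS.get(role, 0) for role in contacts)

def top_offenders(people, n=100):
    """The 'top 100' list that focuses deputies' attention."""
    totals = Counter({name: score(roles) for name, roles in people.items()})
    return [name for name, _ in totals.most_common(n)]

# A person who was only ever a witness and a victim still accrues points.
residents = {
    "A": ["accused", "accused"],   # charges dropped both times
    "B": ["witness", "victim"],
    "C": [],
}
```

The point of the sketch is the feedback loop: because mere appearances in reports add points, each round of "checks" generates paperwork that raises the same residents' scores for the next round.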

Body camera recordings and documents show deputies engaged in "intelligence-led" policing threatening people with arrests and citations if they won't agree to let officers in their homes. They also show efforts targeting teens and people with developmental disabilities, including one "target" who had twice been ruled incompetent to stand trial. Former deputies and officers say not every interaction was recorded or logged. In some cases, deputies would park multiple cars outside of targets' homes for hours at a time or make up to six visits a day to the same residence.

The goal is harassment. And it works. Residents feel harassed. Interactions that began cordially have steadily become more confrontational. This works to the Sheriff's advantage. Provoking anger makes it easier to find something to charge residents with, given the number of statutes that enable "contempt of cop" charges. At least one frequent target moved their family out of the county.

All of this targeted harassment hasn't made county residents any safer. They'd enjoy the same reduction in property crime in any other nearby county without having to deal with this massive downside. And, as the stats show, violent crime is lower in nearby counties not subjecting residents to mafioso tactics under the guise of "intelligence-led policing." All the program has really shown is that the Sheriff's Office has an excess of personnel and resources.

14 Comments »

Daily Deal: The Complete Microsoft Azure Course Bundle

from the good-deals-on-cool-stuff dept

by Daily Deal - September 11th @ 10:40am

The Complete Microsoft Azure Course Bundle has 15+ hours of video content and 6 eBooks on Azure Cloud solutions, integration, and networks. You'll learn how to monitor and troubleshoot Azure network resources, manage virtual machines with PowerShell, use computer vision to detect objects and text in images, and much more. It's on sale for $30.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.

Comment »

White House Insisted It Had 16,000 Complaints Of Social Media Bias Turned Over To The FTC; The FTC Has No Record Of Them

from the oh-really? dept

by Mike Masnick - September 11th @ 9:40am

One less noticed feature of the White House's anti-Section 230 executive order was the claim that the White House had over 16,000 complaints about social media bias that it would turn over to the FTC to help it... do something to those big mean social media companies:

In May of 2019, the White House launched a Tech Bias Reporting tool to allow Americans to report incidents of online censorship. In just weeks, the White House received over 16,000 complaints of online platforms censoring or otherwise taking action against users based on their political viewpoints. The White House will submit such complaints received to the Department of Justice and the Federal Trade Commission (FTC).

The FTC shall consider taking action, as appropriate and consistent with applicable law, to prohibit unfair or deceptive acts or practices in or affecting commerce, pursuant to section 45 of title 15, United States Code. Such unfair or deceptive acts or practice may include practices by entities covered by section 230 that restrict speech in ways that do not align with those entities’ public representations about those practices.

This wasn't the first time that Trump had claimed to possess thousands of complaints about social media bias. In August of 2018 (9 months before the White House claims to have set up a tool to receive such complaints), the President said the White House had "literally thousands and thousands of complaints coming in."

I think Google is really taking advantage of a lot of people, and I think that’s a very serious thing, and it’s a very serious charge. I think what Google and what others are doing — if you look at what’s going on at Twitter, if you look at what’s going on in Facebook — they better be careful, because you … Can’t do that to people. You can’t do it. We have tremendous — We have literally thousands and thousands of complaints coming in, and you just can’t do that. So I think that Google and Twitter and Facebook, they’re really treading on very, very troubled territory, and they have to be careful. It’s not fair to large portions of the population.

At the time I filed a FOIA request to see if I could get the details of these "complaints." A few months later, I was told there were no responsive records, but it was possible other parts of the government had them. I tried another part of the government as well, and they emailed me on March 12th (the same day that much of where I live started to shut down due to the pandemic) to tell me they regretted the delay, and since it had taken so long for them to respond, they were assuming I was no longer interested, and were closing my FOIA request unanswered.

Nice of them.

Of course, now that the White House was claiming a specific amount (over 16,000) and saying it was passing them along to the FTC, you might think it would be a new opportunity to find out about these complaints. Turns out... no. Ryan Singel filed a FOIA request for those complaints and the FTC says the White House never sent over the complaints:

The FTC has not received the aforementioned complaints. We have located 10 complaints received through the Consumer Sentinel Network that mention this topic area and they are enclosed.

As you might expect, the 10 included complaints are pretty ludicrous. To be clear, I expect that even if the White House can turn up 16,000 such complaints, they will almost all be totally ludicrous. But these are pretty silly. Here are just a few.

I have 2 trademarks on my methodology of teaching from the US Government that work in conjunction with each other to show that my reseach is linked to a specific way of educating. Members of the ITEEA group have promoted policy that seems to breach research methodologies, PhD methodologies, publications, degrees and almost all of this inter-relates with federal funds in various ways. I have reported this numerous times to many agencies. The primary points of the issues are available published on this blogpost: www.steamedu.com/usdoe - at the bottom of the page are links to evidence for federal attorneys, the FBI and other related offices, and now yours. In lieu of today's executive order signed by the president and Section 230, it seems as though websites that we not currently curbing the spread of misinformation can now be held libel. I will assume this applies to PAC group websites, related social media sites both created by them and by groups. I am no longer sure if individuals and or site publishers would be held libel for potential slander and fraud issues and would like your group to please help direct me to a path of justice on behalf of the people, our taxes, our policies and the future of integrity in education and possibly in publishing. Thank you.

Good luck with that.

Facebook is completely removing first amendment rights of their users. It should not be deemed a platform if they are allowed to edit restrict people from using their voices on social media. Today after waking up at 6 AM and not being on social media for at least eight hours I woke up to a restriction block on my page saying that I could not re-share any more news stories until next Monday. Consequently the reason why I believe I was blocked is because I've been sharing new stories that are talking about the Minneapolis riots and Trump Signing his executive order that helps us against restricting social media network information. I truly believe they have taken this opportunity to silence my first amendment right to share the information with my timeline because they don't like the fact that I am not supporting Joe Biden! This is a complete miss use of their platform privileges and they need them stripped away immediately and they need to be held liable for taking peoples first amendment rights

Uh huh.

I made polictical statements, which have King James Bible, Conservatives, Republican views, my account was suspended for 30 days. In which Democrate, and Independent's post are fine to Facebook Community Standards. Since then Trump signed an Executive Order that Social Media's have violated people's First Amendment Rights of Free speech to ALL, I expect and apology, and NO FUTHER CENSORSHIP !

And then:

Twitter harassment has been ongoing for years, but now it's to a massive level and they are trying to thwart my right to be a Candidate for Governor, 2024 in New Hampshire. They have locked me out of 5 different Twitter accounts, delete my following and don't allow me lawful access to the site, to my own virtual property and those rights. They also are violating the Sherman Act. They have been sued, are in lawsuits now, and have an Executive Order they are violating, and they are trying and disrupting my campaign account @NHCandidate2024. They tried to force me to delete a second tweet, that is perfectly lawful and did not violate any terms of anything, they are trying to prevent me from using the Twitter site, and they continue to harass and bully. It is imperative the that FTC address these out of control website staff that are actually harassing users, severing free speech, and targeting people's accounts. They have been bullying me, and don't like my reach or voice, and ability to tell the truth and believe in everyone's rights, not just those I agree with. They have altered their "rules" and even when nothing violates them, they harass users and try to force them into removing comments, only to then continue to harass and bully and deny access to the USER's virtual property, housed in their website. They have given themselves improper authority and have continued to bully and target the majority of users (conservatives) and then allow for death threats and mutations of people on another political side. The FTC as well as every other organization, and law makers must terminate their ever growing abuses to people, and give them consequences. They cannot shut down and lock out users based on their lies and biases. They are also searching, and found nothing other than the use of the word idiot, a lawful word and that was the best they could to to retailaite and close my account. 
The exact tweet being b(6) only an IDIOT reads a single article, does no research and believes that slanted written story. And does no other research, go away moron, too stupid to get it https://t.co/LNoyl3Tjl6Apr 27, 2020, 7:18 PM I don't even know what that comment was in response to, but it was obviously someone I blocked that was in fact trolling me. There is no threat any other harassment, it's a comment and many of them get strong and drastic responses, since the trolls are so vile. Twitter cannot continue to lock out accounts based on no violations at all, and use those fake lock outs to justify closing out accounts. I am running and going to use all social media as anyone else running as the right to do, without being bullied by Twitter or anyone else. Time to fix their illegal activity. It's criminal and they need consequences and fines and some need more consequences, including investigating their own sites. Twitter is liable for distrupting anyone's right to contact me on my Twitter account for my campaign and they owe criminal restitution, and civil damages as well.

Slow down there, champ.

Anyway, the FCC is now sitting on a bunch of complaints in this style as well, filed in its ongoing comment period. The FTC, however, is apparently still waiting for more than the 10 it received directly. The White House promised to deliver over 16,000 complaints, but the FTC is apparently still there, waiting.

Read More | 18 Comments »

Auto Industry Pushes Bullshit Claim That 'Right To Repair' Laws Aid Sexual Predators

from the fear-mongering-ahoy dept

by Karl Bode - September 11th @ 6:33am

A few years back, frustration at John Deere's draconian tractor DRM culminated in a grassroots tech movement dubbed "right to repair." The company's crackdown on "unauthorized repairs" turned countless ordinary citizens into technology policy activists, after DRM (and the company's EULA) prohibited the lion's share of repair or modification of tractors customers thought they owned. These restrictions only worked to drive up costs for owners, who faced either paying significantly more money for "authorized" repair, or toying around with pirated firmware just to ensure the products they owned actually worked.

Of course the problem isn't just restricted to John Deere. Apple, Microsoft, Sony, and countless other tech giants eager to monopolize repair have made a habit of suing and bullying independent repair shops and demonizing consumers who simply want to reduce waste and repair devices they own. This, in turn, has resulted in a growing push for right to repair legislation in countless states.

To thwart these bills, companies have been ramping up the use of idiotic, fear mongering arguments. Usually these arguments involve false claims that these bills will somehow imperil consumer privacy, safety, and security. Apple, for example, tried to thwart one such bill in Nebraska by claiming it would turn the state into a "mecca for hackers."

While there's been no shortage of bad faith arguments like this, the auto industry in Massachusetts has taken things to the next level. The state is contemplating the expansion of an existing state law that lets users get their vehicles repaired anywhere they'd like. In a bid to kill these efforts, the Alliance for Automotive Innovation, which represents most major automakers, has taken to running ads in the state falsely claiming that the legislation would aid sexual predators:

The primary message of the ads is that if we allow people to more easily repair their vehicles, data from said vehicles will somehow find itself in the hands of rapists, stalkers, and other menaces. Granted, actual experts have made it abundantly clear that this is utterly unfounded. The existing law requires that automakers use a non-proprietary diagnostic interface so any repair shop can access vehicle data using an ordinary OBD reader. It also ensures that important repair information is openly accessible. The update simply attempts to close a few loopholes in that law:

"Question 1 seeks to close a loophole in that earlier law, which exempted cars that transmitted this data wirelessly. As cars become even more computerized, independent repair shops are worried that manufacturers will do away with the OBD port and will store this data wirelessly, exempting them from the earlier law. The new initiative simply guarantees that car owners and independent repair companies can access this data wirelessly without "authorization by the manufacturer," and requires car manufacturers to store this data in a secure, "standardized, open-access platform."

One local ABC affiliate in Massachusetts thoroughly debunked the ads' claims. Experts told Matthew Gault at Motherboard that the real goal of the auto industry here is to simply shift all of this diagnostic tech to wireless to wiggle around the law. In part to maintain a monopoly on repair (letting them drive up the cost of taking your vehicle to the dealership), but also to further obscure all the driving, location, and other data automakers are collecting and selling to a long list of companies:

"My guess is what automakers really don't want to talk about is all of the data that they are collecting from connected vehicles that they're not telling us about,” Paul F Roberts, founder of Securerepairs—a group of security and repair professionals who advocate for security and repair issues—told Motherboard on the phone.

“The backup safety cameras that go on every time you put your car in reverse, are those on all the time and are they observing your surroundings and inferring data about your whereabouts and preferences?” Roberts said. “The in-cabin cameras that we know Tesla has on their cars, are those just monitoring you all the time… are they monitoring your GPS data and mining that or selling that? We don’t know."

Of course they're collecting and selling that data with minimal oversight. The United States still lacks any meaningful privacy laws for the modern era, in part because many of these same companies have opposed such legislation. Because it's hard for the auto industry to honestly admit it wants to monopolize repair, drive up consumer costs, and obfuscate the wholesale hoovering up and sale of your data, they've apparently concocted a grotesque bullshit narrative that the legislative updates will somehow aid sexual predators. Stay classy, Alliance for Automotive Innovation!

29 Comments »

Cops And Paramedics Are Still Killing Arrestees By Shooting Them Up With Ketamine

from the i-guess-it's-ok-if-it's-not-on-purpose? dept

by Tim Cushing - September 11th @ 3:31am

Cops -- and the paramedics who listen to their "medical advice" -- are still killing people. A couple of years ago, an investigation by the Minneapolis PD's Office of Police Conduct Review found officers were telling EMS personnel to inject arrestees with ketamine to calm them down. This medical advice followed street-level diagnoses by untrained mental health unprofessionals who've decided the perfect cure for "excited delirium" is a drug with deadly side effects.

People have been "calmed" to death by ketamine injections -- ones pushed by police officers and carried out by complicit paramedics. The cases reviewed by the OPC included potentially dangerous criminals like jaywalkers and disrespecters of law enforcement ("obstruction of justice"). Multiple recordings showed arrestees shot up with ketamine shortly before their hearts stopped or they ceased breathing.

This incredibly dangerous practice of using ketamine to sedate arrestees hasn't slowed down. Instead, it has spread. What was a horrific discovery in Minneapolis is still day-to-day business elsewhere in the country. Cops and paramedics in Colorado are still putting people's lives at risk by using ketamine as their go-to sedative.

Police stopped Elijah McClain on the street in suburban Denver last year after deeming the young Black man suspicious. He was thrown into a chokehold, threatened with a dog and stun gun, then subjected to another law enforcement tool before he died: a drug called ketamine.

Paramedics inject it into people like McClain as a sedative, often at the behest of police who believe suspects are out of control. Officially, ketamine is used in emergencies when there’s a safety concern for medical staff or the patient. But it's increasingly found in arrests and has become another flashpoint in the debate over law enforcement policies and brutality against people of color.

An analysis by The Associated Press of policies on ketamine and cases where the drug was used during police encounters uncovered a lack of police training, conflicting medical standards and nonexistent protocols that have resulted in hospitalizations and even deaths.

McClain was killed because paramedics assumed he weighed nearly twice as much as he actually did. They gave him an inadvertent double dose that triggered cardiac arrest. Soon after that, McClain was declared brain dead and removed from life support. McClain was killed for the crime of being suspicious in public (cops were responding to a call about a "suspicious person wearing a ski mask and waving their arms"). And he was killed by the people who were supposed to ensure his health and safety.

After his death, Colorado's health department attempted to investigate law enforcement use of ketamine. That investigation appears to have fallen apart before it could really get started. As the AP report points out, there are no uniform reporting requirements for ketamine deployment -- not at any level of government. State requirements are different from federal requirements. Consequently, there's no cohesive collection of data on this drug's use.

Unfortunately, most government guidelines agree cops can use a particularly worthless term to justify the use of the sedative.

Most states and agencies say ketamine may be administered when someone exhibits “excited delirium” or agitation, which is typically associated with chronic drug abuse, mental illness or both.

Even if "excited delirium" was a mental health condition recognized by a large number of medical and mental health entities and governing bodies (spoiler alert: it isn't), police officers aren't qualified to make diagnoses and recommend sedatives after limited interactions with people they're trying to arrest. But government bodies have already issued this permission slip to cops and they use it as often as they can. It's a diagnosis that rarely comes from anyone but a law enforcement officer or official.

The drug is only safe when deployed in controlled settings by healthcare professionals. Even then, there may be complications due to preexisting conditions. Turning it into a tool of arrest tradecraft eliminates all the expertise and replaces it with expedience. It may not go wrong every time. But it goes wrong often enough -- and with deadly consequences -- that no one should feel comfortable allowing law enforcement and EMS crews to make off-the-cuff decisions about its use.

There were 902 reported instances of Colorado paramedics administering ketamine from 2018 to 2020, and almost 17% had complications, including cardiac arrest and oxygen deprivation, the state health department said.

If it increases the chances of death, everyone involved should steer clear of it. EMS personnel are supposed to be lifesavers, not deathbringers. The same goes for cops. Just because someone's uncooperative doesn't mean they need to be subjected to something that could kill them. That a 17% failure rate hasn't slowed this practice down shows how little cops and their first responder buddies care about the lives of people in handcuffs.

30 Comments »


Visit Techdirt for today's stories.

  • This mailing list is a public mailing list - anyone may join or leave, at any time.
  • This mailing list is announce-only.

Techdirt's original daily email. Once a day, Techdirt will email the full-length version of the previous day's stories from Techdirt.com (based on Pacific time).

Privacy Policy:

Floor64 will not share your email address with third parties.

Go back to Techdirt