Stories from Tuesday, September 22nd, 2020
How To Nuke Your Reputation: The Nikola Edition
from the going-downhill dept
by Timothy Geigner - September 22nd @ 7:46pm
This isn't so much in vogue as it was in the past, but it still remains true that one's reputation is a scarce resource that can be frittered away easily. And, on these pages at least, it is often equal parts perplexing and funny to watch some folks in the tech space torpedo their own reputations for various reasons. The less shrewd don't always seem to care about this sort of thing, which is how you get the MPAA pirating clips from Google to make its videos, or a law school taking a critic to court only to have the court declare said critic's critique was totally true. Good times.
Which brings us to Trevor Milton, the founder of Nikola Motor Company. Nikola is playing in the electric truck space. In 2016, Milton announced in an official video that the Nikola One Semi was "fully functional." In fact, one of Milton's chief public concerns at the time was ensuring that nobody could come by and drive away with one of the trucks. The companion video for the Nikola One was posted to YouTube in January of 2018. This video shows the Nikola One chugging down a lonely one-lane road.
Despite all of the fanfare, it's worth noting that the Nikola One never made it into production. Why? Well...
Hindenburg Research published a bombshell report claiming that the Nikola One wasn't close to being fully functional in December 2016. Indeed, Hindenburg published a 2017 text message exchange in which a Nikola employee stated that the company didn't resume work on the truck in the months after the show.
Even more incredible, Hindenburg reported that the truck in the "Nikola One in motion" video wasn't moving under its own power. Rather, Nikola had towed the truck to the top of a shallow hill and let it roll down. The company allegedly tilted the camera to make it look like the truck was traveling under its own power on a level roadway.
Now, on the one hand, that's objectively funny. It's sort of an Adam West's Batman approach to product demonstration. But, on the other hand, now that Nikola has essentially admitted the charges above, Milton is likely in a whole world of trouble. The company has tried to weasel out of this in fairly absurd fashion.
"Nikola never stated its truck was driving under its own propulsion in the video," Nikola wrote. "Nikola described this third-party video on the Company’s social media as 'In Motion.' It was never described as 'under its own propulsion' or 'powertrain driven.' Nikola investors who invested during this period, in which the Company was privately held, knew the technical capability of the Nikola One at the time of their investment."
Not everyone seems to think that's true. The SEC and DOJ are reported to have opened investigations into the company's behavior after these revelations. And, as for Milton's reputation personally, he's out at Nikola.
Milton's resignation came just 10 days after a bombshell research report revealed that Milton wasn't telling the truth in 2016 when he unveiled the company's first product, the Nikola One, and claimed that it "fully functions." Over the weekend, Milton offered (voluntarily, he says) to resign as executive chairman, and Nikola's board accepted his offer. Milton will also relinquish his seat on Nikola's board.
Now, a few items of note. First, Nikola does now have a functioning prototype, the Nikola Two. It's also partnering with several automobile companies and has contracts in place with them.
Milton, meanwhile, loses his position at the company he founded and millions in stock and consulting fees, and has gained infamy as someone who is willing, at best, to mislead the public about his company's products. Your reputation is a scarce good. Frittering it away by tilting the camera probably isn't the best move.
from the it's-bad,-get-rid-of-it dept
by Mike Masnick - September 22nd @ 3:32pm
Senator Lindsey Graham is in a tight re-election campaign that he might just lose. And he's doing what politicians desperate for campaign cash tend to do: releasing a lot of absolutely batshit crazy bills that will pressure big donors to donate to him, either to support a bill or to get him not to move forward on it. It's corrupt as hell, but it's standard practice. And the best of these kinds of bills are the ones that pit two large industries with lots of lobbyists and cash to throw around against one another. For many years the favorite such bill was one about performance rights royalties for radio play. It would pit radio broadcasters against the music industry, and the cash would flow. Every two years, as the election approached, such a bill would be released, unlikely to go anywhere, but the cash would flow in.
More recently, the goal has been to target the big internet companies. And, boy, Lindsey Graham's campaign must be struggling, because he's decided to take two horrible, awful bills that would harm the internet and mash them together into a single bill that is set for markup by the Senate Judiciary Committee next week. This new bill, entitled the "Online Content Policy Modernization Act," simply combines the terrible and unconstitutional CASE Act (to create a quasi-judicial court in the Copyright Office to review copyright claims) with some of the recently released (and also horrible and unconstitutional) "Online Freedom and Viewpoint Diversity Act," which would rewrite Section 230 to remove the ability to moderate "otherwise objectionable" content without liability, and would, instead, insert a limited list of what kinds of content could be moderated without liability.
Both of these are bad ideas, but both of them are specific threats to the open internet -- and the kinds of things that Senator Graham knows he can fundraise on. Both bills are garbage, and Senator Graham likely knows this -- but he's not in the Senate to actually legislate. He's there to stay in power, and there's a real chance he might lose this November. So I guess it's time to break out the really stupid bills.
Techdirt Podcast Episode 256: Little Brother vs. Big Audiobook, With Cory Doctorow
from the doing-something-different dept
by Leigh Beadon - September 22nd @ 1:32pm
The third book in Cory Doctorow's Little Brother series is coming soon — but as usual, Cory is doing something different as part of the release. Fans and Techdirt readers know he's an outspoken opponent of DRM who makes sure all his work is available DRM-free, but that isn't so easy when it comes to audiobooks, where Audible's market dominance forces DRM onto everything. So while publishers eagerly picked up Attack Surface for printing, he retained the audio rights and is running his first-ever Kickstarter to release a nice non-DRM version. This week, Cory joins Mike on the podcast to discuss why he's doing it, what he's giving up, and the industry changes he hopes to inspire.
Follow the Techdirt Podcast on Soundcloud, subscribe via iTunes or Google Play, or grab the RSS feed. You can also keep up with all the latest episodes right here on Techdirt.
Court Rejects Clearview's First Amendment, Section 230 Immunity Arguments
from the don't-allow-a-bad-company-to-generate-bad-courtroom-precedent-though dept
by Tim Cushing - September 22nd @ 12:29pm
Back in March, facial recognition tech upstart Clearview was sued by the Vermont Attorney General. The AG alleged Clearview's scraping of sites to harvest photos (and other biometric/personal info) of Vermont residents violated state privacy laws. It also alleged Clearview had misled residents and customers about the company's intended uses and its success in the law enforcement marketplace.
Clearview's response to the lawsuit was… interesting. It tried to invoke Section 230 immunity, claiming it was nothing more than a host for third-party content. The problem with this argument was it wasn't being sued over the content itself (which wasn't defamatory, etc.) but over its collection of the content, which did not provide Vermont residents with notice their information was being collected and gave them no way to opt out.
The company then hired a prominent (but opportunistic) First Amendment lawyer to argue it had a First Amendment right to collect and disseminate this information, even when its collection efforts routinely violated the terms of service of nearly every site it scraped to obtain photos. This argument was also interesting in its own way, but had the potential to cause complications for plenty of entities not nearly as universally reviled as Clearview. In some ways, Clearview is the Google of faces, gathering information from all over the web and delivering search results to Clearview users.
The Vermont court has finally weighed in [PDF] on Clearview's arguments. And it doesn't like most of them. (h/t Eric Goldman)
Here's the court's take on the Section 230 argument:
Importantly, the basis for the State’s claims is not merely the photographs provided by third-party individuals and entities, or that Clearview makes those photographs available to its consumers. Instead, the claims are based on the means by which Clearview acquired the photographs, its use of facial recognition technology to allow its users to easily identify random individuals from photographs, and its allegedly deceptive statements regarding its product… This is not simply a case of Clearview republishing offensive photographs provided by someone else, and the State seeking liability because those photographs are offensive. Indeed, whether the photographs themselves are offensive or defamatory is immaterial to the State’s claims.
Instead, the claims here attempt to hold Clearview “accountable for its own unfair or deceptive acts or practices,” such as screen-scraping photographs without the owners’ consent and in violation of the source’s terms of service, providing inadequate data security for consumers’ data, applying facial recognition technology to allow others to easily identify persons in the photographs, and making material false or misleading statements about its product.
So, no dismissal based on Section 230 immunity for Clearview. The court then tackles the First Amendment assertions. The court says the First Amendment does not cover the commercial speech targeted by the AG's lawsuit.
The court next observes that at least some of the conduct alleged in Counts and III is largely nonexpressive in nature. The allegations that Clearview provided inadequate data security and exposed consumers’ information to theft, security breaches, and surveillance lack a communicative element. The First Amendment does not protect such conduct.
Whether the software itself is covered by the First Amendment is more difficult to answer.
Because the Clearview app’s raw code is not at issue here as in Corley, the app arguably has no expressive speech component and is more similar to the “entirely mechanical” automatic trading system in Vartuli that “induce[d] action without the intercession of the mind or the will of the recipient.” Vartuli, 228 F.3d at 111. The user simply inputs a photograph of a person, and the app automatically displays other photographs of that person with no further interaction required from the human user. In that sense, the app might not be entitled to any First Amendment protection. Complicating matters, however, is the fact that Clearview’s app is similar to a search engine, and some courts have generally recognized First Amendment protection for search engines, at least to the extent that the display and order of search results involve a degree of editorial discretion.
Whether or not it's actually speech doesn't appear to matter, at least not to this court. It says the "speech" -- protected or not -- can be regulated by the Vermont government. Since the AG isn't suing over the content of the "speech" itself but rather the use of personal information gathered from Vermont residents, the lawsuit against Clearview can continue.
Presumably, the State has no problem with Clearview operating its app so long as the Vermonters depicted in its photograph database have fully consented. The regulation sought by the State here is content-neutral and, accordingly, subject to intermediate scrutiny.
But then the court goes on to say that even if this violates Clearview's First Amendment rights, it barely violates them.
Furthermore, any incidental restriction on speech imposed by the State’s action would not burden substantially more speech than is necessary to further the State’s interest in protecting privacy. The State estimates that the relief it requests will leave more than 99 percent of Clearview’s database intact.
That's a little more problematic. The court does go on to state that Clearview could avoid this by seeking affirmative consent from Vermont residents. It also says the court would ensure that any regulation proposed by the state would be subjected to further scrutiny to ensure the burden on Clearview is minimal. But that seems unlikely to be true if the court already believes burdensome regulation would only result in a 1% reduction in free speech.
The court also upholds all the deceptive claims allegations. Clearview's marketing has been far from honest. It has touted law enforcement successes that have been directly contradicted by the named law enforcement agencies. It has told people they can ask to be removed from its database, but that removal is subject only to laws that aren't in force in most of the nation. It has also claimed the app is only for legitimate law enforcement use, but has sold the software to a number of private entities and encouraged law enforcement officers to "run wild" while testing the app using faces of friends and family members. All of these claims survive.
The longer Clearview exists, the more lawsuits it will face. Its collection method -- scraping sites to obtain personal data -- is already problematic. The tech itself remains unproven, having never been tested for accuracy by an independent outside agency. If this is the best it can do in its own defense, it's going to run itself out of money before it secures any favorable precedent.
This Week Only: Free Shipping On Techdirt Gear From Threadless
from the get-it-quick dept
by Leigh Beadon - September 22nd @ 11:15am
Get free shipping on Techdirt Gear orders over $45 with the coupon code FREESHIP92031e946 »
Have you had your eye on some gear from the Techdirt store on Threadless? Then this is the week to pick it up! From now until Friday at 3pm PDT, you can get free shipping on orders over $45 in the US and $80 international with the coupon code FREESHIP92031e946. The offer covers all our designs, including the new Otherwise Objectionable gear celebrating two of the most important words in Section 230, and our wide variety of face masks.
There's also our complete line of Techdirt logo gear and, as usual, a wide variety of products available in every design: t-shirts, hoodies, sweaters and other apparel — plus a variety of cool accessories and home items including buttons, phone cases (for many iPhone and Galaxy models), mugs, tote bags, and stylish notebooks and journals.
This week only! Get free shipping with the coupon code FREESHIP92031e946 »
Authors Of CDA 230 Do Some Serious 230 Mythbusting In Response To Comments Submitted To The FCC
from the that's-not-how-any-of-this-works-at-all dept
by Mike Masnick - September 22nd @ 10:44am
While there were thousands of comments filed to the FCC in response to the NTIA's insanely bad "petition" to have the FCC reinterpret Section 230 in response to an unconstitutional executive order from a President who was upset that Twitter fact checked some of his nonsense tweets, perhaps the comment that matters most is the one submitted last week by the two authors of Section 230, Senator Ron Wyden and former Rep. Chris Cox. Cox and Wyden wrote what became Section 230 back in the 90s, and have spent decades fighting misinformation about it -- and fighting to keep 230 in place.
In the comment they submitted to the FCC, they respond to all the idiotic nonsense that everyone has been submitting. Again, these are the guys who wrote the actual law. They know what it was intended to do, and agree with how it's been used to date. So they go on a systematic debunking journey through the nonsense. First, they respond to comments that say that the FCC can interpret 230. Nope.
Several commenters have repeated the claim in the Petition that “[n]either section 230’s text, nor any speck of legislative history, suggests any congressional intent to preclude the Commission’s implementation.” In fact, however, as the authors of the legislation and the floor managers of the debate on the bill in the House of Representatives, we can assure you the very opposite is true. We and our colleagues in Congress on both sides of the aisle were emphatic that we were not creating new regulatory authority for the FCC or any other independent agency or executive branch department when we enacted Section 230. Not only is this clear from the legislative history, but it is written on the face of the statute. Unlike other provisions in Title II of the Communications Act, Section 230 does not invite agency rulemaking. Indeed, in a provision that judges interpreting the law have noted is “unusual,” Section 230(b) explicitly provides:
It is the policy of the United States … to preserve the vibrant and competitive free market that presently exists for the Internet and other interactive computer services, unfettered by Federal or State regulation.
When this legislation came to the floor of the House of Representatives for debate on August 4, 1995, the two of us, together with members on both sides of the aisle, explained that our purpose was to ensure that the FCC would not have regulatory authority over content on the internet. We and our colleagues, Democrats and Republicans alike, decried the unwelcome proregulatory alternative of giving the FCC responsibility for regulating content on the internet, which at the time was being advanced in separate legislation by Senator James Exon...
The Cox-Wyden bill under consideration was intended as a rebuke to that entire concept.
Then, to prove they're not engaging in revisionist history, they cite the speeches they themselves gave about how the whole point of their bill was to keep the FCC from regulating the internet. From Wyden's floor speech at the time:
[T]he reason that this approach rather than the Senate approach is important is … the speed at which these technologies are advancing [which will] give parents the tools they need, while the Federal Communications Commission is out there cranking out rules about proposed rulemaking programs. Their approach is going to set back the effort to help our families.
Cox's floor speech was even more direct with the question of whether or not their approach was designed to give the FCC power:
Some have suggested, Mr. Chairman, that we take the Federal Communications Commission and turn it into the ‘Federal Computer Commission’ — that we hire even more bureaucrats and more regulators who will attempt, either civilly or criminally, to punish people by catching them in the act of putting something into cyberspace. Frankly, there is just too much going on on the Internet for that to be effective....
[This bill] will establish as the policy of the United States that we do not wish to have content regulation by the Federal Government of what is on the Internet — that we do not wish to have a ‘Federal Computer Commission’ with an army of bureaucrats regulating the Internet....
The message today should be, from this Congress: we embrace this new technology, we welcome the opportunity for education and political discourse that it offers for all of us. We want to help it along this time by saying Government is going to get out of the way and let parents and individuals control it rather than Government doing that job for us....
If we regulate the Internet at the FCC, that will freeze or at least slow down technology. It will threaten the future of the Internet. That is why it is so important that we not have a ‘Federal Computer Commission’ do that.
Next, the comment responds to the claims that 230 is "outdated." Nope, claim its authors:
Several commenters, including AT&T, assert that Section 230 was conceived as a way to protect an infant industry, and that it was written with the antiquated internet of the 1990s in mind – not the robust, ubiquitous internet we know today. As authors of the statute, we particularly wish to put this urban legend to rest.
Section 230, originally named the Internet Freedom and Family Empowerment Act, H.R. 1978, was designed to address the obviously growing problem of individual web portals being overwhelmed with user-created content. This is not a problem the internet will ever grow out of; as internet usage and content creation continue to grow, the problem grows ever bigger. Far from wishing to offer protection to an infant industry, our legislative aim was to recognize the sheer implausibility of requiring each website to monitor all of the user-created content that crossed its portal each day.
Critics of Section 230 point out the significant differences between the internet of 1996 and today. Those differences, however, are not unanticipated. When we wrote the law, we believed the internet of the future was going to be a very vibrant and extraordinary opportunity for people to become educated about innumerable subjects, from health care to technological innovation to their own fields of employment. So we began with these two propositions: let’s make sure that every internet user has the opportunity to exercise their First Amendment rights; and let’s deal with the slime and horrible material on the internet by giving both websites and their users the tools and the legal protection necessary to take it down.
The march of technology and the profusion of e-commerce business models over the last two decades represent precisely the kind of progress that Congress in 1996 hoped would follow from Section 230’s protections for speech on the internet and for the websites that host it. The increase in user-created content in the years since then is both a desired result of the certainty the law provides, and further reason that the law is needed more than ever in today’s environment.
Next up: the all too frequent claim that 230 creates a special rule for the internet that is different than for brick and mortar stores, and therefore there's a "double standard." Again, nope.
Several commenters have asserted that Section 230 sets up a “double standard” by treating online businesses differently from “brick-and-mortar” businesses. This represents a fundamental misunderstanding of both the purpose of the law and how it operates in practice.
Section 230 serves to punish the guilty and protect the innocent. Individuals and firms are made fully responsible for their own conduct. Anyone who creates digital content and uploads it to a website is legally liable for what they have done. A website that hosts the content will likewise be liable, if it contributes to the creation or development of that content, in whole or in part. Otherwise, the website will be protected from liability for third-party content.
Section 230 was written to adapt intermediary liability rules long recognized in the analog world for the digital world, applying the wisdom accumulated over decades in legislatures and the courts to the realities of this new technological realm. As authors of the law, we understood what was evident in 1996 and is even more in evidence today: it would be unreasonable for the law to impose on websites a legal duty to monitor all user-created content.
When Section 230 was written, just as now, each of the commercial applications flourishing online had an analog in the offline world, where each had its own attendant legal responsibilities. Newspapers could be liable for defamation. Banks and brokers could be held responsible for failing to know their customers. Advertisers were responsible under the Federal Trade Commission Act and state consumer laws for ensuring their content was not deceptive and unfair. Merchandisers could be held liable for negligence and breach of warranty, and in some cases even subject to strict liability for defective products. In writing Section 230, we—and ultimately the entire Congress—decided that these legal rules should continue to apply on the internet just as in the offline world. Every business, whether operating through its online facility or through a brick-and-mortar facility, would continue to be responsible for all of its legal obligations.
What Section 230 added to the general body of law was the principle that individuals or an entity operating a website should not, in addition to their own legal responsibilities, be required to monitor all of the content created by third parties and thereby become derivatively liable for the illegal acts of others. Congress recognized that to require otherwise would jeopardize the quintessential function of the internet: permitting millions of people around the world to communicate simultaneously and instantaneously, a unique capability that has made the internet “the shining star of the Information Age.” Congress wished to “embrace” and “welcome” this, not only for its commercial potential but also for “the opportunity for education and political discourse that it offers for all of us.” The result is that websites are protected from liability for user-created content, but only to a point: if they are responsible, even in part, for the creation or development of that content, they lose that protection.
The fact that Section 230 established the legal framework for assessing liability in circumstances unique to the internet does not mean that either this framework or the preexisting legal rules do not apply equally to all online and offline businesses. Every business continues to bear the same legal responsibilities when operating in the offline world, and every business is bound by the same statutorily-defined responsibilities set out in Section 230 when operating in the e-commerce realm.
Then there's the question about whether or not the FCC can mandate disclosure and reporting requirements. As Cox and Wyden note, this argument -- pushed strongly by AT&T and the NTIA -- "borders on the absurd."
The Petition asks the FCC to interpret Section 230 as if it contained explicit requirements mandating terms of service, content moderation policies, due process notice and hearings in which content creators could dispute moderation decisions, and public disclosures concerning these and other matters. The Petition further asks that the FCC impose these specific requirements by rule. Multiple commenters, including AT&T, have endorsed this aspect of the NTIA proposal.
The Petition clearly states NTIA’s understanding that Congress, with “strong bi-partisan support,” intended Section 230 to be “a non-regulatory approach.” In this they are correct. As outlined in Section II above, the legislative history clearly demonstrates that we and our colleagues in Congress intended to keep the FCC and other regulators out of this area. This is reflected in the language of Section 230 itself. Both of us, as the authors of the legislation, made ourselves abundantly clear on this point when the law was being debated.
This fact—and NTIA’s admission of it—makes it all the more illogical for their Petition to ask the Commission to interpret Section 230 as statutory authorization for the FCC to regulate the very subjects that Section 230 itself covers, and which Congress wanted the Commission to stay out of. It surpasses illogic, and borders on the absurd, for the Petition to ask the FCC to use authority that Section 230 clearly does not grant it, in order to divine from the text of the statute explicit duties and burdens on websites that Section 230 itself clearly does not impose.
As Cox and Wyden note, any such interpretation would clearly require new legislation and could not be created, whole cloth, from the mind of an angry President and clueless NTIA staffers with grudges about Section 230.
All of this would require new federal legislation. None of it appears in Section 230, either in the text of the law that we can all read (and that the two of us wrote), or even in the invisible ink which NTIA must believe only it can read.
I get the feeling that Cox and Wyden do not think highly of the NTIA petition.
As for those who commented suggesting that the FCC could interpret Section 230 to include a "negligence" standard, again, this is not how any of this works:
Several commenters, including Digital Frontiers Advocacy, have urged grafting onto Section 230 a requirement, derived from negligence law, upon which existing protections for content moderation would be conditioned. These requirements would add to Section 230 a “duty of care” or a “reasonableness” standard that cannot be found in the statute. As one example, the Petition (which is generically endorsed in its entirety by many individual commenters) would have the FCC require that content moderation decisions be “objectively reasonable,” as compared to the clear language of Section 230, which provides that the decision is to be that of “the provider or user.”
As the authors of this law, and leading participants in the legislative process that led to its enactment in 1996, we can assure the Commission that the reason you do not see any such requirement on the face of the statute is that we did not intend to put one there.
The proposed introduction of subjective negligence concepts would effectively make every complaint concerning a website’s content moderation into a question of fact. Since such factual disputes can only be resolved after evidentiary discovery (depositions of witnesses, written interrogatories, subpoenas of documents, and so forth), no longer could a website prove itself eligible for dismissal of a case at an early stage.
We intended to spare websites the death from a thousand paper cuts that would be the result if every user, merely by filing a complaint about a content moderation decision, could set in motion a multi-year lawsuit. We therefore wrote Section 230 with an objective standard: was the allegedly illegal material created or developed—in whole or in part—by the website itself? If the complaint adequately alleges this, then a lawsuit seeking to hold the website liable as a publisher of the material can proceed; otherwise it cannot.
And if you think Cox and Wyden are done exploring just how absurdly stupid this process has been, you haven't prepared yourself for the next section, in which they respond to the many ridiculous comments suggesting 230 enables the FCC to enforce "neutrality" on internet websites:
The Claremont Institute and scores of individual commenters have complained that particular websites are not politically neutral, and they demand that Section 230’s protection from liability for content created by others be conditioned on proof that a website is in fact politically neutral in the content that it hosts, and in its moderation decisions.
There are three points that must be made in reply. The first is that Section 230 does not require political neutrality. Claiming to “interpret” Section 230 to require political neutrality, or to condition its Good Samaritan protections on political neutrality, would erase the law we wrote and substitute a completely different one, with opposite effect. The second is that any governmental attempt to enforce political neutrality on websites would be hopelessly subjective, complicated, burdensome, and unworkable. The third is that any such legislation or regulation intended to override a website’s moderation decisions would amount to compelling speech, in violation of the First Amendment....
They respond to every idiot who misinterprets the line in the Findings part of Section 230 about "diversity of political discourse" by saying "we meant lots of different sites, not that every site has to host all your nonsense."
Section 230 itself states the congressional purpose of ensuring that the internet remains “a global forum for a true diversity of political discourse.” In our view as the law’s authors, this requires that government allow a thousand flowers to bloom—not that a single website has to represent every conceivable point of view. The reason that Section 230 does not require political neutrality, and was never intended to do so, is that it would enforce homogeneity: every website would have the same “neutral” point of view. This is the opposite of true diversity.
To use an obvious example, neither the Democratic National Committee nor the Republican National Committee websites would pass a political neutrality test. Government compelled speech is not the way to ensure diverse viewpoints. Permitting websites to choose their own viewpoints is.
And then there's the claim, popular among individual filers (and lots of idiots on Twitter), that because Section 230 allows websites to take down lawful speech, it somehow violates the 1st Amendment. We've discussed many, many, many times how ridiculous that is, but why don't we hear it from Wyden and Cox:
Many individual commenters complained that their political viewpoints have been “censored” by websites ostensibly implementing their community guidelines, but actually suppressing speech. Several of these commenters have urged the FCC to require that all speech protected by the First Amendment be allowed on any site of sufficient size that it might be deemed an equivalent to the “public square.” In the context of this proceeding, that would mean Section 230 would somehow have to be “interpreted” to require this.
Comments within this genre share a fundamental misunderstanding of Section 230. The matter is readily clarified by reference to the plain language of the statute. The law provides that a website can moderate content “whether or not such material is constitutionally protected.”... Congress would have to repeal this language, and replace it with an explicit speech mandate, in order for the FCC to do what the commenters are urging.
Government-compelled speech, however, would be a source of further problems. Because the First Amendment not only protects expression but non-expression, any attempt to devise an FCC regulation that forces a website to publish content it otherwise would moderate would almost certainly be unconstitutional. The government may not force websites to publish material that they do not approve. As Chief Justice Roberts unequivocally put it in Rumsfeld v. Forum for Academic and Institutional Rights (2006), “freedom of speech prohibits the government from telling people what they must say.”...
And then they point out that many commenters don't seem to understand the 1st Amendment:
The answer to the commenters’ complaints of “censorship” must be twofold. First, many of the comments conflate their frustrations about Section 230 with the First Amendment. As noted, it is the First Amendment, not Section 230, that gives websites the right to choose which viewpoints, if any, to advance. Furthermore, First Amendment speech protections dictate that the government, with a few notable exceptions, may not dictate what speech is acceptable. The First Amendment places no such restrictions on private individuals or companies. Second, the purpose and effect of Section 230 is to make the internet safe for innovation and individual free speech. Without Section 230, complaints about “censorship” by the likes of Google, Facebook, and Twitter would not disappear. Instead, we would be facing a thousandfold more complaints that neither the largest online platforms nor the smallest websites are any longer willing to host material from individual content creators.
And changing Section 230 in the manner these commenters seek wouldn't actually help them:
Eroding the law through regulatory revision would seriously jeopardize free speech for everyone. It would be particularly injurious to marginalized viewpoints that aren’t within “the mainstream.” It would present near-insuperable barriers for new entrants attempting to compete with entrenched tech giants in the social media space. Not least of all, it would set a terrible example for the rest of the world if the United States, which created the internet and so much of the vast cyber ecosystem that has enabled it to flourish globally as an informational, cultural, scientific, educational, and economic resource, were to undermine the ability that hundreds of millions of individuals have each day to contribute their content to that result.
In the absence of Section 230, the First Amendment rights of Americans, and the internet as we know it, would shrivel. Far from authorizing censorship, the law provides the legal certainty and protection from open-ended liability that permits websites large and small to host the free expression of individuals, making it available to a worldwide audience. Section 230 is a bulwark of free speech and civil discourse that is more important now than ever, especially in the current political climate that is increasingly hostile to both.
In short, so many of these commenters are confused about the law, the history, the technology, how free speech works, how the internet works, and more. That much of this is also true of the NTIA petition itself is a shame.
The Cox and Wyden comment concludes by underlining that they wrote 230 with the explicit intent of keeping the FCC away from regulating internet websites.
On one point we can speak ex cathedra, as it were: our intent in writing this law was to keep the FCC out of the business of regulating websites, content moderation policies, and the content of speech on the internet. The Petition asks the Commission to reverse more than two decades of its own policy by becoming, at this late stage in the life of Section 230, its regulatory interpreter. In so doing, the FCC would assume responsibility for regulating websites, content moderation policies, and the content of speech on the internet—precisely the result we intended Section 230 to prevent. To reach this perverse result, the FCC would “clarify” the words of Section 230 in ways that do violence to the plain meaning of the statutory text.
One would hope that such a detailed response from the authors of the law would put this whole nonsense to rest. But it won't.
Daily Deal: The Complete Developer Bootcamp
from the good-deals-on-cool-stuff dept
by Daily Deal - September 22nd @ 10:41am
The Complete Developer Bootcamp will introduce you to the most popular best practices in software development: code quality gates, coding standards, unit testing, test automation, branching strategy, business analysis, estimations, Agile, and more. It is on sale for $16.
Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
Blowback Time: China Says TikTok Deal Is A Model For How It Should Deal With US Companies In China
from the because-of-course dept
by Mike Masnick - September 22nd @ 9:27am
We've already covered what a ridiculous, pathetic grift the Oracle/TikTok deal was. Despite it being premised on a "national security threat" from China, because the app might share some data (all of which is easily buyable from data brokers) with Chinese officials, the final deal cured none of that, left the Chinese firm ByteDance with 80% ownership of TikTok, and gave Trump supporters at Oracle a fat contract -- and allowed Trump to pretend he did something.
Of course, what he really did was hand China a huge gift. In response to the deal, state media in China is now highlighting how the Chinese government can use it as a model to force the restructuring of US tech companies, and to force their data to be controlled by local companies in China. This is from the editor-in-chief of The Global Times, a Chinese state-sponsored newspaper:
The US restructuring of TikTok’s stake and actual control should be used as a model and promoted globally. Overseas operation of companies such as Google, Facebook shall all undergo such restructure and be under actual control of local companies for security concerns.
So, beyond doing absolutely nothing to solve the "problem" that politicians in the US laid out, the deal works in reverse. It's given justification for China to mess with American companies in the same way, and push to expose more data to the Chinese government.
Great work, Trump. Hell of a deal.
Meanwhile, the same Twitter feed says that it's expected that officials in Beijing are going to reject the deal from their end, and seek to negotiate one even more favorable to China's "national security interests and dignity."
So, beyond everything else, Trump's "deal" has probably done more to help China than to protect American data privacy, while handing China a justification playbook: "See, we're just following your lead!"
DOJ Continues Its Quest To Kill Net Neutrality (And Consumer Protection In General) In California
from the who-needs-monopoly-oversight? dept
by Karl Bode - September 22nd @ 6:22am
After the FCC effectively neutered itself at telecom lobbyist behest, numerous states jumped in to fill the consumer protection void. As a result, California, in 2018, passed net neutrality rules that largely mirrored the FCC's discarded consumer protections. Laughing at the concept of states' rights, Bill Barr's DOJ immediately got to work protecting U.S. telecom monopolies and filed suit in a bid to vacate the rules.
The DOJ's central argument was that California's attempt to protect consumers was somehow "anti-consumer." And the lawsuit largely centered on language the FCC had included in its net neutrality repeal (again, at telecom lobbyist behest) attempting to ban states from filling the void created by the federal government no longer giving a damn. The courts so far haven't looked too kindly upon that logic, arguing that the FCC can't abdicate its authority over telecom, then lean on that non-existent authority to tell states what to do.
Last week California filed its first brief (pdf) in its legal battle with the DOJ. ISPs are seeking a preliminary injunction to prevent California from enforcing the rules during the lawsuit. Again though, their primary argument continues to be that states can't enforce net neutrality because the FCC said so. Which, as Stanford Professor Barbara van Schewick continues to point out, is still nonsense no matter how many times industry and the captured U.S. government repeat the claim:
"According to case law, an agency’s decision to deregulate can only block the states from stepping in when the agency has the power to regulate and decides not to use it.
But when the FCC eliminated net neutrality in 2018, it also removed its own authority over broadband providers. In essence, the agency decided that broadband providers are not telecommunications companies that simply shuttle data back and forth (like a telephone company), but information service providers which interact with and alter data, like a website.
This removed any authority that would have allowed the FCC to adopt net neutrality protections. Thus, the elimination of net neutrality did not establish a calibrated federal deregulatory regime, as the U.S. and the ISPs argue; it simply reflected the FCC’s lack of authority. Simply put, the FCC’s 2018 Order created a regulatory vacuum, and you can’t conflict with a vacuum."
Of course one of the reasons you stack the courts with unqualified sycophants and partisan yes men is so basic, fundamental logic doesn't apply. So as usual, while it's likely the courts will laugh at the telecom sector's efforts here, it's certainly not guaranteed. And while the press will cover this story as a "government lawsuit," make no mistake: this is AT&T, Verizon, and Comcast using the federal government as a hand puppet as they attempt to have their cake and eat it too. Namely, no real oversight on the state or federal level, and no pesky market competition to keep their worst impulses in check.
from the Pell-grants-no-longer-offered-to-Anarchy-State-University-students dept
by Tim Cushing - September 22nd @ 3:22am
The Trump Administration hasn't met a slope it isn't willing to grease up and go sliding down. There's not much united about the states at the moment, and the President's lavish devotion to all things "law and order" is making things worse.
The insertion of federal officers into cities experiencing weeks and months of protests hasn't done much to reduce the adjacent violence that drew them there in the first place. Engaging in Gestapo-esque "disappearing" of protesters -- along with federal officer violence targeting journalists and observers -- has done nothing to return order to cities like Portland, Oregon.
Earlier this month, the Administration issued a memo threatening to cut off federal funding to cities the Administration doesn't like.
My Administration will not allow Federal tax dollars to fund cities that allow themselves to deteriorate into lawless zones. To ensure that Federal funds are neither unduly wasted nor spent in a manner that directly violates our Government’s promise to protect life, liberty, and property, it is imperative that the Federal Government review the use of Federal funds by jurisdictions that permit anarchy, violence, and destruction in America’s cities. It is also critical to ensure that Federal grants are used effectively, to safeguard taxpayer dollars entrusted to the Federal Government for the benefit of the American people.
Suddenly the Administration is very concerned about federal spending. Named in the memo were New York City, Seattle, Portland, and Washington DC. All of these have been targets of Trump's personal attacks via Twitter, where he's claimed the cities are being ruined by "radical left Democrats." The memo is transparently partisan. Nowhere in the memo -- which is directed to the DOJ and the Office of Management and Budget (OMB) -- does Trump call out cities in contested states vital to his reelection. Similar protests and/or law enforcement defunding are occurring in Minneapolis, Minnesota and Kenosha, Wisconsin, but neither city is mentioned in the memo.
The memo -- issued September 2nd -- gave the DOJ two weeks to designate "anarchist" cities unworthy of federal funding. The DOJ has responded, sparing Washington DC, but designating the other three cities mentioned in the memo as "anarchy jurisdictions."
The U.S. Department of Justice today identified the following three jurisdictions that have permitted violence and destruction of property to persist and have refused to undertake reasonable measures to counteract criminal activities: New York City; Portland, Oregon; and Seattle, Washington. The Department of Justice is continuing to work to identify jurisdictions that meet the criteria set out in the President’s Memorandum and will periodically update the list of selected jurisdictions as required therein.
So, what does it take to become an anarchy under Trump? Not much, apparently. Just an unwillingness to maintain the law enforcement status quo. The DOJ considers it "anarchy" to prevent police from "restoring order" or ordering them to abandon areas they lawfully have access to. (This refers to the temporary "autonomous zone" set up in Seattle by protesters.) These stipulations deal with judgment calls by city mayors during periods of intense civil unrest -- unrest prompted by previous police violence, something that's ignored completely by the memo and the DOJ.
But "anarchy" is also something as simple as police reform.
Whether a jurisdiction disempowers or defunds police departments.
Nobody's shutting down police departments. Taking police officers out of schools or routing mental health crisis calls to mental health professionals instead of cops isn't "disempowering." And if the funds aren't being used by law enforcement agencies to cover activities they're no longer being asked to perform, they should be routed to the agencies that are performing them. That's not "defunding." That's just funding.
And if the Attorney General can't find anything on the list to use to designate a city as "anarchist," he can always make something up.
Any other related factors the Attorney General deems appropriate.
So, anything could be used to trigger this review. Possibly even just being located in a state Trump doesn't think he can carry.
Right now, the memo only orders a "review" of existing funding. There are no laws on the books that allow the President to strip federal funding from cities he doesn't think lean right enough or are too mean to cops. Congress controls federal funding, not the Administration.
The slippery slope is, of course, a route to direct federal control of city and state-level policy making. Pass the "wrong" laws and your federal funds could be reduced or eliminated. If Congress somehow finds a way to make this legal by codifying pro-law enforcement requirements, the federal government will be the final arbiter of local lawmaking. This isn't the way it's supposed to work. And the Tenth Amendment is supposed to limit federal interloping like this. Even if a law is passed by Congress to make Trump's defunding plan "lawful," it probably won't be Constitutional. For an administration that leans so heavily on the phrase "rule of law," it sure seems to ignore rules and laws with alarming frequency.
Even if nothing happens past this point, the Administration will still be posting a periodic list of enemy cities and seeking some way to block them from receiving federal funds. And the selection process is transparently partisan, targeting only cities that have pushed back against Trump's heated rhetoric and his "offers" to deploy federal stormtroopers to handle local protests. This is more malignant ugliness from an administration that's served up plenty over the last four years.