Stories from Friday, January 15th, 2021
A Few More Thoughts On The Total Deplatforming Of Parler & Infrastructure Content Moderation
from the it's-tricky dept
by Mike Masnick - January 15th @ 5:39pm
I've delayed writing deeper thoughts on the total deplatforming of Parler, in part because there was so much else happening (including some more timely posts about Parler's lawsuit regarding it), but more importantly because for years I've been calling for people to think more deeply about content moderation at the infrastructure layer, rather than at the edge. Because those issues are much more complicated than the usual content moderation debates.
And once again I'm going to make the mistake of offering a nuanced argument on the internet. I urge you to read through this entire post, resist any kneejerk responses, and consider the larger issues. In fact, when I started to write this post, I thought it was going to argue that the moves against Parler, while legal, were actually a mistake and something to be concerned about. But as I explored the arguments, I simply couldn't justify any of them. Upon inspection, they all fell apart. And so I think I'll return to my initial stance that the companies are free to make decisions here. There should be concern, however, when regulators and policymakers start talking about content moderation at the infrastructure layer.
The "too long, didn't read" version of this argument (and again, please try to understand the nuance) is that even though Parler is currently down, it's not due to a single company having total control over the market. There are alternatives. And while it appears that Parler is having difficulty finding any such alternative to work with it, that's the nature of a free market. If you are so toxic that companies don't want to do business with you, that's on you. Not them.
It is possible to feel somewhat conflicted over this. I initially felt uncomfortable with Amazon removing Parler from AWS hosting, effectively shutting down the service, and with Apple removing its app from the app store, effectively barring it from iPhones. In both cases, those seemed like very big guns that weren't narrowly targeted. I was less concerned about Google's similar removal, because that didn't block Parler from Android phones, since you don't have to go through Google to get on an Android phone. But (and this is important) I think all three moves are clearly legal and reasonable steps for the companies to take. As I explored each issue, I kept coming back to a simple point: the problems Parler is currently facing are due to its own actions and the unwillingness of companies to associate with an operation so toxic. That's the free market.
If Parler's situation were caused by government pressure, or if there were simply no other options for the company, then I would be a lot more concerned. But that does not appear to be the case.
The internet infrastructure stack is represented in different ways, and there's no one definitive model. But an easy way to think of it is that there are "edge" providers -- the websites you interact with directly -- and then there's everything beneath them: the Content Delivery Networks (CDNs) that help absorb and deliver traffic, the hosting companies/data centers/cloud providers that host the actual content, the broadband/network/access providers, and the domain registries and registrars that handle the naming and routing setup. And there are lots of other players in there as well, some (like advertising and certain communications providers) with elements on the edge and elements deeper in the stack.
But a key thing to understand is the level of granularity with which different players can moderate, and the overall impact their moderation can have. It's one thing for Twitter to remove a tweet. It's another thing for Comcast to say "you can't access the internet at all." The consequences of moderation get much more severe the deeper you go into the stack. In this case, AWS's only real option for Parler was to remove the entire service, because it couldn't just target the problematic content (of which there was quite a lot). As for the app stores, it's a tricky question. Are app stores infrastructure, or edge? Perhaps they are a little of both, but they had the same limited options: remove the app entirely, or leave it up with all its content intact.
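To make that granularity point concrete, here's a rough sketch -- purely illustrative, with simplified layers and actions that don't correspond to any real company's tooling -- of how the finest-grained moderation option available gets blunter the deeper into the stack you go:

```python
# Purely illustrative: simplified layers, not any real company's tooling.
# The point is that the finest-grained moderation action available
# gets blunter the deeper into the stack you go.

STACK = [
    ("edge (e.g. a social network)", "remove a single post or account"),
    ("app store", "remove the entire app, content and all"),
    ("cloud host / CDN", "drop the entire site or service"),
    ("access provider (ISP)", "cut a customer off from the internet entirely"),
    ("registry / registrar", "take away the domain itself"),
]

for layer, finest_option in STACK:
    print(f"{layer:30} -> finest option: {finest_option}")
```

Every step down that list trades the scalpel for a bigger sledgehammer, which is exactly why infrastructure moderation raises harder questions than edge moderation.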
For many years, we've talked about the risks of saying that players deeper in the infrastructure stack should be responsible for content moderation. I was concerned, back in 2014, when there was talk of putting liability on domain registrars if domains they had registered were used for websites that broke the law. There have been a few efforts to hold such players responsible as if they were the actual lawbreakers, and that obviously creates all sorts of problems, especially at the 1st Amendment level. As you move deeper into the stack, the moderation options look less like scalpels and more like sledgehammers that remove entire websites from existence.
Almost exactly a decade ago, in a situation that has some parallels to what's happened now, I highlighted concerns about Amazon deciding to deplatform Wikileaks in response to angry demands from then Senator Joe Lieberman. I found that to be highly problematic, and likely unconstitutional -- though Wikileaks, without a US presence, had little standing to challenge it at the time. My concern was less with Amazon's decision, and more with Lieberman's pressure.
But it's important to go back to first principles in thinking through these issues. It's quite clear that companies like Amazon, Apple, and Google have every legal right to remove services they don't want to associate with, and there are a ton of reasons why people and companies might not want to associate with Parler. But many people are concerned about the takedowns based on the idea that Parler might be "totally" deplatformed, and that one company saying "we don't want you here" could leave them with no other options. That's not so much a content moderation question, as a competition one.
If it's a competition question, then I don't see why Amazon's decision is really a problem either. AWS has only about 32% market share. There are many other options out there -- including the Trump-friendly cloud services of Oracle, which promotes on its own website how easy it is to switch from AWS. Oracle's cloud already hosts Zoom (and now TikTok's US services). There's no reason it can't also host Parler.*
But, at least according to Parler, it has been having trouble finding an alternative that will host it. And on that front it's difficult to feel sympathy. Any business has to build relationships with other businesses to survive, and if no other businesses want to work with you, you might go out of business. Landlords might not want to rent to troublesome tenants. Fashion houses might choose not to buy from factories with exploitative labor practices. Businesses police each other's business practices all the time, and if you're so toxic that no one wants to touch you... at some point, maybe that's on you, Parler.
The situation with Apple and Google is slightly different, and again, there are lots of nuances to consider. With Apple, obviously, it is controlling access to its own hardware, the iPhone. And there's a reasonable argument to be made that Apple offers the complete package, and part of that deal is that you can only add apps through its app store. Apple has long argued that it does this to keep the phone secure, though it could raise some anti-competitive concerns as well. But Apple has banned plenty of apps in the past (including Parler competitor Gab). And that's part of the nature of iPhone ownership. And, really, there is a way to route around Apple's app store: you can still create web apps that will work on iOS without going through the store. This does limit functionality and the ability to reach deeper into the iPhone for certain features, but those are the tradeoffs.
With Google, it seems like there should be even less concern. Not only could Parler work as a web app, Google does allow you to sideload apps without using the Google Play store. So the limitation was simply that Google didn't want the app in its own store. Indeed, before Amazon took all of Parler down, the company was promoting its own APK to sideload on Android phones.
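For the technically inclined, here's a minimal sketch of just how direct that end-run around the Play Store is. It assumes Android's platform-tools (adb) are installed and USB debugging is enabled on the phone; the filename is hypothetical:

```python
# A minimal sketch, assuming Android platform-tools (adb) are installed
# and USB debugging is enabled on a connected phone. The filename is
# hypothetical. Note that nothing here touches the Play Store at all.
import subprocess

APK = "parler.apk"  # a locally downloaded APK (hypothetical filename)

# 'adb install' copies the APK to the connected device and installs it,
# which is all "sideloading" really amounts to.
subprocess.run(["adb", "install", APK], check=True)
```

Most users would simply allow installs from unknown sources in their settings and tap the downloaded APK, but either way, Google's store is not a gatekeeper on Android the way Apple's is on iOS.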
In the end, it's tough to argue that this is as worrisome as my initial gut reaction suggested. I am still concerned about content moderation when it reaches the infrastructure layer. I am quite concerned that people aren't thinking through the kind of governance questions raised by these sledgehammer-not-scalpel decisions. But when exploring each of the issues as it relates to Parler specifically, it's hard to find anything to be that directly concerned about. There are, mostly, alternatives available to Parler. And in the one area where there apparently aren't (cloud hosting), it seems to be less because AWS has market power, and more because lots of companies just don't want to associate with Parler.
And that is basically the free market telling Parler to get its act together.
* It's noteworthy that AWS customers can easily migrate to Oracle Cloud only because Oracle copied AWS's API without permission -- which, according to Oracle's own lawyers, is copyright infringement. Never expect Oracle not to be hypocritical.
Content Moderation Case Study: Dealing With Demands From Foreign Governments (January 2016)
from the gets-tricky-quickly dept
by Copia Institute - January 15th @ 3:30pm
Summary: US companies obviously need to obey US laws, but dealing with demands from foreign governments can present challenging dilemmas. The Sarawak Report, a London-based investigative journalism operation that reports on issues and corruption in Malaysia, was banned by the Malaysian government in the summer of 2015. The publication chose to republish its own articles on the US-based Medium.com website (beyond its own website) in an effort to get around the Malaysian ban.
In January of 2016, the Sarawak Report published an article about Najib Razak, then prime minister of Malaysia, entitled “Najib Negotiates His Exit BUT He Wants Safe Passage AND All The Money!” It related to allegations of corruption, first published in the Wall Street Journal, regarding money flows from the state-owned 1MDB investment firm.
The Malaysian government sent Medium a letter demanding that the article be taken down. The letter claimed that the article contained false information and that it violated Section 233 of the Communications and Multimedia Act, a 1998 law that prohibits the sharing of offensive and menacing content. In response, Medium requested further evidence of what was false in the article.
Rather than responding to Medium’s request for the full “content assessment” from the Malaysian Communications and Multimedia Commission (MCMC), the MCMC instructed all Malaysian ISPs to block all of Medium throughout Malaysia.
Later that month, people noticed that Medium.com was no longer blocked in Malaysia. Soon after, the MCMC put out a statement saying that Medium no longer needed to be blocked because an audit of 1MDB had been declassified days earlier, and once that report was out, there no longer was a need to block the website: “In the case of Sarawak Report and Medium, there is no need to restrict when the 1MDB report has been made public.”
Originally published on the Trust & Safety Foundation website.
Another Day, Another Location Data Privacy Scandal We'll Probably Do Nothing About
from the this-problem-isn't-going-away dept
by Karl Bode - January 15th @ 1:45pm
Another day, another location data scandal we probably won't do anything about.
Joseph Cox, a one-man wrecking ball on the location data privacy beat the last few years, has revealed how a popular Muslim prayer app has been collecting and selling granular user location data without those users' informed consent. Like so many apps, Salaat First (Prayer Times), which reminds Muslims when to pray, has been recording and selling detailed daily activity data to a third party data broker named Predicio. Predicio, in turn, has been linked to a supply chain of government partners including ICE, Customs and Border Protection, and the FBI.
As usual, users aren't clearly informed that their every waking movement is being monetized on a massive scale, something that concerned an anonymous source familiar with Salaat First's business model:
"Being tracked all day provides a lot of information, and it shouldn't be usable against you, especially if you are unaware of it," the source said."
The report comes on the heels of previous reports showing how a similar app, Muslim Pro, was also found to be collecting sensitive user location data and selling it to third parties with links to the federal government. In most of these instances the companies involved try to hide behind claims that this isn't a big deal because the data involved is anonymized. But there are usually no objective inquiries to confirm that's actually true, and it doesn't much matter regardless because, as countless studies have found, "anonymized" data isn't actually anonymous (especially to a government rich with other data sets).
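To see why the "it's anonymized" defense falls apart, here's a minimal sketch -- with entirely made-up records -- of the standard re-identification trick. Strip the name, keep the advertising ID and coordinates, and the data still gives the person away:

```python
# Illustrative only, with made-up records: why "anonymized" location
# data is trivially re-identifiable. No name, just an ad ID, rounded
# coordinates, and timestamps -- which is plenty.
from collections import Counter
from datetime import datetime

pings = [  # (advertising_id, lat, lon, timestamp) -- hypothetical data
    ("ad-1f9c", 42.28, -83.74, "2021-01-04T02:11:00"),
    ("ad-1f9c", 42.28, -83.74, "2021-01-05T03:40:00"),
    ("ad-1f9c", 42.30, -83.71, "2021-01-05T13:05:00"),
    ("ad-1f9c", 42.28, -83.74, "2021-01-06T01:55:00"),
]

def likely_home(records, ad_id):
    """Guess where a device sleeps: its most common overnight location."""
    overnight = Counter(
        (lat, lon)
        for rid, lat, lon, ts in records
        if rid == ad_id and datetime.fromisoformat(ts).hour < 6
    )
    return overnight.most_common(1)[0][0]

# One property-records lookup on these coordinates later, the
# "anonymous" ID is a named person -- and every other ping in the
# dataset is now that person's daily routine.
print(likely_home(pings, "ad-1f9c"))  # -> (42.28, -83.74)
```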
The data collected here includes app users' phone models, movement habits, operating systems, IP addresses, timestamps, advertising IDs, and more. As with many apps, Salaat First doesn't clearly direct users to its overlong privacy policy at any point. Nor does it make clear that user data is sold to third parties, not just used for advertising purposes. This violates Google's app store guidelines, not that this apparently matters to anybody:
"A Google spokesperson told Motherboard in a statement "The Play Store prohibits the sale of personal or sensitive data collected through Play apps. We investigate all claims related to apps violating our policies, and if we confirm a violation, we take action."
Predicio, however, has been harvesting location data from Android apps and paying developers for years, raising questions about Google's lackluster enforcement of its own policies."
Predicio didn't much want to comment. The laundry list of location data scandals at this point is monumental, and the primary response to the problem, with the occasional exception, has been little more than silence and scattered policy dipshittery.
First there was the Securus and LocationSmart scandal, which showcased how cellular carriers and data brokers buy and sell your daily movement data with only a fleeting effort to ensure all of the subsequent buyers and sellers of that data adhere to basic privacy and security standards. Then there was the blockbuster report showing how this data routinely ends up in the hands of everyone from bail bondsmen to stalkers, again with only a fleeting effort made to ensure the data itself is used ethically and responsibly. Since then, there's been a steady parade of reports showing the same problems throughout adtech.
Throughout it all, government has refused to lift a finger to address the problem, presumably because lobbyists don't want government upsetting the profitable apple cart, and government doesn't want to lose access to its ability to track your every waking stumble without much transparency or oversight. Meanwhile, countless folks continue to labor under the illusion that this sort of widespread dysfunction will be fixed by telecom or adtech "market forces."
It's not clear what people expected would happen when we created a massive online ecosystem that monetizes users' every waking movement (and tied it to the federal government) while simply refusing to pass even a basic privacy law for the internet era. Instead of taking this problem seriously, the nation's top policy voices in 2020 spent most of their time freaking out about a Chinese teen dancing app or trying to destroy a law integral to the functioning of the internet in the mistaken belief this would let them be bigger assholes online.
Former FCC Chair Tom Wheeler Gets Section 230 Very, Very Wrong... Again
from the disappointing dept
by Mike Masnick - January 15th @ 12:11pm
Ajit Pai isn't the only FCC chair who misunderstands Section 230. His predecessor, Tom Wheeler, continues to get it totally wrong as well. A year ago, we highlighted Wheeler's complete confusion over Section 230, in a piece that blamed the law for all sorts of things... that had nothing at all to do with Section 230. I was told by some people that they had talked to Wheeler and explained some of the mistakes in his original piece, but it appears the explanations did not stick.
This week he published another bizarre and misguided attack on Section 230 that gets a bunch of basic stuff absolutely wrong. What's weird is that in the last article we pointed to, Wheeler insisted that social media websites do no moderation, because of 230. But in this one, he's now noting that 230 allowed them to close down the accounts of Donald Trump and some other insurrectionists -- but he's upset that it came too late.
These actions are better late than never. But the proverbial horse has left the barn. These editorial and business judgements do, however, demonstrate how companies have ample ability to act conscientiously to protect the responsible use of their platforms.
Right. Except that... the reason they have "ample ability" is because they know they can't be sued over those choices, thanks to Section 230 and the 1st Amendment. Wheeler's real complaint here is that these private companies didn't act as fast as he wanted in pulling down 1st Amendment-protected speech. Then he misrepresents how Section 230 itself works:
Subsection (2) of Section 230 provides that a platform shall not be liable for, “Any action voluntarily taken in good faith to restrict access to or availability of material that any provider or user considers to be…excessively violent, harassing, or otherwise objectionable…” In other words, editorial decisions by social media companies are protected, as long as they are undertaken in good faith.
This is... only partially accurate, and very misleading. First of all, editorial decisions by companies are protected by the 1st Amendment. Second, subsection (2) almost never comes into play, and the vast, vast majority of Section 230 cases around moderation say that it's subsection (c)(1), not (c)(2), that gives companies immunity from lawsuits over moderation. Assuming that it's (c)(2) alone leads you into dangerously misleading territory. Even worse, (c)(2) has two subsections as well, and when Wheeler says that it applies "as long as they are undertaken in good faith" he ignores that (c)(2)(B) has no such good faith requirement.
Of course, in the very next paragraph, he admits that (c)(1) is what grants the companies immunity, so I'm not even sure why he brings up (c)(2) and the good faith line. That's almost never an issue in Section 230 cases. But the crux of his complaint is that he seems to think it's obvious that social media should have banned Trump and Trump cultists earlier -- and he invokes the classic "nerd harder" line:
Dealing with Donald Trump is a targeted problem that the companies just addressed decisively. The social media companies assert, however, that they have no way to meaningfully police the information flowing on their platform. It is hard to believe that the brilliant minds that produced the algorithms and artificial intelligence that powers those platforms are incapable of finding better outcomes from that which they have created. It is not technological incapacity that has kept them from exercising the responsibility we expect of all other media, it is the lack of will and desire for large-scale profits. The companies’ business model is built around holding a user’s attention so that they may display more paying messages. Delivering what the user wants to see, the more outrageous the better, holds that attention and rings the cash register.
This is a commonly stated view, but it tends to reveal a near total ignorance of how these decisions are made. These companies have large trust and safety teams, staffed with thoughtful professionals who work through a wide variety of trade-offs and challenges in making these decisions. While Wheeler is over here saying that it's obvious the problem is they waited too long and didn't nerd harder to remove these people earlier, you have plenty of others out there screaming that this proves the companies are too powerful, and that they should be barred from banning him.
Anyone who thinks it's a simple business model issue has never been involved in any of these discussions. It's not. There are a ton of factors involved. What happens if you make this move and there's a legal backlash? What happens if you drive all the cultists onto underground sites where we no longer know what they're planning? There are lots of questions, and demanding that these large companies with a variety of competing interests must do it to your standard is the height of privilege. It's impossible to do moderation "right." Because there is no "right." There is just a broad spectrum of wrong.
It's fine to say that companies can do better. It's fine to suggest ways to make better decisions. But too many pundits and commentators act as if there's some "correct" decision and any result that differs from that cannot possibly be right. And, even worse, they blame Section 230 for that -- when the reality is that Section 230 is what enables the companies to explore different solutions, as both Twitter and Facebook have done for years.
Wheeler's "solution" for reforming Section 230 is also ivory tower academic nonsense that seems wholly disconnected from the reality of how content moderation works within these companies.
Social media companies are media, not technology
Mark Zuckerberg testified to Congress, “I consider us to be a technology company because the primary thing we do is have engineers who write code and build product and services for other people.” That software code, however, makes editorial decisions about which information to choose to route to which people. That is a media decision. Social media companies make money by selling access to its users just like ABC, CNN, or The New York Times.
Even though he says this is an idea for reform... it's just a statement? And a meaningless one at that. It doesn't matter if they're media or technology. They're a mixture of both and something new. Trying to lump them into old buckets doesn't help and doesn't take us anywhere useful. And, honestly, if your goal here is to reform Section 230, declaring these companies media companies doesn't help, because media companies and their editorial decisions are wholly protected by the 1st Amendment.
There are well established behavioral standards for media companies
The debate should be over whether and how those standards change because of user generated content. The absolute absence of liability afforded by Section 230 has kept that debate from occurring.
Um. No. Again, these are not the same as traditional media companies. They have some similarities and some differences. Section 230 doesn't change anything. And if Tom Wheeler honestly thinks that there hasn't been a debate about behavioral standards on content moderation, then he shouldn't be commenting on this at all. There has been an active discussion and debate on this stuff for years. The fact that he's ignorant of it doesn't mean it doesn't happen. Indeed, the very fact that he doesn't know about the debate that has gone on among trust and safety professionals and executives at these companies for many, many years suggests that Tom Wheeler should perhaps take some time to learn what's really going on before declaring from on high what he thinks is and is not happening.
But the key point here is that the standards of traditional media companies don't work well for social media, because of the very differences in social media. A regular media company can have such standards because it needs to review a very, very limited amount of content each day -- on the order of dozens of stories. A social media company often has to deal with millions or billions of pieces of content every day (or, in some cases, every hour). The unwillingness to comprehend that difference in scale suggests someone who has not thought these issues through.
Technology must be a part of the solution
When the companies hire thousands of human reviewers it is more PR than protection. Asking humans to inspect the data constantly generated by algorithms is like watching a tsunami through a straw. The amazing power of computers created this situation, the amazing power of computers needs to be part of the solution.
I mean... duh? Is there anyone who doesn't think technology is part of the solution? Every single company with user generated content, even a tiny one like us, makes use of technology to help moderate. And there are a bunch of companies out there building more and more solutions (some of them very cool!). I'm confused, though, how this matters to the Section 230 debate. Changing Section 230 will not change the fact that companies use technology to help them moderate. It won't suddenly conjure up more technology to help companies moderate. This whole point makes it sound like Tom Wheeler never bothered to actually speak to an expert on how content moderation works -- which, you know, is kind of astounding when he then positions himself to give advice on how to force companies to moderate.
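For what it's worth, the uncontroversial version of Wheeler's point fits in a few lines. Here's a minimal, invented sketch (the phrases, categories, and queue names are made up for illustration) of how essentially every site with user generated content already pairs automation with human review:

```python
# A minimal, invented sketch of automated triage: software handles the
# clear-cut cases, humans get the ambiguous ones. Real systems use
# trained classifiers, not substring matching -- this just shows the shape.

HARD_BLOCK = {"free crypto giveaway"}        # hypothetical: auto-actionable
NEEDS_HUMAN = {"miracle cure", "act now"}    # hypothetical: judgment calls

def triage(post: str) -> str:
    text = post.lower()
    if any(term in text for term in HARD_BLOCK):
        return "auto-hold"       # held by software immediately
    if any(term in text for term in NEEDS_HUMAN):
        return "human-review"    # routed to a trust & safety queue
    return "publish"

print(triage("FREE CRYPTO GIVEAWAY, click here"))    # -> auto-hold
print(triage("This miracle cure changed my life"))   # -> human-review
print(triage("Here's a photo of my dog"))            # -> publish
```

None of that requires touching Section 230; it's simply what moderation tooling already looks like, at every scale.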
It is time to quit acting in secret
When algorithms make decisions about which incoming content to select and to whom it is sent, the machines are making a protected editorial decision. Unlike the editorial decisions of traditional media whose editorial decisions are publicly announced in print or on screen and uniformly seen by everyone, the platforms’ determinations are secret: neither publicly announced nor uniformly available. The algorithmic editorial decision is only accidentally discoverable as to the source of the information and even that it is being distributed. Requiring the platforms to provide an open API (application programming interface) to their inflow and outflow, with appropriate privacy protections, would not interfere with editorial decision-making. It would, however, allow third parties to build their own algorithms so that, like other media, the results of the editorial process are seen by all.
So, yes, some of this I agree with. I mean, I wrote a whole damn paper on trying to move away from proprietary social media platforms to a world built on protocols. But, the rest of this is... again, suggestive of someone who has little knowledge or awareness of how moderation works.
First, I don't see how Wheeler's analogy with media even makes sense here. There are tons of editorial decisions that the public will never, ever know about. How can he argue that they're "publicly announced"? The only information that news media makes public is what it finally decides to publish or air. But... that's nowhere near the entirety of editorial decision making. We don't see what stories never make it. We don't see how stories are edited. We don't see what important facts or quotes are snipped out. We don't see the debates over headlines. We have no idea why one story gets page 1, top-of-the-page treatment, while some other story gets buried on A17. The idea that media editorial is somehow more public than social media moderation choices is... weird?
Indeed, in many ways, social media companies are way more transparent than traditional media companies. They even have transparency reports that have details about content removals and other information. I've yet to see a mainstream media operation do that about their editorial practices.
Finally, demanding "transparency" is another one of those solutions that occurs to people who have never done content moderation. I recently wrote about the importance of transparency, but also the dangers of mandated transparency. I won't rehash that all over again, but the debate is not nearly as simple as Wheeler makes it out to be. A few quick points, though: transparency reports have already been abused by some governments, allowing them to celebrate and push for ever greater censorship of criticism of the government. We should be concerned about that. On top of that, transparency around moderation can be extremely costly and, again, create a massive burden for smaller players.
But perhaps one of the biggest issues with the kind of transparency that Wheeler is asking for is that it assumes good faith on the part of users. I've pointed out a few times now that we've had our comment moderation system in place for over a decade now and in that time the only people who have ever demanded "more transparency" into how it works are those looking to game the system. Transparency is often demanded from the worst of your users who want to "litigate" every aspect of why their content was removed or why they were banned. They want to search for loopholes or accuse you of unfair treatment. In other words, despite Wheeler's whole focus being on encouraging more moderation of voices he believes are harmful, forced transparency is likely to cut down on that, as it gives those moderated more "outs" or limits the willingness of companies to moderate "edge" cases.
The final paragraph of Wheeler's piece is so egregious and so designed to make a 1st Amendment lawyer's head explode, that I'm going to go over it sentence by sentence.
Expecting social media companies to exercise responsibility over their practices is not a First Amendment issue.
Uh... expecting social media companies to exercise responsibility over their practices absolutely is a 1st Amendment issue. The 1st Amendment has long been held to both include a prohibition on compelled speech, as well as a right of association (or non-association). That is, these companies have a 1st Amendment right to moderate as they see fit, and to not be compelled to host speech, or be forced to associate with those they don't want to associate with. That's why many of the complaints are really 1st Amendment issues, not Section 230 issues.
Relatedly, it feels like part of the problem with Wheeler's piece is that he's bought into the myth that, with Section 230, there are no incentives to moderate at all. That's clearly false, given how much moderation we've seen. The false thinking is driven by the belief that the only incentive to moderate is the law. That's ridiculous. The health of your platform is dependent on moderation. Keeping your users happy, and not having your site turn into a garbage dump of spam, harassment, and hate, is a very strong motivator for moderation. Advertisers are another motivation, since they don't want their ads appearing next to bigotry and hatred. The focus on the law as the main lever here is just wrong.
It is not government control or choice over the flow of information.
No, but changing Section 230... would do that. It would force companies to change how they moderate. This is a reason why Section 230 is so important. It gives companies (and users!) a freedom to experiment.
It is rather the responsible exercise of free speech.
Which... all of these companies already do. So what's the point here?
Long ago it was determined that the lie that shouted “FIRE!” in a crowded theater was not free speech. We must now determine what is the equivalent of “FIRE!” in the crowded digital theater.
Long time Techdirt readers will already be screaming about this. This claim is not just wrong, it's very, very ignorant about the 1st Amendment. The "falsely shouting fire in a crowded theater" line was a throwaway line in an opinion by Justice Holmes that was actually about jailing someone for handing out anti-war pamphlets. It was never actually the standard in 1st Amendment jurisprudence, and it was effectively overturned in later cases, meaning it is not an accurate statement of the law.
Tom Wheeler is very smart and thoughtful on so many things that it perplexes me that he jumps into this area without bothering to understand the first thing about Section 230, the 1st Amendment, or content moderation. There are experts on all three that he could talk to. But even more ridiculous: even assuming everything he says is accurate, what actual policy proposal does he make in this piece? Tech companies should use tech in their moderation efforts? That seems like the only actionable point.
There are lots of bad Section 230/content moderation takes out there, and I can't respond to them all. But this is the former chair of the FCC, and when he speaks, people pay attention. And it's extremely disappointing that he would jump into this space headfirst with so many factual errors and mistaken assumptions. It's doubly troubling that this is the second time (at least!) that he's now done this. I hope that someone at Brookings, or someone close to him suggests he speak to some actual experts before speaking on this subject again.
Nintendo Hates You: Gaming Giant Lobs A DMCA Nuke At Hundreds Of Fan Games
from the nintendon't dept
by Timothy Geigner - January 15th @ 10:44am
Nintendo has built quite a reputation for itself as an intellectual property protectionist, going much further than most other game publishers to exert strict control over all of its IP. While this control is deployed in a wide-ranging manner, one of the most visible, common, and consequential avenues for this protectionism comes in the form of Nintendo getting all manner of fan-made creations taken down. These games, almost universally created by huge Nintendo fans as labors of love, are nearly always the subject of DMCA takedowns. Think for just a moment what that means: Nintendo is disallowing, on the regular, the expression of fandom by its own customers.
Every time I have written a post about some individual game being erased in this manner, it has left me scratching my head: why isn't Nintendo exploring how to officially endorse these fan-made creations, in a way that protects its IP while still promoting its own fans? But now that Nintendo has managed to get hundreds of fan games taken down in one fell swoop, well, it seems the company has decided to take this war on its own fans to another level.
Hundreds of non-commercial Nintendo fangames have been removed from the popular game publishing community Game Jolt after the platform complied with several DMCA takedown requests. Many of the affected games have dedicated fanbases including many die-hard Nintendo fans, some of whom now seem eager to revolt.
A few days ago, Nintendo’s legal department sent DMCA notices to the game publishing community Game Jolt. The site, where hobbyists and indie developers share their creations for free, was notified that hundreds of fangames infringed Nintendo’s trademarks.
Now, GameJolt does advertise on its site, leading Nintendo's DMCA notice to warn that the site was profiting from this infringement. As a result, the site took 379 games down. But it's worth noting both that uploaders appear to have an option to turn off ads on their games' pages and that the developers and fans for these games are absolutely livid about Nintendo's actions. One developer summed it up nicely and then went on to point out why this attempt at enforcement by Nintendo probably won't even work.
“They’ll get no sympathy from me, this isn’t the first time they’ve pulled a stunt like this. They’ve made it clear they hate their fans and repeat it time and time again never learning from it.”
The developer will continue to work on his “Five Nights At Team HQ series” but fears that it will be targeted eventually. That doesn’t stop the developer though, and he encourages others to simply flood the Internet with copies.
“Nintendo if you think taking down everyone’s games will help your image and get people to buy more of your games then you’re sorely mistaken! I’ll keep making and reuploading fan games even if you try to take them down, so DEAL WITH IT! All people who have copies of the fangames that were taken down take them and reupload them all over the internet so they stay up no matter what!”
And that already appears to be happening. Some developers are taking their fan games to other platforms and reuploading them there. Other developers are actually reuploading their games back to GameJolt, but without advertising on them, in the hopes that banner ads were somehow the lynchpin that caused Nintendo to target their particular games.
Nintendo has remained silent on the matter as of the time of this writing, but readers here will know its reputation well enough to know that removing advertisements won't stop the takedowns. Nintendo hates fan-made creations, see, and has taken down plenty of games that were completely devoid of any commercial aspects.
The only thing that will stop the company from treating its biggest fans so poorly, it seems, is if those fans suddenly begin disappearing like the games they created.
Daily Deal: The Premium 2021 Project And Quality Management Bundle
from the good-deals-on-cool-stuff dept
by Daily Deal - January 15th @ 10:39am
The Premium 2021 Project and Quality Management Bundle has 22 courses to help you learn how to handle any project and deliver efficient results. You'll be introduced to the fundamentals of project and product management, the keys to being a successful leader, and how to prepare for various certification exams. Courses cover Six Sigma, Agile, Jira, Lean, and more. It's on sale for $46.
Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
WSJ Publishes Bizarre Op-Ed Claiming Social Media Companies Are State Actors Bound By The 1st Amendment
from the did-he-teach-hawley? dept
by Mike Masnick - January 15th @ 9:39am
I'm beginning to see where Josh Hawley got his totally nutty ideas about the 1st Amendment. The Wall Street Journal has an utterly insane piece by Yale Law professor Jed Rubenfeld -- currently suspended due to sexual harassment claims, and infamously quoted telling prospective law clerks for then-Judge Brett Kavanaugh that Kavanaugh "hires women with a certain look" -- and a... um... biotech executive named Vivek Ramaswamy, who is mad about "woke" companies. The two insist (wrongly) that the big internet companies are actually part of the US government and therefore have to abide by the 1st Amendment in their content moderation practices.
Honestly, the level of thinking here is on par with your typical Breitbart commenter, not a well known (if slightly disgraced) Yale Law professor.
Conventional wisdom holds that technology companies are free to regulate content because they are private, and the First Amendment protects only against government censorship. That view is wrong: Google, Facebook and Twitter should be treated as state actors under existing legal doctrines. Using a combination of statutory inducements and regulatory threats, Congress has co-opted Silicon Valley to do through the back door what government cannot directly accomplish under the Constitution.
It's not just "conventional wisdom." It's lots and lots of legal precedent and a general understanding of 1st Amendment doctrine going back ages. State action doctrine is not some brand new concept. I mean, there have been some very thoughtful academic pieces on the idea that state action doctrine should be changed to try to make it apply to social media companies. But those are academic papers suggesting how they think the law should change. They're not saying it fits under current doctrine.
Because it doesn't.
It is “axiomatic,” the Supreme Court held in Norwood v. Harrison (1973), that the government “may not induce, encourage or promote private persons to accomplish what it is constitutionally forbidden to accomplish.” That’s what Congress did by enacting Section 230 of the 1996 Communications Decency Act, which not only permits tech companies to censor constitutionally protected speech but immunizes them from liability if they do so.
So... the first sentence is correct. It's also why we've repeatedly raised concerns about lawmakers demanding specific content moderation options. But the second sentence is just laughably wrong. Nothing in Section 230 of the Communications Act induces, encourages, or promotes private persons to accomplish what is constitutionally forbidden to accomplish. The 1st Amendment protects a company's right not to associate with those it does not wish to associate with. It also protects against being compelled to host speech it disagrees with. Those are both constitutionally protected things. Section 230 does not change that.
The piece does highlight some members of Congress stupidly (and, I believe, unconstitutionally) pressuring Facebook and Google to restrict "harmful content." And I agree that's wrong. But it's a massive leap towards insanity to try to spin that as saying that those vague threats from elected officials magically turn the websites themselves into arms of the state, subject to the 1st Amendment restrictions placed on government. I mean, if it did, you've just handed Congress a magic tool to effectively nationalize any company: just unconstitutionally order them to do something, and voila, they're now state actors.
That's insane. That would only encourage Congress to make unconstitutional demands of companies to have those companies declared state actors. It's bonkers. I feel sorry for Yale Law students who deserve better.
Such threats have worked. In September 2019, the day before another congressional grilling was to begin, Facebook announced important new restrictions on “hate speech.” It’s no accident that big tech took its most aggressive steps against Mr. Trump just as Democrats were poised to take control of the White House and Senate. Prominent Democrats promptly voiced approval of big tech’s actions, which Connecticut Sen. Richard Blumenthal expressly attributed to “a shift in the political winds.”
So... the argument here is that you want more hate speech online, and you're mad that Facebook is restricting it? Holy shit. What is wrong with you?
And, um, note what is left out in this claim about exactly when these companies "took its most aggressive steps against Mr. Trump." It's not just about the fact that Democrats were poised to take control of the Executive and Legislative branches, but because Trump had just inspired a fucking riot at the Capitol building in an effort to overturn a free and fair election and reports were coming out that he was happy about what happened, worrying many that he would encourage yet more attacks in the days leading up to the Biden inauguration.
Seems like kind of an important thing to include, no? There's no indication that the Trump bans were about politics at all. There is every indication they were about preventing an armed insurrection and possible civil war. But Rubenfeld and Ramaswamy literally ignore all of that and insist that it's some sort of ideological or political issue... and stretch that to argue that these companies are arms of the state. A state that is still controlled by Donald Trump.
I mean, this is embarrassing.
For more than half a century courts have held that governmental threats can turn private conduct into state action. In Bantam Books v. Sullivan (1963), the Supreme Court found a First Amendment violation when a private bookseller stopped selling works state officials deemed “objectionable” after they sent him a veiled threat of prosecution. In Carlin Communications v. Mountain States Telephone & Telegraph Co. (1987), the Ninth U.S. Circuit Court of Appeals found state action when an official induced a telephone company to stop carrying offensive content, again by threat of prosecution.
As the Second Circuit held in Hammerhead Enterprises v. Brezenoff (1983), the test is whether “comments of a government official can reasonably be interpreted as intimating that some form of punishment or adverse regulatory action will follow the failure to accede to the official’s request.” Mr. Richmond’s comments, along with many others, easily meet that test. Notably, the Ninth Circuit held it didn’t matter whether the threats were the “real motivating force” behind the private party’s conduct; state action exists even if he “would have acted as he did independently.”
Again, this is getting the facts all mixed up. All of these cases are important ones for why the government cannot force companies into moderating the way it sees best. It's why nearly all proposals to modify Section 230 are unconstitutional. But... those were all cases where officials made specific demands that a company then followed through on -- and so the actions were really seen as government actions. Here, there was no government official demanding that Twitter or Facebook block Trump. Trump is the President.
I agree that there might be an argument that elected officials who make specific moderation demands could be violating the 1st Amendment rights of speakers (and of the companies themselves!), but arguing that vague statements by elected officials urging companies to be better about "taking responsibility" turn all moderation decisions into state action is galaxy-brain nonsense.
The piece does at least note that repealing Section 230 is a bad idea, but then goes off the rails immediately:
Republicans including Mr. Trump have called for Section 230’s repeal. That misses the point: The damage has already been done. Facebook and Twitter probably wouldn’t have become behemoths without Section 230, but repealing the statute now may simply further empower those companies, which are better able than smaller competitors to withstand liability. The right answer is for courts to recognize what lawmakers did: suck the air out of the Constitution by dispatching big tech to do what they can’t. Now it’s up to judges to fill the vacuum, with sound legal precedents in hand.
Uh, sure. Let's have judges produce precedent -- a la Backpage v. Dart -- that says that elected officials cannot threaten or try to force companies to do something unconstitutional. But, that should be on the public officials, not on companies, and certainly not when these actions were not taken at the behest of elected officials, but in order to try to stop an armed insurrection (which, again, the authors never bother to mention other than an oblique reference towards the end of the piece about "the breach of the Capitol").
The article also says that these companies now might try to block Joe Biden because they don't like his support of antitrust action against them. And...um... does anyone believe that? That's insane (beyond the fact that there are no antitrust suits against Twitter, which isn't even that big in the first place). But even if they did do that, it would immediately backfire. I mean, it would not just be ridiculous, laughable, and a total PR disaster, but it would play right into the hands of those suing the companies for antitrust.
I'm not sure how difficult it is for Rubenfeld and Ramaswamy to get this through their skulls, but the bans last week were not because of a policy disagreement with the President. No one's blocking anyone for their "conservative viewpoints." It was because he had just inspired a violent mob to attack the Capitol -- while Congress was in session trying to officially count the Electoral College votes -- and five people died. That violates every possible terms of service agreement ever written.
There’s more at stake than free speech. Suppression of dissent breeds terror. The answer to last week’s horror should be to open more channels of dialogue, not to close them off. If disaffected Americans no longer have an outlet to be heard, the siege of Capitol Hill will look like a friendly parley compared with what’s to come.
There are tons of outlets for "disaffected Americans." They have many outlets to be heard. What they don't have is a right to demand that any company host their speech when they are spreading blatant disinformation and violent rhetoric, including calling on people to literally murder public officials.
Ordinary Americans understand the First Amendment better than the elites do. Users who say Facebook, Twitter and Google are violating their constitutional rights are right. Aggrieved plaintiffs should sue these companies now to protect the voice of every American—and our constitutional democracy.
If they do, they will lose, and they will lose badly. It will be an embarrassing waste of money. One hopes that anyone thinking of filing such a lawsuit discusses it with a lawyer trained by actual legal experts, and not taught by Jed Rubenfeld.
Broadband Market Failure Keeps Forcing Americans To Build Their Own ISPs
from the do-not-pass-go,-do-not-collect-$200 dept
by Karl Bode - January 15th @ 6:32am
For decades now a growing number of US towns and cities have been forced into the broadband business thanks to US telecom market failure. Frustrated by high prices, lack of competition, spotty coverage, and terrible customer service, some 750 US towns and cities have explored some kind of community broadband option. And while the telecom industry routinely likes to insist these efforts always end in disaster, that's never actually been true. While there certainly are bad business plans and bad leaders, studies routinely show that such services not only see the kind of customer satisfaction scores that are alien to large private ISPs, they frequently offer better service at lower, more transparent pricing than many private providers.
The latest case in point: Ars Technica profiles one Michigan resident who had to build his own ISP from scratch after "the market" couldn't be bothered to service his address with anything even close to current generation broadband speeds:
"I had to start a telephone company to get [high-speed] Internet access at my house," Mauch explained in a recent presentation about his new ISP that serves his own home in Scio Township, which is next to Ann Arbor, as well as a few dozen other homes in Washtenaw County."
If you take a gander at a map, it's not like we're talking about the middle of the desert. It's just one of a long line of communities deemed not profitable enough, quickly enough, by major US ISPs. Granted, the price tag of the endeavor was well out of range for most Americans, especially lower income Americans who were already stuck on the wrong side of the digital divide before COVID came to town:
"Mauch said he has spent about $145,000, of which $95,000 went to the contractor that installed most of the fiber conduits. The fiber lines are generally about six feet underground and in some cases 10 or 20 feet underground to avoid gas pipes and other obstacles."
His plight is far from unique. US phone companies have effectively given up on fixed residential broadband to focus on wireless advertising, leaving companies like Comcast with a growing monopoly across countless US markets. Despite billions upon billions in subsidies, tax breaks, and dubious regulatory favors thrown at the nation's telecom monopolies, an estimated 83 million Americans only have the choice of one provider, while 43 million have zero options. This is obvious market failure, yet there's an entire cottage industry of industry allies whose entire function is to pretend that none of this is actually happening.
This obvious market failure, and our complete twenty-plus-year failure to tackle it, is bad enough (keep in mind, the Trump FCC and its associated industry BFFs can't even admit the market isn't competitive or that monopolization is a problem). But when desperate and angry communities decide to build their own networks, that's when the fun really starts.
First, such efforts quickly run into lawsuits by ISPs that didn't want to service these areas, but don't want anybody else doing so either. Then, they face a wave of bullshit disinformation (extra heavy if there's an ordinance being voted on), almost always covertly funded by industry, that tries to demonize these kinds of desperate efforts as socialism or even an affront to free speech. If that's not enough, they'll also run into bullshit protectionist laws, passed in more than 20 states and ghostwritten by industry lobbyists, that make these creative, local efforts as difficult as possible.
Community broadband isn't some mystical panacea. But it's absolutely a valid idea for communities that have been left behind for twenty years to improve their own infrastructure if that's what the community wants. Better yet, such efforts almost always result in lazy monopolies trying a little bit harder than they usually would under the one/two punch of limited competition and feckless, chickenshit regulators. But instead of embracing these efforts, our industry-captured regulators have chosen to demonize communities as the US FCC pretends that blind taxpayer subsidization, mindless deregulation, and more mergers are somehow going to fix rampant monopolization.