Stories from Friday, August 7th, 2020
Console Exclusive Games Have Given Way To Console Exclusive Game Characters
from the mine-mine-mine dept
by Timothy Geigner - August 7th @ 7:39pm
Editor's Note: Originally, this article was set to run before the article of Crystal Dynamics defending this decision... but somehow that didn't happen. You can read that article here if you like, or if you haven't already, you can read this one first, and recognize that time has no meaning any more, so the linear publishing of articles is no longer necessary... or maybe Mike just screwed things up. One of those.
For anything that isn't first-party content, I will never understand why games sell as console exclusives. Maybe there is math out there under which a publisher limiting itself to one sliver of the potential market makes sense, but somehow I have a hard time believing it. That's all the more the case given that the recent trend has been toward less exclusivity, not more. While the PC market is now seeing platform exclusivity emerge, something which makes even less sense than it does on consoles, game franchises that were once jealously guarded exclusives, such as MLB The Show, are announcing plans to open up to more systems, including PCs.
But it seems the instinct to carve out something exclusive for your system is hard to shake. Or, that's at least the case for Sony, which has managed to retain exclusive rights for the character Spider-Man in the upcoming Marvel's Avengers game.
In a move already being roundly criticized on social media, Crystal Dynamics’ Jeff Adams revealed today that Spider-Man will be available as a free update for PlayStation players of this September’s Marvel’s Avengers game in “early 2021.” PC and Xbox One players, apparently, won’t get to play as him.
Adams announced the move in a PlayStation blog post, offering no insight as to why PC and Xbox players would miss out and outlining no exclusive content for those games. It doesn’t appear to be a timed exclusive. When Kotaku reached out to Square Enix, the game’s publisher, for comment, about that and the rest of the deal, we were directed to Adams’ blog post—which didn’t answer any of our questions.
Now, there is some complicated licensing potentially at issue here. While Disney owns the rights to The Avengers generally, Sony has retained many of the publishing rights for the Spider-Man character. In 2018, the excellent Spider-Man video game came out as a PlayStation exclusive, and many assumed that Sony held the sole game publishing rights to the character. But that doesn't seem to be true, no matter what noises Sony has made in the past. Instead, those rights still appear to reside with Marvel, which has tended to lean toward the PlayStation. But, as the Kotaku article points out, it's not as though Spider-Man has never appeared on other systems. He's been in Nintendo games, along with other titles, such as Marvel's Lego series of games.
The idea behind these exclusive deals, be it for entire game franchises or for characters like Spider-Man, is to try to engender some kind of loyalty among the fan-base by having this exclusive content. And perhaps at one point that worked. But these days, the only thing Sony seems to be getting for its trouble is backlash. And when Forbes is out here saying that this character exclusive isn't just bad for the other platforms the game will appear on, but bad for PlayStation players as well, then maybe it's time to rethink this whole thing.
The problem with exclusives is that they not only hurt the obvious suspects, the platforms that are not getting X or Y exclusive, which in this case is Xbox and PC players, but they even hurt the platform that’s supposed to benefit from them.
With Avengers, it’s easy to see how this could play out in a similar fashion. While the main storyline of Avengers seems to be playing out around six launch heroes, Black Widow, Hulk, Thor, Captain America, Iron Man and Ms. Marvel, the entire point of the game is that it will be an ongoing story that unfolds in time. It’s easy to see how a character like Spider-Man, a prominent Avenger in both the MCU and the comics, could have been integrated into a major storyline at some point in the future as the game expands. But the fact that he’s exclusive to PlayStation essentially insures that he cannot be a major player in the story, relegated to some sort of introductory side mission, and that’s it, or as a tag-along to other missions without a major active role.
So why do this at all? Because old habits are hard to shake, probably. And, frankly, Sony's gonna Sony. But that doesn't make any of this less dumb, less bad for the gaming community, or less bad for even those who will get this exclusive character.
from the real-time-decision-making dept
by Copia Institute - August 7th @ 3:31pm
Summary: The ability to instantly upload recordings and stream live video has made content moderation much more difficult. Uploads to YouTube have surpassed 500 hours of content every minute (as of May 2019), making any form of moderation inadequate.
The same goes for Twitter and Facebook. Facebook's user base exceeds two billion worldwide. Over 500 million tweets are posted to Twitter every day (as of May 2020). Algorithms and human moderators are incapable of catching everything that violates terms of service.
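To see why those numbers overwhelm human review, here's a back-of-the-envelope sketch using the figures above. The eight-hour review shift per moderator is an assumption added purely for illustration:

```python
# Back-of-the-envelope arithmetic on the moderation numbers above.
# The input figures (500 hours/minute for YouTube, 500M tweets/day for
# Twitter) come from the article; everything derived is illustrative.

MINUTES_PER_DAY = 24 * 60  # 1,440

youtube_hours_per_minute = 500
youtube_hours_per_day = youtube_hours_per_minute * MINUTES_PER_DAY
# 500 * 1,440 = 720,000 hours of new video every day

# If one reviewer could screen 8 hours of footage per shift (optimistic,
# since disturbing content requires breaks and second opinions),
# real-time human review alone would need:
reviewers_needed = youtube_hours_per_day / 8  # 90,000 full shifts per day

tweets_per_day = 500_000_000
tweets_per_second = tweets_per_day / 86_400  # ~5,787 tweets every second

print(f"{youtube_hours_per_day:,} hours of video/day")
print(f"~{reviewers_needed:,.0f} full review shifts/day")
print(f"~{tweets_per_second:,.0f} tweets/second")
```

Even with those optimistic assumptions, the staffing required for exhaustive human review is implausible, which is why platforms lean on algorithmic triage despite its blind spots.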
When the unthinkable happened -- as it did on August 26, 2015 -- these two social media services responded swiftly. But even their swift efforts weren't enough. The videos posted by Vester Lee Flanagan, a disgruntled former employee of CBS affiliate WDBJ in Virginia, showed him tracking down a WDBJ journalist and cameraman and shooting them both.
Both platforms removed the videos and deactivated Flanagan's accounts. Twitter's response took only minutes. But the spread of the videos had already begun, leaving moderators to try to track down duplicates before they could be seen and duplicated yet again. Many of these ended up on YouTube, where moderation efforts to contain the spread still left several reuploads intact. This was enough to instigate an FTC complaint against Google, filed by the father of the journalist killed by Flanagan. Google responded by stating it was still removing every copy of the videos it could locate, using a combination of AI and human moderation.
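The duplicate-hunting described above is commonly automated with perceptual hashing: fingerprint each video frame so that near-identical re-uploads produce near-identical bit strings, which can be compared far faster than raw pixels. The snippet below is a minimal sketch of the idea using a toy "difference hash" on hand-invented pixel data; real platform pipelines are far more elaborate than this:

```python
# A minimal sketch of perceptual hashing for catching re-uploads.
# This toy "difference hash" illustrates the core idea: small
# re-encoding artifacts barely change the fingerprint, so copies can
# be flagged by counting differing bits (Hamming distance).

def dhash(pixels):
    """Difference hash of a grayscale frame (2D list of 0-255 values).

    Each bit records whether a pixel is brighter than its right-hand
    neighbor, so mild compression noise leaves most bits unchanged.
    """
    bits = []
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits.append(1 if left > right else 0)
    return bits

def hamming(h1, h2):
    """Number of bit positions where two hashes differ."""
    return sum(a != b for a, b in zip(h1, h2))

# A "frame" from the original video, a slightly re-encoded copy,
# and an unrelated frame (all pixel values invented for illustration).
original = [[10, 80, 30, 200], [5, 5, 90, 90]]
reupload = [[12, 78, 31, 198], [6, 6, 91, 91]]   # tiny pixel drift
unrelated = [[200, 10, 150, 20], [90, 200, 5, 180]]

assert hamming(dhash(original), dhash(reupload)) <= 1   # flagged as a copy
assert hamming(dhash(original), dhash(unrelated)) > 2   # clearly different
```

Matching hashes only catches near-exact copies, though; cropped, mirrored, or re-narrated versions evade simple fingerprints, which is why human moderators remain in the loop.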
Users of Facebook and Twitter raised a novel complaint in the wake of the shooting, demanding "autoplay" be opt-in -- rather than the default setting -- to prevent them from inadvertently viewing disturbing content.
Moderating content as it is created continues to pose challenges for Facebook, Twitter, and YouTube -- all of which allow live-streaming.
Decisions to be made by social media platforms:
Questions and policy implications to consider:
Content like this is a clear violation of terms of service agreements, making removal -- once notified and located -- straightforward. But being able to "see" it before dozens of users do remains a challenge.
Focals 'Smart' Glasses Become Dumb As A Brick After Google Acquisition
from the you-don't-own-what-you-buy dept
by Karl Bode - August 7th @ 1:46pm
Time and time again we've highlighted how, in the modern era, you don't really own the hardware you buy. Music, ebooks, and videos can disappear on a dime without recourse, your game console can lose important features after a purchase, and a wide variety of "smart" tech can quickly become dumb as a rock in the face of company struggles, hacks, or acquisitions, leaving you with pricey paperweights where innovation once stood.
The latest case in point: Google acquired Waterloo, Ontario-based North back in June. For several years, North had been selling AR-capable "smart" glasses dubbed Focals. Generally well reviewed, Focals started at $600, went dramatically up from there, and required you to visit one of two North stores -- either in Brooklyn or Toronto -- to have your head carefully measured using 11 3D modeling cameras. The glasses themselves integrated traditional prescription glasses with smart technology, letting you enjoy a heads-up display and AR notifications directly from your phone.
But with the Google acquisition, North posted a statement to its website, stating the company was forced to make the "difficult decision" to wind down support for Focals as of the end of July, at which point the "smart" tech would become rather dumb:
Sorry, your smart glasses are now just dumb glasses. pic.twitter.com/IZy8w5vJlv
— Joanna Stern (@JoannaStern) July 28, 2020
The full blog post notes that not only will North be killing off all online functionality for its first generation of Focals glasses, but it's also cancelling production of its second-generation Focals 2.0 product:
"Focals smart glasses and its services are being discontinued and will no longer be available after July 31st, 2020. You won’t be able to connect your glasses through the app or use any features, abilities, or experiments from your glasses...We will not be shipping Focals 2.0, but we hope you will continue the journey with us as we start this next chapter."
Fortunately, the company says it's giving an automatic refund to all Focals 1.0 customers, though we'll have to see how well that works out in practice as the company shifts the lion's share of its focus toward getting swallowed up by Google and spending wheelbarrows full of acquisition money. Still, whether it's your smart glasses or your smart pet food bowl, it's yet another example of how sticking with dumb tech is very often less hassle and the better option.
Revisiting The Common Law Liability Of Online Intermediaries Before Section 230
from the nuts-and-bolts dept
by Robert Hamilton - August 7th @ 12:00pm
On February 8, 1996, President Clinton signed into law the Telecommunications Act of 1996. Title V of that act was called the Communications Decency Act, and Section 509 of the CDA was a set of provisions originally introduced by Congressmen Chris Cox and Ron Wyden as the Internet Freedom & Family Empowerment Act. Those provisions were then codified at Section 230 of Title 47 of the United States Code. They are now commonly referred to as simply “Section 230.”
Section 230 prohibits a “provider or user” of an “interactive computer service” from being “treated as the publisher or speaker” of content “provided by another information content provider.” 47 U.S.C. § 230(c)(1). The courts construed Section 230 as providing broad federal statutory immunity to the providers of online services and platforms from any legal liability for unlawful or tortious content posted on their systems by their users.
When it enacted Section 230, Congress specified a few important exceptions to the scope of this statutory immunity. It did not apply to liability for federal crimes or for intellectual property infringement. And in 2018, President Trump signed into law an additional exception, making Section 230’s liability protections inapplicable to user content related to sex trafficking or the promotion of prostitution.
Nevertheless, critics have voiced concerns that Section 230 prevents the government from providing effective legal remedies for what those critics claim are abuses by users of online platforms. Earlier this year, legislation to modify Section 230 was introduced in Congress, and President Trump has, at times, suggested the repeal of Section 230 in its entirety.
As critics, politicians, and legal commentators continue to debate the future of Section 230 and its possible repeal, there has arisen a renewed interest in what the potential legal liability of online intermediaries was for the content posted by their users under the common law, before Section 230 was enacted. Thirty years ago, as a relatively young lawyer representing CompuServe, I embarked on a journey to explore that largely uncharted terrain.
In the pre-Section 230 world, every operator of an online service had two fundamental questions for their lawyers: (1) what is my liability for stuff my users post on my system that I don’t know about?; and (2) what is my liability for the stuff I know about and decide not to remove (and how much time do I have to make that decision)?
The answer to the first question was not difficult to map. In 1990, CompuServe was sued by Cubby, Inc. for an allegedly defamatory article posted on a CompuServe forum by one of its contributors. The article was online only for a day, and CompuServe became aware of its contents only after it had been removed, when it was served with Cubby’s libel lawsuit. Since there was no dispute that CompuServe was unaware of the contents of the article when it was available online in its forum, we argued to the federal district court in New York that CompuServe was no different from any ordinary library, bookstore, or newsstand, which, under both the law of libel and the First Amendment, are not subject to civil or criminal liability for the materials they disseminate to the public if they have no knowledge of the material’s content at the time they disseminate it. The court agreed and entered summary judgment for CompuServe, finding that CompuServe had not “published” the alleged libel, which a plaintiff must prove in order to impose liability on a defendant under the common law of libel.
Four years later, a state trial court in New York reached a different conclusion in a libel lawsuit brought by Stratton Oakmont against one of CompuServe’s competitors, Prodigy Services Co., based on an allegedly defamatory statement made in one of Prodigy’s online bulletin boards. In that case, the plaintiff argued that Prodigy was different because, unlike CompuServe, Prodigy had marketed itself as using software and real-time monitors to remove material from its service that it felt was inappropriate for a “family-friendly” online service. The trial court agreed and entered a preliminary ruling that, even though there was no evidence that Prodigy was ever actually aware of the alleged libel when it was available on its service, Prodigy should nevertheless be deemed the “publisher” of the statement, because, in the court’s view, “Prodigy has uniquely arrogated to itself the role of determining what is proper for its members to post and read on its bulletin boards.”
The Stratton Oakmont v. Prodigy ruling was as dubious as it was controversial and confusing in the months after it was issued. CompuServe’s general counsel, Kent Stuckey, asked me to address it in the chapter I was writing on defamation for his new legal treatise, Internet and Online Law. Tasked with this scholarly mission in the midst of one of the digital revolution’s most heated legal controversies, I undertook to collect, organize and analyze every reported defamation case and law review commentary in this country that I could find that might bear on the two questions every online service faced: when are we liable for user content we don’t know about and when are we liable for the user content we know about but decide not to remove?
With respect to the first question, the answer dictated by the case law for other types of defendants who disseminate defamatory statements by others was fairly clear. As I wrote in my chapter, “[t]wo common principles can be derived from these cases. First, a person is subject to liability as a ‘publisher’ only if he communicates a defamatory statement to another. Second, a person communicates that statement to another if, but only if, he is aware of its content at the time he disseminates it.” Hamilton, “Defamation,” printed as Chapter 2 in Stuckey, Internet & Online Law (Law Journal-Seminars Press 1996), at 2-31 (footnotes omitted).
I concluded that the trial court had erred in Stratton Oakmont because it failed to address what the term “publish” means in the common law of libel—to “communicate” a statement to a third party. When an intermediary disseminates material with no knowledge of its content, it does not “communicate” the material it distributes, and therefore does not “publish” it, at least as that term is used in the law of libel. Thus, whether the intermediary asserts the right of “editorial control” over the content provided by others, and the degree of such control the intermediary claims to exercise, are immaterial to the precise legal question at issue: did the defendant “communicate” the statement to another? I wrote:
While it is true that a publisher’s “choice of material to go into a newspaper” constitutes “the exercise of editorial control and judgment” by that publisher, his “increased liability” for defamation arises from the knowledge of content that he inherently acquires as a result of exercising that judgment to include the material in the newspaper; it does not arise from the mere fact that he has a right to make that judgment. All distributors, like primary publishers, exercise the very same right to determine what material they will disseminate and what material they will not. . . . Indeed, the liability standard applied to a distributor presumes that he has such a right to refuse distribution and requires him to exercise it whenever he knows or has reason to know that a particular publication contains unlawful or tortious content. His efforts to exercise that right, therefore, cannot create the very same general duty to inspect content that is prohibited by that common law standard (and by the First Amendment).
Id. at 2-62 (footnotes omitted, quoting Stratton Oakmont, 23 Media L. Rep. (BNA) 1794, 1796 (N.Y. Sup. Ct. May 25, 1995).
With respect to the second question online intermediaries had for their lawyers—when are we liable for stuff posted by users we decide not to remove—the answer dictated by the common law was anything but firmly established and settled. In the pre-digital world, the economics of communicating with the public made it far more practical for aggrieved plaintiffs to sue only the producers of such content rather than those who merely distributed it. The truth is that in the pre-Internet history of the common law of libel, entities in the business of distributing the printed content of others were rarely sued, and even then only as an afterthought to defeat diversity of citizenship and thereby prevent the defendants in a state court action from removing the lawsuit to the federal courts. And in only two of those rare cases was the distributor defendant alleged to have actual knowledge of the defamatory content it was selling to the public; in both cases, the distributor defendant was eventually dismissed before the case ever went to trial.
Thus, as I noted in my chapter, I did not find a single reported case of defamation liability actually being imposed on an entity in the business of distributing to the public printed content produced by others. That meant that, prior to the enactment of Section 230, when a lawyer advised his intermediary client as to when he might be held liable for deciding not to remove users’ content, the lawyer could refer only to dicta by courts and speculation by commentators as to how courts might apply the law in that circumstance.
And there certainly was no consensus in such speculation. As I noted in my chapter, Professor Keeton observed in 1984 that "[i]t would be rather ridiculous, under most circumstances, to expect a bookseller or a library to withhold distribution of a good book because of a belief that a derogatory statement contained in the book was both false and defamatory of the plaintiff.” Prosser and Keeton on the Law of Torts, § 113, at 811 (5th ed. 1984). Indeed, do we really expect Kroger to make decisions whether to pull an issue of the National Enquirer from the shelves in every one of its grocery stores across the country because the CFO’s spouse told her at breakfast that he read in that week’s issue that a celebrity claimed one of his critics was a “liar”?
As I observed in my chapter, it is also noteworthy that in 1992, the National Conference of Commissioners on Uniform State Laws considered, but did not adopt, a standard that would immunize from republisher liability any “library, archive, or similar information retrieval or transmission service" that provides access “to information originally published by others,” if it is not “reasonably understood to assert in the normal course of its business the truthfulness of” such information or if it “takes reasonable steps to inform users” that it makes no such assertion. See Perritt, “Tort Liability, the First Amendment, and Equal Access to Electronic Networks,” 5 Harv. J.L. & Tech. 65, 108 (1992).
In 1996, Section 230 was enacted into law at the same time my research and analysis of the applicable common law standards was published as Chapter 2, “Defamation,” in Kent Stuckey’s treatise, Internet & Online Law (Law Journal-Seminars Press 1996). I then added a section to the chapter discussing Section 230 and continued to update it for a few years to discuss the initial cases applying it. Eventually, however, it became apparent that, in light of the courts’ construction of Section 230, an extensive discussion of the pre-Section 230 case law with respect to the liability of online intermediaries for user content was no longer needed, and my chapter in the treatise was replaced with one that focuses instead on the cases applying Section 230.
Kent Stuckey’s treatise is still being updated and is available for purchase from Law Journal Press. The chapter I wrote, however, which details all of the reported case law and commentary I found that might bear on the potential liability of online intermediaries for defamation under the common law at that time, before Section 230 was enacted, has not been made available to the public for many years. In light of the renewed interest in this topic as part of the current debates about Section 230’s future, that chapter is being made available online, with permission, here.
Robert W. Hamilton is Of Counsel at Jones Day. He has more than 36 years of experience in state, federal, and bankruptcy court litigation and in First Amendment and media cases. Bob represented CompuServe in Cubby v. CompuServe in 1990-1991. The views and opinions set forth herein are the personal views or opinions of the author; they do not necessarily reflect views or opinions of the law firm with which he is associated.
I found two reported cases in which a court imposed liability on a property owner for defamatory graffiti on an interior wall in his building, and three extremely old cases in which a property owner was held responsible for refusing to remove a defamatory statement displayed on his property. In none of those cases was the defendant in the business of distributing to the public printed speech produced by others.
FTC Commissioners Are Upset About Section 230; Though It's Not At All Clear Why
from the guys,-really? dept
by Mike Masnick - August 7th @ 10:44am
Another day, another bunch of nonsense about Section 230 of the Communications Decency Act. The Senate Commerce Committee held an FTC oversight hearing yesterday, with all five commissioners attending via video conference (kudos to Commissioner Rebecca Slaughter who attended with her baby strapped to her -- setting a great example for so many working parents who are struggling with working from home while also having to manage childcare duties!). Section 230 came up a few times, though I'm perplexed as to why.
Senator Thune, who sponsored the problematic PACT Act that would remove Section 230 immunity for civil actions brought by the federal government, asked FTC Chair Joe Simons a leading question that was basically "wouldn't the PACT Act be great?" and Simons responded, oddly, that 230 was somehow blocking the agency's enforcement actions (which is just not true).
Senator Thune: Chairman Simons, as you know, reforming Section 230 of the Communications Decency Act has been hotly debated here in Congress. Section 230 is the law that prevents social media platforms, like Facebook and Twitter, from being sued for content that users post on their platforms. I've introduced a bi-partisan bill with Senator Schatz on this issue, known as the Platform Accountability and Consumer Transparency Act (the PACT Act), which among other things would stipulate that the immunity provided by Section 230 does not apply to civil enforcement actions brought by the federal government. The DOJ recommended this particular provision in its recently published list of recommendations for reforming Section 230. My question is how would consumers benefit from reforming Section 230 to ensure that the immunity provided by Section 230 does not apply to civil enforcement actions brought by the federal government, such as the FTC.
Simons: Thank you Senator. So we have a number of instances... it's actually fairly common for us to go into court and have a defense put on us relating to Section 230. So, it would be very helpful for us to avoid having to deal with that and allow us the ability to go not only after the platform participants, but, in the right circumstances, the platform itself.
There are so many issues with this. First, he doesn't actually answer the question. Thune asked him how it would benefit consumers, but Simons answered how it would benefit the FTC. While the FTC might like to argue otherwise, those two things are not the same. Second, what a nonsense question and answer. The point of Section 230 is to protect platforms from being held liable for actions of their users -- so why would it make sense for the FTC to ever go after the platform in those cases? Third, it's difficult to think of any case where (contrary to what Simons claims...) Section 230 ever got in the way of an FTC enforcement action. Indeed, back in 2016 we had a story showing the exact opposite. The 2nd Circuit appeals court more or less said that the FTC gets to ignore Section 230. We found that problematic at the time, but Simons (and Thune) seem to think they just need more of that.
Meanwhile, it's not clear there's a real split among Commissioners. Simons, the chair, is a Republican. Commissioner Rohit Chopra, a Democrat, was also asked about Section 230 and gave a similarly bizarre answer. This came in response to questions from Senator Wicker (who went on a bizarrely uninformed anti-Section 230 floor rant earlier this week). Wicker first asked Simons whether the FTC had a role in enforcing Section 230, and about doing anything in regard to the President's executive order on 230. Unlike FCC chair Ajit Pai, who caved to the President's unconstitutional order and started an inquiry, Simons at least pointed out that (despite what the executive order says about the FTC) he sees no role for his agency:
Wicker: Let's talk about the FTC's role in overseeing the enforcement of Section 230 of the Communications Decency Act, and in particular, President Trump's Executive Order in May, on preventing online censorship. Specifically, section four of this order calls on the FTC to take action against online platforms that restrict speech in a manner inconsistent with their terms of service. What is your view, Mr. Chairman, on the FTC's responsibilities under the executive order? And have you seen any examples of the behavior described in the order and taken any action under your authority so far?
Simons: Thank you, Mr. Chairman. We haven't taken any action according to the executive order. We get complaints from a wide variety of sources. From the public, from Congress, from competitors, from people in industry, from consumer watchdogs. And it's very important that we get those complaints and we pay attention to them. Lots of complaints have come from members of this Committee, and we're very thankful to them that you provide us with such thoughtful complaints.
We're an independent agency so we review all of them independently. We have jurisdiction over commercial speech -- particularly on deceptive and unfair and then some other statutes. So we look to see whether the complaints are subject to unfairness... or whether they're within our authority as I described. Our authority focuses on commercial speech, not political content curation.
If we see complaints that are not within our jurisdiction, then we don’t do anything. If we see complaints that are, we take a closer look, and figure out whether there's a violation. And then we determine whether it's appropriate for us to act.
Wicker: So you don't view political speech as within your jurisdiction?
Simons: Correct.
Wicker: So if the public and members of the Senate are concerned about online platforms like Twitter and Facebook being inconsistent in the way they restrict political speech, you do not view that as within the purview of your statutory responsibilities. And therefore, the executive order does not instruct you in that specific area? Is that correct?
Simons: Yes. For political content curation. Yes.
This line of questioning was already silly enough, but at least, unlike Pai, Simons was willing to say "hey, that's outside of our jurisdiction." Still, Wicker's questions are silly in their own way. There's no legal requirement that platforms treat different political speech equally. And it would violate the 1st Amendment if the law demanded it.
From there, though, Wicker moves on to one of the Democratic Commissioners, Chopra, who doesn't seem to like 230 either.
Chopra: Putting aside the executive order, the issue of Section 230 is one where... of great concern, and I think there's growing bipartisan consensus that it has been abused. We see, whether it comes to counterfeit and defective goods, and the unlevel playing field between online platforms and brick and mortar stores. And in general, I think the scrutiny is warranted when it comes to technology platforms abusing any liabilities and public privileges, and using that as regulatory arbitrage.
I think many of these platforms do have too much power to dictate certain policies and regulations, and I don't want to see them continue, in my view, to overuse and abuse the legal immunities that Congress has provided, and I think we need to take a hard look at that. Particularly when it comes to the use of surveillance-based behavioral advertising. I think that business model is inconsistent with the origins of Section 230. Section 230 is supposed to safeguard and promote speech. It's not supposed to prioritize certain types of things over others based on what makes those companies more money.
This is also wrong and misguided on many points. First of all, you don't "abuse" an immunity granted by Congress when you use it as intended -- which here means avoiding liability for third-party content and making content moderation decisions. Second, regarding counterfeit or defective goods: counterfeit goods are generally a trademark issue, which is entirely exempt from Section 230. You'd think that Chopra would know this. Indeed, there was a huge lawsuit regarding eBay and counterfeit goods that I'm sure he does know about -- one which shows that the issue is not a Section 230 one.
Also, every major platform already has a massive operation trying to fight counterfeit and defective goods -- totally unrelated to Section 230. They do so because they want their consumers to be happy.
Third, there is no "unlevel playing field." Section 230 protects all websites -- including those of "brick and mortar stores." So it's a weird comparison to make.
Finally, as we were just discussing, it's unclear what behavioral advertising has to do with 230. Section 230 is unrelated to business models -- and having an advertising-based business model is not "inconsistent with the origins of Section 230." Section 230 has allowed a wide variety of platforms to exist, many of which have relied on its protections to enable much broader consumer speech.
Once again, it would be nice if someone in our government actually understood the law before commenting on it. Unfortunately, it appears we're not getting that from the FTC.
Daily Deal: The Complete 2020 Learn Linux Bundle
from the good-deals-on-cool-stuff dept
by Daily Deal - August 7th @ 10:39am
The Complete 2020 Learn Linux Bundle has 12 courses to help you learn Linux OS concepts and processes. You'll start with an introduction to Linux and progress to more advanced topics like shell scripting, data encryption, supporting virtual machines, and more. Other courses cover Red Hat Enterprise Linux 8 (RHEL 8), virtualizing Linux OS using Docker, AWS, and Azure, how to build and manage an enterprise Linux infrastructure, and much more. It's on sale for $69.
Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
Trump Issues Ridiculous Executive Orders Banning TikTok And WeChat
from the that's-not-how-any-of-this-works dept
by Mike Masnick - August 7th @ 9:53am
While he had said he would do it last weekend, and then said he'd wait until September 15th (but that he wanted a finder's fee if TikTok was sold to Microsoft), last night the Trump White House issued two executive orders regarding apps from Chinese companies. The first one claims it's banning TikTok and the second one says it's banning WeChat (which isn't even that popular in the US, though it is hugely popular in China). He separately sent a letter to Congress about the TikTok ban.
As many people expected, Trump is trying to use the IEEPA, the same (already questionable) authority he used to put tariffs on a bunch of products from China at the beginning of his nonsense trade war. That authority is only supposed to be used in cases of unusual or extraordinary threats -- and let's be totally blunt: TikTok is not an unusual or extraordinary threat to anything beyond President Trump's massive ego.
Of course, among the many problems with this, the IEEPA includes exceptions and a big one is that it does not apply to "any information or informational materials." And I'd argue (and I imagine TikTok's lawyers will argue) that means he can't ban apps via an executive order under the IEEPA. But, of course, that will have to be fought out in court (and might become moot if ByteDance finalizes its sale to an American company).
Still, this is all nonsense:
I, DONALD J. TRUMP, President of the United States of America, find that additional steps must be taken to deal with the national emergency with respect to the information and communications technology and services supply chain declared in Executive Order 13873 of May 15, 2019 (Securing the Information and Communications Technology and Services Supply Chain). Specifically, the spread in the United States of mobile applications developed and owned by companies in the People’s Republic of China (China) continues to threaten the national security, foreign policy, and economy of the United States. At this time, action must be taken to address the threat posed by one mobile application in particular, TikTok.
TikTok, a video-sharing mobile application owned by the Chinese company ByteDance Ltd., has reportedly been downloaded over 175 million times in the United States and over one billion times globally. TikTok automatically captures vast swaths of information from its users, including Internet and other network activity information such as location data and browsing and search histories. This data collection threatens to allow the Chinese Communist Party access to Americans’ personal and proprietary information — potentially allowing China to track the locations of Federal employees and contractors, build dossiers of personal information for blackmail, and conduct corporate espionage.
Again, tons of apps suck up this same information -- and most of our devices are built in China. The idea that TikTok is being used for "blackmail" or "corporate espionage" is beyond ludicrous. And, yeah, fine, federal employees perhaps shouldn't have an app like TikTok on their phone, but just tell federal employees not to use it. Don't ban it from the US.
And then there's this:
TikTok also reportedly censors content that the Chinese Communist Party deems politically sensitive, such as content concerning protests in Hong Kong and China’s treatment of Uyghurs and other Muslim minorities. This mobile application may also be used for disinformation campaigns that benefit the Chinese Communist Party, such as when TikTok videos spread debunked conspiracy theories about the origins of the 2019 Novel Coronavirus.
So... because TikTok moderates some content, the US will censor all the content? How does that make any sense at all?
In terms of what the actual order does, it bars any US person or company from conducting any transaction with TikTok's parent company ByteDance, or any subsidiary. That would likely mean that neither Apple nor Google could carry the app in their mobile app stores, which would effectively bar it from the US (and potentially large parts of the rest of the world, as well).
The WeChat order is effectively the same, barring transactions with WeChat's parent company Tencent Holdings "or any subsidiary of that entity." Of course, that raised a bunch of alarm bells among gamers. Tencent owns or has invested in basically every gaming company out there, and fully owns Riot Games, makers of League of Legends. It also holds a 40% stake in Epic Games.
Honestly, it sounds like, in typical Trumpland fashion, no one in the White House recognized this. So after people started to freak out, they clarified that they didn't mean to include the gaming companies:
Video game companies owned by Tencent will NOT be affected by this executive order!
White House official confirmed to the LA Times that the EO only blocks transactions related to WeChat
So Riot Games (League of Legends), Epic Games (Fortnite), et al are safe
(pending updates)
— Sam Dean 🦅 (@SamAugustDean) August 7, 2020
Of course... that could change. And who knows? A plain reading of the order certainly should bar League of Legends as well. And, of course, part of this demonstrates the absolute ridiculousness of these "bans" in the first place: lots of apps (and even more devices and equipment) come from China. Doing a blanket ban is idiotic and short-sighted. Not only will it not work, not only will it deprive people of useful technology, but it justifies the Chinese approach of splintering and fragmenting the internet and censoring foreign companies. It's an extremely dangerous game the President is playing here, with a poorly-aimed sledgehammer.
And that doesn't get into the fact that just banning WeChat alone will be devastating for tons of American citizens and residents with family in China. While WeChat may not be particularly popular within the US, its popularity in China means that many relatives in the US rely on WeChat to communicate with family. Cutting them off is, again, embracing the Chinese Great Firewall approach.
Again, I don't believe that the President actually has this authority, even under his already extremely broad interpretation of the IEEPA. That means there will almost certainly be multiple lawsuits over all this, and (I'm guessing) a request for a temporary restraining order to block those orders from going into effect. But still, if on Wednesday night the Trump White House made it clear it wanted to splinter the internet, last night it started to carry out that goal in practice.
Congress To Consider National Right To Repair Law For First Time
from the baby-steps dept
by Karl Bode - August 7th @ 6:33am
About five years ago, frustration at John Deere's draconian tractor DRM culminated in a grassroots "right to repair" movement. The company's crackdown on "unauthorized repairs" turned countless ordinary citizens into technology policy activists, after DRM and the company's EULA prohibited the lion's share of repair or modification of tractors customers thought they owned. These restrictions only worked to drive up costs for owners, who faced either paying significantly more money for "authorized" repair, or toying around with pirated firmware just to ensure the products they owned actually worked.
Since then, the right to repair movement has expanded dramatically, with a heavy focus on companies like Apple, Microsoft, and Sony, whose attempts to monopolize repair drive up consumer costs and result in greater waste.
It has also extended into the medical arena, where device manufacturers enjoy a monopoly on tools, documentation, and replacement parts, making it a nightmare to get many pieces of medical equipment repaired. That has, unsurprisingly, become even more of a problem during the COVID-19 pandemic due to mass hospitalizations and resource constraints, with medical professionals being forced to use grey market parts or DIY parts just to get ventilators to work.
Hoping to give the movement a shot of adrenaline, Senator Ron Wyden and Representative Yvette D. Clarke have introduced the Critical Medical Infrastructure Right-to-Repair Act of 2020 (pdf), which would exempt medical equipment owners and "servicers" from liability for copying service materials or breaking DRM, provided it was done to aid the COVID-19 response. The legislation also pre-empts any agreements between hospitals and equipment manufacturers preventing hospital employees from working on their own equipment, something that's also become more of a problem during the pandemic.
From a Wyden statement:
"There is no excuse for leaving hospitals and patients stranded without necessary equipment during the most widespread pandemic to hit the U.S. in 100 years,” Wyden said. “It is just common sense to say that qualified technicians should be allowed to make emergency repairs or do preventative maintenance, and not have their hands tied by overly restrictive contracts and copyright laws, until this crisis is over."
While numerous states have attempted to pass right to repair legislation, none have succeeded so far, in large part because companies like Apple have lobbied extensively to thwart them, (falsely) claiming that letting customers and independent repair merchants fix devices (usually for far less money) would be a privacy and security nightmare. In Nebraska, Apple even tried to claim that such legislation would turn the state into a mecca for hackers (sounds pretty cool to me, but what do I know). Apple has also spent years bullying a small repair shop owner in Norway for using refurbished Apple parts to fix devices.
This is the first time such legislation will be proposed on the federal level. As such, likely seeing it as a gateway to broader legislation, companies like Apple, Microsoft, Sony, and John Deere will now likely do their best to (quietly) kill it, despite the positive impact it could have during a pandemic.
Appeals Court Upholds Ruling Saying PACER Overcharged Users
from the refund-form-available-for-$0.10/page dept
by Tim Cushing - August 7th @ 3:31am
A lawsuit against PACER for its long list of wrongs may finally pay off for the many, many people who've subjected themselves to its many indignities. The interface looks and runs like a personal Geocities page and those who manage to navigate it successfully are on the hook for pretty much every page it generates, including $0.10/page for search results that may not actually give users what they're looking for.
Everything else is $0.10/page too, including filings, orders, and the dockets themselves, with fees capped at $3.00 per document once it runs past 30 pages. For the most part, using PACER is like using a library's copier. Infinite copies can be "run off" at PACER at almost no expense, but the system charges users as though they're burning up toner and paper.
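For the curious, the per-document math is simple enough to sketch. This is just an illustration of the rates described above (the function name is mine; $0.10/page and the $3.00 cap come from PACER's published fee schedule, and some document types, like transcripts, aren't capped):

```python
def pacer_doc_fee(pages, per_page=0.10, cap=3.00):
    """Fee for one PACER document: $0.10/page, capped at $3.00
    (the cap kicks in once a document passes 30 pages)."""
    return min(pages * per_page, cap)

# A 12-page filing costs $1.20; a 150-page docket is capped at $3.00.
```

The cap is what makes the library-copier comparison sting: whether a docket is 31 pages or 500, the marginal cost to the court system of serving it is effectively zero, yet the user pays as if pages were being physically reproduced.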
Back in 2016, the National Veterans Legal Services Program, along with the National Consumer Law Center and the Alliance for Justice, sued the court system over PACER's fees. The plaintiffs argued PACER's collection and use of fees broke the law governing PACER, which said only "reasonable" fees could be collected to offset the cost of upkeep. Instead, the US court system was using PACER as a piggy bank, spending money on flat screen TVs for jurors and other courtroom upkeep items, rather than dumping the money back into making PACER better, more accessible, and cheaper.
A year later, a federal judge said the case could move forward as a class action representing everyone who believed they'd been overcharged for access. In 2018, the court handed down a decision ruling that PACER was illegally using at least some of the collected fees. The case then took a trip to the Federal Circuit Court of Appeals with both adversarial parties challenging parts of the district court's ruling.
The Appeals Court has come down on the side of PACER users. Here's Josh Gerstein's summary of the decision for Politico:
The U.S. Court of Appeals for the Federal Circuit upheld a district court judge’s ruling in 2018 that court officials inflated fees for the Public Access to Court Electronic Records or PACER system by including costs for flat-screen courtroom televisions, electronic alerts to victims and police, as well as computer systems to manage jurors.
The three-judge appeals court panel unanimously ruled that Congress gave the federal courts permission to charge for systems that improve public access to court files, but did not create a technology slush fund that could be used to subsidize almost any purchase of electronics by the federal judiciary.
This potentially means millions of dollars of refunds will be headed back to users once all the details are sorted out. The suit only covers fees collected from 2010 to 2016 (the initiation of the lawsuit) and the Appeals Court -- while not thrilled a paywall continues to sit between citizens and access to court documents -- will send this back to the lower court for a closer examination of PACER's actual expenses.
The decision [PDF] says both parties are reading the law wrong, but the government is reading it wrongest. There is no obligation for PACER to charge fees. The flip side of that is there is also no obligation for the US court system to provide a free service either.
Whereas the judiciary previously was required to charge fees for electronic access to court information, after the 2002 amendment it could choose whether to do so. The language “only to the extent necessary” certainly suggests that Congress sought to encourage the judiciary to limit its imposition of such fees—since otherwise the amendment could have simply swapped “shall” for “may.” But, as we continue to stress, the text lacks a clear object or purpose of the supposed limitation (“only to the extent necessary” to what?) and we are unwilling to supply one of our own—or one of plaintiffs’—making. If Congress had intended to limit fees only to the extent necessary to reimburse expenses incurred in providing access to PACER, it would have said so more clearly. We can give full effect to the 2002 amendment by reading it as removing the electronic access fee obligation and encouraging the judiciary to rein in fees—without imparting any specific limitation on the fee-setting.
At this point, PACER can still charge users for access. Those fees may be reduced in the future, but for now nothing is changed. The money it collected in the past, however, may be headed back to PACER users. That's good news. Hopefully this decision is another step down the road to the removal of the paywall standing between citizens and the court documents their tax dollars have already paid for.