Stories from Monday, August 17th, 2020
from the let-that-be-a-lesson-to-you-all dept
by Glyn Moody - August 17th @ 7:54pm
The disruption caused by COVID-19 has touched most aspects of daily life. Education is obviously no exception, as the heated debates about whether students should return to school demonstrate. But another tricky issue is how school exams should be conducted. Back in May, Techdirt wrote about one approach: online testing, which brings with it its own challenges. Where online testing is not an option, other ways of evaluating students at key points in their educational career need to be found. In the UK, the key test is the GCE Advanced level, or A-level for short, taken in the year when students turn 18. Its grades are crucially important because they form the basis on which most university places are awarded in the UK.
Since it was not possible to hold the exams as usual, and online testing was not an option either, the body responsible for running exams in the UK, Ofqual, turned to technology. It came up with an algorithm that could be used to predict a student's grades. The results of this high-tech approach have just been announced in England (other parts of the UK run their exams independently). It has not gone well. Large numbers of students have had their expected grades, as predicted by their teachers, downgraded, sometimes substantially. An analysis from one of the main UK educational associations has found that the downgrading is systematic: "the grades awarded to students this year were lower in all 41 subjects than they were for the average of the previous three years."
Even worse, the downgrading turns out to have affected students in poorly performing schools, typically in socially deprived areas, the most, while schools that have historically done well, often in affluent areas, or privately funded, saw their students' grades improve over teachers' predictions. In other words, the algorithm perpetuates inequality, making it harder for brilliant students in poor schools or from deprived backgrounds to go to top universities. A detailed mathematical analysis by Tom SF Haines explains how this fiasco came about:
Let's start with the model used by Ofqual to predict grades (p85 onwards of their 319 page report). Each school submits a list of their students from worst student to best student (it included teacher suggested grades, but they threw those away for larger cohorts). Ofqual then takes the distribution of grades from the previous year, applies a little magic to update them for 2020, and just assigns the students to the grades in rank order. If Ofqual predicts that 40% of the school is getting an A [the top grade] then that's exactly what happens, irrespective of what the teachers thought they were going to get. If Ofqual predicts that 3 students are going to get a U [the bottom grade] then you better hope you're not one of the three lowest rated students.
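For illustration only, here is a minimal sketch of the rank-order mechanism Haines describes (all names and numbers below are invented; Ofqual's actual model, from p85 of its report, includes adjustments this sketch omits):

```python
# Hypothetical sketch of the rank-order assignment described above. The
# grade labels, predicted distribution, and student ranking are invented
# examples, not Ofqual's real inputs.

def assign_grades(ranked_students, predicted_distribution):
    """Assign grades purely by rank against a predicted grade distribution.

    ranked_students: names ordered worst to best, as schools submitted them.
    predicted_distribution: dict of grade -> fraction of the cohort,
        listed worst grade first, fractions summing to 1.0.
    """
    n = len(ranked_students)
    grades = {}
    index = 0
    for grade, fraction in predicted_distribution.items():
        count = round(fraction * n)
        for student in ranked_students[index:index + count]:
            grades[student] = grade
        index += count
    # Any rounding leftovers land in the top (last-listed) grade.
    top_grade = list(predicted_distribution)[-1]
    for student in ranked_students[index:]:
        grades[student] = top_grade
    return grades

# Ten students ranked worst to best; the model predicts 30% U, 40% B, 30% A.
cohort = [f"student_{i}" for i in range(10)]
result = assign_grades(cohort, {"U": 0.3, "B": 0.4, "A": 0.3})
```

Note that the teachers' judgments never enter the calculation: the three lowest-ranked students receive a U no matter what grades they were predicted, which is exactly the rigidity Haines criticizes.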
As this makes clear, the inflexibility of the approach guarantees that there will be many cases of injustice, where bright and hard-working students will be given poor grades simply because they were lower down in the class ranking, or because the school did badly the previous year. Twitter and UK newspapers are currently full of stories of young people whose hopes have been dashed by this effect, as they have now lost the places they had been offered at university, because of these poorer-than-expected grades. The problem is so serious, and the anger expressed by parents of all political affiliations so palpable, that the UK government has been forced to scrap Ofqual's algorithmic approach completely, and will now use the teachers' predicted grades in England. Exactly the same happened in Scotland, which also applied a flawed algorithm, and caused similarly huge anguish to thousands of students, before dropping the idea.
The idea of writing algorithms to solve this complex problem is not necessarily wrong. Other solutions -- like using grades predicted by teachers -- have their own issues, including bias and grade inflation. The problems in England arose because people did not think through the real-life consequences for individual students of the algorithm's abstract rules -- even though they were warned of the model's flaws. Haines offers some useful, practical advice on how it should have been done:
The problem is with management: they should have asked for help. Faced with a problem this complex and this important they needed to bring in external checkers. They needed to publish the approach months ago, so it could be widely read and mistakes found. While the fact they published the algorithm at all is to be commended (if possibly a legal requirement due to the GDPR right to an explanation), they didn't go anywhere near far enough. Publishing their implementations of the models used would have allowed even greater scrutiny, including bug hunting.
As Haines points out, last year the UK's Alan Turing Institute published an excellent guide to implementing and using AI ethically and safely (pdf). At its heart lie the FAST Track Principles: fairness, accountability, sustainability and transparency. The fact that Ofqual evidently didn't think to apply them to its exam algorithm means it only gets a U grade for its work on this problem. Must try harder.
Confused Critic Of Section 230 Now In Charge Of NTIA
from the well-that's-unfortunate dept
by Mike Masnick - August 17th @ 3:35pm
Multiple experts on Section 230 have pointed out that the NTIA's bizarre petition to the FCC to reinterpret Section 230 of the Communications Decency Act is complete nonsense. Professor Eric Goldman's analysis is quite thorough in ripping the petition to shreds.
Normally we expect a government agency like NTIA to provide an intellectually honest assessment of the pros/cons of its actions and not engage in brazen partisan advocacy. Not any more. This petition reads like an appellate brief that would get a C- in a 1L legal writing course. It demonstrated a poor understanding of the facts, the law, and the policy considerations; and it ignored obvious counterarguments. The petition is not designed to advance the interests of America; it is designed to burn it all down.
As we mentioned, it seemed likely that the petition was written by Adam Candeub, a lawyer who was only hired a few months ago by the NTIA. Readers may recognize Candeub's name because he represented the white nationalist Jared Taylor in a failed lawsuit against Twitter for kicking him off the platform. At the time of the lawsuit, I engaged in an email discussion with Candeub in which he tried to justify his lawsuit, and it included the same sort of nonsense and debunked legal theories we now see in the NTIA petition. In that email exchange, he told me that "Section 230 doesn't help Twitter" because "if an internet firm starts to edit or curate others' comments -- creating its own content, it loses this immunity." That, of course, is incorrect.
Indeed, the California courts agreed with me (and basically every other court) in ruling that Section 230 protected Twitter's decision to remove Candeub's client.
Of course, in the past couple of years since all of that went down, Candeub has continued his quixotic quest to reimagine Section 230 to say what he wants it to say, rather than what the plain language of the law, and basically every court on record (and the authors of the law) have said that it actually says.
And now he'll get to do that as the guy in charge of NTIA. Axios is reporting that Candeub has been promoted to become the acting head of NTIA. Given Candeub's activism on this issue it's an odd role for him and, as has happened so often in this particular administration, a destruction of historical norms. NTIA has historically been extremely balanced and has avoided taking directly political advocacy positions. It certainly appears that it will take a different approach under Candeub, one that includes a blatant misrepresentation of key laws about the internet. Historically, NTIA has been an important agency in protecting the open internet -- but now it should be seen as hostile to that open internet. And that's disappointing for its legacy.
from the national-security-meets-data-security dept
by Tim Cushing - August 17th @ 1:40pm
Google's on-again, off-again relationship with China is off again. A decade ago, Google threatened to pull out of China because the government demanded a censored search engine. Fast forward to 2018 and it was Google offering to build a censored search engine for the China market. A few months later -- following heavy internal and external criticism -- Google abandoned the project.
China is now imposing its will on Hong Kong in violation of the agreement it made when the UK returned control of the region to the Chinese government. Its latest effort to stifle long-running pro-democracy demonstrations took the form of a "national security" law which was ratified by the far-too-obsequious Hong Kong government. The law equates advocating for a more independent Hong Kong with sedition and terrorism, allowing authorities to punish demonstrators and dissidents with life sentences for, apparently, fighting back against a government that agreed it wouldn't impose its will on Hong Kong and its residents.
For years, Google has refused to honor data requests from the Chinese government. Following this latest attack on Hong Kong autonomy, it appears Google now feels the region is indistinguishable from China.
Google will stop responding directly to data requests from Hong Kong authorities, according to a person familiar with the matter, treating the territory effectively the same as mainland China in such transactions.
The move comes in the wake of Beijing’s imposition of a broad national security law that targets vaguely defined crimes including subversion of state power, collusion with foreign powers, secession and terrorism.
The new law has received criticism from pretty much every country that doesn't wish it had thought of it first. The Chinese government is finding itself without many useful allies following this transparent attempt to silence criticism under the always useful "national security" banner. Multiple countries have offered asylum to pro-democracy activists and a number have suspended extradition treaties to prevent the Chinese government from dragging fleeing dissidents back for prosecution.
Very few governments are willing to help the Chinese government punish Hong Kong residents for wanting something better than an oppressive regime in power. Foreign tech companies shouldn't be willing to pick up the slack. Fortunately, Google -- despite its earlier assistance offers to the censorious government -- has decided to do the right thing and tell the Hong Kong government it won't be aiding and abetting its oppression efforts.
Google spokesman Aaron Stein said the company has “not produced data in response to new requests from Hong Kong authorities” since the security law was enacted and that “remains the case.”
The only autonomy China recognizes is its own. By violating its agreement with Hong Kong -- and promising to retaliate against sanctions and other efforts put in place by other countries in response to this new law -- China has made it clear it will go it alone to achieve its ends.
It Doesn't Make Sense To Treat Ads The Same As User Generated Content
from the cleaning-up-the-'net dept
by John Bergmayer - August 17th @ 12:05pm
Paid advertising content should not be covered by Section 230 of the Communications Decency Act. Online platforms should have the same legal risk for ads they run as print publishers do. This is a reform that I think supporters of Section 230 should support, in order to save it.
Before I explain why I support this idea, I want to make sure I'm clear as to what the idea is. I am not proposing that platforms be liable for content they run ads next to -- just for the ads themselves. Nor am I proposing that the liability lies in the "tools" they provide that can be used for unlawful purposes; that's a different argument. This is not about liability for providing a printing press, but for specific uses of the printing press -- that is, publication.
I also don't suggest that platforms should lose Section 230 entirely if they run ads at all, or some subset of ads like targeted ads -- this is not a service-wide on/off switch. The liability would just be normal, common-law liability for the content of the ads themselves. And "ads" just means regular old ads, not all content that a platform commercially benefits from.
It's fair to wonder whom this applies to. Many of the examples listed below have to do with Facebook selling ads that are displaying on Facebook, or Google placing ads on Google properties, and it's pretty obvious that these companies would be the ones facing increased legal exposure under this proposal. But the internet advertising ecosystem is fiendishly complex, and there are often many intermediaries between the advertiser itself, and the proprietor of the site the ad is displayed on.
So at the outset, I would say that any and all of them could be potentially liable. If Section 230 doesn't apply to ads, it doesn't apply to supplying ads to others; in fact, these intermediary functions are considered a form of "publishing" under the common law. Which party to sue would be the plaintiff's choice, and there are existing legal doctrines that prevent double recovery, and to allow one losing defendant to bring in, or recover from, other responsible parties.
It's important to note, too, that this is not strict or vicarious liability. In any given case, it could be that the advertiser is found liable for defamation or some kind of fraud while the platform isn't, because the elements of the tort are met for one and not the other. Whether a given actor has the "scienter" or knowledge necessary to be liable for some offense has to be determined for each party separately -- you can't impute the state of mind of one party onto another, and strict liability torts for speech offenses are, in fact, unconstitutional.
The Origins Of An Idea
I first started thinking about this idea in the context of monetized content. After a certain dollar threshold is reached with monetized content, there should be liability for that, too, since the idea that YouTube can pay thousands of dollars a month to someone for their content but then have a legal shield for it simply doesn't make sense. The relationship of YouTube to a high-paid YouTuber is more similar to that between Netflix and a show producer than to that between YouTube and your average YouTuber, whose content is unlikely to have been reviewed by a live, human YouTube official. But monetized content is a marginal issue; very little of it is actionable, and frankly the most detestable internet figures don't seem to depend on it very much.
But the same logic runs the other way, to when the content creator is paying a platform for publishing and distribution, instead of the platform paying the content creator. And I think eliminating 230 for ads would solve some real problems, while making some less-workable reform proposals unnecessary.
Question zero should be: Why are ads covered by Section 230 to begin with? There are good policy justifications for Section 230 -- it makes it easier to run sites with a lot of user posts that don't need extensive vetting, and it gives sites a free hand in moderation. Great. It's hard to see what that has to do with ads, where there is a business relationship. Businesses should generally have some sense of whom they do business with, and it doesn't seem unreasonable to expect a platform to do quite a bit more screening of ads before it runs them than of tweets or vacation updates from users before it hosts them. In fact, I know that's not an unreasonable expectation, because major platforms such as Google and Facebook already subject ads to heightened screening.
I know I'm arguing against the status quo, so I have the burden of persuasion. But in a vacuum, the baseline should be that ads don't get a special liability shield, just as in a vacuum, platforms in general don't get a liability shield. The baseline is normal common law liability and deviations from this are what have to be justified.
I'm Aware That Much "Harmful" Content is not Unlawful
A lot of Section 230 reform ideas either miss the mark or are incompletely theorized, since, of course, much -- maybe even most -- harmful online content is not unlawful. If you sued a platform over it, without 230, you'd still lose, but it would take longer.
You could easily counter that the threat of liability would cause platforms to invest more in content moderation overall. While I think that is likely true, it is also likely that such investments could lead to over-moderation that limits free expression by speakers who are considered even mildly controversial.
But with ads, there is a difference. Much speech that would be lawful in the normal case -- say, hate speech -- can be unlawful when it comes to housing and employment advertisements. Advertisements carry more restrictions and regulations in any number of ways. Finally, ads can be tortious in the standard ways as well: they can be fraudulent, defamatory, and so on. This is true of normal posts as well -- but with ads, there's a greater opportunity, and I would argue obligation, to pre-screen them.
Many Advertisements Perpetuate Harm
Scam ads are a problem online. Google recently ran ads for scam fishing licenses, despite being told about the problem. People looking for health care information are being sent to lookalike sites instead of the site for the ACA. Facebook has even run ads for low-quality counterfeits and fake concert tickets. Search for a locksmith through the wrong ad and you might as well set your money on fire and batter down your door. Ads trick seniors into sinking their savings into precious metals. Fake customer support lines steal people's information -- and money. Malware is distributed through ads. Troublingly, internet users in need of real help are misdirected to fake "rehab clinics" or pregnancy "crisis centers" through ads.
Examples of this kind are endless. Often, there is no way to track down the original fraudster. Currently, Section 230 allows platforms to escape most legal repercussions for enabling scams of this kind, while allowing the platforms to keep the revenue earned from spreading them.
There are many more examples of harm, but the last category I'll talk about is discrimination, specifically through housing and employment discrimination. Such ads might be unlawful in terms of what they say, or even to whom they are shown. Putting racially discriminatory text in a job or housing ad can be discriminatory, and choosing to show a facially neutral ad to just certain racial groups could be, as well. (There are tough questions to answer -- surely buying employment ads on sites likely to be read by certain racial groups is not necessarily unlawful -- but, in the shadow of Section 230, there's really no way to know how to answer these questions.)
In many cases under current law, there may be a path to liability in the case of racially discriminatory ads, or other harmful ads. Maybe you have a Roommates-style fact pattern where the platform is the co-creator of the unlawful content to begin with. Maybe you have a HomeAway fact pattern where you can attach liability to non-publisher activity that is related to user posts, such as transaction processing. Maybe you can find that providing tools that are prone to abuse is itself a violation of some duty of care, without attributing any responsibility for any particular act of misuse. All true, but each of these approaches only addresses a subset of harms and frankly seem to require some mental gymnastics and above-average lawyering. I don't want to dissuade people from taking these approaches, if warranted, but they don't seem like the best policy overall. By contrast, removing a liability shield from a category of content where there is a business relationship and a clear opportunity to review content prior to publication would incentivize platforms to more vigorously review.
A Cleaner Way to Enforce Anti-Discrimination Law and Broadly Police Harm
It's common for good faith reformers to propose simply exempting civil rights or other areas of law from Section 230, preventing platforms from claiming Section 230 as a defense in any civil rights lawsuit, much as federal criminal law is already exempted.
The problem is that there is no end of good things that we'd like platforms to do more of. The EARN IT Act proposes to create more liability for platforms to address real harms, and SESTA/FOSTA likewise exempts certain categories of content. There are problems with this approach in terms of how you define what platforms should do, and what content is exempted, and issues of over-moderation in response to fears of liability. This approach threatens to make Section 230 a Swiss cheese statute where whether it applies to a given post requires a detailed legal analysis, which has other significant harms and consequences.
Another common proposal is to exempt "political" ads from Section 230, or targeted ads in general (or to somehow tackle targeting in some non-230 way). There are just so many line-drawing problems here, making enforcement extremely difficult. How, exactly, do you define "targeted"? How, by looking at an ad, can you tell whether it is targeted, contextual, or just part of some broad display campaign? With political ads, how do you define what counts? Ads from or about campaigns are only a subset of political ads -- is an ad about climate change "political"? Is an ad from an energy company touting its green record? In the broadest sense yes, but it's hard to see how you'd legislate around this topic.
Under the proposal to exempt ads from Section 230, the primary question to answer is not what the content is addressed to and what harms it may cause, but simply whether it is an ad. Ads are typically labeled as such and quite distinct -- and it may be the case that there need to be stronger ad disclosure requirements and penalties for running ads without disclosure. There may be other issues around boundary-drawing as well -- I perfectly well understand that one of the perceived strengths of Section 230 is its simplicity, relative to complex and limited liability shields like Section 512 of the DMCA. Yet I think these problems are tractable.
Protection for Small Publishers
I've seen some publishers respond to user complaints about low-quality or even malware-distributing ads on their sites by pointing out that they don't see or control the ads -- the ads are delivered straight from the ad network to the user, alongside publisher content. (I should say straight away that this still counts as "publishing" an ad. If the user's browser is infected by malware that inserts ads, or if an ISP or some other intermediary inserts the ad into the publisher's content, then the publisher is not liable; but if a website embeds code that serves ads from a third party, it is "publishing" that ad in the same sense as a back-page ad in a fancy Conde Nast magazine. Whether that leads to liability just depends on whether the elements of the tort are met, and whether 230 applies, of course.)
For major publishers I don't have a lot of sympathy. If their current ad stack lets bad ads slip through, they should use a different one, if they can, or demand changes in how their vendors operate. The incentives don't align for publishers and ad tech vendors to adopt a more responsible approach. Changing the law would do that.
At the same time it may be true that some small publishers depend on ads delivered by third parties, and not only does the technology not allow them to take more ownership of ad content, they lack the leverage to demand to be given the right tools. Under this proposal, these small publishers would be treated like any other publisher for the most part, though I tend to think that it would be harder to meet the actual elements of an offense with respect to them. That said I would be on board with some kind of additional stipulation that ad tech vendors are required to defend and pay out for any case where publishers below a certain threshold are hauled into court for distributing ads they have no control over, but are financially dependent on. Additionally, to the extent that the ad tech marketplace is so concentrated that major vendors are able to shift liability away from themselves to less powerful players, antitrust and other regulatory intervention may be needed to assure that risks are borne by those who can best afford to prevent them.
The Tradeoffs That Accompany This Idea Are Worth It
I am proposing to throw sand in the gears of online commerce and publishing, because I think the tradeoffs in terms of consumer protection and enforcing anti-discrimination laws are worth it. Ad rates might go up, platforms might be less profitable, ads might take longer to place, and self-serve ad platforms as we know them might go away. At the same time, fewer ads could mean less ad-tracking, and an across-the-board change to the law around ads should not tilt the playing field toward big players any more than it already is. Nor would it likely lead to an overall decline in ad spending, just a shift in how those dollars are spent (to different sites, and to fewer but more expensive ads).
This proposal would burden some forms of speech more than others, too, so it's worth considering First Amendment issues. One benefit of this proposal over subject matter-based proposals is that it is content neutral, applying to a business model. Commercial speech is already subject to greater regulation than other forms of speech, and this is hardly a regulation at all, just the failure to extend a benefit universally (though of course that can be a different way of saying the same thing). But if extending 230 to ads were constitutionally required wherever 230 is extended at all, the same logic would seem to require that 230 be extended to print media or possibly even first-party speech. That cannot be the case. And I have to warn people that if proposed reforms to Section 230 are always argued to be unconstitutional, that makes outright repeal of 230 all the more likely, which is not an outcome I'd support.
Fans of Section 230 should like this idea because it forestalls changes they no doubt think would be worse. Critics of 230 should like it because it addresses many of the problems they've complained about for years, and has few if any of the drawbacks of content-based proposals. So I think it's a good idea.
John Bergmayer is Legal Director at Public Knowledge, specializing in telecommunications, media, internet, and intellectual property issues. He advocates for the public interest before courts and policymakers, and works to make sure that all stakeholders -- including ordinary citizens, artists, and technological innovators -- have a say in shaping emerging digital policies.
FBI Lawyer Criminally Charged For Lying To The FISA Court
from the thousands-of-lies;-one-(1)-guilty-plea dept
by Tim Cushing - August 17th @ 10:44am
Earlier this year, the DOJ Inspector General released a report that -- surprise, surprise -- showed the FBI abusing its FISA privileges. The FBI had placed former Trump campaign advisor Carter Page under surveillance, suspecting (but only momentarily) that he was acting as an agent of a foreign power. (Guess which one.)
The report said the first wiretap request might have been valid, but subsequent requests for extensions weren't. The Inspector General said the agency cherry-picked info to keep the wiretap alive, discarding any evidence it had come across that would have ended the surveillance.
Even more damning, it found that FBI lawyer Kevin Clinesmith altered an email from another federal agency to hide Carter Page's involvement with that agency from the FISA court. The FISA court demanded the DOJ hand over information on any other cases before it that Clinesmith might have had a hand in. But that wasn't the end of it. Clinesmith was also referred to the DOJ for criminal charges.
The criminal charge has arrived. The criminal complaint [PDF] was filed in the DC federal district court. It details the email Clinesmith altered and submitted with the Carter Page surveillance extension request to the FISA court in 2017.
The original email -- sent to Clinesmith by an unnamed government agency -- said that Page was an "operational source" for this agency. The "Other Government Agency" (OGA) stated this in the email to Clinesmith:
[M]y recollection is that [Individual #1] was or is… [digraph] (two-letter designation for operational contacts) but the [documents] will explain the details. If you need a formal definition for the FISA, please let me know…
Here's how the email was submitted by Clinesmith to the FISA court (alteration in bold):
My recollection is that [Individual #1] was or is [digraph] and not a "source" but the [documents] will explain the details.
Clinesmith made some other false statements about Page's source status to the Supervisory Special Agent in charge of the case as well, but those aren't criminal offenses… just regular ol' intradepartmental lying.
During a series of instant messages between the defendant and the SSA, the defendant indicated that Individual #1 was "subsource" and "was never a source." The defendant further stated "[the OGA] confirmed explicitly he was never a source." The SSA subsequently asked "Do we have that in writing?"
We do. Falsified writing, but writing nonetheless. See above.
So, what's up next for the lying FBI lawyer? Looks like a felony conviction:
The former FBI attorney, Kevin Clinesmith, will plead to a single felony count of making a false statement, though his lawyer said it was not his intent to mislead the court that approved the original warrant in 2016 and three renewals in 2017.
“Kevin deeply regrets having altered the email. It was never his intent to mislead the court or his colleagues as he believed the information he relayed was accurate,” Clinesmith’s lawyer Justin Shur said in a statement. “But Kevin understands what he did was wrong and accepts responsibility.”
I would hope Clinesmith understood what he did wrong while he was doing it -- not just after being hit with a criminal complaint.
Some will argue Clinesmith's lying was due to his hatred of Trump. After all, special counsel Robert Mueller booted him (along with agents Peter Strzok and Lisa Page) after uncovering anti-Trump text messages sent by the lawyer, including one calling Vice President Mike Pence "stupid" and referring to the current GOP as "tea party on steroids."
But to assume that is to forget the FBI's long history of surveillance/wiretap abuses, its habitual lying to courts, and a penchant for voyeurism that goes far beyond what's necessary to close an investigation. Deep state conspiracies may be fun to construct, but the reality is the FBI has far more power than it has ever been able to wield responsibly.
Daily Deal: The Learning Apps Bundle
from the good-deals-on-cool-stuff dept
by Daily Deal - August 17th @ 10:40am
The Learning Apps Bundle is a hub of over 50 of the best educational apps for kids, making learning fun and entertaining. The apps are aimed at kids of all ages, from toddlers to teens, and cover everything from basic animal names and sounds, the alphabet, and numbers up to more complicated topics in math and physics. All these apps are interactive and easy to use. The bundle is on sale for $20.
Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
from the come-on-now dept
by Mike Masnick - August 17th @ 9:32am
A week after issuing his first ridiculous executive order about TikTok, barring any transactions involving the company if it is still owned by ByteDance, President Trump decided he needed to issue a second executive order about TikTok, this one more directly ordering ByteDance to sell it. The authority invoked in this one is different. The first relied on the IEEPA, which is what Trump has used to claim "national security" reasons for imposing tariffs on China without Congressional approval. This time he's using 50 USC 4565, which allows the US Treasury to block certain mergers, acquisitions, and takeovers that might impact national security.
Except, here Trump is using that in reverse. ByteDance bought Musical.ly (and made it TikTok) two years ago. Trump didn't raise a peep at the time. To turn around now, two years later, and pretend that he can order the deal unwound is just silly.
Even if you don't trust ByteDance/TikTok, you should be absolutely concerned about this for multiple reasons. First, it's a clear and blatant abuse of power by the President. Allowing any President to simply declare a foreign-owned company a problem and try to force it to sell to an American company is going to cause all sorts of long-term problems for the US. What's to stop foreign governments from doing the same to American companies? China is probably just itching to retaliate in kind. Second, reaching back two years to try to unwind a merger at this point, based on this flimsy legal theory, is just as crazy. It's clear that this is nothing more than vindictiveness on the part of the President.
If there are real security issues with TikTok, then there should be due process. There should be investigations and evidence. Not just a childish, narcissistic President suddenly declaring that an entire company must be sold.
from the bad-things-are-good-for-you dept
by Karl Bode - August 17th @ 6:23am
To be very clear: American consumers don't like broadband usage caps. At all. Most Americans realize (either intellectually or on instinct) that monthly broadband usage caps and overage fees are little more than monopolistic cash grabs. They're confusing, frustrating price hikes on captive customers that accomplish absolutely none of their stated benefits. They don't actually help manage congestion, and they aren't about "fairness" -- since if fairness were a goal you'd have a lot of grandmothers paying $5-$10 a month for broadband because they only check their email twice a day.
Enter U.S. cable giant Charter (Spectrum), which is currently in the middle of trying to get the FCC to kill the merger conditions applied as part of its 2015, $79 billion acquisition of Time Warner Cable and Bright House Networks. Those conditions, among other things, required that Charter adhere to net neutrality (despite the fact that the GOP has since killed net neutrality rules) and avoid usage caps and overage fees. Both conditions had seven-year sunset clauses built in, and Charter, eager to begin jacking up U.S. broadband consumer prices ever higher, has been lobbying to have them killed two years early.
Charter's lobbying tactics so far have included giving money to groups like the Boys and Girls Club in exchange for gushing support for the elimination of the merger conditions, despite the fact that doing so would harm these groups' constituents with higher prices.
Charter's other major play apparently involves trying to tell the FCC that U.S. consumers really like monthly usage caps and annoying fees, restrictions the cable monopoly claims are "popular." From a filing (pdf) spotted by Ars Technica:
"There is also evidence that some consumers—either those who do not consume a lot of data and/or those who are looking for a lower-cost plan—may want a service where prices are based on the amount of data used... These different plans are proliferating in the market because they offer consumers a cost-effective alternative to unlimited data plans that are more than adequate to meet their needs. The DC/UBP [data caps and usage-based pricing] Condition, however, prevents Charter from keeping pace with its competitors and offering consumers the kinds of plans they are looking for."
This is a cute bit of logic U.S. broadband monopolies have long used to imply that caps and overage fees are about "fairness." But they've never been about fairness. ISPs love to insist they just want to "experiment with price differentiation," but you will never see a giant ISP offer a super cheap plan for people like your grandma who only check the Weather Channel website and their email a few times a day. And they don't do that because the goal is to drive up the cost of everybody's connection over the long haul to satisfy investors' demands for higher quarterly returns.
And they can get away with this because in most U.S. markets, their only competition is a phone company that hasn't upgraded its DSL lines since 2004 or so, or no competition at all. If the market actually saw competition and had functional regulatory oversight, you might see plans more closely tailored toward your actual usage. But in a market absent of real pricing competition and overseen by fecklessly captured regulators, what you get instead is endlessly higher prices, usually in the form of sneaky, misleading fees. Charter's filing tries to tap dance around this problem as well:
"Opponents’ claims that the BIAS market is not competitive are beside the point and overstated. The Conditions are unnecessary regardless of the level of BIAS competition because Charter, like other broadband providers, lacks the incentive or ability to discriminate against OVDs. OVDs are critical to the BIAS business and far too large and powerful to thwart with data caps or interconnection fees."
Only a cable monopoly with a 20-year track record of overcharging captive customers could try to argue that a lack of competition is "beside the point." Incumbent telecom monopolies, eager to protect their dwindling TV revenues, have every incentive to use usage caps anti-competitively. And they already do. AT&T, for example, exempts its own streaming video service from its arbitrary usage caps, while financially penalizing users who stream Netflix, Amazon, or any other competitor.
There is zero doubt that Charter wants to head down this path as well. And it certainly will; it just wants to do so slowly to minimize backlash (picture the boiling frog fable, with you as the frog). There's also not much doubt that the captured, fact-phobic Trump FCC is eager to help, proclaiming that letting a cable monopoly price gouge captive customers is about "restoring internet freedom," or whatever half-baked justification is on the menu this week.