Techdirt Daily Newsletter for Thursday, 30 July, 2020

 
From: "Techdirt Daily Newsletter" <newsletters@techdirt.com>
Subject: Techdirt Daily Newsletter for Thursday, 30 July, 2020
Date: July 30th, 2020

Are you interested in receiving a shorter, easy-to-scan email of post excerpts? Check out our new Techdirt Daily Newsbrief.

Stories from Wednesday, July 29th, 2020

 

Stone Brewing Is Very Upset That People Don't Like Its Trademark Bullying

from the stone-cold dept

by Timothy Geigner - July 29th @ 7:37pm

It was just days ago that we were discussing Stone Brewing's new campaign to jealously protect all uses of the word "stone" in alcohol branding. The one-time advocate brewer that claimed to stand up for craft brewing against "Big Beer" has since devolved into a corporate gorilla, smashing up the USPTO to get trademarks cancelled and firing off cease-and-desist notices to small breweries. All this, mind you, as it also wages war on a second front with MillerCoors over Keystone's rebranding as simply "Stone". In that suit, MillerCoors complained that lots of breweries use the word "stone", which appears to have set Stone Brewing off on its bout of aggression.

When Sawstone Brewing pushed back on a C&D and failed to work out an agreement with Stone Brewing, the latter initiated an attempt to cancel the former's trademark. Sawstone complained publicly. And now Stone Brewing is busy complaining that the public is being mean to it as a result.

Stone Brewing published a lengthy statement on its website Monday night regarding its trademark dispute with Sawstone Brewing Co. in Morehead, Ky., saying that Stone has become the “subject of a vicious online harassment and smear campaign.”

In a newly published statement, Greg Koch, the CEO of Stone Brewing, acknowledged the company’s multiple trademark disputes, noting that “this kind of thing is just part of owning a brand name and a company identity,” but he claimed that Sawstone’s version of events is not how the situation unfolded.

We'll get into that last bit in a second, but it's worth pointing out that Koch's claim that this is all somehow necessary due to owning a brand name is demonstrably false. MillerCoors itself argued against this, admittedly disingenuously. After all, while I'd argue that turning Keystone into "Stone" probably is close enough to Stone Brewing to cause confusion, MillerCoors' claim that lots of other breweries have used the word "stone" within their brands for a long, long time is absolutely true. And if Stone Brewing not only survived, but thrived, with all those other uses in existence, that completely negates the claim that Stone Brewing had no choice but to act as it has. Were that true, Stone Brewing wouldn't be the behemoth it now is.

Now, on to Koch's claim that Sawstone Brewing's description of events wasn't accurate... it's all in the petty details. Essentially, Koch claims that this all started when Sawstone Brewing attempted to trademark its name and that Stone Brewing tried to amicably work out a settlement of the trademark issues over the course of a few months. In addition, Sawstone missed a couple of deadlines for which it had promised settlement proposals. And... that's it.

All of which completely misses the point. Stone Brewing didn't have to take this action at all. And while the reported claims of online stalking and threats sent to Stone Brewing are reprehensible if true, a public backlash to bullying behavior by a brewer that was supposed to be standing up to these types of corporate actions is perfectly valid. If Stone Brewing doesn't like that version of the backlash, it can cease playing the bully. Unfortunately...

As for the trademark dispute, Koch said that Stone will not back down and that the decision will ultimately lie with the USPTO.

Well, then enjoy the continued backlash, you Arrogant Bastards.

5 Comments »

Moderation Of Racist Content Leads To Removal Of Non-Racist Pages & Posts (2020)

from the moderation-mistakes dept

by Copia Institute - July 29th @ 4:08pm

Summary: Social media platforms are constantly seeking to remove racist, bigoted, or hateful content. Unfortunately, these efforts can cause unintended collateral damage to users who share surface similarities to hate groups, even though many of these users take a firmly anti-racist stance.

A recent attempt by Facebook to remove hundreds of pages associated with bigoted groups resulted in the unintended deactivation of accounts belonging to historically anti-racist groups and public figures.

The unintentional removal of non-racist pages occurred shortly after Facebook engaged in a large-scale deletion of accounts linked to white supremacists, as reported by OneZero:

Hundreds of anti-racist skinheads are reporting that Facebook has purged their accounts for allegedly violating its community standards. This week, members of ska, reggae, and SHARP (Skinheads Against Racial Prejudice) communities that oppose white supremacy are accusing the platform of wrongfully targeting them. Many believe that Facebook has mistakenly conflated their subculture with neo-Nazi groups because of the term “skinhead.”

The suspensions occurred days after Facebook removed 200 accounts connected to white supremacist groups and as Mark Zuckerberg continues to be scrutinized for his selective moderation of hate speech.

Dozens of Facebook users from around the world reported having their accounts locked or their pages disabled due to their association with the "skinhead" subculture. This subculture dates back to the 1960s and predates the racist/fascist tendencies now commonly associated with that term.

Facebook’s policies have long forbidden the posting of racist or hateful content. Its ban on "hate speech" encompasses the white supremacist groups it targeted during its purge of these accounts. The removals of accounts not linked to racism -- but linked to the term "skinhead" -- were accidental, presumably triggered by a term now commonly associated with hate groups.

Questions to consider:

  • How should a site handle the removal of racist groups and content?
  • Should a site use terms commonly associated with hate groups to search for content/accounts to remove?
  • If certain terms are used to target accounts, should moderators be made aware of alternate uses that may not relate to hateful activity?
  • Should moderators be asked to consider the context surrounding targeted terms when seeking to remove pages or content?
  • Should Facebook provide users whose accounts are disabled with more information as to why this has happened? (Multiple users reported receiving nothing more than a blanket statement about pages/accounts "not following Community Standards.")
  • If context or more information is provided, should Facebook allow users to remove the content (or challenge the moderation decision) prior to disabling their accounts or pages?
Resolution: Facebook's response was nearly immediate. Facebook apologized to users shortly after OneZero reported the apparently erroneous deletion of non-racist pages. Guy Rosen, Facebook's VP of Integrity, also apologized on Twitter to the author of the OneZero post, saying the company had removed these pages in error during its mass deletion of white supremacist pages/accounts and was looking into the mistake.

17 Comments »

Former Rep. Chris Cox Used His Testimony At Tuesday's Senate Hearing On The Internet's Foundational Law To Do Some Myth-Busting

from the present-at-the-creation dept

by Mike Godwin - July 29th @ 1:49pm

Whenever internet-law experts see a new Congressional hearing scheduled whose purpose is to explore whether Section 230—a federal statute that’s widely regarded as a foundational law of the internet—needs to be amended or repealed, we shudder. That’s because we know from experience that even some of the most thoughtful and conscientious lawmakers have internalized some broken notions about Section 230 and have the idea that this statute is responsible for everything that bothers us about today’s internet.

That’s why Tuesday’s Senate hearing about Section 230 was, in its own way, much more calming than earlier hearings on the law have been. Each of the four witnesses had substantive knowledge to share, and even if some witnesses were wrong (at least in my view) on this or that fine point, none of them was grandstanding or (as has often been the case in the past) unwittingly or intentionally deceptive about what might be wrong with 230. Each more or less acknowledged the importance of the law, which cyberlaw professor Jeff Kosseff has aptly characterized, in his book of the same name, as “The Twenty-Six Words That Created the Internet.” Even Professor Olivier Sylvain of Fordham Law School, who believes Section 230’s protections to be “ripe for narrowing,” focused on the courts’ role in interpreting the statute rather than Congress’s role in possibly amending it. Unlike at some other hearings, none of this hearing’s witnesses called for repeal.

Kosseff, a faculty member at the U.S. Naval Academy at Annapolis, was himself one of the witnesses on Tuesday’s panel, which was convened by the Senate Commerce Committee’s Subcommittee on Communications, Technology, Innovation, and the Internet. But even though the hearing’s title was inspired by Kosseff’s book, it was former Representative Chris Cox, now a partner at the Morgan, Lewis & Bockius law firm and a board member at the tech lobbying group NetChoice, who was the star. In the 1990s, Representative Cox was an author and co-sponsor (with then-Representative, now Senator, Ron Wyden) of the bill that became Section 230. Having him as a witness on Tuesday’s panel was a bit like having James Madison show up to testify about what he was thinking when he wrote the Bill of Rights.

Cox’s testimony spotlighted the ways in which the legal immunities built into Section 230 in 1995—immunities that generally shield internet companies from liability for content created by users and subscribers—had given rise to the transformational effect those companies have had in the world of 2020. Just as important, Cox pointed out in his written testimony that the law does not shield service providers who created illegal or tortious content--”in whole or in part”--from legal liability:

Section 230 was written, therefore, with a clear fact-based test:

  • Did the person create the content? If so, that person is liable for any illegality.
  • Did someone else create the content? Then that someone else is liable.
  • Did the person do anything to develop the content created by another, even if only in part? If so, the person is liable along with the content creator.

Cox explained that this approach was designed to accommodate the realities of being an online service provider while not allowing service providers that are clearly responsible for a crime or civil wrong to be immunized by the statute:

“Rep. Wyden and I knew that, in light of the volume of content that even in 1995 was crossing most internet platforms, it would be unreasonable for the law to presume that the platform will screen all material. We also well understood the corollary of this principle: if in a specific case a platform actually did review material and edit it, then there would be no basis for assuming otherwise. As a result, the plain language of Section 230 deprives such a platform of immunity.”

Cox used this portion of his written testimony to debunk what he called certain “myths” about Section 230—of which the first and most obvious myth is that Section 230 immunizes “websites that knowingly engage in, solicit, or support illegal activity.” Wrote Cox: “It bears repeating that Section 230 provides no protection for any website, user, or other person or business involved even in part in the creation or development of content that is tortious or criminal.”

Another of these myths had to do with the idea that 230’s purpose was to set up separate legal rules for internet services that don’t apply in the outside world. Cox insists, however, that Section 230 simply extended to the online world the protections brick-and-mortar enterprises already had, in terms of not being liable for content they didn’t fully or partially create. (For example, if I slander someone in a restaurant, the restaurant’s proprietor shouldn’t be held liable for my using his premises to defame someone. I look forward to testing this principle when we’re all going out to restaurants again.)

Other creation myths included the idea that Section 230 was designed just to protect “an infant industry” (so is no longer necessary now that the industry is old enough to vote), or the idea that it was a favor to the tech industry (Cox says the tech companies in the 1990s mostly didn’t know enough to lobby for the provision—or else didn’t even exist then), or the idea that it was part of a “grand bargain” to help then-Senator James Exon pass his anti-porn legislation, then mostly known as the Communications Decency Act. With regard to that last theory, Cox explains that his and Wyden’s draft was “deliberately crafted as a rebuke” to Senator Exon’s approach to online porn. If service providers were going to make the world’s information available to users, Cox and Wyden reasoned, there was no way that any of the services could effectively be responsible for the “indecent” content in libraries and elsewhere that might show up on users’ screens.

The real reason Section 230 was included with Senator Exon’s Communications Decency Act language had to do with the politics of the conference committee that had to work out differences between the House and Senate versions of the Telecommunications Act of 1996. The Cox-Wyden provision was in the House version, but an overwhelming majority of senators had voted for the CDA in the Senate version. Harmonizing the two opposing provisions had some interesting consequences, as Cox’s testimony points out:

When the House and Senate met in conference on the Telecommunications Act, the House conferees sought to include Cox-Wyden and strike Exon. But political realities as well as policy details had to be dealt with. There was the sticky problem of 84 senators having already voted in favor of the Exon amendment. Once on record with a vote one way—particularly a highly visible vote on the politically charged issue of pornography—it would be very difficult for a politician to explain walking it back. The Senate negotiators, anxious to protect their colleagues from being accused of taking both sides of the question, stood firm. They were willing to accept Cox-Wyden, but Exon would have to be included, too. The House negotiators, all politicians themselves, understood. This was a Senate-only issue, which could be easily resolved by including both amendments in the final product. It was logrolling at its best.

“Perhaps part of the enduring confusion about the relationship of Section 230 to Senator Exon’s legislation has arisen from the fact that when legislative staff prepared the House-Senate conference report on the final Telecommunications Act, they grouped both Exon’s Communications Decency Act and the Internet Freedom and Family Empowerment Act into the same legislative title. So the Cox-Wyden amendment became Section 230 of the Communications Decency Act—the very piece of legislation it was designed to counter. Ironically, now that the original CDA has been invalidated, it is Ron’s and my legislative handiwork that forever bears Senator Exon’s label.”

Cox’s explanation should put to rest forever the myth that the Supreme Court’s decision in Reno v. ACLU (1997), which struck down all other provisions of the Communications Decency Act as unconstitutional, left Section 230 behind as an incomplete fragment, meaningless and/or dysfunctional on its own. As Cox’s written testimony makes clear, Section 230 was originally crafted as a standalone statute whose purpose was to negate the effect of Stratton Oakmont v. Prodigy (1995)—a case whose judge drastically misread both prior caselaw and the facts of the case he decided—and restore something like the state of online-services law as it was understood after a federal court’s influential 1991 decision in Cubby v. CompuServe.

One of the unfortunate aspects of Tuesday’s hearing is that Cox’s lengthy first-person account and massive debunking of common myths about Section 230 weren’t heard by most of the Senators or by the viewers who only watched the hearing online. In “person” (Cox, like the other witnesses, was beamed in via a teleconferencing system that I presume was Zoom), the former congressman departed from his written remarks to remind his audience that, among other things, Section 230 gave us Wikipedia, the free resource hosted by the Wikimedia Foundation that most of us in the developed Western world rely on every day. This is something I wish more legislators would remember—that Wikipedia depends on Section 230 to exist in its current form and usefulness. Full disclosure: I spent a few years as general counsel, and later outside counsel, doing work for the Wikimedia Foundation. And, just like any other lawyer who has worked to protect a highly valued online service, I can testify that we depended on Section 230 a lot.

Still another unfortunate aspect is that Kosseff’s and Sylvain’s contributions, as well as those of the Internet Association’s deputy general counsel, Elizabeth Banker, were somewhat eclipsed both by Cox’s written testimony and by his live testimony as one of the two fathers of “the twenty-six words that created the internet.” But these tradeoffs were a small price to pay in order to spend so much of Tuesday morning getting myths busted and truths told. Even as someone who’s been dealing with Section 230 for almost as long as Cox has, I can say truthfully that I learned a lot.

11 Comments »


Banning TikTok Will Accomplish Nothing. Fix Our Broader Security & Privacy Problems Instead.

from the adulting-is-hard dept

by Karl Bode - July 29th @ 12:13pm

Earlier this month I noted how the calls to ban TikTok didn't make a whole lot of sense. For one thing, researchers have repeatedly shown that TikTok isn't doing anything different from a flood of other foreign and domestic services. Secondly, the majority of the most vocal pearl clutchers over the app (Josh Hawley, etc.) haven't cared a whit about things like consumer privacy or internet security, suggesting it's more about politics than policy. The wireless industry SS7 flaw? US cellular location data scandals? The rampant lack of any privacy or security standards in the internet of things? The need for election security funding?

Most of the folks hyperventilating about TikTok haven't made so much as a peep on these other subjects. Either you actually care about consumer privacy and internet security or you don't, and a huge swath of those hyperventilating about TikTok have been utterly absent from the broader conversation. In fact, many of them have done everything in their power to scuttle any effort to have even modest privacy guidelines for the internet era, and fought every effort to improve and properly fund election security. Again, that's because, for many it's more about politics than serious, adult tech policy.

That's not to say there aren't security concerns when it comes to installing Chinese-made apps on American devices, but that same argument can be made (but somehow isn't) for an absolute ocean of foreign and domestic services, hardware, and apps. Over the weekend, Kevin Roose at the New York Times made some similar points, noting that things tend to get stupid when you fuse politics with policy and domestic financial interests with national security (especially given lobbyists adore taking advantage of the lack of transparency in the latter):

"There are also reasons to be skeptical of the motives of TikTok’s biggest critics. Many conservative politicians, including Mr. Trump, appear to care more about appearing tough on China than preventing potential harm to TikTok users. And Silicon Valley tech companies like Facebook, whose executives have warned of the dangers of a Chinese tech takeover, would surely like to see regulators kneecap one of their major competitors."

It took a while for this opinion to form out of the internet news and policy murk, but it's nice to see folks realizing that banning TikTok, but doing nothing about an absolute ocean of foreign and domestic hardware, services, and apps that pose similar threats, is kind of pointless and stupid:

"I’ll be honest: I don’t buy the argument that TikTok is an urgent threat to America’s national security. Or, to put it more precisely, I am not convinced that TikTok is inherently more threatening to Americans than any other Chinese-owned app that collects data from Americans. If TikTok is a threat, so are WeChat, Alibaba and League of Legends, the popular video game, whose maker, Riot Games, is owned by China’s Tencent.

And since banning every Chinese-owned tech company from operating in America wouldn’t be possible without erecting our own version of China’s Great Wall — a drastic step that would raise concerns about censorship and authoritarian control — we need to figure out a way for Chinese apps and American democracy to coexist."

But the piece goes a step further in smartly arguing that if you want to deter TikTok-esque privacy issues, you're better off fixing the underlying rot that has resulted in the cavalier treatment of consumer data. If you want to use TikTok as an example of how to do security and privacy oversight correctly, you start by forcing the company to embrace open source, by finally getting off our collective, befuddled asses and passing a basic but tough US privacy law, by demanding greater transparency from TikTok (and every other company), and by ending our myopic view of security and privacy:

"I think TikTok is a bit of a red herring,” Alex Stamos, Facebook’s former chief security officer and a professor at Stanford University, told me in an interview. Ultimately, Mr. Stamos said, the question of what to do about TikTok is secondary to the question of how multinational tech giants in general should be treated.

“This is a chance to come up with a thoughtful model of how to regulate companies that operate in both the U.S. and China, no matter their ownership,” he said."

Granted, that requires nuance and a holistic view of the real problem, and that's the last thing most of the folks crying about TikTok want. And they don't want that because they're not genuinely interested in consumer privacy and internet security; they're interested in putting on a little stage play for political reasons. Adult, good-faith tech policy solutions that solve actual, real-world problems are the very last thing on most of their minds.

6 Comments »

NIST Study Confirms The Obvious: Face Masks Make Facial Recognition Tech Less Useful, More Inaccurate

from the for-now... dept

by Tim Cushing - July 29th @ 10:46am

At the end of last year, the National Institute of Standards and Technology (NIST) released its review of 189 facial recognition algorithms submitted by 99 companies. The results were underwhelming. The tech that law enforcement and security agencies seem to feel is a game changer is just more of the same bias we've been subjected to for years without any AI assistance.

Asian and African American people were up to 100 times more likely to be misidentified than white men, depending on the particular algorithm and type of search. Native Americans had the highest false-positive rate of all ethnicities, according to the study, which found that systems varied widely in their accuracy.

The faces of African American women were falsely identified more often in the kinds of searches used by police investigators where an image is compared to thousands or millions of others in hopes of identifying a suspect.

Who were the winners in NIST's facial recognition runoff? These guys:

Middle-aged white men generally benefited from the highest accuracy rates.

We have some good news and bad news to report from NIST's latest facial recognition study [PDF]. And the good news is also kind of bad news. (The bad news contains no good news, though.)

The bad news is that the COVID-19 pandemic is still ongoing. This leads to the good news: face masks -- now a necessity and/or requirement in many places -- are capable of thwarting facial recognition systems.

Using unmasked images, the most accurate algorithms fail to authenticate a person about 0.3% of the time. Masked images raised even these top algorithms’ failure rate to about 5%, while many otherwise competent algorithms failed between 20% to 50% of the time.

But that's also bad news. Masks mainly increase the chance of false negatives -- an unwelcome side effect of face coverings, if accurate matches are your goal. The tiny bit of good news is that masks generate mostly unusable images for passive systems (like those installed in the UK) that collect photos of everyone who passes by their lenses. The other small bit of good news in this bad news sandwich is this: face masks reduce the risk of bogus arrests/detainments.

While false negatives increased, false positives remained stable or modestly declined.

NIST also noticed a couple of other quirks in its study. Mask coverage obviously matters: the more of the face that's covered, the less likely it is that the software will draw the correct conclusion. But color also matters. Black masks produced more bad results than blue masks.

Companies producing facial recognition tech (89 algorithms were tested by NIST for this project) aren't content to wait out the pandemic. Many are already working on algorithms that use fewer features to generate possible matches. This is also bad news. While the tech may be improving, working around masks by limiting the number of data points needed to make a match is just going to generate more false positives and negatives. But companies are already training their AI on face-masked photos, many of which are being harvested from public accounts on social media websites. Dystopia is here to stay. The pandemic has only accelerated its arrival.

Read More | 10 Comments »

Daily Deal: Film And Cinematography Mastery Bundle

from the good-deals-on-cool-stuff dept

by Daily Deal - July 29th @ 10:41am

The Film and Cinematography Mastery Bundle will teach you how to write, shoot, and distribute your own films. One course walks you through every step of the filmmaking process, from writing a screenplay to picking the right distribution method, and it comes with bonus materials including sample spreadsheets, contracts, and additional resources. Another course takes on the basics of cinematography, like choosing the right camera, ISO, frame rate, and shutter speed, and broadens your understanding of camera movement and working with a production team. The final course shows you how to create professional videos with the equipment you already have, and takes a look at ideal equipment to buy, techniques for getting better shots, and much more. It's on sale for $29.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.

Comment »

Facebook Employee Revolt Shows, Yet Again, That There Are Other Incentives Beyond Section 230

from the incentives-come-in-many-forms dept

by Mike Masnick - July 29th @ 9:34am

One of the most frustrating claims that critics of Section 230 make is that, because of Section 230, the big internet companies have no incentive to deal with awful content (abuse, harassment, bigotry, lies, etc.). Yet, over and over again, we see why that's not at all true. First of all, there's a strong incentive to deal with crap content on your platform because if you don't, your users will go elsewhere. So the userbase itself is an incentive. Then, as we've discussed, there are incentives from advertisers, who don't want their ads showing up next to such junk and can pressure companies to change.

Finally, there are the employees of these companies. While so much of the narrative around internet companies focuses (somewhat ridiculously) on the larger-than-life profiles of their founders/CEOs, the reality is that there are thousands of employees at these companies, many of whom don't want to be doing evil shit or enabling evil shit. And they have influence. Over the past few years, there have been multiple examples of employees revolting and pushing back against company decisions on things like government contracts and surveillance.

And now they're pushing back on the wider impact of these companies. A Buzzfeed article details how a bunch of employees inside Facebook are getting fed up with the company's well-documented problems, its failure to change, and its failure to take into account its broader impact.

“This time, our response feels different,” wrote Facebook engineer Dan Abramov in a June 26 post on Workplace, the company’s internal communications platform. “I’ve taken some [paid time off] to refocus, but I can’t shake the feeling that the company leadership has betrayed the trust my colleagues and I have placed in them.”

Messages like those from Wang and Abramov illustrate how Facebook’s handling of the president’s often divisive posts has caused a sea change in its ranks and led to a crisis of confidence in leadership, according to interviews with current and former employees and dozens of documents obtained by BuzzFeed News. The documents — which include company discussion threads, employee survey results, and recordings of Zuckerberg — reveal that the company was slow to take down ads with white nationalist and Nazi content reported by its own employees. They demonstrate how the company’s public declarations about supporting racial justice causes are at odds with policies forbidding Facebookers from using company resources to support political matters. They show Zuckerberg being publicly accused of misleading his employees. Above all, they portray a fracturing company culture.

The examples in the Buzzfeed article may not be representative of how all employees feel, nor are they necessarily an indication that Facebook will change its policies one way or the other. They simply highlight that pressure to be better, to be responsible, and to build better products comes from all over -- and in Silicon Valley, many employees came up believing (cynically or not) that they're there to change the world for the better. And when they realize they may not be doing that, many will speak out and push back.

And that is likely to have an impact over time: especially when the big tech companies are fighting over top talent, and desperately trying to hire the best engineers possible. If those engineers speak up and speak out, it can create very strong incentives for companies to change and to improve -- all without needing to take an axe to Section 230, which has little to nothing to do with all of this.

20 Comments »

Under Investigation For Antitrust Abuse, Trump DOJ Rubber Stamps Major Ad Industry Consolidation

from the bigger-must-be-better dept

by Karl Bode - July 29th @ 6:38am

While the Trump administration and its allies (like Josh Hawley) like to talk a lot about monopolization in "big tech," they couldn't actually care less about monopolies or their impact on competition. For example, while Hawley and the Trump FCC/DOJ have made an endless stink about the power of "big tech," that's largely for performative political reasons, namely to perpetuate the utterly false claim that conservatives are being "censored," to bully tech giants away from encryption, or to frighten them away from finally doing something about the (profitable) bigotry and disinformation problems that plague their networks.

Oddly, this performative, sometimes vindictive nonsense is often conflated with actually caring about monopoly power and reforming antitrust. You only need to look at the DOJ and FCC's mindless rubber-stamping of every fleeting whim of the US telecom industry, one of the most heavily monopolized (and widely despised) sectors in technology. While T-Mobile was getting the red carpet rolled out for its competition- and job-killing merger with Sprint, Bill Barr's DOJ was busy hassling small cannabis companies, or filing empty-headed "antitrust" lawsuits against automakers for agreeing to limit emissions.

Studies from the likes of the Antitrust Institute (pdf) have made it very clear: the Trump administration's interest in "antitrust reform" is utterly and completely hollow. During an era when lagging antitrust enforcement needed to be meaningfully improved and reformed, the Trump administration instead began wielding antitrust as a political bludgeon to gain leverage over its enemies and dole out favors to its allies. It's mindless theater and an abuse of the law, yet it's often portrayed as serious adult policy making by many experts and the press.

Despite ongoing whistleblower investigations into Barr's politicization of antitrust, his DOJ is now rubber-stamping the merger between native advertising platforms Taboola and Outbrain. EU and UK regulators have been scrutinizing the deal, arguing it will erode competition in the native advertising (read: clickbait) space, resulting in notably worse terms for already struggling publishers, who face getting an even smaller share of advertising revenue:

"The Competition and Markets Authority said it has concerns the merger will result in a substantial lessening of competition and that it will investigate further unless the businesses take steps to address them...Many national and local news websites in the UK use one of the two services and the CMA said a “large proportion” of the publishers it contacted as part of its initial investigations were concerned about the potential impact of the deal."

More specifically, UK regulators say the combined companies would enjoy an 80% market share for the clickbait online recommendations market:

"Taboola and Outbrain are the 2 largest providers of content recommendation services to publishers in the UK, with a combined market share of over 80%. They supply very similar services and are each other’s main competitor. In particular, the companies’ internal documents and information received from publishers showed the strong competition between the companies.

If the merger were to go ahead, the CMA is concerned that publishers in the UK will have a reduced choice of supplier for content recommendation services. This could result in a worsening of terms for publishers and a reduction in their share of advertising revenue. A large proportion of the publishers contacted by the CMA were concerned about the impact of the deal if it goes ahead."

Many in the marketing sector have echoed those sentiments, stating that the only thing the merger really accomplishes is creating a "gigantic vortex of crap."

In contrast, there's been little to no serious scrutiny by the Barr DOJ, which apparently didn't spend much time thinking about the deal's potential pitfalls:

"The Taboola-Outbrain merger had been proposed by the companies’ executives as way for publishers to monetize digital content outside of Big Tech’s monopolies. But some publishers Adweek has been talking to question how the merged company would be structured and what sort of terms it would offer publishers.

“What will be interesting is what happens to yield?” a publishing source told Adweek. “Will this be an improvement? Will the joined forces allow more yield to flow through to publishers? Or does this actually remove some competition in such a way that yield suffers? I don’t know.”

Now, maybe the Trump DOJ seriously looked at this deal intelligently and greenlit it based on adult policy making (they've yet to explain the approval). But based on what we've seen in the last year or two (like DOJ antitrust boss Delrahim using his personal phone and text message accounts to personally guide T-Mobile to merger approval, something, it should go without saying, antitrust enforcers should not be doing), that's giving them too much credit. It could really be as simple as the fact that Taboola execs told the DOJ this might potentially challenge "big tech," and that was enough for them.

That's the problem when you start abusing legal authority with reckless abandon--you lose any trust or faith in the idea that your decisions are actually being based on the data.

7 Comments »

EU Plans To Use Supercomputers To Break Encryption, But Also Wants Platforms To 'Create Opportunities' To Snoop On End-To-End Communications

from the there-are-better-ways dept

by Glyn Moody - July 29th @ 3:32am

They say that only two things are certain in life: death and taxes. But here on Techdirt, we have a third certainty: that governments around the world will always seek ways of gaining access to encrypted communications, because they claim that things are "going dark" for them. In the US and elsewhere, the most requested way of doing that is by inserting backdoors into encryption systems. As everyone except certain government officials knows, that's a really bad idea. So it's interesting to read a detailed and fascinating report by Matthias Monroy on how the EU has been approaching this problem without asking for backdoors -- so far. The European Commission has been just as vocal as the authorities in other parts of the world in calling for law enforcement to have access to encrypted communications for the purpose of combating crime. But EU countries such as Germany, Finland and Croatia have said they are against prohibiting, limiting or weakening encrypted connections. Because of the way the EU works, that means the region as a whole needs to adopt other methods of gaining access. Monroy explains that the EU is pinning its hopes on its regional police organization:

At EU level, Europol is responsible for reading encrypted communications and storage media. The police agency has set up a "decryption platform" for that. According to Europol's annual report for 2018, a "decryption expert" works there, from whom the competent authorities of the Member States can obtain assistance. The unit is based at the European Centre for Cybercrime (EC3) at Europol in The Hague and received five million euros two years ago for the procurement of appropriate tools.

The Europol group uses the open source password recovery software Hashcat to guess passwords protecting content and storage media. According to Monroy, the "decryption platform" has managed to obtain passwords in 32 of the 91 cases where the authorities needed access to an encrypted device or file. A roughly 35% success rate is not too shabby, depending on how strong the passwords were. But the EU wants to do better, and has decided one way to do that is to throw even more number-crunching power at the problem: in the future, supercomputers will be used. Europol is organizing training courses to help investigators gain access to encrypted materials using Hashcat. Another "decryption expert group" has been given the job of coming up with new technical and legal options.
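For readers who haven't encountered Hashcat, the core technique is straightforward: hash each candidate password and check whether the result matches the hash recovered from the seized device or file. Here's a minimal Python sketch of that loop -- the wordlist and target hash are invented for illustration, and Hashcat itself runs this kind of search on GPUs, at billions of guesses per second, across dozens of hash formats:

    import hashlib

    def crack(target_hash, candidates):
        # Hash each candidate and compare it against the target digest.
        for candidate in candidates:
            if hashlib.md5(candidate.encode()).hexdigest() == target_hash:
                return candidate
        return None  # password wasn't in the wordlist

    # Stand-in for a hash pulled from seized media: "hunter2" hashed with MD5.
    target = hashlib.md5(b"hunter2").hexdigest()
    print(crack(target, ["letmein", "password1", "hunter2"]))  # -> hunter2

The stronger and less guessable the password, the more candidates a search like this has to churn through -- which is exactly why Europol wants supercomputers for the job. Unfortunately, the approaches under consideration are little more than plans to bully Internet companies into doing the dirty work: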

Internet service providers such as Google, Facebook and Microsoft are to create opportunities to read end-to-end encrypted communications. If criminal content is found, it should be reported to the relevant law enforcement authorities. To this end, the Commission has initiated an "expert process" with the companies in the framework of the EU Internet Forum, which is to make proposals in a study.

This process could later result in a regulation or directive that would force companies to cooperate.

There's no way to "create opportunities" to read end-to-end encrypted communications without weakening the latter. If threats from the EU and elsewhere force major Internet services to take this step, people will just start using open source solutions that are not controlled by any company. As Techdirt has noted, there are far better ways to gain access to encrypted communications -- ones that don't involve undermining them.

Follow me @glynmoody on Twitter, Diaspora, or Mastodon.

20 Comments »


Visit Techdirt for today's stories.

Forward to a Friend
 
 
  • This mailing list is a public mailing list - anyone may join or leave, at any time.
  • This mailing list is announce-only.

Techdirt's original daily email. Once a day, Techdirt will email the full-length version of the previous day's stories from Techdirt.com (based on Pacific time).

Privacy Policy:

Floor64 will not share your email address with third parties.

Go back to Techdirt