Techdirt Daily Newsletter for Thursday, 6 August, 2020

 
From: "Techdirt Daily Newsletter" <newsletters@techdirt.com>
Subject: Techdirt Daily Newsletter for Thursday, 6 August, 2020
Date: August 6th, 2020

Are you interested in receiving a shorter, easy-to-scan email of post excerpts? Check out our new Techdirt Daily Newsbrief.

Stories from Wednesday, August 5th, 2020

 

Law Firm Southtown Moxie Responds Hilariously To Stupid Cease And Desist Letter

from the noob-associates dept

by Timothy Geigner - August 5th @ 8:10pm

There are many ways to respond to a cease and desist notice over trademark rights. The most common response is probably fear-based capitulation. After all, trademark bullying works for a reason, and that reason is that large companies have access to large legal war chests while smaller companies usually just run away from their own rights. Another response is an aggressive defense against the bullying. And, finally, every once in a while you get a response so snarky in tone that it probably registers on the Richter scale, somehow.

The story of how a law firm called Southtown Moxie responded to a C&D from a (maybe?) financial services firm called Financial Moxie is of the snark variety. But first, some background.

Financial Moxie is a financial advisory firm catering to working moms. Or at least I think it is… the website also lists multiple fitness instructors on staff, so I don’t know what that’s all about. The “moxie” term aligns with the phenomenon of “Moxie Tribes,” which seem to be groups for working moms to talk about how awesome they are. It’s basically Goop with fewer vagina candles. Meanwhile, “Southtown Moxie” is a law firm in Tennessee and North Carolina.

After receiving a cease and desist letter demanding that Southtown Moxie withdraw its trademark application, Kevin Christopher of Rockridge Venture Law (Southtown Moxie’s sibling firm) sat down with a beer to pen a response.

And what a response it is. The full letter is embedded below, but you damn well know you're in for a treat when a reply to a C&D notice begins with:

Dear Ms. Harper,

THANK YOU SO MUCH for your C&D letter and notice of opposition to our trademark application! This case presents a wonderful training opportunity for our noob associates. (And, lawyer-to-lawyer I must add it’s an honor to correspond with you. You are obviously a sensational salesperson-attorney to convince your client to pay you for challenging another law firm’s trademark application—I’m truly in awe and look forward to learning a thing or two from you. When I think of it, your client is paying you, and also giving us good trademark cannon fodder for our noobs, so it’s a win-win all around.)

And we're off! The letter then goes on to list all of the things Ms. Harper's client could buy instead of wasting everyone's time on a losing potential lawsuit. Examples include: a speedboat, glamorous clothing and jewelry, or hiring a social media influencer. The most important part of all of this, I have to stress, is that each example comes with an embedded photo of a Barbie doll pantomiming these suggestions.

With that throat-clearing complete, the response goes on to note in creative terms that financial and legal services are not the same thing, nor in the same markets, and therefore any trademark concern evaporates.

But I wouldn’t be drinking a Purple Haze in my skivvies if I didn’t point out the irony that your client has hired you to represent her BECAUSE SHE IS NOT LICENSED TO PRACTICE LAW. Based on your letter, she claims that our mark, limited to the provision of legal services, infringes upon her financial advisory, personal coaching, and tribal businesses and causes her great harm. Basically she thinks someone looking for “Moxie Tribe” fellowship is going to get sucked up into our vortex of intellectual property services.

The response then goes on to point out that Financial Moxie's site carries a disclaimer stating that all communication is intended for select states in America, none of which are North Carolina or Tennessee, where Southtown Moxie is located. So, different industries and different geographic locations. None of this adds up to a valid trademark dispute and it seems likely that Southtown Moxie is going to win in front of the Trademark Trial and Appeal Board.

But, hey, we should at least thank Financial Moxie and its legal team for setting things up for this gem of a C&D response.

Read More | 9 Comments »

Content Moderation Case Study: Facebook Nudity Filter Blocks Historical Content And News Reports About The Error (June 2020)

from the content-moderation-is-hard dept

by Copia Institute - August 5th @ 3:38pm

Summary: Though social media networks take a wide variety of evolving approaches to their content policies, most have long maintained relatively broad bans on nudity and sexual content, and have heavily employed automated takedown systems to enforce these bans. Many controversies have arisen from this, leading some networks to adopt exceptions in recent years: Facebook now allows images of breastfeeding, childbirth, post-mastectomy scars, and post-gender-reassignment surgery, while Facebook-owned Instagram is still developing its exception for nudity in artistic works. However, even with exceptions in place, the heavy reliance on imperfect automated filters can obstruct political and social conversations, and block the sharing of relevant news reports.

One such instance occurred on June 11, 2020 following controversial comments by Australian Prime Minister Scott Morrison, who stated in a radio interview that “there was no slavery in Australia”. This sparked widespread condemnation and rebuttals from both the public and the press, pointing to the long history of enslavement of Australian Aboriginals and Pacific Islanders in the country. One Australian Facebook user posted a late-19th-century photo from the State Library of Western Australia, depicting Aboriginal men chained together by their necks, along with a statement:

Kidnapped, ripped from the arms of their loved ones and forced into back-breaking labour: The brutal reality of life as a Kanaka worker - but Scott Morrison claims ‘there was no slavery in Australia’

Facebook removed the post and image for violating its policy against nudity, although no genitals are visible, and restricted the user’s account. The Guardian Australia contacted Facebook to ask whether the decision was made in error and, the following day, Facebook restored the post and apologized to the user, explaining that the takedown was a false positive from its automated nudity filter. At the same time, however, Facebook continued to block posts that included The Guardian’s news story about the incident, which featured the same photo, and placed 30-day suspensions on some users who attempted to share it. Facebook’s community standards report shows that in the first three months of 2020, 39.5 million pieces of content were removed for nudity or sexual activity; over 99% of those takedowns were automated, 2.5 million appeals were filed, and 613,000 of the takedowns were reversed.

Decisions to be made by Facebook:

  • Can nudity filters be improved to result in fewer false-positives, and/or is more human review required?
  • For appeals of automated takedowns, what is an adequate review and response time?
  • Should automated nudity filters be applied to the sharing of content from major journalistic sources such as The Guardian?
  • Should questions about content takedowns from major news organizations be prioritized over those from regular users?
  • Should 30-day suspensions and similar account restrictions be manually reviewed only if the user files an appeal?

Questions and policy implications to consider:

  • Should automated filter systems be able to trigger account suspensions and restrictions without human review?
  • Should content that has been restored in one instance be exempted from takedown, or flagged for automatic review, when it is shared again in future in different contexts?
  • How quickly can erroneous takedowns be reviewed and reversed, and is this sufficient when dealing with current, rapidly-developing political conversations?
  • Should nudity policies include exemptions for historical material, even when such material does include visible genitals, such as occurred in a related 2016 controversy over a Vietnam War photo?
    • Should these policies take into account the source of the content?
    • Should these policies take into account the associated messaging?

Resolution: Facebook’s restoration of the original post was undermined by its simultaneous blocking of The Guardian’s news reporting on the issue. After receiving dozens of reports from its readers that they were blocked from sharing the article and in some cases suspended for trying, The Guardian reached out to Facebook again and, by Monday, June 15, 2020, users were able to share the article without restriction. The difference in response times between the original incident and the blocking of posts is possibly attributable to the fact that the latter came to the fore on a weekend, but this meant that critical reporting on an unfolding political issue was blocked for several days while the subject was being widely discussed online.

Photo Credit (for first photo):
State Library of Western Australia
[Screenshot is taken directly from a Twitter embed]

4 Comments »

Techdirt Podcast Episode 250: Modeling The Pandemic

from the different-approaches dept

by Leigh Beadon - August 5th @ 1:30pm

As the coronavirus pandemic continues, nobody really knows what's going to happen — especially if kids start going back to school. Statistical models of the possibilities abound, but this week we're joined by some people who are taking a different approach: John Cordier and Don Burke are the founders of Epistemix, which is using a new agent-based modeling approach to figure out what the future of the pandemic might look like.

Follow the Techdirt Podcast on Soundcloud, subscribe via iTunes or Google Play, or grab the RSS feed. You can also keep up with all the latest episodes right here on Techdirt.

3 Comments »


Judge Hits District Attorney Who Issued Fake Subpoenas With A $50,000 Penalty For Blowing Off Records Requests

from the I-guess-laws-are-something-only-little-people-have-to-follow dept

by Tim Cushing - August 5th @ 12:19pm

Orleans Parish District Attorney Leon Cannizzaro continues to get himself in legal hot water. Back in 2017, New Orleans journalistic outlet The Lens uncovered his office's use of fake subpoenas to coerce witnesses and crime victims into showing up to provide testimony and make statements.

The documents weren't real. They had never been approved by a judge. But they still had the same threat of fines or imprisonment printed on them. Just like the real ones. But these threats were also fake -- no judge had given the office permission to lock these witnesses/victims up.

Once this practice was exposed, the lawsuits began. The DA's office was sued multiple times by multiple plaintiffs. One suit -- filed by the MacArthur Justice Center -- demanded copies of every bogus subpoena issued by the DA's office. Another -- filed by the ACLU -- sought the names of every DA's office attorney who'd signed or sent one of these bogus subpoenas.

Yet another lawsuit targeted the DA's office and the DA directly for violating the law and citizens' rights by issuing fake subpoenas. That one is still pending but DA Cannizzaro and his attorneys were denied immunity by the Fifth Circuit Court of Appeals, making it far more likely someone will be held personally responsible for cranking out fake legal paperwork.

The MacArthur Center lawsuit continues. And it's more bad news for the DA, which has spent nearly a half-decade dodging the Center's public records requests.

An Orleans Parish Civil District Court judge has issued a $51,000 judgment against District Attorney Leon Cannizzaro for his office’s failure to turn over bogus subpoenas under a public-records request filed two years before the practice was exposed by the Lens.

Judge Ethel Julien said in a Monday ruling that Cannizzaro acted “arbitrarily and capriciously” when he failed to fork over documents requested by an attorney for a nonprofit law firm who was probing the practice in 2015.

Cannizzaro's defense of his stonewalling has changed over the years. His office first denied the request back in 2015, claiming it was too "burdensome" to hand over copies of witness subpoenas issued by prosecutors. Then The Lens broke the news about the fake subpoenas. The DA's office then claimed it didn't have to fulfill the request in full because the Center hadn't asked for any fake subpoenas.

In its defense, the District Attorney’s Office said the fake subpoenas weren’t covered by Washington’s request -- because they weren’t the genuine documents for which Washington specifically asked.

The DA is appealing the decision, of course. His office likely would have appealed it anyway, but this appeal might be more personal. The MacArthur Center says the $50,000 penalty issued by the judge may end up being applied against Cannizzaro himself, rather than his entire office, because the judge found the DA himself had "acted unreasonably."

The saga continues, with the DA and his office looking worse and worse with every new court ruling. True, it's sometimes difficult to secure cooperation from witnesses and crime victims. But the solution isn't to bypass the court system and threaten people with bogus legal documents. Prosecutors are supposed to help enforce laws, not break them.

14 Comments »

Moderate Globally, Impact Locally

from the monumental-balancing-act dept

by Michael Karanicolas - August 5th @ 10:44am

Every minute, more than 500 hours of video are uploaded to YouTube, 350,000 tweets are sent, and 510,000 comments are posted on Facebook.

Managing and curating this fire hose of content is an enormous task, and one which grants the platforms enormous power over the contours of online speech. This includes not just decisions around whether a particular post should be deleted, but also more minute and subtle interventions that determine its virality. From deciding how far to allow quack ideas about COVID-19 to take root, to the degree of flexibility that is granted to the President of the United States to break the rules, content moderation raises difficult challenges that lie at the core of debates around freedom of expression.

But while plenty of ink has been spilled on the impact of social media on America’s democracy, these decisions can have an even greater impact around the world. This is particularly true in places where access to traditional media is limited, giving the platforms a virtual monopoly in shaping the public discourse. A platform which fails to take action against hate speech might find itself instrumental in triggering a local pogrom, or even genocide. A platform which acts too aggressively to remove suspected “terrorist propaganda” may find itself destroying evidence of war crimes.

Platforms’ power over the public discourse is partly the result of a conscious decision by global governments to outsource online moderation functions to these private sector actors. Around the world, governments are making increasingly aggressive demands for platforms to police content which they find objectionable. The targeted material can range from risqué photos of the King of Thailand, to material deemed to insult Turkey’s founding president. In some instances, these requests are grounded in local legal standards, placing platforms in the difficult position of having to decide how to enforce a law from Pakistan, for example, which would be manifestly unconstitutional in the United States.

In most instances, however, moderation decisions are not based on any legal standard at all, but on the platforms’ own privately drafted community guidelines, which are notoriously vague and difficult to understand. All of this leads to a critical lack of accountability in the mechanisms which govern freedom of expression online. And while the perceived opacity, inconsistency and hypocrisy of online content moderation structures may seem frustrating to Americans, for users in the developing world it is vastly worse.

Nearly all of the biggest platforms are based in the United States. This means not only that their decision-makers are more accessible and receptive to their American user-base than they are to frustrated netizens in Myanmar or Uganda, but also that their global policies are still heavily influenced by American cultural norms, particularly the First Amendment.

Even though the biggest platforms have made efforts to globalize their operations, there is still a massive imbalance in the ability of journalists, human rights activists, and other vulnerable communities to get through to the U.S.-based staff who decide what they can and cannot say. When platforms do branch out globally, they tend to recruit staff who are connected to existing power structures, rather than those who depend on the platforms as a lifeline away from repressive restrictions on speech.

For example, the pressure to crack down on “terrorist content” inevitably leads to collateral damage against journalism or legitimate political speech, particularly in the Arab world. In setting this calculus, governments and ex-government officials are vastly more likely to have a seat at the table than journalists or human rights activists. Likewise, the Israeli government has an easier time communicating its wants and needs to Facebook than, say, Palestinian journalists and NGOs.

None of this is meant to minimize the scope and scale of the challenge that the platforms face. It is not easy to develop and enforce content policies which account for the wildly different needs of their global user base. Platforms generally aim to provide everyone with an approximately identical experience, including similar expectations with regard to the boundaries of permitted speech. There is a clear tension between this goal and the conflicting legal, cultural and moral standards in force across the many countries where they operate.

But the importance and weight of these decisions demand that platforms get this balancing act right, and develop and enforce policies which adequately reflect their role at the heart of political debates from Russia to South Africa. Yet even as the platforms have grown and spread around the world, the center of gravity of these debates remains in D.C. and San Francisco.

This is the first in a series of articles developed by the Wikimedia/Yale Law School Initiative on Intermediaries and Information appearing here at Techdirt Policy Greenhouse and elsewhere around the internet—intended to bridge the divide between the ongoing policy debates around content moderation, and the people who are most impacted by them, particularly across the global south. The authors are academics, civil society activists and journalists whose work lies on the sharp edge of content decisions. In asking for their contributions, we offered them a relatively free hand to prioritize the issues they saw as the most serious and important with regard to content moderation, and asked them to point to areas where improvement was needed, particularly with regard to the moderation process, community engagement, and transparency.

The issues that they flag include a common frustration with the distant and opaque nature of platforms’ decision-making processes, a desire for platforms to work towards a better understanding of the local socio-cultural dynamics underlying online discourse, and a feeling that platforms’ approach to moderation often does not reflect the importance of their role in facilitating the exercise of core human rights. Although the different voices each offer a unique perspective, they paint a common picture of how platforms’ decision making impacts their lives, and of the need to do better, in line with the power that platforms have in defining the contours of global speech.

Ultimately, our hope with this project is to shed light on the impacts of platforms’ decisions around the world, and provide guidance on how social media platforms might do a better job of developing and applying moderation structures which reflect the needs and values of their diverse global users.

Michael Karanicolas is a Resident Fellow at Yale Law School, where he leads the Wikimedia Initiative on Intermediaries and Information as part of the Information Society Project. You can find him on Twitter at @M_Karanicolas.

13 Comments »

Daily Deal: The Ultimate Leadership And Stress Management Bundle

from the good-deals-on-cool-stuff dept

by Daily Deal - August 5th @ 10:39am

The Ultimate Leadership and Stress Management Bundle has 9 courses to help you develop the tools you need to lead and empower your team. Courses focus on interpersonal skills, remote team management, time management and stress management. It's on sale for $40.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.

Comment »

Welcome To The Techdirt Tech Policy Greenhouse: Content Moderation Edition

from the stop,-listen,-and-learn dept

by Karl Bode - August 5th @ 9:09am

In the early days of the internet, there was no shortage of predictions insisting the emerging technology would be a bold new frontier of transformative change, ushering forth a golden era of connectivity free from the pesky befuddlement of incompetent government leaders, bad actors, and malicious overlords. This new frontier, we were told, would culminate in a fairer and more humane planet, unshackled from the petty hassles of the brick and mortar world, extracting us from our worst impulses as we marched, collectively, toward a better and more ethical future.

Technological innovation, it would seem, was going to fix everything.

This optimism certainly wasn't unwarranted. For those of us who cut our teeth on the advent of the internet (I spent much of my own youth on an Apple IIe at 300 baud, enamored with early bulletin board systems), the capacity for revolutionary change was obvious. It still is. But while there's certainly an endless list of examples showcasing the internet's incredible potential for positive, transformative cultural change and innovation, the last decade has witnessed a clear reckoning for those who seemingly believed the lesser angels of our nature wouldn't come along for the ride.

Internet corporations so large, or so fused to the government itself, that they laugh off the intervention of world governments. Foreign and domestic propaganda efforts, often working in concert, geared toward sowing discord and division. Disinformation at scale so dangerous it helps spur genocide. Bogus missives so potent they can impact elections and the democratic process itself. Trolls; swatting; deep fakes; racist subreddits; live streamed mass shootings; Pinterest child porn; gamergate; millions getting dangerous health information from unqualified nitwits on YouTube.

Many of these problems aren't new at their core. In fact, in many instances, they're as old as humanity itself. But they have mutated into dangerous new variants courtesy of technology, scale, naivete, and apathy. There's simply no escaping the fact that we could have done a better job predicting their evolutionary impact, and establishing systems of oversight, transparency, and accountability that could have dulled many of their sharpest edges.

As with the privacy edition of the Greenhouse, there are no easy answers here. Moderation at scale is utterly formidable. Doing it well at scale may be impossible. Every last policy decision comes with trade-offs and a myriad of unforeseen consequences that need to be adequately understood before rushing face-first into the fray. As the Section 230 debate makes abundantly clear, there's no shortage of bad faith or unworkable ideas that hold the potential to create far more problems than they profess to solve. Avoiding these pitfalls will require stopping, listening, and understanding one another -- American cultural anomalies, to be sure.

We're hopeful that the insights presented here from those on the front lines of the content moderation debate will help inform policy makers, the public, and experts alike. And we're hopeful the pieces make some small contribution to the foundation of a better, kinder, more equitable internet more in line with our original good intentions. Techdirt Greenhouse is a conversation, so if you've got expertise in the content moderation arena, or see pieces you'd like to respond to, please feel free to reach out.

Comment »

Dish Buys Ting Mobile To Disrupt Wireless, But Questions Remain

from the steep-uphill-climb dept

by Karl Bode - August 5th @ 6:07am

We've noted repeatedly that not only did the Trump FCC and DOJ rubber stamp the controversial T-Mobile and Sprint merger, they willfully ignored data showing the deal would result in higher prices, lower overall sector pay, fewer jobs, and less overall competition. As most objective antitrust and telecom experts predicted, the ink was barely dry on the deal before the pink slips started to arrive. The higher rates will still likely take a few more years to materialize as the remaining three industry players (T-Mobile, AT&T, and Verizon) perfect their ability to pretend to compete on price without actually doing so.

Over at the DOJ, top "antitrust enforcer" Makan Delrahim not only ignored hard data and critics of the deal, he actively helped guide T-Mobile executives to deal completion (if you're unaware, folks tasked with leading the government's antitrust enforcement efforts most assuredly should not be doing that).

To try and justify this grotesque regulatory capture, the DOJ came up with a bad idea: it would require T-Mobile to offload some spectrum and its Boost Mobile prepaid brand to Dish Network, which would then, theoretically, try to build a replacement carrier for Sprint over a period of seven years. For much of that time Dish will simply operate as a glorified MVNO (mobile virtual network operator) on T-Mobile's network and be subject to T-Mobile's whims.

The problem: Dish has a long history of hoarding valuable spectrum and promising to build a wireless network and then, you know, not doing that (just ask pre-merger T-Mobile). The other problem: shepherding such a deal to completion requires the current FCC (rabidly proud of "hands off," "light touch" regulation) to aggressively nanny this deal to completion, something that simply isn't in Ajit Pai's ideological nature. The remaining three players in the space (T-Mobile, AT&T, Verizon) have every motivation to try and scuttle the creation of this fourth competitor to avoid having to actually (gasp) compete on price.

Throughout, there have been questions about just how serious Dish is. Again, the company has a long history of buying up valuable spectrum and then doing absolutely nothing with it. Dish's spectrum holdings are extremely valuable, and critics have long wondered if the company is just stringing feckless U.S. regulators along until it can sell its spectrum at a steep premium.

Whether Dish is serious still isn't really a settled question, but the company continues to give every impression it may genuinely want to disrupt wireless as a survival strategy in the wake of its struggling traditional TV business. That manifested this week in the acquisition of Tucows' Ting, a small MVNO that had been making slow inroads as a minor player in the wireless space. In a blog post, Ting insists that nothing will really change at the small operation now that it has been acquired by a major corporation engaged in (hopefully) a massive disruption play:

"DISH enters the mobile market with a well-established, well-loved brand in Ting Mobile, a wonderful customer base in you and a proven platform on which to build its mobile service. It also gets a strong, smart partner (if we do say so ourselves) to support its mobile business moving forward. As for DISH’s big plans in mobile, much has been written on that topic. We’re happy to be a part of these plans."

From the sounds of things, this isn't a full acquisition of all Ting assets (Ting's fiber efforts will not be part of the deal). Users in the comments of the blog post were skeptical of the claim that nothing will change now that a small, consumer-focused upstart has been sold to a giant satellite TV company with a long history of obnoxious executive leadership:

"I will say I'm disappointed and incredibly wary. Ting is a brand I have high confidence in. Dish is a brand I have zero confidence in. "Nothing changes today." But changes will come. Sadly, I will not be surprised if I find myself shopping for a new provider once they start."

Maybe this all ends with Dish Network shifting from the dying satellite TV sector and becoming a major rival to AT&T and Verizon, but I remain wary. AT&T and Verizon play dirty pool in the DC lobbying realm, and both will do absolutely everything in their power to disrupt the creation of a viable fourth replacement price competitor. And if Trump is re-elected, his "light touch" (read: utterly apathetic to all consumer issues, competitive problems, and price gouging) FCC will continue to lack the backbone or ideological motivation to hold any of these companies seriously accountable should their promises wind up being little more than hot air.

Pre-merger promises in the U.S. telecom sector simply don't have a great track record, and I suspect this was just a regulatory stage play by the Barr DOJ to help justify apathy toward reduced competition, one that ends with Dish profitably cashing out of its spectrum holdings a few years from now. And while it's certainly possible Dish can become a major replacement fourth competitor for Sprint, it's the sort of thing you should probably believe only once you've seen it accomplished.

5 Comments »

Turkey Passes New Internet Censorship Law, Cites Germany's Awful 'Hate Speech' Law As Its Inspiration

from the how-cosmopolitan dept

by Tim Cushing - August 5th @ 3:04am

Turkey's president, Recep "Gollum" Erdogan, continues to use legislation to silence everyone who might possibly criticize or mock him. This has been an ongoing process, one that keeps getting worse with every iteration. A failed coup didn't help calm things down in Turkey, which is apparently hoping to pass China and take the top spot on the "journalists jailed" chart.

The latest law has a supposedly noble goal, but there's nothing noble about the propelling force behind it. The EFF reports another law giving the government even more censorship powers has been passed, thanks to Erdogan's inability to handle criticism.

[A] new law, passed by the Turkish Parliament on the 29th of July, introduces sweeping new powers and takes the country another giant step towards further censoring speech online. The law was ushered through parliament quickly and without allowing for opposition or stakeholder inputs and aims for complete control over social media platforms and the speech they host. The bill was introduced after a series of allegedly insulting tweets aimed at President Erdogan’s daughter and son-in-law and ostensibly aims to eradicate hate speech and harassment online.

So, it obviously isn't there to eradicate all hate speech and harassment. It's there to eradicate hate speech and harassment targeting Erdogan and other members of the government. A law like this being implemented by this government -- one with a long history of silencing/arresting/jailing critics -- will only be used to target citizens who aren't thrilled with their authoritarian "representatives."

Of course, the Turkish government won't bear the expense of keeping the country's internet free of Erdogan-bashing. That will rest on social media platforms. Once served with an order to remove content that "violates personal rights" and/or the "privacy of personal life," platforms will have 48 hours to take it down. If they don't, they'll face fines and -- in an unprecedented move -- the throttling of their bandwidth by up to 90% via local internet service providers.

To better facilitate censorship of Erdogan-related criticism, social media platforms will be forced to establish a local presence to expedite takedowns.

Once ratified by President Erdogan, the law would mandate social media platforms with more than two million daily users to appoint a local representative in Turkey…

The EFF's report highlights another disturbing aspect of the new law: it was inspired by legislation in countries that respect personal freedom and expression far more than Turkey has under Erdogan.

When introducing the new law, Turkish lawmakers explicitly referred to the controversial German NetzDG law and a similar initiative in France as a positive example.

Germany's "hate speech" law has been a solid generator of collateral damage since its inception. German lawmakers may believe they've ushered in a new era of online enlightenment with the law, but it's inspired a number of censorial governments to create their own versions and point to Germany when anyone asks why they're silencing dissent and criticism. The EFF says thirteen countries -- including Venezuela, Malaysia, Russia, and the Philippines -- have cloned NetzDG to better serve the continued restriction of their citizens' free speech rights.

And while Germany's law has effectively killed satire and chilled speech, at least it contains some limited restraints on the government via the court system. In Turkey (and other countries run by authoritarians), these checks and balances don't exist. Turkey's adoption of German legal principles takes the bad parts of the law and makes them even worse.

7 Comments »


Visit Techdirt for today's stories.

Forward to a Friend
 
 
  • This mailing list is a public mailing list - anyone may join or leave, at any time.
  • This mailing list is announce-only.

Techdirt's original daily email. Once a day, Techdirt will email the full-length version of the previous day's stories from Techdirt.com (based on Pacific time).

Privacy Policy:

Floor64 will not share your email address with third parties.

Go back to Techdirt