Stories from Wednesday, October 28th, 2020
One Restaurant Sends Cease And Desist To Another Over The Word 'Juicy'
from the nah-dawg dept
by Timothy Geigner - October 28th @ 8:21pm
If it seems like there are more stupid trademark battles per capita fought in the restaurant industry, it's not because you're crazy. It's very much a thing. Whether it's Taco John's wanting to own "Taco Tuesday", McDonald's insisting only it can call a fish sandwich a "filet o' fish", or two Brazilian restaurants fighting over the rights to use the image of a fire in their logos, the common theme you should notice is that these battles are all over things that are descriptive or generic. And, yet, these fights rage on.
Take, for instance, a burger joint in Texas sending a cease and desist notice to another burger joint in Texas for daring to use the word "juicy."
Longhorn Cafe LLC sent a cease-and-desist letter to local restaurateur Andrew Weissman, owner of Mr. Juicy restaurants, demanding he stop using that name for his growing burger spots.
The letter, which Weissman posted to his Instagram yesterday, was sent on Longhorn Cafe’s behalf by lawyer John Cave of Gunn, Lee and Cave PC. It demands that Weissman stop using the name “Mr. Juicy” at the McCullough and Hildebrand locations, or any other restaurants. The letter also demands that Weissman remove Mr. Juicy from all internet listing sites such as Yelp and food delivery sites such as UberEats. It gave Weissman 10 days to respond.
While the letter itself doesn't spell out which trademarks Longhorn Cafe holds that would make any of this trademark infringement, the restaurant's menu does include an item called the "Original Big Juicy" burger. That truly seems to be what all of this is about.
Which is stupid. We'll start with the fact that "juicy" is a descriptive term, particularly in the restaurant industry. Even if its use in the name of the restaurant is not describing a specific product, it's still descriptive generally. When you add to all of that the simple fact that the term "juicy," when it comes to food, is laughably generic, then the validity of this whole dispute goes right out the window.
To be clear, Longhorn Cafe's actual trademark, "The Original Big Juicy", is valid: that whole term seems worthy of trademark status. But it certainly doesn't mean that a competitor in the food business using the word "juicy" is infringing upon that trademark.
Weissman said it is unfortunate that Longhorn Cafe didn’t reach out to him to work something out before sending the letter. Nevertheless, he intends to fight it, which he acknowledges will cost money.
Needless money from a burger joint that probably can't afford it. Yay, trademark bullying!
from the context-matters dept
by Copia Institute - October 28th @ 3:30pm
Summary: In almost every country in which it offers its service, Facebook has been asked -- sometimes via direct regulation -- to limit the spread of "terrorist" content.
But moderating this content has proven difficult. It appears the more aggressively Facebook approaches the problem, the more collateral damage it causes to journalists, activists, and others studying and reporting on terrorist activity.
Because documenting and reporting on terrorist activity necessitates posting content considered to be "extremist," journalists and activists are being swept up in Facebook's attempts to purge its website of content considered to be a violation of its terms of service, if not actually illegal.
The same thing happened in another country frequently targeted by terrorist attacks.
In the space of one day, more than 50 Palestinian journalists and activists had their profile pages deleted by Facebook, alongside a notification saying their pages had been deactivated for "not following our Community Standards."
"We have already reviewed this decision and it can't be reversed," the message continued, prompting users to read more about Facebook's Community Standards.
There appears to be no easy solution to Facebook's over-moderation of terrorist content. With algorithms doing most of the work, it's left up to human moderators to judge the context of the posts to see if they're glorifying terrorists or simply providing information about terrorist activities.
Decisions to be made by Facebook:
Facebook's ongoing efforts with the Global Internet Forum to Counter Terrorism (GIFCT) probably aren't going to limit the collateral damage to activists and journalists. Hashes of content designated "extremist" are uploaded to GIFCT's database, making it easier for algorithmic moderation to detect and remove unwanted content. But utilizing hashes and automatic moderation won't solve the problem facing Facebook and others: the moderation of extremist content uploaded by extremists and similar content uploaded by users who are reporting on extremist activity. The company continues to address the issue, but it seems likely this collateral damage will continue until more nuanced moderation options are created and put in place.
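The hash-database approach described above can be illustrated with a deliberately simplified sketch. This is illustrative only: the real GIFCT database uses perceptual hashes (such as PDQ), which also match slightly altered copies of media, whereas the plain cryptographic hash below only matches exact duplicates, and the function and variable names here are hypothetical.

```python
import hashlib

# Hypothetical shared blocklist: hashes of media already designated "extremist".
shared_hash_db = {
    hashlib.sha256(b"known extremist video bytes").hexdigest(),
}

def is_flagged(upload_bytes: bytes) -> bool:
    """Return True if the upload matches a hash in the shared database."""
    return hashlib.sha256(upload_bytes).hexdigest() in shared_hash_db

# The core problem described above: identical bytes hash identically whether
# they were uploaded by an extremist glorifying an attack or by a journalist
# documenting one, so hash matching alone cannot tell the two apart.
print(is_flagged(b"known extremist video bytes"))   # True for both uploaders
print(is_flagged(b"original commentary footage"))   # False
```

This is exactly why the nuance has to come from somewhere outside the matching step: the hash lookup carries no information about who posted the content or why.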
Another Section 230 Reform Bill: Dangerous Algorithms Bill Threatens Speech
from the another-problem dept
by Will Duffield - October 28th @ 1:45pm
Representatives Malinowski and Eshoo have introduced a Section 230 amendment called the “Protecting Americans from Dangerous Algorithms Act” (PADAA). The title is something of a misnomer. The bill does not address any danger inherent to algorithms but instead seeks to prevent them from being used to share extreme speech.
Section 230 of the Communications Act prevents providers of an interactive computer service, such as social media platforms, from being treated as the publisher or speaker of user-submitted content, while leaving them free to govern their services as they see fit.
The PADAA would modify Section 230 to treat platforms as the speakers of algorithmically selected user speech, in relation to suits brought under 42 U.S.C. 1985 and the Anti-Terrorism Act. If platforms use an “algorithm, model, or computational process to rank, order, promote, recommend, [or] amplify” user provided content, the bill would remove 230’s protection in suits seeking to hold platforms responsible for acts of terrorism or failures to prevent violations of civil rights.
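To see how sweeping the quoted trigger language is, consider a hypothetical feed-curation sketch (the names and the scoring formula are invented for illustration, not drawn from any platform's actual code):

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str
    likes: int
    shares: int

def rank_feed(posts: list[Post]) -> list[Post]:
    # Even this one-line engagement sort is an "algorithm, model, or
    # computational process to rank, order, promote, recommend, [or]
    # amplify" user-provided content under the bill's text, so a covered
    # platform using it could lose Section 230 protection in the suits
    # the PADAA names.
    return sorted(posts, key=lambda p: p.likes + 2 * p.shares, reverse=True)

feed = rank_feed([
    Post("a", "first post", likes=5, shares=0),
    Post("b", "second post", likes=1, shares=4),
])
print([p.author for p in feed])  # ['b', 'a']
```

The point of the sketch is that the bill's trigger does not distinguish sophisticated recommendation models from trivial sorting: nearly any ordering of user content other than raw chronology would appear to qualify.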
These are not minor exceptions. A press release published by Rep. Malinowski’s office presents the bill as intended to reverse the US Court of Appeals for the 2nd Circuit’s ruling in Force v. Facebook, and endorses the recently filed McNeal v. Facebook, which seeks to hold Facebook liable for recent shootings in Kenosha, WI. These suits embrace a sweeping theory of liability that treats platforms’ provision of neutral tools as negligent.
Force v. Facebook concerned Facebook’s algorithmic “Suggested Friends” feature and its prioritization of content based on users’ past likes and interests. Victims of a Hamas terror attack sued Facebook under the Anti-Terrorism Act for allegedly providing material support to Hamas by connecting Hamas sympathizers to one another based on their shared interests and surfacing pro-Hamas content in its Newsfeed.
The 2nd Circuit found that Section 230 protected Facebook’s neutral processing of the likes and interests shared by its users. Plaintiffs appealed the ruling to the US Supreme Court, which declined to hear the case. The 2nd Circuit held that, although:
plaintiffs argue, in effect, that Facebook’s use of algorithms is outside the scope of publishing because the algorithms automate Facebook’s editorial decision‐making.
Facebook is nevertheless protected by Section 230 because its content-neutral processing of user information doesn’t render it the developer or author of user submissions.
The algorithms take the information provided by Facebook users and “match” it to other users—again, materially unaltered—based on objective factors applicable to any content, whether it concerns soccer, Picasso, or plumbers.
The court concludes by noting the radical break from precedent that plaintiffs’ claims demand. The PADAA would establish this sweeping shift as law.
Plaintiffs’ “matchmaking” argument would also deny immunity for the editorial decisions regarding third‐party content that interactive computer services have made since the early days of the Internet. The services have always decided, for example, where on their sites (or other digital property) particular third‐party content should reside and to whom it should be shown.
Explicitly opening platforms to lawsuits for algorithmically curated content would compel them to remove potentially extreme speech from algorithmically curated feeds. Algorithmic feeds are given center stage on most contemporary platforms – Twitter’s Home timeline, the Facebook Newsfeed, and TikTok’s “For You” page are all algorithmically curated. If social media platforms are exposed to liability for harms stemming from activity potentially inspired by speech in these prominent spaces, they will cleanse them of potentially extreme, though First Amendment protected, speech. This amounts to legislative censorship by fiat.
Exposing platforms to a general liability for speech implicated in terrorism and civil rights deprivation claims is more insidiously restrictive than specifically prohibiting certain content. In the face of a nebulous liability, contingent on the future actions of readers and listeners, platforms will tend to restrict speech on the margins.
The aspects of the bill specifically intended to address Force v. Facebook’s “Suggested Friends” claims, the imposition of liability for automated procedures that “recommend” any “group, account, or affiliation,” will be even more difficult to implement without inhibiting speech and political organizing in an opaque, idiosyncratic, and ultimately content-based manner.
After I attended a Second Amendment rally in Richmond early this year, Facebook suggested follow-up rallies, local town hall meetings, and militia musters. However, in order to avoid liability under the PADAA, Facebook wouldn’t have to simply refrain from serving me militia events. Instead, it would have to determine if the Second Amendment rally itself was likely to foster radicalism. In light of their newfound liability for users’ subsequent actions, would it be wise to connect participants or suggest similar events? Would all political rallies receive the same level of scrutiny? Some conservatives claim Facebook “incites violent war on ICE” by hosting event pages for anti-ICE protests. Should Facebook be held liable for Willem van Spronsen’s firebombing of ICE vehicles? Rep. Malinowski’s bill would require private firms to make far-reaching determinations about diverse political movements under a legal Sword of Damocles.
Spurring platforms to exclude organizations and interests tangentially linked to political violence or terrorism from their recommendation algorithm would have grave consequences for legitimate speech and organization. Extremists have found community and inspiration in everything from pro-life groups to collegiate Islamic societies. Budding eco-terrorists might be connected by shared interests in hiking, veganism, and conservation. Should Facebook avoid introducing people with such shared interests to one another?
The bill also fails to address real centers of radicalization. As I noted in my recent testimony to the House Commerce Committee, most radicalization occurs in small, private forums, such as private Discord and Telegram channels, or the White Nationalist bulletin board Iron March. The risk of radicalization, like rumor, disinformation, or emotional abuse, is inherent to private conversation. We accept these risks because the alternative, an omnipresent corrective authority, would foreclose the sense of privileged access necessary to the development of a self. However, this bill does not address private spaces. It only imposes liability on algorithmic content provision and matching, and wouldn’t apply to sites with fewer than 50 million monthly users. Imageboards such as 4chan or 8chan are too small to be covered, and users join private Discord groups via invite links, not algorithmic suggestion.
The Protecting Americans from Dangerous Algorithms Act attempts to prevent extremism by imposing liability on social media firms for algorithmically curated speech or social connections later implicated in extremist violence. Expecting platforms to predictively police algorithmically selected speech a la Minority Report is a fantasy. In practice, this liability will compel platforms to set broad, stringent rules for speech in algorithmically arranged forums. Legislation that would push radical, popularly disfavored, or simply illegible speech out of the public eye via private proxies raises unavoidable First Amendment concerns.
Community Broadband In The Age Of Covid
from the home-grown dept
by Christopher Mitchell and Ry Marcattilio-McCracken - October 28th @ 12:00pm
When it comes to the goal of ensuring all Americans have affordable and reliable Internet access, we are pretty much stalled. Sure, the FCC will make noise every year about our quest to bridge the digital divide, but it has focused solely on for-profit private solutions. And while there are many hundreds of good local companies making important local investments, the FCC has tended to throw the most money at the few extremely big ones (the same big companies found on the other side of the FCC's revolving door for many of its employees, whether staff or political appointees).
In response to the pandemic, companies like Charter and AT&T have been on their best behavior and done their best to extend connections more widely than they did in normal times. It was far from good enough, and culminated in AT&T asking for billions more in subsidies than it was already getting. Tens of millions of Americans are not particularly attractive to the big ISP monopolies, either because they live in more rural areas or low-income neighborhoods of cities.
We need better broadband policies. First, we have to learn from what is actually working. Second, we should never write another check to AT&T or to bankrupt disasters like Frontier and Windstream. Third, thoughtful, practical solutions moving ahead will need to address the broadband gap in whatever form it takes. As NDIA notes, broadband policy that focuses solely on the infrastructure challenge is structurally racist — it ignores the needs of millions of families of color in urban areas that face other obstacles to getting online.
Community networks are commonly assumed to be best at connecting those neglected families, and many are showing their worth during this Covid-19 pandemic. But Covid-19 is just the latest piece of evidence that these types of networks should play a much larger role in expanding competition more generally — whether they take the form of municipal networks, cooperative-owned networks, or take some other non-profit form.
For those living in areas covered by electric cooperatives, there are plenty of recent examples which can provide local leaders inspiration and offer lessons learned. For example, 15 out of 25 electric co-ops in Mississippi are currently pursuing fiber projects that will connect tens of thousands over the next handful of years. 35% of residents in the state lack basic broadband, but $65 million in CARES funding is bringing fiber to folks with nothing today.
Other electric cooperatives have also stepped up to the plate. OzarksGo, based in Arkansas, responded to the public health crisis by wiring fiber to Wi-Fi hotspots on parked school buses last spring to help community members with no access get online. The New Hampshire Electric Cooperative, after a rousing organizing campaign by local residents citing COVID, in part, saw its board vote unanimously to create a separate entity to pursue broadband after previously refusing to do so.
City-owned networks also provide an abundance of examples. North Carolina’s Greenlight network, operated by the city of Wilson, has developed several programs to connect those commonly left behind. This includes offering 40 Megabit per second (Mbps) symmetrical connections to families in public housing for $10/month, and a flexpay program to give anyone a pay-ahead option that is essential for families with bad credit or irregular income. Greenlight also accepts cash, essential for the unbanked. All of these programs have remained in place or expanded since last spring. When the pandemic hit, Greenlight also accelerated its lifeline approach, keeping people connected with a basic option for $10/month.
Chattanooga again made national news in July when it unveiled an initiative called HCS EdConnect to provide free 100Mbps symmetrical Internet access to every student in the Hamilton County School District on free or reduced lunch. That’s 17,700 homes and 32,000 students, and the initiative has pledged to keep those connections free for the next ten years. The $15 million project serves as a case study in decisive action, thoughtful municipal leadership, local investment, and fundraising by all involved: Hamilton County Public Schools, municipal network EPB Fiber, nonprofit The Enterprise Center, and city and county governments. The first large chunk of students has already been connected.
Covid-19 has amped up growth in community networks. In San Rafael, California, a coalition between local government, Marin County, and the nonprofit Canal Alliance came together initially in a bid to get students online for the upcoming school year, but the effort quickly turned into a free Wi-Fi network for the entirety of the Canal neighborhood. Residents there make up the backbone of the area’s service economy and have some of the worst Internet access options in what is otherwise among the wealthiest counties in the state. The city plans to keep it online and free indefinitely.
The same thing is happening in McAllen, Texas, with the city’s IT department spearheading an effort to put 5,000 access points around the city of 140,000 on water towers and power poles, with dozens of neighborhoods already blanketed in free Wi-Fi. Like in San Rafael, all residents of McAllen will enjoy the benefits of the public network forever, with the city committing to maintaining it with a regular budget appropriation from here out.
And over the summer in Champaign, Illinois, when it became clear that hundreds of District 4 students on the north side of town in Shadowwood Mobile Home Park were unable to attend classes because they lacked Internet access, the city partnered with local broadband provider i3 and wireless hardware manufacturer Mesh++ to solve the problem. Solar-powered Wi-Fi routers were installed on power and light poles to create a free, neighborhood-wide wireless network so that more than 250 students could log on and access course content and join live classes via Zoom. Champaign, too, plans for their network to be permanent, and is pursuing similar projects in apartment housing in other parts of town.
San Rafael, McAllen, and Champaign are by no means alone, with similar efforts unfolding in Cleveland, Ohio, Pittsburgh, Pennsylvania, San Antonio, Texas, and elsewhere. These should be celebrated, but we can still do better. The networks can be more ambitious - like Wilson and Chattanooga — and more numerous.
The Barriers to Community Broadband Today
First, 19 states have laws on the books which significantly restrict the ability of communities to build networks responsive to their needs, even though we know this results in slower broadband adoption. The HEROES Act would eliminate these barriers, but it remains stalled in the U.S. Senate. If Democrats take the Senate, they say they will wipe out these barriers, but the big monopolies are undoubtedly undermining that determination as you read this.
Second, the federal government should invest more money, more intelligently in the effort. While a good chunk of the first round of the USDA’s ReConnect Program grants went to cooperatives and other municipal networks working to expand Fiber-to-the-Home (FTTH) networks, it’s not enough. ReConnect has given out a little more than $800 million in funds, but the need is far greater and it is focused solely on areas where no broadband is physically available. Far more people are stuck in areas with networks that are too expensive for their income. The upcoming RDOF auction will give cooperatives a shot at billions in subsidies, but there is no program for the millions of low-income families that need affordable options.
Third, and perhaps most importantly, we see a lack of ambition, commitment, and imagination by local communities in solving the problem. AT&T demands more subsidies while withdrawing its services in rural areas, and Mississippi asks why nearly $300 million in subsidies didn’t improve access for most. But most local government officials still think AT&T or their state or the federal government will suddenly recognize decades of failure and swoop in to connect local businesses and residents.
Communities have to step up. Recognizing the value of local action alone isn’t enough. Elected officials and community stakeholders will have to confront the above obstacles and an array of others as well. If the states or federal government actually offer help, it will most likely be financial aid to local efforts, not a magic button that doubles the rate of connectivity in the United States.
There are a wide variety of models, and the Covid-19 pandemic has provided renewed popular support for better networks. In some cases, local officials have recognized the need for, at the minimum, a local plan. But in most cases, local residents need to push their local government to investigate the options. The Michigan Moonshot effort offers both a playbook and plenty of background information, templates, and more. There is no better time to start organizing locally for better networks.
Christopher Mitchell is the Director of the Community Broadband Networks Initiative whose work focuses on telecommunications — helping communities ensure the networks upon which they depend are accountable to the community. He is a leading national expert on community broadband networks.
Ry Marcattilio-McCracken is a Senior Researcher with the Institute for Local Self-Reliance’s Community Broadband Networks Initiative, where he writes about municipal networks, cooperatives, and broadband policy around the United States.
Another Arrest Shows It's Pretty Much Everyone But Antifa Engaging In Anti-Government Violence
from the presidential-conspiracy-theory-suffers-another-setback dept
by Tim Cushing - October 28th @ 10:43am
The DOJ really wants to make El Presidente's antifa dreams come true. The anti-police brutality protests have been cast by the administration as a leftist conspiracy to… um… demand better policing and better police officers. In addition to sending federal officers to clamp down on unrest in "Democratic" cities, the FBI has been sending analysts to crack phones taken from protesters in hopes of finding some sort of antifa org chart the feds can use to dismantle this "group."
If you think it's weird a free world government would be obsessed with tracking down people fighting fascism, you're not alone. Seems like the time and effort would be better utilized to neutralize the threat posed by homegrown extremists, many of whom align themselves with white supremacist movements. But this is what this Administration is diverting resources to, even when available evidence suggests the antifa movement isn't filled with dangerous individuals.
More evidence suggests the government might want to focus on another loose assortment of anti-government individuals: the so-called "Boogaloo Bois." If antifa is a collective in the loosest definition of the word, the Boogaloo Bois are similarly unstructured. Small groups exist but there's no organizational head to bring down or nationwide structure to dismantle. While the president complains about "violent" BLM/antifa protesters, real violence is being perpetrated by actual anarchists Trump has never criticized publicly.
In the wake of protests following the May 25 killing of George Floyd, a member of the Boogaloo Bois opened fire on the Minneapolis Police Third Precinct with an AK-47-style gun and screamed “Justice for Floyd” as he ran away, according to a federal complaint made public Friday.
[...]
Ivan Harrison Hunter, a 26-year-old from Boerne, Texas, is charged with one count of interstate travel to incite a riot for his alleged role in ramping up violence during the protests in Minneapolis on May 27 and 28. According to charges, Hunter, wearing a skull mask and tactical gear, shot 13 rounds at the south Minneapolis police headquarters while people were inside. He also looted and helped set the building ablaze, according to the complaint, which was filed Monday under seal.
Hunter's public social media posts helped bring him down. So did posts from other members of the group Hunter associated himself with, including Steven Carrillo, who shot and killed a federal officer in Oakland, California and a sheriff's deputy in Santa Cruz. Hunter apparently traveled all the way from Texas to open fire on a police precinct and help set it on fire.
And, as if everything happening with protests and various self-invited interlopers wasn't confusing enough, this particular Boogaloo Bois unit managed to mix domestic and international terrorism into a completely incomprehensible blend.
Two members of the Boogaloo Bois, including one from Minnesota, have been indicted on federal charges of attempting to provide material support to Hamas, a designated foreign terrorist organization, the U.S. Justice Department announced Friday.
And this is apparently all it takes to talk a Boogaloo Boi into believing you work for a foreign terrorist organization.
In June, the FBI began receiving information about Teeter, Solomon and other Boogaloo Bois from a confidential source that the Bois believed to be a member of the terrorist organization Hamas. The source, a paid informant, had a Middle Eastern accent.
This isn't going to stop Trump and Bill Barr from continuing their hunt for an antifa kingpin. There's really no difference between the two, as far as Trump is concerned. Anti-THIS government is indistinguishable from anti-ALL government when you're THIS government. But one "group" tends to be composed of white guys with guns wearing Hawaiian shirts while the other is a very loose affiliation of what Trump considers to be "leftists." And it will always be the "leftists" that are considered more dangerous, even when it's actual anarchists killing cops.
Daily Deal: The Complete Coder Bundle
from the good-deals-on-cool-stuff dept
by Daily Deal - October 28th @ 10:38am
The Complete Coder Bundle has 6 courses to help you learn how to program. Courses cover C++, CSS, HTML, Java, JavaScript, and Python. It's on sale for $35.
Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
Zoom Shuts Down NYU Event To Discuss Whether Zoom Should Be Shutting Down Events Based On Content
from the content-moderation-inception dept
by Mike Masnick - October 28th @ 9:43am
Last month we wrote about Zoom blocking an online event by San Francisco State University because one of the speakers was Leila Khaled, a Palestinian activist/politician. 50 years ago she was involved in two airplane hijackings. As I noted in the post, this blockade was somewhat different than social media companies doing content moderation. Zoom is not hosting content, but rather just transmitting it, and thus is more akin to telecommunications infrastructure, and that raises significantly more questions about what it means when it starts reviewing the content of calls.
Indeed, because of this move, a series of online seminars were set up to discuss this very issue -- and they were done on Zoom. The company apparently got wind of one such event at NYU and refused to let it happen:
Today, Zoom unilaterally shut down a webinar hosted by the NYU chapter of the AAUP, and co-sponsored by several NYU departments and institutes. The webinar was scheduled to discuss the censorship, by Zoom and other big tech platforms, of an open classroom session last month at SFSU, featuring the Palestinian rights advocate Leila Khaled.
Of course, we recognize that it is an act of sick comedy to censor an event about censorship, but it raises serious questions about the capacity of a corporate, third-party vendor to decide what is acceptable academic speech and what is not.
I would argue that this was not, as is claimed, "censorship." There are literally dozens of other platforms that can be used for webinars these days, and many of them would probably be happy to host this event.
While I think that Zoom certainly has a legal right to exclude users it doesn't want, it still sets a worrying precedent that they're picking and choosing who can use the service based on what they might talk about during a call. I recognize that some will insist (perhaps in both directions!) that this kind of thing is no different than Facebook or Twitter or YouTube banning someone, but to me there remain fundamental differences in the type of service being provided (transmission of transitory bits, rather than long-term hosting of content). Separately, unlike social media platforms, you can participate in a Zoom call without getting a Zoom account. As such, this strikes me as a slippery slope that goes way beyond social media content moderation.
After the cancellation of the NYU event (in which Khaled was not speaking), Zoom put out a bland meaningless statement about its various terms of use, but refused to explain what policy was possibly violated by this academic seminar about Zoom's content moderation practices.
“Zoom is committed to supporting the open exchange of ideas and conversations and does not have any policy preventing users from criticizing Zoom,” a spokesperson for the company said. “Zoom does not monitor events and will only take action if we receive reports about possible violations of our Terms of Service, Acceptable Use Policy, and Community Standards. Similar to the event held by San Francisco State University, we determined that this event was in violation of one or more of these policies and let the host know that they were not permitted to use Zoom for this particular event.”
However, Zoom did not respond to questions about which specific policy was violated or whether other events have been shut down by the company.
As we've been saying, content moderation questions can be different based on different types of services, and which layer of the infrastructure stack they exist in. I think Zoom is making a mistake here.
The Trump Swamp Fights Itself Over Multi-Billion Dollar No-Bid Spectrum Grab
from the ouroboros-of-corruption dept
by Karl Bode - October 28th @ 6:33am
What does it look like when Trump's swamp devours itself? Look no further than a battle between the Peter Thiel-backed Rivada Networks and incumbent telecom giants AT&T and Verizon.
Last week, anonymous "senior administration officials" told CNN that Rivada was lining up to grab potentially tens of billions of dollars in extremely valuable middle-band spectrum via a no-bid contract. The sources told the outlet that Rivada, which is financially and politically backed by Trump/GOP allies like Karl Rove, Newt Gingrich, and Thiel, is pushing to bypass the normal FCC approval process to gain access to a massive windfall:
"informed sources tell CNN that the White House is unquestionably pressuring the Pentagon to approve what would likely be, in the words of one senior administration official, "the biggest handoff of economic power to a single entity in history," and to do so without full examination of the impact on national security and without a competitive bidding process."
To be clear, Rivada's idea isn't necessarily bad. The company wants to share middle-band spectrum currently held by the Department of Defense to offer a wholesale 5G network. Rivada lost a bid to build the country's first emergency wireless network to AT&T, then sued (and lost). It has, quite correctly, been pointing out in recent years that the existing spectrum auction process tends to favor giants like AT&T and Verizon, something CEO Declan Ganley talked about again last week in a post at Morning Consult:
"The wireless industry is dominated by three big players. They buy up almost all the spectrum, they build networks where it suits them and don’t where it doesn’t. The FCC’s spectrum-auction process, which began as a great way to discover the highest and best use of a scarce resource, has been captured by the big carriers, and now serves merely to divide the spoils among the current winners. New entry into wireless via FCC auction is all but impossible."
While this is sometimes correct (smaller players still do occasionally nab decent swaths of spectrum at auction), Rivada's decision to use guys like Karl Rove to try and hotwire Trump corruption into a massive spectrum handout is possibly worse than the corruption it's complaining about. So on one side you've got Karl Rove, Peter Thiel, and Newt Gingrich, and on the other you've got telecom operators like AT&T and Verizon, and the lawmakers paid to love them -- companies that are no strangers to Trump handouts and are immensely politically powerful because they're effectively bone grafted to our intelligence apparatus.
To scuttle Rivada's chances, it appears that AT&T and Verizon lobbyists have been making the rounds among gullible and patriotic natsec lawmakers, trying to claim that Rivada's DOD spectrum sharing plan is some type of attempt to "nationalize" the nation's 5G networks (damn socialists!). While some dusty corners of the Trump administration did discuss the idea of a nationalized 5G network, it was never likely here in the U.S. And Rivada has argued (probably correctly) that the term has been used by incumbent lobbyists to undermine the company's efforts in DC and in the press:
"One challenge Rivada has faced in getting its story out is that the influence of big-money lobbyists is just the water in which Washingtonians swim. When 19 senators sign a letter against nationalization — a measure opposed by multibillion-dollar multinationals — in a matter of hours, and introduce legislation opposing it within days, that’s business as usual. No one can see the strings. But a story about an entrepreneurial company with a big idea is somehow easy to recast as something nefarious, given enough PR pros and money."
The problem, again, is that it's hard to portray yourself as the good guy in this equation when Peter Thiel, Karl Rove, and Newt Gingrich are your primary pitchmen in a quest to bypass existing processes -- including an open competitive bidding process that, while certainly skewed to favor deeper pockets, does include smaller competitors. It's just one of many examples of the Trump swamp fighting itself, despite nobody in the equation genuinely holding the moral high ground.
from the decades-in-jail-for-sock-puppet-construction dept
by Tim Cushing - October 28th @ 3:31am
The FBI is still creating terrorists -- finding loud-mouthed online randos to radicalize by hooking them up with undercover agents and informants seemingly far more interested in escalating things than defusing possibly volatile individuals.
The Ninth Circuit Court has, fortunately, decided to roll back a ridiculous sentencing enhancement added to another one of the FBI's homegrown extremists. The terrorism enhancement in this case was triggered by this: the defendant's opening of six social media accounts for alleged ISIS sympathizers.
The details of the case -- contained in the court's reversal [PDF] of the sentencing enhancement -- show Amer Alhaggagi was a bit of a troll. The son of Yemeni immigrants, Alhaggagi was born in California but spent a lot of his life traveling back and forth to Yemen to spend time with his mother. He was raised in a Muslim home but that upbringing didn't really make him a Muslim. He didn't seem to have much interest in adhering to the religion's rules and instead drifted towards the internet, where he developed a "sarcastic and antagonistic persona."
This is how things were going before the FBI got involved:
In 2016, at the age of 21, Alhaggagi began participating in chatrooms, and chatting on messaging apps like Telegram, which is known to be used by ISIS. He chatted both in Sunni group chats sympathetic to ISIS and Shia group chats that were anti-ISIS. He trolled users in both groups, attempting to start fights by claiming certain users were Shia if he was in a Sunni chatroom, or Sunni if he was in a Shia chatroom, to try to get other users to block them. He was expelled from chatrooms for inviting female users to chat, which was against the etiquette of these chatrooms, as participants in those chats followed the Islamic custom of gender segregation.
One can only imagine what would have happened to Alhaggagi if the FBI hadn't decided to step in. Probably something way less interesting than what happened when it did. The internet is full of trolls, even the parts of the internet most people don't access. But an FBI source happened to be hanging out in a chatroom when Alhaggagi attempted to stir up the crowd there with some extremist chatter.
In one Sunni chatroom, in late July 2016, Alhaggagi caught the attention of a confidential human source (CHS) for the FBI when he expressed interest in purchasing weapons. In chats with the CHS, Alhaggagi made many claims about his ability to procure weapons, explaining that he had friends in Las Vegas who would buy firearms and ship them to him via FedEx or UPS. Alhaggagi also made disturbing claims suggesting he had plans to carry out attacks against “10,000 ppl” in different parts of the Bay Area by detonating bombs in gay nightclubs in San Francisco, setting fire to residential areas of the Berkeley Hills, and lacing cocaine with the poison strychnine and distributing it on Halloween. He claimed to have ordered strychnine online using a fake credit card, of which he sent a screenshot to the CHS, bragging that he engaged in identity theft and had his own device-making equipment to make fake credit cards.
In isolation, this sounds horrifying. Given the context of Alhaggagi's internet trolling, it was just more bullshit. As the court notes, his online persona was not especially well-crafted and was prone to delivering contradictory claims during the same chatroom conversations.
One minute his persona was selling weapons, the next he claimed to need them, all in the same chatroom. His persona allegedly had associates in Mexican cartels who could get him grenades, bazookas, and RPGs, offered to join a user in Brazil to attack the Olympics, and was considering conducting attacks in Dubai.
This isolated braggadocio led to 24-hour surveillance by the feds and the insertion of an undercover agent into Alhaggagi's life. The FBI's confidential informant pushed Alhaggagi to meet with the undercover agent and things kind of took off from there. The pair discussed bomb-making and visited a storage space to supposedly be used to store bomb stuff. Alhaggagi said other ridiculous things, like detailing a plan to become a cop so he could obtain more weapons. On the third visit, the FBI stocked the storage space with fake explosives. On the drive back, Alhaggagi pointed out good locations for bombs.
Then he stopped meeting with the undercover agent. Alhaggagi claimed seeing the explosives in the storage space made it clear he'd taken this too far. From that point on, he never contacted the undercover agent again.
The FBI searched his home in November of 2016, finding evidence showing Alhaggagi was back to trolling, rolling into ISIS-related chatrooms to say bad things about the US government. It also found that he had opened up Twitter and Gmail accounts for some people in the chatroom, who were alleged ISIS sympathizers. Some of these accounts were later linked to ISIS's propaganda organization. Agents also found online searches related to bomb-making, strychnine, and flammable devices/substances.
The "material support" alleged here was the opening of social media and email accounts. But the court doesn't find the government's arguments persuasive. The government had the burden of proving Alhaggagi knew these accounts would be used to "intimidate or coerce" the US government or "retaliate" against it via violent acts.
The lower court simply shrugged and said there was no other possible use for the accounts, given that they were created for people in a pro-ISIS chatroom. But that shrug of indifference added years to Alhaggagi's sentence. The probation office's pre-sentencing report suggested 48 months. The government countered with its terrorism enhancement, asking for a 396-month sentence -- a difference of 29 years. The court settled on 188 months with 10 years of supervised release.
The Appeals Court reminds the government that the enhancement doesn't automatically apply to material support for terrorism charges. It also notes the government fell short on the burden of proof.
The district court’s conclusion rests on the erroneous assumption that in opening the social media accounts for ISIS, Alhaggagi necessarily understood the purpose of the accounts was “to bolster support for ISIS’s terrorist attacks on government and to recruit adherents.” Unlike conspiring to bomb a federal facility, planning to blow up electrical sites, attempting to bomb a bridge, or firebombing a courthouse—all of which have triggered the enhancement— opening a social media account does not inherently or unequivocally constitute conduct motivated to “affect or influence” a “government by intimidation or coercion.” 18 U.S.C. § 2332b(g)(5)(A). In other words, one can open a social media account for a terrorist organization without knowing how that account will be used; whereas it is difficult to imagine someone bombing a government building without knowing that bombing would influence or affect government conduct.
The lower court stretched the definition past the point of credulity to add more than a decade to the defendant's sentence.
The district court’s “cause and effect” reasoning is insufficient because the cause—opening social media accounts—and the effect—influencing government conduct by intimidation or coercion—are much too attenuated to warrant the automatic triggering of the enhancement. Instead, to properly apply the enhancement, the district court had to determine that Alhaggagi knew the accounts were to be used to intimidate or coerce government conduct.
The FBI found a malcontent wandering the internet and when the troll refused to keep participating in the "blow shit up" charade, it raided and arrested him for material support. Then, for the crime of opening social media accounts, the government wanted to lock him up for more years than he'd been alive. And, for all the effort -- the 24-hour surveillance, the undercover agent, and the confidential informant -- the FBI came no closer to heading off a terrorist attack or taking down a terrorist organization.