We begin today with Perry Stein and Rachel Weiner of The Washington Post reporting on a motion filed yesterday by Special Counsel Jack Smith to prevent “irrelevant disinformation” from being included in Trump’s defense arguments.
Special counsel Jack Smith filed on Wednesday what is known as a motion in limine, urging U.S. District Judge Tanya S. Chutkan to prohibit Trump from including certain arguments in his defense. Such filings are common in legal proceedings and aim to eliminate arguments at trial that prosecutors say are not supported by evidence or are irrelevant to the case, and could mislead jurors.[…]
Prosecutors have filed similar motions in many of the hundreds of trials of people charged with storming the Capitol on Jan. 6. In those cases, prosecutors have typically sought to prohibit defense attorneys from arguing that their clients were exercising their First Amendment rights when they broke into the Capitol or that the police — acting as part of some sort of conspiracy — allowed the riot to happen.
The federal judges overseeing the cases at U.S. District Court in D.C. generally agree to those requests, unless a defendant testifies that he or she personally saw police allow rioters into the building. As Chutkan put it when presiding over the trial of Antony Vo in September, a defendant “can’t speculate as to what other people might have been doing. … He can simply say what he saw and what his observations led him to believe.”
Since most of the punditry (other than breaking news) is devoted to retrospectives on 2023, the APR will look at some of the year-end stories about one of the greatest problems world society faced in 2023: misinformation and disinformation.
We’ll continue across the fold with Marcy Wheeler.
Marcy Wheeler of Empty Wheel identifies ten categories of misinformation that Smith asks to be excluded from the trial and focuses on one related to speculation about Trump’s state of mind at and around the time of Jan. 6, 2021.
This is the kind of testimony that Trump-friendly witnesses — even Mike Pence!! — have often offered in the press. And Trump could call a long list of people who’d be happy to claim that Trump believed and still believes that the election was stolen.
But as the filing notes, that would be inadmissible testimony for several reasons. It would also be a ploy to help Trump avoid taking the stand himself.
That said, there are several quips in the filing, which was submitted by Molly Gaston (who has had a role in earlier Trump-related prosecutions), that are more salient observations about Trump.
For example, in one place, the government argues that Trump should not be able to argue (as he has in pretrial motions) that it’s not his fault if his rubes fell for his lies.
Julia Mueller and Jared Gans write for The Hill about the dangers of the addition of AI chatbots to an already polarized 2024 electorate.
AI — advanced tech that can generate text, images and audio, and even build deepfake videos — could fuel misinformation in an already polarized political landscape and further erode voter confidence in the country’s election system. […]
Experts are sounding alarms that AI chatbots could generate misleading information for voters if they use it to get info on ballots, calendars or polling places — and also that AI could be used more nefariously, to create and disseminate misinformation and disinformation against certain candidates or issues.
“I think it could get pretty dark,” said Lisa Bryant, chair of the Department of Political Science at California State University, Fresno, and an expert with the MIT Election Lab.
Polling shows the concern about AI doesn’t just come from academics: Americans appear increasingly worried about how the tech could confuse or complicate things during the already contentious 2024 cycle.
David Gilbert of WIRED magazine shows that the fears of most Americans and election experts about AI and chatbots are well founded.
When WIRED asked the chatbot, initially called Bing Chat and recently renamed Microsoft Copilot, about polling locations for the 2024 US election, the bot referenced in-person voting by linking to an article about Russian president Vladimir Putin running for reelection next year. When asked about electoral candidates, it listed numerous GOP candidates who have already pulled out of the race.
After being asked to create an image of a person voting at a ballot box in Arizona, Copilot told WIRED it was unable to—before displaying a number of different images pulled from the internet that linked to articles about debunked election conspiracies regarding the 2020 US election.

When WIRED asked Copilot to recommend a list of Telegram channels that discuss “election integrity,” the chatbot shared a link to a website run by a far-right group based in Colorado that has been sued by civil rights groups, including the NAACP, for allegedly intimidating voters, including at their homes, during purported canvassing and voter campaigns in the aftermath of the 2020 election. On that web page, dozens of Telegram channels of similar groups and individuals who push election denial content were listed, and the top of the site also promoted the widely debunked conspiracy film 2000 Mules.

This isn’t an isolated issue. New research shared exclusively with WIRED alleges that Copilot’s election misinformation is systemic. Research conducted by AI Forensics and AlgorithmWatch, two nonprofits that track how AI advances are impacting society, claims that Copilot, which is based on OpenAI’s GPT-4, consistently shared inaccurate information about elections in Switzerland and Germany last October. “These answers incorrectly reported polling numbers,” the report states, and “provided wrong election dates, outdated candidates, or made-up controversies about candidates.”
Stacy Weiner writes for the Association of American Medical Colleges (AAMC) about the dangers of physicians who spread medical disinformation.
COVID-19 vaccines are ineffective and unsafe and may even cause infertility. Masks don’t provide any protection against the SARS-CoV-2 virus. Ivermectin, a medication generally used to deworm animals, is an effective treatment for COVID-19.
These statements are among the many types of misinformation disseminated by doctors on social media during the pandemic, according to recent research published in JAMA Network. […]
Hoping to help answer those questions, the Federation of State Medical Boards (FSMB), which represents 70 state medical and osteopathic licensing boards and is widely recognized as an authority on medical licensing and disciplinary issues, convened a group of lawyers, ethicists, and state medical board members for nearly a year of deliberations in 2021-2022. The result was a 12-page document with detailed recommendations around professional expectations regarding medical misinformation and disinformation.
The document declares that physicians “must use the best available scientific evidence” when advising patients or the public. “We had a lot of discussion before we settled on using ‘must,’” says FSMB President Humayun Chaudhry, DO.
Kaitlyn Tiffany of The Atlantic says that, thanks to Elon Musk’s ownership of and changes to Twitter/X, “alt-tech” social media platforms are now beginning to thrive.
Previously, the “alt-tech” ecosystem was a bit of a sideshow. It encompassed moderation-averse social-media sites that popped up in the Trump era and resembled popular services such as Twitter and Facebook; their creators typically resented that their views had been deplatformed elsewhere. Parler and especially Gab (which is run by a spiky Christian nationalist) were never going to be used by very many normal people—apart from their political content, they were junky-looking and covered in spam.
But now, alt-tech is emerging from within, Alien-style. Twitter’s decade of tinkering with content moderation in response to public pressure—adding line items to its policies, expanding its partnerships with civil-society organizations—is over. Now we have X, a rickety, reactionary platform with a skeleton crew behind it. Substack, which got its start by offering mainstream journalists lucrative profit-sharing arrangements, has embraced a Muskian set of free-speech principles: As Jonathan Katz reported for The Atlantic last month, the company’s leadership is unwilling to remove avowed Nazis from its platform. (In a statement published last week, Hamish McKenzie, one of Substack’s co-founders, said, “We don’t like Nazis either,” but he and his fellow executives are “committed to upholding and protecting freedom of expression, even when it hurts.”) The trajectory of both resembles that of Rumble, which started out as a YouTube alternative offering different monetization options for creators, then pulled itself far to the political fringes and has been very successful.
These transformations are more about culture than actual product changes. Musk has tinkered plenty with the features of Twitter/X in the past year, though he’s also talked about changing far more than he actually has. More notable, he’s brought back the accounts of conspiracy theorists, racists, and anti-Semites, and he got rid of Twitter’s policy against the use of a trans person’s deadname as a form of harassment. In a recent Rumble video, the white supremacists Richard Spencer and Nick Fuentes praised Musk’s management of the platform, saying that the “window has shifted noticeably on issues like white identity” during his tenure. And in support of anecdotal claims that hate speech rose after Musk’s acquisition of Twitter, a team of researchers has shown that this was actually the case. They observed a large spike right after the acquisition, and even after that spike had somewhat abated, hate speech still remained higher than pre-acquisition levels, “hinting at a new baseline level of hate speech post-Musk.”
Edan Ring of Haaretz writes about the exploitation of social media networks by the terrorist group Hamas and what it may mean for the future of warfare.
Psychological warfare and terror are not new phenomena. They were basic elements of conflicts and wars well before the era of digital communication. However, the advent of the internet and the emergence of commercial social networks have armed terrorist organizations with highly effective and convenient tools of destruction and manipulation, making it easier to terrorize those beyond the victims they have harmed physically by instilling fear and anxiety through posting videos, for example of their violence.

Media and communication technologies have been transformed into weapons capable of instigating terror and fear on a massive scale, pushing boundaries of space and time. The appalling attack on civilian homes and military bases near the Gaza Strip on October 7, and its subsequent reverberations across social media, serve as a stark illustration of this terrifying reality lurking in our mobile phones.
The inherent ability of every media platform user to both consume and produce news and information makes social media an especially lethal tool in wartime. […]
Unlike traditional media, where the broadcast and coverage of war zone images or hostage videos are typically framed by journalistic context and commentary, on social media, such content spreads far more dangerously and effectively without any explanatory background or warning, sowing confusion, fear, and polarization.
Finally today, here’s a video of Teri Schultz reporting for DW on how the European Union is working to combat Russian disinformation.
At ~4:10 of the video, something called a Doppelgänger Operation is mentioned. This recent story from the security website DarkReading goes into extensive detail about what precisely a Doppelgänger operation consists of.
Have the best possible day everyone!