Dallas-based Newsweek and Vanity Fair reporter Kurt Eichenwald is a controversial guy in some circles. He’s been one of the more crusading reporters on a number of issues involving Donald Trump—days before the election, he published perhaps the most comprehensive look at the president-elect’s ties to Russia—and he’s combined straight, well-documented reportage with snarky commentary on Twitter and at least one attempt at satire that’s indistinguishable from “fake news.” Before the election, Eichenwald was an important reporter—Steven Soderbergh adapted his book The Informant into a film starring Matt Damon, and he was a Pulitzer finalist in 2000—but he wasn’t exactly a public-facing media figure until recently.

Eichenwald recently appeared on Fox News to debate Tucker Carlson, and after that segment—in which he and Carlson had a heated, ten-minute conversation—Eichenwald says a user who goes by @jew_goldstein sent him an animated GIF intended to induce a seizure. (Eichenwald has documented his life with epilepsy in his work for decades.) It wasn’t the first time this had happened—in October, Eichenwald wrote that another Trump supporter had sent him a similar GIF—but it was, he says, the first time one worked.

Dr. Theresa Eichenwald, the reporter’s wife of 26 years, who has lived with his epilepsy since they started dating, said she first realized something was wrong when her husband called out to her.

“I heard him from the other room,” she told The Daily Beast. “I came in, and he was in his chair, slightly turned away from the flashing computer screen. He was incoherent.

“I knew right away what was going on. I quickly got the image off the screen. He did not have a grand mal seizure,” she added, citing the most serious form of epileptic seizure, which is characterized by severe muscle contractions and loss of consciousness, and can result in death. “He had a localized seizure. All you can do is make sure the person is safe and wait it out and tell him he’s OK. My response was more anger than anything else.”

Eichenwald’s wife tweeted to the account that sent the image, “This is his wife, you caused a seizure. I have your information and have called the police to report the assault.” (Carlson’s Daily Caller website, meanwhile, has questioned whether Eichenwald actually called the police that night.) Regardless of whether a police report exists, though, Eichenwald definitely filed legal paperwork in a Dallas court to push Twitter to identify the person who sent the tweet.

The court ordered Twitter to give a deposition and to preserve all of the user’s logs, and Twitter has indicated that it will comply, helping Eichenwald identify the person who sent the seizure-inducing tweet.

(Embedded document: the signed Rule 202 order, dated December 19, 2016, posted to Scribd by Kurt Eichenwald.)

All of this raises real questions and concerns about the brave new world of Twitter, social media, and online privacy—none of which are exactly new, but all of which have taken on a sharper tenor over the past several months. Twitter is in a unique spot here because of the access it provides to anyone with an account. Facebook creates barriers to communicating with someone you don’t know; Twitter offers multiple ways to reach a stranger directly. Harassment has been a long-simmering problem for the service: users—particularly women—have for years dealt with threats, abuse, and violent imagery. That Eichenwald’s condition puts him in the relatively extraordinary position of being at direct physical risk from the wrong kind of tweet is a new frontier in harassment—but it’s a difference of scope, not substance, as anyone who’s received hundreds or thousands of abusive tweets for offenses such as, say, starring in a reboot of Ghostbusters or pointing out sexist tropes in video games can attest.

Twitter, as a company, is in a challenging position. It’s hard to believe that it wants the service to be used for harassment—Milo Yiannopoulos, the self-identified “alt-right” leader who led the racist campaign against Ghostbusters star Leslie Jones, was permanently banned from the service in July, and the company’s compliance with the Eichenwald lawsuit suggests that its leaders have some moral compass guiding their decision-making. But Twitter has also faced criticism for being too slow to implement changes that might stop harassment on the service.

It’s a challenge on a few fronts. Anonymity on Twitter is necessary for a certain kind of community organizing that the company is proud of—such as its role in facilitating the Arab Spring—but that same anonymity makes it easy to register an account and send seizure-inducing images to a reporter you disagree with. There’s also the bottom line: Twitter has to keep growing (as all tech companies are expected to do), and you don’t grow by making it harder for people to use your service, or by offering them less opportunity to interact with the famous users who are much of Twitter’s selling point.

What the solution to all of this will be remains an open question. (Maybe there won’t be one.) There are good ideas out there: one user suggests letting people self-verify with a bank card or phone number and then allowing users to see tweets only from verified accounts; another suggests community “rating” of Twitter accounts to weed out low-value users. Perhaps now that the service has been used to attempt direct physical harm, the company will consider a holistic approach to dealing with harassment. It does seem like the time is right.