The Internet is a powerful tool. It allows for instantaneous and nearly effortless communication of the most benign messages, like wishing a friend happy birthday or sharing a photo of what you had for dinner. But as the saying goes, with great power comes great responsibility, something 17-year-old Montrae Toliver learned last month when he was arrested by Fort Worth police for making terroristic threats on Twitter.

The teenager allegedly posted a photo on his Twitter account—which now bears no sign of the offending tweet—of what appears to be a rifle pointed at a Fort Worth police squad car, with the message: “Should I Do It? They Don’t Care For A Black Male Anyways!” followed by a couple of well-selected emoji. The tweet was posted on December 22, and on December 29, Toliver was arrested by Fort Worth police. His bail was set at $500, and as of January 5, he was not being held in jail.

Two spokeswomen from the Fort Worth police department held a press conference on the Monday Toliver was taken into custody. They said that although detectives had discovered the gun in the picture wasn’t a real rifle (six days before his arrest, Toliver posted a tweet that says: “Everyone is Posting The Pic of The One Picture But No one Will Post The One That’s Right Before It That Shows Its An Airsoft”), it didn’t matter: the post was still viewed as a threat against an officer, a crime punishable by up to two years in prison and a $10,000 fine.

As we’ve seen in the past, the things you post in the virtual world can have consequences in real life. In fact, another Texas teen found himself in similar trouble in early 2013, when then-18-year-old Justin Carter was arrested and charged with making terroristic threats after posting on Facebook (he made a comment about “shooting up a kindergarten” soon after the Sandy Hook massacre). His bail was set much higher than Toliver’s ($500,000, which was posted by an anonymous donor), and if convicted, Carter faces up to ten years in prison.

In both cases, these teens allegedly posted careless remarks to a public platform, remarks that a third party saw and reported to police, who judged the threats real enough to make an arrest. And in both cases, the threatening posts came at a time when national sensitivities were already high: Carter wrote his post about a month after the Sandy Hook massacre, and the tweet that police say Toliver wrote came shortly after two police officers were killed in New York City by a man who had used social media to announce his intentions.

Following his arrest, Carter’s mother publicly claimed that her son was being sarcastic and that there was no reason to believe his threats had any real substance behind them. But the “it’s a joke” defense isn’t going to stop police from investigating these types of messages. As Tamara Pena, a spokeswoman for Fort Worth police, said:

“Hopefully people are getting it through their heads that it is not a joke, and it can be taken seriously … Officers are here to help. We are here to do our job, but when we feel threatened we will act upon it. If it is a joke, don’t post it. Keep it to yourself.”

At the crux of both of these cases, and others like them, is how we perceive and define threats as they exist on social media, and how police departments around the country are handling these situations, especially given that at least one threat made through social media has already been acted on. This new territory raises the question of when these messages become criminal, a question that was put before the Supreme Court in last December’s oral arguments in Elonis v. United States.

In 2010, Anthony Elonis used Facebook to make several detailed threats against his wife and coworkers, threats that pushed his wife to seek a protection-from-abuse order against him. Elonis was arrested and charged in December 2010 under a federal law that makes it illegal to use interstate communication, like Facebook, Twitter, and the rest of the Internet, to threaten another person. He spent three years in prison for his actions, but is now appealing his conviction to the Supreme Court, arguing that his Facebook posts were a form of self-expression meant to help relieve the pain he felt when his wife left him and he was fired from his job. A decision is expected by this summer.

Other court rulings have addressed when a threat becomes real enough to warrant an arrest. In the 1991 case United States v. Kosma, the court held that a statement counts as a true threat if a “reasonable speaker” would expect it to be taken as one. In Virginia v. Black, decided in 2003, Supreme Court Justice Sandra Day O’Connor wrote that, within the limitations of the First Amendment, “a prohibition on true threats protects individuals from the fear of violence and the disruption that fear engenders, as well as from the possibility that the threatened violence will occur.” Both of these decisions put the power in the hands of someone whose life could be disrupted or affected by the threatening message. The Elonis case could shift that dynamic and raise the threshold for threats on the Internet by making the intent of the user who posted the message, rather than the perception of third parties, the deciding factor.

That it’s possible to be held accountable for what you share online, to the point of legal action, has been established. But the line between an ill-advised Twitter joke or disturbing self-expression and something dangerous enough to create fear is fuzzy, and as it stands now, it leaves a lot of deciding power in the hands of the police officers who are presented with these online messages.