
The New York City subway would never accept an ad for a hitman-for-hire service, or for a pill that claimed to cure depression but wasn’t FDA-approved, or for a social network designed exclusively to groom minors. Those would be illegal, unethical, and obviously dangerous.

So why is the MTA running a massive ad campaign for friend.com, a product that, if not already illegal, certainly should be?

This AI-powered pendant claims to offer companionship by listening to your conversations and sending you messages.  

We are writing this as a parent and a teen, and neither of us would want this device recording our conversations with one another, or with friends, without the consent of everyone around us. This is a glorified surveillance toy aimed at young adults (and possibly teenagers), disguised as empathy, and sold as a natural extension of the ways we already include AI in our daily lives.

It is wildly unethical. It may be illegal. And no responsible media platform, especially one funded by tax dollars, should promote it.

Capitalizing on Crisis: Loneliness as a Business Model

Fueled by social isolation, under-resourced mental health systems, and the endless doomscroll, rates of depression and suicide among young people have surged in recent years. Adults under 40 are twice as likely to be depressed today as they were in 2008, and the pandemic exacerbated the problem further. This is the moment when thoughtful interventions, government investment, and compassionate support are most needed.

Instead, friend.com sees a market opportunity.

Their pitch is disarmingly simple: loneliness is a public health crisis, and they’ve got a product to fix it. Just clip a $129 pendant to your shirt, let it eavesdrop on everything around you, and wait for it to text you vaguely encouraging messages throughout the day. It’s your always-on, AI-powered “friend.”

But this is not care. It’s not medicine. It’s not even good technology.

The pendant’s listening breaks down in any loud environment, and it often hallucinates conversations entirely.


To be clear, this is not an attack on tech as a way to solve isolation. When used thoughtfully, digital tools can be powerful instruments of expression, identity formation, community-building and democracy — especially for isolated or marginalized youth. We have personally experienced how online platforms like Discord can connect isolated adults and youth to real support, much-needed research and information, and give voice to feelings they struggle to express offline.

This “friend” is not doing that. 

It is a Band-Aid for the problem of social isolation, attempting to convince the loneliest generation in history that an intrusive, surveilling wearable is a substitute for human connection.

There is no community page or social element on friend.com. There are no humans on the other side of the line. Rather than being a safe harbor of supportive peers, the “friend” is an echo chamber of your own thoughts that disconnects you from human relationships.

LLMs are widely known to exhibit extraordinary confirmation bias. The last thing we want after a debate over homework or politics is a device chiming in to tell us why the other person is entirely at fault. Instead of supporting us in talking through those moments with one another, this “companion” substitutes for those conversations.

Manufactured Outrage

Friend.com is the brainchild of Avi Schiffmann, a young tech founder who rose to fame during the COVID-19 pandemic by building a widely used case-tracking website. Now he’s turned his considerable talents to wearables, raising $2.5 million in seed funding (before spending 70% of those funds just to acquire the domain).


Backers of the project include well-known investors like Caffeinated Capital, the CEO of Perplexity AI and the co-founder and CEO of Morning Brew. This is not a team of therapists, educators, or youth advocates. It’s a coalition of venture capitalists, crypto founders, and tech bros who have decided to play in the mental health space with the same “move fast and break things” approach that led to the current ills of social media and young people’s isolation in the first place.

If there were a true desire on the part of Schiffmann and his investors to help with social isolation, they could start with the more than $200 billion funding gap for mental health services worldwide. The $1 million spent on useless MTA print ads could have gone to youth safe spaces, mental health research, or sending more social workers into those same subways to help unhoused youth.


Instead, this money is being used for marketing whose sole purpose is to provoke passersby into vandalizing the ads with slogans like “get real friends” and “stop profiting off of loneliness,” then taking pictures that inevitably go viral on social media. All press seems to be good press to Schiffmann.

Three Reasons This Tech Is Downright Evil

Some might say the “friend” pendant is just a natural evolution of ChatGPT, which, despite its downsides, many people already use as a therapy substitute. But the potential damage here is far greater.

1. It’s a Non-Consensual Surveillance Device

Let’s call this what it is: a spy device. From its terms of service: “When you use the Device, you will, by using those Services, be allowing Friend to collect data from your surroundings, which may include but is not limited to biometric information, biometric data, facial recognition, voice and audio recordings, and images and videos of the things around you…we may disclose personal data to vendors.”

Unlike Alexa or Siri, which at least in theory only activate when triggered and typically live in private settings, the Friend pendant is worn in public and listens passively to everything around it. It doesn’t just listen to you—it listens to everyone you encounter.

The company claims it “doesn’t record” in the traditional sense, but instead scans conversations for sentiment to decide what to message you. This is a meaningless distinction. Whether you’re storing audio or analyzing it in real time, you’re processing the speech of unsuspecting people without their consent.
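
To see why, consider what an “analyze, don’t store” pipeline has to do in practice. The sketch below is entirely our own illustration, not Friend’s code; the toy transcription and sentiment functions stand in for whatever models the device actually uses. Even with the audio discarded, every bystander’s words are still transcribed and scored.

```python
# A purely hypothetical sketch of an "analyze, don't record" pipeline.
# This is not Friend's code; the toy transcription and sentiment steps
# stand in for whatever models the device actually uses.

NEGATIVE_WORDS = {"hate", "tired", "alone", "worried", "angry"}

def transcribe(audio_chunk: bytes) -> str:
    # Stand-in for a speech-to-text model (our assumption, for illustration).
    return audio_chunk.decode("utf-8", errors="ignore")

def score_sentiment(text: str) -> float:
    # Crude stand-in for a sentiment model: more negative words, lower score.
    words = text.lower().split()
    if not words:
        return 0.0
    return -sum(word in NEGATIVE_WORDS for word in words) / len(words)

def process_stream(audio_chunks):
    for chunk in audio_chunks:
        text = transcribe(chunk)       # bystanders' words become text
        mood = score_sentiment(text)   # ...and are analyzed for sentiment
        del chunk                      # the raw audio is "not recorded"
        if mood < -0.2:
            yield "Sounds like a rough conversation. I'm here for you."
        # Stored or not, the speech of everyone in earshot has been
        # captured, transcribed, and scored without their consent.

# An overheard snippet of someone else's conversation:
for message in process_stream([b"i am so tired and alone lately"]):
    print(message)
```

The privacy harm happens at the transcribe-and-score step, not at the moment a file is written to disk.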

That’s surveillance. In 12 states, it's explicitly illegal. In Europe, it’s likely a GDPR nightmare.

Even more disturbing: there’s no way to opt out. If your classmate wears one to school, or your seatmate wears one on the subway, you’re being monitored and your words are being analyzed by a closed-source model, with no agreement or consent.

 While Friend claims it does not “knowingly permit” use of the product by minors, there is no proactive effort to stop this, and responsibility is placed on adults to report kids using it. 

All of this isn’t just creepy, it’s likely illegal. And even if it skirts the law or the current administration has no appetite to enforce it, it shreds every norm of decency and trust we should be building around AI.

2. It’s Useless, Misleading, and Built on Stupid Tech

Friend.com markets the pendant like it’s an emotional support system. In reality, it’s a low-functioning Tamagotchi with WiFi.

It can’t hold a real conversation. It can’t answer questions. It can’t help you solve a problem or de-escalate a crisis. It offers vague, reactive messages based on surface-level emotional signals and keywords.

You can’t meaningfully configure it. You can’t personalize it. You don’t even control when or why it says what it says. That makes it not only ineffective but actively misleading. Teens and young adults with developing brains may believe this device understands them. That it’s supporting them, even that it’s a friend.

But it’s not a friend. It’s a sleek version of ELIZA, the 1966 MIT chatbot that simply mirrored back what you said.
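
For readers who have never met ELIZA: it worked by matching a keyword pattern in whatever you typed and reflecting your own words back inside a canned template. The sketch below is our illustration of that trick, not anything from Friend’s codebase, but it shows how little “understanding” is needed to produce replies that feel personal.

```python
# A minimal ELIZA-style reflector (our illustration, not Friend's code).
# It understands nothing: it matches a keyword pattern and echoes your
# own words back inside a template, the trick the 1966 original relied on.
import re

# Pronoun swaps so the echo sounds like it is about you, not the speaker.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are"}

RULES = [
    (re.compile(r"i feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.+)", re.I),   "How long have you been {0}?"),
    (re.compile(r"my (.+)", re.I),     "Tell me more about your {0}."),
]

def reflect(fragment: str) -> str:
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

def respond(utterance: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(reflect(match.group(1)))
    return "Tell me more."  # fallback when no pattern matches

print(respond("I feel like nobody listens to me"))
# -> Why do you feel like nobody listens to you?
```

Sixty years on, swapping the pattern table for a language model makes the mirror smoother, but it is still a mirror.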

In contrast, tools like ChatGPT—even with their limitations—can conduct research, answer real questions, and be programmed with safety guardrails. They’re far from perfect, but they’re useful.

This pendant isn’t useful. And it could steer lonely kids in confusing or even harmful directions.

3. It’s a Private, Opaque, and Unaccountable Company

Friend.com is a privately funded startup, with no transparency, no clinical oversight, and no regulatory guardrails. There are no published ethics reviews, no independent safety audits, and no governance over what the device says, recommends, or encourages.

And yet they’ve built a device that sits inches from your chest, absorbs your emotions, and texts you advice based on what it hears.

We’ve seen this playbook before. Platforms like Facebook, Instagram, and TikTok captured young people’s attention without adequate safeguards, and now we’re dealing with the fallout: skyrocketing rates of depression, anxiety, and suicide.

But even in the tech world, we’ve seen this exact product fail.

Amazon’s Halo Band, launched in 2020, tried to do something eerily similar: a wearable that recorded your voice, analyzed your tone, and gave you real-time coaching. It was widely criticized for privacy violations and pseudoscience, and was ultimately discontinued in 2023.

And earlier this year, another hyped wearable was finally shut down: Humane’s “Ai Pin,” which promised ambient AI assistance through a voice interface. It failed spectacularly, plagued by poor usability, unclear purpose, and public backlash, and was effectively dead by summer 2024.

Amazon couldn’t make it work for adults. Humane couldn’t make it work for anyone.

Now friend.com thinks it can succeed where others have failed by explicitly targeting lonely young people.

This is a cynical, dangerous play for cash and hype, and it’s the clearest possible sign that governments must act fast. If this kind of product is allowed to scale before laws catch up, we’ll be repeating every mistake we made with social media—only this time, the interface is strapped to our chests.

We’ve Seen Where This Leads

As we debate whether this pendant is silly, creepy, or criminal, we’re missing the bigger point: this is part of a larger, disturbing pattern. The summer of 2025, as Alondra Nelson aptly put it, is shaping up to be AI’s “cruel summer,” a season in which the risks of unchecked artificial intelligence have become unignorable. She details the parade of horribles: the first wrongful death lawsuit filed against OpenAI for allegedly driving a teen to suicide, and a man hospitalized after following bad diet advice from a chatbot. Without guardrails, transparency, public accountability, and user training, even well-intentioned AI systems can cause devastating outcomes.

Now imagine placing that power—stripped down, dumbed down, and prettied up—in the hands of lonely young adults. Imagine telling them this machine is a friend.

Regulation Can’t Wait—And Neither Can Public Institutions

We’re almost relieved that friend.com is blowing money on subway ads. It’s possible this will flame out in a swirl of public backlash and tech-journalist takedowns, like the Ai Pin and the Halo Band before it.

But that would still be a waste of resources that could have supported real mental health initiatives. Programs that train school counselors. Peer support platforms. Community-designed digital tools that offer real connection, not simulated companionship.

And what if it doesn’t flame out? That’s why we need action.

  • Transit authorities like the MTA and others who control public spaces in the physical and digital world must exercise ethical judgment and reject ads that promote surveillance technology, especially when it targets minors.

  • Parents, educators, and advocates must push back against systems that offer fake intimacy in place of real community and surveillance in place of support, and must teach kids about the dangers of these tools and the profit motive behind them.

  • Policymakers need to clarify the existing rules that prohibit nonconsensual recording, or issue new ones.

friend.com is not the future. It’s a warning.

And the question now is whether we heed it.

 
