Bumble now allows users to report AI-generated, fake profiles

Dating is, in and of itself, such a random affair. What are the chances that you’ll find your soulmate, that perfect one just for you, somewhere on this big blue rock of ours, somewhere in a sea of billions of people? The odds are, honestly, infinitesimal, and the fact that anyone at all finds true love in any shape or form is such a miracle.

And when you factor in bad actors armed with AI, dubious profiles and dead-end chats make matching an even more cumbersome affair. Summertime romances can easily lead to summertime sadness (with due apologies to Lana Del Rey), and it becomes difficult to tell Mr. Forever After from Mr. Forever Blocked.

In an effort to fix this fatal flaw, Bumble is making strides to combat the rising scourge of AI-generated fake profiles by allowing members to report profiles that may be using AI-generated photos or videos.

This update comes as Bumble is building new safeguards to uphold its mission to foster healthy and equitable relationships, and continue to put women at the center of its experiences.

As the dating ecosystem evolves, Bumble is focused on responsible uses of AI and on addressing new challenges brought by disingenuous usage. In a recent Bumble survey*, 71% of Gen-Z and Millennial respondents felt there should be limits to using AI-generated profile pictures and bios on dating apps. In addition, 71% of those surveyed believed that using AI-generated photos of oneself doing things one has never done, or visiting places one has never been, qualifies as catfishing.

[Visual: Reporting AI-generated fake profiles just got easier. Bumble is cracking down on fakery and keeping it real.]

Bumble’s VP of Product, Risa Stein, who leads trust and safety efforts, had this to say about the new feature: “An essential part of creating a space to build meaningful connections is removing any element that is misleading or dangerous. We are committed to continually improving our technology to ensure that Bumble is a safe and trusted dating environment. By introducing this new reporting option, we can better understand how bad actors and fake profiles are using AI disingenuously, so our community feels confident in making connections.”

The new reporting option joins Bumble’s existing features that tap AI for good to help members stay safe while dating online:

  • Deception Detector: Rolled out earlier this year, this AI tool helps identify spam, scam, and fake profiles. Within the first two months, Bumble saw member reports of spam, scam, and fake profiles fall by 45%**. Find out more in the release in our digital media kit.
  • Private Detector: An AI tool that automatically blurs a potential nude image shared within a chat on Bumble, then notifies you that you’ve been sent something detected as inappropriate. You can easily block or report the image afterwards.
  • For You feature: We also recently made new AI-powered advancements to our “For You” feature to improve the consumer experience. This is a daily set of four curated, relevant profiles based on our community’s preferences and past matches, designed to show singles people who could be a great match.

This latest effort by Bumble to create a more authentic dating experience is a welcome step towards a safer dating pool, stopping bad actors who try to misuse the platform and ensuring Bumble remains a safe place for making kind connections. It is a move towards making something good rather than too good to be true.
