Are customers asking for AI-based moderation tools?

Have any customers recently asked about AI-based solutions for flagging abusive content, including images and video? Have customers mentioned moderation needs related to things that are not traditionally inappropriate, like political content, use of tobacco products, self-promotion, posting content in the wrong category, etc.?

Comments

  • I've had it come up a bit. I know Discourse offers something like that as an Enterprise plugin.

    That said, the majority of the time when people start to talk about AI and community, it seems to come down to one of two things: 1) personalization - using AI to suggest content relevant to the user based on what they have done or are doing - or 2) they're looking for something shiny and new. A lot of community platforms offer the same types of features, and they've been the same types of features for quite some time.

    Personally, I don't find B2B SaaS companies as hung up on moderating content as a media company or a gaming company might be, and those verticals are much less important to Vanilla than they were three years ago.

  • Niantic asked and paid for a custom Image Moderation Plugin for all of their communities. It is tied into Google's Safe Search Detection, so any post that doesn't meet Niantic's settings gets sent straight to the Moderation Queue. Beyond that, the only specific moderation requests I've had are for moderator reporting capabilities and moderation analytics.

    • MFP has asked for social-listening-type features
    • MSE has asked for anything that will handle spam with fewer false positives than Akismet
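The Safe Search integration described above could be sketched roughly like this. The category names, threshold values, and result shape here are illustrative assumptions, not Niantic's actual configuration or the plugin's real code:

```python
# Sketch of a Safe Search-backed moderation check: Google's Vision API reports
# each category (adult, violence, racy, ...) as a likelihood level, and the
# plugin queues any post that meets or exceeds a community's configured
# threshold for any category. All names/values below are hypothetical.

LIKELIHOOD_ORDER = [
    "VERY_UNLIKELY", "UNLIKELY", "POSSIBLE", "LIKELY", "VERY_LIKELY",
]

def should_queue(safe_search_result, thresholds):
    """Return True if any category meets or exceeds its configured threshold."""
    for category, threshold in thresholds.items():
        level = safe_search_result.get(category, "VERY_UNLIKELY")
        if LIKELIHOOD_ORDER.index(level) >= LIKELIHOOD_ORDER.index(threshold):
            return True
    return False

# Hypothetical per-community settings: queue anything "POSSIBLE" or worse
# for adult content, "LIKELY" or worse for violence and racy content.
settings = {"adult": "POSSIBLE", "violence": "LIKELY", "racy": "LIKELY"}

result = {"adult": "VERY_UNLIKELY", "violence": "LIKELY", "racy": "UNLIKELY"}
print(should_queue(result, settings))  # True: violence meets its threshold
```

The per-category thresholds are what "Niantic's settings" would map to in this sketch; each community could tune them independently.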


  • Like Brendan, I've seen more requests about AI-powered personalization around recommended content - and also recommending users to answer questions - both of which are currently offered by competitors.

  • This is such a well-timed question from Luc. We were just discussing that this morning in the sales and marketing meeting.

  • Recently Gerber sent us a ticket regarding the moderation of users who were giving 'bad advice' (they weren't really specific about what they meant by this), but it's apparently an issue on their site. If there were a way to quietly send those types of comments to moderation when a user posts in a certain topic, they'd probably use it.

  • @Andrew_D That's a tough one.

  • Alex Powell
    edited July 2020

    I could see us tracking how often a user's answers get accepted as a measure of quality - which would also dovetail with what I was saying about an AI system recommending the users best suited to answer a question.

    Here is an example from Tribe:

    (screenshot attachment)
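The accepted-answer metric Alex describes could be sketched as a simple acceptance rate per user, with a minimum sample size so one lucky answer doesn't make someone an "expert". The data shape, field names, and thresholds are all hypothetical, purely for illustration:

```python
# Hypothetical sketch of an accepted-answer quality signal: the share of a
# user's answers that were marked "accepted", used to rank candidate
# answerers. Nothing here reflects an actual Vanilla feature.

def acceptance_rate(answers):
    """answers: list of dicts with a boolean 'accepted' flag (assumed shape)."""
    if not answers:
        return 0.0
    return sum(1 for a in answers if a["accepted"]) / len(answers)

def recommend_answerers(users, min_answers=5, min_rate=0.5):
    """Rank users by acceptance rate, skipping those with too few answers
    for the rate to be meaningful (both cutoffs are arbitrary examples)."""
    qualified = [
        (name, acceptance_rate(answers))
        for name, answers in users.items()
        if len(answers) >= min_answers
    ]
    return sorted(
        [(name, rate) for name, rate in qualified if rate >= min_rate],
        key=lambda pair: pair[1],
        reverse=True,
    )
```

The same score could feed the "bad advice" case mentioned earlier in reverse: a user whose acceptance rate stays low in a sensitive category could have their posts quietly routed to the moderation queue.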