Artificial Intelligence (AI) will scan each comment after a person presses 'comment' but before the message is posted.
If the system detects that the content of the message is hurtful, it will trigger a notification that lets the poster either edit the message or commit to posting it as written.
Comments that break the guidelines but are not picked up by the new feature, which launches on Android next year, will go through the same review and removal process as they do currently.
'Our system learns from content that has been repeatedly reported by users,' YouTube said.
'We know there is more work to be done and this is just a start.'
Instagram introduced a similar feature in June 2019. The Facebook-owned site announced in a blog post that it was adding a mechanism to notify users 'when their comment may be considered offensive before it's posted.'
The comment section underneath videos is a key way for creators to interact with fans, but it often descends into a cesspit demonstrating the ugly side of the internet.
YouTube has also today announced that it is testing a new comment filter tool in YouTube Studio, the backend video management area for creators.
Writing on Twitter, YouTube chief executive Susan Wojcicki said: 'We've been looking closely at how our policies and products are working for everyone, and specifically for the Black community.'