International Journal of Progressive Research in Engineering Management and Science
(Peer-Reviewed, Open Access, Fully Refereed International Journal)
www.ijprems.com
editor@ijprems.com or WhatsApp at (+91-9098855509)


Toxic Comments Classification Using NLP (KEY IJP************141)
Abstract
This work builds a multi-headed model capable of detecting different types of toxicity in online comments: threats, obscenity, insults, and identity-based hate. Discussing things you care about online can be difficult: the threat of abuse and harassment means that many people stop expressing themselves and give up on seeking different opinions. Platforms struggle to facilitate conversations efficiently, leading many communities to limit or completely shut down user comments. A range of publicly available models is already served through the Perspective API, including a toxicity model, but these models still make errors and do not allow users to select which types of toxicity they are interested in finding. To improve online conversations, we need models that can accurately detect and categorize the different kinds of toxic content and that let users choose which kinds of harmful content to filter. Such models would make online interactions safer and more open, creating a more welcoming digital space where people can freely express their thoughts and ideas.
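As an illustration of the "multi-headed" idea described above, the following is a minimal sketch (not the paper's actual pipeline): one independent binary classifier per toxicity label over shared TF-IDF features, with a filter step that returns only the toxicity types the user asked for. The label set is reduced and the training comments are invented purely for demonstration.

```python
# Hedged sketch of a multi-label ("multi-headed") toxicity classifier:
# shared TF-IDF features feed one logistic-regression head per label via
# scikit-learn's OneVsRestClassifier. Labels and training data here are
# hypothetical toy examples, not the paper's dataset; the full task also
# covers obscenity and identity-based hate.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline

LABELS = ["toxic", "threat", "insult"]

# Tiny toy corpus; each row of y marks which labels apply to the comment.
texts = [
    "I will hurt you",        # toxic + threat
    "you are an idiot",       # toxic + insult
    "have a nice day",        # clean
    "what a lovely comment",  # clean
]
y = np.array([
    [1, 1, 0],
    [1, 0, 1],
    [0, 0, 0],
    [0, 0, 0],
])

# One binary "head" per label over the shared TF-IDF representation.
model = make_pipeline(
    TfidfVectorizer(),
    OneVsRestClassifier(LogisticRegression(max_iter=1000)),
)
model.fit(texts, y)

def flag(comment, wanted=LABELS):
    """Score a comment and keep only the toxicity types the user selected."""
    probs = model.predict_proba([comment])[0]
    return {label: round(p, 2) for label, p in zip(LABELS, probs)
            if label in wanted}
```

Because each label gets its own head, a user who only cares about threats can call `flag(text, wanted=["threat"])`, which is exactly the per-type selectivity the abstract argues existing APIs lack.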
DOI Requested