Abstract
Social media has emerged as a crucial space for communication, especially with the increase in its use during the pandemic. These platforms enable the exchange of information and connection among users, and they are particularly significant for communities such as the LGBT+ community. However, cyberbullying directed at the LGBT+ community on social media has severe consequences, including psychological harm, isolation, low self-esteem, and, in extreme cases, physical violence and even death. Despite the policies and tools that platforms have implemented to combat hate, their effectiveness varies, and the LGBT+ community remains highly exposed to these behaviors. One of the current challenges in content moderation on social media is identifying satire and irony, which complicates the classification of messages as hate content. In this context, we participated in the tasks proposed by Homo-Mex [1], focusing on Task 1 and Task 3. Task 1 centers on the classification of tweets containing hate content directed at the LGBT+ community, while Task 3 involves the binary classification of Spanish-language songs. To solve these problems, we used BERT, achieving an F1-score of 92.37% for Task 1 and 89.14% for Task 3. This work aims to improve artificial intelligence systems for the categorization of hate speech, contributing to the creation of safer digital spaces for the LGBT+ community.
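The abstract names BERT fine-tuning as the approach for both classification tasks. As a minimal sketch only, and not the authors' exact pipeline, the snippet below shows how a Spanish BERT checkpoint could be fine-tuned for binary hate-speech classification with Hugging Face Transformers; the checkpoint name, toy data, and hyperparameters are illustrative assumptions, since the abstract does not specify them.

```python
# Sketch of fine-tuning a BERT classifier for binary hate-speech detection.
# Checkpoint, data, and hyperparameters are assumptions, not the paper's settings.
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)
from datasets import Dataset

checkpoint = "dccuchile/bert-base-spanish-wwm-cased"  # assumed Spanish BERT; the abstract only says "BERT"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

# Hypothetical in-memory examples; the real Homo-Mex corpora are provided by the task organizers.
train = Dataset.from_dict({
    "text": ["ejemplo de tuit 1", "ejemplo de tuit 2"],
    "label": [0, 1],
})

def tokenize(batch):
    # Tokenize tweets/lyrics to fixed-length input IDs for the classifier.
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128)

train = train.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="homomex-bert",
    num_train_epochs=3,
    per_device_train_batch_size=16,
    learning_rate=2e-5,
)
trainer = Trainer(model=model, args=args, train_dataset=train)
trainer.train()
```

The same setup would apply to Task 3 by swapping the tweet texts for song lyrics, since both are framed as binary sequence classification.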