Department of Computer Science

URI for this collection: https://rps.wku.edu.et/handle/123456789/45765


Search Results

Now showing 1 - 2 of 2
  • Item
    TIME SERIES CRIME PREDICTION ANALYSIS USING RNN: A CASE OF WOLKITE CITY POLICE DEPARTMENT
    (Wolkite University, 2024-01-01) SOLOMON KASSAYE ESHETU
    Crime is an undesirable phenomenon and a global concern that affects both society and individuals. The number of criminal incidents rises annually, threatening public safety and the well-being of the community. Uneven demand for police services at different times is one problem observed in police workforce assignment. Our study aims to determine and examine the relationship between crime date-time and the number of crime incidents, as well as their types and locations. We collected nine thousand eight hundred twenty (9,820) criminal offenses handled by Wolkite City from 2008 to 2014 E.C. and included the seven (7) most frequently occurring crime types and fifty-two (52) crime locations in our study. Preprocessing techniques such as label encoding and min-max scaling were applied. We then employed RNN models, namely Long Short-Term Memory (LSTM), Gated Recurrent Unit (GRU), Bidirectional LSTM (Bi-LSTM), and Bidirectional GRU (Bi-GRU); trained them on the training dataset to predict crime type and location; and finally evaluated them on the testing dataset using metrics such as MSE and R2. For crime type prediction, LSTM achieved MSE of 0.0125, 0.0126, and 0.0468; Bi-LSTM 0.0126, 0.0125, and 0.0466; GRU 0.0127, 0.0128, and 0.0501; and Bi-GRU 0.0126, 0.026, and 0.0468, for hourly, daily, and monthly prediction respectively. For crime location prediction, LSTM achieved MSE of 0.0108, 0.0109, and 0.0617; Bi-LSTM 0.0108, 0.0110, and 0.0506; GRU 0.0106, 0.0105, and 0.0582; and Bi-GRU 0.0105, 0.0106, and 0.0513, for hourly, daily, and monthly prediction respectively. For crime type prediction, Bi-GRU, Bi-LSTM, LSTM, and GRU achieved R2 of 0.9995, 0.9994, 0.9899, and 0.9811, respectively. For crime location prediction, Bi-LSTM, LSTM, Bi-GRU, and GRU achieved R2 of 0.9938, 0.9937, 0.9937, and 0.9934, respectively.
    For hourly crime type prediction LSTM is slightly better, while for daily and monthly prediction Bi-LSTM is better. For hourly and monthly crime location prediction Bi-GRU is slightly better, and for daily prediction GRU is slightly better. In terms of R2, Bi-GRU scores slightly higher than the others for crime type, and Bi-LSTM has slightly higher R2 values for crime location. In general, Bi-LSTM and Bi-GRU achieved the best scores for crime prediction with low error on our dataset.
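    The preprocessing and evaluation steps this abstract names (label encoding, min-max scaling, and MSE/R2 on a held-out test set) can be sketched in plain NumPy. The crime-type strings below are illustrative placeholders, not categories from the study's dataset:

    ```python
    import numpy as np

    # Illustrative crime-type labels (placeholders, not the study's actual categories)
    crime_types = ["theft", "assault", "robbery", "theft", "fraud"]

    # Label encoding: map each distinct category to an integer code
    categories = sorted(set(crime_types))
    encode = {c: i for i, c in enumerate(categories)}
    labels = np.array([encode[c] for c in crime_types], dtype=float)

    # Min-max scaling to [0, 1], as applied before feeding sequences to the RNNs
    lo, hi = labels.min(), labels.max()
    scaled = (labels - lo) / (hi - lo)

    # Evaluation metrics reported in the study: MSE and R^2
    def mse(y_true, y_pred):
        return float(np.mean((y_true - y_pred) ** 2))

    def r2(y_true, y_pred):
        ss_res = np.sum((y_true - y_pred) ** 2)   # residual sum of squares
        ss_tot = np.sum((y_true - y_true.mean()) ** 2)  # total sum of squares
        return float(1.0 - ss_res / ss_tot)
    ```

    A perfect predictor would give MSE 0 and R2 1 under these definitions; the RNN variants compared above differ only in the fourth decimal place of MSE on this scaled target.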
  • Item
    END-TO-END SPEECH RECOGNITION FOR GURAGIGNA LANGUAGE USING DEEP LEARNING TECHNIQUES
    (Wolkite University, 2025-10-05) ABDO NESRU EBRAHIM
    Speech recognition entails converting long sequences of acoustic features into shorter sequences of discrete symbols, such as words or phonemes. This process is complicated by varying sequence lengths and uncertainty in output symbol locations, making traditional classifiers impractical. Current automated systems struggle with speaker-independent continuous speech, particularly in low-resource languages like Guragigna, where the Cheha dialect poses additional challenges due to its purely spoken nature and lack of a rigid grammatical structure. To address these issues, this research develops an end-to-end speech recognition model using deep learning techniques, specifically a hybrid CNN-BiGRU architecture combined with CTC and attention mechanisms. This approach aims to enhance alignment and robustness in noisy environments. To train and test the model, a text and speech corpus was created by compiling data from sources such as Wolkite FM broadcasts and the Old and New Testaments. Experimental results indicate that the CNN-BiGRU model achieves a Word Error Rate (WER) of 2.5%, showcasing improved generalization capability. Additionally, four recurrent neural network models (LSTM, BiLSTM, GRU, and BiGRU) were evaluated, each configured with 1024 hidden units and optimized using the Adam optimizer over 50 epochs. The BiGRU model outperformed the others, achieving an accuracy of 97.50%, while the LSTM, BiLSTM, and GRU models achieved maximum accuracies of 95.99%, 96.92%, and 96.25%, respectively. The successful implementation of this end-to-end speech recognition system significantly advances communication technologies for low-resource languages, enhancing accessibility for diverse linguistic communities. The findings underscore the effectiveness of deep learning methods in improving speech recognition performance in challenging linguistic contexts.
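    The Word Error Rate reported above is the word-level edit distance (substitutions, insertions, and deletions) between the recognized and reference transcripts, normalized by the number of reference words. A minimal sketch in plain Python (the test sentences are invented, not drawn from the Guragigna corpus):

    ```python
    def wer(reference, hypothesis):
        """Word Error Rate via word-level Levenshtein distance:
        (substitutions + insertions + deletions) / reference word count."""
        ref, hyp = reference.split(), hypothesis.split()
        # d[i][j] = edit distance between ref[:i] and hyp[:j]
        d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
        for i in range(len(ref) + 1):
            d[i][0] = i  # delete all i reference words
        for j in range(len(hyp) + 1):
            d[0][j] = j  # insert all j hypothesis words
        for i in range(1, len(ref) + 1):
            for j in range(1, len(hyp) + 1):
                cost = 0 if ref[i - 1] == hyp[j - 1] else 1
                d[i][j] = min(d[i - 1][j] + 1,        # deletion
                              d[i][j - 1] + 1,        # insertion
                              d[i - 1][j - 1] + cost)  # match / substitution
        return d[len(ref)][len(hyp)] / len(ref)
    ```

    Under this definition, the reported WER of 2.5% means roughly one word error per forty reference words on the test set.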