Sign language is a primary mode of communication for the deaf and hard-of-hearing community, providing a rich, visual language that enables expression and connection. However, for individuals who do not understand sign language, communication barriers persist. With recent advances in computer vision and deep learning, automated sign language recognition systems offer promising solutions for bridging this gap, enabling real-time translation of hand gestures into text or spoken language. This project focuses on implementing a real-time sign language recognition system using Convolutional Neural Networks (CNNs) to identify static hand gestures representing letters of the English alphabet.
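As a rough sketch of what such a CNN classifier involves, the snippet below traces the spatial bookkeeping of a small convolution/pooling stack applied to a fixed-size grayscale hand crop. The 28x28 input size, the two conv/pool stages, and the 64-channel final layer are illustrative assumptions (not taken from this project), chosen to resemble common setups such as the Sign Language MNIST dataset; the output-size formula itself is standard.

```python
def conv2d_out(size, kernel, stride=1, pad=0):
    """Spatial output size of a conv or pooling layer:
    floor((n - k + 2p) / s) + 1."""
    return (size - kernel + 2 * pad) // stride + 1

# Hypothetical stack for a 28x28 grayscale hand crop (assumed dimensions):
size = 28
size = conv2d_out(size, kernel=3, pad=1)      # 3x3 conv, same padding -> 28
size = conv2d_out(size, kernel=2, stride=2)   # 2x2 max-pool           -> 14
size = conv2d_out(size, kernel=3, pad=1)      # 3x3 conv, same padding -> 14
size = conv2d_out(size, kernel=2, stride=2)   # 2x2 max-pool           -> 7
flat = size * size * 64  # flatten with an assumed 64 output channels
print(size, flat)        # 7 3136
```

The flattened vector would then feed one or more dense layers ending in a softmax over the letter classes; static-gesture systems often use 24 classes rather than 26, since the letters J and Z are signed with motion.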