Student of Computer Science w/ Physics | Researcher, Data Scientist
Trained a deep learning agent using TensorFlow, CUDA, and cuDNN for self-play optimization on GPUs.
At Shellhacks 2025, three teammates and I created a coastline-recession prediction website. My contribution was the development, tuning, and training of the transformer model we used for prediction; I also built the portion of the backend that converts the model's output into a suitable format. Website
Built a CNN for ASL character recognition (~95% accuracy) and integrated it into a Flutter app using TensorFlow Lite.
Simulated a robot arm using Jacobian-based IK with obstacle avoidance and joint constraints in Python.
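The Jacobian-based IK described above can be sketched as follows. This is a minimal illustration for a 2-link planar arm, not the project's actual code; the link lengths, target, damping factor, and step size are all assumed for the example, and obstacle avoidance and joint constraints are omitted for brevity.

```python
import numpy as np

L1, L2 = 1.0, 1.0  # link lengths (assumed for illustration)

def fk(q):
    """Forward kinematics: joint angles -> end-effector (x, y)."""
    return np.array([
        L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1]),
        L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1]),
    ])

def jacobian(q):
    """Analytic 2x2 Jacobian of fk with respect to q."""
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([
        [-L1 * s1 - L2 * s12, -L2 * s12],
        [ L1 * c1 + L2 * c12,  L2 * c12],
    ])

def ik_step(q, target, alpha=0.5):
    """One damped least-squares IK update toward the target."""
    e = target - fk(q)       # task-space error
    J = jacobian(q)
    # Damped pseudoinverse avoids blow-ups near singular configurations.
    dq = J.T @ np.linalg.solve(J @ J.T + 1e-6 * np.eye(2), e)
    return q + alpha * dq

q = np.array([0.3, 0.6])            # initial joint angles (assumed)
target = np.array([1.2, 0.8])       # reachable target (assumed)
for _ in range(100):
    q = ik_step(q, target)
```

Iterating `ik_step` drives the end effector toward the target; in a full simulation, joint-limit clamping and obstacle-repulsion terms would be folded into each update.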
Hosted a lecture + lab on CNNs and the MNIST dataset. View Slides
A Structured Query System for Document Mining... – Presented NASA-funded research under Dr. Kachouie and Dianeliz Ortiz Martes.
Led four undergraduates in developing a simulation pipeline for a CERN collaboration using C++, Python, C, Wolfram Language, Bash, etc. Employed tools such as Pythia 6 & 8, HepMC 2 & 3, Geant4, Mathematica, FeynRules, and MadGraph to simulate dark matter events with custom Lagrangians and a custom dark-photon model. HEP Lab
LLM-Based Benchmarking and Performance Assessment of Paraphrased Sentences: publication accepted by Springer Nature for ICAI'25 (paper ID: ICA9746). Indexed in Scopus; DBLP; EI Engineering Index (Compendex and Inspec databases); SpringerLink; Google Scholar; the Conference Proceedings Citation Index (CPCI), part of Clarivate Analytics' Web of Science; the ACM Digital Library; IO-Port; MathSciNet; Zentralblatt MATH; and others. Only about 5% of all journals and conference proceedings reach the same level of scientific indexing as CSCE publications. As of now, the paper acceptance rate has been between 18% and 24%.
Transformer Models for Paraphrase Detection: A Comprehensive Semantic Similarity Study. Journal article published in Computers. Impact factor: 4.2 (2024). doi. Keywords: large language models (LLMs); paraphrase identification; performance metrics; semantic similarity; transformer-based models.