About Me
The path from infrastructure to AI Security
My Path to Security
I've thought like a security engineer since before I knew the term existed. As a kid, I noticed how social systems worked: how trust was given and exploited, how people were deceived. I found myself analyzing the mechanisms rather than just the outcomes. That same pattern carried into my technical career: when I look at a system, I naturally see how it could fail, how it could be abused, and where the assumptions might be wrong.
My path through infrastructure, from banking systems at GCB, to healthcare data at Lightwave, to AWS architecture at Prymo, kept pulling me toward security because that's where the stakes are real. I do my best work when failure actually matters. I'm not chasing the title; I'm drawn to building systems that are elegant, correct, and resilient. Security is where that craftsmanship matters most.
Why AI Security
Traditional security has clear boundaries. You can audit firewall rules, inspect network traffic, trace code execution. AI systems break those assumptions. A neural network doesn't have logic you can step through. Its behavior emerges from training data you may not fully control. The same model can behave differently depending on its inputs, in ways that aren't predictable from reading the code.
That's a fundamentally different threat model, and it fascinates me. How do you secure something you can't fully inspect? How do you detect data poisoning? How do you prevent prompt injection when the attack surface is natural language? These problems don't have mature solutions yet. That's where I want to be.
What I Believe
"Deep understanding beats tool familiarity."
Most security practitioners know how to use tools but can't explain why they work, which means they're helpless when the attack doesn't match the pattern. I'd rather understand systems at the fundamental level than memorize vendor dashboards.
Problems I Want to Solve
- ML Pipeline Security: Training data integrity, model storage, inference endpoint protection. End-to-end security for machine learning systems.
- LLM Security: Prompt injection, jailbreaking, and the unique challenges of securing systems where the attack surface is natural language.
- Anomaly Detection: Building systems that detect when AI is being manipulated or behaving abnormally.
- Rapid Response: Developing methods to quickly respond when new AI vulnerabilities emerge.
These are the practical engineering problems organizations face when deploying AI today, not theoretical alignment research.
Education
MSc Computing (Internet Technology & Security)
In Progress
Dissertation: ML-Based Anomaly Detection in DevSecOps Pipelines
BSc Computer Science
Ghana Institute of Management and Public Administration