
Ekemini Thompson
Most job platforms still rely heavily on keyword matching.
That means a candidate searching for “backend engineer” might never match with a company looking for a “server-side developer” — even though they’re essentially the same role.
I wanted to solve that problem.
So I built an AI-powered recruitment infrastructure called JobSync: a semantic matching system that understands meaning instead of just keywords.
The platform uses a dual-encoder semantic retrieval architecture powered by transformer embeddings.
Instead of matching exact words, both job descriptions and candidate profiles are converted into vector embeddings, allowing the system to retrieve candidates based on semantic similarity.
For example, “backend engineer” and “server-side developer” can be recognized as closely related concepts, even when they share no keywords.
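The ranking mechanics can be sketched with a toy encoder. JobSync uses transformer embeddings, but the stand-in below (a fixed-vocabulary count vector, with illustrative job titles) keeps the example self-contained; a real dual encoder would capture synonyms through learned weights rather than shared tokens:

```python
import math

# Illustrative vocabulary; a learned encoder needs no such list.
VOCAB = ["backend", "server-side", "engineer", "developer",
         "python", "apis", "designer", "photoshop"]

def encode(text: str) -> list[float]:
    # Toy stand-in for a transformer encoder: count vocabulary tokens.
    tokens = text.lower().split()
    return [float(tokens.count(w)) for w in VOCAB]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

# Rank candidate profiles against a job description by vector similarity.
job = encode("backend engineer python apis")
candidates = {
    "server-side developer python apis": encode("server-side developer python apis"),
    "graphic designer photoshop": encode("graphic designer photoshop"),
}
ranked = sorted(candidates, key=lambda c: cosine(job, candidates[c]), reverse=True)
# ranked[0] is the server-side developer profile
```

Both job and candidate are embedded into the same vector space, so retrieval reduces to a nearest-neighbor search over candidate vectors.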
The system was built with transformer embeddings for encoding, Qdrant for vector retrieval, and CPU-only cloud infrastructure for serving.
I wasn’t just building another CRUD app.
I wanted to explore how modern AI infrastructure could be deployed realistically by a solo developer without expensive GPU servers.
One of the biggest challenges was designing a system that could run semantic retrieval affordably and stay responsive on CPU-only infrastructure.
One of the most interesting parts of the project was comparing vector search systems.
I benchmarked several vector search engines, Qdrant among them, to evaluate retrieval latency and consistency for semantic job matching.
The results showed that Qdrant delivered significantly faster retrieval performance in my tests, especially under repeated semantic search queries.
That experiment gave me deeper insight into ANN (Approximate Nearest Neighbor) search systems and how vector infrastructure behaves in production environments.
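A latency comparison like this can be harnessed with a few lines of timing code. The sketch below times repeated queries against a brute-force scan; in the actual experiment that slot was filled by each engine's search call (the dimensions, corpus size, and `brute_force_search` baseline here are illustrative):

```python
import random
import statistics
import time

def brute_force_search(query, vectors, k=5):
    # Stand-in retrieval backend: exact dot-product scan over all vectors.
    # In the real benchmark this function would call the engine under test.
    scored = sorted(range(len(vectors)),
                    key=lambda i: -sum(q * v for q, v in zip(query, vectors[i])))
    return scored[:k]

def benchmark(search_fn, queries, vectors, repeats=3):
    # Time repeated semantic search queries; report the median latency,
    # which is more stable than the mean under warm-up jitter.
    latencies = []
    for _ in range(repeats):
        for q in queries:
            t0 = time.perf_counter()
            search_fn(q, vectors)
            latencies.append(time.perf_counter() - t0)
    return statistics.median(latencies)

random.seed(0)
vectors = [[random.random() for _ in range(64)] for _ in range(1000)]
queries = [[random.random() for _ in range(64)] for _ in range(10)]
median_s = benchmark(brute_force_search, queries, vectors)
```

Running each query multiple times is what surfaces the "repeated semantic search" behavior: it separates steady-state latency from one-off index warm-up costs.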
Another thing I explored was remote LoRA fine-tuning.
Instead of training models locally on GPUs, I integrated a remote fine-tuning workflow through an external AI training API.
This allowed me to experiment with model adaptation while deploying the actual backend on CPU-only cloud infrastructure.
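The workflow amounts to describing a fine-tuning job and handing it to the provider. The post doesn't name the API, so the payload schema below is entirely hypothetical; it only shows the shape of a LoRA job request (rank and alpha are the standard LoRA hyperparameters):

```python
def build_lora_job(base_model: str, dataset_url: str,
                   rank: int = 8, alpha: int = 16) -> dict:
    # Assemble a LoRA fine-tuning request. The field names here are
    # illustrative -- a real training API's schema will differ.
    if rank <= 0 or alpha <= 0:
        raise ValueError("rank and alpha must be positive")
    return {
        "base_model": base_model,
        "dataset_url": dataset_url,
        "method": "lora",
        "hyperparameters": {"lora_rank": rank, "lora_alpha": alpha},
    }

job = build_lora_job("example-base-model", "https://example.com/data.jsonl")
# The payload would then be POSTed to the provider's training endpoint,
# while the serving backend stays on CPU-only infrastructure.
```

The key property is the split: the GPU-heavy training loop lives behind the provider's API, and only the resulting adapter weights touch the CPU-only deployment.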
That experience taught me a lot about model adaptation workflows and the trade-offs of training remotely while serving on inexpensive CPU-only hardware.
Some of the hardest problems were not the ML models themselves.
They were infrastructure concerns: keeping heavy ML components from blocking startup, avoiding repeated expensive computation, and keeping the API responsive.
I ended up implementing lazy-loaded ML components, caching strategies, and modular API routing to keep the system responsive.
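The lazy-loading piece is a small pattern worth showing. A minimal sketch, assuming a single heavy model object (`get_model` and the stand-in load are illustrative): the component is loaded on first use rather than at import time, and `lru_cache` keeps the one loaded instance around for every later call:

```python
from functools import lru_cache

load_count = 0  # instrumentation to show the load happens exactly once

@lru_cache(maxsize=1)
def get_model():
    # Lazy-load the heavy ML component on first use so the API process
    # starts fast; lru_cache caches the single loaded instance.
    global load_count
    load_count += 1
    return object()  # stand-in for an expensive model load

# First call pays the load cost; later calls reuse the cached instance.
a = get_model()
b = get_model()
```

Deferring the load this way keeps cold starts cheap on small CPU instances, at the cost of a slower first request, which a warm-up ping can absorb.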
This project changed how I think about AI engineering.
I learned that building production AI systems is not only about training models — it’s about system design, retrieval infrastructure, APIs, scalability, deployment, and developer experience.
Most importantly, I learned that modern AI products can now be built by independent developers using open-source tools and smart architecture decisions.
This project started as an experiment in semantic search and evolved into a full AI-powered recruitment infrastructure.
It gave me hands-on experience with embedding models, vector databases, remote fine-tuning workflows, and production API deployment.
I’m currently continuing research and development around semantic systems, recommendation engines, and AI-powered platforms.
Would love to connect with others building in AI infrastructure, retrieval systems, or applied ML.