Build a Domain-Specific Embedding Model in Under a Day
Published March 20, 2026 · Steve H, Rucha Apte, Sean Sodha, Oliver Holworthy (NVIDIA)

If you are building a RAG (Retrieval-Augmented Generation) system, you have likely hit this wall: everything works… until it doesn't. General-purpose embedding models are trained to understand the internet, not your contracts, manufacturing logs, proprietary chemical formulations, or internal taxonomy. They capture broad semantic similarity, but they miss the fine-grained distinctions that matter in your domain. Fine-tuning an embedding model can improve the performance of your retrieval pipeline when off-the-shelf models fail to capture these domain-specific nuances.

Despite how critical embeddings are to RAG performance, the fine-tuning process remains surprisingly fragmented, the skills required are specialized, and the time investment is daunting. With a single GPU and less than a day of training time, you can transform a general-purpose embedding model into one that truly understands your domain, with no manual labeling required.

To help you hit the ground running, we are also releasing a ready-to-use synthetic training dataset generated from NVIDIA's public documentation using this exact pipeline. Using this data and the recipe, we saw over 10% improvement in both Recall@10 and NDCG@10. Atlassian applied the same recipe to fine-tune on their JIRA dataset, increasing Recall@60 from 0.751 to 0.951, a 26% improvement, on a single GPU.
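Recall@k and NDCG@k are standard retrieval metrics of the kind BEIR reports: Recall@k measures what fraction of the relevant documents appear in the top-k results, while NDCG@k also rewards ranking them near the top. As a quick refresher, here is a minimal binary-relevance sketch; the function names and toy document IDs are illustrative, not code from the recipe:

```python
import math

def recall_at_k(retrieved, relevant, k=10):
    """Fraction of relevant docs that appear in the top-k ranked results."""
    hits = len(set(retrieved[:k]) & set(relevant))
    return hits / len(relevant) if relevant else 0.0

def ndcg_at_k(retrieved, relevant, k=10):
    """Binary-relevance NDCG: each relevant doc at rank i contributes
    1/log2(i+2), normalized by the best achievable ranking."""
    dcg = sum(1.0 / math.log2(i + 2)
              for i, doc in enumerate(retrieved[:k]) if doc in relevant)
    ideal = sum(1.0 / math.log2(i + 2)
                for i in range(min(len(relevant), k)))
    return dcg / ideal if ideal else 0.0

retrieved = ["d3", "d1", "d7", "d2"]   # ranked results for one query
relevant = {"d1", "d2"}                # gold labels for that query
print(recall_at_k(retrieved, relevant, k=10))        # 1.0 (both found)
print(round(ndcg_at_k(retrieved, relevant, k=10), 3))  # 0.651 (ranked low)
```

Both relevant documents are retrieved, so Recall@10 is perfect, but NDCG@10 is penalized because they sit at ranks 2 and 4 rather than 1 and 2; this is why a fine-tuned model can lift NDCG even when recall is already high.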
## 🔗 Quick Links to Dataset and Code

- Embedding Model GitHub
- Synthetic dataset on NVIDIA's public documents

## 🧑‍💻 Open Source Projects the Recipe Integrates

- NeMo Data Designer for synthetic data generation
- NeMo Automodel for embedding model training
- BEIR for information retrieval evaluation
- NeMo Export-Deploy for ONNX/TensorRT conversion
- NVIDIA NIM for production inference serving

## 📋 Prerequisites

- A directory of domain documents (text files: .txt, .md, or similar)
- A valid NVIDIA API key (free at build.nvidia.com)
- An NVIDIA Ampere GPU or newer (Compute Capability >= 8.0) with at least 80 GB of memory; this tutorial has been tested on 1x A100 (80 GB) and 1x H100 (80 GB)

By the end of this post, you'll know how to:

- 📄 Generate training data from domain documents without labeled data
- 🎯 Use hard negative mining for effective contrastive training
- 🔗 Improve embedding quality with multi-hop queries
- ⚙️ Fine-tune a bi-encoder embedding model
- 📊 Evaluate whether fine-tuning improves retrieval
- 🚀 Deploy the fine-tuned model in your pipeline

## ⚙️ Setup

In this tutorial, we will fine-tune the base model Llama-Nemotron-Embed-1B-v2, a 1-billion-parameter embedding model that balances quality and inference cost. To get started, follow this setup guide.

## 📚 Step 1: Generate Training Data from Documents

Fine-tuning an embedding model requires thousands of (query, relevant document) pairs. Most use cases don't have this data readily available, and creating it manually is expensive, slow, and often biased by the annotator's personal interpretation of what's "relevant." Instead of labeling data by hand, you can use an LLM (nvidia/nemotron-3-nano-30b-a3b) to read your documents and automatically generate high-quality synthetic question–answer pairs:

```shell
nemotron embed sdg -c default corpus_dir=./data/my_domain_docs
```

How does it work? Behind the scenes, this runs a four-stage synthetic data generation (SDG) pipeline powered by NeMo Data Designer.

What does the output look like?
**Source document chunk:**

> The thermal design power (TDP) of the H100 GPU is 700W in SXM form factor. The cooling solution must maintain junction temperature below 83°C under sustained workloads. Liquid cooling is recommended for dense deployments exceeding 4 GPUs per node, as air cooling cannot dissipate sufficient heat in standard 2U chassis configurations.

**Generated QA pairs:**

```json
{
  "question": "What cooling approach is recommended when deploying more than 4 H100 GPUs per server node?",
  "answer": "Liquid cooling is recommended for dense deployments exceeding 4 GPUs per node, as air cooling cannot dissipate sufficient heat in standard 2U chassis configurations.",
  "query_type": "contextual",
  "reasoning_type": "factual",
  "question_complexity": 3,
  "segment_ids": [1],
  "quality_score": 8.5
}
```

```json
{
  "question": "How does the 700W TDP of the H100 SXM constrain the choice between air and liquid cooling in multi-GPU configurations?",
  "answer": "The 700W TDP generates substantial heat that must be dissipated to keep junction temperatures below 83°C. In dense configurations exceeding 4 GPUs per node, air cooling in standard 2U chassis cannot handle this thermal load, making liquid cooling necessary.",
  "query_type": "multi_hop",
  "reasoning_type": "causal",
  "question_complexity": 4,
  "segment_ids": [1, 2],
  "hop_count": 2,
  "quality_score": 9.0
}
```

Notice the difference: the first question is
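The metadata fields on each generated record (`query_type`, `hop_count`, `quality_score`) make it easy to screen the synthetic set before training. A minimal sketch of such a filter, assuming the records have been parsed into Python dicts with the fields shown above; the function name and threshold values are illustrative, not the recipe's defaults:

```python
def filter_pairs(records, min_quality=7.0, min_complexity=2):
    """Keep only synthetic QA pairs judged good enough for training.

    Illustrative filter: assumes each record is a dict shaped like the
    SDG output above; thresholds are example values, not defaults.
    """
    kept = []
    for rec in records:
        if rec.get("quality_score", 0) < min_quality:
            continue  # the LLM judge scored this pair too low
        if rec.get("question_complexity", 0) < min_complexity:
            continue  # trivially easy questions add little training signal
        kept.append(rec)
    return kept

records = [
    {"question": "...", "query_type": "contextual",
     "question_complexity": 3, "quality_score": 8.5},
    {"question": "...", "query_type": "multi_hop",
     "question_complexity": 4, "hop_count": 2, "quality_score": 9.0},
    {"question": "...", "query_type": "contextual",
     "question_complexity": 1, "quality_score": 4.0},
]
print(len(filter_pairs(records)))  # 2: the low-scoring record is dropped
```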