Hi! Could someone please help me with fine-tuning the MegaMolBART model? I’m essentially trying to replicate the fine-tuning process from this repository (GitHub - Sanofi-Public/LipoBART) on my own set of molecules, but it seems that recent updates to the BioNeMo documentation conflict with the tutorial BioNeMo - MegaMolBART Inferencing for Generative Chemistry — NVIDIA BioNeMo Framework, which I’m following on a GCP VM. Is there a guide or tutorial covering the whole process end to end?
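To make the question concrete, below is roughly the kind of command I’ve been trying inside the BioNeMo container, modeled on the LipoBART repo. The script name `downstream_physchem.py`, the config name, and all paths are my own guesses/placeholders rather than confirmed entry points in the current framework, so part of my question is what the correct equivalents are now:

```bash
# Inside the BioNeMo framework container on the GCP VM.
# The script name, config name, and all paths below are placeholders based
# on my reading of the LipoBART repo; they are NOT confirmed against the
# current BioNeMo docs, which is exactly where I'm getting stuck.
cd /workspace/bionemo

# restore_from_path       -> pretrained MegaMolBART .nemo checkpoint
# model.data.dataset_path -> my own CSV of SMILES strings + target values
python examples/molecule/megamolbart/downstream_physchem.py \
    --config-path=conf \
    --config-name=finetune_config \
    restore_from_path=/models/megamolbart.nemo \
    model.data.dataset_path=/data/my_molecules.csv \
    trainer.devices=1 \
    trainer.max_epochs=20
```

The dotted `key=value` overrides are the Hydra convention that the NeMo/BioNeMo example scripts I’ve seen use for configuration, but I may be mixing up conventions from different framework versions.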