Namdinator guide


What will you get?

An output PDB file fitted to the EM density map (last_frame.pdb), a Phenix real-space refined version of that file (last_frame_rsr.pdb), a trajectory file of the actual MDFF simulation, and a log file of the whole run.


Running Namdinator

To start a Namdinator run you need to either upload a density map (in .ccp4, .mrc or .map format) or fetch a density map from the EMDB server using a valid EMDB entry code. You also need to either upload a PDB file or fetch a PDB from the RCSB using a valid PDB code. The resolution of the map is mandatory and must be entered in the “Map resolution” field before a run can commence.


The default settings for Namdinator are recommended for initial runs, as they have been found sufficient for the majority of cases. When using Namdinator to fit models with large parts located outside the density, the number of simulation steps and the G-force scaling factor are the obvious parameters to experiment with.



The input model needs to be roughly fitted to the map before using Namdinator. This means that you cannot simply fetch the PDB of one model and the map of a second model and expect the two to be compatible for fitting in Namdinator. Instead, the PDB model should be downloaded and fitted using Colores from Situs, Chimera’s Fit in Map, or manually in Coot or PyMOL. The whole model does not need to be inside the density, just part of it.
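Rigid-body fitting tools such as Colores or Chimera’s Fit in Map essentially search for the placement of the model that maximizes the correlation between model-derived density and the experimental map. The sketch below illustrates only the core idea, as a brute-force translational search over a small synthetic NumPy volume (real tools also search rotations and use FFT acceleration; all names here are illustrative):

```python
import numpy as np

def best_translation(map_density, probe_density, max_shift=5):
    """Brute-force search for the integer voxel shift of probe_density
    that maximizes its correlation with map_density."""
    best, best_score = (0, 0, 0), -np.inf
    for dx in range(-max_shift, max_shift + 1):
        for dy in range(-max_shift, max_shift + 1):
            for dz in range(-max_shift, max_shift + 1):
                shifted = np.roll(probe_density, (dx, dy, dz), axis=(0, 1, 2))
                score = np.sum(map_density * shifted)
                if score > best_score:
                    best_score, best = score, (dx, dy, dz)
    return best

# Synthetic example: a density blob, and a copy translated by (2, -1, 3).
rng = np.random.default_rng(0)
vol = np.zeros((16, 16, 16))
vol[6:10, 6:10, 6:10] = rng.random((4, 4, 4)) + 1.0
target = np.roll(vol, (2, -1, 3), axis=(0, 1, 2))

print(best_translation(target, vol))  # → (2, -1, 3), the applied shift
```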


Simulation time and G-scale

The G-force scaling parameter scales the force applied to the model in proportion to how far it is from the potential minimum defined by the map. If the G-scale is set too high, Namdinator may crash because atoms can reach too high velocities; choosing an appropriate simulation time (number of steps) can therefore be necessary. Additionally, increasing the number of minimization steps can also help avoid this problem.
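Why a too-high G-scale makes the simulation blow up can be illustrated with a toy 1D integrator (this is not the NAMD integrator, just a minimal sketch of the principle): scaling the force up also scales the per-step velocity change, and past a threshold the fixed time step can no longer keep the motion stable.

```python
def max_displacement(g_scale, steps=50, dt=0.5, k=1.0):
    """Symplectic-Euler integration of a particle in a harmonic well
    U(x) = g_scale * k * x**2 / 2, standing in for a map-derived
    potential whose force is multiplied by the G-scale factor."""
    x, v, biggest = 1.0, 0.0, 0.0
    for _ in range(steps):
        v -= g_scale * k * x * dt   # force = -dU/dx, scaled by g_scale
        x += v * dt
        biggest = max(biggest, abs(x))
    return biggest

print(max_displacement(0.3))    # modest scaling: motion stays bounded
print(max_displacement(100.0))  # excessive scaling: amplitude explodes
```

Reducing dt (more, shorter steps) restores stability for a given g_scale, which is the toy-model analogue of compensating a higher G-scale with an adjusted simulation length.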


Tips for getting the most out of Namdinator

If Namdinator fails with errors such as “Bad global bond/angle/dihedral count”, it is advisable to load the input PDB file (with HETATM records removed manually) into VMD and run autopsf on it (see this tutorial for help: https://www.ks.uiuc.edu/Training/Tutorials/science/mdff/tutorial_mdff-html/node4.html). If the resulting model displays any extraordinarily long bonds, remove one of the involved residues from the PDB file and try Namdinator again.



If Namdinator crashes because atoms are moving too fast and increasing the minimization steps does not solve it, it is advisable to visually inspect the atoms that triggered the crash, as listed in the log file, using programs such as PyMOL, VMD or Coot. Note that NAMD only outputs atom numbers, and these do not correspond to the atom numbering in the input PDB file; instead they refer to the atom numbering in the .psf file created by autoPSF. From the .psf file, the residue number and chain ID of the problematic atom(s) can be obtained, enabling visual inspection of those atoms in the input PDB file. It is often obvious why a specific atom is causing the simulation to crash, and it is advisable to correct such residues manually, or delete them from the input PDB, before trying again.
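The atom-number-to-residue lookup can be scripted. The sketch below parses the !NATOM section of an X-PLOR/CHARMM-style .psf file, assuming the common whitespace-separated column order (atom ID, segment/chain, residue number, residue name, atom name, ...); PSF column layouts vary between flavors, so verify this against your own file before relying on it.

```python
def psf_atom_lookup(psf_text, atom_number):
    """Return (segment, resid, resname, atom_name) for a NAMD atom
    number, read from the !NATOM section of a PSF file."""
    in_atoms = False
    for line in psf_text.splitlines():
        if "!NATOM" in line:
            in_atoms = True
            continue
        if in_atoms:
            fields = line.split()
            if len(fields) < 5:        # blank line ends the section
                break
            if int(fields[0]) == atom_number:
                return fields[1], int(fields[2]), fields[3], fields[4]
    return None

# Minimal fabricated PSF fragment for illustration.
example = """\
       3 !NATOM
       1 A        42       ALA      N        NH1    -0.470000       14.0070
       2 A        42       ALA      CA       CT1     0.070000       12.0110
       3 B        7        GLY      N        NH1    -0.470000       14.0070
"""
print(psf_atom_lookup(example, 3))  # → ('B', 7, 'GLY', 'N')
```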



In general, it is always a good idea to run iterative rounds of Namdinator, where the output from one round is used as input for the next. This has been shown to be good at freeing models stuck in a small local minimum and at tidying up more severe clashes. Varying the number of macro cycles (between 1 and 5) used for phenix real-space refinement has also been observed to give very different results.



To focus the fitting on a specific part of the map, segmentation of the input map can be a very powerful approach. This is especially useful for fitting individual domains of a multi-domain model, or for preventing the model from moving into density you know it does not belong in, e.g. the micelle density of membrane proteins. Furthermore, if the input map contains density for large glycosylations or large ligands, removing the corresponding density via segmentation can also improve the results, as HETATM records are automatically removed from the input PDB file and thus not included in the simulation.
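Dedicated segmentation tools (e.g. Segger in Chimera) are the proper route, but the underlying operation amounts to zeroing voxels outside a region of interest. A minimal NumPy sketch on a synthetic volume, keeping only density inside a sphere around a point of interest (a real map would be read and written with a library such as mrcfile, which is not shown here):

```python
import numpy as np

def mask_sphere(density, center, radius):
    """Zero all voxels farther than `radius` (in voxels) from `center`."""
    idx = np.indices(density.shape)
    dist2 = sum((idx[i] - center[i]) ** 2 for i in range(3))
    out = density.copy()
    out[dist2 > radius ** 2] = 0.0
    return out

rng = np.random.default_rng(1)
vol = rng.random((24, 24, 24))
segmented = mask_sphere(vol, center=(12, 12, 12), radius=6)
# Corner voxels are zeroed; voxels near the center are untouched.
print(segmented[0, 0, 0], segmented[12, 12, 12] == vol[12, 12, 12])
```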



To obtain satisfactory results when fitting a model that requires relatively large conformational changes, it can be very beneficial to use differently filtered versions of the input map. The input map can be low-pass filtered to 10, 15 or 20 Å using various EM software packages (EMAN, Relion, Chimera etc.) and then used as input for Namdinator. The resulting output model can then be used as input for another round of Namdinator, this time against the original unfiltered map. In such scenarios it may be beneficial to run the first Namdinator round with a relatively low G-scale value (-g 0.05-0.1) and a high number of steps (-s 100,000-500,000) together with the low-pass filtered map, followed by a second round against the original unfiltered map with the G-scale increased relative to the first round (0.5-5). Several Namdinator runs will most likely be needed to identify the right combination of these parameters.
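The low-pass filtering itself is a standard operation available in EMAN, Relion and Chimera; conceptually it attenuates Fourier components beyond a cutoff, as in this NumPy sketch on a synthetic volume (the cutoff here is in reciprocal voxel units; converting a target resolution in Å to such a cutoff requires the map's voxel size, which is omitted for simplicity):

```python
import numpy as np

def lowpass_gaussian(density, cutoff_freq):
    """Attenuate spatial frequencies above cutoff_freq (cycles/voxel)
    with a Gaussian fall-off, smoothing the volume."""
    ft = np.fft.fftn(density)
    freqs = [np.fft.fftfreq(n) for n in density.shape]
    fx, fy, fz = np.meshgrid(*freqs, indexing="ij")
    f2 = fx**2 + fy**2 + fz**2
    ft *= np.exp(-f2 / (2 * cutoff_freq**2))
    return np.fft.ifftn(ft).real

rng = np.random.default_rng(2)
vol = rng.random((16, 16, 16))
smoothed = lowpass_gaussian(vol, cutoff_freq=0.05)
print(smoothed.var() < vol.var())  # high-frequency noise is suppressed
```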



MDFF in general is not well suited for conformational changes where parts of the model undergo large rotations (≥ 40-45 degrees). In such cases it is highly recommended to split the input model into independent domains, if applicable, and rotate them manually using programs such as Coot or Chimera. The domains can then either be combined into a single PDB file for one Namdinator run, or run independently of each other in multiple Namdinator runs. This kind of manual intervention can not only greatly enhance the quality of the results obtained with Namdinator, but can sometimes be the difference between failure and success.
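When the domains coincide with chains, splitting the model can also be scripted. A minimal sketch that partitions the ATOM records of a PDB text by chain ID (column 22 of the fixed-column PDB format); proper tools such as pdb-tools or Biopython handle the many edge cases this ignores:

```python
def split_by_chain(pdb_text):
    """Group ATOM/TER records of a PDB file by chain ID (column 22)."""
    chains = {}
    for line in pdb_text.splitlines():
        if line.startswith(("ATOM", "TER")) and len(line) > 21:
            chains.setdefault(line[21], []).append(line)
    return {cid: "\n".join(recs) + "\nEND\n" for cid, recs in chains.items()}

# Tiny fabricated two-chain example.
pdb = """\
ATOM      1  N   ALA A   1      11.104  13.207   2.100  1.00 20.00           N
ATOM      2  CA  ALA A   1      12.560  13.207   2.100  1.00 20.00           C
ATOM      3  N   GLY B   1       1.104   3.207   0.100  1.00 20.00           N
"""
parts = split_by_chain(pdb)
print(sorted(parts))             # → ['A', 'B']
print(parts['B'].count("ATOM"))  # → 1
```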



Phenix.real_space_refine is called using the default settings. While this is sufficient and beneficial for many scenarios, it will not work well for all cases. For such cases it is advisable to run phenix.real_space_refine manually in order to use non-default settings. Make sure to use the last_frame.pdb file as input, together with the input map and the stated resolution of the map. Note that high-resolution (<3 Å) structures often do not benefit from phenix real-space refinement!


How to expand crystallography maps to P1 spacegroup (example)

The commands below are based on the Phenix software suite and on files downloaded directly from https://www.rcsb.org using the default RCSB naming convention. The downloaded PDB file is PDBID.pdb and the corresponding map coefficients are in PDBID_phases.mtz.



To obtain an mtz file in spacegroup P1, use the following command:



iotbx.reflection_file_editor PDBID_phases.mtz --expand_to_p1 output_file=PDBID_p1.mtz



To generate a CCP4 map from the P1 mtz file, using weighted map coefficients and excluding R-free reflections from the map calculation, use the following command:



phenix.mtz2map PDBID.pdb PDBID_p1.mtz labels=FWT,PHWT --remove extension=ccp4