User Guide

This guide is intended to be used as a reference manual. You may also want to follow the simple steps described in the tutorials, which give usage examples of the most important utilities. More documentation is also available on the methodology page and in the published articles.
Contents:
collage - Simultaneous Multi-Fragment Refinement
colores - Exhaustive One-At-A-Time 6D Search
eul2pdb - Graphical Representation of Euler Angles
map2map - Format Conversion
matchpt - Point Cloud Matching
pdb2sax - Create a Simulated SAXS Bead Model from a PDB
pdb2vol - Create a Volumetric Map from a PDB
pdbsymm - Symmetry Builder
qplasty - Interpolation of Sparsely Sampled Displacements
quanpdb - Vector Quantization of a PDB
quanvol - Vector Quantization of a Volumetric Map
vol2pdb - Create a PDB from a Volumetric Map
volaver - Map Averaging
voldiff - Discrepancy / Difference Mapping
voledit - Inspecting 2D Sections and Editing of 3D Maps
volfltr - Denoising 3D Maps and 2D Image Stacks
volhist - Inspecting and Shifting the Voxel Histogram
volmult - Map / Mask Multiplication
voltrac - Alpha-Helix Detection and Filament Tracing
Header File and Library Routines

collage - Simultaneous Multi-Fragment Refinement
Former name: colacor
Purpose:
collage performs an off-lattice Powell optimization that refines a single structure or a multi-fragment model (consisting of n input PDB files) to the nearest maximum of the cross-correlation score. The needed start model of fragments can be built by eye in a graphics program, or based on colores or matchpt solutions. Due to the large basins of attraction of each peak of the cross-correlation score, the program is quite tolerant of initial orientational or translational mismatches, and it is used without the colores-style exhaustive search and peak search steps. The simultaneous multi-fragment optimization of 6n rigid-body degrees of freedom is advantageous because the fragments see each other, and steric clashes are thus avoided by means of the normalization of the cross-correlation. For more information see (Birmanns et al., 2011).
As an additional useful option, collage can be used to report the cross-correlation coefficient between a density map and aligned atomic structure(s) without any fitting performed.
Basic usage (at shell prompt):
./collage <Density map> <PDB file(s)> -res <float> -cutoff <float>
The basic input parameters are:
<Density map> Density map in Situs or CCP4/MRC format (auto detect). To convert other maps to either of these formats use the map2map utility.
<PDB file(s)> A single or few input PDB files can be specified as a sequential list (all arguments are white-space delimited, i.e. there should be no commas or other symbols between file names). To avoid very long arguments when processing a large number of input files, one can also specify a directory name as the second argument. The designated directory should contain only relevant PDB input structures (identified by a .pdb or .PDB filename extension).
-res <float> Estimated resolution of the density map in Å. [default: -res 15.0]
-cutoff <float> Density map threshold value. All density levels below this value are ignored. You can use volhist to rescale or shift the density levels in the voxel histogram. [default: -cutoff 0.0]
-corr <int> This option controls the fitting criterion. Two options are implemented:
-corr 0 Standard linear cross-correlation: the scalar product between the density maps of the low-resolution map and the low-pass filtered atomic structure. This is the recommended criterion for high-resolution (<10 Å) docking or for multi-fragment docking at all resolution levels. [default]
-corr 1 A Laplacian filter is applied to maximize the fitting contrast. This is the recommended docking criterion for low-resolution docking (>10 Å) of single fragments. To provide for a more robust algorithm when dealing with cropped or thresholded experimental data, we also implemented a mask that filters out hard artificial surfaces. Due to the masking and filtering, expect overall smaller correlation values compared to -corr 0.
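The standard linear cross-correlation of -corr 0 can be illustrated with a small numeric sketch (an illustration only, not the Situs implementation; the toy map arrays and the norm-based normalization are assumptions):

```python
import numpy as np

def cross_correlation(map_a, map_b):
    """Normalized scalar product of two density maps on the same lattice."""
    a = map_a.ravel()
    b = map_b.ravel()
    # Normalize by the vector norms so that identical maps score 1.0.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy example: a map compared with itself and with a rescaled copy.
rng = np.random.default_rng(0)
target = rng.random((8, 8, 8))
assert abs(cross_correlation(target, target) - 1.0) < 1e-9
# Scale factors cancel in the normalization, so the score is again ~1.0.
assert abs(cross_correlation(target, 0.5 * target) - 1.0) < 1e-9
```

Because of the normalization, the score rewards shape agreement rather than absolute density values, which is why overlapping fragments (whose summed density distorts the shape) are penalized in multi-fragment fitting.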
More advanced options (at shell prompt):
-ani <float> Defines the resolution anisotropy factor (z direction vs. x,y plane) [default: -ani 1.0]. Allows one to set a different resolution in the z direction vs. the x,y plane. E.g. "-ani 1.5 -res 20" specifies a 30 Å resolution in the z direction, and a 20 Å resolution in x,y. This is useful for researchers dealing with membrane protein or tomography reconstructions that have a reduced resolution in the z direction.
-nopowell This flag skips the Powell optimization. Only the cross-correlation coefficient is computed. By default the Powell optimization is turned on.
-pwti <float int> Powell tolerance and max number of iterations of the Powell algorithm. These two parameters control the convergence of the optimization. By default the tolerance is set to 1e-6 and the max iterations are limited to 50.
-pwdr <float float> Initial gradient of the translational and rotational search in the Powell optimization. By default the initial rotational gradient is set to 3.5 degrees. The rotational gradient cannot be larger than 10 degrees; if a larger value is chosen, that value is ignored and the gradient is set to 10 degrees. The translational initial gradient is set to 25% of the voxel spacing. To use the default value only for the rotational or only for the translational gradient, choose a negative number for the parameter that should be left at its default, and your chosen value for the other.
-pwcorr <int> This option sets the Powell correlation algorithm. By default, the fastest algorithm that reproduces the standard cross-correlation coefficient to within the Powell tolerance is determined at runtime.
-pwcorr 0 Determined at runtime [default]
-pwcorr 1 Standard three-step code
-pwcorr 2 Three-step code with mask applied
-pwcorr 3 One-step code optimized for a single, small PDB
-boost <int float float> Steric exclusion option (0 = scale; 1 = power), boost threshold t, and scale factor or exponent u. In multi-fragment fitting, overlapping subunits lead to enhanced densities in the overlap region that reduce correlation values with the target map. This penalty can be increased by applying a scale factor or power law to high densities resulting from steric overlap. The threshold parameter t defines the fraction (typically < 1) of the maximum single-subunit density at which the boost kicks in. Densities above this threshold are increased either by simple scaling (option 0): f(x>t) = u*x, or by a power function (option 1) which is continuously differentiable at the threshold: f(x) = x^u (t=0), f(x>t) = (t - t/u) + (t/u)*(x/t)^u (t>0). In practice thresholds near 0.9 and factors of 2-5, or exponents of 5-20, seem to work fine. Note that the density transformation affects only the upper tail of the density distribution, i.e. non-overlapping regions are largely unaffected. [default: none]
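The power-law variant of the boost transformation can be checked numerically; the sketch below (an illustration, not the collage source) confirms that it is continuous with slope 1 at the threshold, so sub-threshold densities are unaffected while overlap densities are amplified:

```python
import numpy as np

def boost_power(x, t, u):
    """Power-law density boost: identity below t,
    (t - t/u) + (t/u) * (x/t)**u above t."""
    x = np.asarray(x, dtype=float)
    return np.where(x > t, (t - t / u) + (t / u) * (x / t) ** u, x)

t, u = 0.9, 5.0
eps = 1e-6
# Continuity: the boosted value approaches t as x -> t from above.
assert abs(boost_power(t + eps, t, u) - t) < 1e-5
# Smoothness: the one-sided slope at the threshold is ~1.
slope = (boost_power(t + eps, t, u) - boost_power(t, t, u)) / eps
assert abs(slope - 1.0) < 1e-4
# Densities well above the threshold are strongly amplified.
assert boost_power(1.2, t, u) > 1.2
```

The continuity check is what makes the power option (1) preferable for optimization: the score surface stays smooth across the threshold, unlike simple scaling (option 0), which introduces a kink at x = t.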
Input at program prompt:
None.
Output:
Shell window: The cross-correlation coefficient and other useful information about the progress of the refinement. Depending on the overlap of structure and map, a good fit with the -corr 1 (Laplacian filter) setting may have values up to 0.5, and with -corr 0 (standard correlation) may have values up to 0.9. These values are smaller if the structures do not account for the entire map density.
Files:
cge_001.pdb ... cge_00n.pdb The atomic coordinates in PDB format of the final fits for each input fragment.
cge_powell.log This file contains information about the 6n-dimensional Powell off-lattice search. Rotational and translational coordinates correspond to the first 6 columns; the Euler angles are in degrees.

colores - Exhaustive One-At-A-Time 6D Search
Purpose:
colores is a general-purpose, multiprocessor-capable rigid-body search tool, suitable for one-at-a-time fitting of single subunit structures, which are not necessarily expected to account for the full map. The fitting procedure consists of three separate steps: (A) an exhaustive rigid-body search on a discrete 6D lattice (3 translational and 3 rotational degrees of freedom); (B) an automatic ("black box") peak detection based on the correlation scores on the lattice (returning a set number of solutions); (C) the final off-lattice refinement of solutions to the nearest maximum of the correlation score (similar to collage). The program supports the use of a Laplacian-style filter (can be turned off) for contour-based matching that in many cases increases the fitting contrast at medium to low resolution (for more info see the corresponding methods page and Chacón and Wriggers, 2002).
One can balance the precision of the search (angular granularity, option -deg, and translational granularity = voxel spacing of map) with the compute expense. However, please consider first the relationship of procedures (A-C) and the granularity step sizes. The translational search in (A) is FFT-accelerated, whereas the rotational search is performed by evaluating a list of Euler angles for each voxel. In recent years a number of colores clones have been developed by us and other groups that aim to accelerate the search in (A) further, but the refinements in step (C) cannot be FFT-accelerated, and our tests showed that for most practical purposes the current implementation of (A) is efficient enough to shift the performance bottleneck to steps (B) and (C). The standard translational granularity (voxel spacing of EM map sampled at Nyquist rate) and the recommended rotational granularity (20-30°) are well within the large basin of attraction of the refinement in (C), so finer granularity is not normally needed. The proposed relatively coarse sampling in (A) also has the benefit of providing more spread-out solutions in step (B), resulting in fewer redundant runs in (C). Therefore, a default -deg rotational step size of 30° is provided, and a translational downsampling is performed, when necessary, to match the voxel spacing to the map resolution according to the Nyquist rate (this is a sanity check that prevents absurdly small translational steps, e.g. in maps created by pdb2vol).
There is no reliable way to validate the accuracy of the fitted models based on the precision of the fitting alone (shape of peaks, scoring values, etc.), as was pointed out by Henderson et al., 2012. We believe that models are best validated in a comprehensive test that includes independent biological information. As an inspiration for how this is done for colores, see the supplementary material of a user paper we recommend: Tung et al., 2010, doi:10.1038/nature09471.
Basic usage (at shell prompt):
./colores <density map> <PDB structure> -res <float> -cutoff <float> -deg <float> -nprocs <int>

The basic input parameters are:
<density map> Density map in Situs or CCP4/MRC format (auto detect). To convert other maps to either of these formats use the map2map utility.
<PDB structure> Atomic structure in PDB format.
-nprocs <int> This option sets the number of processors used for the on-lattice 6D search and the off-lattice Powell optimization. colores supports shared-memory systems with multiple-core and/or hyperthreaded processors. [default: the number of cores of the CPU]
-res <float> Estimated resolution of your density map in Å. [default: -res 15.0]
-cutoff <float> Density map cutoff value. All density levels below this value are set to zero. You can use volhist to rescale the density levels or to shift the background peak in the voxel histogram to the origin. [default: -cutoff 0.0]
-deg <float> Angular sampling of the rotational search space in degrees. For typical electron microscopy maps the recommended angular step size is 20-30° (a smaller step size might return near-redundant solutions in the peak search, and any orientational mismatch will be refined in the subsequent Powell optimization anyway). [default: -deg 30.0]
-corr <int> This option controls the fitting criterion. Two options are implemented:
-corr 0 [default for res. <10 Å] Standard linear cross-correlation: the scalar product between the density maps of the low-resolution map and the low-pass filtered atomic structure. For resolutions > 10 Å this criterion is less discriminative.
-corr 1 [default for res. >=10 Å] A Laplacian filter is applied to maximize the fitting contrast. This is the recommended docking criterion for low-resolution docking (up to 25 Å resolution). To provide for a more robust algorithm when dealing with cropped or thresholded experimental data, we implemented a mask that filters out hard artificial surfaces. Due to the masking and filtering, expect overall smaller correlation values compared to -corr 0. Also, due to the emphasis on contour matching over interior volume matching, some solutions may not be fully contained within the map if the subunit surfaces are large compared to the volume.
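Why a Laplacian filter raises contour contrast can be seen with a generic 6-neighbor stencil (a textbook discrete Laplacian; the actual colores kernel and its surface mask are not reproduced here):

```python
import numpy as np

def laplacian3d(vol):
    """6-neighbor discrete Laplacian of a 3D map (edge-replicated padding)."""
    p = np.pad(vol, 1, mode="edge")
    return (p[2:, 1:-1, 1:-1] + p[:-2, 1:-1, 1:-1]
            + p[1:-1, 2:, 1:-1] + p[1:-1, :-2, 1:-1]
            + p[1:-1, 1:-1, 2:] + p[1:-1, 1:-1, :-2]
            - 6.0 * vol)

# A flat interior gives zero response: only density *changes* survive.
flat = np.ones((6, 6, 6))
assert np.allclose(laplacian3d(flat), 0.0)

# A step edge (half-filled box) responds only along the contour.
step = np.zeros((6, 6, 6))
step[:, :, 3:] = 1.0
assert np.any(laplacian3d(step) != 0.0)
```

Since flat interior density contributes nothing after filtering, the correlation score is dominated by the molecular contour, which is the discriminative feature at low resolution.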
More
advanced
options (at shell prompt):
ani
<float>
Defines the resolution anisotropy
factor (z direction vs. x,y
plane)
[default: ani 1.0]. Allows
one to set a different resolution in the z direction vs. the x,y plane.
E.g. "ani 1.5 res 20" specifies a 30 A resolution in the z direction,
and a 20 A resolution in x,y. This is useful for researchers dealing
with
membrane protein or tomography reconstructions that have a reduced
resolution
in the z direction.
-erang <float float float float float float> Defines the rotational space limits according to the range of the Euler angles (psi, theta, phi). By default the entire rotational space is considered [default: -erang 0 360 0 180 0 360]. Note that the Euler angle range is not limited to these standard intervals, so you can also specify negative values (within certain limits), but in any event any colores output angles are remapped to the standard intervals. For example, if you want to perform a fine search of 2° angular sampling in only one of the Euler angles, these are the options:
./colores em.sit atoms.pdb -res 9.0 -cutoff 1.0 -deg 2 -erang 0 360 0 0 0 0
-euler <int> There are three ways to generate an exhaustive list of Euler angles that covers (nearly) uniformly the rotational space for a given angular sampling (option -deg). The proportional method yields very even results and also performs well for smaller intervals specified via -erang. The pole-sparsing method is widely used but yields slightly less uniform distributions. The spiral method also produces a less uniform distribution, but for medium- to low-resolution docking it is quite reasonable. The Euler angles are saved to a file col_eulers.dat, and you can edit this file and reload it using the -euler 3 option. This way, you can also load any manually generated Euler angle files.
-euler 0 Proportional method [default]
-euler 1 Pole-sparsing method
-euler 2 Spiral method
-euler 3 <filename> Input file
Here is an example generating the Euler angle distribution with 8° angular sampling using the spiral method:
./colores em.sit atoms.pdb -res 9.0 -cutoff 1.0 -deg 8 -euler 2
-peak <int> This option sets the peak search algorithm. By default, a combined sorting- and filtering-based approach is used. A standalone filtering-based algorithm is available as an option.
-peak 0 Original peak search by sort and filter [default]
-peak 1 Peak search by filter only
-explor <int> Controls the number of the best fits found in the 6D on-lattice search to be subsequently refined by Powell optimization. This number is only an upper bound for the final number, since redundant solutions are removed in the Powell stage. [default: -explor 10]
-sizef <float> FFT zero-padding factor. The low-resolution map is enlarged by a margin of width sizef times the map dimensions. We have optimized the zero padding empirically. [default: -sizef 0.1 for standard and 0.2 for Laplacian correlation]
-sculptor Save additional output files for interactive exploration (manual peak search) with Sculptor. [default: off]
-nopowell This flag skips the Powell optimization; only the on-lattice search is performed. By default the Powell optimization is turned on.
-pwti <float int> Powell tolerance and max number of iterations of the Powell algorithm. These two parameters control the convergence of the optimization. By default, the tolerance is set to 1e-6 and the max iterations are limited to 25.
-pwdr <float float> Initial gradient of the translational and rotational search in the Powell optimization. By default the initial rotational gradient is set to 25% of the angular sampling (but not larger than 10°), and the translational initial gradient is set to 25% of the voxel spacing. To use the default value only for the rotational or only for the translational gradient, choose a negative number for the parameter that should be left at its default, and your chosen value for the other.
-pwcorr <int> This option sets the Powell correlation algorithm. By default, the fastest algorithm that reproduces the standard cross-correlation coefficient to within the Powell tolerance is determined at runtime.
-pwcorr 0 Determined at runtime [default]
-pwcorr 1 Standard three-step code
-pwcorr 2 Three-step code with mask applied
-pwcorr 3 One-step code for small probe structures
-nopeaksharp This flag skips the peak sharpness estimation procedure in order to save processing time. By default, the peak sharpness estimation is turned on.
Input at program prompt:
None.
Output:
Here is a brief description of the output files (see also the file headers):
col_best*.pdb The atomic coordinates in PDB format of the best fits found in the search. The total number of best fits saved is controlled by the option -explor, but only non-degenerate fits are returned, so the number may be smaller than specified. The PDB header contains information about the docking (sampling, fit criteria used, correlation values, position and orientation, etc.). It also includes a table containing the angular variability of the correlation about the fit.
col_rotate.log This file contains the best translational fit (on-lattice) found for each rotation. The first 3 columns are the Euler angles (in degrees), the next 3 columns are the translational coordinates that gave the highest correlation value, followed by the correlation value (not normalized).
col_powell.log This file contains information about the Powell off-lattice search performed for the best fits from the 6D lattice search. As before, rotational and translational coordinates correspond to the first 6 columns, but note that the Euler angles are in radians.
col_trans.sit The on-lattice translation function (in Situs map format). Since the translational search space corresponds to the input map lattice, we can generate a map in which density values are the correlation values normalized by the maximum. You can use map2map to convert this to other formats.
col_trans.log Same as col_trans.sit, but instead of a map, the translational correlation values are stored in a regular file. Each row that corresponds to a lattice coordinate (columns 4,5,6) shows the corresponding Euler angles (columns 1,2,3) in degrees that exhibit the highest correlation value (column 7).
col_eulers.dat This file contains the list of uniform Euler angle triplets that defines the rotational space search. You can load such a file by using the option -euler 3. You can also inspect this file with the eul2pdb tool.
col_lo_fil.sit The zero-padded, interpolated, and filtered target volume in Situs format just prior to the correlation calculation. This map is useful mainly for inspecting the effect of low-pass or Laplacian filtering. You can use map2map to convert this to other formats.
col_hi_fil.sit The filtered (and centered) probe structure on the lattice in Situs format just prior to the correlation calculation. This map is useful mainly for inspecting the effect of low-pass or Laplacian filtering. You can use map2map to convert this to other formats.
Additional output files will be written for interactive exploration with the -sculptor option (see colores output and the Sculptor documentation).
Notes:
- Depending on the overlap of structure and map, one can expect correlation values of about 0.6-0.9 or 0.3-0.5 for standard and Laplacian correlation, respectively. These guideline numbers would be smaller if the fitted atomic structure does not account for the entire map density.
- The time estimation gives a quite accurate estimate of the on-lattice 6D search time; however, the subsequent peak search and off-lattice Powell optimization are not considered in the estimation and depend on the -explor number. You could save time by using only the carbon-alpha or backbone atoms of the input structure, but the savings are often insignificant, so we recommend fitting with all heavy (non-hydrogen) atoms.
- If you have a large map you can try to crop the data to a region of interest (e.g. the asymmetric unit). Allow for sufficient room for the probe structure, since you do not know its exact location a priori.
- This is a rigid-body search. If you expect large, induced-fit conformational changes, you can dissect your atomic structures into rigid-body domains and perform the docking with each of them individually. Alternatively, you may want to try our flexible fitting strategies.

eul2pdb - Graphical Representation of Euler Angles
Purpose:
The eul2pdb utility is used to generate a graphical representation of a set of Euler angles resulting from a colores run. The eul2pdb program writes a pseudo-atomic structure to a PDB-formatted file, where the set of Euler angles is represented as a set of points on a 10 Å radius sphere. This file can then be inspected with a visualization program (for example VMD).
Usage (at shell prompt):
./eul2pdb col_eulers.dat out_file
out_file: output file, PDB format
Input at program prompt:
None.
Output:
Pseudo-atomic structure in PDB format. Each triplet of Euler angles in the input file is represented as a point on a 10 Å radius sphere. The phi Euler angle (rotation in the projection plane) is encoded in the B-factor column of the PDB file (in radians), whereas theta and psi correspond to longitude and latitude on the sphere.
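The mapping of one Euler triplet to a sphere point can be sketched as below. The spherical-coordinate convention (theta as polar angle, psi as azimuth) is an assumption for illustration; the exact eul2pdb convention may differ:

```python
import math

def euler_to_sphere(psi_deg, theta_deg, phi_deg, radius=10.0):
    """Place an Euler triplet on a sphere: theta/psi pick the direction,
    while phi (the in-plane rotation) is returned separately in radians,
    mimicking its storage in the B-factor column."""
    theta = math.radians(theta_deg)
    psi = math.radians(psi_deg)
    x = radius * math.sin(theta) * math.cos(psi)
    y = radius * math.sin(theta) * math.sin(psi)
    z = radius * math.cos(theta)
    b_factor = math.radians(phi_deg)
    return (x, y, z), b_factor

pt, b = euler_to_sphere(45.0, 90.0, 180.0)
assert abs(math.hypot(*pt) - 10.0) < 1e-9  # point lies on the 10 Å sphere
assert abs(b - math.pi) < 1e-12            # phi stored in radians
```

Plotting one such point per triplet in col_eulers.dat makes gaps or clusters in the rotational sampling immediately visible.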

map2map - Format Conversion
Former names: convert, conformat
Purpose:
Situs programs require density maps either in Situs or in CCP4/MRC format. The details and conventions of these formats are documented elsewhere. The map2map program converts other map formats to and from these supported formats. The map2map utility reads and writes many file formats used by standard EM or crystallographic application software. These include the MRC, SPIDER, and CCP4 formats and generic 4-byte floating-point binary formats (automatic byte-order adjustment). X-PLOR maps in ASCII format, and ASCII files that contain a sequence of density values in free format, are also recognized. The reverse conversion to CCP4, MRC, X-PLOR, or SPIDER format is also supported.
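The byte-order issue for raw 4-byte floating-point maps comes down to reversing the byte order of each value; a minimal sketch (generic Python, not the map2map code):

```python
import struct

def swap_floats(raw):
    """Convert a raw buffer of 4-byte floats from big- to little-endian
    (the operation is its own inverse, so it also converts back)."""
    n = len(raw) // 4
    return struct.pack("<%df" % n, *struct.unpack(">%df" % n, raw))

# A few density values written big-endian, recovered after swapping.
big = struct.pack(">4f", 0.0, 0.5, 1.25, -2.0)
little = swap_floats(big)
assert struct.unpack("<4f", little) == (0.0, 0.5, 1.25, -2.0)
# Swapping twice restores the original buffer.
assert swap_floats(swap_floats(big)) == big
```

In practice map2map decides which order a file uses from its header fields; the sketch only shows the per-value transformation involved.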
Usage (at shell prompt):
./map2map file1 file2
file1: input file
file2: output file
Interactive input at program prompt (for automation see below):
- Input file format.
- Input-file-specific header fields, if they are missing or if manual editing of fields is selected.
Output:
Density file in the selected format. If necessary, the program permutes the axis order (CCP4 and MRC) and interpolates maps to a cubic lattice (X-PLOR, CCP4, MRC). Details vary by map format and are too numerous to list here; please read about the Situs and CCP4/MRC conventions and inspect the program text in the terminal window carefully.
Notes:
- Also check out the free conversion programs MAPMAN/RAVE, and especially em2em, which also has Situs and CCP4/MRC format options.
- The maps are quite robust under most round-trip conversions, but note that the header fields WIDTH, ORIGX, ORIGY, ORIGZ are not part of the SPIDER specification and cannot be saved in SPIDER format.
- Advanced users may try the manual assignment of header fields, if available.
- You can automate this interactive program by "overloading" the standard input (put the expected values in a script; see the run_tutorial.bash script in the tutorials).

matchpt - Point Cloud Matching
Supersedes former programs: qdock, qrange
Purpose:
The classic Situs 1.x style matchpt ("matchpoint") utility is a command-line program for matching arbitrarily sized 3D point sets (coarse-grained models), which can be generated on the fly or by using the output of the Situs programs quanpdb and quanvol. matchpt can dock a subunit into a larger target map, i.e. find N codebook vectors within another set with M vectors, N < M (where M ≈ nr. units * N), and match them. To solve this problem, matchpt uses a heuristic and investigates only a subset of all possible permutations of feature points (Birmanns and Wriggers, 2007).
The idea of matching point sets was based on the observation that for many low-resolution maps the numeric values of the cross-correlation (CC) are often in a narrow range and less discriminatory than the RMSD values of the feature points (Wriggers et al., 1999). This is due to the fact that feature points can reliably and reproducibly encode the molecular shape even in the absence of interior (secondary structure) density features. Therefore, for difficult low-resolution maps it makes sense to try matchpt as an alternative to the CC-based tools colores and collage.
In the default mode a user would explore the quality of the match of the point clouds by minimizing their RMSD. Alternatively, the minimum of the statistical variability (here: the sum of the average variabilities of both point sets) can be used for selecting an optimum number N, as it was found to be a good estimator for the docking accuracy (Wriggers and Birmanns, 2001). Finally, a user may wish to explore the standard cross-correlation (CC), which is discretely sampled by the point cloud matches (however, as stated, CC refinement is not the main purpose of the program, as more powerful CC-based tools exist).
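The core idea, minimizing point-cloud RMSD over assignments of feature points, can be shown in miniature with a brute-force toy (matchpt itself uses an anchor-point heuristic precisely to avoid enumerating all permutations; the Kabsch superposition below is a standard technique, not the matchpt source):

```python
import itertools
import numpy as np

def kabsch_rmsd(P, Q):
    """RMSD of point set P onto Q after optimal rigid superposition (Kabsch)."""
    Pc, Qc = P - P.mean(0), Q - Q.mean(0)
    U, S, Vt = np.linalg.svd(Pc.T @ Qc)
    d = np.sign(np.linalg.det(U @ Vt))
    R = U @ np.diag([1.0, 1.0, d]) @ Vt   # proper rotation (det = +1)
    return float(np.sqrt(((Pc @ R - Qc) ** 2).mean()))

def best_match(low_cv, high_cv):
    """Try every assignment of high-res vectors to low-res vectors (N small!)."""
    return min((kabsch_rmsd(high_cv[list(p)], low_cv), p)
               for p in itertools.permutations(range(len(high_cv))))

rng = np.random.default_rng(1)
high = rng.random((5, 3))
angle = np.pi / 3
Rz = np.array([[np.cos(angle), -np.sin(angle), 0.0],
               [np.sin(angle),  np.cos(angle), 0.0],
               [0.0, 0.0, 1.0]])
order = [3, 0, 4, 1, 2]
low = (high @ Rz.T)[order]          # rotated, shuffled copy of the cloud
rmsd, perm = best_match(low, high)
assert rmsd < 1e-9                  # the rigid match is recovered exactly
assert perm == tuple(order)         # ...with the correct point assignment
```

With N = 5 there are only 120 permutations; the factorial growth is why matchpt restricts itself to anchor-point triangles and a neighbor search instead of full enumeration.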
Basic usage (at shell prompt):
./matchpt file1 file2 file3 file4 [options]
file1: input file 1, codebook vectors from quanvol in PDB format. Use NONE if the codebook vectors should be calculated within matchpt. In that case matchpt will compute and match a series of vector sets and will return the result with the smallest RMSD. file3 also has to be set to NONE.
file2: input file 2, density map. Use NONE if no correlation calculation is desired.
file3: input file 3, codebook vectors from quanpdb in PDB format. Use NONE if the codebook vectors should be calculated within matchpt.
file4: input file 4, high-resolution structure in PDB format. Use NONE if only the codebook vectors should be matched.

Optional command line parameters:
-res <float> Estimated resolution of file2 in Å. This currently affects only the computed cross-correlation value, not the docking. [default: 15.0 Å]
-explor <int> Controls the maximum number of docking solutions that are 'explored' and written to disk. This number is an upper bound, since the solutions must pass the anchor point matching criteria below. [default: 10]
-mincv <int> For auto-generation of feature points, or "codebook vectors" (CV): minimum number N of vectors per structure (file4) unit; must be >= 4. [default: 4]
-maxcv <int> For auto-generation of codebook vectors: maximum number N of vectors per structure (file4) unit; must be >= mincv. [default: 9]
-units <float> For auto-generation of N and M (M ≈ units * N) codebook vectors: number of structure units contained in the target volume. [default: 1.0]
-nprocs <int> Number of parallel processing threads. This is still an experimental feature, so use with caution: we noticed that the parallel performance can be compiler- or machine-dependent. If a multithreaded application appears slow or unstable on your system, we recommend reducing the number of threads, or using the default serial application. [default: 1]
-anchor <float> Radius of the initial anchor point triangle search space in Å. [default: 12.0 Å; the larger, the slower]
-radius <float> Radius of the neighbor search in Å. [default: 10.0 Å; the larger, the slower]
-wildcards <int> How many unmatched points are allowed. To avoid false positives, this should not be larger than 10% of N. [default: 0; the larger, the slower]
-penalty <float> Wildcard penalty in Å: how much the solutions should be penalized if they include unmatched points. [default: 1 Å]
-runs <int> Number of anchor point runs. The algorithm will try different anchor point triangles if set to > 1. [default: 3]
-cluster <int> Number of statistically independent runs used in the clustering of the CV and in the determination of their variabilities. The CV and their variabilities become more accurate with a larger number, but compute time increases linearly. [default: 8; the larger, the slower]
-ident <float> Distance cutoff in Å for removing identical solutions. Useful mainly for oligomeric systems. [default: 0 Å; the higher, the more results are filtered]
-ranking <int> Sets the criteria for sorting the solutions and for the selection of the optimum number of CV when probing a series of codebook vectors in auto-generate mode. The left criterion in the list below controls the selection of the optimum number of CV (if a range of numbers is explored); the right criterion always controls the ranking of the final solutions written to disk:
-ranking 0: minimum RMSD of codebook vectors / RMSD [default]
-ranking 1: minimum RMSD of codebook vectors / CC
-ranking 2: minimum CV variability / RMSD
-ranking 3: minimum CV variability / CC
-ranking 4: maximum CC / RMSD
-ranking 5: maximum CC / CC
Output:
Shell window: Solution filenames, codebook vector RMSDs, cross-correlation coefficients, and permutations are printed out. The permutations indicate the order of low-res. vectors fitted to high-res. vectors.
Files:
mpt_CV_001.pdb ... mpt_CV_00n.pdb The fitted codebook vectors in PDB format.
mpt_CV_map.pdb The codebook vectors for file2 (volumetric map) in PDB format.
mpt_001.pdb ... mpt_00n.pdb The atomic coordinates in PDB format of the structures fitted by codebook vectors. [or mpt_001.log ... mpt_00n.log, log files (text format) with the fitting transformation.]
Notes:
- As in other Situs tools, the input density map should be in Situs or CCP4/MRC format (auto detect). To convert other maps to either of these formats use the map2map utility.
- For situations where a smaller structure has to be docked into a larger density (e.g. an oligomeric map), several parameters need to be adjusted. (1) Most importantly, the -units parameter defines the fraction of occupied volume (which may be non-integer), i.e. it estimates how many atomic structures (file4) fit into the volume (file2). (2) To estimate the number of codebook vectors (-mincv and -maxcv), divide the volume of the target map by (units * res^3), which gives an upper bound for the number of CV that should be used. It is useful to bracket near the 50% level of this upper bound, e.g. mincv ~ 20% and maxcv ~ 50%. (3) The -explor parameter controls how many files are written to disk. This should be at least the number of subunits (units) of the system, but in practice it should be set to a higher value to avoid false negatives (sometimes the algorithm finds multiple possible orientations for a single subunit, which might push another solution out of the top ranking list). (4) The parameter -ident can also help avoid finding multiple instances of the same unit. -ident filters the solutions based on the distance of the centroids of the found subunits: if two configurations are too close, only the one with the higher score is considered. It is recommended to try to increase the number of solutions first before filtering the found units with the -ident parameter.
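The rule of thumb in (2) is simple arithmetic. As a worked example with invented numbers (a hypothetical 15 Å map of volume 4.5e5 Å³ containing three copies of the structure):

```python
# Upper bound on codebook vectors per subunit: map volume / (units * res**3).
map_volume = 4.5e5   # Å^3, hypothetical
units = 3.0          # three copies of the structure in the map (assumed)
res = 15.0           # Å

upper = map_volume / (units * res ** 3)   # ~44 vectors per subunit
# Bracket around half that bound, e.g. -mincv ~20% and -maxcv ~50%:
mincv = round(0.2 * upper)
maxcv = round(0.5 * upper)
assert mincv >= 4 and maxcv >= mincv      # matchpt requires mincv >= 4
```

For these numbers the bracket works out to roughly -mincv 9 -maxcv 22; for very large maps, cropping the map first (see the next note) keeps these counts manageable.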
- Sometimes the default parameters simplify the search process too much and an insufficient number of solutions (or none) are found. In this case, first try to match the map as closely to the atomic structure as possible; e.g. one could segment or threshold the map with voledit to reduce the size of the occupied volume and to exclude outside densities or noise. One could also adjust the algorithm: e.g. an increase in the number of anchor-point triangles (via -anchor) leads to only a moderately increased runtime of the program. As a further step, one could also increase the search radius for potentially matching points (via -radius), which will increase the runtime more significantly.

pdb2sax
 Create a
Simulated SAXS Bead Model from a PDB
Purpose:
The pdb2sax
utility allows one to fill an input atomic structure with closepacked
spheres on a hexagonal lattice. It allows one to create simulated bead
models for validating Situs modeling applications.
Usage
(at shell prompt):
./pdb2sax file1 file2 radius
file1: inputfile, PDB format
file2: outputfile (bead model),
PDB format
radius: bead radius in Angstrom

Interactive input at
program prompt (also suitable for automation):
Choice of
atom
massweighting and
Bfactor cutoff level. Atoms with Bfactors above the cutoff level will
be ignored. You
can automate
this program by "overloading" the standard input if you
put expected
values in a script, see run_tutorial.bash script in the tutorials.
Output:
PDB file that contains the
centers of
the simulated beads with the radii in the occupancy column. This file
can then be either inspected with a visualization program (for example VMD), or processed into a bead
volume map
using pdb2vol.
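The hexagonal close packing behind pdb2sax can be illustrated with a small sketch. This is not the Situs implementation, just an idealized generator of ABAB-stacked close-packed sphere centers in a rectangular box; the real tool additionally keeps only beads that cover atoms of the input structure:

```python
import math

def hcp_lattice(box, r):
    """Idealized hexagonal close packing: sphere centers of radius r
    filling a rectangular box, nearest-neighbor distance 2r."""
    (x0, x1), (y0, y1), (z0, z1) = box
    d = 2.0 * r                        # nearest-neighbor distance
    dy = d * math.sqrt(3.0) / 2.0      # row spacing within a layer
    dz = d * math.sqrt(2.0 / 3.0)      # spacing between ABAB layers
    pts, k = [], 0
    z = z0
    while z <= z1:
        j = 0
        y = y0 + (k % 2) * d / (2.0 * math.sqrt(3.0))  # B-layer shift
        while y <= y1:
            x = x0 + ((j % 2) + (k % 2)) * d / 2.0     # row/layer offset
            while x <= x1:
                pts.append((x, y, z))
                x += d
            y += dy
            j += 1
        z += dz
        k += 1
    return pts

# beads of radius 2 A filling a 10 A cube
beads = hcp_lattice(((0.0, 10.0), (0.0, 10.0), (0.0, 10.0)), 2.0)
```

In a correct close packing no two bead centers are closer than one bead diameter, which is easy to verify on the output.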

pdb2vol
 Create a Volumetric Map from a PDB
Former name:
pdblur
Purpose:
The pdb2vol
utility is a realspace
convolution tool. It allows one to lower the resolution of an atomic
structure
to a userspecified value, or to create a bead model from atomic
coordinates.
The structure is first projected to a cubic lattice by trilinear
interpolation.
Subsequently, each lattice point is convoluted with one of five
supported
kernel (pointspread) functions.
Usage
(at shell prompt):
./pdb2vol file1 file2
file1: inputfile (PDB format)
file2: outputfile (density map)

Interactive input at
program prompt (also suitable for automation):
 If
water,
hydrogen, or codebook vector atoms are present, choice
of ignoring them.
 Choice
of
atom
massweighting and
Bfactor cutoff level. Atoms with Bfactors above the cutoff level will
be ignored.
 Desired
voxel
spacing for output
map.
 Kernel
width,
defined by either the
kernel halfmax radius rhalf (enter positive
value) or by the
target
resolution of the output map (enter value of resolution as negative
number).
The standard deviation (sigma) of the kernel is
assumed to be
half
the target resolution.
 Type
of
smoothing kernel:
 Gaussian,
exp(-1.5 r^2 / sigma^2)
 Triangular,
max(0, 1 - 0.5 r / rhalf)
 SemiEpanechnikov,
max(0, 1 - 0.5 r^1.5 / rhalf^1.5)
 Epanechnikov,
max(0, 1 - 0.5 r^2 / rhalf^2)
 Hard Sphere,
max(0, 1 - 0.5 r^60 / rhalf^60)
 Choice
of
correction for lattice
smoothing (subtract the lattice projection meansquare deviation from
the
kernel variance).
 The
kernel
amplitude at the kernel
origin (r=0).
 You can automate
this interactive program by "overloading" the standard input (if you
put expected
values in a script, see run_tutorial.bash script in the tutorials).
Output:
Density map either in
Situs or CCP4/MRC format
(format based on file name extension: Situs if .sit or
.situs, CCP4/MRC otherwise).
The new grid follows the coordinate system origin convention of the
atomic
structure and forms the smallest possible box that fully encloses the
structure
convoluted by the kernel.
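The projection-and-convolution scheme can be sketched in a few lines. This is a simplified stand-in, not the pdb2vol code: unit-mass points, a sparse dictionary grid, and only the Gaussian kernel, truncated at an assumed cutoff radius of two sigma:

```python
import math

def blur_points(points, spacing, sigma, cutoff=2.0):
    """Sketch of the pdb2vol scheme: project unit-mass points onto a
    cubic lattice by trilinear interpolation, then convolve with the
    Gaussian kernel exp(-1.5 r^2 / sigma^2), truncated at cutoff*sigma."""
    lattice = {}                       # sparse grid: (i, j, k) -> mass
    for x, y, z in points:
        fi, fj, fk = x / spacing, y / spacing, z / spacing
        i0, j0, k0 = int(math.floor(fi)), int(math.floor(fj)), int(math.floor(fk))
        tx, ty, tz = fi - i0, fj - j0, fk - k0
        for di in (0, 1):
            for dj in (0, 1):
                for dk in (0, 1):
                    # trilinear weights distribute each mass to 8 corners
                    w = ((tx if di else 1 - tx) *
                         (ty if dj else 1 - ty) *
                         (tz if dk else 1 - tz))
                    key = (i0 + di, j0 + dj, k0 + dk)
                    lattice[key] = lattice.get(key, 0.0) + w
    r = int(math.ceil(cutoff * sigma / spacing))   # kernel radius, voxels
    out = {}
    for (i, j, k), m in lattice.items():
        for di in range(-r, r + 1):
            for dj in range(-r, r + 1):
                for dk in range(-r, r + 1):
                    d2 = (di * di + dj * dj + dk * dk) * spacing ** 2
                    key = (i + di, j + dj, k + dk)
                    out[key] = out.get(key, 0.0) + m * math.exp(-1.5 * d2 / sigma ** 2)
    return out

# two unit masses 3 A apart on a 1 A grid, smoothed with sigma = 2 A
vol = blur_points([(1.0, 1.0, 1.0), (4.0, 1.0, 1.0)], 1.0, 2.0)
```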

pdbsymm
 Symmetry Builder
Supersedes
former program: hlxbuild
Purpose:
The
pdbsymm tool generates multiple copies of the input structure according
to a userspecified symmetry. Currently supported symmetry types
include: C, D and helical. The program assumes that principal symmetry
or helical axes are oriented in the z direction. If an input map
(optional) is specified, the x and yposition of the
principal symmetry or helical axis is automatically
set to
the center of
an xy crosssection*. If D symmetry is requested, the zposition of the
secondary axes is set to
the center of the yz crosssection*.
*Normally
the geometric center is used, but if
this center falls between voxels, in version 2.6.2 and
later it will be set to the
next highest
voxel.
E.g. the center of an evennumbered 64x128x256 map would be set
to voxel (33,65,129), whereas the center of an oddnumbered 63x127x255 map would be the exact
geometric center (32,64,128).
You can pad or crop maps
with voledit to
set the symmetry axes as desired, or the symmetry center
can be specified interactively without use of the optional
map.
The
optional map must be in Situs
or CCP4/MRC format (auto detect). To
convert other maps to either of these formats use the map2map
utility.
Usage (at
shell prompt):
./pdbsymm file1 [file2] file3
file1: inputfile, PDB format
file2: (optional) inputfile for
helical or symmetry axis (density map)
file3: outputfile, PDB format

Interactive input at
program prompt (also suitable for automation):
Depending
on
symmetry type:
 Helical
rise
per subunit (in zdirection).
 Angular
twist
per subunit (sign determines
handedness).
 Desired
number
of subunits to be
placed before file1 structure.
 Desired
number
of subunits to be
placed after file1 structure.
 Order
of
the principal symmetry.
 [If
file2
is unspecified: x and yposition
of helical axis (offset from file1 coordinate system origin).]
 zposition
of secondary symmetry axes (for D symmetry  offset from file1
coordinate system origin).
 You can automate
this interactive program by "overloading" the standard input (if you
put expected
values in a script, see run_tutorial.bash script in the tutorials).
Output:
Symmetry
PDB file containing
multiple copies of input PDB file.
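For C symmetry about a z-oriented axis, the copies are plain rotations in the xy-plane. A minimal sketch of the idea (not the pdbsymm code; helical symmetry would additionally add a per-copy rise in z):

```python
import math

def c_symmetry(points, order, axis_xy=(0.0, 0.0)):
    """Generate `order` copies of the coordinates, rotated about a
    z-oriented axis through (x0, y0) -- the C-symmetry case."""
    x0, y0 = axis_xy
    copies = []
    for n in range(order):
        a = 2.0 * math.pi * n / order
        c, s = math.cos(a), math.sin(a)
        copies.append([(x0 + c * (x - x0) - s * (y - y0),
                        y0 + s * (x - x0) + c * (y - y0),
                        z) for x, y, z in points])
    return copies

copies = c_symmetry([(1.0, 0.0, 5.0)], 4)   # C4 about the z axis
```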

qplasty
 Interpolation of Sparsely Sampled Displacements
Purpose:
This program performs an
approximative
flexible fitting of
atomic resolution data
based on a coarse input model of displacements. The interpolationbased
flexing is quite reasonable
at the carbon alpha level of proteins, but bond lengths and angles at
the atomic level may get distorted a bit. Flexing with qplasty is
offered as a userfriendly alternative to a more complicated molecular
dynamics simulation protocol. The
qplastyflexed structures may be processed further by a variety of
simulation or structure refinement tools. For more information see
(Rusu
et al., 2008).
Usage:
First, the user
must create
a suitable coarse model of the displacements using codebook
vectors as simulated markers for the feature positions before
and
after flexing. Details of
the modeling steps are explained in the basic
and advanced
flexing tutorials. The
displacements in the form of two PDB input files are then applied in
the UNIX command shell as follows. By default, the program assumes
Global IDW interpolation with exponent 8. The user may specify an
option byres
to turn on
interpolation by residue, or interactive
for a free choice of interpolation methods and parameters.
Usage (at
shell prompt):
./qplasty file1 file2 file3
file4
[options]
file1:
inputfile (atomic structure), PDB
format
file2: inputfile (start codebook
vectors),
PDB format
file3: inputfile (end codebook vectors or
displacements), PDB format
file4: outputfile (flexed atomic
structure),
PDB format
[options]:
optional flag for default
parameters or full interactive mode:
(default)
or
byatom : interpolation by atom
byres : interpolation by residue,
to reduce
the number of broken bonds
interactive : free choice of
methods and parameters

Optional interactive / manual input at
program prompt with interactive:
 The choice of interpolation
method
(ThinPlateSplines, Elastic Body Splines, Global Inverse Distance
Weighting, Local Inverse Distance Weighting). The default method (Global IDW interpolation
with exponent
8) performed best in our tests (Rusu
et al, 2008),
so there is no need to change it except for further validation of the
supported algorithms.
 Various kernel and parameter
choices
(for details see Rusu
et al, 2008).
Output:
 (Program
level:) Various interpolation parameters and methods.
 (Shell
level:) The flexed atomic coordinates will be exported into file4.
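The default interpolation can be sketched compactly. This is a toy version of global inverse distance weighting with exponent 8 (the qplasty default per the text above); the actual program is more elaborate:

```python
def idw_displace(atoms, markers, displacements, p=8):
    """Global inverse distance weighting: each atom moves by the
    weighted average of the marker displacements, weights ~ 1/d^p."""
    out = []
    for ax, ay, az in atoms:
        wsum, dx, dy, dz = 0.0, 0.0, 0.0, 0.0
        for (mx, my, mz), (ux, uy, uz) in zip(markers, displacements):
            d2 = (ax - mx) ** 2 + (ay - my) ** 2 + (az - mz) ** 2
            if d2 == 0.0:              # atom sits exactly on a marker
                wsum, dx, dy, dz = 1.0, ux, uy, uz
                break
            w = d2 ** (-p / 2.0)       # 1/d^p
            wsum += w
            dx += w * ux
            dy += w * uy
            dz += w * uz
        out.append((ax + dx / wsum, ay + dy / wsum, az + dz / wsum))
    return out

# stretch: the second marker moves 2 A in x, the first stays put
moved = idw_displace([(0.0, 0.0, 0.0), (10.0, 0.0, 0.0)],
                     [(0.0, 0.0, 0.0), (10.0, 0.0, 0.0)],
                     [(0.0, 0.0, 0.0), (2.0, 0.0, 0.0)])
```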

quanpdb
 Vector Quantization of a PDB
Former name: qpdb
Purpose:
Specialized tool
to perform a vector
quantization (coarsegraining) of atomic resolution data into
a set of pointbased fiducials, the socalled codebook vectors.
To enable skeletonbased
flexible
docking
with quanvol,
quanpdb
includes options to
learn vector
distances
and to export the Voronoi
cells generated by
the
codebook vectors.
In quanpdb a small number of
calculations
(8 by
default)
are repeated with different random number seeds. The averaged codebook
vectors and their statistical variability are then written to the
output
file.
Usage:
First, the user
must determine
a suitable number of codebook vectors. The program was originally
designed for 50 vectors or less, but the user may experiment with higher
numbers. quanpdb employs an implicit massweighting of the
input PDB. The program also allows one to ignore
flexible
or poorly defined atoms with high crystallographic Bfactors. This
option
should only be chosen if there is an indication that parts of the
protein
are not visible in the lowresolution data due to disorder.
Usage (at
shell prompt):
./quanpdb file1 file2
file1: inputfile (atomic
structure), PDB format
file2: outputfile (codebook
vectors), PDB format

Interactive input at
program prompt (also suitable for automation):
 If water,
hydrogen, or codebook vector atoms are present, choice
of ignoring them.
 Bfactor
cutoff
level. Atoms with
Bfactors above this level will be ignored.
 Number of
codebook vectors.
 Choice of
computing the vector connectivities
(neighborhood relationships) with the Competitive
Hebb Rule and writing them to a file.
 Choice of
computing the Voronoi
cells and writing them to a file.
 You can automate
this interactive program by "overloading" the standard input (if you
put expected
values in a script, see run_tutorial.bash script in the tutorials).
Output:
 (Program
level:) The sphericity,
a measure between 0 and 1 that characterizes how spherical the shape of
the structure (file1) is. After the vector quantization the program
prints
the average rms variability of the codebook vectors in Angstrom. Also
given
in Angstrom is the radius of gyration of the
vectors.
 (Shell
level:)
Codebook vectors in
PDBformatted output file. The vector rms variabilities, representing
the
precision of the codebook vectors, are written to the occupancy fields
of the PDBstyle atom entries. The effective radii of the Voronoi cells
are written to the Bfactor fields of the PDBstyle atom entries.
(Optional)
Vector connectivities can be written to a PSF file or a
distance
constraints file. Constraint file entries are
triples
of freeformat values in the order <index1>
<index2>
<distance
in Angstrom>, where the indices correspond to the order of
vectors
in file2,
counting from 1. (Optional) The Voronoi cells can be written to a PDB
file
consisting of the file1 atom entries where the index of each
corresponding
vector is written to the Bfactor field of the output file.
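The coarse-graining idea can be illustrated with a stand-in. quanpdb actually uses the TRN algorithm averaged over several random seeds; the sketch below substitutes plain Lloyd's k-means with a fixed deterministic initialization, which shows the same end result — a few codebook vectors summarizing many coordinates:

```python
def kmeans_cv(points, init, iters=50):
    """Toy vector quantization via Lloyd's k-means.  quanpdb actually
    uses the TRN algorithm (averaged over several random seeds), but
    the outcome is analogous: k codebook vectors summarizing the set."""
    centers = [tuple(c) for c in init]
    for _ in range(iters):
        groups = [[] for _ in centers]
        for p in points:
            # assign each point to its nearest codebook vector
            best = min(range(len(centers)),
                       key=lambda i: sum((a - b) ** 2
                                         for a, b in zip(p, centers[i])))
            groups[best].append(p)
        # move each vector to the centroid of its assigned points
        centers = [tuple(sum(c) / len(g) for c in zip(*g)) if g else centers[i]
                   for i, g in enumerate(groups)]
    return centers

# two point clusters, quantized by two codebook vectors
cloud = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1),
         (10, 10, 10), (11, 10, 10), (10, 11, 10), (10, 10, 11)]
cvs = kmeans_cv(cloud, [(0, 0, 0), (10, 10, 10)])
```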

quanvol
 Vector Quantization of Volumetric Map
Former name: qvol
Purpose:
Specialized tool
to perform a vector
quantization (coarsegraining) of density maps into
a set of pointbased fiducials, the socalled codebook vectors. quanvol
supports featurebased tracking and flexible
docking with qplasty.
In the absence
of existing vector positions, quanvol carries
out a global search using the TRN
algorithm. If
start vectors are already known, the LBG refinement
algorithm (local search) is used instead of TRN, or connectivities can be learned. LBG
allows one to add distance
constraints to the vector refinement that are useful for
flexible
docking.
With TRN,
a small number
of calculations (8 by default) are repeated with different random
number
seeds. The averaged codebook vectors and their statistical variability
are then written to the output file. With LBG,
no statistical clustering is performed. In this case it is important to
specify reliable initial positions from a prior quanvol run.
Usage:
Before applying quanvol, one
can modify the density map using voledit.
Next, the user
must determine
a suitable number of codebook vectors. Only densities above a
userdefined threshold
value are considered by quanvol to eliminate background noise in the
lowresolution
data. Depending on the noise, this threshold value should be at 50-80%
of the level that is typically considered the "molecular surface" of
the
biopolymer in the lowresolution data.
New vector
positions are calculated
automatically with the TRN
method if no start
vectors
are specified. Subsequently, these vector positions can be refined in a
second quanvol run with the LBG
method.
Also, any distance constraints can be read from a file or entered at
the
command prompt at this time.
The
input map must be in Situs or CCP4/MRC
format (auto detect). To
convert other maps to either of these formats use the map2map
utility.
Usage (at
shell prompt):
./quanvol file1 [file2] file3
file1: inputfile, density map
file2: inputfile, start vectors,
PDB format (optional)
file3: outputfile, PDB format

Interactive input at
program prompt (also suitable for automation):
 Choice
of
utilities to inspect the
density distribution (e.g. voxel histogram).
 Threshold
(cutoff) density value.
 Number
of
codebook vectors.
 (If
file2
is
specified): Choice of
entering distance constraints manually or from a file.
There are two constraint file options. Constraint
file entries generated e.g. with quanpdb
are triples
of freeformat values in the order
<index1>
<index2> <distance in Angstrom>, where the
indices
correspond to
the order of vectors in file2, counting from 1. It is also possible to
read the connectivities from a PSF file in which case the missing
distances are computed from file2.
 Choice
of
computing the vector connectivities
(neighborhood relationships) with the Competitive
Hebb Rule and writing them to a file.
 You can automate
this interactive program by "overloading" the standard input (if you
put expected
values in a script, see run_tutorial.bash script in the tutorials).
Output:
 (Program
level:) Statistical analysis
of the vectors and their radius of gyration, i.e. the radial rms
deviation
from the vector center of mass.
 (Shell
level):
Codebook vectors in
a PDBformatted output file. The vector rms variabilities, representing
the precision of the codebook vectors, are written to the occupancy
fields
of the PDBstyle atom entries. If desired, vector connectivities can also be
written to a PSF
file
or to a distance constraints file.
Notes:

Vector
connectivities in PSF format
can be visualized and edited as bond connections (together with the
atomstyle PDB
entries of file2 and file3) using the molecular graphics program VMD.
Simply overload
the
PSF file into the PDB file in the VMD 'Molecule' menu. Then
under the 'Mouse' menu select 'Add/Remove Bonds'. The edited
connectivity can then be saved later into a PSF file from the VMD
command console (assuming your molecule
is 'top'):
set sel [atomselect top all]
$sel writepsf my.psf

 If
there
are
cluster size deviations
from the expected value (default: 8) when using the TRN
algorithm, refine the found vector positions by passing them to quanvol
as
input file of a second, LBG
run.
 Distance
constraints do not determine
the chirality (handedness) of vector connections. If you encounter
mirror
images or otherwise flipped connections after running quanvol compared
to
connections determined with quanpdb, you need to experiment with the
indexing
of your constraints. The LBG method combined with the SHAKE constraint
algorithm is relatively insensitive to the position of start vectors.

vol2pdb
 Create
a PDB from a
Volumetric Map
Purpose:
The
vol2pdb
utility allows one to encode positive density values of a 3D map into a
PDB file with the densities written to the PDB occupancy column. This
is useful for colores
and collage,
both of which require a PDB and a map as
input parameters. The
input map must be in Situs or CCP4/MRC
format (auto detect). To
convert other maps to either of these formats use the map2map
utility.
Usage (at
shell prompt):
./vol2pdb
file1 file2
file1:
inputfile, density map
file2:
outputfile, PDB format

Input at
program prompt:
None.
Output:
PDB
format file
with densities written to occupancy field (if rescaling is necessary due to limited bandwidth in that field, the
conversion factor is reported by the program).
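The rescaling can be illustrated as follows. This sketch assumes the standard fixed-width PDB occupancy field (%6.2f, so values up to 999.99); the factor that vol2pdb actually reports may be computed differently:

```python
def occupancy_scale(densities, max_field=999.99):
    """If the peak density exceeds what the fixed-width PDB occupancy
    field can hold (%6.2f, i.e. up to 999.99), divide all densities by
    a conversion factor so the peak just fits."""
    peak = max(densities)
    factor = 1.0 if peak <= max_field else peak / max_field
    return factor, [d / factor for d in densities]

factor, scaled = occupancy_scale([0.5, 12.0, 5000.0])
```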

volaver
 Map Averaging
Purpose:
The
volaver
utility allows one to compute the density average of multiple input
density maps. The input datasets can differ
in their geometric parameters. If necessary the input files are
resampled to a grid that spans all input files. Maps must
be in Situs or CCP4/MRC format
(auto detect). To
convert other maps to either of these formats use the map2map
utility.
Usage (at
shell prompt):
./volaver file1 file2 [file...]
outfile
file1:
inputfile 1, density map
file2:
inputfile 2, density map
[file...] (optional) additional input maps
outfile:
outputfile density map

Input at
program prompt:
None.
Output:
Density map either in
Situs or CCP4/MRC format
(format based on file name extension: Situs if .sit or
.situs, CCP4/MRC otherwise). The
new density values are computed by averaging
the input densities. The input maps are resampled by
trilinear interpolation, if necessary, to span all input files. The
voxel spacing of the output grid is set by input file 1.
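The resample-then-average step is easiest to see in one dimension. A 1-D sketch (linear rather than trilinear interpolation; treating values outside an input grid as zero is an assumption of this sketch, not documented volaver behavior):

```python
import math

def resample_1d(values, origin, spacing, new_origin, new_spacing, n):
    """Linearly interpolate a density profile onto a new grid; values
    outside the input grid are treated as zero (a sketch assumption)."""
    out = []
    for i in range(n):
        x = (new_origin + i * new_spacing - origin) / spacing
        j = int(math.floor(x))
        t = x - j
        v0 = values[j] if 0 <= j < len(values) else 0.0
        v1 = values[j + 1] if 0 <= j + 1 < len(values) else 0.0
        out.append((1 - t) * v0 + t * v1)
    return out

def average_maps(profiles):
    """Voxel-by-voxel average of already-aligned profiles."""
    return [sum(vs) / len(profiles) for vs in zip(*profiles)]

a = [0.0, 2.0, 4.0, 2.0, 0.0]                              # spacing 1
b = resample_1d([0.0, 4.0, 0.0], 0.0, 2.0, 0.0, 1.0, 5)    # spacing 2 -> 1
avg = average_maps([a, b])
```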

voldiff
 Discrepancy / Difference Mapping
Former
name:
subtract
Purpose:
The
voldiff
utility allows one to compute the difference density map (discrepancy
map)
of two volume data sets. The input maps can differ
in their geometric parameters. If necessary the second input file is
resampled to
the grid of the first input file. Maps
must
be in Situs or CCP4/MRC format
(auto detect). To
convert other maps to either of these formats use the map2map
utility.
Usage (at
shell prompt):
./voldiff file1 file2 outfile
file1:
inputfile 1, density map
file2:
inputfile 2, density map
outfile:
outputfile,
density map

Input at
program prompt:
None.
Output:
Density map either in
Situs or CCP4/MRC format
(format based on file name extension: Situs if .sit or
.situs, CCP4/MRC otherwise). The
new density values are computed by
subtracting
the corresponding density values of file2 (which is resampled by
trilinear interpolation, if necessary) from those of input file1.
The output grid thus inherits the geometric parameters from file1.

voledit
 Inspecting 2D Sections and Editing of 3D Maps
Supersedes
former programs: volslice, floodfill, volpad (padup), volcrop
(pindown), volvoxl (interpolate)
Purpose:
2D
crosssections
of 3D density
data in the (x,y), (y,z), or (z,x)planes, or 2D projections
in z, x, y direction, can be inspected with the
simple terminal window graphics program voledit. To
ensure maximum compatibility and ease of installation we use 1970s
style ASCII text character rendering. Despite this simple retro
appearance the program is surprisingly powerful and offers a large
number of useful map editing functions. For example, volumes can be
edited by cropping, zero padding, polygon clipping, thresholding, and
segmentation (specified under options). The
utility
can also be used to write individual 2D slices or 3D volumes to
external files. The
map
must be in Situs or CCP4/MRC format
(auto detect). To
convert other maps to either of these formats use the map2map
utility.
Usage
(at
shell prompt):
./voledit file1
file1: inputfile, density map

Interactive input at
program prompt (also suitable for automation):
 Type of cross
section, (x,y),
(y,z), or (z,x).
 Display cutoff value for
the rendering of the density (options).
 z, x, or y
position of the cross
section plane (grid units).
 Display voxel step (for
display of
larger maps).
 Polygon
clipping
parameters and
vertices (options).
 Cropping
parameters in voxel units (options).
 Zero padding
in voxel units (options).
 Density threshold where all
densities will be set to zero (or one) (options).
 Segmentation
parameters to extract a
targeted
contiguous volume.
Originating
from the vicinity of a given start voxel, voledit finds recursively
the
maximum contiguous volume formed by neighboring voxels that exceed a
given threshold density level. An additional layer is added
for
aesthetic
reasons
to facilitate isocontouring near
the cutoff
level. Although the extracted grid contains some voxels (in the contour
layer) with densities below the cutoff, all voxels with density values
above the cutoff are guaranteed to be part of the found contiguous
volume.
Voxels outside the contour layer are assigned a density value
of
0.
 File name for
2D slice or 3D
volume output
file (options).
 You can automate
this interactive program by "overloading" the standard input (if you
put expected
values in a script, see run_tutorial.bash script in the tutorials).
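The segmentation option described above amounts to a flood fill plus one dilation layer. A compact sketch (6-connected neighborhood; voledit's actual connectivity and layer handling may differ):

```python
from collections import deque

def segment(grid, start, thresh):
    """Flood fill from `start`: collect the maximal contiguous set of
    voxels above `thresh` (6-connected), then add one neighbor layer
    for isocontouring, as voledit's segmentation does."""
    nz, ny, nx = len(grid), len(grid[0]), len(grid[0][0])
    def inside(z, y, x):
        return 0 <= z < nz and 0 <= y < ny and 0 <= x < nx
    nbrs = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    core, queue = set(), deque([start])
    while queue:
        z, y, x = queue.popleft()
        if (z, y, x) in core or not inside(z, y, x) or grid[z][y][x] <= thresh:
            continue
        core.add((z, y, x))
        for dz, dy, dx in nbrs:
            queue.append((z + dz, y + dy, x + dx))
    # one extra contour layer around the contiguous volume
    layer = {(z + dz, y + dy, x + dx) for z, y, x in core
             for dz, dy, dx in nbrs if inside(z + dz, y + dy, x + dx)} - core
    return core, layer

# 1x1x5 toy map: voxel 4 is above threshold but not contiguous with 1-2
core, layer = segment([[[0.0, 5.0, 5.0, 0.0, 5.0]]], (0, 0, 1), 1.0)
```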
Output:
(Shell
window:)
Crosssection
of the input map for standard section indices > 0 (following
Situs
index numbering convention starting at 1), or the projection (average)
along this direction for the special slice 0. Larger maps are rescaled by
the voxel
step parameter that can be set under options. Pairs of displayed voxels
neighboring
in the vertical direction are represented by a single character:
'^'
if
the upper voxel
density exceeds the cutoff level,
'u',
if the lower
voxel density
exceeds the cutoff level,
'0'
if both upper
and lower voxel
densities exceed the cutoff level, and
' ',
if the
densities are below
the cutoff.
(2D
Output File:)
Voxel indices and/or
density of specified section (or projection) with userselected
formatting.
(3D Output
File:) Density map either in
Situs or CCP4/MRC format
(format based on file name extension: Situs if .sit or
.situs, CCP4/MRC otherwise). The new grid inherits the
voxel size (grid spacing) of the old grid.
The
number of x, y, and z increments, and the coordinates of voxel (1,1,1)
depend on the chosen editing options (cropping, padding, segmentation).
For
example,
in segmentation when shrinking of the box is selected, the grid
dimensions are determined by the minimum box that contains
both
the contiguous volume plus one layer of neighboring voxels (for
rendering purposes).
Notes:
This
program
requires the use
of a fixedwidth font in the shell window.

volfltr
 Denoising 3D Maps and 2D
Image Stacks
Purpose:
The volfltr
("volfilter") utility currently performs a denoising of 3D maps and
2D image stacks using a digital paths filtering approach (Starosolski
et al., 2012). As with all Situs programs, volfltr can be called
without arguments to access the usage documentation. Users are referred to a related Sculptor tutorial for usage examples.
Note that the input map must be in Situs or
CCP4/MRC format
(auto detect). To
convert other maps to either of these formats use the map2map
utility.
Usage
(at
shell prompt):
./volfltr file1 file2
<mask_width> <path_length>
<path_type>
<beta> [<nprocs>]
file1: input file, density map
file2: output file, density map
Example of use: ./volfltr infile.situs outfile.mrc 5 2 3 0.0001

Required
command
line parameters:
mask_width
<int>
2*n+1 (odd integer), where integer n > 0 is the minimum path
length. [example:
5]
path_length
<int> = n (emphasize straight paths) or > n
(emphasize curved paths).
[example: 2]
path_type
<int> 0 for 4neighborhood 2D
model (image stack)
1 for 8neighborhood 2D model (image stack)
2 for 6neighborhood 3D model (3D map)
3 for 26neighborhood 3D model (3D map) [example:
3]
beta
<float> weighting exponent. This exponent determines how
the ranked paths affect the filter output. The number depends on the
path cardinality (see paper). Larger values are
more discriminative. [example:
0.0001]
Optional
command
line parameter:
nprocs
<int> Number
of parallel processing threads. [default: number
of cores of the CPU]
Input
at
program prompt:
None.
Output:
Filtered
density map
either in Situs or CCP4/MRC
format
(format based on file name extension: Situs if .sit or
.situs, CCP4/MRC otherwise).

volhist
 Inspecting and Manipulating the Voxel Histogram
Former name:
histovox
Purpose:
The volhist
utility prints or matches the
voxel density histogram (Wriggers et al., 2011) of input maps. The
histogram
illustrates two general properties of lowresolution density
distributions.
First, a pronounced peak at low densities is due to background
scattering.
The protein density typically corresponds to a second, broader peak at
higher densities. When integrating the histogram ``from the top down'',
the known molecular volume of a protein can be used to compute its
boundary
density value. The volhist program also allows the user to add a
constant
value to the densities (bias) to shift the background density,
and applies a factor to rescale the densities (gain). This can be done
interactively to match densities between two input maps.
The input map(s)
must be in Situs or CCP4/MRC format
(auto detect). To
convert other maps to either of these formats use the map2map
utility.
Usage (at
shell prompt):
./volhist infile1 [[infile2]
outfile]
Modes:
volhist
infile1
(print histogram)
volhist infile1
outfile
(rescale or shift densities)
volhist infile1 infile2
outfile (match
histogram of infile2 to that of infile1)

Interactive input
at
program prompt (if outfile specified):
 Offset
density
value (bias, will be added
to all voxels).
 Scaling
factor (gain)
 You can automate
this interactive program by "overloading" the standard input (if you
put expected
values in a script, see run_tutorial.bash script in the tutorials).
Interactive input
at
program prompt (if infile2 and outfile specified):
 Surface
isocontour levels for infile1 and infile2
 Scaling
factor (gain) is automatically computed based on input surface values.
 Centering
of the central peak of the trimodal difference histogram.
 Offset
density
value (bias, will be added
to all voxels), based on interactive centering.
 For more information on
the interactive matching procedure see (Wriggers
et al., 2011).
Output:
 (Program
level:) Voxel histogram
and fractional volume of volumetric data echoed to the screen. The
histogram
bars are normalized by the second highest density peak.
 (Shell
level:) If specified, a density map
is written either in
Situs or CCP4/MRC
format (format based on file name extension: Situs if .sit or
.situs, CCP4/MRC otherwise). The new density values are
computed by adding
the
offset value and by multiplying the scaling factor entered at the
program
prompt, and setting any negative densities to zero. The new grid
inherits all size and position parameters of the
old
grid.
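The "from the top down" integration mentioned above is simple to sketch. Made-up numbers below (the voxel volume and target molecular volume are placeholders):

```python
def boundary_density(densities, voxel_vol, target_volume):
    """Integrate the voxel histogram 'from the top down': accumulate
    voxel volumes in order of decreasing density until the known
    molecular volume is reached; that density is the contour level."""
    acc = 0.0
    for d in sorted(densities, reverse=True):
        acc += voxel_vol
        if acc >= target_volume:
            return d
    return min(densities)

# made-up numbers: 10 A^3 voxels, 25 A^3 molecular volume
level = boundary_density([0.1, 0.2, 5.0, 4.0, 3.0, 0.3], 10.0, 25.0)
```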

volmult
 Map / Mask Multiplication
Purpose:
The
volmult
utility allows one to compute the product of two volume data sets. This
is useful e.g. when using a binary map (generated
with voledit binary
thresholding) as a mask. The input datasets can differ
in their geometric parameters. If necessary the second input file is
resampled to
the grid of the first input file. Maps
must
be in Situs or CCP4/MRC format
(auto detect). To
convert other maps to either of these formats use the map2map
utility.
Usage (at
shell prompt):
./volmult file1 file2 outfile
file1:
inputfile 1, density map
file2:
inputfile 2, density map
outfile:
outputfile,
density map

Input at
program prompt:
None.
Output:
Density map either in
Situs or CCP4/MRC format
(format based on file name extension: Situs if .sit or .situs, CCP4/MRC
otherwise). The
new density values are computed by multiplying
the corresponding density values of file2 (which is resampled by
trilinear interpolation, if necessary) to those of input file1.
The output grid thus inherits the geometric parameters from file1.

voltrac
 AlphaHelix Detection and
Filament Tracing
Purpose:
The voltrac ("volume
tracer") utility can be used for
detecting curved alphahelices in intermediate resolution density maps
using a genetic algorithm. For detailed usage information,
motivation of the parameters, and examples see the application papers to alpha helix detection (Rusu and Wriggers, 2012) and tomography filament tracing (Rusu et al., 2012). Users are also referred to a related Sculptor tutorial for usage examples.
Note: Negative densities may be discarded (i.e. the
map is thresholded at zero) after an (optional) local normalization
step. To avoid this, map densities can be shifted with volhist prior to application of this tool.
Basic usage (at
shell prompt):
./voltrac <density
map> res <float> ntraces <int>
expth <float> nprocs <int> 
The basic
input parameters
are:
<density
map> Density
map in Situs or CCP4/MRC format
(auto detect). To
convert other maps to either of these formats use the map2map
utility.
res
<float>
Estimated resolution of the density map in Å.
[default: 8.0]
ntraces
<int> The number of objects (i.e.
alpha helices or filaments) to be traced.
[default: 20]
expth
<float> Expansion threshold as
percent (values between [0 100]). Decrease value for noisy maps.
[default: 70]
nprocs
<int> The number of parallel threads on shared memory
machines.
[default: the number of cores of the CPU]
ani
<float> Resolution anisotropy factor (Z vs XY),
typically ≥ 1. [default: 1.0]
lambda
<float> Orientationdependent density attenuation
parameter (Z vs XY), typically
≤ 1.
[default: 1.0]
More
advanced
options (at shell prompt):
locnorm
<float> Apply local normalization (see paper)
using sigma in voxel units. 0  no local normalization is applied. [default: 2.5]
postgauss
<float> Gaussian smoothing after any local normalization,
sigma in voxel units. 0  no Gaussian smoothing
is applied after any local normalization.
[default:
1.5]
popsize
<int> Genetic algorithm population size (increase this
number for large maps). [default: 100]
maxgen
<int> Maximum number of generations explored by the
genetic algorithm (this is really only a hard limit of last resort, as
the runs are usually stopped early by stop criteria) [default:
10000]
syncgen
<int> Generations explored before parallel
population is synchronized [default:
100]
garadius
<float> Radius of the search template in Å
[default: 2.0]
galength
<int> Length of the search template in Å
[default:
20]
expradius
<float> Radius of the expansion template in Å
[default:
1.0]
explength
<int> Length of the expansion template in Å
[default: 8]
distseg
<float> Sampling step distance for search/expansion
templates in Å
[default:
1.0]
taburad
<float> Tabu region radius in Å
[default: 6]
expstep
<float> Translation step of the template during the
expansion in Å
[default:
1.4]
outtempl
Output the search and expansion templates [default: none]
Even
more advanced
options (using a configuration file):
The inner workings of the
genetic algorithm and many more parameters can
be controlled in detail using a parameter
configuration file that can be edited by the user. For
information on how to write and read such a configuration file, call
voltrac with the expert
option.
Output:
All traces
detected during the search, as well as a ranked list of the top ntraces traces
(see paper) will be written as PDB files to the output directory [default: voltrac_results].
Any parameter configuration file and output redirect (log file
specified by a configuration file) will also be saved in this
directory.

Header
File and Library Routines
The
suite
of programs is supported by various header files (.h)
containing
userdefined parameters and by auxiliary library programs using C and
C++ code. The library
programs and their respective header files handle
input and output of atomic coordinates in PDB format (lib_pio), input
and output of volumetric data (lib_vio), input of data at the command
prompt
(lib_std),
error handling (lib_err), Euler angle generation (lib_eul), random number generation
(lib_rnd), array management (lib_vec), Powell optimization
(lib_pow),
map manipulation (lib_vwk), PDB manipulation (lib_pwk), matchpt
support (lib_mpt), volfltr and voltrac support (lib_sba and
lib_svt), symmetric
multiprocessing (lib_smp), and timing (lib_tim).

Return
to the front page . 
