Refl1D: Neutron and X-Ray Reflectivity Analysis
Release 0.8.16
Paul Kienzle
1 Getting Started
    1.1 Installing the application
    1.2 Server installation
    1.3 Contributing Changes
    1.4 License
    1.5 Credits
2 Tutorial
    2.1 Simple films
    2.2 Tethered Polymer
    2.3 Composite sample
    2.4 Superlattice Models
    2.5 MLayer Models
    2.6 Four column data
    2.7 Anticorrelated parameters
    2.8 Functional Layers
    2.9 Random model
    2.10 Magnetism example
3 User's Guide
    3.1 Using Refl1D
    3.2 Parameters
    3.3 Data Representation
    3.4 Materials
    3.5 Sample Representation
    3.6 Experiment
    3.7 Fitting
4 Reference
    4.1 abeles - Pure python reflectivity calculator
    4.2 anstodata - Reader for ANSTO data format
    4.3 cheby - Freeform - Chebyshev model
    4.4 dist - Non-uniform samples
    4.5 errors - Plot sample profile uncertainty
    4.6 experiment - Reflectivity fitness function
    4.7 fitplugin - Bumps plugin definition for reflectivity models
    4.8 flayer - Functional layers
    4.9 freeform - Freeform - Parametric B-Spline
    4.10 fresnel - Pure python Fresnel reflectivity calculator
    4.11 garefl - Adaptor for garefl models
    4.12 instrument - Reflectivity instrument definition
    4.13 magnetism - Magnetic Models
    4.14 material - Material
    4.15 materialdb - Materials Database
    4.16 model - Reflectivity Models
    4.17 mono - Freeform - Monotonic Spline
    4.18 names - Public API
    4.19 ncnrdata - NCNR Data
    4.20 polymer - Polymer models
    4.21 probe - Instrument probe
    4.22 profile - Model profile
    4.23 refllib - Low level reflectivity calculations
    4.24 reflectivity - Reflectivity
    4.25 resolution - Resolution
    4.26 snsdata - SNS Data
    4.27 staj - Staj File
    4.28 stajconvert - Staj File Converter
    4.29 stitch - Overlapping reflectivity curve stitching
    4.30 support - Environment support
    4.31 util - Miscellaneous functions
Index
1 Getting Started
1-D reflectometry allows material scientists to understand the structure of thin films, providing composition and density
information as a function of depth. With polarized neutron measurements, scientists can study the sub-surface structure
of magnetic samples. The Refl1D modeling program supports a mixture of slabs, freeform and specialized layer types
such as models for the density distribution of polymer brushes.
Recent versions of the Refl1D application are available for Windows from GitHub. The file Refl1D-VERSION-exe.zip
contains python, the application, the supporting libraries and everything else needed to run the application.
To install, download and extract the zip file. Go to the extracted directory and click on refl1d_gui.bat. This will open
a dialog saying that the application is untrusted with a “Don’t run” button at the bottom. Click on “more info” and a
“Run anyway” button will appear. For command line operation, open a cmd terminal, change to the extracted
directory and type refl1d.bat.
The installed python is a full version of python. If your specialized reflectometry models need additional python
packages, then you can use python -m pip in the extracted directory to install them.
Linux users will need to install using pip:

pip install refl1d
Note that the binary versions will lag the release version until the release process is automated. Windows and Mac
users may want to install using pip as well to get the version with the latest changes.
Refl1D: Neutron and X-Ray Reflectivity Analysis, Release 0.8.16
Installing the application from source requires a working python environment. See below for operating system specific
instructions.
Our base scientific python environment contains the following packages as well as their dependencies:
• python 3
• numpy
• scipy
• matplotlib
• wxpython
Once your environment is in place, you can install directly from PyPI using pip:

pip install refl1d
Windows
[build]
compiler=mingw32
Once python is prepared, you can install the periodictable and bumps packages from the Windows console.
Linux
Linux distributions will provide the base required packages. You will need to refer to your distribution documentation
for details.
On debian/ubuntu, the command will be something like:

sudo apt-get install python3-numpy python3-scipy python3-matplotlib python3-wxgtk4.0
OS/X
Similar to Windows, you can install the official python distribution or use Anaconda. You will need to install the Xcode
command line utilities to get the compiler.
To run the interactive interface on OS/X you may need to use:
Refl1D jobs can be submitted to a remote bumps queue for processing. You just need to install the refl1d plugin in the
bumps server.
TODO: show details.
• Simple patches
• Larger changes
– Building Documentation
– Windows Installer
– OS/X Installer
The best way to contribute to the reflectometry package is to work from a copy of the source tree in the revision control
system.
The refl1d project is hosted on github at:
https://github.com/reflectometry/refl1d
You will need the git source control software for your computer. This can be downloaded from the git page, or you can
use an integrated development environment (IDE) such as Eclipse or PyCharm, which may have git built in.
If you want to make one or two tiny changes, it is easiest to clone the project, make the changes, document and test,
then send a patch.
Clone the project as follows:

git clone https://github.com/reflectometry/refl1d.git
You will need bumps and periodictable to run. If you are fixing bugs in the scattering length density calculator or the
fitting engine, you will want to clone the repositories as sister directories to the refl1d source tree:
If you are only working with the refl1d modeling code, then you can install bumps and periodictable using pip:

pip install bumps periodictable
To run the package from the source tree use the following:
cd refl1d
python run.py
This will first build the package into the build directory then run it. Any changes you make in the source directory will
automatically be used in the new version.
As you make changes to the package, you can see what you have done using git:
git status
git diff
Please update the documentation and add tests for your changes. We use doctests on all of our examples so that we
know our documentation is correct. More thorough tests are found in the test directory. With the nose package
installed, you can run the tests using:
python tests.py
For a larger set of changes, you should fork refl1d on github, and issue pull requests for each part.
Once you have created the fork, the clone line is slightly different:

git clone https://github.com/<your-account>/refl1d.git
After you have tested your changes, you will need to push them to your github fork:
git log
git commit -a -m "short sentence describing what the change is for"
git push
Good commit messages are a bit of an art. Ideally you should be able to read through the commit messages and create
a “what’s new” summary without looking at the actual code.
Make sure your fork is up to date before issuing a pull request. You can track updates to the original refl1d package
using:

git remote add upstream https://github.com/reflectometry/refl1d.git
git fetch upstream
When making changes, you need to take care that they work on different versions of python. In particular, RHEL6,
Centos6.5, Rocks and ScientificLinux all run python 2.6, most linux/windows/mac users run python 2.7, but some of
the more bleeding edge distributions run 3.3/3.4. The anaconda distribution makes it convenient to maintain multiple
independent environments. Even better is to test against all python versions 2.6, 2.7, 3.3 and 3.4:
pythonX.Y tests.py
pythonX.Y run.py
When all the tests run, issue a pull request from your github account.
Building Documentation
Building the package documentation requires a working Sphinx installation, and latex to build the pdf. As of this
writing we are using sphinx 1.2.
The command line to build the docs is as follows:

cd doc
make html pdf

The resulting documents can be found in:

doc/_build/html/index.html
doc/_build/latex/Refl1d.pdf
Note that this only works with a unix-like environment for now since we are using make. On windows, you can run
sphinx directly from python:
cd doc
python -m sphinx.__init__ -b html -d _build/doctrees . _build/html
ReStructured text format does not have a nice syntax for superscripts and subscripts. Units such as g·cm⁻³ are entered
using macros such as |g/cm^3| to hide the details. The complete list of macros is available in
doc/sphinx/rst_prolog
In addition to macros for units, we also define cdot, angstrom and degrees unicode characters here. The corresponding
latex symbols are defined in doc/sphinx/conf.py.
There is a bug in older sphinx versions (e.g., 1.0.7) in which latex tables cannot be created. You can fix this by changing:
self.body.append(self.table.colspec)
to:
self.body.append(self.table.colspec.lower())
in site-packages/sphinx/writers/latex.py.
Windows Installer
You can build the standalone executable using the powershell script:
extra\\build_win_installer.ps1
This creates the distribution archive in the dist directory, including python, the application, the supporting libraries and
everything else needed to run the application.
The installer build script is run automatically on github in response to a checkin on the master branch (currently via the
appveyor.yml file, but maybe moving to github actions).
OS/X Installer

You can build the standalone application using the py2app setup script:

python setup_py2app
This creates a .dmg file in the dist directory with the Refl1D app inside.
1.4 License
The DANSE/Reflectometry group relies on a large body of open source software, and so combines the work of many
authors. These works are released under a variety of licenses, including BSD and LGPL, and much of the work is in
the public domain. See individual files for details.
The combined work is released under the following license:
Copyright (c) 2006-2011, University of Maryland All rights reserved.
Redistribution and use in source and binary forms, with or without modification, are permitted provided
that the following conditions are met:
Redistributions of source code must retain the above copyright notice, this list of conditions and the fol-
lowing disclaimer. Redistributions in binary form must reproduce the above copyright notice, this list of
conditions and the following disclaimer in the documentation and/or other materials provided with the
distribution. Neither the name of the University of Maryland nor the names of its contributors may be
used to endorse or promote products derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS “AS IS”
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED
AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
1.5 Credits
The Refl1D package was developed under the DANSE project and is maintained by its user community.
Please cite:
Kienzle, P.A., Krycka, J., Patel, N., & Sahin, I. Refl1D (Version 0.8.16) [Computer Software]. College
Park, MD: University of Maryland. Retrieved Jan 05, 2024.
Available from https://github.com/reflectometry/refl1d
We are grateful for the existence of many fine open source packages such as Pyparsing, NumPy and Python without
which this package would be much more difficult to write.
2 Tutorial
This tutorial walks through the steps of setting up a model with Python scripting. Scripting allows the user
to create complex models with many constraints relatively easily.
These tutorials describe the process of defining reflectometry depth profiles using scripts. Scripts are defined using
Python. Python is easy enough that you should be able to follow the tutorial and use one of our examples as a starting
point for your own models. A complete introduction to programming and Python is beyond the scope of this document,
and the reader is referred to the many fine tutorials that exist on the web.
Let's examine the code on a line-by-line basis to understand what is going on.
The first step in any model is to load the names of the functions and data that we are going to use. These are defined in
a module named refl1d.names, and we import them all as follows:

from refl1d.names import *
This statement imports functions like SLD and Material for defining materials, Parameter, Slab and Stack for defining
the sample structure, NeutronProbe and XrayProbe for defining data, and Experiment and FitProblem to tie everything together.
Note that ‘import *’ is bad style for anything but simple scripts. As programs get larger, it is much less confusing to
list the specific functions that you need from a module rather than importing everything.
Next we define the materials that we are going to use in our sample. silicon and air are common, so we don’t need to
define them. We just need to define nickel, which we do as follows:
nickel = Material('Ni')
This defines a chemical formula, Ni, for which the program knows the density in advance since it has densities for
all elements. By using chemical composition, we can compute scattering length densities for both X-ray and neutron
beams from the same sample description. Alternatively, we could take a more traditional approach and define nickel
as a specific SLD for our beam
#nickel = SLD(rho=9.4)
The ‘#’ character on the above line means that line is a comment, and it won’t be evaluated.
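As an aside, the relationship between an SLD and the measured reflectivity can be illustrated with the Fresnel formula for a bare substrate. This is a pure-numpy sketch for intuition only, not part of the refl1d API; the silicon SLD of 2.07×10⁻⁶ Å⁻² is a standard tabulated value:

```python
import numpy as np

def fresnel(Q, rho):
    """Fresnel reflectivity of a bare substrate with SLD rho (in 1e-6/A^2)
    for momentum transfer Q (in 1/A)."""
    kz = np.asarray(Q, dtype=complex) / 2        # normal component of the wavevector
    kz_t = np.sqrt(kz**2 - 4e-6 * np.pi * rho)   # transmitted wavevector in the substrate
    r = (kz - kz_t) / (kz + kz_t)                # Fresnel amplitude coefficient
    return abs(r)**2

# Below the critical edge Qc = sqrt(16 pi rho) ~ 0.0102 1/A for silicon the
# reflectivity is total; above it, R falls off roughly as Q^-4.
Q = np.linspace(0.001, 0.2, 400)
R = fresnel(Q, rho=2.07)
```

This is essentially what the abeles module in the reference section computes, generalized to multilayer stacks.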
With our materials defined (silicon, nickel and air), we can combine them into a sample. The substrate will be silicon
with a 5 Å 1-𝜎 Si:Ni interface. The nickel layer is 100 Å thick with a 5 Å Ni:Air interface. Air is on the surface.

sample = silicon(0, 5) | nickel(100, 5) | air
Our sample definition is complete, so now we need to specify the range of values we are going to view. We will use the
numpy library, which extends python with vector and matrix operations. The linspace function below returns values
from 0 to 5 in 100 steps for incident angles from 0∘ to 5∘ .
T = numpy.linspace(0, 5, 100)
From the range of reflection angles, we can create a neutron probe. The probe defines the wavelengths and angles which
are used for the measurement as well as their uncertainties, from which the resolution of each point can be calculated.
We use constants for angular divergence dT=0.01∘ , wavelength L=4.75 Å and wavelength dispersion dL=0.0475 Å in
this example, but each angle and wavelength can be set independently.

probe = NeutronProbe(T=T, dT=0.01, L=4.75, dL=0.0475)
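The resolution calculation mentioned here follows the standard quadrature formula; a numpy sketch of the idea (not the refl1d internals), assuming Q = 4π sin θ / λ:

```python
import numpy as np

# Constants from the example: angular divergence (deg), wavelength (A), dispersion (A)
dT, L, dL = 0.01, 4.75, 0.0475

# Start just above zero to avoid the 0/0 at theta = 0
T = np.linspace(0.01, 5, 100)                       # incident angle (degrees)
theta = np.radians(T)

Q = 4 * np.pi * np.sin(theta) / L                   # momentum transfer (1/A)
# Relative width combines wavelength and angular contributions in quadrature
dQ = Q * np.sqrt((dL / L)**2 + (np.radians(dT) / np.tan(theta))**2)
```

The angular term dominates at low angle, the wavelength term at high angle, which is why fixed-slit measurements open the slits as the angle increases.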
Combine the neutron probe with the sample stack to define an experiment. Using chemical formula and mass density,
the same sample can be simulated for both neutron and x-ray experiments.
M = Experiment(probe=probe, sample=sample)
Generate a random data set with 5% noise. While not necessary to display a reflectivity curve, it is useful in showing
how the data set should look.
M.simulate_data(5)
Combine a set of experiments into a fitting problem. The problem is used by refl1d for all operations on the model.
problem = FitProblem(M)
Let’s modify the simulation to show how a 100 Å nickel film might look if measured on the SNS Liquids reflectometer:
This model is defined in nifilm-tof.py. The sample definition is the same:
nickel = Material('Ni')
sample = silicon(0,5) | nickel(100,5) | air
Instead of using a generic probe, we are using an instrument definition to control the simulation.
instrument = SNS.Liquids()
M = instrument.simulate(sample,
T=[0.3,0.7,1.5,3],
slits=[0.06, 0.14, 0.3, 0.6],
uncertainty = 5,
)
The instrument line tells us to use the geometry of the SNS Liquids reflectometer, which includes information like the
distance between the sample and the slits and the wavelength range. We then simulate measurements of the sample for
several different angles T (degrees), each with its own slit opening slits (mm). The simulated measurement duration is
such that the median relative error on the measurement ∆𝑅/𝑅 will match uncertainty (%). Because the intensity 𝐼(𝜆)
varies so much for a time-of-flight measurement, the central points will be measured with much better precision, and
the end points will be measured with lower precision. See Pulsed.simulate for details on all simulation parameters.
Finally, we bundle the simulated measurement as a fit problem which is used by the rest of the program.
problem = FitProblem(M)
Simulating data is great for seeing how models might look when measured by a reflectometer, but mostly we are going to
use the program to fit measured data. We saved the simulated data from above into files named nifilm-tof-1.dat,
nifilm-tof-2.dat, nifilm-tof-3.dat and nifilm-tof-4.dat. We can load these datasets into a new model
using nifilm-data.py.
The sample and instrument definition is the same as before:
nickel = Material('Ni')
sample = silicon(0,5) | nickel(100,5) | air
instrument = SNS.Liquids()
files = ['nifilm-tof-%d.dat'%d for d in (1, 2, 3, 4)]
probe = ProbeSet(instrument.load(f) for f in files)

In this case we are loading multiple data sets into the same ProbeSet object. If your reduction program stitches
together the data for you, then you can simply use probe=instrument.load('file').
The data and sample are combined into an Experiment, which again is bundled as a FitProblem for the fitting
program.
M = Experiment(probe=probe, sample=sample)
problem = FitProblem(M)
Now that we know how to define a sample and load data, we can learn how to perform a fit on the data. This is shown
in nifilm-fit.py:
We use the usual sample definition, except we set the thickness of the nickel layer to 125 Å so that the model does not
match the data:
# Turn off resolution bars in plots. Only do this after you have plotted the
# data with resolution bars so you know it looks reasonable, and you are not
# fitting the sample_broadening parameter in the probe.
Probe.show_resolution = False
nickel = Material('Ni')
sample = silicon(0, 10) | nickel(125, 10) | air
We are going to try to recover the original thickness by letting the thickness value range over 125 ± 50 Å. Since nickel
is layer 1 in the sample (counting starts at 0 in Python), we can access the layer parameters using sample[1]. The
parameter we are accessing is the thickness parameter, and we are setting its fit range to ±50 Å.
sample[1].thickness.pm(50)
We are also going to let the interfacial roughness between the layers vary. The interface between two layers is defined
by the width of the interface on top of the layer below. Here we are restricting the silicon:nickel interface to the interval
[3, 12] and the nickel:air interface to the range [0, 20]:
sample[0].interface.range(3, 12)
sample[1].interface.range(0, 20)
instrument = SNS.Liquids()
files = ['nifilm-tof-%d.dat'%d for d in (1, 2, 3, 4)]
probe = ProbeSet(instrument.load(f) for f in files)
M = Experiment(probe=probe, sample=sample)
problem = FitProblem(M)
As you can see, the new nickel thickness changes the theory curve significantly.
We can now load and run the fit.
For samples measured with the incident beam through the substrate rather than reflecting off the surface, we don’t need
to modify our sample, we just need to tell the experiment that we are measuring back reflectivity.
We set up the example as before.
nickel = Material('Ni')
sample = silicon(0,25) | nickel(100,5) | air
T = numpy.linspace(0, 5, 100)
Because we are measuring back reflectivity, we create a probe which has back_reflectivity=True.

probe = NeutronProbe(T=T, dT=0.01, L=4.75, dL=0.0475, back_reflectivity=True)
M = Experiment(probe=probe, sample=sample)
M.simulate_data(5)
problem = FitProblem(M)
We will tweak the fitting model a little to add an SiOx layer between the silicon and the nickel. There is no justification
for doing so for this data (and indeed, it sets the SiOx layer to almost pure Si), but it does demonstrate a way to form
a mixture of two materials by volume, as shown in nifilm-mixture.py:
Here is the mixture formula. We are giving the mass density along with the chemical formula for each part, followed
by the percentage for that part. We are giving it the name SiOx so we can reference it later. Additional components
would be added as material, fraction, material, fraction, and so on. The bulk material will sum the fractions to 100%.
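The effect of mixing by volume on the scattering length density is just a fraction-weighted average of the component SLDs. A minimal sketch with illustrative values (not the fitted SiOx composition):

```python
def mixture_sld(parts):
    """Volume-fraction mixture: parts is a list of (sld, percent) pairs.
    The percents are normalized so they need not sum to exactly 100."""
    total = sum(pct for _, pct in parts)
    return sum(sld * pct for sld, pct in parts) / total

# e.g. a 70:30 Si:SiO2 mixture by volume (SLDs in 1e-6/A^2, standard tabulated values)
rho = mixture_sld([(2.07, 70), (3.47, 30)])
```

The fitted fraction parameter shown below simply moves the layer SLD along this weighted average between the two endpoints.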
nickel = Material('Ni')
sample = silicon(0, 5) | SiOx(10, 2) | nickel(125, 10) | air
sample['Ni'].thickness.pm(50)
sample['Si'].interface.range(0, 12)
sample['Ni'].interface.range(0, 20)
. . . with the addition of a volume fraction between 0 and 100% for the SiOx layer. The thickness on this layer is not
fitted in this example because the system is already overparameterized (the sample data was generated without an SiOx
layer).
sample['SiOx'].interface.range(0, 12)
sample['SiOx'].thickness.range(0, 20)
sample['SiOx'].material.fraction[0].range(0, 100)
instrument = SNS.Liquids()
files = ['nifilm-tof-%d.dat'%d for d in (1, 2, 3, 4)]
probe = ProbeSet(instrument.load(f) for f in files)
M = Experiment(probe=probe, sample=sample)
problem = FitProblem(M)
Soft matter systems have more complex interfaces than slab layers with Gaussian roughness.
We will now model a data set for tethered deuterated polystyrene chains. The chains start out approximately 10 nm
thick in dry conditions, and swell to 14-18 nm thickness in toluene. Two measurements were made:
• 10ndt001.refl in deuterated toluene
• 10nht001.refl in hydrogenated toluene
The chains are bound to the substrate by an initiator layer between the substrate and brush chains. So the model needs
a silicon layer, a silicon oxide layer, and an initiator layer, which is mostly hydrocarbon, whose scattering length density
should be between 0 and 1.5 depending on how much solvent is in the layer. Then come the swollen brush chains, and at
the end bulk solvent. For these swelling measurements, the beam penetrates the system from the silicon side and the
final layer is deuterated or hydrogenated toluene.
In this case we are using the neutron scattering length density, as is standard practice in reflectivity experiments, rather
than the chemical formula and mass density. The SLD class allows us to name the material and define the real and
imaginary components of scattering length density ρ. Note that we are using the imaginary ρ_i rather than the absorption
coefficient μ = 2λρ_i since it removes the dependence on wavelength from the calculation of the reflectivity.
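The conversion between the two conventions is a one-liner; a small sketch for checking units (the wavelength and ρ_i values are illustrative only):

```python
# mu = 2*lambda*rho_i relates the absorption coefficient to the imaginary SLD,
# so storing rho_i keeps the depth profile independent of the measurement wavelength.
def mu_from_rho_i(rho_i, wavelength):
    return 2 * wavelength * rho_i

def rho_i_from_mu(mu, wavelength):
    return mu / (2 * wavelength)
```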
For the tethered polymer we don't use a simple slab model, but instead define a PolymerBrush layer, which understands
that the system is composed of polymer plus solvent, and that the polymer chains tail off like:

V(z) = Vo                           if z <= zo
V(z) = Vo (1 - ((z - zo)/L)^2)^p    if zo < z < zo + L
V(z) = 0                            if z >= zo + L
This volume profile combines with the scattering length density of the polymer and the solvent to form an SLD profile:

ρ(z) = ρ_p V(z) + ρ_s (1 − V(z))
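These two formulas can be written out directly; a numpy sketch of the brush profile with illustrative parameter values (not the fitted ones):

```python
import numpy as np

def brush_volume(z, Vo, zo, L, p):
    """Parabolic brush profile: Vo up to zo, decaying to zero over length L."""
    V = np.zeros_like(z, dtype=float)
    V[z <= zo] = Vo
    tail = (z > zo) & (z < zo + L)
    V[tail] = Vo * (1 - ((z[tail] - zo) / L)**2)**p
    return V

def brush_sld(z, rho_p, rho_s, **kw):
    """SLD profile: polymer SLD weighted by V(z), solvent SLD by 1 - V(z)."""
    V = brush_volume(z, **kw)
    return rho_p * V + rho_s * (1 - V)

z = np.linspace(0, 300, 601)
rho = brush_sld(z, rho_p=6.2, rho_s=5.6, Vo=0.7, zo=100, L=150, p=2)
```

Note that the profile is flat at Vo near the substrate, matches the solvent far away, and joins the two smoothly through the tail region.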
This layer can be combined with the remaining layers to form the deuterated measurement sample
The stack notation material(thickness, interface) | ... is performing a number of tasks for you. One thing
it is doing is wrapping materials (which are objects that understand scattering length densities) into slabs (which are
objects that understand thickness and interface). These slabs are then gathered together into a stack:
The undeuterated sample is similar to the deuterated sample. We start by copying the polymer brush layer so that
parameters such as length, power, etc. will be shared between the two systems, but we replace the deuterated toluene
solvent with undeuterated toluene. We then use this H_brush to define a new stack with undeuterated toluene
We want to share thickness and interface between the two systems as well, so we write a loop to go through the layers
of D and copy the thickness and interface parameters to H
for i, _ in enumerate(D):
H[i].thickness = D[i].thickness
H[i].interface = D[i].interface
What is happening internally is that for each layer in the stack we are copying the parameter for the thickness from
the deuterated sample slab to the thickness slot in the undeuterated sample slab. Similarly for interface. When the
refinement engine sets a new value for a thickness parameter and asks the two models to evaluate 𝜒2 , both models will
see the same thickness parameter value.
With both samples defined, we next specify the ranges on the fitted parameters
Notice that in some cases we are using layer number to reference the parameter, such as D[1].thickness whereas
in other cases we are using variables directly, such as D_toluene.rho. Determining which to use requires an
understanding of the underlying stack model. In this case, the thickness is associated with the SiOx slab thickness, but we
never formed a variable to contain Slab(material=SiOx), so we have to reference it via the stack. We did however
create a variable to contain Material(name="D_toluene") so we can access its parameters directly. Also, notice
that we only need to set one of D[1].thickness and H[1].thickness since they are the same underlying parameter.
D_probe.theta_offset.range(-0.1, 0.1)
We set back_reflectivity=True because we are coming in through the substrate. The reflectometry calculator will
automatically reverse the stack and adjust the effective incident angle to account for the refraction when the beam enters
the side of the substrate. Ideally you will have measured the incident beam intensity through the substrate as well so
that substrate absorption effects are corrected for in your data reduction steps, but if not, you can set an estimate for
back_absorption when you load the file. Like intensity, you can set a range on the value and adjust it during
refinement.
Finally, we define the fitting problem from the probes and samples. The dz parameter controls the size of the profile
steps when generating the tethered polymer interface. The dA parameter allows these steps to be joined together into
larger slabs, with each slab satisfying (ρ_max − ρ_min)·w < ΔA.
This is a multifit problem where both models contribute to the goodness of fit measure 𝜒2 . Since no weight vector was
defined the fits have equal weight.
problem = FitProblem(models)
problem.name = "tethered"
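With equal weights, the combined figure of merit is simply the sum of the per-model χ² values. Schematically, with stand-in data rather than the actual models:

```python
import numpy as np

def chisq(y, fy, dy):
    """chi-squared for one model: sum of squared normalized residuals."""
    return float(np.sum(((y - fy) / dy)**2))

# With no weight vector, each model contributes equally to the total.
def total_chisq(models):
    return sum(chisq(*m) for m in models)
```

A weight vector, if supplied, would simply scale each model's contribution before summing.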
The polymer brush model is a smooth profile function, which is evaluated by slicing it into thin slabs, then joining
together similar slabs to improve evaluation time. The dz=0.5 parameter tells us that we should slice the brush into
0.5 Å steps. The dA=1 parameter says we should join together thin slabs while the scattering density variation in the
joined slabs satisfies ΔA < 1, where ΔA = (max ρ − min ρ)(max z − min z). The same criterion applies to the absorption
cross section ρ_i and the effective magnetic cross section ρ_M cos(θ_M). If dA=None (the default) then no profile
contraction occurs.
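The contraction step can be sketched as a greedy merge: walk the microslabs and fuse neighbours while (max ρ − min ρ) times the accumulated width stays below dA. This is a simplified illustration of the idea only, not the refl1d implementation, which also applies the criterion to the absorption and magnetic cross sections:

```python
def contract_slabs(widths, rhos, dA):
    """Greedily merge adjacent microslabs while (max-min rho)*total width < dA.
    Returns merged widths and thickness-weighted average rhos."""
    out_w, out_r = [], []
    i = 0
    while i < len(widths):
        w, lo, hi, area = widths[i], rhos[i], rhos[i], widths[i] * rhos[i]
        j = i + 1
        while j < len(widths):
            lo2, hi2 = min(lo, rhos[j]), max(hi, rhos[j])
            if (hi2 - lo2) * (w + widths[j]) >= dA:
                break                      # merging would exceed the dA budget
            w += widths[j]
            area += widths[j] * rhos[j]
            lo, hi = lo2, hi2
            j += 1
        out_w.append(w)
        out_r.append(area / w)             # thickness-weighted average SLD
        i = j
    return out_w, out_r
```

Flat regions of the profile collapse into a few wide slabs while steep regions keep their fine steps, which is what makes dA an effective speed/accuracy tradeoff.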
The resulting model looks like:
This complete model script is defined in tethered.py:
for i, _ in enumerate(D):
H[i].thickness = D[i].thickness
H[i].interface = D[i].interface
D_probe.theta_offset.range(-0.1, 0.1)
problem = FitProblem(models)
problem.name = "tethered"
Rather than using a specific model for the polymer brush we can use a freeform interface which varies the density
between layers using a cubic spline interface.
Materials used
n = 5
D_polymer_layer = FreeInterface(below=D_polystyrene, above=D_toluene,
                                dz=[1]*n, dp=[1]*n)
# Note: only need D_toluene to compute Fresnel-normalized reflectivity --- should fix
# this later so that we can use a pure freeform layer on top.
D = silicon(0, 5) | SiOx(100, 5) | D_initiator(100, 20) | D_polymer_layer(1000, 0) | D_toluene
Fitting parameters
Data files
problem = MultiFitProblem(models=models)
There are conditions wherein the sample you measure is not ideal. For example, a polymer brush may have enough
density in some domains that the brushes are standing upright, but in other domains the brushes lie flat.
In this example we will look at a nickel grating on a silicon substrate using specular reflectivity. When the spacing
within the grating is sufficiently large, this can be modeled to first order as the incoherent sum of the reflectivity on
the plateau and the reflectivity on the valley floor. By adjusting the weight of two reflectivities, we should be able to
determine the ratio of plateau width to valley width.
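The first-order model described above is just a weighted incoherent sum of the two coherent reflectivities. A sketch of the mixing, with hypothetical R_plateau and R_valley arrays standing in for the two computed curves:

```python
import numpy as np

def incoherent_mix(R_plateau, R_valley, ratio):
    """Incoherent sum of two specular reflectivities.

    ratio is the plateau width relative to the valley width, so the
    weights w and 1 - w sum to one.
    """
    w = ratio / (1.0 + ratio)
    return w * np.asarray(R_plateau) + (1.0 - w) * np.asarray(R_valley)
```

With ratio=1 this reduces to the simple average used for the initial 1:1 simulation.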
Since silicon and air are defined, the only material we need to define is nickel.
We need two separate models, one with 1000 Å nickel and one without.
We need only one probe for simulation. The reflectivity measured at the detector will be a mixture of those neutrons
which reflect off the plateau and those that reflect off the valley.
T = numpy.linspace(0, 2, 200)
probe = NeutronProbe(T=T, dT=0.01, L=4.75, dL=0.0475)
We are going to start with a 1:1 ratio of plateau to valley and create a simulated data set.
We will assume the silicon interface is the same for the valley as for the plateau, which, depending on how the sample
is constructed, may or may not be realistic.
valley[0].interface = plateau[0].interface
plateau[0].interface.range(0,200)
plateau[1].interface.range(0,200)
plateau[1].thickness.range(200,1800)
The ratio between the valley and the plateau can also be fit, either by fixing the size of the plateau and fitting the size
of the valley, or by fixing the size of the valley and fitting the size of the plateau. We will hold the plateau fixed.
M.ratio[1].range(0,5)
Note that we could include a second order effect by including a hillside term with the same height as the plateau but
using a 50:50 mixture of air and nickel. In this case we would have three entries in the ratio.
We wrap this as a fit problem as usual.
problem = FitProblem(M)
T = numpy.linspace(0, 2, 200)
probe = NeutronProbe(T=T, dT=0.01, L=4.75, dL=0.0475)
valley[0].interface = plateau[0].interface
plateau[0].interface.range(0,200)
plateau[1].interface.range(0,200)
plateau[1].thickness.range(200,1800)
M.ratio[1].range(0,5)
problem = FitProblem(M)
We can test how well the fitter can recover the original model by running refl1d with --random:
nickel = Material('Ni')
titanium = Material('Ti')
Next we will compose nickel and titanium into a bilayer and use that bilayer to define a stack with 10 repeats.
# Superlattice description
bilayer = nickel(50,5) | titanium(50,5)
sample = silicon(0,5) | bilayer*10 | air
# Fitting parameters
bilayer[0].thickness.pmp(100)
bilayer[1].thickness.pmp(100)
The interfaces vary between 0 and 30 Å. The interface between repeats is defined by the interface at the top of the
repeating stack, which in this case is the Ti interface. The interface between the superlattice and the next layer is an
independent parameter, whose value defaults to the same initial value as the interface between the repeats.
bilayer[0].interface.range(0,30)
bilayer[1].interface.range(0,30)
sample[0].interface.range(0,30)
sample[1].interface.range(0,30)
If we wanted to have the interface for Ti between repeats identical to the interface between Ti and air, we could have
tied the parameters together, but we won’t in this example:
# sample[1].interface = bilayer[1].interface
If instead we wanted to keep the roughness independent, but start with a different initial value, we could simply set the
interface parameter value. In this case, we are setting it to 10 Å:
# sample[1].interface.value = 10
We can also fit the number of repeats. This is not realistic in this example (the sample grower surely knows the number
of layers in a sample like this), so we do so only to demonstrate how it works.
sample[1].repeat.range(5,15)
Before we can view the reflectivity, we must define the Q range over which we want to simulate, and combine this probe
with the sample.
T = numpy.linspace(0, 5, 100)
probe = XrayProbe(T=T, dT=0.01, L=4.75, dL=0.0475)
M = Experiment(probe=probe, sample=sample)
M.simulate_data(5)
problem = FitProblem(M)
Inter-diffusion properties of multilayer systems are of great interest in both hard and soft materials. Jomaa et al. have
shown that reflectometry can be used to elucidate the kinetics of a diffusion process in polyelectrolyte multilayers.
Although the purpose of the paper was not to fit the presented system, it offers a good model for an experimentally
relevant system for which information from neutron reflectometry can be obtained. In this model system we will show
that we can create a model for this type of system and determine the relevant parameters through our optimisation
scheme. This particular example uses deuterated reference layers to determine the kinetics of the overall system.
Reference: Jomaa, H., Schlenoff, Macromolecules, 38 (2005), 8473-8480 http://dx.doi.org/10.1021/ma050072g
We will model the system described in figure 2 of the reference as PEMU.py.
Bring in all of the functions from refl1d.names so that we can use them in the remainder of the script.
The polymer system is deposited on a gold film with chromium as an adhesion layer. Because these are standard films
which are very well-known in this experiment we can use the built-in materials library to create these layers.
# == Sample definition ==
chrome = Material('Cr')
gold = Material('Au')
The polymer system consists of two polymers, deuterated and non-deuterated PDADMA/PSS. Since the neutron scatter-
ing cross section for deuterium is considerably different from that for hydrogen, while having nearly identical chemical
properties, we can use the deuterium as a tag to see to what extent the deuterated polymer layer interdiffuses with an
undeuterated polymer layer.
We model the materials using scattering length density (SLD) rather than using the chemical formula and mass density.
This allows us to fit the SLD directly rather than making assumptions about the specific chemical composition of the
mixture.
The polymer materials are stacked into a bilayer, with thickness estimates based on ellipsometry measurements (as
stated in the paper).
The bilayer is repeated 5 times and stacked on the chromium/gold substrate. In this system we expect the kinetics
of the surface diffusion to differ from that of the bulk layer structure. Because we want the top bilayer to optimise
independently of the other bilayers, the fifth layer was not included in the stack. If the diffusion properties of each layer
were expected to vary widely from one another, the repeat notation could not have been used at all.
Now that the model sample is built, we can start adding ranges to the fit parameters. We assume that the chromium and
gold layers are well known through other methods and will not fit them; however, additional optimisation could certainly
be included here.
As stated earlier, we will be fitting the SLD of the polymers directly. The range for each will vary from that for pure
deuterated to the pure undeuterated SLD.
# == Fit parameters ==
PDADMA_dPSS.rho.range(1.15,2.77)
PDADMA_PSS.rho.range(1.15,2.77)
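The limits 1.15 and 2.77 are just the SLDs of the two end members; any intermediate deuteration fraction gives a value between them. A toy sketch, assuming simple linear mixing (the function name is for illustration only):

```python
def mixture_sld(f_deuterated, rho_d=2.77, rho_h=1.15):
    # Linear mixing between the undeuterated (1.15) and deuterated (2.77)
    # end-member SLDs, in 1e-6/Å^2 as in the fit ranges above.
    return f_deuterated * rho_d + (1.0 - f_deuterated) * rho_h
```

Fitting rho directly within these limits is equivalent to fitting the deuteration fraction under this assumption.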
We are primarily interested in the interfacial roughness, so we will fit those as well. First we define the interfaces within
the repeated stack. Note that the interface for bilayer[1] is the interface between the current bilayer and the next bilayer.
Here we use sample[3] for the repeated bilayer, where 3 is the 0-origin index of the bilayer in the stack.
sample[3][0].interface.range(5,45)
sample[3][1].interface.range(5,45)
The interface between the stack and the next layer is controlled from the repeated bilayer.
sample[3].interface.range(5,45)
Because the top bilayer has different dynamics, we optimise the interfaces independently. Although we want the opti-
miser to treat these parameters independently, because surface diffusion is expected to occur faster, the overall nature
of the diffusion is expected to be the same, and so we use the same limits.
sample[4].interface.range(5,45)
sample[5].interface.range(5,45)
Finally we need to associate the sample with a measurement. We do not have the measurements from the paper available,
so instead we will simulate a measurement by setting up a neutron probe whose incident angles range from 0 to 5
degrees in 100 steps. The simulated measurement is returned together with the model as a fit problem.
# == Data ==
T = numpy.linspace(0, 5, 100)
probe = NeutronProbe(T=T, dT=0.01, L=4.75, dL=0.0475)
M = Experiment(probe=probe, sample=sample)
M.simulate_data(5)
problem = FitProblem(M)
The following is a freeform superlattice floating in a solvent and anchored with a tether molecule. The tether is anchored
via a thiol group to a multilayer of Si/Cr/Au. The sulphur in the thiol attaches well to gold, but not to silicon. Gold will
stick to chrome, which sticks to silicon.
Here is the plot using a random tether, membrane and tail group:
The model is defined by freeform.py.
The materials are straightforward:
chrome = Material('Cr')
gold = Material('Au')
solvent = Material('H2O', density=1)
The sample description is more complicated. When we define a freeform layer we need to anchor the ends of the
freeform layer to a known material. Usually, this is just the material that makes up the preceding and following layer.
In case we have freeform layers connected to each other, though, we need an anchor material that controls the SLD at
the connection point. For this purpose we introduce the dummy material wrap.
Each section of the freeform layer has a different number of control points. The value should be large enough to give
the profile enough flexibility to match the data, but not so large that it overfits the data. Roughly, the number of control
points is the number of peaks and valleys allowed. We want a relatively smooth tether and tail, so we keep n1 and n3
small, but make n2 large enough to define an interesting repeat structure.
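The role of the control points can be illustrated with a toy profile in which n points pin the SLD at evenly spaced heights and an interpolant fills in between. Here plain linear interpolation (np.interp) stands in for the spline used by the actual freeform layers; more control points allow more peaks and valleys:

```python
import numpy as np

def freeform_profile(z, control_rho, rho_below, rho_above):
    # Control points are spaced evenly across the layer (z in [0, 1]);
    # the ends are anchored to the bounding materials and the interior
    # values are the fit parameters.
    zp = np.linspace(0.0, 1.0, len(control_rho) + 2)
    rp = np.concatenate(([rho_below], control_rho, [rho_above]))
    return np.interp(z, zp, rp)
```

With one interior control point the profile can form at most one bump; with n2 points it can form roughly n2.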
Free layers have a thickness, horizontal control points z varying in [0, 1], real and complex SLD 𝜌 and 𝜌𝑖 , and the
material above and below.
With the predefined free layers, we can quickly define a stack, with the bilayer repeat structure. Note that we are setting
the thickness for the free layers when we define the layers, so there is no need to set it when composing the layers into
a sample.
T = numpy.linspace(0, 5, 100)
probe = NeutronProbe(T=T, dT=0.01, L=4.75, dL=0.0475,
back_reflectivity=True)
M = Experiment(probe=probe, sample=sample, dA=5)
M.simulate_data(5)
problem = FitProblem(M)
This package can load models from other reflectometry fitting software. In this example we load an mlayer .staj file
and fit the parameters within it.
The staj file can be used directly from the graphical interface, or it can be previewed from the command line:
.probe
.back_absorption = Parameter(1, name='back_absorption')
.background = Parameter(1e-10, name='background')
.intensity = Parameter(1, name='intensity')
.theta_offset = Parameter(0, name='theta_offset')
.sample
.layers
[0]
.interface = Parameter(4.24661e-11, name='B3 interface')
.material
.irho = Parameter(3.00904e-05, name='B3 irho')
.rho = Parameter(5.69228, name='B3 rho')
.thickness = Parameter(90, name='B3 thickness')
[1]
.interface = Parameter(4.24661e-11, name='B2 interface')
.material
.irho = Parameter(1.39368e-05, name='B2 irho')
.rho = Parameter(5.86948, name='B2 rho')
.thickness = Parameter(64.0154, name='B2 thickness')
[2]
.interface = Parameter(83.7958, name='B1 interface')
.material
.irho = Parameter(6.93684e-05, name='B1 irho')
.rho = Parameter(0.340309, name='B1 rho')
.thickness = Parameter(316.991, name='B1 thickness')
[3]
.interface = Parameter(33.2095, name='M2 interface')
.material
.irho = Parameter(6.93684e-05, name='M2 irho')
.rho = Parameter(1.73106, name='M2 rho')
.thickness = Parameter(1052.77, name='M2 thickness')
[4]
.interface = Parameter(20.6753, name='M1 interface')
.material
.irho = Parameter(0.00137419, name='M1 irho')
.rho = Parameter(4.02059, name='M1 rho')
.thickness = Parameter(567.547, name='M1 thickness')
[5]
.interface = Parameter(4.24661e-11, name='V interface')
.material
.irho = Parameter(0, name='V irho')
.rho = Parameter(0, name='V rho')
.thickness = Parameter(0, name='V thickness')
.thickness = stack thickness:2091.32
[chisq=2.16242, nllf=408.697]
Note that the parameters are reversed from the order in mlayer, so layer 0 is the substrate rather than the incident
medium. The graphical interface, refl1d_gui, allows you to adjust parameters and fit ranges before starting the fit, but
you can also do so from a script, as shown in De2_VATR.py:
from refl1d.names import *
from refl1d.stajconvert import load_mlayer
problem = FitProblem(M)
problem.name = "Desorption 2"
Staj file constraints are ignored, but you can get similar functionality by setting parameters to equal expressions of other
parameters. You can even constrain one staj file to share parameters with another by setting, for example:
M1 = load_mlayer("De1_VATR.staj")
M2 = load_mlayer("De2_VATR.staj")
M1.sample[3].thickness = M2.sample[3].thickness
problem = MultiFitProblem([M1,M2])
Dura, J. A. et al. Porous Mg formation upon dehydrogenation of MgH2 thin films. Journal of Applied Physics 109,
093501 (2011).
This example reuses the spin-valve model for a completely unrelated measurement. The goal is to demonstrate loading
of four-column data files (Q, R, dR, dQ) produced by the NCNR reductus reduction program.
The following is copied directly from the spin-valve example.
sample[2].thickness.pmp(20)
sample[2].magnetism.rhoM.pmp(20)
sample[2].magnetism.interface_below.range(0, 10)
sample[2].magnetism.interface_above.range(0, 10)
sample[3].magnetism.interface_above.range(0, 10)
sample[5].magnetism.interface_below.range(0, 10)
sample[5].magnetism.interface_above.range(0, 10)
Here’s the new loader. It is much simplified, since the reduction computes the appropriate ∆𝑄 for the data points and we
don’t need to specify the slit openings and distances for the data set. The options to the refl1d.probe.load4()
function allow you to override things during load, such as the sample broadening of the resolution.
probe = load4("refl.txt")
experiment = Experiment(probe=probe, sample=sample, dz=0.3, dA=None, interpolation=10)
problem = FitProblem(experiment)
To be sure that the analysis software supports ill-posed problems, we need to present it with problems that we know
to be ill-posed. In this example we will look at a film with two layers composed of identical materials. The uncertainty
analysis should show perfect anticorrelation across the entire parameter range.
Since silicon and air are defined, the only material we need to define is nickel.
numpy.random.seed(5)
We need one model with two layers, which together should sum to 200 Å. Because the interface does not extend
beyond one layer, we cannot shrink either layer down to zero and preserve chisq, so the parameter values will not dip
much below the roughness at the ends of the layer.
sample[0].interface.range(0, 20)
sample[1].interface.range(0, 20)
sample[2].interface.range(0, 20)
sample[1].thickness.range(0, 400)
sample[2].thickness.range(0, 400)
T = numpy.linspace(0, 2, 200)
probe = NeutronProbe(T=T, dT=0.01, L=4.75, dL=0.0475)
M = Experiment(sample=sample, probe=probe)
M.simulate_data(noise=5)
problem = FitProblem(M)
numpy.random.seed(5)
sample[0].interface.range(0, 20)
sample[1].interface.range(0, 20)
sample[2].interface.range(0, 20)
sample[1].thickness.range(0, 400)
sample[2].thickness.range(0, 400)
T = numpy.linspace(0, 2, 200)
probe = NeutronProbe(T=T, dT=0.01, L=4.75, dL=0.0475)
M = Experiment(sample=sample, probe=probe)
M.simulate_data(noise=5)
problem = FitProblem(M)
We can test how well the fitter can recover the original model by running refl1d with --random:
Reflectometry layers can be arbitrary functions. This is a rather arbitrary example, with a sinusoidal nuclear profile
and an exponential magnetic profile. A simulated dataset is generated from the model.
import numpy as np
from numpy import sin, pi, log, exp, hstack
from bumps.util import push_seed
FunctionalProfile and FunctionalMagnetism are already available from refl1d.names, but a couple of aliases make them
a little easier to access.
Define the magnetic profile. Like the nuclear profile, the first parameter is z and the remaining parameters become
fittable parameters. The returned value is rhoM or the pair rhoM, thetaM, with thetaM defaulting to 0 if it is not
returned. Either rhoM or thetaM can be constant.
.. math::

    \rho_M(z) = \begin{cases}
        C        & z < z_1 \\
        r e^{kz} & z_1 \le z < z_2 \\
        az + b   & z_2 \le z \le z_{\rm end}
    \end{cases}

where :math:`C = M_1`, :math:`r,k` are set such that :math:`re^{kz_1} = M_1` and
:math:`re^{kz_2} = M_2`, and :math:`a,b` are set such that :math:`az_2 + b = M_2`
and :math:`az_{\rm end} + b = M_3`.
"""
# Make sure z1 < z2, swapping them if necessary. Note that in the
# posterior probability this will set P(z1, z2) = P(z2, z1) always.
if z1 > z2:
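Putting the docstring's pieces together, the magnetic profile might look like the following stand-alone sketch. The names M1, M2, M3, z1, z2 follow the text; this is an illustration of the piecewise construction, not the example's actual code:

```python
import numpy as np

def magnetic_profile(z, z1, z2, M1, M2, M3):
    # Piecewise profile: constant M1 below z1, exponential from M1 (at z1)
    # to M2 (at z2), then linear from M2 (at z2) to M3 at the end of the layer.
    z = np.asarray(z, dtype=float)
    if z1 > z2:                      # swap so that z1 < z2
        z1, z2 = z2, z1
    k = np.log(M2 / M1) / (z2 - z1)  # r*exp(k*z1) = M1, r*exp(k*z2) = M2
    r = M1 * np.exp(-k * z1)
    a = (M3 - M2) / (z[-1] - z2)     # a*z2 + b = M2, a*z_end + b = M3
    b = M2 - a * z2
    return np.where(z < z1, M1,
           np.where(z < z2, r * np.exp(k * z), a * z + b))
```

The exponential is continuous with the constant section at z1 and with the linear section at z2, so the swap of z1 and z2 leaves the profile unchanged.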
The functional layer is a normal layer which can be stacked into the model. flayer.start and flayer.end are material
objects whose rho/irho values correspond to the complex rho + j*irho value returned by the function at the start and
end of the layer. Similarly, magnetism.start and magnetism.end return a magnetic layer defined by the start and end of
the magnetic profile.
sample = (silicon(0, 5)
| flayer
| flayer.end(35, 15, magnetism=flayer.magnetism.end)
| air)
We need to be able to compute the thickness of the functional magnetic layer, which unfortunately requires the layer
stack and an index. The index can be a layer number, a layer name, or, if there are multiple layers with the same name,
a (layer name, k) pair, where the magnetism is attached to the kth layer with that name.
flayer.magnetism.set_anchor(sample, 'sin')
Set the fittable parameters. As noted above, the arguments to the profile function after the first argument z become the fittable parameters.
sample['sin'].period.range(0, 100)
sample['sin'].phase.range(0, 1)
sample['sin'].thickness.range(0, 1000)
sample['sin'].magnetism.M1.range(0, 10)
sample['sin'].magnetism.M2.range(0, 10)
sample['sin'].magnetism.M3.range(0, 10)
sample['sin'].magnetism.z1.range(0, 100)
sample['sin'].magnetism.z2.range(0, 100)
Define the model. Since this is a simulation, we need to define the incident beam in terms of angles, wavelengths and
dispersion. This gets attached to the model forming an experiment. Finally, we simulate data for the experiment with
5% dR/R. We set the seed for the simulation so that the result is reproducible. We could instead set the seed to None
so that it pulls a random seed from entropy.
T = np.linspace(0, 5, 100)
xs = [NeutronProbe(T=T, dT=0.01, L=4.75, dL=0.0475, name=name)
for name in ("--", "-+", "+-", "++")]
probe = PolarizedNeutronProbe(xs)
problem = FitProblem(M)
The model can also accept a noise level and a random number seed. Noise defaults to 3%. If no seed is given, a random
seed is generated and printed so that the model can be regenerated.
To test the fitting engine, you will want to use --shake to set a random initial value before starting the fit:
refl1d model.py 3 --shake --fit=amoeba
You will find that the amoeba fitter does not work well for random models. Dream performs a bit better, and is able to
recover models of 1-2 layers.
The --simrandom method is not very good for reflectometry models, where we would rather have layer thicknesses
distributed as exponential values (occasional thick layers, lots of thinner layers), and with roughness small compared to
the layer thickness. The --simrandom option will still work, overriding the parameters we generate with uniformly
distributed values.
There may be a more realistic choice for generated rho values than uniform in [-2, 10]; this may provide an unusual
amount of contrast. Still, it is a good enough starting point, and does lead to some models with low contrast in neigh-
bouring layers.
Set the seed for the random number generator. Later we will print the seed, even if it was not set explicitly, so that
interesting profiles can be regenerated.
np.random.seed(seed)
Set up a model with the desired number of layers. We will set the layer thickness and interfaces later.
sample[0].interface.range(0, 200)
for L in layers:
L.material.rho.range(-2, 10)
L.thickness.range(0, 1000)
L.interface.range(0, 200)
T = numpy.linspace(0.1, 5, 100)
probe = NeutronProbe(T=T, dT=0.01, L=4.75, dL=0.0475)
M = Experiment(probe=probe, sample=sample)
problem = FitProblem(M)
Set random values for rho. This also sets thickness and interfaces, but these will be ignored.
problem.randomize()
Generate layer thicknesses with a film thickness of about 400 Å, but lots of variability in layer sizes. Layers are limited
to 950 Å so that the fit range can work. The exponential distribution isn't suitable for single-layer systems.
for L in layers:
L.thickness.value = (min(np.random.exponential(400./np.sqrt(n)), 950)
if n > 1 else np.random.uniform(5, 950))
Set interface limits based on the neighbouring layer thicknesses, with the substrate and surface treated as having infinite
thickness. Choose an interface of at least 1 Å.
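The exact rule used for the limits is elided here, but one plausible sketch, assuming the interface is capped at half the thinner neighbouring layer and never falls below 1 Å:

```python
from math import inf

def interface_limits(thicknesses):
    # Hypothetical rule (for illustration): cap each interface at half the
    # thinner of its two neighbouring layers, but never below 1 Å; the
    # substrate and surface are treated as semi-infinite.
    t = [inf] + list(thicknesses) + [inf]
    return [max(1.0, min(t[i], t[i + 1]) / 2) for i in range(len(t) - 1)]
```

This keeps each interface small compared to the layers it separates, as the text requires.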
problem.simulate_data(noise=noise)
print("seed: %d"%seed)
print("target chisq: %s"%problem.chisq_str())
print(problem.summarize())
The materials are stacked as usual, but the layers with magnetism have an additional magnetism property specified.
This example uses refl1d.magnetism.Magnetism to define a flat magnetic layer with the given magnetic scattering
length density rhoM and angle thetaM.
The magnetism is anchored to the corresponding nuclear layer, and by default will have the same thickness and interface.
The magnetic interface can be shifted relative to the nuclear interface using dead_below and dead_above. These can
be negative, allowing the magnetism to extend beyond the nuclear layer. The magnetic interface can also be varied
independently by using interface_above and interface_below, as in the example below. Note that interface_below is
ignored in consecutive layers, much like the nuclear layers, for which the interface attribute indicates the interface
above. Using extent=2, the single magnetism definition can extend over two consecutive layers.
The refl1d.magnetism.MagnetismTwist class allows you to define a magnetic layer whose values of theta and rho
change linearly throughout the layer. There are additional magnetism types defined in refl1d.magnetism. Note
that the current definition of interface only transitions smoothly into and out of layers with constant magnetism. This
behaviour may change in newer releases.
sample[2].thickness.pmp(20)
sample[2].magnetism.rhoM.pmp(20)
sample[2].magnetism.interface_below.range(0,10)
sample[2].magnetism.interface_above.range(0,10)
sample[3].magnetism.interface_above.range(0,10)
sample[5].magnetism.interface_below.range(0,10)
sample[5].magnetism.interface_above.range(0,10)
instrument = NCNR.NG1(slits_at_Tlo=0.1)
probe = instrument.load_magnetic("n101Gc1.reflA")
We are going to compare the calculated reflectivity given two different step sizes on the profile. Steps of dz=0.3 are
good enough for this example, in that finer steps will not significantly change 𝜒2. Steps of dz=2, however, are significantly
different. You can see the difference by looking at the spin asymmetry curves for the model rendered with dz=2 and
dz=0.3 as we do below. The reflectivity calculation time scales inversely with the step size, so you may want to use a
large step size for your initial fits and a smaller step size later. The dA parameter ought to give the best of both worlds,
using a finer step size where the profile is changing quickly and a coarser step size elsewhere, but it is currently broken
and disabled below.
3 User's Guide
Refl1D is a complex piece of software hiding some simple mathematics. The reflectivity of a sample is a simple function
of its optical transfer matrix 𝑀. By slicing the sample into uniform layers, each of which has a transfer matrix 𝑀𝑖,
we can estimate the transfer matrix for a depth-varying sample using 𝑀 = ∏ 𝑀𝑖. We can adjust the properties of the
individual layers until the measured reflectivity best matches the calculated reflectivity.
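For specular reflectivity the matrix product is equivalent to the Parratt recursion, which is compact enough to sketch directly. This is a bare-bones illustration for sharp interfaces and real SLDs, not refl1d's optimized kernel; the check below assumes vacuum fronting over a bare silicon substrate (SLD 2.07e-6 /Å²):

```python
import numpy as np

def parratt(Q, rho, d):
    """Specular reflectivity of a slab stack by Parratt recursion.

    rho -- SLDs [fronting, layers..., substrate] in 1/Å^2
    d   -- matching thicknesses in Å (fronting/substrate entries unused)
    """
    kz0 = np.asarray(Q, dtype=float) / 2.0
    # normal wavevector component in each medium, relative to the fronting
    kz = [np.sqrt(np.asarray(kz0**2 - 4*np.pi*(r - rho[0]), dtype=complex))
          for r in rho]
    R = np.zeros(kz0.shape, dtype=complex)   # no wave returns from substrate
    for j in range(len(rho) - 2, -1, -1):    # recurse from the substrate up
        r_j = (kz[j] - kz[j+1]) / (kz[j] + kz[j+1])
        phase = np.exp(2j * kz[j+1] * d[j+1])
        R = (r_j + R*phase) / (1 + r_j*R*phase)
    return np.abs(R)**2
```

Below the critical edge the recursion returns total reflection, and at high Q it falls off rapidly, as expected for a single interface.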
The complexity comes from multiple sources:
• Determining depth structure from reflectivity is an inverse problem requiring a search through a landscape with
multiple minima, whose global minimum is small and often in an unpromising region.
• The solution is not unique: multiple minima may be equally valid solutions to the inversion problem.
• The measurement is sensitive to nuisance parameters such as sample alignment. That means the analysis program
must include data reduction steps, making data handling complicated.
• The models are complex. Since the ideal profile is not unique and is difficult to locate, we often constrain our
search to feasible physical models to limit the search space, and to account for information from other sources.
• The reflectivity is dependent on the type of radiation used to probe the sample and even its energy.
Using Refl1D
Model scripts associate a sample description with data and fitting options to define the system you wish to
refine.
Parameters
The adjustable values in each component of the system are defined by Parameter objects. When you set
the range on a parameter, the system will be able to automatically adjust the value in order to find the best
match between theory and data.
Data Representation
Data is loaded from instrument-specific file formats into a generic Probe. The probe object manages
the data view and, by extension, the view of the theory. The probe object also knows the measurement
resolution, and controls the set of theory points that must be evaluated in order to compute the expected
value at each point.
Materials
The strength of the interaction can be represented either in terms of the scattering length density using
SLD, or by the chemical formula using Material, with scattering length density computed from the
information in the probe. Mixture can be used to make a composite material whose parts vary by mass
or by volume.
Sample Representation
Materials are composed into samples, usually as a Stack of Slabs layers, but more specific profiles such
as PolymerBrush are available. Freeform sections of the profile can be described using FreeLayer,
allowing arbitrary scattering length density profiles within the layer, or FreeInterface allowing arbitrary
transitions from one SLD to another. New layer types can be defined by subclassing Layer.
Experiment
Sample descriptions and data sets are combined into an Experiment object, allowing the program to
compute the expected reflectivity from the sample and the probability that reflectivity measured could
have come from that sample. For complex cases, where the sample varies on a length scale larger than the
coherence length of the probe, you may need to model your measurement with a CompositeExperiment.
Fitting
One or more experiments can be combined into a FitProblem. This is then given to one of the many
fitters, such as PTFit, which adjusts the varying parameters, trying to find the best fit. PTFit can also be
used for Bayesian analysis in order to estimate the confidence with which the parameter values are known.
The Refl1D library is organized into modules. Specific functions and classes can be imported from a module, such as:
The most common imports have been gathered together in refl1d.names. This allows you to use names like Slab
directly:
This pattern of importing all names from a file, while convenient for simple scripts, makes the code more difficult to
understand later, and can lead to unexpected results when the same name is used in multiple modules. A safer, though
more verbose pattern is to use:
This documents to the reader unfamiliar with your code (such as you when looking at your model files two years from
now) exactly where the name comes from.
3.2 Parameters
• Simulated probes
• Loading data
• Viewing data
• Instrument Resolution
• Applying Resolution
• Back reflectivity
• Alignment offset
• Scattering Factors
Data is represented using Probe objects. The probe defines the Q values and the resolution of the individual measure-
ments, returning the scattering factors associated with the different materials in the sample. If the measurement has
already been performed, the probe stores the measured reflectivity and its estimated uncertainty.
Probe objects are independent of the underlying instrument. When data is loaded, it is converted to angle (𝜃, ∆𝜃),
wavelength (𝜆, ∆𝜆) and reflectivity (𝑅, ∆𝑅), with NeutronProbe used for neutron radiation and XrayProbe used
for X-ray radiation. Additional properties of the measurement are retained as well. Knowing the angle, for example,
is necessary to correct for errors in sample alignment.
For time-of-flight measurements, each angle should be represented as a different probe. This eliminates the 'stitching'
problem, where 𝑄 = 4𝜋 sin(𝜃1)/𝜆1 = 4𝜋 sin(𝜃2)/𝜆2 for some (𝜃1, 𝜆1) and (𝜃2, 𝜆2). With stitching, it is impossible
to account for effects such as alignment offset, since two nominally identical Q values will in fact be different. No
information is lost by treating the two data sets separately; each point will contribute to the overall cost function in
accordance with its statistical weight.
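The effect is easy to demonstrate numerically: two configurations with identical nominal Q move to different true Q under a small alignment offset. The angles, wavelengths and offset below are hypothetical:

```python
import numpy as np

def Q_value(theta_deg, wavelength):
    # Q = 4*pi*sin(theta)/lambda, with theta in degrees and lambda in Å
    return 4.0 * np.pi * np.sin(np.radians(theta_deg)) / wavelength

theta1, lam1 = 1.0, 4.75   # hypothetical configuration 1
theta2 = 2.0               # hypothetical configuration 2
# choose lam2 so that the nominal Q values coincide exactly
lam2 = lam1 * np.sin(np.radians(theta2)) / np.sin(np.radians(theta1))

Q_nominal1, Q_nominal2 = Q_value(theta1, lam1), Q_value(theta2, lam2)

# a small alignment offset moves the two points to different true Q
offset = 0.005  # degrees
Q_true1 = Q_value(theta1 + offset, lam1)
Q_true2 = Q_value(theta2 + offset, lam2)
```

Keeping the two angles as separate probes lets a fitted theta_offset shift each point correctly instead of forcing them onto one Q grid.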
The probe object controls the plotting of the theory and data curves. This is because it is the probe that knows details
such as the original data points and the points used in the calculation.
The refl1d.probe.Probe object has a couple of attributes for controlling the plot. These attributes are usually set
directly on the class rather than the individual data sets so they apply uniformly.
Probe.view can be set to one of linear, log, q4, fresnel or logfresnel. The q4 view divides the reflectivity by the ideal
reflectivity of a perfect sharp interface, which falls off as 𝑞⁻⁴, plotting 𝑅/(𝐼0 𝑞⁻⁴ + 𝐵) after correcting for intensity
and background. The fresnel view divides the reflectivity of the film by the Fresnel reflectivity of the substrate; in
addition to intensity and background, it also applies the resolution function to the Fresnel calculation prior to division.
The view does not affect the fit, which uses the uncertainty on the data points when evaluating the likelihood of the model.
Probe.show_resolution can be True to show the resolution of each data point as a horizontal bar on the data point, or
False to hide it. The default is True so that you can first assess the resolution of the loaded data before trying to fit it;
after verifying that it looks reasonable, set it to False so that the graphs are not so busy.
Probe.plot_shift is the number of pixels to shift each data set when plotting multiple data sets on the same plot.
Probe.residuals_shift is the number of pixels to shift each data set when plotting multiple residuals on the same plot.
With the instrument in a given configuration (𝜃𝑖 = 𝜃𝑓 , 𝜆), each neutron that is received is assigned to a particular 𝑄
based on the configuration. However, these values are only nominal. For example, a monochromator lets in a range of
wavelengths, and slits permit a range of angles. In effect, the reflectivity measured at the configuration corresponds to
a range of 𝑄.
For monochromatic instruments, the wavelength resolution is fixed and the angular resolution varies. For polychromatic
instruments, the wavelength resolution varies and the angular resolution is fixed. Resolution functions are defined in
refl1d.resolution.
The angular resolution is determined by the geometry (slit positions, openings and sample profile) with perhaps an
additional contribution from sample warp. For monochromatic instruments, measurements are taken with fixed slits at
low angles until the beam falls completely onto the sample. Then as the angle increases, slits are opened to preserve
full illumination. At some point the slit openings exceed the beam width, and thus they are left fixed for all angles
above this threshold.
When the sample is tiny, stray neutrons miss the sample and are not reflected onto the detector. This results in a
resolution that is tighter than expected given the slit openings. If the sample width is available, we can use that to
determine how much of the beam is intercepted by the sample, which we then use as an alternative second slit. This
simple calculation isn’t quite correct for very low 𝑄, but data in this region will be contaminated by the direct beam,
so we won’t be using those points.
When the sample is warped, it may act to either focus or spread the incident beam. Some samples are diffuse scatterers,
which also acts to spread the beam. The degree of spread can be estimated from the full-width at half max (FWHM)
of a rocking curve at known slit settings. The expected FWHM will be ½ (𝑠1 + 𝑠2)/(𝑑1 − 𝑑2). The difference between
this and the measured FWHM is the sample_broadening value. A second order effect is that at low angles the warping
will cast shadows, changing the resolution and intensity in very complex ways.
For time of flight instruments, the wavelength dispersion is determined by the reduction process which usually bins the
time channels in a way that sets a fixed relative resolution ∆𝜆/𝜆 for each bin.
Resolution in Q is computed from uncertainty in wavelength 𝜎𝜆 and angle 𝜎𝜃 using propagation of errors:
𝜎𝑄² = |𝜕𝑄/𝜕𝜆|² 𝜎𝜆² + |𝜕𝑄/𝜕𝜃|² 𝜎𝜃² + 2 |𝜕𝑄/𝜕𝜆| |𝜕𝑄/𝜕𝜃| 𝜎𝜆𝜃
where
𝑄 = 4𝜋 sin(𝜃)/𝜆
𝜕𝑄/𝜕𝜆 = −4𝜋 sin(𝜃)/𝜆² = −𝑄/𝜆
𝜕𝑄/𝜕𝜃 = 4𝜋 cos(𝜃)/𝜆 = cos(𝜃) · 𝑄/sin(𝜃) = 𝑄/tan(𝜃)
With no correlation between wavelength dispersion and angular divergence, 𝜎𝜆𝜃 = 0, yielding the traditional form:
(∆𝑄/𝑄)² = (∆𝜆/𝜆)² + (∆𝜃/tan(𝜃))²
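The traditional form translates directly into a few lines of Python. This is a simplified stand-in for the functions in refl1d.resolution, not the refl1d implementation itself; angles are in radians and widths are 1-𝜎 values:

```python
import math

def dQ_over_Q(T, dT, dLoL):
    """Relative Q resolution from 1-sigma angular divergence dT at angle T
    and relative wavelength dispersion dLoL = dL/L.
    Implements (dQ/Q)^2 = (dL/L)^2 + (dT/tan(T))^2."""
    return math.sqrt(dLoL**2 + (dT / math.tan(T))**2)

def dQ(Q, T, dT, dLoL):
    """Absolute 1-sigma Q resolution, given Q = 4 pi sin(T)/L."""
    return abs(Q) * dQ_over_Q(T, dT, dLoL)
```

With zero angular divergence, ∆𝑄/𝑄 reduces to ∆𝜆/𝜆, as expected from the formula.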
Wavelength dispersion ∆𝜆/𝜆 is usually constant (e.g., for AND/R it is 2% FWHM), but it can vary on time-of-flight
instruments depending on how the data is binned.
Angular divergence 𝛿𝜃 comes primarily from the slit geometry, but can have broadening or focusing due to a warped
sample. The FWHM divergence in radians due to slits is:
∆𝜃slits = ½ (𝑠1 + 𝑠2)/(𝑑1 − 𝑑2)
where 𝑠1 , 𝑠2 are slit openings edge to edge and 𝑑1 , 𝑑2 are the distances between the sample and the slits. For tiny
samples of width 𝑚, the sample itself can act as a slit. If 𝑠 = 𝑚 sin(𝜃) is smaller than 𝑠2 for some 𝜃, then use:
∆𝜃slits = ½ (𝑠1 + 𝑚 sin(𝜃))/𝑑1
The sample broadening can be read off a rocking curve using:
∆𝜃sample = 𝑤 − ∆𝜃slits
where 𝑤 is the measured FWHM of the peak in degrees. Broadening can be negative for concave samples which have
a focusing effect on the beam. This constant should be added to the computed ∆𝜃 for all angles and slit geometries.
You will not usually have this information on hand, but you can leave space for users to enter it if it is available.
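The slit formulas above, including the tiny-sample correction and the constant sample broadening term, might be sketched as follows. This is illustrative pure Python, not the refl1d implementation, and the parameter names are my own:

```python
import math

def divergence_fwhm(theta, s1, s2, d1, d2,
                    sample_width=None, sample_broadening=0.0):
    """FWHM angular divergence (radians) due to slit geometry.

    theta is the incident angle in radians, s1 and s2 are the slit
    openings, d1 and d2 are the slit-to-sample distances (d1 > d2),
    and sample_width is the sample size m; all lengths in the same
    units. sample_broadening is the constant read off a rocking curve.
    """
    dT = 0.5 * (s1 + s2) / (d1 - d2)
    if sample_width is not None:
        # A tiny sample intercepts only part of the beam, acting as an
        # alternative second slit of opening m*sin(theta).
        s_sample = sample_width * math.sin(theta)
        if s_sample < s2:
            dT = 0.5 * (s1 + s_sample) / d1
    return dT + sample_broadening
```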
FWHM can be converted to 1-𝜎 resolution using the scale factor 1/√(8 ln 2).
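In code, the conversion is just a constant factor:

```python
import math

# 1-sigma width of a Gaussian whose FWHM is 1: 1/sqrt(8 ln 2) ~ 0.4247
FWHM_TO_SIGMA = 1.0 / math.sqrt(8.0 * math.log(2.0))

def fwhm_to_sigma(fwhm):
    """Convert a FWHM width to the equivalent Gaussian 1-sigma width."""
    return fwhm * FWHM_TO_SIGMA
```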
With opening slits we assume ∆𝜃/𝜃 is held constant, so if you know 𝑠 and 𝜃𝑜 at the start of the opening slits region
you can compute ∆𝜃/𝜃𝑜 and later scale that to your particular 𝜃. Because 𝑑 is fixed, this means 𝑠1(𝜃) = 𝑠1(𝜃𝑜) · 𝜃/𝜃𝑜 and
𝑠2(𝜃) = 𝑠2(𝜃𝑜) · 𝜃/𝜃𝑜.
The instrument resolution is applied to the theory calculation on a point by point basis using a value of ∆𝑄 derived
from ∆𝜆 and ∆𝜃. Assuming the resolution is well approximated by a Gaussian, convolve applies it to the calculated
theory function.
The convolution at each point 𝑘 is computed from the piece-wise linear function 𝑅̄𝑖(𝑞) defined by the reflectivity
𝑅(𝑄𝑖) computed at points 𝑄𝑖 ∈ 𝑄calc:
𝑅̄𝑖(𝑞) = 𝑚𝑖 𝑞 + 𝑏𝑖
𝑚𝑖 = (𝑅𝑖+1 − 𝑅𝑖)/(𝑄𝑖+1 − 𝑄𝑖)
𝑏𝑖 = 𝑅𝑖 − 𝑚𝑖 𝑄𝑖
The range 𝑖min to 𝑖max for point 𝑘 is defined to be the first 𝑖 such that 𝐺𝑘 (𝑄𝑖 ) < 0.001, which is about 3∆𝑄𝑘 away
from 𝑄𝑘 .
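As a concrete (if crude) sketch of this convolution: the version below uses simple Gaussian point weights truncated where the kernel drops below 0.001, rather than the exact integral over the piece-wise linear function that refl1d performs, so it is only an approximation of the real calculation:

```python
import math

def gaussian_convolve(Qcalc, Rcalc, Q, dQ):
    """Smear theory points (Qcalc, Rcalc) with a Gaussian of 1-sigma
    width dQ[k] centered at each measurement point Q[k].  The kernel is
    truncated where its weight falls below 0.001 (about 3.7 dQ)."""
    result = []
    for Qk, dQk in zip(Q, dQ):
        total = weight = 0.0
        for q, r in zip(Qcalc, Rcalc):
            w = math.exp(-0.5 * ((q - Qk) / dQk) ** 2)
            if w > 0.001:
                total += w * r
                weight += w
        result.append(total / weight)
    return result
```

Convolving a constant reflectivity returns that constant, which is a quick sanity check on the kernel normalization.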
By default the calculation points 𝑄calc are the same nominal 𝑄 points at which the reflectivity was measured. If the data
was measured densely enough, then the piece-wise linear function 𝑅̄ will be a good approximation to the underlying
reflectivity. There are two places in particular where this assumption breaks down. One is near the critical edge for a
sample that has sharp interfaces, where the reflectivity drops precipitously. The other is in thick samples, where the
Kiessig fringes are so close together that the instrument cannot resolve them separately.
The method Probe.critical_edge() fills in calculation points near the critical edge. Points are added linearly around
𝑄𝑐 for a range of ±𝛿𝑄𝑐. Thus, if the backing medium SLD or the theta offset are allowed to vary a little during the
fit, the region around the critical edge will still be densely sampled. The method Probe.oversample() fills in calculation
points around every point, giving each computed reflectivity value a firm basis of support.
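For example (a sketch only; the argument names and values shown are illustrative, so check the refl1d.probe reference for the exact signatures):

```
>> probe.critical_edge(substrate=silicon, n=51)
>> probe.oversample(n=20)
```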
While the assumption of Gaussian resolution is reasonable on fixed wavelength instruments, it is less so on time of flight
instruments, which have asymmetric wavelength distributions. You can explore the effects of different distributions
by subclassing Probe and overriding the _apply_resolution method. We will happily accept code for improved
resolution calculators and non-gaussian convolution.
While reflectivity is usually measured from the sample surface, there are many instances where the beam comes instead
through the substrate. For example, when the sample is soaked in water or D2O, a neutron beam will not penetrate
well and it is better to measure the sample through the substrate. Rather than reversing the sample representation, these
datasets can be flagged with the attribute back_reflectivity=True, and the sample constructed from substrate to surface
as usual.
When the beam enters the side of the substrate, there is a small refractive shift in 𝑄 based on the angle of the beam
relative to the side of the substrate. The refracted beam reflects off the reversed film then exits the substrate on the
other side, with an opposite refractive shift. Depending on the absorption coefficient of the substrate, the beam will be
attenuated in the process.
The refractive shift and the reversing of the film are automatically handled by the underlying reflectivity calculation.
You can even combine measurements through the sample surface and the substrate into a single measurement, with
negative 𝑄 values representing the transition from surface to substrate. This is not uncommon with magnetic thin film
samples.
Usually the absorption effects of the substrate are accounted for by measuring the incident beam through the same
substrate before normalizing the reflectivity. There is a slight difference in path length through the substrate depending
on angle, but it is not significant. When this is not the case, particularly for measurements which cross from the surface
to substrate in the same scan, an additional back_absorption parameter can be used to scale the back reflectivity relative
to the surface reflectivity. There is an overall intensity parameter which scales both the surface and the back reflectivity.
The interaction between back_reflectivity, back_absorption, sample representation and 𝑄 value can be somewhat tricky.
It can sometimes be difficult to align the sample, particularly on X-ray instruments. Unfortunately, a misaligned sample
can lead to an error in the measured position of the critical edge. Since the statistics for the measurement are very good in
this region, the effects on the fit can be large. By representing the angle directly, an alignment offset can be incorporated
into the reflectivity calculation. Furthermore, the uncertainty in the alignment can be estimated from the alignment
scans, and this information incorporated directly into the fit. Without the theta offset correction you would need to
compensate for the critical edge by allowing the scattering length density of the substrate to vary during the fit, but this
would lead to incorrectly calculated reflectivity for the remaining points. For example, the simulation toffset.py
shows more than 5% error in reflectivity for a silicon substrate with a 0.005∘ offset.
The method Probe.alignment_uncertainty computes the uncertainty in alignment from the information in a
rocking curve. The alignment itself comes from the peak position in the rocking curve, with uncertainty determined
from the uncertainty in the peak position. Note that this is not the same as the width of the peak; the peak stays
roughly the same width as statistics are improved, but the uncertainty in position and width will decrease.1 There is an
additional uncertainty in alignment due to motor step size, easily computed from the variance in a uniform distribution.
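The motor-step contribution follows from the variance of a uniform distribution: a value uniform over a step of width 𝑑 has variance 𝑑²/12, so the 1-𝜎 contribution is 𝑑/√12. A one-line sketch (illustrative, not the refl1d routine):

```python
import math

def motor_step_uncertainty(d):
    """1-sigma alignment uncertainty from motor step size d (radians):
    the true angle is uniform within one step, so variance is d**2/12."""
    return d / math.sqrt(12.0)
```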
1 M.R. Daymond, P.J. Withers and M.W. Johnson, "The expected uncertainty of diffraction-peak location", Appl. Phys. A 74 [Suppl.], S112-S114 (2002). http://dx.doi.org/10.1007/s003390201392
where 𝑤 is the full-width of the peak in radians at half maximum, 𝐼 is the integrated intensity under the peak and 𝑑 is
the motor step size in radians.
The effective scattering length density of the material is dependent on the composition of the material and on the type
and wavelength of the probe object. Using the chemical formula, scattering_factors computes the scattering
factors (𝜌, 𝜌𝑖 , 𝜌inc ) associated with the material. This means the same sample representation can be used for X-ray
and neutron experiments, with mass density as the fittable parameter. For energy dependent materials (e.g., Gd for
neutrons), scattering factors will be returned for all of the energies in the probe. (Note: energy dependent neutron
scattering factors are not yet implemented in periodictable.)
The returned scattering factors are normalized to density = 1 g·cm⁻³. To use these values in the calculation of
reflectivity, they need to be scaled by density and volume fraction. Using normalized density, the value returned by
scattering_factors can be cached so only one lookup is necessary during the fit even when density is a fitting parameter.
The material itself can be flagged to use the incoherent scattering factor 𝜌inc which is by default ignored.
Magnetic scattering factors for the material are not presently available in periodictable. Interested parties may consider
extending periodictable with magnetic scattering information and adding support to PolarizedNeutronProbe.
3.4 Materials
Because this is elemental nickel, we already know its density. For compounds such as 'SiO2' we would have to specify
an additional density=2.634 parameter.
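For example, the two cases might be constructed as follows (a sketch; Material is assumed to be importable from refl1d.names):

```
>> from refl1d.names import Material
>> nickel = Material('Ni')
>> quartz = Material('SiO2', density=2.634)
```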
Common materials defined in materialdb:
air, water, silicon, sapphire, . . .
Specific elements, molecules or mixtures can be added using the classes in refl1d.material:
SLD — unknown material with fittable SLD
Material — known chemical formula and fittable density
Mixture — known alloy or mixture with fittable fractions
• Stacks
• Multilayers
• Interfaces
• Slabs
• Magnetic layers
• Polymer layers
• Functional layers
• Freeform layers
– Comparison of models
– Future work
• Subclassing Layer
3.5.1 Stacks
Reflectometry samples consist of 1-D stacks of layers joined by error function interfaces. The layers themselves may
be uniform slabs, or the scattering density may vary with depth in the layer. The first layer in the stack is the substrate
and the final layer is the surface. Surface and substrate are assumed to be semi-infinite, with any thickness ignored.
3.5.2 Multilayers
3.5.3 Interfaces
The interface between layers is assumed to smoothly follow an error function profile to blend the layer above with the
layer below. The interface value is the 1-𝜎 Gaussian roughness. Adjacent flat layers with zero interface will act like a
step function, while positive values will introduce blending between the layers.
Blending is usually done with the Nevot-Croce formalism, which scales the reflection coefficient between two layers by
exp(−2 𝑘𝑛 𝑘𝑛+1 𝜎²). We show both a step function profile for the interface, as well as the blended interface.
Note: The blended interface representation is limited to the neighbouring layers, and is not an accurate representation
of the effective reflectivity profile when the interface value is large relative to the thickness of the layer.
We will have a mechanism to force the use of the blended profile for direct calculation of the interfaces rather than
using the interface scale factor.
3.5.4 Slabs
Materials can be stacked as slabs, with a thickness for each layer and roughness at the top of each layer. Because this
is such a common operation, there is special syntax to do it, using ‘|’ as the layer separator and () to specify thickness
and interface. For example, the following is a 30 Å gold layer on top of silicon, with a silicon:gold interface of 5 Å and
a gold:air interface of 2 Å:
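The elided example presumably resembles the following sketch, assuming silicon, gold and air are available from refl1d.names and that layers take (thickness, interface) arguments as described above:

```
>> from refl1d.names import silicon, gold, air
>> sample = silicon(0, 5) | gold(30, 2) | air
```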
Individual layers and stacks can be used in multiple models, with all parameters shared except those that are explicitly
made separate. The syntax for doing so is similar to that for lists. For example, the following defines two samples, one
with Si+Au/30+air and the other with Si+Au/30+alkanethiol/10+air, with the silicon/gold layers shared:
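By analogy with list syntax, the shared-layer example might look like the following sketch. Here alkanethiol stands in for a user-defined material, and the slice sample1[:2] is assumed to reuse the silicon and gold layers of the first stack:

```
>> sample1 = silicon(0, 5) | gold(30, 10) | air
>> sample2 = sample1[:2] | alkanethiol(10, 3) | air
```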
Stacks can be repeated using a simple multiply operation. For example, the following gives a cobalt/copper multilayer
on silicon:
>> Cu = Material('Cu')
>> Co = Material('Co')
>> sample = Si | [Co(30) | Cu(10)]*20 | Co(30) | air
>> print(sample)
Si | [Co(30) | Cu(10)]*20 | Co(30) | air
Multiple repeat sections can be included, and repeats can contain repeats. Even freeform layers can be repeated. By
default the interface between the repeats is the same as the interface between the repeats and the cap. The cap interface
can be set explicitly. See model.Repeat for details.
Freeform profiles allow us to adjust the shape of the depth profile using control parameters. The profile can directly
represent the scattering length density as a function of depth (a FreeLayer), or the relative fraction of one material
and another (a FreeInterface). With a freeform interface you can simultaneously fit two systems which should share
the same volume profile but whose materials have different scattering length densities. For example, a polymer in
deuterated and undeuterated solvents can be simultaneously fit with freeform profiles.
We have multiple representations for freeform profiles, each with its own strengths and weaknesses:
• monotone cubic interpolation (refl1d.mono)
• parametric B-splines (refl1d.freeform)
• Chebyshev interpolating polynomials (refl1d.cheby)
At present, monotone cubic interpolation is the most developed, but work on all representations is in flux. In particular
not every representation supports all features, and the programming interface may vary. See the documentation for the
individual models for details.
Comparison of models
Future work
We only have polynomial spline representations for our profiles. Similar profiles could be constructed from different
basis functions such as wavelets, the idea being to find a multiscale representation of your profile and use model selection
techniques to determine the most coarse grained representation that matches your data.
Totally freeform representations as separately controlled microslab heights would also be interesting in the context of
a maximum entropy fitting engine: find the smoothest profile which matches the data, for some definition of ‘smooth’.
Some possible smoothness measures are the mean squared distance from zero, the number of sign changes in the second
derivative, the sum of the absolute value of the first derivative, the maximum flat region, the minimum number of flat
slabs, etc. Given that reflectometry inversion is not unique, the smoothness measure must correspond to the likelihood
of finding the system in that particular state: that is, don't expect your sample to show zebra stripes unless you are on
an African safari or visiting a zoo.
3.6 Experiment
• Direct Calculation
The Experiment object links a sample with an experimental probe. The probe defines the Q values and the resolution
of the individual measurements, and returns the scattering factors associated with the different materials in the sample.
Because our models allow representation based on composition, it is no longer trivial to compute the reflectivity from
the model. We now have to look up the effective scattering density based on the probe type and probe energy. You’ve
already seen this in Subclassing Layer: the render method for the layer requires the probe to look up the material
scattering factors.
3.7 Fitting
• Quick Fit
• Uncertainty Analysis
• Using the posterior distribution
• Reporting results
• Publication Graphics
• Tough Problems
• Command Line
• Other optimizers
• References
Obtaining a good fit depends foremost on having the correct model to fit.
Too many layers, too few layers, too limited fit ranges, too open fit ranges, all of these can make fitting difficult. For
example, forgetting the SiOx layer on the silicon substrate will distort the model of a polymer film.
Even with the correct model, there are systematic errors to address (see the data guide). A warped sample can lead
to broader resolution than expected near the critical edge, and sample_broadening=value must be specified when
loading the data. Small errors in alignment of the sample or the slits will move the measured critical edge, and so
probe.theta_offset may need to be fitted. Points near the critical edge are difficult to compute correctly with resolution
because the reflectivity varies so quickly. Using refl1d.probe.Probe.critical_edge(), the density of the points
used to compute the resolution near the critical edge can be increased. For thick samples the resolution will integrate
over multiple Kiessig fringes, and refl1d.probe.Probe.oversample() will be needed to average across them and
avoid aliasing effects.
While generating an appropriate model, you will want to perform a number of quick fits. The Nelder-Mead simplex
algorithm (fit=amoeba) works well for this. You will want to run it with steps between 1000 and 3000 so the algorithm
has a chance to converge. Restarting a number of times (somewhere between 3 and 100) gives a reasonably thorough
search of the fit space. From the graphical user interface (refl_gui), using starts=1 and clicking the fit button to improve
the fit as needed works pretty well. From the command line interface (refl_cli), the command line will be something
like:
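A sketch of such a command line (the store directory T1 and the step and start counts are examples, not required values):

```
$ refl1d model.py --fit=amoeba --steps=1000 --starts=3 --store=T1
```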
The command line result can be improved by using the previous fit value as the starting point for the next fit:
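For instance, assuming the previous fit was saved in T1, the --pars option reuses the stored parameter file as the new starting point (a sketch; the path is an example):

```
$ refl1d model.py --fit=amoeba --steps=1000 --pars=T1/model.par --store=T1
```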
Differential evolution (fit=de) and random lines (fit=rl) are alternatives to amoeba, perhaps a little more likely to find the
global minimum but somewhat slower. These are population based algorithms in which several points from the current
population are selected, and based on their position and value, a new point is generated. The population is specified
as a multiplier on the number of parameters in the model, so for example an 8 parameter model with DE’s default
population (pop=10) would create 80 points each generation. Random lines with a large population is fast but is not
good at finding isolated minima away from the general trend, so its population defaults to pop=0.5. These algorithms
can be called from the command line as follows:
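A sketch of the two invocations (population values follow the defaults described above; the store directory is an example):

```
$ refl1d model.py --fit=de --steps=3000 --pop=10 --store=T1
$ refl1d model.py --fit=rl --steps=3000 --pop=0.5 --store=T1
```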
More important than the optimal value of the parameters is an estimate of the uncertainty in those values. By casting
our problem as the likelihood of seeing the data given the model, we not only give ourselves the ability to incorporate
prior information into the fit systematically, but we also give ourselves a strong foundation for assessing the uncertainty
of the parameters.
Uncertainty analysis is performed using DREAM (fit=dream). This is a Markov chain Monte Carlo (MCMC) method
with a differential evolution step generator. Like simulated annealing, the MCMC explores the space using a random
walk, always accepting a better point, but sometimes accepting a worse point depending on how much worse it is.
DREAM can be started with a variety of initial populations. The random population (init=random) distributes the
initial points using a uniform distribution across the space of the parameters. Latin hypercubes (init=lhs) improves on
random by making sure that there is one value for each subrange of every variable. The covariance population (init=cov)
selects points from the uncertainty ellipse computed from the derivative at the initial point. This method will fail if the
fitting parameters are highly correlated and the covariance matrix is singular. The epsilon ball population (init=eps)
starts DREAM from a tiny region near the initial point and lets it expand from there. It can be useful to start with an
epsilon ball from the previous best point when DREAM fails to converge using a more diverse initial population.
The Markov chain will take time to converge on a stable population. This burn-in time needs to be specified at the start
of the analysis. After burn-in, DREAM will collect all points visited for N iterations of the algorithm. If the burn time
was long enough, the resulting points can be used to estimate uncertainty on parameters.
A common command line for running DREAM is:
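For example (a sketch; the burn and step counts and the store directory T1, which later file names in this section refer to, are illustrative):

```
$ refl1d model.py --fit=dream --burn=1000 --steps=1000 --init=cov --store=T1
```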
The file T1/model.err contains a table showing for each parameter the mean(std), median and best values, and the
68% and 95% credible intervals. The mean and standard deviation are computed from all the samples in the returned
distribution. These statistics are not robust: if the Markov process has not yet converged, then outliers will significantly
distort the reported values. Standard deviation is reported in compact notation, with the two digits in parentheses
representing uncertainty in the last two digits of the mean. Thus, for example, 24.9(28) is 24.9 ± 2.8. Median is the
middle value in the distribution. Best is the best value ever seen. The 68% and 95% intervals are the shortest intervals
that contain 68% and 95% of the points respectively. In order to report 2 digits of precision on the 95% interval,
approximately 1000000 draws from the distribution are required, or steps = 1000000/(#parameters × #pop). The 68%
interval will require fewer draws, though how many has not yet been determined.
Histogramming the set of points visited gives a picture of the probability density function for each parameter. This
histogram is generated automatically and saved in T1/model-var.png. The histogram range represents the 95% credible
interval, and the shaded region represents the 68% credible interval. The green line shows the highest probability
observed given that the parameter value is restricted to that bin of the histogram. With enough samples, this will
correspond to the maximum likelihood value of the function given that one parameter is restricted to that bin. In
practice, the analysis has converged when the green line follows the general shape of the histogram.
The correlation plots show that the parameters are not uniquely determined from the data. For example, the thickness
of lamellae 3 and 4 are strongly anti-correlated, yielding a 95% CI of about 1 nm for each compared to the bulk Nafion
thickness CI of 0.2 nm. Summing lamellae thickness in the sampled points, we see the overall lamellae thickness has
a CI of about 0.3 nm. The correlation plot is saved in T1/model-corr.png.
To assure ourselves that the uncertainties produced by DREAM do indeed correspond to the underlying uncertainty
in the model, we perform a Monte Carlo forward uncertainty analysis by selecting 50 samples from the computed
posterior distribution, computing the corresponding reflectivity and calculating the normalized residuals. Assuming
that our measurement uncertainties are approximately normally distributed, approximately 68% of the normalized
residuals should be within ±1 of the residual for the best model, and 95% should be within ±2. Note that our best
fit does not capture all the details of the data, and the underlying systematic bias is not included in the uncertainty
estimates.
Plotting the profiles generated from the above sampling method, aligning them such that the cross correlation with the
best profile is maximized, we see that the precise details of the lamellae are uncertain but the total thickness of the
lamellae structure is well determined. Bayesian analysis can also be used to determine relative likelihood of different
number of layers, but we have not yet performed this analysis. This plot is stored in T1/model-errors.png.
The trace plot, T1/model-trace.png, shows the mixing properties of the first fitting parameter. If the Markov process is
well behaved, the trace plot will show a lot of mixing. If it is ill behaved, and each chain is stuck in its own separate
local minimum, then distinct lines will be visible in this plot.
The convergence plot, T1/model-logp.png, shows the log likelihood values for each member of the population. When
the Markov process has converged, this plot will be flat with no distinct lines visible. If it shows a general upward
sweep, then the burn time was not sufficient, and the analysis should be restarted. The ability to continue to burn from
the current population is not yet implemented.
Given sufficient burn time, points in the search space will be visited with probability proportional to the goodness
of fit. It can be difficult to determine the correct amount of burn time in advance. If burn is not long enough, then
the population of log likelihood values will show an upward sweep. Similarly, if steps is insufficient, the likelihood
observed as a function of parameter value will be sparsely sampled, and the maximum likelihood curve will not match
the posterior probability histogram. To correct these issues, the DREAM analysis can be extended using the --resume
option. Assume the previous run completed with Markov chain convergence achieved at step 500. The following
command line will generate an additional 600 steps so that the posterior sample size is 1600, then run an additional
500 steps of burn to remove the initial upward sweep in the log likelihood plot:
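A sketch of such a command line (the exact --resume syntax may differ between versions; T1 is assumed to be the previous store directory):

```
$ refl1d model.py --fit=dream --burn=500 --steps=600 --resume=T1 --store=T1
```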
Just because all the plots are well behaved does not mean that the Markov process has converged on the best result. It is
practically impossible to rule out a deep minimum with a narrow acceptance region in an otherwise unpromising part
of the search space.
In order to assess the DREAM algorithm for suitability for reflectometry fitting we did a number of tests. Given
that the fit surface is multimodal, we need to know that the uncertainty analysis can return multiple modes. Because
the fit problems may also be ill-conditioned, with strong correlations or anti-correlations between some parameters,
the uncertainty analysis needs to be able to correctly indicate that the correlations exist. Simple Metropolis-Hastings
sampling does not work well in these conditions, but DREAM is able to handle them.
You can load the DREAM output population and perform uncertainty analysis operations after the fact:
$ ipython --pylab
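From that session, the saved state might be loaded along these lines (a sketch using the bumps.dream API; the path "T1/model" is an example of a previous store):

```
>> from bumps.dream.state import load_state
>> state = load_state("T1/model")
>> state.mark_outliers()  # ignore chains stuck in local minima
>> state.show()           # summary plots and statistics
```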
You can restrict a variable to a certain range when doing plots. For example, to restrict the third parameter to [0.8-1.0]
and the fifth to [0.2-0.4]:
You can also add derived variables using a function to generate the derived variable. For example, to add a parameter
which is p[0]+p[1] use:
You can generate multiple derived parameters at a time with a function that returns a sequence:
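Hedging on the exact derive_vars signature in bumps.dream.state, the two cases above might look like:

```
>> state.derive_vars(lambda p: p[0] + p[1], labels=["p0+p1"])
>> state.derive_vars(lambda p: (p[0]*p[1], p[0] - p[1]),
..                   labels=["p0*p1", "p0-p1"])
```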
state.show()
The plotting code is somewhat complicated, and matplotlib doesn't have a good way of changing plots interactively.
If you are running directly from the source tree, you can modify the dream plotting libraries as you need for a one-off
plot, then replot the graph:
Be sure to restore the original versions when you are done. If the change is so good that everyone should use it, be sure
to feed it back to the community via https://github.com/reflectometry/refl1d.
As with any parametric modeling technique, you cannot say that the model is correct and has a certain parameter value,
only that the observed data is consistent with the model and the given parameter values. There may be other models
within the parameter search space that are equally consistent, but which were not discovered by Refl1D, particularly if
you are forced to use --init=eps to achieve convergence. This is true even for models which exhibit good convergence:
• the marginal maximum likelihood (the green line) follows the marginal probability density (the blue line)
• the log likelihood function is flat, not sweeping upward
• the individual parameter traces exhibit good mixing
• the marginal probability density is unimodal and roughly normal
• the joint probabilities show no correlation structure
• 𝜒2 ≈ 1
• the residuals plot shows no structure
The following blurb can be used as a description of the analysis method when reporting your results:
Refl1D[1] was used to model the reflectivity data. The sample depth profile is represented as a series of
slabs of varying scattering length density and thickness with gaussian interfaces between them. Freeform
sections of the profile are modeled using monotonic splines. Reflectivity is computed using the Abeles
optical matrix method, with interfacial effects computed by the method of Nevot and Croce or by ap-
proximating the interfaces by a series of thin slabs. Refl1d supports simultaneous refinement of multiple
reflectivity data sets with constraints between the models.
Refl1D uses a Bayesian approach to determine the uncertainty in the model parameters. By representing
the problem as the likelihood of observing the measured reflectivity curve given a particular choice of
parameters, Refl1D can use Markov Chain Monte Carlo (MCMC) methods[2] to draw a random sample
from the joint parameter probability distribution. This sample can then be used to estimate the probability
distribution for each individual parameter.
[1] Kienzle, P. A., Krycka, J. and Patel, N. Refl1D: Interactive depth profile modeler. http://refl1d.readthedocs.org
[2] Vrugt J. A., ter Braak C. J. F., Diks C. G. H., Higdon D., Robinson B. A., and Hyman J. M. Accelerating
Markov chain Monte Carlo simulation by differential evolution with self-adaptive randomized subspace
sampling, Int. J. Nonlin. Sci. Num., 10, 271–288, 2009.
If you are reporting maximum likelihood and credible intervals:
The parameter values reported are those from the model which best fits the data, with uncertainty
determined from the range of parameter values which covers 68% of the sample set. This corresponds to
the 1-𝜎 uncertainty level if the sample set were normally distributed.
If you are reporting mean and standard deviation:
The reported parameter values are computed from the mean and standard deviation of the sample set. This
corresponds to the best fitting normal distribution to the marginal probability distribution for the parameter.
There are caveats to reporting mean and standard deviation. The technique is not robust. If burn-in is insufficient, if
the distribution is multi-modal, or if the distribution has long tails, then the reported mean may correspond to a bad fit,
and the standard deviation can be huge. [We should confirm this by modeling a Cauchy distribution]
The matplotlib package is capable of producing publication quality graphics for your models and fit results, but it
requires you to write scripts to get the control that you need. These scripts can be run from the refl1d application by
first loading the model and the fit results, then accessing their data directly to produce the plots that you need.
The model file (called plot.py in this example) will start with the following:
import sys
import os.path
from bumps.fitproblem import load_problem
from bumps.cli import load_best

model, store = sys.argv[1:3]  # model file and store directory from the command line
problem = load_problem(model)
load_best(problem, os.path.join(store, model[:-3] + ".par"))
chisq = problem.chisq()
print("chisq %g" % chisq)
Assuming your model script is in model.py and you have run a fit with --store=X5, running this file loads model.py and sets the best fit parameters.
To produce plots, you will need access to the data and the theory. This can be complex depending on how many
models you are fitting and how many datasets there are per model. For refl1d.fitproblem.FitProblem models,
the refl1d.experiment.Experiment object is referenced by problem.fitness. For refl1d.fitproblem.MultiFitProblem
models, you need to use problem.models[k].fitness to access the experiment for model k. Profiles and reflectivity
theory are returned from methods in experiment. The refl1d.probe.Probe data for the experiment is referenced by
experiment.probe. This will have attributes for Q, dQ, R, dR, T, dT, and L, dL, as well as methods for plotting the
data. This is not quite so simple: the sample may be non-uniform and composed of multiple samples for the same
probe, and at the same time the probe may be composed of independent measurements kept separate so that you can
fit alignment angle and overall intensity. Magnetism adds another level of complexity, with extra profiles associated
with each sample and separate reflectivities for the different spin states.
How does this work in practice? Consider a simple model such as nifilm-fit from the example directory. We can access
the parts by extending plot.py as follows:
experiment = problem.fitness
z, rho, irho = experiment.smooth_profile(dz=0.2)
# ... insert profile plotting code here ...
QR = experiment.reflectivity()
for p, th in experiment.probe.parts(QR):  # parts of a composite probe
    Q, dQ, R, dR, theory = p.Q, p.dQ, p.R, p.dR, th[1]
    # ... insert reflectivity plotting code here ...
Next we can reload the error sample data from the DREAM MCMC sequence.
The function refl1d.errors.calc_errors() provides details on the data structures for profiles, Q and residuals.
Look at the source in refl1d/errors.py to see how this data is used to produce the error plots with _profiles_overplot,
_profiles_contour, _residuals_overplot and _residuals_contour. The source is available from:
https://github.com/reflectometry/refl1d
Putting the pieces together, here is a skeleton for a specialized plotting script:
import sys
import os.path
import pylab
from bumps.fitproblem import load_problem
from bumps.cli import load_best

model, store = sys.argv[1:3]  # model file and store directory from the command line
problem = load_problem(model)
load_best(problem, os.path.join(store, model[:-3] + ".par"))
print("chisq %s" % problem.chisq_str())
chisq = problem.chisq()

# We are going to assume that we have a simple experiment with only one
# reflectivity profile, and only one dataset associated with the profile.
# The details for more complicated scenarios are in experiment.plot_profile
# and experiment.plot_reflectivity.
experiment = problem.fitness
z, rho, irho = experiment.smooth_profile(dz=0.2)
pylab.figure()
pylab.subplot(211)
pylab.plot(z, rho, label='SLD profile')
pylab.show()
raise Exception() # We are just plotting; don't run the model
For the common problem of generating profile error plots aligned on a particular interface, you can use the simpler
align.py model:
from refl1d.names import *
align_errors(model="", store="", align='auto')
If you are using the command line then you should be able to type the following at the command prompt to generate
the plots:
$ refl1d align.py <model>.py <store> [<align>] [1|2|n]
If you are using the GUI, you will have to set model, store and align directly in align.py each time you run.
Align is either auto for the current behaviour, or it is an interface number. You can align on the center of a layer by
adding 0.5 to the interface number. You can count interfaces from the surface by prefixing with R. For example, 0 is
the substrate interface, R1 is the surface interface, 2.5 is the middle of layer 2 above the substrate.
You can plot the profiles and residuals on one plot by setting plots to 1, on two separate plots by setting plots to 2, or
each curve on its own plot by setting plots to n. Output is saved in <store>/<model>-err#.png.
With the toughest fits, for example freeform models with many control points, parallel tempering (fit=pt) is the most
promising algorithm. This implementation is an extension of DREAM. Whereas DREAM runs with a constant
temperature, T=1, parallel tempering runs with multiple temperatures concurrently. The high temperature points are
able to walk up steep hills in the search space, possibly crossing over into a neighbouring valley. The low temperature
points aggressively seek the nearest local minimum, rejecting any proposed point that is worse than the current one.
Differential evolution helps adapt the steps to the shape of the search space, increasing the chances that the random
step will be a step in the right direction. The current implementation uses a fixed set of temperatures defaulting to
Tmin=0.1 through Tmax=10 in nT=25 steps; future versions should adapt the temperature based on the fitting problem.
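The temperature ladder and the replica swap rule can be sketched in a few lines. Geometric spacing of the temperatures and the standard replica-exchange acceptance criterion are common choices; they are not necessarily what Refl1D implements:

```python
import math

# Temperature ladder from Tmin=0.1 to Tmax=10 in nT=25 steps.
# Geometric spacing is a common choice for parallel tempering.
Tmin, Tmax, nT = 0.1, 10.0, 25
temps = [Tmin * (Tmax / Tmin) ** (i / (nT - 1)) for i in range(nT)]

def swap_prob(E_cold, T_cold, E_hot, T_hot):
    """Standard replica-exchange acceptance probability for swapping
    the states of two chains with energies E and temperatures T."""
    delta = (1.0 / T_cold - 1.0 / T_hot) * (E_cold - E_hot)
    return min(1.0, math.exp(delta))
```

A swap that moves a lower-energy state to a colder chain is always accepted; the reverse move is accepted with reduced probability, which is how good points found at high temperature filter down to T=1.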
Parallel tempering is run like dream, but with optional temperature controls:
Parallel tempering does not yet generate the uncertainty plots provided by DREAM. The state is retained along the
temperature for each point, but the code to generate histograms from points weighted by inverse temperature has not
yet been written.
The GUI version is slower because it frequently updates the graphs showing the best current fit.
Run multiple models overnight, starting one after the last is complete, by creating a batch file (e.g., run.bat) with one
line per model. Append the parameter --batch to the end of the command lines so the program doesn't stop to show
interactive graphs. You can view the fitted results in the GUI afterwards.
There are several other optimizers that are included but aren’t frequently used.
BFGS (fit=newton) is a quasi-Newton optimizer relying on numerical derivatives to find the nearest local minimum.
Because the reflectometry problem often has correlated parameters, the resulting matrices can be ill-conditioned and
the fit isn’t robust.
Particle swarm optimization (fit=ps) is another population based algorithm, but it does not appear to perform well for
the high dimensional problem spaces that frequently occur in reflectivity.
SNOBFIT (fit=snobfit) attempts to construct a locally quadratic model of the entire search space. While promising
because it can begin to offer some guarantees that the search is complete given reasonable assumptions about the fitting
surface, initial trials did not perform well and the algorithm has not yet been tuned to the reflectivity problem.
3.7.9 References
Press, W. H., Flannery, B. P., Teukolsky, S. A., and Vetterling, W. T.: Numerical Recipes in C, Cambridge University Press.
Sahin, I.: Random Lines: A Novel Population Set-Based Evolutionary Global Optimization Algorithm, Lecture Notes in Computer Science, 6621, 97–107, 2011. doi:10.1007/978-3-642-20407-4_9
Vrugt, J. A., ter Braak, C. J. F., Diks, C. G. H., Higdon, D., Robinson, B. A., and Hyman, J. M.: Accelerating Markov chain Monte Carlo simulation by differential evolution with self-adaptive randomized subspace sampling, Int. J. Nonlin. Sci. Num., 10, 271–288, 2009.
Kennedy, J. and Eberhart, R.: Particle Swarm Optimization, Proceedings of IEEE International Conference on Neural Networks, IV, 1942–1948, 1995. doi:10.1109/ICNN.1995.488968
Huyer, W. and Neumaier, A.: Snobfit - Stable Noisy Optimization by Branch and Fit, ACM Trans. Math. Software, 35, Article 9, 2008.
Storn, R.: System Design by Constraint Adaptation and Differential Evolution, Technical Report TR-96-039, International Computer Science Institute, November 1996.
Swendsen, R. H. and Wang, J. S.: Replica Monte Carlo simulation of spin glasses, Physical Review Letters, 57, 2607–2609, 1986.
4 Reference
check
kz
[float[n] | Å⁻¹] Scattering vector 2π sin(θ)/λ. This is ½ Q_z.
depth
[float[m] | Å] thickness of each layer. The thickness of the incident medium and substrate are ignored.
rho, irho
[float[n, k] | 10⁻⁶ Å⁻²] real and imaginary scattering length density for each layer at each kz. Note: absorption
cross section mu = 2 irho/lambda.
sigma
[float[m-1] | Å] interfacial roughness. This is the roughness between a layer and the subsequent layer. There
is no interface associated with the substrate. The sigma array should have at least m-1 entries, though it
may have m with the last entry ignored.
rho_index
[int[m]] index into rho vector for each kz
Slabs are ordered with the surface SLD at index 0 and substrate at index -1, or reversed if kz < 0.
ANSTOData
Platypus — Loader for reduced data from the ANSTO Platypus instrument.
load — Return a probe for ANSTO data.
Platypus
class refl1d.anstodata.Platypus
Bases: ANSTOData
Loader for reduced data from the ANSTO Platypus instrument.
instrument = 'Platypus'
load(filename, **kw)
radiation = 'neutron'
Chebyshev polynomials 𝑇𝑘 form a basis set for functions over [−1, 1]. The truncated interpolating polynomial 𝑃𝑛 is a
weighted sum of Chebyshev polynomials up to degree 𝑛:
$$ f(x) \approx P_n(x) = \sum_{k=0}^{n} c_k T_k(x) $$
The interpolating polynomial exactly matches f(x) at the Chebyshev nodes z_k and is near the optimal polynomial
approximation to f of degree n under the maximum norm. For well behaved functions, the coefficients c_k decrease
rapidly, and furthermore are independent of the degree n of the polynomial.
FreeformCheby models the scattering length density profile of the material within a layer, and ChebyVF models the
volume fraction profile of two materials mixed in the layer.
The models can either be defined directly in terms of the Chebyshev coefficients 𝑐𝑘 with method = ‘direct’, or in terms of
control points (𝑧𝑘 , 𝑓 (𝑧𝑘 )) at the Chebyshev nodes cheby_points() with method = ‘interp’. Bounds on the parameters
are easier to control using ‘interp’, but the function may oscillate wildly outside the bounds. Bounds on the oscillation
are easier to control using ‘direct’, but the shape of the profile is difficult to control.
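The relationship between the direct coefficients c_k and the values at the Chebyshev nodes can be illustrated with a small stand-alone computation. This is plain Python, and the function names are illustrative rather than the refl1d.cheby API:

```python
import math

def cheby_interp_coeffs(f, n):
    """Coefficients c_0..c_n of the degree-n interpolant
    P_n(x) = sum_k c_k T_k(x), built from the values of f at the
    Chebyshev nodes z_k = cos(pi*(k+1/2)/(n+1)) on [-1, 1]."""
    N = n + 1
    nodes = [math.cos(math.pi * (k + 0.5) / N) for k in range(N)]
    fvals = [f(z) for z in nodes]
    coeffs = []
    for j in range(N):
        # Discrete orthogonality of cos(j*theta_k) at theta_k = pi*(k+1/2)/N
        s = sum(fk * math.cos(math.pi * j * (k + 0.5) / N)
                for k, fk in enumerate(fvals))
        coeffs.append((1.0 if j == 0 else 2.0) * s / N)
    return coeffs, nodes

def cheby_eval(coeffs, x):
    """Evaluate sum_k c_k T_k(x) with the Clenshaw recurrence."""
    b1 = b2 = 0.0
    for c in reversed(coeffs[1:]):
        b1, b2 = 2.0 * x * b1 - b2 + c, b1
    return x * b1 - b2 + coeffs[0]

# For a smooth function the coefficients decrease rapidly.
coeffs, nodes = cheby_interp_coeffs(math.exp, 10)
```

The interpolant reproduces f exactly at the nodes, and |c_10| is already negligible for exp, illustrating the rapid coefficient decay noted above.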
class refl1d.cheby.ChebyVF(thickness=0, interface=0, material=None, solvent=None, vf=None,
name='ChebyVF', method='interp')
Bases: Layer
Material in a solvent
Parameters
thickness
[float | Angstrom] the thickness of the solvent layer
interface
[float | Angstrom] the rms roughness of the solvent surface
material
[Material] the material of interest
solvent
[Material] the solvent or vacuum
vf
[[float]] the control points for volume fraction
method = ‘interp’
[string | ‘direct’ or ‘interp’] freeform profile method
method is ‘direct’ if the vf values refer to chebyshev polynomial coefficients or ‘interp’ if vf values refer to
control points located at 𝑧𝑘 .
The control point 𝑘 is located at 𝑧𝑘 ∈ [0, 𝐿] for layer thickness 𝐿, as returned by cheby_points() called with
n=len(vf ) and range=[0, 𝐿].
The materials can either use the scattering length density directly, such as PDMS = SLD(0.063, 0.00006), or they
can use chemical composition and material density, such as PDMS = Material("C2H6OSi", density=0.965).
These parameters combine in the following profile formula:
sld = material.sld * profile + solvent.sld * (1 - profile)
constraints()
Constraints
find(z)
Find the layer at depth z.
Returns layer, start, end
interface = None
property ismagnetic
layer_parameters()
property magnetism
name = None
parameters()
Returns a dictionary of parameters specific to the layer. These will be added to the dictionary containing
interface, thickness and magnetism parameters.
penalty()
Return a penalty value associated with the layer. This should be zero if the parameters are valid, and
increasing as the parameters become more invalid. For example, if total volume fraction exceeds unity,
then the penalty would be the amount by which it exceeds unity, or if z values must be sorted, then penalty
would be the amount by which they are unsorted.
Note that penalties are handled separately from any probability of seeing a combination of layer parameters;
the final solution to the problem should not include any penalized points.
render(probe, slabs)
Use the probe to render the layer into a microslab representation.
thickness = None
to_dict()
Return a dictionary representation of the Slab object
class refl1d.cheby.FreeformCheby(thickness=0, interface=0, rho=(), irho=(), name='Cheby',
method='interp')
Bases: Layer
A freeform section of the sample modeled with Chebyshev polynomials.
sld (rho) and imaginary sld (irho) can be modeled with separate polynomial orders.
constraints()
Constraints
find(z)
Find the layer at depth z.
Returns layer, start, end
interface = None
property ismagnetic
layer_parameters()
property magnetism
name = None
parameters()
Return parameters used to define layer
penalty()
Return a penalty value associated with the layer. This should be zero if the parameters are valid, and
increasing as the parameters become more invalid. For example, if total volume fraction exceeds unity,
then the penalty would be the amount by which it exceeds unity, or if z values must be sorted, then penalty
would be the amount by which they are unsorted.
Note that penalties are handled separately from any probability of seeing a combination of layer parameters;
the final solution to the problem should not include any penalized points.
render(probe, slabs)
Render slabs for use with the given probe
thickness = None
to_dict()
Return a dictionary representation of the Slab object
Inhomogeneous samples
In the presence of samples with short range order on the scale of the in-plane coherence length of the probe, but
long range disorder following some distribution of parameter values, the reflectivity can be computed from a weighted
incoherent sum of the reflectivities for different values of the parameter.
DistributionExperiment allows the model to be computed for a single varying parameter. Multi-parameter dispersion
models are not available.
class refl1d.dist.DistributionExperiment(experiment=None, P=None, distribution=None,
coherent=False)
Bases: ExperimentBase
Compute reflectivity from a non-uniform sample.
P is the target parameter for the model, which takes on the values from distribution in the context of the experi-
ment. The result is the weighted sum of the theory curves after setting P.value to each distribution value. Clearly,
P should not be a fitted parameter, but the remaining experiment parameters can be fitted, as can the parameters
of the distribution.
If coherent is true, then the reflectivity of the mixture is computed from the coherent sum rather than the inco-
herent sum.
See Weights for a description of how to set up the distribution.
format_parameters()
interpolation = 0
is_reset()
Returns True if a model reset was triggered.
magnetic_slabs()
magnetic_step_profile()
property name
nllf()
Return the -log(P(data|model)).
Using the assumption that data uncertainty is uncorrelated, with measurements normally distributed with
mean R and variance dR**2, this is just sum( resid**2/2 + log(2*pi*dR**2)/2 ).
The current version drops the constant term, sum(log(2*pi*dR**2)/2).
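The formula in nllf can be checked with a few lines of stand-alone Python. This is a sketch of the same expression, not the refl1d implementation:

```python
import math

def nllf(R, dR, theory, drop_const=True):
    """-log P(data|model) for independent, normally distributed
    measurement errors: sum(resid**2/2 + log(2*pi*dR**2)/2), where
    resid = (R - theory)/dR. The constant term is dropped by default,
    matching the behavior described above."""
    resid = [(r - t) / e for r, t, e in zip(R, theory, dR)]
    total = sum(z * z for z in resid) / 2.0
    if not drop_const:
        total += sum(math.log(2.0 * math.pi * e * e) for e in dR) / 2.0
    return total
```

With the constant dropped, a perfect fit gives 0 and each one-sigma residual adds 1/2, so nllf is just chisq/2 in the conventional sense.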
numpoints()
parameters()
plot_profile(plot_shift=0.0)
plot_weights()
reflectivity(resolution=True, interpolation=0)
residuals()
restore_data()
Restore original data after resynthesis.
resynth_data()
Resynthesize data with noise from the uncertainty estimates.
save(basename)
save_json(basename)
Save the experiment as a json file
save_profile(basename)
save_refl(basename)
simulate_data(noise=2.0)
Simulate a random data set for the model.
This sets R and dR according to the noise level given.
Parameters:
noise: float or array or None | %
dR/R uncertainty as a percentage. If noise is set to None, then use dR from the data if present, otherwise
default to 2%.
slabs()
smooth_profile(dz=1)
Compute a density profile for the material
step_profile()
Compute a scattering length density profile
to_dict()
update()
Called when any parameter in the model is changed.
This signals that the entire model needs to be recalculated.
update_composition()
When the model composition has changed, we need to lookup the scattering factors for the new model. This
is only needed when an existing chemical formula is modified; new and deleted formulas will be handled
automatically.
write_data(filename, **kw)
Save simulated data to a file
class refl1d.dist.Weights(edges=None, cdf=None, args=(), loc=None, scale=None, truncated=True)
Bases: object
Parameterized distribution for use in DistributionExperiment.
To support non-uniform experiments, we must bin the possible values for the parameter and compute the theory
function for one parameter value per bin. The weighted sum of the resulting theory functions is the value that
we compare to the data.
Performing this analysis requires a cumulative density function which can return the integrated value of the
probability density from -inf to x. The total density in each bin is then the difference between the cumulative
densities at the edges. If the distribution is wider than the range, then the tails need to be truncated and the bins
reweighted to a total density of 1, or the tail density can be added to the first and last bins. Weights of zero are
not returned. Note that if the tails are truncated, this may result in no weights being returned.
The vector edges contains the bin edges for the distribution. The function cdf returns the cumulative density
function at the edges. The cdf function must implement the scipy.stats interface, with function signature f(x, a1,
a2, ..., loc=0, scale=1). The list args defines the arguments a1, a2, etc. The underlying parameters are available
as args[i]. Similarly, loc and scale define the distribution center and width. Use truncated=False if you want the
distribution tails to be included in the weights.
SciPy distribution D is used by specifying cdf=scipy.stats.D.cdf. Useful distributions include:
parameters()
to_dict()
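The binning scheme described for Weights can be sketched without scipy, using a normal cdf built from math.erf. The function names here are illustrative, not the refl1d API:

```python
import math

def norm_cdf(x, loc=0.0, scale=1.0):
    """Cumulative density of the normal distribution; plays the role
    of scipy.stats.norm.cdf in this stdlib-only sketch."""
    return 0.5 * (1.0 + math.erf((x - loc) / (scale * math.sqrt(2.0))))

def bin_weights(edges, cdf, truncated=True, **kw):
    """Integrated probability in each bin [edges[i], edges[i+1]].

    The bin density is the difference of the cumulative density at the
    bin edges. If truncated, the tails are cut off and the weights are
    renormalized to a total density of 1; otherwise the tail density is
    added to the first and last bins.
    """
    c = [cdf(e, **kw) for e in edges]
    w = [b - a for a, b in zip(c[:-1], c[1:])]
    if truncated:
        total = sum(w)
        if total > 0:
            w = [v / total for v in w]
    else:
        w[0] += c[0]           # left tail
        w[-1] += 1.0 - c[-1]   # right tail
    return w

# Weight a parameter centered at 10.0 with width 1.0 over four bins.
edges = [8.0, 9.0, 10.0, 11.0, 12.0]
weights = bin_weights(edges, norm_cdf, loc=10.0, scale=1.0)
```

The weighted sum of theory curves computed at the bin centers, with these weights, is what DistributionExperiment compares to the data.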
reload_errors — Reload the MCMC state and compute the model confidence intervals.
run_errors — Command line tool for generating error plots from models.
calc_errors — Align the sample profiles and compute the residual difference from the measured reflectivity for a set of points.
align_profiles — Align profiles for each sample.
show_errors — Plot the aligned profiles and the distribution of the residuals for profiles and residuals returned from calc_errors.
show_profiles
show_residuals
The loaded error data is a sample from the fit space according to the fit parameter uncertainty. This is a subset
of the samples returned by the DREAM MCMC sampling process.
model is the name of the model python file
store is the name of the store directory containing the dream results
nshown and random are as for calc_errors_from_state().
Returns errs for show_errors().
refl1d.errors.run_errors(**kw)
Command line tool for generating error plots from models.
Type the following to regenerate the profile contour plots:
$ refl1d align <model>.py <store> [<align>] [0|1|2|n]
Align is either auto for the current behaviour, or it is an interface number. You can align on the center of a layer by
adding 0.5 to the interface number. You can count interfaces from the surface by prefixing with R. For example,
0 is the substrate interface, R1 is the surface interface, 2.5 is the middle of layer 2 above the substrate.
You can plot the profiles and residuals on one plot by setting plots to 1, on two separate plots by setting plots to
2, or each curve on its own plot by setting plots to n. Plots are saved in <store>/<model>-err#.png. If plots is 0,
then no plots are created.
Additional parameters include:
nshown, random :
see bumps.errplot.calc_errors_from_state()
contours, npoints, plots, save :
see show_errors()
refl1d.errors.show_errors(errors, contours=(68, 95), npoints=200, align='auto', plots=1, save=None)
Plot the aligned profiles and the distribution of the residuals for profiles and residuals returned from calc_errors.
contours can be a list of percentiles or []. If percentiles are given, then show uncertainty using a contour plot
with the given levels, otherwise just overplot sample lines. contours defaults to [68, 95, 100].
npoints is the number of points to use when generating the profile contour. Since the z values for the various
lines do not correspond, the contour generator interpolates the entire profile range with linear spacing using this
number of points.
align is the interface number plus fractional distance within the layer following the interface. For example, use
0 for the substrate interface, use -1 for the surface interface, or use 2.5 for the center of the second slab above the
substrate. If align=’auto’ then choose an offset that minimizes the cross-correlation between the first profile and
the current profile.
plots is the number of plots to use (1, 2, or ‘n’).
save is the basename of the plot to save. This should usually be “<store>/<model>”. The program will add
‘-err#.png’ where ‘#’ is the number of the plot.
refl1d.errors.show_profiles(errors, align, contours, npoints)
refl1d.errors.show_residuals(errors, contours)
Experiment definition
An experiment combines the sample definition with a measurement probe to create a fittable reflectometry model.
class refl1d.experiment.Experiment(sample=None, probe=None, name=None, roughness_limit=0,
dz=None, dA=None, step_interfaces=None, smoothness=None,
interpolation=0)
Bases: ExperimentBase
Theory calculator. Associates a sample with the data from a measurement probe so that the model can be fitted.
The model calculator is specific to the particular measurement technique that was applied to the model.
Measurement properties:
probe is the measuring probe
Sample properties:
sample — the model sample
step_interfaces — use slabs to approximate Gaussian interfaces
roughness_limit — limit the roughness based on layer thickness
dz — minimum step size for computed profile steps in Angstroms
dA — discretization condition for computed profiles
If step_interfaces is True, then approximate the interface using microslabs with step size dz. The microslabs
extend throughout the whole profile, both the interfaces and the bulk; a value for dA should be specified to save
computation time. If False, then use the Nevot-Croce analytic expression for the interface between slabs.
The roughness_limit value should be reasonably large (e.g., 2.5 or above) to make sure that the Nevot-Croce
reflectivity calculation matches the calculation of the displayed profile. Use a value of 0 if you want no limits
on the roughness, but be aware that the displayed profile may not reflect the actual scattering densities in the
material.
The dz step size sets the size of the slabs for non-uniform profiles. Using the relation d = 2 pi / Q_max, we use
a default step size of d/20 rounded to two digits, with 5 Å as the maximum default. For simultaneous fitting you
may want to set dz explicitly to round(pi/Q_max/10, 1) so that all models use the same step size.
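The default step size rule can be written out directly; note that d/20 = 2π/Q_max/20 is the same quantity as π/Q_max/10. This snippet is just the arithmetic described above, not the refl1d code:

```python
import math

def default_dz(Q_max, max_dz=5.0):
    """Default profile step size: d/20 for d = 2*pi/Q_max,
    i.e. pi/Q_max/10, rounded to one decimal (about two significant
    digits for typical Q_max) and capped at 5 Angstroms."""
    return min(round(math.pi / Q_max / 10, 1), max_dz)

dz = default_dz(0.3)  # typical neutron Q_max of 0.3 inv Angstroms
```

For simultaneous fitting, computing this once from the largest Q_max and passing the same dz to every Experiment keeps the models on a common grid.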
The dA condition measures the uncertainty in scattering materials allowed when combining the steps of a non-
uniform profile into slabs. Specifically, the area of the box containing the minimum and the maximum of the
non-uniform profile within the slab will be smaller than dA. A dA of 10 gives coarse slabs. If dA is not provided
then each profile step forms its own slab. The dA condition will also apply to the slab approximation to the
interfaces.
interpolation indicates the number of points to plot in between existing points.
smoothness DEPRECATED This parameter is not used.
amplitude(resolution=False, interpolation=0)
Calculate reflectivity amplitude at the probe points.
format_parameters()
interpolation = 0
is_reset()
Returns True if a model reset was triggered.
property ismagnetic
True if experiment contains magnetic materials
magnetic_slabs()
magnetic_smooth_profile(dz=0.1)
Return the nuclear and magnetic scattering potential for the sample.
magnetic_step_profile()
Return the nuclear and magnetic scattering potential for the sample.
property name
nllf()
Return the -log(P(data|model)).
Using the assumption that data uncertainty is uncorrelated, with measurements normally distributed with
mean R and variance dR**2, this is just sum( resid**2/2 + log(2*pi*dR**2)/2 ).
The current version drops the constant term, sum(log(2*pi*dR**2)/2).
numpoints()
parameters()
Fittable parameters to sample and probe
penalty()
plot_profile(plot_shift=None)
profile_shift = 0
reflectivity(resolution=True, interpolation=0)
Calculate predicted reflectivity.
If resolution is true include resolution effects.
residuals()
restore_data()
Restore original data after resynthesis.
resynth_data()
Resynthesize data with noise from the uncertainty estimates.
save(basename)
save_json(basename)
Save the experiment as a json file
save_profile(basename)
save_refl(basename)
save_staj(basename)
simulate_data(noise=2.0)
Simulate a random data set for the model.
This sets R and dR according to the noise level given.
Parameters:
noise: float or array or None | %
dR/R uncertainty as a percentage. If noise is set to None, then use dR from the data if present, otherwise
default to 2%.
slabs()
Return the slab thickness, roughness, rho, irho for the rendered model.
smooth_profile(dz=0.1)
Return the scattering potential for the sample.
If dz is not given, use dz = 0.1 A.
step_profile()
Return the step scattering potential for the sample, ignoring interfaces.
to_dict()
update()
Called when any parameter in the model is changed.
This signals that the entire model needs to be recalculated.
update_composition()
When the model composition has changed, we need to lookup the scattering factors for the new model. This
is only needed when an existing chemical formula is modified; new and deleted formulas will be handled
automatically.
write_data(filename, **kw)
Save simulated data to a file
class refl1d.experiment.ExperimentBase
Bases: object
format_parameters()
interpolation = 0
is_reset()
Returns True if a model reset was triggered.
magnetic_slabs()
magnetic_step_profile()
property name
nllf()
Return the -log(P(data|model)).
Using the assumption that data uncertainty is uncorrelated, with measurements normally distributed with
mean R and variance dR**2, this is just sum( resid**2/2 + log(2*pi*dR**2)/2 ).
The current version drops the constant term, sum(log(2*pi*dR**2)/2).
numpoints()
parameters()
plot_profile(plot_shift=0.0)
reflectivity(resolution=True, interpolation=0)
residuals()
restore_data()
Restore original data after resynthesis.
resynth_data()
Resynthesize data with noise from the uncertainty estimates.
save(basename)
save_json(basename)
Save the experiment as a json file
save_profile(basename)
save_refl(basename)
simulate_data(noise=2.0)
Simulate a random data set for the model.
This sets R and dR according to the noise level given.
Parameters:
noise: float or array or None | %
dR/R uncertainty as a percentage. If noise is set to None, then use dR from the data if present, otherwise
default to 2%.
slabs()
smooth_profile(dz=0.1)
step_profile()
to_dict()
update()
Called when any parameter in the model is changed.
This signals that the entire model needs to be recalculated.
update_composition()
When the model composition has changed, we need to lookup the scattering factors for the new model. This
is only needed when an existing chemical formula is modified; new and deleted formulas will be handled
automatically.
write_data(filename, **kw)
Save simulated data to a file
class refl1d.experiment.MixedExperiment(samples=None, ratio=None, probe=None, name=None,
coherent=False, interpolation=0, **kw)
Bases: ExperimentBase
Support composite sample reflectivity measurements.
Sometimes the sample you are measuring is not uniform. For example, you may have one portion of your polymer
brush sample where the brushes are close packed and able to stay upright, whereas a different section of the
sample has the brushes lying flat. Constructing two sample models, one with brushes upright and one with
brushes flat, and adding the reflectivity incoherently, you can then fit the ratio of upright to flat.
samples — the layer stacks making up the models
ratio — a list of parameters, such as [3, 1] for a 3:1 ratio
probe — the measurement to be fitted or simulated
coherent is True if the length scale of the domains is less than the coherence length of the neutron, or False otherwise.
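The two summation modes differ in where the weighted average is taken: over reflectivities |r|² for the incoherent sum, or over amplitudes r before squaring for the coherent sum. A toy illustration at a single Q point, with made-up amplitudes (the actual MixedExperiment arithmetic may normalize differently):

```python
# Normalized weights from a 3:1 ratio.
ratio = [3.0, 1.0]
total = sum(ratio)
w = [x / total for x in ratio]

# Hypothetical reflection amplitudes for the two sample models
# at a single Q point.
r = [0.2 + 0.1j, -0.1 + 0.05j]

# Incoherent sum: weighted average of the reflectivities |r|^2.
R_incoherent = sum(wi * abs(ri) ** 2 for wi, ri in zip(w, r))

# Coherent sum: weighted average of the amplitudes, then squared.
R_coherent = abs(sum(wi * ri for wi, ri in zip(w, r))) ** 2
```

The coherent sum can never exceed the incoherent sum, and interference between the domains can suppress it well below.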
Statistics such as the cost functions for the individual profiles can be accessed from the underlying experiments
using composite.parts[i] for the various samples.
amplitude(resolution=False)
format_parameters()
interpolation = 0
is_reset()
Returns True if a model reset was triggered.
magnetic_slabs()
magnetic_step_profile()
property name
nllf()
Return the -log(P(data|model)).
Using the assumption that data uncertainty is uncorrelated, with measurements normally distributed with
mean R and variance dR**2, this is just sum( resid**2/2 + log(2*pi*dR**2)/2 ).
The current version drops the constant term, sum(log(2*pi*dR**2)/2).
numpoints()
parameters()
penalty()
plot_profile(plot_shift=None)
reflectivity(resolution=True, interpolation=0)
Calculate predicted reflectivity.
This will be the weighted sum of the reflectivity from the individual systems. If coherent is set, then the
coherent sum will be used, otherwise the incoherent sum will be used.
If resolution is true include resolution effects.
interpolation is the number of theory points to show between data points.
residuals()
restore_data()
Restore original data after resynthesis.
resynth_data()
Resynthesize data with noise from the uncertainty estimates.
save(basename)
save_json(basename)
Save the experiment as a json file
save_profile(basename)
save_refl(basename)
save_staj(basename)
simulate_data(noise=2.0)
Simulate a random data set for the model.
This sets R and dR according to the noise level given.
Parameters:
noise: float or array or None | %
dR/R uncertainty as a percentage. If noise is set to None, then use dR from the data if present, otherwise
default to 2%.
slabs()
smooth_profile(dz=0.1)
step_profile()
to_dict()
update()
Called when any parameter in the model is changed.
This signals that the entire model needs to be recalculated.
update_composition()
When the model composition has changed, we need to lookup the scattering factors for the new model. This
is only needed when an existing chemical formula is modified; new and deleted formulas will be handled
automatically.
write_data(filename, **kw)
Save simulated data to a file
refl1d.experiment.nice(v, digits=2)
Fix v to a value with a given number of digits of precision
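A possible implementation of this significant-digit rounding (a sketch; the actual refl1d.experiment.nice may differ in edge cases):

```python
import math

def nice(v, digits=2):
    """Round v to the given number of significant digits."""
    if v == 0:
        return 0.0
    sign = -1.0 if v < 0 else 1.0
    av = abs(v)
    # Scale so that av has `digits` digits before the decimal point,
    # round to an integer, then scale back.
    scale = 10.0 ** (math.floor(math.log10(av)) - digits + 1)
    return sign * round(av / scale) * scale
```

Such rounding is useful for turning a computed quantity like the default dz into a human-friendly value.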
refl1d.experiment.plot_sample(sample, instrument=None, roughness_limit=0)
Quick plot of a reflectivity sample and the corresponding reflectivity.
data_view
model_view
new_model
calc_errors — Align the sample profiles and compute the residual difference from the measured reflectivity for a set of points.
show_errors — Plot the aligned profiles and the distribution of the residuals for profiles and residuals returned from calc_errors.
residuals
Array of (theory-data)/uncertainty for each data point in the measurement. There will be one array
returned per error sample.
refl1d.fitplugin.data_view()
refl1d.fitplugin.model_view()
refl1d.fitplugin.new_model()
magnetic = True
parameters()
set_anchor(stack, index)
set_layer_name(name)
Update the names of the magnetic parameters with the name of the layer if it has not already been set. This
is necessary since we don’t know the layer name until after we have constructed the magnetism object.
to_dict()
# imports added for completeness, assuming the refl1d package layout
from refl1d.material import SLD
from refl1d.flayer import FunctionalProfile

L1 = SLD('L1', rho=2.07)
L3 = SLD('L3', rho=4)

def linear(z, rhoL, rhoR):
    # linear ramp from rhoL at the left edge to rhoR at the right edge
    rho = z * (rhoR-rhoL)/(z[-1]-z[0]) + rhoL
    return rho

profile = FunctionalProfile(100, 0, profile=linear,
                            rhoL=L1.rho, rhoR=L3.rho)
sample = L1 | profile | L3
constraints()
Constraints
find(z)
Find the layer at depth z.
Returns layer, start, end
interface = None
property ismagnetic
layer_parameters()
property magnetism
name = None
parameters()
Returns a dictionary of parameters specific to the layer. These will be added to the dictionary containing
interface, thickness and magnetism parameters.
penalty()
Return a penalty value associated with the layer. This should be zero if the parameters are valid, and
increasing as the parameters become more invalid. For example, if total volume fraction exceeds unity,
then the penalty would be the amount by which it exceeds unity, or if z values must be sorted, then penalty
would be the amount by which they are unsorted.
Note that penalties are handled separately from any probability of seeing a combination of layer parameters;
the final solution to the problem should not include any penalized points.
render(probe, slabs)
Use the probe to render the layer into a microslab representation.
thickness = None
to_dict()
Return a dictionary representation of the Slab object
constraints()
Constraints
find(z)
Find the layer at depth z.
Returns layer, start, end
interface = None
property ismagnetic
layer_parameters()
property magnetism
name = None
parameters()
Returns a dictionary of parameters specific to the layer. These will be added to the dictionary containing
interface, thickness and magnetism parameters.
penalty()
Return a penalty value associated with the layer. This should be zero if the parameters are valid, and
increasing as the parameters become more invalid. For example, if total volume fraction exceeds unity,
then the penalty would be the amount by which it exceeds unity, or if z values must be sorted, then penalty
would be the amount by which they are unsorted.
Note that penalties are handled separately from any probability of seeing a combination of layer parameters;
the final solution to the problem should not include any penalized points.
render(probe, slabs)
Use the probe to render the layer into a microslab representation.
property thickness
to_dict()
Return a dictionary representation of the Slab object
class refl1d.freeform.FreeLayer(thickness=0, left=None, right=None, rho=(), irho=(), rhoz=(), irhoz=(),
name='Freeform')
Bases: Layer
A freeform section of the sample modeled with B-splines.
sld (rho) and imaginary sld (irho) can be modeled with a separate number of control points. The control points
can be equally spaced in the layers unless rhoz or irhoz are specified. If the z values are given, they must be in
the range [0, 1]. One control point is anchored at either end, so there are two fewer z values than controls if z
values are given.
Layers have a slope of zero at the ends, so they automatically blend with slabs.
constraints()
Constraints
find(z)
Find the layer at depth z.
Returns layer, start, end
interface = None
property ismagnetic
layer_parameters()
property magnetism
name = None
parameters()
Returns a dictionary of parameters specific to the layer. These will be added to the dictionary containing
interface, thickness and magnetism parameters.
penalty()
Return a penalty value associated with the layer. This should be zero if the parameters are valid, and
increasing as the parameters become more invalid. For example, if total volume fraction exceeds unity,
then the penalty would be the amount by which it exceeds unity, or if z values must be sorted, then penalty
would be the amount by which they are unsorted.
Note that penalties are handled separately from any probability of seeing a combination of layer parameters;
the final solution to the problem should not include any penalized points.
render(probe, slabs)
Use the probe to render the layer into a microslab representation.
thickness = None
to_dict()
Return a dictionary representation of the Slab object
class refl1d.freeform.FreeformInterface01(thickness=0, interface=0, below=None, above=None,
z=None, vf=None, name='Interface')
Bases: Layer
A freeform section of the sample modeled with B-splines.
sld (rho) and imaginary sld (irho) can be modeled with a separate number of control points. The control points
can be equally spaced in the layers unless rhoz or irhoz are specified. If the z values are given, they must be in
the range [0, 1]. One control point is anchored at either end, so there are two fewer z values than controls if z
values are given.
Layers have a slope of zero at the ends, so they automatically blend with slabs.
constraints()
Constraints
find(z)
Find the layer at depth z.
Returns layer, start, end
interface = None
property ismagnetic
layer_parameters()
property magnetism
name = None
parameters()
Returns a dictionary of parameters specific to the layer. These will be added to the dictionary containing
interface, thickness and magnetism parameters.
penalty()
Return a penalty value associated with the layer. This should be zero if the parameters are valid, and
increasing as the parameters become more invalid. For example, if total volume fraction exceeds unity,
then the penalty would be the amount by which it exceeds unity, or if z values must be sorted, then penalty
would be the amount by which they are unsorted.
Note that penalties are handled separately from any probability of seeing a combination of layer parameters;
the final solution to the problem should not include any penalized points.
render(probe, slabs)
Use the probe to render the layer into a microslab representation.
thickness = None
to_dict()
Return a dictionary representation of the Slab object
reflectivity(Q)
Compute the Fresnel reflectivity at the given Q/wavelength.
refl1d.fresnel.test()
Here, distance is the distance to the valid region of the search space so that any fitter that gets lost in a penalty region can
more quickly return to the valid region. Any penalty value above FIT_REJECT_PENALTY will suppress the evaluation
of the model at that point during the fit.
Consider a model with layers (Si | Au | FeNi | air) and the constraint that d_Au + d_FeNi < 200 Å. The constraints
function would be written something like:
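A minimal sketch of such a constraints function (the names here are hypothetical; in a real model script the thicknesses would come from the layer parameters, e.g. Au.thickness.value):

```python
FIT_REJECT_PENALTY = 1e6  # the documented default

def constraints(d_Au, d_FeNi, limit=200.0):
    """Zero inside the valid region; otherwise a large penalty that
    grows with the squared distance past the boundary."""
    excess = d_Au + d_FeNi - limit
    if excess <= 0:
        return 0.0
    return FIT_REJECT_PENALTY + excess**2
```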
Then, if the fit algorithm proposes a value such as Au=125, FeNi=90, the excess will be 15, and the penalty will be
FIT_REJECT_PENALTY+225.
You can use penalties less than FIT_REJECT_PENALTY, but these should correspond to the negative log likelihood
of seeing that constraint value within the model in order for the MCMC uncertainty analysis to work correctly.
FIT_REJECT_PENALTY is set to 1e6, which should be high enough that it doesn’t perturb the fit.
refl1d.garefl.load(modelfile, probes=None)
Load a garefl model file as an experiment.
modelfile is a model.so file created from setup.c.
probes is a list of datasets to fit to the models in the model file, or None if the model file provides its own data.
refl1d.instrument (this module) defines two instrument types: Monochromatic and Pulsed. These represent
generic scanning and time of flight instruments, respectively.
To perform a simulation or load a data set, a measurement geometry must be defined. In the following example, we
set up the geometry for a pretend instrument SP:2. The complete geometry needs to include information to calculate
wavelength resolution (wavelength and wavelength dispersion) as well as angular resolution (slit distances and
openings, and perhaps sample size and sample warp). In this case, we are using a scanning monochromatic instrument
with slits of 0.1 mm below 0.5° and opening slits above 0.5° starting at 0.2 mm. The monochromatic instrument
assumes a fixed Δθ/θ while opening.
This instrument can then be used to read in a previously measured data set, or to generate a measurement probe for
use in modeling or simulation:
All instrument parameters can be specified when constructing the probe, replacing the defaults that are associated with
the instrument. For example, to include sample broadening effects in the resolution:
The string representation of the geometry prints a multi-line description of the default instrument configuration:
>>> print(geometry)
== Instrument SP:2 ==
radiation = neutron at 5.0042 Angstrom with 0.9% resolution
slit distances = 2086 mm and 230 mm
fixed region below 0.5 and above 90 degrees
slit openings at Tlo are 0.2 mm
sample width = 1e+10 mm
sample broadening = 0 degrees
Specific instruments can be defined for each facility. This saves users from having to remember the details of the
instrument geometry.
For example, the above SP:2 instrument could be defined as follows:
This definition can then be used to define the measurement geometry. We have added a load method which knows
about the facility file format (in this case, three column ASCII data Q, R, dR) so that we can load a datafile in a couple
of lines of code:
>>> print(SP2.defaults())
== Instrument class SP:2 ==
radiation = neutron at 5.0042 Angstrom with 0.9% resolution
slit distances = 2086 mm and 230 mm
Graphical user interfaces follow different usage patterns from scripts. Here the emphasis will be on selecting a data set
to process, displaying its default metadata and allowing the user to override it.
File loading should follow the pattern established in reflectometry reduction, with an extension registry and a fallback
scheme whereby files can be checked in a predefined order. If the file cannot be loaded, then the next loader is tried. This
should be extended with the concept of a magic signature such as those used by graphics and sound file applications:
read the first block and run it through the signature check before trying to load it. For unrecognized extensions, all
loaders can be tried.
The file loader should return an instrument instance with metadata initialized from the file header. This metadata can be
displayed to the user along with a plot of the data and the resolution. When metadata values are changed, the resolution
can be recomputed and the display updated. When the data set is accepted, the final resolution calculation can be
performed.
class refl1d.instrument.Monochromatic(**kw)
Bases: object
Instrument representation for scanning reflectometers.
Parameters
instrument
[string] name of the instrument
radiation
[string | xray or neutron] source radiation type
d_s1, d_s2
[float | mm] distance from sample to pre-sample slits 1 and 2; post-sample slits are ignored
wavelength
[float | Å] wavelength of the instrument
dLoL
[float] constant relative wavelength dispersion; wavelength range and dispersion together
determine the bins
slits
[float OR (float, float) | mm] fixed slits
slits_at_Tlo
[float OR (float, float) | mm] slit 1 and slit 2 openings at Tlo; this can be a scalar if both slits
are open by the same amount, otherwise it is a pair (s1, s2).
slits_at_Qlo
[float OR (float, float) | mm] equivalent to slits_at_Tlo, for instruments that are controlled by
Q rather than theta
Tlo, Thi
[float | ∘ ] range of opening slits, or inf if slits are fixed.
Qlo, Qhi
[float | Å-1 ] range of opening slits when instrument is controlled by Q.
slits_below, slits_above
[float OR (float, float) | mm] slit 1 and slit 2 openings below Tlo and above Thi; again, these
can be scalar if slit 1 and slit 2 are the same, otherwise they are each a pair (s1, s2). Below
and above default to the values of the slits at Tlo and Thi respectively.
sample_width
[float | mm] width of sample; at low angle with tiny samples, stray neutrons miss the sample
and are not reflected onto the detector, so the sample itself acts as a slit, therefore the width
of the sample may be needed to compute the resolution correctly
sample_broadening
[float | ∘ FWHM] amount of angular divergence (+) or focusing (-) introduced by the sample;
this is caused by sample warp, and may be read off of the rocking curve by subtracting
(s1+s2)/2/(d_s1-d_s2) from the FWHM width of the rocking curve
Thi = 90
Tlo = 90
calc_dT(**kw)
Compute the angular divergence for given slits and angles
Parameters
T OR Q
[[float] | ∘ OR Å-1 ] measurement angles
slits
[float OR (float, float) | mm] total slit opening from edge to edge, not beam center to edge
d_s1, d_s2
[float | mm] distance from sample to slit 1 and slit 2
sample_width
[float | mm] size of sample
sample_broadening
[float | ∘ FWHM] resolution changes from sample warp
Returns
dT
[[float] | ∘ FWHM] angular divergence
sample_broadening can be estimated from W, the full width at half maximum of a rocking curve measured
in degrees:
sample_broadening = W - degrees( 0.5*(s1+s2) / (d1-d2))
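As a runnable form of the estimate above (a hypothetical helper; refl1d performs this calculation internally):

```python
import math

def sample_broadening(W, s1, s2, d1, d2):
    """Estimate sample broadening (degrees FWHM) from a rocking curve.

    W: rocking-curve FWHM in degrees; s1, s2: slit openings in mm;
    d1, d2: slit distances from the sample in mm.
    """
    return W - math.degrees(0.5 * (s1 + s2) / (d1 - d2))
```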
calc_slits(**kw)
Determines slit openings from measurement pattern.
If slits are fixed simply return the same slits for every angle, otherwise use an opening range [Tlo, Thi] and
the value of the slits at the start of the opening to define the slits. Slits below Tlo and above Thi can be
specified separately.
T OR Q
incident angle or Q
Tlo, Thi
angle range over which slits are opening
slits_at_Tlo
openings at the start of the range, or fixed opening
slits_below, slits_above
openings below and above the range
Use fixed_slits if available, otherwise use opening slits.
dLoL = None
d_s1 = None
d_s2 = None
classmethod defaults()
Return default instrument properties as a printable string.
fixed_slits = None
instrument = 'monochromatic'
Returns
probe
[Probe] Measurement probe with complete resolution information. The probe will not have
any data.
If both Q and T are specified, then Q takes precedence.
You can override instrument parameters using key=value. In particular, settings for slits_at_Tlo, Tlo, Thi,
slits_below, and slits_above are used to define the angular divergence.
radiation = 'unknown'
resolution(**kw)
Calculate resolution at each angle.
Return
T, dT
[[float] | ∘ ] Angles and angular divergence.
L, dL
[[float] | Å] Wavelengths and wavelength dispersion.
sample_broadening = 0
sample_width = 10000000000.0
slits_above = None
slits_at_Tlo = None
slits_below = None
wavelength = None
class refl1d.instrument.Pulsed(**kw)
Bases: object
Instrument representation for pulsed reflectometers.
Parameters
instrument
[string] name of the instrument
radiation
[string | xray, neutron] source radiation type
TOF_range
[(float, float)] usable range of times for TOF data
T
[float | ∘ ] sample angle
d_s1, d_s2
[float | mm] distance from sample to pre-sample slits 1 and 2; post-sample slits are ignored
wavelength
[(float, float) | Å] wavelength range for the measurement
dLoL
[float] constant relative wavelength dispersion; wavelength range and dispersion together
determine the bins
slits
[float OR (float, float) | mm] fixed slits
slits_at_Tlo
[float OR (float, float) | mm] slit 1 and slit 2 openings at Tlo; this can be a scalar if both slits
are open by the same amount, otherwise it is a pair (s1, s2).
Tlo, Thi
[float | ∘ ] range of opening slits, or inf if slits are fixed.
slits_below, slits_above
[float OR (float, float) | mm] slit 1 and slit 2 openings below Tlo and above Thi; again, these
can be scalar if slit 1 and slit 2 are the same, otherwise they are each a pair (s1, s2). Below
and above default to the values of the slits at Tlo and Thi respectively.
sample_width
[float | mm] width of sample; at low angle with tiny samples, stray neutrons miss the sample
and are not reflected onto the detector, so the sample itself acts as a slit, therefore the width
of the sample may be needed to compute the resolution correctly
sample_broadening
[float | ∘ FWHM] amount of angular divergence (+) or focusing (-) introduced by the sample;
this is caused by sample warp, and may be read off of the rocking curve by subtracting
0.5*(s1+s2)/(d_s1-d_s2) from the FWHM width of the rocking curve
T = None
Thi = 90
Tlo = 90
calc_slits(**kw)
Determines slit openings from measurement pattern.
If slits are fixed simply return the same slits for every angle, otherwise use an opening range [Tlo, Thi] and
the value of the slits at the start of the opening to define the slits. Slits below Tlo and above Thi can be
specified separately.
T
incident angle
Tlo, Thi
angle range over which slits are opening
slits_at_Tlo
openings at the start of the range, or fixed opening
slits_below, slits_above
openings below and above the range
Use fixed_slits if available, otherwise use opening slits.
dLoL = None
d_s1 = None
d_s2 = None
classmethod defaults()
Return default instrument properties as a printable string.
fixed_slits = None
instrument = 'pulsed'
sample_width = 10000000000.0
slits = None
slits_above = None
slits_at_Tlo = None
slits_below = None
wavelength = None
parameters()
set_layer_name(name)
Update the names of the magnetic parameters with the name of the layer if it has not already been set. This
is necessary since we don’t know the layer name until after we have constructed the magnetism object.
to_dict()
parameters()
profile(Pz, thickness)
set_layer_name(name)
Update the names of the magnetic parameters with the name of the layer if it has not already been set. This
is necessary since we don’t know the layer name until after we have constructed the magnetism object.
to_dict()
parameters()
set_layer_name(name)
Update the names of the magnetic parameters with the name of the layer if it has not already been set. This
is necessary since we don’t know the layer name until after we have constructed the magnetism object.
to_dict()
set_layer_name(name)
Update the names of the magnetic parameters with the name of the layer if it has not already been set. This
is necessary since we don’t know the layer name until after we have constructed the magnetism object.
to_dict()
interface_above and interface_below define the magnetic interface at the boundaries, if it is different from the
nuclear interface.
magnetic = True
parameters()
set_layer_name(name)
Update the names of the magnetic parameters with the name of the layer if it has not already been set. This
is necessary since we don’t know the layer name until after we have constructed the magnetism object.
to_dict()
Reflectometry materials.
Materials (see Material) have a composition and a density. Density may not be known, either because it has not been
measured or because the measurement of the bulk value does not apply to thin films. The density parameter can be
fitted directly, or the bulk density can be used, and a stretch parameter can be fitted.
Mixtures (see Mixture) are a special kind of material composed of individual parts in proportion. A mixture can be
constructed in a number of ways, such as by measuring proportional masses and mixing, or by measuring proportional
volumes and mixing. The parameter of interest may also be the relative number of atoms of one material versus
another. The fractions of the different mixture components are fitted parameters, with the remainder of the bulk filled
by the final component.
SLDs (see SLD) are raw scattering length density values. These should be used if the material composition is not
known. In that case, you will need separate SLD materials for each wavelength and probe.
air (see Vacuum) is a predefined scatterer transparent to all probes.
Scatterer (see Scatterer) is the abstract base class from which all scatterers are derived.
The probe cache (see ProbeCache) stores the scattering factors for the various materials and calls the material sld
method on demand. Because the same material can be used for multiple measurements, the scattering factors cannot
be stored with material itself, nor does it make sense to store them with the probe. The scattering factor lookup for the
material is separate from the scattering length density calculation so that you only need to look up the material once
per fit.
The probe itself deals with all computations relating to the radiation type and energy. Unlike the normally tabulated
X-ray scattering factors f’, f”, there is no need for the probe to scale by the electron radius. In the end, sld is just the
returned scattering factors times density.
cell_volume
[Å³] density is mass / cell_volume
number_density
[atoms/cm³] density is number_density * molar mass / Avogadro’s constant
The resulting material will have a density attribute with the computed material density in addition to the
fitby attribute specified.
Note: Calling fitby replaces the density parameter in the material, so be sure to do so before using density
in a parameter expression. Using bumps.parameter.WrappedParameter for density is another alternative.
name = None
parameters()
sld(probe)
Return the scattering length density expected for the given scattering factors, as returned from a call to
scattering_factors() for a particular probe.
to_dict()
The materials base, M2, M3, . . . can be chemical formula strings (including @density) or material objects.
Use natural_density to change from bulk values if the formula has isotope substitutions.
The fractions F2, F3, . . . are percentages in [0, 100]. The implicit fraction for the base material is 100 -
(F2+F3+. . . ). The SLD is NaN when F1 < 0.
name defaults to M2.name+M3.name+. . .
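The implicit base-fraction rule can be sketched in a few lines (an illustrative helper, not part of the Mixture API):

```python
import math

def base_fraction(*fractions):
    """Implicit percentage of the base material in a mixture.

    Returns NaN when the named fractions exceed 100%, matching the
    documented behavior that the SLD is NaN when F1 < 0.
    """
    f1 = 100.0 - sum(fractions)
    return float('nan') if f1 < 0 else f1
```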
classmethod bymass(base, *parts, **kw)
Returns an alloy defined by relative mass of the constituents.
Mixture.bymass(base, M2, F2, . . . , name=’mixture name’)
classmethod byvolume(base, *parts, **kw)
Returns an alloy defined by relative volume of the constituents.
Mixture.byvolume(base, M2, F2, . . . , name=’mixture name’)
property density
Compute the density of the mixture from the density and proportion of the individual components.
name = None
parameters()
Adjustable parameters are the fractions associated with each constituent and the relative scale fraction used
to tweak the overall density.
sld(probe)
Return the scattering length density and absorption of the mixture.
to_dict()
class refl1d.material.ProbeCache(probe=None)
Bases: object
Probe proxy for materials properties.
A caching probe which only looks up scattering factors for materials which it hasn’t seen before. Note that
caching is based on object id, and will fail if the material object is updated with a new atomic structure.
probe is the probe to use when looking up the scattering length density.
The scattering factors need to be retrieved each time the probe or the composition changes. This can be done
either by deleting an individual material from the probe (using del probe[material]) or by clearing the entire cache.
clear()
scattering_factors(material, density)
Return the scattering factors for the material, retrieving them from the cache if they have already been
looked up.
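The id-keyed caching behavior described above can be sketched as a toy class (a stand-in, not the real ProbeCache implementation):

```python
class IdKeyedCache:
    """Results are keyed on object id, so mutating a material in place
    will NOT invalidate its entry; it must be deleted explicitly."""

    def __init__(self, lookup):
        self._lookup = lookup    # function: (material, density) -> factors
        self._cache = {}

    def scattering_factors(self, material, density=1.0):
        key = id(material)
        if key not in self._cache:
            self._cache[key] = self._lookup(material, density)
        return self._cache[key]

    def __delitem__(self, material):
        # force a fresh lookup on the next access
        self._cache.pop(id(material), None)

    def clear(self):
        self._cache.clear()
```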
class refl1d.material.SLD(name='SLD', rho=0, irho=0)
Bases: Scatterer
Unknown composition.
Use this when you don’t know the composition of the sample. The absorption and scattering length density are
stored directly rather than trying to guess at the composition from details about the sample.
The complex scattering potential is defined by ρ + j·ρᵢ. Note that this differs from ρ + j·μ/(2λ) more traditionally
used in neutron reflectometry, and N·rₑ·(f₁ + j·f₂) traditionally used in X-ray reflectometry.
Given that f₁ and f₂ are always wavelength dependent for X-ray reflectometry, it will not make much sense to
use this for wavelength-varying X-ray measurements. Similarly, some isotopes, particularly rare earths, show
wavelength dependence for neutrons, and so time-of-flight measurements should not be fit with a fixed SLD
scatterer.
name = None
parameters()
sld(probe)
Return the scattering length density expected for the given scattering factors, as returned from a call to
scattering_factors() for a particular probe.
to_dict()
class refl1d.material.Scatterer
Bases: object
A generic scatterer separates the lookup of the scattering factors from the calculation of the scattering length
density. This allows programs to fit density and alloy composition more efficiently.
Note: the Scatterer base class is extended by _MaterialStacker so that materials can be implicitly converted
to slabs when used in stack construction expressions. It is not done directly to avoid circular dependencies
between model and material.
name = None
sld(sf )
Return the scattering length density expected for the given scattering factors, as returned from a call to
scattering_factors() for a particular probe.
class refl1d.material.Vacuum
Bases: Scatterer
Empty layer
name = 'air'
parameters()
sld(probe)
Return the scattering length density expected for the given scattering factors, as returned from a call to
scattering_factors() for a particular probe.
to_dict()
By formula:
If you want to adjust the density, you will need to make your own copy of these materials. For example, for permalloy:
Reflectometry models
Reflectometry models consist of 1-D stacks of layers. Layers are joined by Gaussian interfaces. The layers themselves
may be uniform, or the scattering density may vary with depth in the layer.
Note: By importing model, the definition of material.Scatterer changes so that materials can be stacked into
layers using operator overloading:
- the | operator (normally “bitwise or”) joins stacks
- the * operator repeats a stack n times (n an int)
This will affect all instances of the Scatterer class, and all of its subclasses.
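The stacking operators can be illustrated with toy classes (stand-ins for, not the real, refl1d.model types):

```python
class ToyLayer:
    """Minimal layer supporting | (join) and * (repeat)."""
    def __init__(self, name):
        self.name = name
    def __or__(self, other):
        return ToyStack([self]) | other
    def __mul__(self, n):
        return ToyStack([self] * n)

class ToyStack:
    """Flat list of layers; | concatenates, * repeats."""
    def __init__(self, layers):
        self.layers = list(layers)
    def __or__(self, other):
        extra = other.layers if isinstance(other, ToyStack) else [other]
        return ToyStack(self.layers + extra)
    def __mul__(self, n):
        return ToyStack(self.layers * n)
```

Because * binds more tightly than |, an expression like Si | NiTi * 3 | air repeats the middle element before joining, just as in refl1d sample expressions.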
class refl1d.model.Layer
Bases: object
Component of a material description.
thickness (Parameter: angstrom)
Thickness of the layer
interface (Parameter: angstrom)
Interface for the top of the layer.
magnetism (Magnetism info)
Magnetic profile anchored to the layer.
constraints()
Constraints
find(z)
Find the layer at depth z.
Returns layer, start, end
interface = None
property ismagnetic
layer_parameters()
property magnetism
name = None
parameters()
Returns a dictionary of parameters specific to the layer. These will be added to the dictionary containing
interface, thickness and magnetism parameters.
penalty()
Return a penalty value associated with the layer. This should be zero if the parameters are valid, and
increasing as the parameters become more invalid. For example, if total volume fraction exceeds unity,
then the penalty would be the amount by which it exceeds unity, or if z values must be sorted, then penalty
would be the amount by which they are unsorted.
Note that penalties are handled separately from any probability of seeing a combination of layer parameters;
the final solution to the problem should not include any penalized points.
render(probe, slabs)
Use the probe to render the layer into a microslab representation.
thickness = None
to_dict()
Return a dictionary representation of the Slab object
class refl1d.model.Repeat(stack, repeat=1, interface=None, name=None, magnetism=None)
Bases: Layer
Repeat a layer or stack.
If an interface parameter is provided, the roughness between the multilayers may be different from the roughness
between the repeated stack and the following layer.
Note: Repeat is not a type of Stack, but it does have a stack inside.
constraints()
Constraints
find(z)
Find the layer at depth z.
Returns layer, start, end
interface = None
property ismagnetic
layer_parameters()
property magnetism
name = None
parameters()
Returns a dictionary of parameters specific to the layer. These will be added to the dictionary containing
interface, thickness and magnetism parameters.
penalty()
Return a penalty value associated with the layer. This should be zero if the parameters are valid, and
increasing as the parameters become more invalid. For example, if total volume fraction exceeds unity,
then the penalty would be the amount by which it exceeds unity, or if z values must be sorted, then penalty
would be the amount by which they are unsorted.
Note that penalties are handled separately from any probability of seeing a combination of layer parameters;
the final solution to the problem should not include any penalized points.
render(probe, slabs)
Use the probe to render the layer into a microslab representation.
property thickness
to_dict()
Return a dictionary representation of the Repeat object
class refl1d.model.Slab(material=None, thickness=0, interface=0, name=None, magnetism=None)
Bases: Layer
A block of material.
constraints()
Constraints
find(z)
Find the layer at depth z.
Returns layer, start, end
interface = None
property ismagnetic
layer_parameters()
property magnetism
name = None
parameters()
Returns a dictionary of parameters specific to the layer. These will be added to the dictionary containing
interface, thickness and magnetism parameters.
penalty()
Return a penalty value associated with the layer. This should be zero if the parameters are valid, and
increasing as the parameters become more invalid. For example, if total volume fraction exceeds unity,
then the penalty would be the amount by which it exceeds unity, or if z values must be sorted, then penalty
would be the amount by which they are unsorted.
Note that penalties are handled separately from any probability of seeing a combination of layer parameters;
the final solution to the problem should not include any penalized points.
render(probe, slabs)
Use the probe to render the layer into a microslab representation.
thickness = None
to_dict()
Return a dictionary representation of the Slab object
class refl1d.model.Stack(base=None, name='Stack')
Bases: Layer
Reflectometry layer stack
A reflectometry sample is defined by a stack of layers. Each layer has an interface describing how the top of the
layer interacts with the bottom of the overlaying layer. The stack may contain
add(other)
constraints()
Constraints
find(z)
Find the layer at depth z.
Returns layer, start, end
insert(idx, other)
Insert structure into a stack. If the inserted element is another stack, the stack will be expanded to accommodate
it. You cannot make nested stacks.
interface = None
property ismagnetic
layer_parameters()
property magnetism
name = None
parameters()
Returns a dictionary of parameters specific to the layer. These will be added to the dictionary containing
interface, thickness and magnetism parameters.
penalty()
Return a penalty value associated with the layer. This should be zero if the parameters are valid, and
increasing as the parameters become more invalid. For example, if total volume fraction exceeds unity,
then the penalty would be the amount by which it exceeds unity, or if z values must be sorted, then penalty
would be the amount by which they are unsorted.
Note that penalties are handled separately from any probability of seeing a combination of layer parameters;
the final solution to the problem should not include any penalized points.
render(probe, slabs)
Use the probe to render the layer into a microslab representation.
property thickness
to_dict()
Return a dictionary representation of the Stack object
property ismagnetic
layer_parameters()
property magnetism
name = None
parameters()
Returns a dictionary of parameters specific to the layer. These will be added to the dictionary containing
interface, thickness and magnetism parameters.
penalty()
Return a penalty value associated with the layer. This should be zero if the parameters are valid, and
increasing as the parameters become more invalid. For example, if total volume fraction exceeds unity,
then the penalty would be the amount by which it exceeds unity, or if z values must be sorted, then penalty
would be the amount by which they are unsorted.
Note that penalties are handled separately from any probability of seeing a combination of layer parameters;
the final solution to the problem should not include any penalized points.
profile(Pz)
render(probe, slabs)
Use the probe to render the layer into a microslab representation.
thickness = None
to_dict()
Return a dictionary representation of the Slab object
class refl1d.mono.FreeLayer(below=None, above=None, thickness=0, z=(), rho=(), irho=(),
name='Freeform')
Bases: Layer
A freeform section of the sample modeled with splines.
The SLD (rho) and imaginary SLD (irho) can be modeled with a separate number of control points. The control points can be equally spaced in the layers unless rhoz or irhoz are specified. If the z values are given, they must be in the range [0, 1]. One control point is anchored at either end, so there are two fewer z values than controls if z values are given.
Layers have a slope of zero at the ends, so they automatically blend with slabs.
constraints()
Constraints
find(z)
Find the layer at depth z.
Returns layer, start, end
interface = None
property ismagnetic
layer_parameters()
property magnetism
name = None
parameters()
Returns a dictionary of parameters specific to the layer. These will be added to the dictionary containing
interface, thickness and magnetism parameters.
penalty()
Return a penalty value associated with the layer. This should be zero if the parameters are valid, and
increasing as the parameters become more invalid. For example, if total volume fraction exceeds unity,
then the penalty would be the amount by which it exceeds unity, or if z values must be sorted, then penalty
would be the amount by which they are unsorted.
Note that penalties are handled separately from any probability of seeing a combination of layer parameters;
the final solution to the problem should not include any penalized points.
profile(Pz, below, above)
render(probe, slabs)
Use the probe to render the layer into a microslab representation.
thickness = None
to_dict()
Return a dictionary representation of the Slab object
refl1d.mono.inflections(dx, dy)
ModelFunction
Exported names
In model definition scripts, rather than importing symbols one by one, you can simply perform:
from refl1d.names import *
This is bad style for library and applications but convenient for small scripts.
refl1d.names.ModelFunction(*args, **kw)
class refl1d.ncnrdata.ANDR(**kw)
Bases: NCNRData, Monochromatic
Instrument definition for NCNR AND/R diffractometer/reflectometer.
Magnetic data has multiple cross sections and often has fixed slits:
Tlo = 90
calc_dT(**kw)
Compute the angular divergence for given slits and angles
Parameters
T OR Q
[[float] | ∘ OR Å-1 ] measurement angles
slits
[float OR (float, float) | mm] total slit opening from edge to edge, not beam center to edge
d_s1, d_s2
[float | mm] distance from sample to slit 1 and slit 2
sample_width
[float | mm] size of sample
sample_broadening
[float | ∘ FWHM] resolution changes from sample warp
Returns
dT
[[float] | ∘ FWHM] angular divergence
sample_broadening can be estimated from W, the full width at half maximum of a rocking curve measured
in degrees:
sample_broadening = W - degrees( 0.5*(s1+s2) / (d1-d2))
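The geometric part of the divergence follows directly from the slit openings and distances. A sketch consistent with the sample_broadening formula above (simplified: no sample-width clipping, which the real routine also handles):

```python
import math

def geometric_divergence(s1, s2, d1, d2):
    """FWHM angular divergence in degrees from full slit openings
    s1, s2 (mm) at distances d1, d2 (mm) from the sample, with
    slit 1 farther from the sample than slit 2."""
    return math.degrees(0.5 * (s1 + s2) / (d1 - d2))
```

With this, sample_broadening = W - geometric_divergence(s1, s2, d1, d2) recovers the rocking-curve estimate.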
calc_slits(**kw)
Determines slit openings from measurement pattern.
If slits are fixed simply return the same slits for every angle, otherwise use an opening range [Tlo, Thi] and
the value of the slits at the start of the opening to define the slits. Slits below Tlo and above Thi can be
specified separately.
T OR Q
incident angle or Q
Tlo, Thi
angle range over which slits are opening
slits_at_Tlo
openings at the start of the range, or fixed opening
slits_below, slits_above
openings below and above the range
Use fixed_slits if available; otherwise use opening slits.
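The opening pattern described above can be sketched as follows, under the assumption that slits scale linearly with angle inside [Tlo, Thi] (which keeps dT/T constant); this is an illustration, not the library's code:

```python
def calc_slits(T, Tlo, Thi, slits_at_Tlo, slits_below=None, slits_above=None):
    """Slit opening for measurement angle T under the opening-slit pattern."""
    if T < Tlo:
        return slits_below if slits_below is not None else slits_at_Tlo
    if T > Thi:
        # Hold the opening reached at the top of the range unless overridden.
        return slits_above if slits_above is not None else slits_at_Tlo * Thi / Tlo
    return slits_at_Tlo * T / Tlo
```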
dLoL = 0.009
d_s1 = 2086.0
d_s2 = 230.0
classmethod defaults()
Return default instrument properties as a printable string.
fixed_slits = None
instrument = 'AND/R'
load(filename, **kw)
load_magnetic(filename, **kw)
readfile(filename)
resolution(**kw)
Calculate resolution at each angle.
Return
T, dT
[[float] | ∘ ] Angles and angular divergence.
L, dL
[[float] | Å] Wavelengths and wavelength dispersion.
sample_broadening = 0
sample_width = 10000000000.0
slits_above = None
slits_at_Tlo = None
slits_below = None
wavelength = 5.0042
class refl1d.ncnrdata.MAGIK(**kw)
Bases: NCNRData, Monochromatic
Instrument definition for NCNR MAGIK diffractometer/reflectometer.
Thi = 90
Tlo = 90
calc_dT(**kw)
Compute the angular divergence for given slits and angles
Parameters
T OR Q
[[float] | ∘ OR Å-1 ] measurement angles
slits
[float OR (float, float) | mm] total slit opening from edge to edge, not beam center to edge
d_s1, d_s2
[float | mm] distance from sample to slit 1 and slit 2
sample_width
[float | mm] size of sample
sample_broadening
[float | ∘ FWHM] resolution changes from sample warp
Returns
dT
[[float] | ∘ FWHM] angular divergence
sample_broadening can be estimated from W, the full width at half maximum of a rocking curve measured
in degrees:
sample_broadening = W - degrees( 0.5*(s1+s2) / (d1-d2))
calc_slits(**kw)
Determines slit openings from measurement pattern.
If slits are fixed simply return the same slits for every angle, otherwise use an opening range [Tlo, Thi] and
the value of the slits at the start of the opening to define the slits. Slits below Tlo and above Thi can be
specified separately.
T OR Q
incident angle or Q
Tlo, Thi
angle range over which slits are opening
slits_at_Tlo
openings at the start of the range, or fixed opening
slits_below, slits_above
openings below and above the range
Use fixed_slits if available; otherwise use opening slits.
dLoL = 0.009
d_s1 = 1759.0
d_s2 = 330.0
classmethod defaults()
Return default instrument properties as a printable string.
fixed_slits = None
instrument = 'MAGIK'
load(filename, **kw)
load_magnetic(filename, **kw)
readfile(filename)
resolution(**kw)
Calculate resolution at each angle.
Return
T, dT
[[float] | ∘ ] Angles and angular divergence.
L, dL
[[float] | Å] Wavelengths and wavelength dispersion.
sample_broadening = 0
sample_width = 10000000000.0
slits_above = None
slits_at_Tlo = None
slits_below = None
wavelength = 5.0042
class refl1d.ncnrdata.NCNRData
Bases: object
load(filename, **kw)
load_magnetic(filename, **kw)
readfile(filename)
class refl1d.ncnrdata.NG1(**kw)
Bases: NCNRData, Monochromatic
Instrument definition for NCNR NG-1 reflectometer.
Thi = 90
Tlo = 90
calc_dT(**kw)
Compute the angular divergence for given slits and angles
Parameters
T OR Q
[[float] | ∘ OR Å-1 ] measurement angles
slits
[float OR (float, float) | mm] total slit opening from edge to edge, not beam center to edge
d_s1, d_s2
[float | mm] distance from sample to slit 1 and slit 2
sample_width
[float | mm] size of sample
sample_broadening
[float | ∘ FWHM] resolution changes from sample warp
Returns
dT
[[float] | ∘ FWHM] angular divergence
sample_broadening can be estimated from W, the full width at half maximum of a rocking curve measured
in degrees:
sample_broadening = W - degrees( 0.5*(s1+s2) / (d1-d2))
calc_slits(**kw)
Determines slit openings from measurement pattern.
If slits are fixed simply return the same slits for every angle, otherwise use an opening range [Tlo, Thi] and
the value of the slits at the start of the opening to define the slits. Slits below Tlo and above Thi can be
specified separately.
T OR Q
incident angle or Q
Tlo, Thi
angle range over which slits are opening
slits_at_Tlo
openings at the start of the range, or fixed opening
slits_below, slits_above
openings below and above the range
Use fixed_slits if available; otherwise use opening slits.
dLoL = 0.015
d_s1 = 1905.0
d_s2 = 355.59999999999997
d_s3 = 228.6
d_s4 = 1066.8
classmethod defaults()
Return default instrument properties as a printable string.
fixed_slits = None
instrument = 'NG-1'
load(filename, **kw)
load_magnetic(filename, **kw)
radiation = 'neutron'
readfile(filename)
resolution(**kw)
Calculate resolution at each angle.
Return
T, dT
[[float] | ∘ ] Angles and angular divergence.
L, dL
[[float] | Å] Wavelengths and wavelength dispersion.
sample_broadening = 0
sample_width = 10000000000.0
slits_above = None
slits_at_Tlo = None
slits_below = None
wavelength = 4.75
class refl1d.ncnrdata.NG7(**kw)
Bases: NCNRData, Monochromatic
Instrument definition for NCNR NG-7 reflectometer.
Thi = 90
Tlo = 90
calc_dT(**kw)
Compute the angular divergence for given slits and angles
Parameters
T OR Q
[[float] | ∘ OR Å-1 ] measurement angles
slits
[float OR (float, float) | mm] total slit opening from edge to edge, not beam center to edge
d_s1, d_s2
[float | mm] distance from sample to slit 1 and slit 2
sample_width
[float | mm] size of sample
sample_broadening
[float | ∘ FWHM] resolution changes from sample warp
Returns
dT
[[float] | ∘ FWHM] angular divergence
sample_broadening can be estimated from W, the full width at half maximum of a rocking curve measured in degrees:
sample_broadening = W - degrees( 0.5*(s1+s2) / (d1-d2))
d_detector = 2000.0
d_s1 = 1722.25
d_s2 = 222.25
classmethod defaults()
Return default instrument properties as a printable string.
fixed_slits = None
instrument = 'NG-7'
load(filename, **kw)
load_magnetic(filename, **kw)
radiation = 'neutron'
readfile(filename)
resolution(**kw)
Calculate resolution at each angle.
Return
T, dT
[[float] | ∘ ] Angles and angular divergence.
L, dL
[[float] | Å] Wavelengths and wavelength dispersion.
sample_broadening = 0
sample_width = 10000000000.0
slits_above = None
slits_at_Tlo = None
slits_below = None
wavelength = 4.768
class refl1d.ncnrdata.PBR(**kw)
Bases: NCNRData, Monochromatic
Instrument definition for NCNR PBR reflectometer.
Thi = 90
Tlo = 90
calc_dT(**kw)
Compute the angular divergence for given slits and angles
Parameters
T OR Q
[[float] | ∘ OR Å-1 ] measurement angles
slits
[float OR (float, float) | mm] total slit opening from edge to edge, not beam center to edge
d_s1, d_s2
[float | mm] distance from sample to slit 1 and slit 2
sample_width
[float | mm] size of sample
sample_broadening
[float | ∘ FWHM] resolution changes from sample warp
Returns
dT
[[float] | ∘ FWHM] angular divergence
sample_broadening can be estimated from W, the full width at half maximum of a rocking curve measured in degrees:
sample_broadening = W - degrees( 0.5*(s1+s2) / (d1-d2))
d_s1 = 1835
d_s2 = 343
d_s3 = 380
d_s4 = 1015
classmethod defaults()
Return default instrument properties as a printable string.
fixed_slits = None
instrument = 'PBR'
load(filename, **kw)
load_magnetic(filename, **kw)
readfile(filename)
resolution(**kw)
Calculate resolution at each angle.
Return
T, dT
[[float] | ∘ ] Angles and angular divergence.
L, dL
[[float] | Å] Wavelengths and wavelength dispersion.
sample_broadening = 0
sample_width = 10000000000.0
slits_above = None
slits_at_Tlo = None
slits_below = None
wavelength = 4.75
class refl1d.ncnrdata.XRay(**kw)
Bases: NCNRData, Monochromatic
Instrument definition for NCNR X-ray reflectometer.
Normal dT is in the range 2e-5 to 3e-4.
Slits are fixed throughout the experiment in one of a few preconfigured openings. Please update this file with the
standard configurations when you find them.
You can choose to ignore the geometric calculation entirely by setting the slit opening to 0 and using sample_broadening to define the entire divergence. Note that Probe.sample_broadening is a fittable parameter, so you need to access its value:
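The code example from the original docstring did not survive extraction. A minimal sketch of the idea, assuming bumps-style fittable parameters with a .value attribute (the Probe/Parameter stubs below are stand-ins, not the real classes):

```python
class Parameter:
    """Stand-in for a fittable parameter (assumption: real API exposes .value)."""
    def __init__(self, value=0.0):
        self.value = value

class Probe:
    """Stand-in probe with the two attributes the text mentions."""
    def __init__(self):
        self.slits = 1.0
        self.sample_broadening = Parameter(0.0)

probe = Probe()
probe.slits = 0                        # disable the geometric term
probe.sample_broadening.value = 2e-4   # entire divergence, degrees FWHM
```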
Thi = 90
Tlo = 90
calc_dT(**kw)
Compute the angular divergence for given slits and angles
Parameters
T OR Q
[[float] | ∘ OR Å-1 ] measurement angles
slits
[float OR (float, float) | mm] total slit opening from edge to edge, not beam center to edge
d_s1, d_s2
[float | mm] distance from sample to slit 1 and slit 2
sample_width
[float | mm] size of sample
sample_broadening
[float | ∘ FWHM] resolution changes from sample warp
Returns
dT
[[float] | ∘ FWHM] angular divergence
sample_broadening can be estimated from W, the full width at half maximum of a rocking curve measured
in degrees:
sample_broadening = W - degrees( 0.5*(s1+s2) / (d1-d2))
calc_slits(**kw)
Determines slit openings from measurement pattern.
If slits are fixed simply return the same slits for every angle, otherwise use an opening range [Tlo, Thi] and
the value of the slits at the start of the opening to define the slits. Slits below Tlo and above Thi can be
specified separately.
T OR Q incident angle or Q Tlo, Thi angle range over which slits are opening slits_at_Tlo openings at the
start of the range, or fixed opening slits_below, slits_above openings below and above the range
Use fixed_slits is available, otherwise use opening slits.
dLoL = 0.0006486766995329528
d_detector = None
d_s1 = 275.5
d_s2 = 192.5
d_s3 = 175.0
classmethod defaults()
Return default instrument properties as a printable string.
fixed_slits = None
instrument = 'X-ray'
load(filename, **kw)
load_magnetic(filename, **kw)
readfile(filename)
resolution(**kw)
Calculate resolution at each angle.
Return
T, dT
[[float] | ∘ ] Angles and angular divergence.
L, dL
[[float] | Å] Wavelengths and wavelength dispersion.
sample_broadening = 0
sample_width = 10000000000.0
slits_above = None
slits_at_Tlo = None
slits_below = None
wavelength = 1.5416
refl1d.ncnrdata.find_xsec(filename)
Find files containing the polarization cross-sections.
Returns a tuple with file names for the ++ +- -+ -- cross sections, or None if the spin cross section does not exist.
Unfortunately the interpretation is a little more complicated than this, as the data acquisition system assigns letters on the basis of flipper state rather than neutron spin state. Whether flipper on or off corresponds to spin up or down depends on whether the polarizer/analyzer is a supermirror in transmission or reflection mode, or in the case of ³He polarizers, whether the polarization is up or down.
For full control, specify filename as a list of files, with None for the missing cross sections.
refl1d.ncnrdata.parse_ncnr_file(filename)
Parse NCNR reduced data file returning header and data.
header
dictionary of fields such as 'data', 'title', 'instrument'
data
2D array of data
If ‘columns’ is present in header, it will be a list of the names of the columns. If ‘instrument’ is present in the
header, the default instrument geometry will be specified.
Slit geometry is set to the default from the instrument if it is not available in the reduced file.
331-336. doi:10.1016/0921-4526(95)00946-9
3 Adamuți-Trache, M., McMullen, W. E. & Douglas, J. F. Segmental concentration profiles of end-tethered polymers with excluded-volume and
the solid/solvent interface: self-consistent field theory and a Monte Carlo model. Macromolecules, 20(7), 1692–1696. doi:10.1021/ma00173a041
5 De Vos, W. M., & Leermakers, F. A. M. (2009). Modeling the structure of a polydisperse polymer brush. Polymer, 50(1), 305–316.
doi:10.1016/j.polymer.2008.10.025
6 Sheridan, R. J., Orski, S. V., Jones, R. L., Satija, S., & Beers, K. L. (2017). Surface interaction parameter measurement of solvated polymers
solvent
the solvent material
Previous layer should not have roughness! Use a spline to simulate it.
According to [7], l_lat and m_lat should be calculated by the formulas:

l_lat = a^2 (m/l) / p_l

m_lat = (a m/l)^2 / p_l

where l is the real polymer's bond length, m is the real segment mass, and a is the ratio between molecular weight and radius of gyration at theta conditions. The lattice persistence, p_l, is:

p_l = (1/6) (1 + 1/Z) / (1 - 1/Z)
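These formulas translate directly into code; a sketch with the symbols as defined above (l = bond length, m = segment mass, a = Mw/Rg ratio, Z = lattice coordination number):

```python
def lattice_parameters(l, m, a, Z):
    """Lattice bond length l_lat, segment mass m_lat, and lattice
    persistence p_l from the formulas above."""
    p_l = (1.0 / 6.0) * (1 + 1.0 / Z) / (1 - 1.0 / Z)
    l_lat = a**2 * (m / l) / p_l
    m_lat = (a * m / l) ** 2 / p_l
    return l_lat, m_lat, p_l
```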
property ismagnetic
layer_parameters()
property magnetism
name = None
parameters()
Returns a dictionary of parameters specific to the layer. These will be added to the dictionary containing
interface, thickness and magnetism parameters.
penalty()
Return a penalty value associated with the layer. This should be zero if the parameters are valid, and
increasing as the parameters become more invalid. For example, if total volume fraction exceeds unity,
then the penalty would be the amount by which it exceeds unity, or if z values must be sorted, then penalty
would be the amount by which they are unsorted.
Note that penalties are handled separately from any probability of seeing a combination of layer parameters;
the final solution to the problem should not include any penalized points.
profile(z)
render(probe, slabs)
Use the probe to render the layer into a microslab representation.
thickness = None
to_dict()
Return a dictionary representation of the Slab object
7 Vincent, B., Edwards, J., Emmett, S., & Croot, R. (1988). Phase separation in dispersions of weakly-interacting particles in solutions of
class refl1d.polymer.PolymerBrush(thickness=0, interface=0, name='brush', polymer=None, solvent=None,
base_vf=None, base=None, length=None, power=None, sigma=None)
Bases: Layer
Polymer brushes in a solvent
Parameters
thickness
the thickness of the solvent layer
interface
the roughness of the solvent surface
polymer
the polymer material
solvent
the solvent material or vacuum
base_vf
volume fraction (%) of the polymer brush at the interface
base
the thickness of the brush interface (A)
length
the length of the brush above the interface (A)
power
the rate of brush thinning
sigma
rms brush roughness (A)
The materials can either use the scattering length density directly, such as PDMS = SLD(0.063, 0.00006), or they can use chemical composition and material density, such as PDMS = Material("C2H6OSi", density=0.965).
These parameters combine in the following profile formula:

V(z) = V_o                               if z <= z_o
V(z) = V_o (1 - ((z - z_o)/L)^2)^p       if z_o < z < z_o + L
V(z) = 0                                 if z >= z_o + L

V_sigma(z) = (V ⋆ G_sigma)(z),  with Gaussian kernel G_sigma(z) = exp(-z^2 / (2 sigma^2)) / sqrt(2 pi sigma^2)

rho(z) = rho_p V_sigma(z) + rho_s (1 - V_sigma(z))

where V_sigma(z) is the volume fraction convoluted with the brush roughness sigma (⋆ denotes convolution) and rho(z) is the complex scattering length density of the profile.
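The unsmeared piecewise profile is straightforward to evaluate; a sketch of V(z) before the Gaussian convolution (parameter names follow the formula, not the class constructor):

```python
def brush_profile(z, V_o, z_o, L, p):
    """Unsmeared brush volume fraction V(z): constant V_o up to z_o,
    a parabolic-power decay over length L, then zero."""
    if z <= z_o:
        return V_o
    if z < z_o + L:
        return V_o * (1.0 - ((z - z_o) / L) ** 2) ** p
    return 0.0
```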
constraints()
Constraints
find(z)
Find the layer at depth z.
Returns layer, start, end
interface = None
property ismagnetic
layer_parameters()
property magnetism
name = None
parameters()
Returns a dictionary of parameters specific to the layer. These will be added to the dictionary containing
interface, thickness and magnetism parameters.
penalty()
Return a penalty value associated with the layer. This should be zero if the parameters are valid, and
increasing as the parameters become more invalid. For example, if total volume fraction exceeds unity,
then the penalty would be the amount by which it exceeds unity, or if z values must be sorted, then penalty
would be the amount by which they are unsorted.
Note that penalties are handled separately from any probability of seeing a combination of layer parameters;
the final solution to the problem should not include any penalized points.
profile(z)
render(probe, slabs)
Use the probe to render the layer into a microslab representation.
thickness = None
to_dict()
Return a dictionary representation of the Slab object
class refl1d.polymer.PolymerMushroom(thickness=0, interface=0, name='Mushroom', polymer=None,
solvent=None, sigma=0, vf=0, delta=0)
Bases: Layer
Polymer mushrooms in a solvent (volume profile)
Parameters
delta | real scalar
interaction parameter
vf | real scalar
not quite volume fraction (dimensionless grafting density)
sigma | real scalar
convolution roughness (A)
Uses analytical SCF methods for Gaussian chains, scaled by the radius of gyration of the equivalent free polymer, as an approximation to the results of renormalization group methods.[3]
Solutions are only strictly valid for vf << 1.
constraints()
Constraints
find(z)
Find the layer at depth z.
Returns layer, start, end
interface = None
property ismagnetic
layer_parameters()
property magnetism
name = None
parameters()
Returns a dictionary of parameters specific to the layer. These will be added to the dictionary containing
interface, thickness and magnetism parameters.
penalty()
Return a penalty value associated with the layer. This should be zero if the parameters are valid, and
increasing as the parameters become more invalid. For example, if total volume fraction exceeds unity,
then the penalty would be the amount by which it exceeds unity, or if z values must be sorted, then penalty
would be the amount by which they are unsorted.
Note that penalties are handled separately from any probability of seeing a combination of layer parameters;
the final solution to the problem should not include any penalized points.
profile(z)
render(probe, slabs)
Use the probe to render the layer into a microslab representation.
thickness = None
to_dict()
Return a dictionary representation of the Slab object
class refl1d.polymer.VolumeProfile(thickness=0, interface=0, name='VolumeProfile', material=None,
solvent=None, profile=None, **kw)
Bases: Layer
Generic volume profile function
Parameters
thickness
the thickness of the solvent layer
interface
the roughness of the solvent surface
material
the polymer material
solvent
the solvent material
profile
the profile function, suitably parameterized
The materials can either use the scattering length density directly, such as PDMS = SLD(0.063, 0.00006), or they can use chemical composition and material density, such as PDMS = Material("C2H6OSi", density=0.965).
These parameters combine in the following profile formula:
property ismagnetic
layer_parameters()
property magnetism
name = None
parameters()
Returns a dictionary of parameters specific to the layer. These will be added to the dictionary containing
interface, thickness and magnetism parameters.
penalty()
Return a penalty value associated with the layer. This should be zero if the parameters are valid, and
increasing as the parameters become more invalid. For example, if total volume fraction exceeds unity,
then the penalty would be the amount by which it exceeds unity, or if z values must be sorted, then penalty
would be the amount by which they are unsorted.
Note that penalties are handled separately from any probability of seeing a combination of layer parameters;
the final solution to the problem should not include any penalized points.
render(probe, slabs)
Use the probe to render the layer into a microslab representation.
thickness = None
to_dict()
Return a dictionary representation of the Slab object
refl1d.polymer.layer_thickness(z)
Return the thickness of a layer given the microslab z points.
The z points are at the centers of the bins. We can use the recurrence that boundary b[k] = z[k-1] + (z[k-1] - b[k-1]) to compute the total length of the layer.
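The recurrence can be applied directly; a sketch that assumes the layer starts at boundary b[0] = 0 (an assumption for this illustration):

```python
def layer_thickness(z):
    """Total layer thickness from microslab bin centers z, using the
    boundary recurrence b[k] = z[k-1] + (z[k-1] - b[k-1]), i.e.
    b[k] = 2*z[k-1] - b[k-1], starting from b[0] = 0."""
    b = 0.0
    for zk in z:
        b = 2.0 * zk - b
    return b
```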
Experimental probe.
The experimental probe describes the incoming beam for the experiment. Scattering properties of the sample are
dependent on the type and energy of the radiation.
See Data Representation for details.
class refl1d.probe.NeutronProbe(T=None, dT=0, L=None, dL=0, data=None, intensity=1, background=0,
back_absorption=1, theta_offset=0, sample_broadening=0,
back_reflectivity=False, name=None, filename=None, dQ=None,
resolution='normal')
Bases: Probe
Neutron probe.
By providing a scattering factor calculator for neutron scattering, model components can be defined by mass density and chemical composition.
Aguide = 270.0
property Q
Q_c(substrate=None, surface=None)
property Ro
Returns:
dtheta
[float | degrees] uncertainty in alignment angle
apply_beam(calc_Q, calc_R, resolution=True, interpolation=0)
Apply factors such as beam intensity, background, backabsorption, resolution to the data.
resolution is True if the resolution function should be applied to the reflectivity.
interpolation is the number of Q points to show between the nominal Q points of the probe. Use this to draw
a smooth theory line between the data points. The resolution dQ is interpolated between the resolution of
the surrounding Q points.
If an amplitude signal is provided, r will be scaled by sqrt(I) + i sqrt(B)/|r|, which when squared will equal I |r|^2 + B. The resolution function will be applied directly to the amplitude. Unlike intensity and background, the resulting |G ⋆ r|^2 ≠ G ⋆ |r|^2 for convolution operator ⋆, but it should be close.
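The scaling identity is easy to verify numerically; a sketch (not the library's code) of scaling an amplitude so the squared magnitude picks up the intensity and background:

```python
import cmath

def scale_amplitude(r, intensity, background):
    """Scale complex amplitude r by sqrt(I) + i*sqrt(B)/|r| so that
    |scaled r|**2 = I*|r|**2 + B, per the formula above."""
    s = cmath.sqrt(intensity) + 1j * cmath.sqrt(background) / abs(r)
    return r * s
```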
property calc_Q
If 𝑄𝑐 is imaginary, then −|𝑄𝑐 | is used instead, so this routine can be used for reflectivity signals which
scan from back reflectivity to front reflectivity. For completeness, the angle 𝜃 = 0 is added as well.
property dQ
fresnel(substrate=None, surface=None)
Returns a Fresnel reflectivity calculator given the surface and substrate. The calculated reflectivity includes the Fresnel reflectivity for the probe reflecting from a block of material with the given substrate.
Returns F = R(probe.Q), where R is the magnitude-squared reflectivity.
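For a single sharp interface the magnitude-squared Fresnel reflectivity has a closed form; a sketch under standard conventions (Q in inverse Angstroms, SLDs in 1e-6/A^2 units), not the library's implementation:

```python
import cmath
import math

def fresnel_reflectivity(Q, rho_substrate, rho_surface=0.0):
    """|r|^2 for a bare substrate/surface interface at momentum transfer Q."""
    kz = Q / 2.0
    drho = (rho_substrate - rho_surface) * 1e-6
    # Transmitted wavevector; imaginary below the critical edge.
    kz_t = cmath.sqrt(kz**2 - 4.0 * math.pi * drho)
    r = (kz - kz_t) / (kz + kz_t)
    return abs(r) ** 2
```

Below the critical edge kz_t is imaginary and the reflectivity is total (|r|^2 = 1).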
label(prefix=None, gloss='', suffix='')
log10_to_linear()
Convert data from log to linear.
Older reflectometry reduction code stored reflectivity in log base 10 format. Call probe.log10_to_linear()
after loading this data to convert it to linear for subsequent display and fitting.
oversample(n=20, seed=1)
Generate an over-sampling of Q to avoid aliasing effects.
Oversampling is needed for thick layers, in which the underlying reflectivity oscillates so rapidly in Q that a single measurement has contributions from multiple Kiessig fringes.
Sampling will be done using a pseudo-random generator so that accidental structure in the function does
not contribute to the aliasing. The generator will usually be initialized with a fixed seed so that the point
selection will not change from run to run, but a seed of None will choose a different set of points each time
oversample is called.
The value n is the number of points that should contribute to each Q value when computing the resolution. These will be distributed about the nominal measurement value, varying in both angle and energy according to the resolution function. This will yield more points near the measurement and fewer farther away. The measurement point itself will not be used, to avoid accidental bias from uniform Q steps. Depending on the problem, a value of n between 20 and 100 should lead to stable values for the convolved reflectivity.
Note: oversample() will remove the extra Q calculation points introduced by critical_edge().
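The seeded pseudo-random sampling can be sketched as follows. This simplified illustration draws extra calculation points from N(Q, dQ) directly, whereas the real routine varies angle and wavelength:

```python
import random

def oversample_Q(Q, dQ, n=20, seed=1):
    """n calculation points per measured Q drawn from N(Q, dQ), with a
    fixed seed so point selection is stable from run to run."""
    rng = random.Random(seed)
    points = []
    for q, dq in zip(Q, dQ):
        points.extend(rng.gauss(q, dq) for _ in range(n))
    return sorted(points)
```

Passing seed=None instead would choose a different set of points on every call.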
parameters()
plot(view=None, **kwargs)
Plot theory against data.
Need substrate/surface for Fresnel-normalized reflectivity
plot_Q4(**kwargs)
Plot the Q**4 reflectivity associated with the probe.
Note that Q**4 reflectivity has the intensity and background applied so that hydrogenated samples display
more cleanly. The formula to reproduce the graph is:
plot_shift = 0
polarized = False
radiation = 'neutron'
residuals_shift = 0
resolution_guard()
Make sure each measured 𝑄 point has at least 5 calculated 𝑄 points contributing to it in the range
[−3∆𝑄, 3∆𝑄].
Not Implemented
restore_data()
Restore the original data after resynth.
resynth_data()
Generate new data according to the model R’ ~ N(R, dR).
The resynthesis step is a precursor to refitting the data, as is required for certain types of Monte Carlo error analysis. The first time it is run it will save the original R into Ro. If you reset R in the probe you will also need to reset Ro so that it is used for subsequent resynth analysis.
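The resampling itself is a pointwise Gaussian draw; a sketch (the probe object handles saving and restoring Ro, which is omitted here):

```python
import random

def resynth(R, dR, seed=None):
    """Draw R' ~ N(R, dR) pointwise for Monte Carlo resampling."""
    rng = random.Random(seed)
    return [rng.gauss(r, dr) for r, dr in zip(R, dR)]
```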
save(filename, theory, substrate=None, surface=None)
Save the data and theory to a file.
scattering_factors(material, density)
Returns the scattering factors associated with the material given the range of wavelengths/energies used in
the probe.
show_resolution = True
simulate_data(theory, noise=2.0)
Set the data for the probe to R + eps with eps ~ N(0, dR^2).
theory is (Q, R).
If the percent noise is provided, set dR to R*noise/100 before simulating. noise defaults to 2% if no dR is present.
Note that measured data estimates uncertainty from the number of counts. This means that points above
the true value will have larger uncertainty than points below the true value. This bias is not captured in the
simulated data.
subsample(dQ)
Select points at most every dQ.
Use this to speed up computation early in the fitting process.
This changes the data object, and is not reversible.
The current algorithm is not picking the “best” Q value, just the nearest, so if you have nearby Q points
with different quality statistics (as happens in overlapped regions from spallation source measurements at
different angles), then it may choose badly. Simple solutions based on the smallest relative error dR/R will
be biased toward peaks, and smallest absolute error dR will be biased toward valleys.
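The simple nearest-point strategy the caveat refers to can be sketched as follows (an illustration: keep a point only when it is at least dQ beyond the last kept point, with no regard for data quality):

```python
def subsample(Q, dQ_min):
    """Indices of a subsample of sorted Q with spacing of at least dQ_min."""
    keep = [0]
    for i in range(1, len(Q)):
        if Q[i] - Q[keep[-1]] >= dQ_min:
            keep.append(i)
    return keep
```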
to_dict()
Return a dictionary representation of the parameters
view = 'log'
fresnel(*args, **kw)
Returns a Fresnel reflectivity calculator given the surface and substrate. The calculated reflectivity includes the Fresnel reflectivity for the probe reflecting from a block of material with the given substrate.
Returns F = R(probe.Q), where R is the magnitude-squared reflectivity.
property mm
property mp
oversample(n=6, seed=1)
parameters()
plot(view=None, **kwargs)
Plot theory against data.
Need substrate/surface for Fresnel-normalized reflectivity
plot_Q4(**kwargs)
plot_fresnel(**kwargs)
plot_linear(**kwargs)
plot_log(**kwargs)
plot_logfresnel(**kwargs)
plot_residuals(**kwargs)
plot_resolution(**kwargs)
property pm
polarized = True
property pp
restore_data()
Restore the original data after resynth.
resynth_data()
Generate new data according to the model R’ ~ N(R, dR).
The resynthesis step is a precursor to refitting the data, as is required for certain types of Monte Carlo error analysis. The first time it is run it will save the original R into Ro. If you reset R in the probe you will also need to reset Ro so that it is used for subsequent resynth analysis.
save(filename, theory, substrate=None, surface=None)
Save the data and theory to a file.
scattering_factors(material, density)
Returns the scattering factors associated with the material given the range of wavelengths/energies used in
the probe.
select_corresponding(theory)
Select theory points corresponding to the measured data.
Since we have evaluated theory at every Q, it is safe to interpolate measured Q into theory, since it will land
on a node, not in an interval.
shared_beam(intensity=1, background=0, back_absorption=1, theta_offset=0, sample_broadening=0)
Share beam parameters across all four cross sections.
New parameters are created for intensity, background, theta_offset, sample_broadening and back_absorption and assigned to all four cross sections. These can be replaced with an explicit parameter in an individual cross section if that parameter is independent.
show_resolution = None
simulate_data(theory, noise=2.0)
Set the data for the probe to R + eps with eps ~ N(0, dR^2).
theory is (Q, R).
If the percent noise is provided, set dR to R*noise/100 before simulating. noise defaults to 2% if no dR is present.
Note that measured data estimates uncertainty from the number of counts. This means that points above
the true value will have larger uncertainty than points below the true value. This bias is not captured in the
simulated data.
substrate = None
surface = None
to_dict()
Return a dictionary representation of the parameters
view = None
property xs
refl1d.probe.PolarizedNeutronQProbe
alias of PolarizedQProbe
class refl1d.probe.PolarizedQProbe(xs=None, name=None, Aguide=270.0, H=0)
Bases: PolarizedNeutronProbe
apply_beam(Q, R, resolution=True, interpolation=0)
Apply factors such as beam intensity, background, backabsorption, and footprint to the data.
property calc_Q
fresnel(*args, **kw)
Returns a Fresnel reflectivity calculator given the surface and substrate. The calculated reflectivity is
the Fresnel reflectivity for the probe reflecting from a block of material with the given substrate and
surface. Returns F = R(probe.Q), where R is the magnitude-squared reflectivity.
property mm
property mp
oversample(n=6, seed=1)
parameters()
plot(view=None, **kwargs)
Plot theory against data.
Need substrate/surface for Fresnel-normalized reflectivity
plot_Q4(**kwargs)
plot_fresnel(**kwargs)
plot_linear(**kwargs)
plot_log(**kwargs)
plot_logfresnel(**kwargs)
plot_residuals(**kwargs)
plot_resolution(**kwargs)
property pm
polarized = True
property pp
restore_data()
Restore the original data after resynth.
resynth_data()
Generate new data according to the model R’ ~ N(R, dR).
The resynthesis step is a precursor to refitting the data, as is required for certain types of Monte Carlo error
analysis. The first time it is run it will save the original R into Ro. If you reset R in the probe you will also
need to reset Ro so that it is used for subsequent resynth analysis.
save(filename, theory, substrate=None, surface=None)
Save the data and theory to a file.
scattering_factors(material, density)
Returns the scattering factors associated with the material given the range of wavelengths/energies used in
the probe.
select_corresponding(theory)
Select theory points corresponding to the measured data.
Since we have evaluated theory at every Q, it is safe to interpolate measured Q into theory, since it will land
on a node, not in an interval.
shared_beam(intensity=1, background=0, back_absorption=1, theta_offset=0, sample_broadening=0)
Share beam parameters across all four cross sections.
New parameters are created for intensity, background, theta_offset, sample_broadening and
back_absorption, and assigned to all cross sections. These can be replaced with an explicit parameter
in an individual cross section if that parameter is independent.
show_resolution = None
simulate_data(theory, noise=2.0)
Set the data for the probe to R + eps with eps ~ N(0, dR^2).
theory is the tuple (Q, R).
If the percent noise is provided, set dR to R*noise/100 before simulating. noise defaults to 2% if no dR is
present.
Note that measured data estimates uncertainty from the number of counts. This means that points above
the true value will have larger uncertainty than points below the true value. This bias is not captured in the
simulated data.
substrate = None
surface = None
to_dict()
Return a dictionary representation of the parameters
view = None
property xs
For calculation purposes, probe needs to return the values 𝑄calc at which the model is evaluated. This is normally
going to be the measured points only, but for some systems, such as those with very thick layers, oversampling
is needed to avoid aliasing effects.
A measurement point consists of incident angle, angular resolution, incident wavelength, FWHM wavelength
resolution, reflectivity and uncertainty in reflectivity.
A probe is a set of points, defined by vectors for each point attribute. For convenience, an attribute can be
initialized with a scalar if it is constant throughout the measurement, but it will be stored as a vector in the
probe. The attributes are initialized as follows:
T
[float or [float] | degrees] Incident angle
dT
[float or [float] | degrees] FWHM angular divergence
L
[float or [float] | Å] Incident wavelength
dL
[float or [float] | Å] FWHM wavelength dispersion
data
[([float], [float])] R, dR reflectivity measurement and uncertainty
dQ
[[float] or None | Å⁻¹] 1-σ Q resolution when it cannot be computed directly from angular
divergence and wavelength dispersion.
resolution
[‘normal’ or ‘uniform’] Distribution function for Q resolution.
Measurement properties:
intensity
[float or Parameter] Beam intensity
background
[float or Parameter] Constant background
back_absorption
[float or Parameter] Absorption through the substrate relative to beam intensity. A value of 1.0
means complete transmission; a value of 0.0 means complete absorption.
theta_offset
[float or Parameter] Offset of the sample from perfect alignment
sample_broadening
[float or Parameter] Additional FWHM angular divergence from sample curvature. Scale 1-σ
rms by 2√(2 ln 2) ≈ 2.35 to convert to FWHM.
back_reflectivity
[True or False] True if the beam enters through the substrate
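The sample_broadening entry above scales a 1-σ rms width by 2√(2 ln 2) to obtain FWHM; the factor can be checked directly:

```python
import math

def rms_to_fwhm(sigma):
    """Convert a Gaussian 1-sigma rms width to FWHM:
    FWHM = 2*sqrt(2*ln 2) * sigma."""
    return 2.0 * math.sqrt(2.0 * math.log(2.0)) * sigma

print(round(rms_to_fwhm(1.0), 4))  # 2.3548
```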
Measurement properties are fittable parameters. theta_offset in particular should be set using
probe.theta_offset.dev(dT), with dT equal to the FWHM uncertainty in the peak position for the rocking
curve, as measured in radians. Changes to theta_offset will then be penalized in the cost function for the fit as if
it were another measurement. Use alignment_uncertainty() to compute dT from the shape of the rocking
curve.
Sample broadening adjusts the existing Q resolution rather than recalculating it. This allows the resolution to
describe more complicated effects than a simple Gaussian distribution of wavelength and angle will allow. The
calculation uses the mean wavelength, angle and angular divergence. See resolution.dQ_broadening() for
details.
intensity and back_absorption are generally not needed — scaling the reflected signal by an appropriate intensity
measurement will correct for both of these during reduction. background may be needed, particularly for samples
with significant hydrogen content due to its large isotropic incoherent scattering cross section.
View properties:
view
[string] One of ‘fresnel’, ‘logfresnel’, ‘log’, ‘linear’, ‘q4’, ‘residuals’
show_resolution
[bool] True if resolution bars should be plotted with each point.
plot_shift
[float] The number of pixels to shift each new dataset so datasets can be seen separately
residuals_shift
[float] The number of pixels to shift each new set of residuals so the residuals plots can be seen
separately.
Normally view is set directly on the class rather than the instance, since it is not specific to a particular dataset.
Fresnel and Q4 views are corrected for background and intensity; log and linear views show the uncorrected data.
The Fresnel reflectivity calculation has resolution applied.
Aguide = 270.0
property Q
Q_c(substrate=None, surface=None)
property Ro
If an amplitude signal is provided, 𝑟 will be scaled by √𝐼 + 𝑖√𝐵/|𝑟|, which when squared will equal 𝐼|𝑟|² +
𝐵. The resolution function will be applied directly to the amplitude. Unlike intensity and background, the
resulting |𝐺 ∗ 𝑟|² ≠ 𝐺 ∗ |𝑟|² for convolution operator ∗, but it should be close.
property calc_Q
If 𝑄𝑐 is imaginary, then −|𝑄𝑐 | is used instead, so this routine can be used for reflectivity signals which
scan from back reflectivity to front reflectivity. For completeness, the angle 𝜃 = 0 is added as well.
property dQ
fresnel(substrate=None, surface=None)
Returns a Fresnel reflectivity calculator given the surface and substrate. The calculated reflectivity is
the Fresnel reflectivity for the probe reflecting from a block of material with the given substrate and
surface. Returns F = R(probe.Q), where R is the magnitude-squared reflectivity.
label(prefix=None, gloss='', suffix='')
log10_to_linear()
Convert data from log to linear.
Older reflectometry reduction code stored reflectivity in log base 10 format. Call probe.log10_to_linear()
after loading this data to convert it to linear for subsequent display and fitting.
oversample(n=20, seed=1)
Generate an over-sampling of Q to avoid aliasing effects.
Oversampling is needed for thick layers, in which the underlying reflectivity oscillates so rapidly in Q that
a single measurement has contributions from multiple Kiessig fringes.
Sampling will be done using a pseudo-random generator so that accidental structure in the function does
not contribute to the aliasing. The generator will usually be initialized with a fixed seed so that the point
selection will not change from run to run, but a seed of None will choose a different set of points each time
oversample is called.
The value n is the number of points that should contribute to each Q value when computing the resolu-
tion. These will be distributed about the nominal measurement value, but varying in both angle and energy
according to the resolution function. This will yield more points near the measurement and fewer farther
away. The measurement point itself will not be used to avoid accidental bias from uniform Q steps. De-
pending on the problem, a value of n between 20 and 100 should lead to stable values for the convolved
reflectivity.
Note: oversample() will remove the extra Q calculation points introduced by critical_edge().
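The idea can be sketched with NumPy: draw n (angle, wavelength) pairs per point from the resolution distribution and map them to Q. The helper name is illustrative, and dT, dL are treated as 1-σ widths here for simplicity:

```python
import numpy as np

def oversample_Q(T, dT, L, dL, n=20, seed=1):
    """For each measured point draw n (angle, wavelength) samples from a
    Gaussian resolution function and map them to Q = 4*pi*sin(theta)/lambda.
    A fixed seed keeps the sampled points stable from run to run."""
    rng = np.random.default_rng(seed)
    T, dT, L, dL = (np.asarray(v, dtype=float) for v in (T, dT, L, dL))
    Ts = T[:, None] + rng.standard_normal((T.size, n)) * dT[:, None]
    Ls = L[:, None] + rng.standard_normal((L.size, n)) * dL[:, None]
    return 4 * np.pi * np.sin(np.radians(Ts)) / Ls

Qcalc = oversample_Q([0.5, 1.0], [0.02, 0.02], [4.75, 4.75], [0.05, 0.05])
print(Qcalc.shape)  # (2, 20)
```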
parameters()
plot(view=None, **kwargs)
Plot theory against data.
Need substrate/surface for Fresnel-normalized reflectivity
plot_Q4(**kwargs)
Plot the Q**4 reflectivity associated with the probe.
Note that Q**4 reflectivity has the intensity and background applied so that hydrogenated samples display
more cleanly. The formula to reproduce the graph is:
plot_shift = 0
polarized = False
residuals_shift = 0
resolution_guard()
Make sure each measured 𝑄 point has at least 5 calculated 𝑄 points contributing to it in the range
[−3∆𝑄, 3∆𝑄].
Not Implemented
restore_data()
Restore the original data after resynth.
resynth_data()
Generate new data according to the model R’ ~ N(R, dR).
The resynthesis step is a precursor to refitting the data, as is required for certain types of Monte Carlo error
analysis. The first time it is run it will save the original R into Ro. If you reset R in the probe you will also
need to reset Ro so that it is used for subsequent resynth analysis.
save(filename, theory, substrate=None, surface=None)
Save the data and theory to a file.
scattering_factors(material, density)
Returns the scattering factors associated with the material given the range of wavelengths/energies used in
the probe.
show_resolution = True
simulate_data(theory, noise=2.0)
Set the data for the probe to R + eps with eps ~ N(0, dR^2).
theory is the tuple (Q, R).
If the percent noise is provided, set dR to R*noise/100 before simulating. noise defaults to 2% if no dR is
present.
Note that measured data estimates uncertainty from the number of counts. This means that points above
the true value will have larger uncertainty than points below the true value. This bias is not captured in the
simulated data.
subsample(dQ)
Select points at most every dQ.
Use this to speed up computation early in the fitting process.
This changes the data object, and is not reversible.
The current algorithm is not picking the “best” Q value, just the nearest, so if you have nearby Q points
with different quality statistics (as happens in overlapped regions from spallation source measurements at
different angles), then it may choose badly. Simple solutions based on the smallest relative error dR/R will
be biased toward peaks, and smallest absolute error dR will be biased toward valleys.
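One simple greedy interpretation of this selection, keeping a point only once Q has advanced by at least dQ (an assumption about the algorithm, which the caveat above notes simply picks the nearest point):

```python
import numpy as np

def subsample(Q, dQ):
    """Keep points spaced at least dQ apart in a sorted Q vector,
    returning the indices of the retained points."""
    idx = [0]
    for i in range(1, len(Q)):
        if Q[i] - Q[idx[-1]] >= dQ:
            idx.append(i)
    return np.array(idx)

Q = np.array([0.00, 0.01, 0.015, 0.03, 0.031, 0.05])
print(subsample(Q, 0.02))  # [0 3 5]
```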
to_dict()
Return a dictionary representation of the parameters
view = 'log'
property Q
Q_c(substrate=None, surface=None)
property Ro
If 𝑄𝑐 is imaginary, then −|𝑄𝑐 | is used instead, so this routine can be used for reflectivity signals which
scan from back reflectivity to front reflectivity. For completeness, the angle 𝜃 = 0 is added as well.
property dQ
fresnel(*args, **kw)
Returns a Fresnel reflectivity calculator given the surface and substrate. The calculated reflectivity is
the Fresnel reflectivity for the probe reflecting from a block of material with the given substrate and
surface. Returns F = R(probe.Q), where R is the magnitude-squared reflectivity.
label(prefix=None, gloss='', suffix='')
log10_to_linear()
Convert data from log to linear.
Older reflectometry reduction code stored reflectivity in log base 10 format. Call probe.log10_to_linear()
after loading this data to convert it to linear for subsequent display and fitting.
oversample(**kw)
Generate an over-sampling of Q to avoid aliasing effects.
Oversampling is needed for thick layers, in which the underlying reflectivity oscillates so rapidly in Q that
a single measurement has contributions from multiple Kiessig fringes.
Sampling will be done using a pseudo-random generator so that accidental structure in the function does
not contribute to the aliasing. The generator will usually be initialized with a fixed seed so that the point
selection will not change from run to run, but a seed of None will choose a different set of points each time
oversample is called.
The value n is the number of points that should contribute to each Q value when computing the resolu-
tion. These will be distributed about the nominal measurement value, but varying in both angle and energy
according to the resolution function. This will yield more points near the measurement and fewer farther
away. The measurement point itself will not be used to avoid accidental bias from uniform Q steps. De-
pending on the problem, a value of n between 20 and 100 should lead to stable values for the convolved
reflectivity.
Note: oversample() will remove the extra Q calculation points introduced by critical_edge().
parameters()
parts(theory)
plot(theory=None, **kw)
Plot theory against data.
Need substrate/surface for Fresnel-normalized reflectivity
plot_Q4(theory=None, **kw)
Plot the Q**4 reflectivity associated with the probe.
Note that Q**4 reflectivity has the intensity and background applied so that hydrogenated samples display
more cleanly. The formula to reproduce the graph is:
plot_fresnel(theory=None, **kw)
Plot the Fresnel-normalized reflectivity associated with the probe.
Note that the Fresnel reflectivity has the intensity and background applied before normalizing so that hy-
drogenated samples display more cleanly. The formula to reproduce the graph is:
plot_resolution(**kw)
plot_shift = 0
polarized = False
residuals_shift = 0
resolution_guard()
Make sure each measured 𝑄 point has at least 5 calculated 𝑄 points contributing to it in the range
[−3∆𝑄, 3∆𝑄].
Not Implemented
restore_data()
Restore the original data after resynth.
resynth_data()
Generate new data according to the model R’ ~ N(R, dR).
The resynthesis step is a precursor to refitting the data, as is required for certain types of Monte Carlo error
analysis. The first time it is run it will save the original R into Ro. If you reset R in the probe you will also
need to reset Ro so that it is used for subsequent resynth analysis.
save(filename, theory, substrate=None, surface=None)
Save the data and theory to a file.
scattering_factors(material, density)
Returns the scattering factors associated with the material given the range of wavelengths/energies used in
the probe.
shared_beam(intensity=1, background=0, back_absorption=1, theta_offset=0, sample_broadening=0)
Share beam parameters across all segments.
New parameters are created for intensity, background, theta_offset, sample_broadening and
back_absorption, and assigned to all segments. These can be replaced with an explicit parameter
in an individual segment if that parameter is independent.
show_resolution = True
simulate_data(theory, noise=2.0)
Set the data for the probe to R + eps with eps ~ N(0, dR^2).
theory is the tuple (Q, R).
If the percent noise is provided, set dR to R*noise/100 before simulating. noise defaults to 2% if no dR is
present.
Note that measured data estimates uncertainty from the number of counts. This means that points above
the true value will have larger uncertainty than points below the true value. This bias is not captured in the
simulated data.
stitch(same_Q=0.001, same_dQ=0.001)
Stitch together multiple datasets into a single dataset.
Points within tol of each other and with the same resolution are combined by interpolating them to a com-
mon 𝑄 value then averaged using Gaussian error propagation.
Returns
probe | Probe Combined data set.
Algorithm
To interpolate a set of points to a common value, first find the common 𝑄 value:

    Q̂ = Σₖ Qₖ / n

Then for each dataset 𝑘, find the interval [𝑖, 𝑖 + 1] containing Q̂, and use it to compute the interpolated
value for 𝑅:

    w = (Q̂ − Qᵢ) / (Qᵢ₊₁ − Qᵢ)
    R̂ = w Rᵢ₊₁ + (1 − w) Rᵢ
    σ̂_R = √( w² σ²_{Rᵢ₊₁} + (1 − w)² σ²_{Rᵢ} ) / n
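The per-dataset interpolation step can be sketched as follows (a simplified, hypothetical helper, not the library's stitch() itself; the averaging over datasets is omitted):

```python
import numpy as np

def interpolate_point(Qhat, Q, R, dR):
    """Linearly interpolate one dataset (Q, R, dR) to the common point
    Qhat, propagating the uncertainty through the interpolation weights."""
    i = np.clip(np.searchsorted(Q, Qhat) - 1, 0, len(Q) - 2)
    w = (Qhat - Q[i]) / (Q[i + 1] - Q[i])
    Rhat = w * R[i + 1] + (1 - w) * R[i]
    dRhat = np.sqrt(w**2 * dR[i + 1]**2 + (1 - w)**2 * dR[i]**2)
    return Rhat, dRhat

Q = np.array([0.1, 0.2, 0.3])
R = np.array([1.0, 0.5, 0.25])
dR = np.array([0.01, 0.01, 0.01])
Rhat, dRhat = interpolate_point(0.25, Q, R, dR)
print(round(float(Rhat), 6))  # 0.375
```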
subsample(dQ)
Select points at most every dQ.
Use this to speed up computation early in the fitting process.
This changes the data object, and is not reversible.
The current algorithm is not picking the “best” Q value, just the nearest, so if you have nearby Q points
with different quality statistics (as happens in overlapped regions from spallation source measurements at
different angles), then it may choose badly. Simple solutions based on the smallest relative error dR/R will
be biased toward peaks, and smallest absolute error dR will be biased toward valleys.
to_dict()
Return a dictionary representation of the parameters
property unique_L
view = 'log'
property Q
Q_c(substrate=None, surface=None)
property Ro
fresnel(substrate=None, surface=None)
Returns a Fresnel reflectivity calculator given the surface and substrate. The calculated reflectivity is
the Fresnel reflectivity for the probe reflecting from a block of material with the given substrate and
surface. Returns F = R(probe.Q), where R is the magnitude-squared reflectivity.
label(prefix=None, gloss='', suffix='')
log10_to_linear()
Convert data from log to linear.
Older reflectometry reduction code stored reflectivity in log base 10 format. Call probe.log10_to_linear()
after loading this data to convert it to linear for subsequent display and fitting.
oversample(n=20, seed=1)
Generate an over-sampling of Q to avoid aliasing effects.
Oversampling is needed for thick layers, in which the underlying reflectivity oscillates so rapidly in Q that
a single measurement has contributions from multiple Kiessig fringes.
Sampling will be done using a pseudo-random generator so that accidental structure in the function does
not contribute to the aliasing. The generator will usually be initialized with a fixed seed so that the point
selection will not change from run to run, but a seed of None will choose a different set of points each time
oversample is called.
The value n is the number of points that should contribute to each Q value when computing the resolu-
tion. These will be distributed about the nominal measurement value, but varying in both angle and energy
according to the resolution function. This will yield more points near the measurement and fewer farther
away. The measurement point itself will not be used to avoid accidental bias from uniform Q steps. De-
pending on the problem, a value of n between 20 and 100 should lead to stable values for the convolved
reflectivity.
Note: oversample() will remove the extra Q calculation points introduced by critical_edge().
parameters()
plot(view=None, **kwargs)
Plot theory against data.
Need substrate/surface for Fresnel-normalized reflectivity
plot_Q4(**kwargs)
Plot the Q**4 reflectivity associated with the probe.
Note that Q**4 reflectivity has the intensity and background applied so that hydrogenated samples display
more cleanly. The formula to reproduce the graph is:
plot_shift = 0
polarized = False
residuals_shift = 0
resolution_guard()
Make sure each measured 𝑄 point has at least 5 calculated 𝑄 points contributing to it in the range
[−3∆𝑄, 3∆𝑄].
Not Implemented
restore_data()
Restore the original data after resynth.
resynth_data()
Generate new data according to the model R’ ~ N(R, dR).
The resynthesis step is a precursor to refitting the data, as is required for certain types of Monte Carlo error
analysis. The first time it is run it will save the original R into Ro. If you reset R in the probe you will also
need to reset Ro so that it is used for subsequent resynth analysis.
save(filename, theory, substrate=None, surface=None)
Save the data and theory to a file.
scattering_factors(material, density)
Returns the scattering factors associated with the material given the range of wavelengths/energies used in
the probe.
show_resolution = True
simulate_data(theory, noise=2.0)
Set the data for the probe to R + eps with eps ~ N(0, dR^2).
theory is the tuple (Q, R).
If the percent noise is provided, set dR to R*noise/100 before simulating. noise defaults to 2% if no dR is
present.
Note that measured data estimates uncertainty from the number of counts. This means that points above
the true value will have larger uncertainty than points below the true value. This bias is not captured in the
simulated data.
subsample(dQ)
Select points at most every dQ.
Use this to speed up computation early in the fitting process.
This changes the data object, and is not reversible.
The current algorithm is not picking the “best” Q value, just the nearest, so if you have nearby Q points
with different quality statistics (as happens in overlapped regions from spallation source measurements at
different angles), then it may choose badly. Simple solutions based on the smallest relative error dR/R will
be biased toward peaks, and smallest absolute error dR will be biased toward valleys.
to_dict()
Return a dictionary representation of the parameters
view = 'log'
Bases: Probe
X-Ray probe.
By providing a scattering factor calculator for X-ray scattering, model components can be defined by mass density
and chemical composition.
Aguide = 270.0
property Q
Q_c(substrate=None, surface=None)
property Ro
The 𝑛 points 𝑄𝑖 are evenly distributed around the critical edge in 𝑄𝑐 ± 𝛿𝑄𝑐 by varying angle 𝜃 for a fixed
wavelength < 𝜆 >, the average of all wavelengths in the probe.
Specifically:
If 𝑄𝑐 is imaginary, then −|𝑄𝑐 | is used instead, so this routine can be used for reflectivity signals which
scan from back reflectivity to front reflectivity. For completeness, the angle 𝜃 = 0 is added as well.
property dQ
fresnel(substrate=None, surface=None)
Returns a Fresnel reflectivity calculator given the surface and substrate. The calculated reflectivity is
the Fresnel reflectivity for the probe reflecting from a block of material with the given substrate and
surface. Returns F = R(probe.Q), where R is the magnitude-squared reflectivity.
label(prefix=None, gloss='', suffix='')
log10_to_linear()
Convert data from log to linear.
Older reflectometry reduction code stored reflectivity in log base 10 format. Call probe.log10_to_linear()
after loading this data to convert it to linear for subsequent display and fitting.
oversample(n=20, seed=1)
Generate an over-sampling of Q to avoid aliasing effects.
Oversampling is needed for thick layers, in which the underlying reflectivity oscillates so rapidly in Q that
a single measurement has contributions from multiple Kiessig fringes.
Sampling will be done using a pseudo-random generator so that accidental structure in the function does
not contribute to the aliasing. The generator will usually be initialized with a fixed seed so that the point
selection will not change from run to run, but a seed of None will choose a different set of points each time
oversample is called.
The value n is the number of points that should contribute to each Q value when computing the resolu-
tion. These will be distributed about the nominal measurement value, but varying in both angle and energy
according to the resolution function. This will yield more points near the measurement and fewer farther
away. The measurement point itself will not be used to avoid accidental bias from uniform Q steps. De-
pending on the problem, a value of n between 20 and 100 should lead to stable values for the convolved
reflectivity.
Note: oversample() will remove the extra Q calculation points introduced by critical_edge().
parameters()
plot(view=None, **kwargs)
Plot theory against data.
Need substrate/surface for Fresnel-normalized reflectivity
plot_Q4(**kwargs)
Plot the Q**4 reflectivity associated with the probe.
Note that Q**4 reflectivity has the intensity and background applied so that hydrogenated samples display
more cleanly. The formula to reproduce the graph is:
plot_shift = 0
polarized = False
radiation = 'xray'
residuals_shift = 0
resolution_guard()
Make sure each measured 𝑄 point has at least 5 calculated 𝑄 points contributing to it in the range
[−3∆𝑄, 3∆𝑄].
Not Implemented
restore_data()
Restore the original data after resynth.
resynth_data()
Generate new data according to the model R’ ~ N(R, dR).
The resynthesis step is a precursor to refitting the data, as is required for certain types of Monte Carlo error
analysis. The first time it is run it will save the original R into Ro. If you reset R in the probe you will also
need to reset Ro so that it is used for subsequent resynth analysis.
save(filename, theory, substrate=None, surface=None)
Save the data and theory to a file.
scattering_factors(material, density)
Returns the scattering factors associated with the material given the range of wavelengths/energies used in
the probe.
show_resolution = True
simulate_data(theory, noise=2.0)
Set the data for the probe to R + eps with eps ~ N(0, dR^2).
theory is the tuple (Q, R).
If the percent noise is provided, set dR to R*noise/100 before simulating. noise defaults to 2% if no dR is
present.
Note that measured data estimates uncertainty from the number of counts. This means that points above
the true value will have larger uncertainty than points below the true value. This bias is not captured in the
simulated data.
subsample(dQ)
Select points at most every dQ.
Use this to speed up computation early in the fitting process.
This changes the data object, and is not reversible.
The current algorithm is not picking the “best” Q value, just the nearest, so if you have nearby Q points
with different quality statistics (as happens in overlapped regions from spallation source measurements at
different angles), then it may choose badly. Simple solutions based on the smallest relative error dR/R will
be biased toward peaks, and smallest absolute error dR will be biased toward valleys.
to_dict()
Return a dictionary representation of the parameters
view = 'log'
back_reflectivity is True if reflectivity was measured through the substrate. This allows you to arrange the model
from substrate to surface regardless of whether you are measuring through the substrate or reflecting off the
surface.
theta_offset indicates sample alignment. In order to use theta offset you need to be able to convert from Q to
wavelength and angle by providing values for the wavelength or the angle, and the associated resolution.
L, dL in Angstroms can be used to recover angle and angular resolution for monochromatic sources where wave-
length is fixed and angle is varying. These values can also be stored in the file header as:
T, dT in degrees can be used to recover wavelength and wavelength dispersion for time of flight sources where
angle is fixed and wavelength is varying, or you can store them in the header of the file:
# angle: 2 # degrees
# angular_resolution: 0.2 # degrees (1-sigma)
If both angle and wavelength are varying in the data, you can specify a separate value for each point, such as
the following:
dR can be used to replace the uncertainty estimate for R in the file with ∆𝑅 = 𝑅 · dR. This allows files with
only two columns, Q and R, to be loaded. Note that points with dR = 0 are automatically set to the minimum
dR > 0 in the dataset.
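A stand-alone sketch of that uncertainty rule (the helper name apply_dR is illustrative):

```python
import numpy as np

def apply_dR(R, dR_frac):
    """Replace file uncertainties with dR = R * dR_frac, then set any
    zero uncertainties to the smallest nonzero dR in the dataset."""
    dR = np.asarray(R, dtype=float) * dR_frac
    nonzero = dR[dR > 0]
    if nonzero.size:
        dR[dR == 0] = nonzero.min()
    return dR

print(apply_dR([1.0, 0.5, 0.0], 0.02).tolist())  # [0.02, 0.01, 0.01]
```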
Instead of constants, you can provide functions, dT = lambda T: f(T), dL = lambda L: f(L) or dR = lambda Q, R,
dR: f(Q, R, dR), for more complex relationships (with dR() returning 1-𝜎 ∆𝑅).
sample_broadening in degrees FWHM adds to the angular_resolution. Scale 1-σ rms by 2√(2 ln 2) ≈ 2.35 to
convert to FWHM.
Aguide and H are parameters for polarized beam measurements indicating the magnitude and direction of the
applied field.
Polarized data is represented using a multi-section data file, with blank lines separating each section. Each section
must have a polarization keyword, with value "++", "+-", "-+" or "--".
FWHM is True if dQ, dT, dL are given as FWHM rather than 1-𝜎. dR is always 1-𝜎. sample_broadening is
always FWHM.
radiation is ‘xray’ or ‘neutron’, depending on whether the X-ray or neutron scattering length density calculator
should be used for determining the scattering length density of a material. Default is ‘neutron’.
columns is a string giving the column order in the file. Default order is “Q R dR dQ”. Note: include dR and dQ
even if the file only has two or three columns, but put the missing columns at the end.
data_range indicates which data rows to use. Arguments are the same as the list slice arguments, (start, stop,
step). This follows the usual semantics of list slicing, L[start:stop:step], with 0-origin indices, stop is last plus
one and step optional. Use negative numbers to count from the end. Default is (None, None) for the entire data
set.
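Since data_range follows Python's list-slice semantics, the behaviour can be checked on an ordinary list:

```python
rows = list(range(10))   # stand-in for 10 data rows

print(rows[2:8])         # start=2, stop=8 -> rows 2..7: [2, 3, 4, 5, 6, 7]
print(rows[::2])         # every second row: [0, 2, 4, 6, 8]
print(rows[-3:])         # last three rows: [7, 8, 9]
```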
resolution is ‘normal’ (default) or ‘uniform’. Use uniform if you are merging Q points from a finely stepped
energy sensitive measurement.
oversampling is None or a positive integer indicating how many points to add between data points to support
sparse data with denser theory (for PolarizedNeutronProbe).
refl1d.probe.make_probe(**kw)
Return a reflectometry measurement object of the given resolution.
refl1d.probe.measurement_union(xs)
Determine the unique (T, dT, L, dL) across all datasets.
refl1d.probe.spin_asymmetry(Qp, Rp, dRp, Qm, Rm, dRm)
Compute spin asymmetry for R++, R--.
Parameters:
Qp, Rp, dRp
[vector] Measured ++ cross section and uncertainty.
Qm, Rm, dRm
[vector] Measured -- cross section and uncertainty.
If dRp, dRm are None then the returned uncertainty will also be None.
Returns:
Q, SA, dSA
[vector] Computed spin asymmetry and uncertainty.
Algorithm:
Spin asymmetry, S_A, is:

    S_A = (R++ − R−−) / (R++ + R−−)
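A stand-alone sketch of this computation (assuming both cross sections are already on a common Q grid, and using standard first-order error propagation for the uncertainty, which may differ in detail from the library's):

```python
import numpy as np

def spin_asymmetry(Rp, dRp, Rm, dRm):
    """SA = (R++ - R--)/(R++ + R--) with propagated uncertainty."""
    SA = (Rp - Rm) / (Rp + Rm)
    # dSA/dRp = 2*Rm/(Rp+Rm)^2,  dSA/dRm = -2*Rp/(Rp+Rm)^2
    denom = (Rp + Rm)**2
    dSA = np.sqrt((2 * Rm / denom * dRp)**2 + (2 * Rp / denom * dRm)**2)
    return SA, dSA

SA, dSA = spin_asymmetry(np.array([0.8]), np.array([0.01]),
                         np.array([0.2]), np.array([0.01]))
print(round(float(SA[0]), 6))  # 0.6
```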
4.22.1 Example
This example sets up a model which uses tanh to transition from silicon to gold in 20 Å with 2 Å steps.
First define the profile, and put in the substrate:
Next add the interface. This uses microslabs() to select the points at which the interface is evaluated, much like
you would do when defining your own special layer type. Note that the points Pz are in the center of the micro slabs.
The width of the final slab may be different. You do not need to use fixed width microslabs if you can more efficiently
represent the profile with a smaller number of variable width slabs, but contract_profile() serves the same purpose
with less work on your part.
Finally, add the incident medium and see the results. Note that rho is a matrix, with one column for each incident
energy. We are only using one energy so we only show the first column.
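The tanh transition in this example can be sketched directly. The nominal SLD values for Si and Au (≈2.07 and ≈4.5 in units of 10⁻⁶ Å⁻²) and the simple width scaling of the tanh argument are assumptions for illustration, not the library's exact blend function:

```python
import numpy as np

# Microslab centers for a 20 A interface rendered with 2 A steps.
z = np.arange(1.0, 20.0, 2.0)          # slab centers: 1, 3, ..., 19
rho_si, rho_au = 2.07, 4.5             # nominal SLDs (1e-6/A^2), assumed

# tanh profile from silicon (z=0) to gold (z=20), centered at z=10.
w = 20.0
rho = rho_si + (rho_au - rho_si) * 0.5 * (1 + np.tanh(4 * (z - w / 2) / w))
print(rho.shape)   # one SLD value per microslab center: (10,)
```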
Since irho and sigma were not specified, they will be zero.
Note that sigma is a pair (interface_below, interface_above) representing the magnetic roughness, which
may be different from the nuclear roughness at the layer boundaries.
append(w=0, sigma=0, rho=0, irho=0)
Extend the micro slab model with a single layer.
clear()
Reset the slab model so that none are present.
extend(w=0, sigma=0, rho=0, irho=0)
Extend the micro slab model with the given layers.
finalize(step_interfaces, dA)
Rendering complete.
Call this method after the microslab model has been constructed, so any post-rendering processes can be
completed.
In addition to clearing any width from the substrate and the surface surround, this will align magnetic
and nuclear slabs, convert interfaces to step interfaces if desired, and merge slabs with similar scattering
potentials to reduce computation time.
step_interfaces is True if interfaces should be rendered using slabs.
dA is the tolerance to use when deciding if similar layers can be merged.
property irho
Absorption (10^-6 number density)
property ismagnetic
True if there are magnetic materials in any slab
limited_sigma(limit=0)
Limit the roughness by some fraction of layer thickness.
This function should be called before finalize(), but after all slabs have been added to the profile.
limit is the number of times sigma has to fit in the layers on either side of the interface. The returned
sigma is truncated to min(wlow, whigh)/limit where wlow is the thickness of the layer below the interface,
and whigh is the thickness above the interface. A limit value of 0 returns the original sigma. Although a
Gaussian interface extends to infinity, in practice setting a limit of 3 allows the layer to reach its bulk value,
with no cross talk between the interfaces. For very large roughnesses, the blending algorithm allows the sld
beyond the interface to bleed through the entire layer and into the next. In this case the roughness should
be the same on both sides of the layer to avoid artifacts at the interface.
Magnetic roughness is ignored for now.
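As a sketch, the truncation rule for a single interface looks like the following (limit_sigma is a hypothetical helper, not part of the refl1d API, which operates on the whole slab array at once):

```python
def limit_sigma(sigma, w_below, w_above, limit=3):
    """Truncate interface roughness to min(w_below, w_above)/limit.

    A limit of 0 leaves sigma unchanged, matching the documented behavior.
    """
    if limit == 0:
        return sigma
    return min(sigma, min(w_below, w_above) / limit)

# An interface between a 30 A and a 90 A layer with limit=3
# cannot be rougher than 30/3 = 10 A.
print(limit_sigma(25.0, 30.0, 90.0, limit=3))  # 10.0
print(limit_sigma(5.0, 30.0, 90.0, limit=3))   # 5.0 (already within the limit)
```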
magnetic_smooth_profile(dz=0.1)
Return a profile representation of the magnetic microslab structure.
magnetic_step_profile()
Return a step profile representation of the microslab structure.
Nevot-Croce interfaces are not represented.
microslabs(thickness=0)
Return a set of microslabs for a layer of the given thickness.
The step size slabs.dz was defined when the Microslabs object was created.
This is a convenience function. Layer definitions can choose their own slices so long as the step size is
approximately slabs.dz in the varying region.
Parameters
thickness
[float | A] Layer thickness
Returns
widths: vector | A
Microslab widths
centers: vector | A
Microslab centers
repeat(start=0, count=1, interface=0)
Extend the model so that there are count versions of the slabs from start to the final slab.
This is equivalent to L.extend(L[start:]*(count-1)) for list L.
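The list analogy can be checked directly in plain Python; this mimics only the indexing, not the Microslabs bookkeeping:

```python
# The slabs as a plain list: substrate plus three layers.
L = ["substrate", "A", "B", "C"]
start, count = 1, 3

# repeat(start, count) leaves `count` copies of L[start:] in total:
# the original slice plus (count - 1) appended copies.
L2 = list(L)
L2.extend(L2[start:] * (count - 1))

print(L2)  # ['substrate', 'A', 'B', 'C', 'A', 'B', 'C', 'A', 'B', 'C']
```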
property rho
Scattering length density (10^-6 number density)
property sigma
rms roughness (A)
smooth_profile(dz=0.1)
Return a smooth profile representation of the microslab structure.
Nevot-Croce roughness is approximately represented, though the calculation is incorrect for layers with
large roughness compared to the thickness.
The returned profile has uniform step size dz.
step_profile()
Return a step profile representation of the microslab structure.
Nevot-Croce interfaces are not represented.
property surface_sigma
roughness for the current top layer, or nan if substrate
thickness()
Total thickness of the profile.
Note that thickness includes the thickness of the substrate and surface layers. Normally these will be zero,
but the contract profile operation may result in large values for either.
property w
Thickness (A)
refl1d.profile.blend(z, sigma, offset)
blend function
Given a Gaussian roughness value, compute the portion of the neighboring profile you expect to find in the
current profile at depth z.
refl1d.profile.build_profile(z, offset, roughness, value)
Convert a step profile to a smooth profile.
z calculation points
offset offset for each interface
roughness roughness of each interface
value target value for each slab
max_rough limit the roughness to a fraction of the layer thickness
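A sketch of the idea, assuming the usual error-function blending (the real routine also handles zero-roughness interfaces and the max_rough limit):

```python
import numpy as np
from math import erf, sqrt

def blend(z, sigma, offset):
    # Fraction of the next slab present at depth z for a Gaussian
    # interface of rms width sigma centered at offset.
    return 0.5 * (1 + erf((z - offset) / (sigma * sqrt(2))))

def build_profile(z, offset, roughness, value):
    # Start from the first slab value and add each interface step,
    # weighted by how far the interface has "turned on" at each z.
    result = np.full_like(z, value[0], dtype=float)
    for o, s, lo, hi in zip(offset, roughness, value[:-1], value[1:]):
        result += (hi - lo) * np.array([blend(zi, s, o) for zi in z])
    return result

# Air (0.0), a 100 A film (4.0), substrate (2.0), 5 A interfaces at 0 and 100.
z = np.linspace(-50, 150, 5)
rho = build_profile(z, offset=[0.0, 100.0], roughness=[5.0, 5.0],
                    value=[0.0, 4.0, 2.0])
print(rho)  # approximately [0, 2, 4, 3, 2]
```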
refl1d.profile.compute_limited_sigma(thickness, roughness, limit)
reflectivity_amplitude
convolve_uniform
convolve_sampled
align_magnetic
contract_by_area
contract_mag
rebin_counts
rebin_counts_2D
theta_m (degrees)
Angle of the magnetism within the layer.
sigma (Å)
Interface roughness between the current layer and the next. The final layer is ignored. This may be a scalar
for fixed roughness on every layer, or None if there is no roughness.
wavelength (Å)
Incident wavelength (only affects absorption). May be a vector. Defaults to 1.
Aguide (degrees)
Angle of the guide field; -90 is the usual case
This function does not compute any instrument resolution corrections. Interface diffusion, if present, uses the
Nevot-Croce approximation.
Use magnetic_amplitude to return the complex waveform.
refl1d.reflectivity.reflectivity(*args, **kw)
Calculate reflectivity |𝑟(𝑘𝑧)|² from slab model.
Parameters
depth
[float[N] | Å] Thickness of the individual layers (incident and substrate depths are ignored)
sigma
[float OR float[N-1] | Å] Interface roughness between the current layer and the next. The final layer is
ignored. This may be a scalar for fixed roughness on every layer, or None if there is no roughness.
rho, irho
[float[N] OR float[N, K] | 10-6 Å-2 ] Real and imaginary scattering length density. Use multiple columns
when you have kz-dependent scattering length densities, and set rho_index to select the appropriate
one. Data should be stored in column order.
kz
[float[M] | Å-1 ] Points at which to evaluate the reflectivity
rho_index
[integer[M]] rho and irho columns to use for the various kz.
Returns
R | float[M]
Reflectivity magnitude.
Returns
r | complex[M]
Complex reflectivity waveform.
FWHM2sigma
slit_widths Compute the slit widths for the standard scanning reflectometer fixed-opening-fixed geometry.
Resolution calculations
refl1d.resolution.FWHM2sigma(s)
refl1d.resolution.QL2T(Q=None, L=None)
Compute angle from 𝑄 and wavelength.
𝜃 = sin⁻¹(|𝑄|𝜆/4𝜋)
Returns 𝜃 in degrees.
refl1d.resolution.QT2L(Q=None, T=None)
Compute wavelength from 𝑄 and angle.
𝜆 = 4𝜋 sin(𝜃)/𝑄
Returns 𝜆 in Å.
refl1d.resolution.TL2Q(T=None, L=None)
Compute 𝑄 from angle and wavelength.
𝑄 = 4𝜋 sin(𝜃)/𝜆
Returns 𝑄 in Å⁻¹.
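Since all three functions are rearrangements of 𝑄 = 4𝜋 sin(𝜃)/𝜆, they round-trip; a quick standalone check:

```python
from math import asin, sin, radians, degrees, pi

def TL2Q(T, L):
    return 4 * pi * sin(radians(T)) / L          # Q from angle and wavelength

def QL2T(Q, L):
    return degrees(asin(abs(Q) * L / (4 * pi)))  # angle from Q and wavelength

def QT2L(Q, T):
    return 4 * pi * sin(radians(T)) / Q          # wavelength from Q and angle

T, L = 2.5, 4.75                # degrees, Angstroms
Q = TL2Q(T, L)
print(round(QL2T(Q, L), 9))     # 2.5
print(round(QT2L(Q, T), 9))     # 4.75
```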
refl1d.resolution.TOF2L(d_moderator, TOF)
Convert neutron time-of-flight to wavelength.
𝜆 = (𝑡/𝑑)(ℎ/𝑛𝑚 )
where:
𝜆 is wavelength in Å
𝑡 is time-of-flight in µs
ℎ is Planck’s constant in erg seconds
𝑛𝑚 is the neutron mass in g
refl1d.resolution.binedges(L)
Construct bin edges E from bin centers L.
Assuming fixed 𝜔 = ∆𝜆/𝜆 in the bins, the edges will be spaced logarithmically at:
𝐸0 = min 𝜆
𝐸𝑖+1 = 𝐸𝑖 + 𝜔𝐸𝑖 = 𝐸𝑖 (1 + 𝜔)
with centers 𝐿 halfway between the edges.
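A sketch of the edge construction under this assumption (binedges here is a standalone re-derivation, not the refl1d implementation):

```python
import numpy as np

def binedges(L):
    """Bin edges for centers L with constant fractional width dL/L."""
    L = np.asarray(L, dtype=float)
    omega = L[1] / L[0] - 1                 # constant omega from the first pair
    E0 = L[0] / (1 + omega / 2)             # each center sits halfway between edges
    return E0 * (1 + omega) ** np.arange(len(L) + 1)

centers = 2.0 * 1.02 ** np.arange(5)        # 2% geometric spacing
edges = binedges(centers)
print(np.allclose((edges[:-1] + edges[1:]) / 2, centers))  # True
```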
∆𝑄² = (𝜕𝑄/𝜕𝜆)²∆𝜆² + (𝜕𝑄/𝜕𝜃)²∆𝜃²
∆𝜃𝑑 = (𝑠1 + 𝑠2) / (2(𝑑1 − 𝑑2))   if 𝑝 ≥ 𝑠2
∆𝜃𝑑 = (𝑠1 + 𝑝) / (2𝑑1)   if 𝑝 < 𝑠2
In addition to the slit divergence, we need to add in any sample broadening ∆𝜃𝑠 returning the total divergence in
degrees:
∆𝜃 = (180/𝜋)∆𝜃𝑑 + ∆𝜃𝑠
Reversing this equation, the sample broadening contribution can be measured from the full width at half maxi-
mum of the rocking curve, 𝐵, measured in degrees at a particular angle and slit opening:
∆𝜃𝑠 = 𝐵 − (180/𝜋)∆𝜃𝑑
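A minimal sketch of the two-slit divergence calculation described above (names and the example geometry are hypothetical; refl1d.resolution provides the real version):

```python
from math import pi

def divergence(s1, s2, d1, d2, sample_projection, sample_broadening=0.0):
    """Angular divergence in degrees for a two-slit collimator.

    s1, s2 are slit openings at distances d1 > d2 from the sample;
    sample_projection p is the projection of the sample onto the beam.
    """
    p = sample_projection
    if p >= s2:
        # Both slits limit the beam.
        dtheta_d = 0.5 * (s1 + s2) / (d1 - d2)
    else:
        # The sample projection is smaller than the second slit and
        # acts as the limiting aperture instead.
        dtheta_d = 0.5 * (s1 + p) / d1
    return (180 / pi) * dtheta_d + sample_broadening

# 0.4 mm slits at 2000 mm and 200 mm with a wide sample.
print(divergence(0.4, 0.4, 2000.0, 200.0, sample_projection=5.0))
```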
refl1d.resolution.sigma2FWHM(s)
Tlo, Thi
[float | degrees] Start and end of the opening region. The default if Tlo is not specified is to
use fixed slits at slits_below for all angles.
slits_below, slits_above
[float OR [float, float] | mm] Slits outside opening region. The default is to use the values of
the slits at the ends of the opening region.
slits_at_Tlo
[float OR [float, float] | mm] Slits at the start of the opening region.
Returns
s1, s2
[[float] | mm] Slit widths for each theta.
Slits are assumed to be fixed below angle Tlo and above angle Thi, and opening at a constant dT/T between them.
Slit openings are defined by a tuple (s1, s2) or constant s=s1=s2. With no Tlo, the slits are fixed with widths
defined by slits_below, which defaults to slits_at_Tlo. With no Thi, slits are continuously opening above Tlo.
Note: This function works equally well if angles are measured in radians and/or slits are measured in inches.
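The fixed-below/opening/fixed-above pattern can be sketched for scalar slits as follows (slit_width is a hypothetical helper; the real slit_widths also accepts (s1, s2) tuples, vector angles, and the documented defaults):

```python
def slit_width(T, Tlo, Thi, s_at_Tlo, s_below=None, s_above=None):
    """Slit width at angle T: fixed below Tlo, opening at constant dT/T
    over [Tlo, Thi], and fixed above Thi."""
    if s_below is None:
        s_below = s_at_Tlo
    if s_above is None:
        s_above = s_at_Tlo * Thi / Tlo   # value at the end of the opening
    if T < Tlo:
        return s_below
    if T <= Thi:
        return s_at_Tlo * T / Tlo        # constant dT/T => s proportional to T
    return s_above

# Opening from 0.2 mm at 0.5 deg up to 2.0 deg, fixed outside that range.
print(slit_width(0.3, 0.5, 2.0, 0.2))  # 0.2 (fixed below Tlo)
print(slit_width(1.0, 0.5, 2.0, 0.2))  # 0.4 (opening region)
print(slit_width(3.0, 0.5, 2.0, 0.2))  # 0.8 (fixed above Thi)
```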
Liquids Loader for reduced data from the SNS Liquids instrument.
Magnetic Loader for reduced data from the SNS Magnetic instrument.
QRL_to_data Convert data to T, L, R
SNSData
intensity_from_spline
Liquids, Magnetic
These are resolution.Pulsed classes tuned with default instrument parameters and loaders for reduced SNS data.
See resolution for details.
class refl1d.snsdata.Liquids(**kw)
Bases: SNSData, Pulsed
Loader for reduced data from the SNS Liquids instrument.
T = None
Thi = 90
Tlo = 90
calc_slits(**kw)
Determines slit openings from measurement pattern.
If slits are fixed simply return the same slits for every angle, otherwise use an opening range [Tlo, Thi] and
the value of the slits at the start of the opening to define the slits. Slits below Tlo and above Thi can be
specified separately.
T incident angle
Tlo, Thi angle range over which slits are opening
slits_at_Tlo openings at the start of the range, or fixed opening
slits_below, slits_above openings below and above the range
Use fixed_slits if available, otherwise use opening slits.
dLoL = 0.02
d_moderator = 14.85
d_s1 = 2086.0
d_s2 = 230.0
classmethod defaults()
Return default instrument properties as a printable string.
feather = array([[ 2.02555 , 2.29927 , 2.57299 , 2.87409 , 3.22993 , 3.58577 ,
4.07847 , 4.5438 , 5.11861 , 5.7208 , 6.37774 , 7.19891 , 8.04745 , 9.06022 ,
10.1825 , 11.4142 , 12.8102 , 14.3431 ], [20.6369 , 23.6943 , 23.6943 , 21.1146 ,
15.5732 , 12.8981 , 9.4586 , 6.59236 , 4.68153 , 3.05732 , 1.91083 , 1.24204 ,
0.955414, 0.573248, 0.477707, 0.382166, 0.191083, 0.286624]])
fixed_slits = None
instrument = 'Liquids'
load(filename, **kw)
probe(**kw)
Simulate a measurement probe.
Returns a probe with Q, angle, wavelength and the associated uncertainties, but not any data.
You can override instrument parameters using key=value. In particular, slit settings slits and T define the
angular divergence and dLoL defines the wavelength resolution.
radiation = 'neutron'
sample_width = 10000000000.0
slits_above = None
slits_at_Tlo = None
slits_below = None
class refl1d.snsdata.Magnetic(**kw)
Bases: SNSData, Pulsed
Loader for reduced data from the SNS Magnetic instrument.
T = None
Thi = 90
Tlo = 90
calc_slits(**kw)
Determines slit openings from measurement pattern.
If slits are fixed simply return the same slits for every angle, otherwise use an opening range [Tlo, Thi] and
the value of the slits at the start of the opening to define the slits. Slits below Tlo and above Thi can be
specified separately.
T incident angle
Tlo, Thi angle range over which slits are opening
slits_at_Tlo openings at the start of the range, or fixed opening
slits_below, slits_above openings below and above the range
Use fixed_slits if available, otherwise use opening slits.
dLoL = 0.02
d_s1 = 190.5
d_s2 = 35.56
classmethod defaults()
Return default instrument properties as a printable string.
fixed_slits = None
instrument = 'Magnetic'
load(filename, **kw)
sample_width = 10000000000.0
slits_above = None
slits_at_Tlo = None
slits_below = None
load(filename, **kw)
data_file
base name of the data file, or None if this is simulation only
active_xsec
active cross sections (usually ‘abcd’ for all cross sections)
Qmin, Qmax, num_Q
for simulation, Q sample points
Resolution is defined by wavelength and by incident angle:
wavelength, wavelength_dispersion, angular_divergence
resolution is calculated as ∆𝑄/𝑄 = ∆𝜆/𝜆 + ∆𝜃/𝜃
Additional beam parameters correct for intensity, background and possibly guide field angle:
intensity, background
incident beam intensity and sample background
guide_angle
angle of the guide field
Unlike pure structural models, magnetic models are in one large section with no repeats. The single parameter is
the number of layers, which is implicit in the length of the layer data and does not need to be an explicit attribute.
Interfaces are split into discrete steps according to a profile, either error function or hyperbolic tangent. For
sharp interfaces which do not overlap within a layer, the interface is broken into a fixed number of slabs with
slabs having different widths, but equal changes in height. For broad interfaces, the whole layer is split into the
same fixed number of slabs, but with each slab having the same width. The following attributes are used:
roughness_steps
number of roughness steps (13 is coarse; 51 is fine)
roughness_profile
roughness profile is either ‘E’ for error function or ‘H’ for tanh
Layers have thickness, interface roughness and real and imaginary scattering length density (SLD). Roughness is
stored in the file using full width at half maximum (FWHM) for the given profile type. For convenience, rough-
ness can also be set or queried using a 1-𝜎 equivalent roughness on an error function profile. Regardless, layer
parameters are represented as vectors with one entry for each top, middle and bottom layer using the following
attributes:
thickness, roughness
[float | Å] layer thickness and FWHM roughness
rho, irho
[float, float | 16𝜋𝜌, 2𝜆𝜌𝑖 ] complex scattering length density
mthickness, mroughness
[float | Å] magnetic thickness and roughness
mrho
[float | 16𝜋𝜌𝑀 ] magnetic scattering length density
mtheta
[float | ∘ ] magnetic angle
sigma_roughness, sigma_mroughness
[float | Å] computed 1-𝜎 equivalent roughness for erf profile
The conversion from stored 16𝜋𝜌, 2𝜆𝜌𝑖 to in-memory 10⁶𝜌, 10⁶𝜌𝑖 happens automatically on read/write.
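Written out, the conversion is simple arithmetic (a sketch of the unit handling only; the wavelength used for irho is an assumption based on the stored 2𝜆𝜌𝑖 form):

```python
from math import pi

def staj_to_memory(stored_rho, stored_irho, wavelength):
    """Stored (16*pi*rho, 2*wavelength*irho) -> in-memory (1e6*rho, 1e6*irho)."""
    return stored_rho * 1e6 / (16 * pi), stored_irho * 1e6 / (2 * wavelength)

def memory_to_staj(rho, irho, wavelength):
    """Inverse conversion, as used when writing a staj file."""
    return rho * 1e-6 * 16 * pi, irho * 1e-6 * 2 * wavelength

# Round trip a nickel-like SLD (rho = 9.4e-6, irho = 1e-8 in A^-2) at 4.75 A.
stored = memory_to_staj(9.4, 0.01, 4.75)
rho, irho = staj_to_memory(*stored, 4.75)
print(round(rho, 9), round(irho, 9))  # 9.4 0.01
```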
The layers are ordered from surface to substrate.
Qmin = 0
active_xsec = 'abcd'
angular_divergence = 0.001
background = 0
constraints = ''
data_file = ''
we can use a linear system solver to find the optimal ∆𝜆 and ∆𝜃 across our dataset from the over-determined
system:
If weights are provided (e.g., ∆𝑅𝑘 /𝑅𝑘 ), then weigh each point during the fit.
Given that the experiment is often run with fixed slits at the start and end, you may choose to match the
resolution across the entire 𝑄 range, or instead restrict it to just the region where the slits are opening. You
will generally want to get the resolution correct at the critical edge since that’s where it will have the largest
effect on the fit.
Returns the object so that operations can be chained.
fitpars = []
guide_angle = 270
intensity = 1
irho = None
classmethod load(filename)
Load a staj file, returning an MlayerModel object
mrho = None
mroughness = None
mtheta = None
mthickness = None
num_Q = 200
output_file = ''
rho = None
roughness = None
roughness_profile = 'E'
roughness_steps = 13
save(filename)
Save the staj file
set(**kw)
property sigma_mroughness
property sigma_roughness
thickness = None
wavelength = 1
wavelength_dispersion = 0.01
class refl1d.staj.MlayerModel(**kw)
Bases: object
Model definition used by MLayer program.
Attributes:
Q values and reflectivity come from a data file with Q, R, dR or from simulation with linear spacing from Qmin
to Qmax in equal steps:
data_file
name of the data file, or None if this is simulation only
Qmin, Qmax, num_Q
for simulation, Q sample points
Resolution is defined by wavelength and by incident angle:
wavelength, wavelength_dispersion, angular_divergence
resolution is calculated as ∆𝑄/𝑄 = ∆𝜆/𝜆 + ∆𝜃/𝜃
Additional beam parameters correct for intensity, background and possibly sample alignment:
intensity, background
incident beam intensity and sample background
theta_offset
alignment angle correction
The model is defined in terms of layers, with three sections. The top and bottom section correspond to the fixed
layers at the surface and the substrate. The middle section layers can be repeated an arbitrary number of times,
as defined by the number of repeats attribute. The attributes defining the sections are:
num_top num_middle num_bottom
section sizes
num_repeats
number of times middle section repeats
Interfaces are split into discrete steps according to a profile, either error function or hyperbolic tangent. For
sharp interfaces which do not overlap within a layer, the interface is broken into a fixed number of slabs with
slabs having different widths, but equal changes in height. For broad interfaces, the whole layer is split into the
same fixed number of slabs, but with each slab having the same width. The following attributes are used:
roughness_steps
number of roughness steps (13 is coarse; 51 is fine)
roughness_profile
roughness profile is either ‘E’ for error function or ‘H’ for tanh
Layers have thickness, interface roughness and real and imaginary scattering length density (SLD). Roughness is
stored in the file using full width at half maximum (FWHM) for the given profile type. For convenience, rough-
ness can also be set or queried using a 1-𝜎 equivalent roughness on an error function profile. Regardless, layer
parameters are represented as vectors with one entry for each top, middle and bottom layer using the following
attributes:
thickness, roughness
[float | Å] layer thickness and FWHM roughness
rho, irho, incoh
[float | 10-6 Å-2 ] complex coherent 𝜌 + 𝑗𝜌𝑖 and incoherent SLD
Computed attributes are provided for convenience:
sigma_roughness
[float | Å] 1-𝜎 equivalent roughness for erf profile
mu
absorption cross section (2*wavelength*irho + incoh)
Note: The staj files store SLD as 16𝜋𝜌, 2𝜆𝜌𝑖 with an additional column of 0 for magnetic SLD. This conversion
happens automatically on read/write. The incoherent cross section is assumed to be zero.
Qmin = 0
angular_divergence = 0.001
background = 0
constraints = ''
data_file = ''
we can use a linear system solver to find the optimal ∆𝜆 and ∆𝜃 across our dataset from the over-determined
system:
If weights are provided (e.g., ∆𝑅𝑘 /𝑅𝑘 ), then weigh each point during the fit.
Given that the experiment is often run with fixed slits at the start and end, you may choose to match the
resolution across the entire 𝑄 range, or instead restrict it to just the region where the slits are opening. You
will generally want to get the resolution correct at the critical edge since that’s where it will have the largest
effect on the fit.
Returns the object so that operations can be chained.
fitpars = []
incoh = 0
intensity = 1
irho = 0
classmethod load(filename)
Load a staj file, returning an MlayerModel object
property mu
num_Q = 200
num_bottom = 0
num_middle = 0
num_repeats = 1
num_top = 0
output_file = ''
rho = None
roughness = None
roughness_profile = 'E'
roughness_steps = 13
save(filename)
Save the staj file
set(**kw)
property sigma_roughness
split_sections()
Split the given set of layers into sections, putting as many layers as possible into the middle section, then
the bottom and finally the top.
Returns the object so that operations can be chained.
theta_offset = 0
thickness = None
wavelength = 1
wavelength_dispersion = 0.01
Data stitching.
Join together datasets yielding unique sorted x.
refl1d.stitch.poisson_average(xdxydyw)
Compute the poisson average of y/dy using a set of data points.
The returned x, dx is the weighted average of the inputs:
x = sum(x*I)/sum(I)
dx = sum(dx*I)/sum(I)
w = sum(y/dy^2)
y = sum((y/dy)^2)/w
dy = sqrt(y/w)
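The formulas translate directly into code (a sketch assuming numpy arrays; the interpretation of I as y/dy², the effective monitor, is an assumption consistent with the Poisson model described below):

```python
import numpy as np

def poisson_average(x, dx, y, dy):
    """Weighted combination of points assuming y = counts/monitor."""
    monitor = y / dy**2           # I: effective monitor (assumed weighting)
    counts = (y / dy)**2          # effective counts
    w = monitor.sum()
    ybar = counts.sum() / w
    return ((x * monitor).sum() / w,    # x  = sum(x*I)/sum(I)
            (dx * monitor).sum() / w,   # dx = sum(dx*I)/sum(I)
            ybar,                       # y  = sum((y/dy)^2)/w
            np.sqrt(ybar / w))          # dy = sqrt(y/w)

# Two identical measurements: 100 counts per 1000 monitor each.
xa, dxa, ya, dya = poisson_average(
    np.array([0.1, 0.1]), np.array([0.001, 0.001]),
    np.array([0.1, 0.1]), np.array([0.01, 0.01]))
print(ya, dya)  # y unchanged; dy smaller by a factor of sqrt(2)
```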
The above formula gives the expected result for combining two measurements, assuming there is no uncertainty
in the monitor.
This formula isn’t strictly correct when applied to values which have been scaled, for example to account for an
attenuator in the counting system.
refl1d.stitch.stitch(data, same_x=0.001, same_dx=0.001)
Stitch together multiple measurements into one.
data a list of datasets with x, dx, y, dy attributes
same_x minimum point separation (default is 0.001)
same_dx minimum change in resolution that may be averaged (default is 0.001)
WARNING: the returned x values may be data dependent, with two measured sets having different x after stitching,
even though the measurement conditions are identical!
Either add an intensity weight to the datasets:
probe.I = slitscan
or interpolate one stitched dataset onto the x values of the other:
import numpy as np
x1, dx1, y1, dy1 = stitch([a1, b1, c1, d1])
x2, dx2, y2, dy2 = stitch([a2, b2, c2, d2])
x2[0], x2[-1] = x1[0], x1[-1] # Force matching end points
y2 = np.interp(x1, x2, y2)
dy2 = np.interp(x1, x2, dy2)
x2 = x1
WARNING: the returned dx value underestimates the true dx, depending on the relative weights of the averaged
data points.
merge_ends join the leading and trailing ends of the profile together so fewer slabs are required and so that gaussian roughness can be used.
refl1d.util.merge_ends(w, p, tol=0.001)
Join the leading and trailing ends of the profile together so fewer slabs are required and so that Gaussian roughness
can be used.
Refl1D: Neutron and X-Ray Reflectivity Analysis, Release 0.8.16