Open Babel, or how I learned to love the chemistry file format
===
The latest version of this documentation is available in several formats from <http://openbabel.org/docs/dev/>.
Introduction
---
Open Babel is a chemical toolbox designed to speak the many languages of chemical data. It’s an open, collaborative project allowing anyone to search, convert, analyze, or store data from molecular modeling, chemistry, solid-state materials, biochemistry, or related areas.
### Goals of the Open Babel project
Open Babel is a project to facilitate the interconversion of chemical data from one format to another – including file formats of various types. This is important for the following reasons:
* Multiple programs are often required in realistic workflows. These may include databases, modeling or computational programs, visualization programs, etc.
* Many programs have individual data formats, and/or support only a small subset of other file types.
* Chemical representations often vary considerably:
+ Some programs are 2D. Some are 3D. Some use fractional coordinates relative to a unit cell.
+ Some programs use bonds and atoms of discrete types. Others use only atoms and electrons.
+ Some programs use symmetric representations. Others do not.
+ Some programs specify all atoms. Others use “residues” or omit hydrogen atoms.
* Individual implementations of even standardized file formats are often buggy, incomplete or do not completely match published standards.
As a free and open source project, Open Babel improves by helping others; it gains from its users, contributors, developers, related projects, and the general chemical community. We must continually strive to support these constituencies.
We gratefully accept contributions in many forms – from bug reports, complaints, and critiques, which help us improve what we do poorly, to feature suggestions, code contributions, and other efforts, which direct our future development.
* For end users, we seek to provide a range of utility, from simple (or complex) file interconversion, to indexing, databasing, and transforming chemical and molecular data.
* For developers, we seek to provide an easy-to-use free and open source chemical library. This assists a variety of chemical software, from molecular viewers and visualization tools and editors to databases, property prediction tools, and in-house development.
To this end, we hope that our tools reflect several key points:
* Open Babel should read and understand as much chemical information, and as many files, as possible. This means we should always strive to support as many concepts as possible in a given file format, and that support for additional file formats benefits the community as a whole.
* Releases should be made to be “as good as we can make it” each and every time.
* Improving our code and our community to bring in additional contributions in many forms helps both developers and end-users alike. Making development easy for new contributors will result in better tools for users as well.
### Frequently Asked Questions
#### General
What is Open Babel?
Put simply, Open Babel is a free, open-source version of the Babel chemistry file translation program. Open Babel is a project designed to pick up where Babel left off, as a cross-platform program and library designed to interconvert between many file formats used in molecular modeling, computational chemistry, and many related areas.
Open Babel includes two components, a command-line utility and a C++ library. The command-line utility is intended to be used as a replacement for the original babel program, to translate between various chemical file formats. The C++ library includes all of the file-translation code as well as a wide variety of utilities to foster development of other open source scientific software.
How does this relate to BabelChat, BabelFish, Babel IM, etc. …?
It doesn’t. Not surprisingly, “babel” turns up in a lot of software names.
Is it Open Babel or OpenBabel?
Your choice. It’s probably easier to call it Open Babel since that’s what it is–an open version of Babel. But if you like one-word, mixed-case project names, then go for OpenBabel. In that case, the space is just too small to be printed.
How does this relate to the original Babel and OELib, the “next” Babel?
The original Babel was written by <NAME> and <NAME>, based on the “convert” program by <NAME>, and is still a remarkable application. Both Pat and Matt have moved on to other work. The original Babel is hosted by Smog.com on a [Babel homepage](http://smog.com/chem/babel/), by the [Computational Chemistry List](http://ccl.net/cca/software/UNIX/babel/) (CCL) and of course by Open Babel at [SourceForge.net](http://sourceforge.net/project/showfiles.php?group_id=40728&package_id=100796).
Along the way, the two original authors started a rewrite of Babel into C++ they called OBabel, which was never really publicly released. But Matt used some of these ideas in OELib, which was generously released under the GNU GPL by his employer, OpenEye Software, and the last known version of this OELib is still available from our [file repository](http://sourceforge.net/project/showfiles.php?group_id=40728&package_id=100796).
OpenEye decided that for their purposes OELib needed a rewrite (now called [OEChem](http://www.eyesopen.com/products/toolkits/oechem.html)), but this would be closed-source to include some advanced algorithms. So the GPL’ed version of OELib would not be maintained. Instead, the free version of OELib was renamed and has become “Open Babel” with the blessing of Matt and other contributors.
Open Babel has evolved quite a lot since its birth in 2001.
What’s the latest version?
As of this writing, the latest version is Open Babel 3.0.1. This is a stable version suitable for widespread use and development.
Can I use Open Babel code in a personal project?
One common misconception about the GNU GPL license for Open Babel is that it requires users to release any code that uses the Open Babel library. This is completely untrue. There are no restrictions on use of Open Babel code for personal projects, regardless of where you work (academia, industry, … wherever).
However, if you intend to release a software package that uses Open Babel code, the GPL requires that your package be released under the GNU GPL license. The distinction is between use and distribution. See [What’s in it for me to contribute?](#why-contribute) below for more on the licensing issues.
How do I cite Open Babel in a paper?
To support development of Open Babel, please cite:
* Hutchison et al. [obj2011]
[obj2011] <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>. **Open Babel: An open chemical toolbox.** *J. Cheminf.* **2011**, *3*, 33. [[Link](https://doi.org/10.1186/1758-2946-3-33)]
* Open Babel, version 2.3.2, <http://openbabel.org> (accessed Oct 2012)
The first is a paper describing Open Babel, and the second is one way to cite a software package at a particular URL. Obviously, you should include the version number of Open Babel you used and the date you downloaded or installed the software.
#### Features, Formats, Roadmap
Why don’t you support file format X?
The file formats currently supported are some of the more common file formats and, admittedly, those we use in our work. If you’d like to see other file formats added, we need one of:
> * documentation on the file format
> * working code to read the file format or translate it
> * example files in the new file format and in some other format
The last of these is obviously the easiest for text file formats; binary formats take some time to reverse engineer without documentation or working code. Also consider pointing developers to this FAQ and the “What’s in it for me?” section.
When I convert from SMILES to MOL2/PDB/etc., why are all of the coordinates zero?
The SMILES format stores only connectivity (2D) information about the molecule. That is, it says which atoms are connected to which other atoms and what types of bonds are present, but it contains no coordinates. MOL2, PDB and several other formats contain 3D coordinate information that is not present in SMILES. Since Open Babel does not attempt to generate a 3D structure by default, all of the coordinates are set to zero. Since the Open Babel 2.2.0 release, however, you can generate 3D coordinates during the conversion using the `--gen3d` option.
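For example, the following command (the filenames are only illustrative) converts a SMILES file to an SDF file with generated 3D coordinates:
```
obabel mymols.smi -O mymols-3d.sdf --gen3d
```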
What sorts of features will be added in the future?
It’s an open project, so if features are suggested or donated, they’ll be considered as much as anything else on the drawing board. Some things are pretty clear from the roadmap.
#### What’s in it for me to contribute?
What’s in it for my chemistry software company?
If your product is closed-source or otherwise incompatible with the GPL, you unfortunately cannot link directly to the code library. You can, however, distribute Open Babel in unmodified form with your products and use the command-line interface. This is fairly easy because the Open Babel **obabel** program can read from standard input and write to standard output (functioning as a POSIX pipe).
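For example, a closed-source application could invoke the unmodified obabel binary as a filter, writing a molecule to its standard input and reading the converted output from its standard output (a minimal sketch; the SMILES string and formats are arbitrary):
```
echo "c1ccccc1O" | obabel -ismi -omol2 --gen3d
```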
If you decide to distribute binaries, you should either offer users the source if they want, or point them to the Open Babel website. Note that if you modify the source, you obviously can’t point back to the Open Babel website – the GPL requires that you distribute the changed source. (Or you can convince us to incorporate the changes and point back to us.)
What’s not to like with this deal? You can have Open Babel translate foreign file formats for you and can point users at the website for distribution. You don’t need to write tons of code for all these formats and bug reports can be passed back to us.
Of course, there’s one catch. You’ll most likely need to add feature-rich support for your file formats. So if you contribute a small amount of code under the GPL to read/write your files, everything else is handled by Open Babel.
It’s a win-win for everyone. The community benefits by having feature-rich translation code and open file formats. Your company and its programs benefit by the ability to read just about every format imaginable. Users benefit by using the programs they need for the tasks they need.
What’s in it for me as an academic?
If you’re an academic developer, you certainly should read the previous answer too. It takes little work on your part to interface with Open Babel and you get a lot in return.
But even if you’re just an academic user, there’s a lot of reasons to contribute. Most of us deal with a variety of file formats in our work. So it’s useful to translate these cleanly. If a format isn’t currently supported by Open Babel, see [above](#why-no-support). If you find bugs please report them. Since it’s open source, you can patch the code yourself, recompile and have the problem fixed very quickly.
If you’re inclined to write code, the GPL is an excellent option for the academic. You’re the original copyright holder, so you can do whatever you want with the code, in addition to selling it. But if you’ve also licensed it under the GPL, no one can distribute it as proprietary (i.e., closed-source) software without your agreement. Fellow academics can use it directly, learn from it, improve it and contribute back to you. Isn’t that why many of us went into science?
Once licensed under the GPL, the code must remain free to interested parties. If someone modifies it, that code must still remain under the GPL, free for all.
What’s in it for an open-source software project?
Certainly the answers for closed-source software and academics also apply for you. Beyond that, if your code is compatible with the GPL, you can directly use Open Babel and all of the API. This is already happening with the Avogadro molecular editor, available under the GPL, and many others (see [related projects](http://openbabel.org/wiki/Related)). There’s a lot of code in Open Babel beyond file translation and more to come. Why reinvent the wheel?
Why is this covered under the GPL instead of license X?
The short answer is that [OpenEye Scientific Software](http://www.eyesopen.com) employs <NAME>, one of the authors of the original Babel. They released a library called OELib under the GPL that did many things that Babel did. Later they decided to release the next version of OELib as a closed-source project–their choice for their code. We took the version of OELib still under GPL and went from there.
If you’d like to see Open Babel licensed differently, we’d suggest asking OpenEye if they’d consider releasing the old code under a new license, e.g. the LGPL. At that point, we’d consider whether Open Babel should be relicensed or not. Obviously all copyright holders must agree to the new license.
It’s worth noting that OpenEye develops a closed-source library called [OEChem](http://www.eyesopen.com/products/toolkits/oechem.html) and implies that one reason to purchase it is for use in closed-source development products. So we think it’s highly unlikely that OpenEye would allow Open Babel to become a competitor by relicensing under the LGPL.
Where can I read more about the GNU GPL?
The Free Software Foundation maintains a [FAQ](http://www.fsf.org/licenses/gpl-faq.html) list about the GNU GPL. The FAQ attempts to address common questions in an easy-to-read (i.e., not in legal language) form.
### Thanks
Open Babel would not be what it is without the help of a cast of many. We are fundamentally a community project and aim to offer open development, responsive to users and contributors alike.
In addition to contributors of all sorts, a variety of related projects keep us on our toes. We would also like to thank everyone who has cited Open Babel in academic and technical literature, posters, and presentations.
Credits (in alphabetical order)
| | | |
| --- | --- | --- |
| * <NAME>
* <NAME>
* <NAME>
* <NAME>
* <NAME>
* <NAME>
* <NAME>
* <NAME>
* <NAME>
* <NAME>
* <NAME>
* <NAME>
* <NAME>
* <NAME>
* <NAME>
* <NAME>
* <NAME>
* <NAME>
* <NAME>
* <NAME>
* <NAME>
* <NAME>
* <NAME>
* <NAME>
* <NAME>
* <NAME>
| * <NAME>
* <NAME>
* <NAME>
* <NAME>
* <NAME>
* <NAME>
* <NAME>
* <NAME>
* <NAME>
* <NAME>
* <NAME>
* <NAME>
* <NAME>
* <NAME>
* <NAME>
* <NAME>
* <NAME>
* <NAME>
* <NAME>
* <NAME>
* <NAME>
* <NAME>
* <NAME>
* <NAME>
* <NAME>
| * <NAME>
* <NAME>
* <NAME>
* <NAME>
* <NAME>
* <NAME>
* Ernst-<NAME>
* <NAME>
* <NAME>
* <NAME>
* <NAME>
* <NAME>
* <NAME>
* <NAME>
* <NAME>
* <NAME>
* <NAME>
* <NAME>
* <NAME>
* <NAME>
* <NAME>
* <NAME>
* <NAME>
* <NAME>
* <NAME>
|
There are probably many more who have contributed to Babel, OBabel, OELib or directly to Open Babel who are not listed here. Please help us keep this list updated. THANKS!
Install Open Babel
---
Open Babel runs on Windows, Linux and MacOSX. You can either [install a binary package](#install-binaries) (the easiest option) or [compile Open Babel yourself](#compiling-open-babel) (also easy, but much more geek cred).
### Install a binary package
#### Windows
Open Babel is available as a binary installer for Windows, in both 64-bit (preferred) and 32-bit (indicated by `x86` in the filename) versions. It includes several command-line tools as well as a graphical user interface (GUI). The latest version can be downloaded from [GitHub](https://github.com/openbabel/openbabel/releases).
Advanced users may be interested in compiling Open Babel themselves (see [Compiling Open Babel](#compiling-open-babel)).
#### Linux
Open Babel binary packages are available from many Linux distributions including Ubuntu, OpenSUSE and Fedora.
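For example, on Debian or Ubuntu the distribution package (which may be older than the latest release) can usually be installed with the system package manager; the package name may differ on other distributions:
```
$ sudo apt install openbabel
```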
In general, we recommend using the latest release of Open Babel (currently 3.0.1). If this is not available for your Linux distribution, you should [compile Open Babel yourself](#compiling-open-babel).
### Compiling Open Babel
Open Babel is written in C++. Compiling is the process of turning this C++ into machine code, the instructions that the computer’s processor can understand.
Although pre-compiled (or “binary”) packages are available for several platforms, there are several reasons you might want to compile Open Babel yourself:
* The current release (3.0.1) of Open Babel is not available for your platform. We recommend always using the latest release.
* You want more control over the features available. For example, perhaps you want the Python bindings but these were not included in your distribution.
* You want to use the latest development code.
* You want to add a new feature. It is easy to add new formats or operations to Open Babel as it has a plugin architecture (see [Adding plugins](index.html#add-plugins)).
* You just want to compile stuff yourself. We understand.
Open Babel can be compiled on Linux, MacOSX, BSDs and other Unixes, and also on Windows (with Cygwin, MinGW or MSVC).
#### Requirements
To build Open Babel, you **need** the following:
* The [source code](https://github.com/openbabel/openbabel/releases) for the latest release of Open Babel
* A C++ compiler
> Open Babel is written in standards-compliant C++. The best-supported compilers are GCC 4 and MSVC++ 2008, but it also compiles with Clang and Intel Compiler 11.
>
* CMake 2.8 or newer
> Open Babel uses CMake as its build system. CMake is an open source cross-platform build system from KitWare.
> You need to install CMake 2.8 or newer. This is available as a binary package from the KitWare website; alternatively, it may be available through your package manager (on Linux). If necessary, you can also compile it yourself from the source code.
If you want to build the GUI (Graphical User Interface), you **need** the following in addition:
* wxWidgets 2.8 (or newer)
> Binary packages may be available through your package manager (*wx-common*, *wx2.8-headers* and *libwxbase2.8-dev* on Ubuntu) or from <http://www.wxwidgets.org/downloads/>. Otherwise, you could try compiling it yourself from the source code.
The following are **optional** when compiling Open Babel, but if not available some features will be missing:
* **libxml2** development headers are required to read/write CML files and other XML formats (the *libxml2-dev* package in Ubuntu)
* **zlib** development libraries are required to support reading gzipped files (the *zlib1g-dev* package in Ubuntu)
* **Eigen** version 2 or newer is **required** if you are using the language bindings in the release. In addition, if it is not present, some API classes (OBAlign, OBConformerSearch) and plugins (the QEq and QTPIE charge models, the `--conformer` and `--align` operations) will not be available.
Eigen may be available through your package manager (the *libeigen2-dev* package in Ubuntu). Alternatively, Eigen is available from <http://eigen.tuxfamily.org>. It doesn’t need to be compiled or installed; just unzip it and specify its location when configuring **cmake** (see below) using `-DEIGEN2_INCLUDE_DIR=wherever` or `-DEIGEN3_INCLUDE_DIR=wherever`.
* **Cairo** development libraries are required to support PNG depiction (the *libcairo2-dev* package in Ubuntu)
* If using GCC 3.x to compile (and not GCC 4.x), then the Boost headers are required for certain formats (CML, Chemkin, Chemdraw CDX, MDL RXN and RSMI)
If you want to use Open Babel using one of the supported **language bindings**, then the following notes may apply:
* You need the Python development libraries to compile the Python bindings (package *python-dev* in Ubuntu)
* You need the Perl development libraries to compile the Perl bindings (package *libperl-dev* in Ubuntu)
#### Basic build procedure
The basic build procedure is the same for all platforms and will be described first. After this, we will look at variations for particular platforms.
1. The recommended way to build Open Babel is to use a separate source and build directory; for example, `openbabel-2.3.2` and `build`. The first step is to create these directories:
```
$ tar zxf openbabel-2.3.2.tar.gz # (this creates openbabel-2.3.2)
$ mkdir build
```
2. Now you need to run **cmake** to configure the build. The following will configure the build to use all of the default options:
```
$ cd build
$ cmake ../openbabel-2.3.2
```
3. If you need to specify an option, use the `-D` switch to **cmake**. For example, the following line sets the value of `CMAKE_INSTALL_PREFIX` and `CMAKE_BUILD_TYPE`:
```
$ cmake ../openbabel-2.3.2 -DCMAKE_INSTALL_PREFIX=~/Tools -DCMAKE_BUILD_TYPE=DEBUG
```
We will discuss various possible options later.
4. At this point, it would be a good idea to compile Open Babel:
```
$ make
```
Have a coffee while the magic happens. If you have a multi-processor machine and would prefer an espresso, try a parallel build instead:
```
$ make -j4 # parallel build across 4 processors
```
5. And finally, as root (or using `sudo`) you should install it:
```
# make install
```
#### Local build
By default, Open Babel is installed in `/usr/local/` on a Unix-like system. This requires root access (or `sudo`). Even if you do have root access, you may not want to overwrite an existing installation or you may want to avoid conflicts with a version of Open Babel installed by your package manager.
The solution to all of these problems is to do a local install into a directory somewhere in your home folder.
An additional advantage of a local install is that if you ever want to uninstall it, all you need to do is delete the installation directory; removing the files from a global install is more work.
1. To configure **cmake** to install into `~/Tools/openbabel-install`, for example, you would do the following:
```
$ cmake ../openbabel-2.3.2 -DCMAKE_INSTALL_PREFIX=~/Tools/openbabel-install
```
2. Then you can run **make** and **make install** without needing root access:
```
$ make && make install
```
#### Compile the GUI
The GUI is built using the wxWidgets toolkit. Assuming that you have already installed this (see [Requirements](#requirements) above), you just need to configure **cmake** as follows:
```
$ cmake ../openbabel-2.3.2 -DBUILD_GUI=ON
```
When you run `make` and `make install`, the GUI will be automatically built and installed alongside the main Open Babel library and tools.
#### Compile language bindings
1. When configuring CMake, include options such as `-DPYTHON_BINDINGS=ON -DRUBY_BINDINGS=ON` for whichever bindings you wish to build (valid names are `PYTHON`, `CSHARP`, `PERL`, `JAVA` or `RUBY`) or `-DALL_BINDINGS=ON` to build them all. The bindings will then be built and installed along with the rest of Open Babel. You should note any warning messages in the CMake output.
2. If CMake cannot find Java, you should set the value of the environment variable `JAVA_HOME` to the directory containing the Java `bin` and `lib` directories. For example, if you download the JDK from Sun and run the self-extracting .bin file, it creates a directory `jdk1.6.0_21` (or similar); you should set `JAVA_HOME` to the full path to this directory.
3. If CMake cannot find the Perl libraries (which happens on Ubuntu 10.10, surprisingly), you need to configure CMake with something like `-DPERL_LIBRARY=/usr/lib/libperl.so.5.10 -DPERL_INCLUDE_PATH=/usr/lib/perl/5.10.1/CORE`.
4. If you are compiling the CSharp bindings, you should specify the CSharp compiler to use with something like `-DCSHARP_EXECUTABLE=C:\Windows\Microsoft.NET\Framework\v3.5\csc.exe`.
5. When you run `make install`, all of the bindings will be installed to the same location as the Open Babel libraries (typically `/usr/local/lib`).
6. To prepare to use the bindings, add the install directory to the front of the appropriate environment variable: PYTHONPATH for Python, PERL5LIB for Perl, RUBYLIB for Ruby, CLASSPATH for Java, and MONO_PATH for Mono.
For example, for Python:
```
$ cmake ../openbabel-2.3.2 -DPYTHON_BINDINGS=ON
$ make
# make install
$ export PYTHONPATH=/usr/local/lib:$PYTHONPATH
```
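The other bindings follow the same pattern; for example, for the Perl bindings installed to the default location you might use:
```
$ export PERL5LIB=/usr/local/lib:$PERL5LIB
```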
#### Cygwin
The basic build instructions up above work just fine so long as you use the CMake provided by Cygwin rather than a native Windows installation.
If you get an error about `undefined reference to '_xmlFreeTextReader'`, you need to specify the location of the XML libraries with the `-DLIBXML2_LIBRARIES` option:
```
$ cmake ../openbabel-2.3.2 -DLIBXML2_LIBRARIES=/usr/lib/libxml2.dll.a
```
The language bindings don’t seem to work under Cygwin. If you can get them to work, let us know. Also remember that anything that uses Cygwin runs slower than a native build using MinGW or MSVC++, so if speed is an issue you might prefer to compile with MinGW or MSVC++.
#### MinGW
Open Babel builds out of the box with MinGW. It’s an awkward system to set up though, so here are some step-by-step instructions…TODO
#### Windows (MSVC)
The main Windows build used by Open Babel uses the Microsoft Visual C++ compiler (MSVC).
1. Set up the following environment variables:
> 1. Add the CMake `bin` directory to the PATH.
> 2. (Optional, see [Requirements](#requirements) above) Set EIGEN2_INCLUDE_DIR to the location of the top level Eigen directory (if installed).
> 3. (Optional, required for GUI) Set WXWIN to the top level directory of wxWidgets (if installed).
>
2. Install the Microsoft Visual C++ 2010 (or newer) compiler.
We use the Visual C++ 2010 (10.0) [Express Edition](http://www.microsoft.com/Express/VC/) (available for free).
3. Open a command prompt, and change directory to the `windows-vc2008` subdirectory. To configure **cmake**, and generate the VC++ project files, run `default_build.bat`.
4. Double-click on `windows-vc2008/build/openbabel.sln` to start MSVC++. At the top of the window just below the menu bar, choose Release in the drop-down box.
5. On the left-hand side, right-click on the `ALL_BUILD` target, and choose Build.
#### Troubleshooting build problems
CMake caches some variables from run-to-run. How can I wipe the cache to start from scratch?
Delete `CMakeCache.txt` in the build directory. This is also a very useful file to look into if you have any problems.
How do I specify the location of the XML libraries?
CMake should find these automatically if they are installed system-wide. If you need to specify them, try using the `-DLIBXML2_LIBRARIES=wherever` option with CMake to specify the location of the DLL or SO file, and `-DLIBXML2_INCLUDE_DIR=wherever` to specify the location of the header files.
How do I specify the location of the ZLIB libraries?
CMake should find these automatically if they are installed system-wide. If you need to specify them, try using the `-DZLIB_LIBRARY=wherever` option with CMake to specify the location of the DLL or SO file, and `-DZLIB_INCLUDE_DIR=wherever` to specify the location of the header files.
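As a sketch, on a system where zlib happens to live under `/usr/lib/x86_64-linux-gnu` (the paths here are only examples), the configure step might look like:
```
$ cmake ../openbabel-2.3.2 -DZLIB_LIBRARY=/usr/lib/x86_64-linux-gnu/libz.so -DZLIB_INCLUDE_DIR=/usr/include
```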
What environment variables affect how Open Babel finds formats, plugins and libraries?
**LD_LIBRARY_PATH** - Used to find the location of the `libopenbabel.so` file.
You should set this if you get error messages about not being able to find `libopenbabel.so`.
**BABEL_LIBDIR** - Used to find plugins such as the file formats. If `obabel -L formats` does not list any file formats, then you need to set this environment variable to the directory where the file formats were installed, typically `/usr/local/lib/openbabel/`.
**BABEL_DATADIR** - Used to find the location of the data files used for fingerprints, forcefields, etc.
If you get errors about not being able to find some .txt files, then you should set this to the name of the folder containing files such as `patterns.txt` and `MACCS.txt`. These are typically installed to `/usr/local/share/openbabel`.
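For a default install these variables might be set along the following lines (the exact directories depend on your install prefix and Open Babel version):
```
$ export LD_LIBRARY_PATH=/usr/local/lib:$LD_LIBRARY_PATH
$ export BABEL_LIBDIR=/usr/local/lib/openbabel/
$ export BABEL_DATADIR=/usr/local/share/openbabel
```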
#### Advanced build options
How do I control whether the tests are built?
The CMake option `-DENABLE_TESTS=ON` or `OFF` controls this. To actually run the tests, use `make test`.
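A minimal sequence, assuming a build directory configured as described in [Basic build procedure](#basic-build-procedure) and the standard CTest `test` target:
```
$ cmake ../openbabel-2.3.2 -DENABLE_TESTS=ON
$ make
$ make test
```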
How do I do a debug build?
`-DCMAKE_BUILD_TYPE=Debug` does a debug build (`gcc -g`). To revert to a regular build use `-DCMAKE_BUILD_TYPE=Release`.
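For example, to reconfigure an existing build directory for debugging:
```
$ cmake ../openbabel-2.3.2 -DCMAKE_BUILD_TYPE=Debug
$ make
```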
How do I see what commands cmake is using to build?
Run Make as follows:
```
$ VERBOSE=1 make
```
How do I build one specific target?
Specify the target when running Make. For example, the following builds only the Python bindings:
```
$ make _openbabel
```
To speed things up, you can ask Make to ignore dependencies:
```
$ make _openbabel/fast
```
How do I create the SWIG bindings?
Use the `-DRUN_SWIG=ON` option with CMake. This requires SWIG 2.0 or newer. If the SWIG executable is not on the PATH, you will need to specify its location with `-DSWIG_EXECUTABLE=wherever`.
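For example (the SWIG path is only a placeholder for wherever your SWIG 2.0+ executable is installed):
```
$ cmake ../openbabel-2.3.2 -DPYTHON_BINDINGS=ON -DRUN_SWIG=ON -DSWIG_EXECUTABLE=/usr/local/bin/swig
```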
How do I build the Doxygen documentation?
Use the `-DBUILD_DOCS=ON` option with CMake. If the Doxygen executable is not on the PATH, you will need to specify its location with `-DDOXYGEN_EXECUTABLE=wherever`.
obabel - Convert, Filter and Manipulate Chemical Data
---
**obabel** is a command-line program for interconverting between many file formats used in molecular modeling and computational chemistry and related areas. It can also be used for filtering molecules and for simple manipulation of chemical data.
### Synopsis

* `obabel [-H <help-options>]`
* `obabel [-i <input-ID>] infile [-o <output-ID>] [-O outfile] [OPTIONS]`
### Options

Information and help
* `obabel [-H <help-options>]`

| Option | Description |
| --- | --- |
| `-H` | Output usage information |
| `-H <format-ID>` | Output formatting information and options for the format specified |
| `-Hall` | Output formatting information and options for all formats |
| `-L` | List plugin types (`charges`, `descriptors`, `fingerprints`, `forcefields`, `formats`, `loaders` and `ops`) |
| `-L <plugin type>` | List plugins of this type. For example, `obabel -L formats` gives the list of file formats. |
| `-L <plugin-ID>` | Details of a particular plugin (of any plugin type). For example, `obabel -L cml` gives details on the CML file format. |
| `-V` | Output version number |
Conversion options
* `obabel [-i <input-ID>] infile [-o <output-ID>] [-O outfile] [OPTIONS]`
* `obabel -:"<text>" [-i <input-ID>] [-o <output-ID>] [-O outfile] [OPTIONS]`
Note
If only input and output files are given, Open Babel will guess the file type from the filename extension. For information on the file formats supported by Open Babel, please see [Supported File Formats and Options](index.html#file-formats). If text is provided using the `-:` notation, SMILES are assumed by default if an input format is not specified.

| Option | Description |
| --- | --- |
| `-a <options>` | Format-specific input options. Use `-H <format-ID>` to see options allowed by a particular format, or see the appropriate section in [Supported File Formats and Options](index.html#file-formats). |
| `--add <list>` | Add properties (for SDF, CML, etc.) from descriptors in list. Use `-L descriptors` to see available descriptors. |
| `--addfilename` | Add the input filename to the title. |
| `--addinindex` | Append input index to title (that is, the index before any filtering) |
| `--addoutindex` | Append output index to title (that is, the index after any filtering) |
| `--addpolarh` | Like `-h`, but only adds hydrogens to polar atoms. |
| `--addtotitle <text>` | Append the text after each molecule title |
| `--append <list>` | Append properties or descriptor values appropriate for a molecule to its title. For more information, see [Append property values to the title](#append-option). |
| `-b` | Convert dative bonds (e.g. `[N+]([O-])=O` to `N(=O)=O`) |
| `-c` | Center atomic coordinates at (0,0,0) |
| `-C` | Combine molecules in first file with others having the same name |
| `--canonical` | Canonicalize the atom order. If generating canonical SMILES, do not use this option; instead use the [Canonical SMILES format (can)](index.html#canonical-smiles-format). |
| `--conformer <options>` | Conformer searching to generate low-energy or diverse conformers. For more information, see [Generating conformers for structures](#conformers). |
| `-d` | Delete hydrogens (make all hydrogens implicit) |
| `--delete <list>` | Delete properties in list |
| `-e` | Continue to convert molecules after errors |
| `--energy <options>` | Forcefield energy evaluation. See [Forcefield energy and minimization](#minimize-option). |
| `--errorlevel <N>` | Filter the level of errors and warnings displayed: 1 = critical errors only; 2 = include warnings too (**default**); 3 = include informational messages too; 4 = include “audit log” messages of changes to data; 5 = include debugging messages too |
| `-f <#>` | For multiple entry input, start import with molecule # as the first entry |
| `--fillUC <param>` | For a crystal structure, add atoms to fill the entire unit cell based on the unique positions, the unit cell and the spacegroup. The parameter can either be `strict` (the default), which only keeps atoms inside the unit cell, or `keepconnect`, which fills the unit cell but keeps the original connectivity. |
| `--filter <criteria>` | Filter based on molecular properties. See [Filtering molecules from a multimolecule file](#filter-options) for examples and a list of criteria. |
| `--gen2d` | Generate 2D coordinates |
| `--gen3d` | Generate 3D coordinates. You can specify the speed of prediction. See [Specifying the speed of 3D coordinate generation](#specify-speed). |
| `-h` | Add hydrogens (make all hydrogens explicit) |
| `--highlight <substructure color>` | Highlight substructures in 2D depictions. Valid colors are black, gray, white, red, green, blue, yellow, cyan, purple, teal and olive. Additional colors may be specified as hexadecimal RGB values preceded by `#`. Multiple substructures and corresponding colors may be specified. |
| `-i <format-ID>` | Specifies input format. See [Supported File Formats and Options](index.html#file-formats). |
| `-j`, `--join` | Join all input molecules into a single output molecule entry |
| `-k` | Translate computational chemistry modeling keywords. See the [Computational chemistry formats](index.html#computational-chemistry), for example [GAMESS Input (gamin, inp)](index.html#gamess-input) and [Gaussian Input (com, gau, gjc, gjf)](index.html#gaussian-input). |
| `-l <#>` | For multiple entry input, stop import with molecule # as the last entry |
| `--largest <#N descriptor>` | Only convert the N molecules which have the largest values of the specified descriptor. Preceding the descriptor by `~` inverts this filter. |
| `-m` | Produce multiple output files, to allow: splitting one input file (put each molecule into consecutively numbered output files) or batch conversion (convert each of multiple input files into a specified output format) |
| `--minimize <options>` | Forcefield energy minimization. See [Forcefield energy and minimization](#minimize-option). |
| `-o <format-ID>` | Specifies output format. See [Supported File Formats and Options](index.html#file-formats). |
| `-p <pH>` | Add hydrogens appropriate for pH (uses transforms in `phmodel.txt`) |
| `--partialcharge <charge-method>` | Calculate partial charges by the specified method. List available methods using `obabel -L charges`. |
| `--property <name value>` | Add or replace a property (for example, in an SD file) |
| `-r` | Remove all but the largest contiguous fragment (strip salts) |
| `--readconformer` | Combine adjacent conformers in multi-molecule input into a single molecule. If a molecule has the same structure as the preceding molecule, as determined from its SMILES, it is not output but its coordinates are added to the preceding molecule as an additional conformer. There can be multiple groups of conformers, but the molecules in each group must be adjacent. |
| `-s <SMARTS>` | Convert only molecules matching the SMARTS pattern specified |
| `-s <filename.xxx>` | Convert only molecules with the molecule in the file as a substructure |
| `--separate` | Separate disconnected fragments into individual molecular records |
| `--smallest <#N descriptor>` | Only convert the N molecules which have the smallest values of the specified descriptor. Preceding the descriptor by `~` inverts this filter. |
| `--sort` | Output molecules ordered by the value of a descriptor. See [Sorting molecules](#sorting-option). |
| `--title <title>` | Add or replace molecular title |
| `--unique`, `--unique <param>` | Do not convert duplicate molecules. See [Remove duplicate molecules](#removing-duplicates). |
| `--writeconformers` | Output multiple conformers as separate molecules |
| `-x <options>` | Format-specific output options. Use `-H <format-ID>` to see options allowed by a particular format, or see the appropriate section in [Supported File Formats and Options](index.html#file-formats). |
| `-v <SMARTS>` | Convert only molecules **NOT** matching the SMARTS pattern specified |
| `-z` | Compress the output with gzip (not on Windows) |
### Examples
The examples below assume the files are in the current directory. Otherwise you may need to include the full path to the files e.g. `/Users/username/Desktop/mymols.sdf` and you may need to put quotes around the filenames (especially on Windows, where they can contain spaces).
Standard conversion:
```
obabel ethanol.xyz -O ethanol.pdb
babel ethanol.xyz ethanol.pdb
```
Conversion if the files do not have an extension that describes their format:
```
obabel -ixyz ethanol.aa -opdb -O ethanol.bb
babel -ixyz ethanol.aa -opdb ethanol.bb
```
Molecules from multiple input files (which can have different formats) are normally combined in the output file:
```
obabel ethanol.xyz acetal.sdf benzene.cml -O allmols.smi
```
Conversion from a SMI file in STDIN to a Mol2 file written to STDOUT:
```
obabel -ismi -omol2
```
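A structure can also be supplied directly on the command line with the `-:` notation (SMILES is assumed if no input format is given); the SMILES string and output filename here are only illustrative:
```
obabel -:"c1ccccc1Br" -O bromobenzene.sdf --gen2d
```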
Split a multi-molecule file into `new1.smi`, `new2.smi`, etc.:
```
obabel infile.mol -O new.smi -m
```
In Windows this can also be written:
```
obabel infile.mol -O new*.smi
```
Multiple input files can be converted in batch format too. To convert all files ending in .xyz (`*.xyz`) to PDB files, you can type:
```
obabel *.xyz -opdb -m
```
Open Babel will not generate coordinates unless asked, so while a conversion from SMILES to SDF will generate a valid SDF file, the resulting file will not contain coordinates. To generate coordinates, use either the `--gen3d` or the `--gen2d` option:
```
obabel infile.smi -O out.sdf --gen3d
```
If you want to remove all hydrogens (i.e. make them all implicit) when doing the conversion the command would be:
```
obabel mymols.sdf -osmi -O outputfile.smi -d
```
If you want to add hydrogens (i.e. make them all explicit) when doing the conversion the command would be:
```
obabel mymols.sdf -O outputfile.smi -h
```
If you want to add hydrogens appropriate for pH 7.4 when doing the conversion, the command would be:
```
obabel mymols.sdf -O outputfile.smi -p
```
The protonation is done on an atom-by-atom basis so molecules with multiple ionizable centers will have all centers ionized.
Of course you don’t actually need to change the file type to modify the hydrogens. If you want to add all hydrogens the command would be:
```
obabel mymols.sdf -O mymols_H.sdf -h
```
Some functional groups e.g. nitro or sulphone can be represented either as `[N+]([O-])=O` or `N(=O)=O`. To convert all to the dative bond form:
```
obabel mymols.sdf -O outputfile.smi -b
```
If you only want to convert a subset of molecules you can define them using `-f` and `-l`. To convert molecules 2-4 of the file `mymols.sdf` type:
```
obabel mymols.sdf -f 2 -l 4 -osdf -O outputfile.sdf
```
Alternatively you can select a subset matching a SMARTS pattern, so to select all molecules containing bromobenzene use:
```
obabel mymols.sdf -O selected.sdf -s "c1ccccc1Br"
```
You can also select the subset that do *not* match a SMARTS pattern, so to select all molecules not containing bromobenzene use:
```
obabel mymols.sdf -O selected.sdf -v "c1ccccc1Br"
```
You can of course combine options, so to join molecules and add hydrogens type:
```
obabel mymols.sdf -O myjoined.sdf -h -j
```
Files compressed with gzip are read transparently, whether or not they have a .gz suffix:
```
obabel compressed.sdf.gz -O expanded.smi
```
On platforms other than Windows, the output file can be compressed with gzip, but note if you don’t specify the .gz suffix it will not be added automatically, which could cause problems when you try to open the file:
```
obabel mymols.sdf -O outputfile.sdf.gz -z
```
This next example reads the first 50 molecules in a compressed dataset and prints out the SMILES of those containing a pyridine ring, together with the index in the file, the ID (taken from an SDF property) as well as the output index:
```
obabel chembl_02.sdf.gz -osmi -l 50 -s c1ccccn1 --append chebi_id --addinindex --addoutindex
```
For the test data (taken from ChEMBLdb), this gave:
```
N1(CCN(CC1)c1c(cc2c3c1OCC(n3cc(c2=O)C(=O)O)C)F)C 3 100146 1
c1(c(=O)c2c(n(c1)OC)c(c(N1CC(CC1)CNCC)c(c2)F)F)C(=O)O 6 100195 2
S(=O)(=O)(Nc1ncc(cc1)C)c1c2c(c(N(C)C)ccc2)ccc1 22 100589 3
c1([nH]c2c(c1)cccc2)C(=O)N1CCN(c2c(N(CC)CC)cccn2)CC1 46 101536 4
```
### Format Options
Individual file formats may have additional formatting options. These are listed in the documentation for the individual formats (see [Supported File Formats and Options](index.html#file-formats)) or can be shown using the `-H <format-Id>` option, e.g. `-H cml`.
To use these additional options, input format options are preceded by `-a`, e.g. `-as`. Output format options, which are much more common, are preceded by `-x`, e.g. `-xn`. So to read the 2D coordinates (rather than the 3D) from a [CML file](index.html#chemical-markup-language) and generate an [SVG file](index.html#svg-2d-depiction) displaying the molecule on a black background, the relevant options are used as follows:
```
obabel mymol.cml out.svg -a2 -xb
```
### Append property values to the title
The command line option `--append` adds extra information to the title of the molecule.
The information can be calculated from the structure of the molecule or can originate from a property attached to the molecule (in the case of CML and SDF input files). It is used as follows:
```
obabel infile.sdf -osmi --append "MW CAT_NO"
```
`MW` is the ID of a descriptor which calculates the molecular weight of the molecule, and `CAT_NO` is a property of the molecule from the SDF input file. The values of these are added to the title of the molecule. For input files with many molecules these additions are specific to each molecule. (Note that the related option `--addtotitle` simply adds the same text to every title.)
The append option only takes one parameter, which means that it may be necessary to enclose all of the descriptor IDs or property names together in a single set of quotes.
If the name of the property in the SDF file (internally the Attribute in OBPairData) contains spaces, these spaces should be replaced by underscore characters, ‘_’. So the example above would also work for a property named `CAT NO`.
By default, the extra items are added to the title separated by spaces. But if the first character in the parameter is a punctuation character other than ‘_’, it is used as the separator instead. If the list starts with “t”, a tab character is used as a separator.
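For example, to separate the appended values with commas instead of spaces (a sketch based on the description above, reusing the `CAT_NO` property from the earlier example):
```
obabel infile.sdf -osmi --append ",MW CAT_NO"
```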
### Generating conformers for structures
The command line option `--conformer` allows performing conformer searches using a range of different algorithms and options:
* `--log` - output a log of the energies (default = no log)
* `--nconf #` - number of conformers to generate
Forcefield-based methods for finding stable conformers:
* `--systematic` - systematically (exhaustively) generate all conformers
* `--random` - randomly generate conformers
* `--weighted` - weighted rotor search for lowest energy conformer
* `--ff <name>` - select a forcefield (default = MMFF94)
Genetic algorithm based methods (default):
* `--children #` - number of children to generate for each parent (default = 5)
* `--mutability #` - mutation frequency (default = 5)
* `--converge #` - number of identical generations before convergence is reached
* `--score #` - scoring function [rmsd|energy] (default = rmsd)
You can use them like this (to generate 50 conformers, scoring with MMFF94 energies but default genetic algorithm options):
```
obabel EtOT5D.cml -O EtOT5D0.xyz --conformer --nconf 50 --score energy
```
or if you also wish to generate 3D coordinates, followed by conformer searching try something like this:
```
obabel ligand.babel.smi -O ligand.babel.sdf --gen3d --conformer --nconf 20 --weighted
```
### Filtering molecules from a multimolecule file
Six of the options above can be used to filter molecules:
* `-s` - convert molecules that match a SMARTS string
* `-v` - convert molecules that don’t match a SMARTS string
* `-f` and `-l` - convert molecules in a certain range
* `--unique` - only convert unique molecules (that is, remove duplicates)
* `--filter` - convert molecules that meet specified chemical (and other) criteria
This section focuses on the `--filter` option, which is very versatile and can select a subset of molecules based either on properties imported with the molecule (as from a SDF file) or from calculations made by Open Babel on the molecule.
The aim has been to make the option flexible and intuitive to use; don’t be put off by the long description.
You use it like this:
```
obabel filterset.sdf -osmi --filter "MW<130 ROTATABLE_BOND > 2"
```
It takes one parameter which probably needs to be enclosed in double quotes to avoid confusing the shell or operating system. (You don’t need the quotes with the Windows GUI.) The parameter contains one or more conditional tests. By default, these all have to be true for the molecule to be converted. As well as this implicit AND behaviour, you can write a full Boolean expression (see below). As you can see, spaces are optional in sensible places, and the conditional tests can also be separated by commas or semicolons.
You can filter on two types of property:
* An SDF property, as the identifier ROTATABLE_BOND could be. There is no need for it to be previously known to Open Babel.
* A descriptor name (internally, an ID of an OBDescriptor object). This is a plug-in class so that new objects can easily be added. MW is the ID of a descriptor which calculates molecular weight. You can see a list of available descriptors using:
```
obabel -L descriptors
```
or from a menu item in the GUI.
The descriptor names are case-insensitive. Property names, however, are currently case-sensitive. Both types of identifier can contain letters, numbers and underscores, ‘_’. Properties can contain spaces, but then when writing the name in the filter parameter, you need to replace them with underscores. So in the example above, the test would also be suitable for a property ‘ROTATABLE BOND’.
Open Babel uses a SDF-like property (internally this is stored in the class OBPairData) in preference to a descriptor if one exists in the molecule. So with the example file, which can be found [here](https://raw.githubusercontent.com/openbabel/openbabel/master/test/files/filterset.sdf):
```
obabel filterset.sdf -osmi --filter "logP>5"
```
converts only a molecule with a property logP=10.900, since the others do not have this property and logP, being also a descriptor, is calculated and is always much less than 5.
If a property does not have a conditional test, then it returns true only if it exists. So:
```
obabel filterset.sdf -osmi --filter "ROTATABLE_BOND MW<130"
```
converts only those molecules with a ROTATABLE_BOND property and a molecular weight less than 130. If you wanted to also include all the molecules without ROTATABLE_BOND defined, use:
```
obabel filterset.sdf -osmi --filter "!ROTATABLE_BOND || (ROTATABLE_BOND & MW<130)"
```
The ! means negate. AND can be & or &&, OR can be | or ||. The brackets are not strictly necessary here because & has precedence over | in the normal way. If the result of a test doesn’t matter, it is parsed but not evaluated. In the example, the expression in the brackets is not evaluated for molecules without a ROTATABLE_BOND property. This doesn’t matter here, but if evaluation of a descriptor involved a lot of computation, it would pay to include it late in the boolean expression so that there is a chance it is skipped for some molecules.
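For example, a sketch (using the same example file) that places a cheap numerical test before a more expensive SMARTS test, so the SMARTS match can be skipped for molecules that already fail the first condition:
```
obabel filterset.sdf -osmi --filter "MW<130 & s='CN'"
```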
Descriptors must have a conditional test and it is an error if they don’t. The default test, as used by MW or logP, is a numerical one, but the parsing of the text, and what the test does is defined in each descriptor’s code (a virtual function in the OBDescriptor class). Three examples of this are described in the following sections.
#### String descriptors[¶](#string-descriptors)
```
obabel filterset.sdf -osmi --filter "title='Ethanol'"
```
The descriptor *title*, when followed by a string (here enclosed by single quotes), does a case-sensitive string comparison. (‘ethanol’ wouldn’t match anything in the example file.) The comparison does not have to be just equality:
```
obabel filterset.sdf -osmi --filter "title>='D'"
```
converts molecules with titles Dimethyl Ether and Ethanol in the example file.
It is not always necessary to use the single quotes when the meaning is unambiguous: the two examples above work without them. But a numerical, rather than a string, comparison is made if both operands can be converted to numbers. This can be useful:
```
obabel filterset.sdf -osmi --filter "title<129"
```
will convert the molecules with titles 56, 123 and 126, which is probably what you wanted.
```
obabel filterset.sdf -osmi --filter "title<'129'"
```
converts only 123 and 126 because a string comparison is being made.
String comparisons can use `*` as a wildcard if used as the first or last character of the string (anywhere else a `*` is a normal character). So `--filter "title='*ol'"` will match molecules with titles ‘methanol’, ‘ethanol’ etc. and `--filter "title='eth*'"` will match ‘ethanol’, ‘ethyl acetate’, ‘ethical solution’ etc. Use a `*` at both the first and last characters to test for the occurrence of a string, so `--filter "title='*ol*'"` will match ‘oleum’, ‘polonium’ and ‘ethanol’.
#### SMARTS descriptor[¶](#smarts-descriptor)
This descriptor will do a SMARTS test (substructure and more) on the molecules. The smarts ID can be abbreviated to s and the = is optional. More than one SMARTS test can be done:
```
obabel filterset.sdf -osmi --filter "s='CN' s!='[N+]'"
```
This provides a more flexible alternative to the existing `-s` and `-v` options, since the SMARTS descriptor test can be combined with other tests.
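For example, a sketch (again on the example file) combining the SMARTS descriptor with a numerical descriptor test in a single filter:
```
obabel filterset.sdf -osmi --filter "s='c1ccccc1' logP<5"
```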
#### InChI descriptor[¶](#inchi-descriptor)
```
obabel filterset.sdf -osmi --filter "inchi='InChI=1/C2H6O/c1-2-3/h3H,2H2,1H3'"
```
will convert only ethanol. It uses the default parameters for InChI comparison, so there may be some messages from the InChI code. There is quite a lot of flexibility on how the InChI is presented (you can miss out the non-essential bits):
```
obabel filterset.sdf -osmi --filter "inchi='1/C2H6O/c1-2-3/h3H,2H2,1H3'"
obabel filterset.sdf -osmi --filter "inchi='C2H6O/c1-2-3/h3H,2H2,1H3'"
obabel filterset.sdf -osmi --filter "inchi=C2H6O/c1-2-3/h3H,2H2,1H3"
obabel filterset.sdf -osmi --filter "InChI=1/C2H6O/c1-2-3/h3H,2H2,1H3"
```
all have the same effect.
The comparison of the InChI string is done only as far as the parameter’s length. This means that we can take advantage of InChI’s layered structure:
```
obabel filterset.sdf -osmi --filter "inchi=C2H6O"
```
will convert both Ethanol and Dimethyl Ether.
### Substructure and similarity searching[¶](#substructure-and-similarity-searching)
For information on using **obabel** for substructure searching and similarity searching, see [Molecular fingerprints and similarity searching](index.html#fingerprints).
### Sorting molecules[¶](#sorting-molecules)
The `--sort` option is used to output molecules ordered by the value of a descriptor:
```
obabel infile.xxx outfile.xxx --sort desc
```
If the descriptor desc provides a numerical value, the molecule with the smallest value is output first. For descriptors that provide a string output the order is alphabetical, but for the InChI descriptor a more chemically informed order is used (e.g. “CH4” comes before “C2H6”, and “CH4” before “ClH”, hydrogen chloride).
The order can be reversed by preceding the descriptor name with `~`, e.g.:
```
obabel infile.xxx outfile.yyy --sort ~logP
```
As a shortcut, the value of the descriptor can be appended to the molecule name by adding a `+` to the descriptor, e.g.:
```
obabel aromatics.smi -osmi --sort ~MW+
c1ccccc1C=C styrene 104.149
c1ccccc1C toluene 92.1384
c1ccccc1 benzene 78.1118
```
### Remove duplicate molecules[¶](#remove-duplicate-molecules)
The `--unique` option is used to remove, i.e. not output, any chemically identical molecules during conversion:
```
obabel infile.xxx outfile.yyy --unique [param]
```
The optional parameter *param* defines what is regarded as “chemically identical”. It can be the name of any descriptor, although not many are likely to be useful. If *param* is omitted, the InChI descriptor is used. Other useful descriptors are ‘cansmi’ and ‘cansmiNS’ (canonical SMILES, with and without stereochemical information), ‘title’ and truncated InChI (see below).
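For example, a sketch (using the document’s placeholder file names) that treats stereoisomers as duplicates by comparing canonical SMILES written without stereochemical information:
```
obabel infile.xxx outfile.yyy --unique cansmiNS
```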
A message is output for each duplicate found:
```
Removed methyl benzene - a duplicate of toluene (#1)
```
Clearly, this is more useful if each molecule has a title. The `(#1)` is the number of duplicates found so far.
If you wanted to identify duplicates but not output the unique molecules, you could use the [null format](index.html#outputs-nothing):
```
obabel infile.xxx -onul --unique
```
#### Truncated InChI[¶](#truncated-inchi)
It is possible to relax the criterion by which molecules are regarded as “chemically identical” by using a truncated InChI specification as *param*. This takes advantage of the layered structure of InChI. So to remove duplicates, treating stereoisomers as the same molecule:
```
obabel infile.xxx outfile.yyy --unique /nostereo
```
Truncated InChI specifications start with `/` and are case-sensitive. *param* can be a concatenation of these e.g. `/nochg/noiso`:
```
/formula formula only
/connect formula and connectivity only
/nostereo ignore E/Z and sp3 stereochemistry
/nosp3 ignore sp3 stereochemistry
/noEZ ignore E/Z stereochemistry
/nochg ignore charge and protonation
/noiso ignore isotopes
```
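For example, a sketch (again with placeholder file names) that also ignores charge state and isotopic labelling when deciding whether two molecules are duplicates:
```
obabel infile.xxx outfile.yyy --unique /nochg/noiso
```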
#### Multiple files[¶](#multiple-files)
The input molecules do not have to be in a single file. So to collect all the unique molecules from a set of MOL files:
```
obabel *.mol uniquemols.sdf --unique
```
If you want the unique molecules to remain in individual files:
```
obabel *.mol U.mol -m --unique
```
On the GUI use the form:
```
obabel *.mol U*.mol --unique
```
Either form is acceptable on the Windows command line.
The unique molecules will be in files with the original name prefixed by ‘U’. Duplicate molecules will be in similar files but with zero length, which you will have to delete yourself.
### Aliases for chemical groups[¶](#aliases-for-chemical-groups)
There is a limited amount of support for representing common chemical groups by an alias, e.g. benzoic acid as `Ph-COOH`, with two alias groups. Internally in Open Babel, the molecule usually has a ‘real’ structure with the alias names present as only an alternative representation. For MDL MOL and SD files alias names can be read from or written to an ‘A’ line. The more modern RGroup representations are not yet recognized. Reading is transparent; the alias group is expanded and the ‘real’ atoms given reasonable coordinates if the molecule is 2D or 3D. Writing in alias form, rather than the ‘real’ structure, requires the use of the `-xA` option. SVGFormat will also display any aliases present in a molecule if the `-xA` option is set.
The alias names that are recognized are in the file `superatoms.txt` which can be edited.
Normal molecules can have certain common groups given alternative alias representation using the `--genalias` option. The groups that are recognized and converted are a subset of those that are read. Displaying or writing them still requires the `-xA` option. For example, if `aspirin.smi` contained `O=C(O)c1ccccc1OC(=O)C`, it could be displayed with the aliases `COOH` and `OAc` by:
```
obabel aspirin.smi -O out.svg --genalias -xA
```
### Forcefield energy and minimization[¶](#forcefield-energy-and-minimization)
Open Babel supports a number of forcefields which can be used for energy evaluation as well as energy minimization. The available forcefields are listed as follows:
```
C:\>obabel -L forcefields
GAFF        General Amber Force Field (GAFF).
Ghemical    Ghemical force field.
MMFF94      MMFF94 force field.
MMFF94s     MMFF94s force field.
UFF         Universal Force Field.
```
To evaluate a molecule’s energy using a forcefield, use the `--energy` option. The energy is put in an OBPairData object “Energy” which is accessible via an SDF or CML property or `--append` (to title). Use `--ff <forcefield_id>` to select a forcefield (default is Ghemical) and `--log` for a log of the energy calculation. The simplest way to output the energy is as follows:
```
obabel infile.xxx -otxt --energy --append "Energy"
```
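For example, a sketch (with the placeholder file name from above) that evaluates the energy with MMFF94 rather than the default Ghemical forcefield:
```
obabel infile.xxx -otxt --energy --ff MMFF94 --append "Energy"
```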
To perform forcefield minimization, the `--minimize` option is used. The following shows typical usage:
```
obabel infile.xxx -O outfile.yyy --minimize --steps 1500 --sd
```
The available options are as follows:
```
--log output a log of the minimization process (default= no log)
--crit <converge> set convergence criteria (default=1e-6)
--sd use steepest descent algorithm (default = conjugate gradient)
--newton use Newton2Num linesearch (default = Simple)
--ff <forcefield-id> select a forcefield (default = Ghemical)
--steps <number> specify the maximum number of steps (default = 2500)
--cut use cut-off (default = don't use cut-off)
--rvdw <cutoff> specify the VDW cut-off distance (default = 6.0)
--rele <cutoff> specify the Electrostatic cut-off distance (default = 10.0)
--freq <steps> specify the frequency to update the non-bonded pairs (default = 10)
```
Note that for both `--energy` and `--minimize`, hydrogens are made explicit before energy evaluation.
### Aligning molecules or substructures[¶](#aligning-molecules-or-substructures)
The `--align` option aligns molecules to the first molecule provided.
It is typically used with the `-s` option to specify an alignment based on a substructure:
```
obabel pattern.www dataset.xxx -O outset.yyy -s SMARTS --align
```
Here, only molecules matching the specified SMARTS pattern are converted and are aligned by having all their atom coordinates modified. The atoms that are used in the alignment are those matched by SMARTS in the first output molecule. The subsequent molecules are aligned so that the coordinates of atoms equivalent to these are as nearly as possible the same as those of the pattern atoms.
The atoms in the various molecules can be in any order.
The alignment ignores hydrogen atoms but includes symmetry.
Note that the standalone program **obfit** has similar functionality.
The first input molecule could also be part of the data set:
```
obabel dataset.xxx -O outset.yyy -s SMARTS --align
```
This form is useful for ensuring that a particular substructure always has the same orientation in a 2D display of a set of molecules.
0D molecules, for example from SMILES, are given 2D coordinates before alignment.
See documentation for the `-s` option for its other possible parameters. For example, the matching atoms could be those of a molecule in a specified file.
If the `-s` option is not used, all of the atoms in the first molecule are used as pattern atoms. The order of the atoms must be the same in all the molecules.
The output molecules have a property (represented internally as OBPairData) called `rmsd`, which is a measure of the quality of the fit. To attach it to the title of each molecule use
`--append rmsd`.
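For example, a sketch (reusing the placeholder names and SMARTS pattern from above) that aligns the matching molecules and appends the fit quality to each title:
```
obabel pattern.www dataset.xxx -O outset.yyy -s SMARTS --align --append rmsd
```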
To output the two conformers closest to the first conformer in a dataset:
```
obabel dataset.xxx -O outset.yyy --align --smallest 2 rmsd
```
### Specifying the speed of 3D coordinate generation[¶](#specifying-the-speed-of-3d-coordinate-generation)
When you use the `--gen3d` option, you can specify the speed and quality. The following shows typical usage:
```
obabel infile.smi -O out.sdf --gen3d fastest
```
The available options are as follows:
| option | description |
| --- | --- |
| `fastest` | No cleanup |
| `fast` | Force field cleanup (100 cycles) |
| `med` (default) | Force field cleanup (100 cycles) + Fast rotor search (only one permutation) |
| `slow` | Force field cleanup (250 cycles) + Fast rotor search (permute central rotors) |
| `slowest` | Force field cleanup (500 cycles) + Slow rotor search |
| `better` | Same as `slow` |
| `best` | Same as `slowest` |
| `dist`, `dg` | Use distance geometry method (unstable) |
You can also specify the speed by an integer from `1` (slowest) to `5` (fastest).
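For example, a sketch requesting a mid-range speed setting by number (assuming `infile.smi` as before):
```
obabel infile.smi -O out.sdf --gen3d 3
```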
The Open Babel GUI[¶](#the-open-babel-gui)
---
The **obabel** command line program converts chemical objects (currently molecules or reactions) from one file format to another. The Open Babel graphical user interface (GUI) is an alternative to using the command line and has the same capabilities. Since Open Babel 2.3, the GUI is available cross-platform on Windows, Linux and MacOSX. On Windows, you can find it in the Start Menu in the Open Babel folder; on Linux and MacOSX, the GUI can be started with the **obgui** command.
Since the functionality of the GUI mirrors that of **obabel**, you should consult the [previous chapter](index.html#obabel) to learn about available features and how to use them. This chapter describes the general use of the GUI and then focuses on features that are specific to the GUI.
### Basic operation[¶](#basic-operation)
Although the GUI presents many options, the basic operation is straightforward:
* Select the type of the input file from the dropdown list.
* Click the “…” button and select the file. Its contents are displayed in the textbox below.
* Choose the output format and file in a similar way. You can merely display the output without saving it by not selecting an output file or by checking “Output below only..”.
* Click the “Convert” button.
The message window below the button gives the number of molecules converted, and the contents of the output file are displayed.
By default, all the molecules in an input file are converted if the output format allows multiple molecules.
**Screenshot of GUI running on BioLinux 6.0, an Ubuntu derivative**
### Options[¶](#options)
The options in the middle are those appropriate for the type of chemical object being converted (molecule or reaction) and the input and output formats. They are derived from the description text that is displayed with the `-Hxxx` option in the command line interface and with the “Format info” buttons here. You can switch off the display of any of the various types of option using the View menu if the screen is getting too cluttered.
### Multiple input files[¶](#multiple-input-files)
You can select multiple input files in the input file dialog in the normal way (for example, using the Control key in Windows). In the input filename box, each filename is displayed relative to the path shown just above the box, which is the path of the first file. You can display any of the files by moving the highlight with Tab/Shift Tab, Page Up/Down, the mouse wheel, or by double clicking.
Selecting one or more new file names normally removes those already present, but they can instead be appended by holding the Control key down when leaving the file selection dialog.
Files can also be dragged and dropped (e.g. from Windows Explorer): the dropped file is added to those already present when the Control key is pressed, and replaces them when it is not.
Normally each file is converted according to its extension and the input files do not have to be all the same, but if you want to use non-standard file names set the checkbox “Use this format for all input files…”
If you want to combine multiple molecules (from one or more files)
into a single molecule with disconnected parts, use option “Join all input molecules…”
### Wildcards in filenames[¶](#wildcards-in-filenames)
When input filenames are typed in directly, any of them can contain the wildcard characters `*` and `?`. Typing Enter will replace these by a list of the matching files. The wildcarded names can be restored by typing Enter while holding down the Shift key. The original or the expanded versions will behave the same when the
“Convert” button is pressed.
By including the wildcard `*` in both the input and output filenames you can carry out batch conversion. Suppose there were files `first.smi`, `second.smi`, `third.smi`. Using `*.smi` as the input filename and `*.mol` as the output filename would produce three files `first.mol`, `second.mol` and `third.mol`. If the output filename was `NEW_*.mol`, then the output files would be `NEW_first.mol`, etc.
### Local input[¶](#local-input)
By checking the “Input below…” checkbox you can type the input text directly. The text box changes colour to remind you that it is this text and not the contents of any files that will be converted.
### Output file[¶](#output-file)
The output file name can be fully specified with a path, but if it is not, then it is considered to be relative to the input file path.
### Graphical display[¶](#graphical-display)
The chemical structures being converted can be displayed (as SVG)
in an external program. By default this is Firefox but it can be changed from an item on the View menu (for instance, Opera and Chrome work fine). When “Display in firefox” (under the output file name) is checked, the structures will be shown in a new Firefox tab. With multiple molecules the display can be zoomed (mousewheel)
and panned (dragging with mouse button depressed). Up to 100 molecules are easily handled but with more the system may be slow to manipulate. It may also be slow to generate, especially if 2D atom coordinates have to be calculated (e.g.from SMILES). A new Firefox tab is opened each time Convert is pressed.
### Using a restricted set of formats[¶](#using-a-restricted-set-of-formats)
It is likely that you will only be interested in a subset of the large range of formats handled by Open Babel.
You can restrict the choice offered in the dropdown boxes, which makes routine selection easier. Clicking “Select set of formats” on the View menu allows the formats to be displayed to be selected. Subsequently,
clicking “Use restricted set of formats” on the View menu toggles this facility on and off.
Using a restricted set overcomes an irritating bug in the Windows version. In the file Open and Save dialogs the files displayed can be filtered by the *current format*, *All Chemical Formats*, or *All Files*. The *All Chemical Formats* filter will only display the first 30 possible formats (alphabetically). The *All Files* will indeed display all files and the conversion processes are unaffected.
### Other features[¶](#other-features)
Most of the interface parameters, such as the selected format and the window size and position, are remembered between sessions.
Using the View menu, the input and output text boxes can be set not to wrap the text. At present you have to restart the program for this to take effect.
The message box at the top of the output text window receives program output on error and audit logging, and some progress reports. It can be expanded by dragging down the divider between the windows.
### Example files[¶](#example-files)
In the Windows distribution, there are three chemical files included to try out:
* **serotonin.mol** which has 3D atom coordinates
* **oxamide.cml** which is 2D and has a large number of properties that will be seen when converting to SDF
* **FourSmallMols.cml** which (unsurprisingly) contains four molecules with no atom coordinates and can be used to illustrate the handling of multiple molecules:
Setting the output format to SMI (which is easy to see), you can convert only the second and third molecules by entering `2` and `3` in the appropriate option boxes. Or convert only molecules with C-O single bonds by entering `CO` in the SMARTS option box.
Tutorial on using the GUI[¶](#tutorial-on-using-the-gui)
---
This chapter gives step-by-step descriptions on how to use Open Babel’s graphical user interface (GUI) to carry out some common tasks. It may also be used as the basis of a practical on cheminformatics, and to this end several questions are interspersed with the tutorial text.
For more information on the GUI itself, see the [previous chapter](index.html#gui).
### Converting chemical file formats[¶](#converting-chemical-file-formats)
The most common use of Open Babel is to convert chemical file formats. The following examples show how this is done.
#### File conversion[¶](#file-conversion)
Let’s convert a PDB file to MOL format:
* Create a folder on the Desktop called `Work`
* Download the PDB file for insulin (`4ins`) from the [Protein Data Bank](http://www.rcsb.org/pdb/download/downloadFile.do?fileFormat=pdb&compression=NO&structureId=4INS) and save it in the `Work` folder
* Set the input file format to PDB and the input filename to the downloaded PDB file
* Set the output file format to MOL and the output filename to `4ins.mol` in the `Work` folder
* Now click CONVERT
#### Converting without files[¶](#converting-without-files)
Rather than use input and output files, it is possible to paste the contents of a chemical file format into the input box, and see the results of the conversion in the output box.
Here we will try this with the SMILES format, and illustrate how stereochemistry is handled by SMILES:
* Choose the SMILES format as the input format
* Tick the box Input below (ignore input file)
* Copy and paste the following SMILES strings (and molecule titles) into the input box:
```
I/C=C/F           I and F are trans
I/C=C\F           I and F are cis
I[C@](Br)(Cl)F    Anticlockwise from Iodine
I[C@@](Br)(Cl)F   Clockwise from Iodine
```
* Choose the SMILES format as the output format
* Tick the box for Output below only and Display in Firefox
* Click CONVERT.
In the resulting depiction, note that Open Babel only sets a single stereobond for a chiral centre. This is not ambiguous - it means that the stereobond is either above or below the plane, with the remaining three bonds on the opposite side of the plane.
1. Can you figure out whether the depiction of the tetrahedral centre is consistent with the SMILES string?
Note
Open Babel 2.3.2 introduces a twisted double bond to indicate unknown cis/trans stereochemistry (e.g. IC=CF). See [here](http://baoilleach.blogspot.ie/2012/04/getting-your-double-bonds-in-twist-how.html) for more info.
### Filtering structures[¶](#filtering-structures)
Setup
We are going to use a dataset of 16 benzodiazepines. These all share the following substructure (image from [Wikipedia](http://en.wikipedia.org/wiki/Benzodiazepine)):
* Create a folder on the Desktop called `Work` and save [benzodiazepines.sdf](../_static/benzodiazepines.sdf) there
* Set up a conversion from SDF to SMI and set `benzodiazepines.sdf` as the input file
* Tick Display in Firefox
* Click CONVERT
Remove duplicates
If you look carefully at the depictions of the first and last molecules (top left and bottom right) you will notice that they depict the same molecule.
2. Look at the SMILES strings for the first and last molecules. If the two molecules are actually the same, why are the two SMILES strings different? (Hint: try using `CAN - canonical SMILES` instead of `SMI`.)
We can remove duplicates based on the InChI (for example):
* Tick the box beside remove duplicates by descriptor and enter `inchi` as the descriptor
* Click CONVERT
Duplicates can be removed based on any of the available descriptors. The full list can be found in the menu under Plugins, descriptors.
3. Are any of the other descriptors useful for removing duplicates?
Filtering by substructure
4. How many of the molecules contain the following substructure?
The SMILES string for this molecule is `c1ccccc1F`. This is also a valid SMARTS string.
5. Use the [SMARTSviewer](http://smartsview.zbh.uni-hamburg.de/) at the ZBH Center for Bioinformatics, University of Hamburg, to verify the meaning of the SMARTS string `c1ccccc1F`.
Let’s filter the molecules using this substructure:
* In the Options section, enter `c1ccccc1F` into the box labeled Convert only if match SMARTS or mols in file
* Click CONVERT.
6. How many structures are matched?
* Now find all those that are not matched by preceding the SMARTS filter with a tilde `~`, i.e. `~c1ccccc1F`.
* Click CONVERT.
7. How many structures are not matched?
Filter by descriptor
As discussed above, Open Babel provides several descriptors. Here we will focus on the molecular weight, `MW`.
To begin with, let’s show the molecular weights in the depiction:
* Clear the existing title by entering a single space into the box Add or replace molecule title
* Set the title to the molecular weight by entering `MW` into the box Append properties or descriptors in list to title
* Click CONVERT
You should see the molecular weight below each molecule in the depiction. Notice also that the SMILES output has the molecular weight beside each molecule. This could be useful for preparing a spreadsheet with the SMILES string and various calculated properties.
Now let’s sort by molecular weight:
* Enter `MW` into the box Sort by descriptor and click CONVERT
Finally, here’s how to filter based on molecular weight. Note that none of the preceding steps are necessary for the filter to work. We will convert all those molecules with molecular weights between 300 and 320 (in the following expression `&` signifies Boolean AND):
* Enter `MW>300 & MW<320` into the box Filter convert only when tests are true and click CONVERT
8. If `|` (the pipe symbol, beside Z on the UK keyboard) signifies Boolean OR, how would you instead convert all those molecules that do not have molecular weights between 300 and 320?
Note
Open Babel 2.3.2 allows specific substructures to be highlighted in a depiction. It also allows depictions to be aligned based on a substructure.
### Substructure and similarity searching a large dataset[¶](#substructure-and-similarity-searching-a-large-dataset)
Open Babel provides a format called the `fs -- fastsearch index` which should be used when searching large datasets (like ChEMBL) for molecules similar to a particular query. There are faster ways of searching (like using a chemical database) but FastSearch is convenient, and should give reasonable performance for most people.
To demonstrate similarity searching, we will use the first 1000 molecules in the latest release of ChEMBL:
* Download the 2D SDF version of ChEMBL, `chembl_nn.sdf.gz`, from the [ChEMBLdb download site](ftp://ftp.ebi.ac.uk/pub/databases/chembl/ChEMBLdb/latest/) and save in your Work folder. (Note: this is a gzipped file, but Open Babel will handle this without problems.)
* Set up an SDF to SDF conversion, set `chembl_nn.sdf.gz` as the input file and `1000_chembl.sdf` as the output file.
* Only convert the first 1000 molecules by entering `1000` in the box End import at molecule # specified.
* Click CONVERT
We are going to use the following structure for substructure and similarity searching. It can be represented by the SMILES string `Nc1ccc(N)cc1`.
Next, we will create a FastSearch index for this dataset of 1000 molecules:
* Convert `1000_chembl.sdf` from SDF to FS format, with an output filename of `1000_chembl.fs`
By using this FastSearch index, the speed of substructure and similarity searching is much improved. First of all, let’s do a substructure search:
* Set up a conversion from FS to SMILES with `1000_chembl.fs` as the input file. Tick the box for Output below only and Display in Firefox
* Enter `Nc1ccc(N)cc1` into the box Convert only if match SMARTS or mol in file
* Click CONVERT
9. How does the speed of the substructure search compare to if you used `1000_chembl.sdf` as the input file instead?
Next, let’s find the 5 most similar molecules to the same query. The Tanimoto coefficient of a path-based fingerprint is used as the measurement of similarity. This has a value from 0.0 to 1.0 (maximum similarity) and we will display the value below each molecule:
* Set up the FS to SMILES conversion as before, and again enter `Nc1ccc(N)cc1` into the box Convert only if match SMARTS or mol in file
* Enter `5` into the box Do similarity search: #mols or # as min Tanimoto
* Tick the box Add Tanimoto coefficient to title in similarity search
* Click CONVERT
10. Look at the 5 most similar molecules. Can you tell why they were regarded as similar to the query?
Molecular fingerprints and similarity searching[¶](#molecular-fingerprints-and-similarity-searching)
---
Molecular fingerprints are a way of encoding the structure of a molecule. The most common type of fingerprint is a series of binary digits (bits) that represent the presence or absence of particular substructures in the molecule. Comparing fingerprints allows you to determine the similarity between two molecules, to find matches to a query substructure, etc.
Open Babel provides several fingerprints of different types:
* [Fingerprint format](index.html#fingerprint-format-details): the path-based fingerprint FP2; substructure based fingerprints FP3, FP4 and MACCS; user-defined substructures
* [Multilevel Neighborhoods of Atoms (MNA) (mna)](index.html#multilevel-neighborhoods-of-atoms-mna): a circular fingerprint
* [MolPrint2D format (mpd)](index.html#molprint2d-format): a circular fingerprint
* [Spectrophores™](index.html#spectrophores): a fingerprint that encodes the 3D structure of a molecule
The next two sections describe the *Fingerprint format* and *Spectrophores* in depth. For the others, see the relevant sections listed above.
### Fingerprint format[¶](#fingerprint-format)
The [Fingerprint format (fpt)](index.html#fingerprint-format) is a utility file format that provides access to a number of substructure-based fingerprints, and that enables the user to carry out similarity and substructure searching. You can see the available fingerprints using the following command:
```
$ babel -L fingerprints
FP2     Indexes linear fragments up to 7 atoms.
FP3     SMARTS patterns specified in the file patterns.txt
FP4     SMARTS patterns specified in the file SMARTS_InteLigand.txt
MACCS   SMARTS patterns specified in the file MACCS.txt
```
At present there are four types of fingerprints:
* **FP2**, a path-based fingerprint which indexes small molecule fragments based on linear segments of up to 7 atoms (somewhat similar to the Daylight fingerprints):
> A molecule structure is analysed to identify linear fragments of length from 1-7 atoms. Single atom fragments of C, N, and O are ignored. A fragment is terminated when the atoms form a ring.
> For each of these fragments the atoms, bonding and whether they constitute a complete ring is recorded and saved in a set so that there is only one of each fragment type. Chemically identical versions, (i.e. ones with the atoms listed in reverse order and rings listed starting at different atoms) are identified and only a single canonical fragment is retained.
> Each remaining fragment is assigned a hash number from 0 to 1020 which is used to set a bit in a 1024 bit vector
>
* **FP3** uses a series of SMARTS queries stored in `patterns.txt`
* **FP4** uses a series of SMARTS queries stored in `SMARTS_InteLigand.txt`
* **MACCS** uses the SMARTS patterns in `MACCS.txt`
Note
Note that you can tailor the latter three fingerprints to your own needs by adding your own SMARTS queries to these files. On UNIX and Mac systems, these files are frequently found in `/usr/local/share/openbabel` under a directory for each version of Open Babel.
See also
The sections on the [fingerprint](index.html#fingerprint-format) and [fastsearch](index.html#fastsearch-format) formats contain additional detail.
#### Similarity searching[¶](#similarity-searching)
##### Small datasets[¶](#small-datasets)
For relatively small datasets (<10,000’s) it is possible to do similarity searches without the need to build a similarity index; however, larger datasets (up to a few million) can be searched rapidly once a fastsearch index has been built.
On small datasets these fingerprints can be used in a variety of ways. The following command gives you the Tanimoto coefficient between a SMILES string in `mysmiles.smi` and all the molecules in `mymols.sdf`:
```
babel mysmiles.smi mymols.sdf -ofpt
MOL_00000067   Tanimoto from first mol = 0.0888889
MOL_00000083   Tanimoto from first mol = 0.0869565
MOL_00000105   Tanimoto from first mol = 0.0888889
MOL_00000296   Tanimoto from first mol = 0.0714286
MOL_00000320   Tanimoto from first mol = 0.0888889
MOL_00000328   Tanimoto from first mol = 0.0851064
MOL_00000338   Tanimoto from first mol = 0.0869565
MOL_00000354   Tanimoto from first mol = 0.0888889
MOL_00000378   Tanimoto from first mol = 0.0816327
MOL_00000391   Tanimoto from first mol = 0.0816327
11 molecules converted
```
The default fingerprint used is the FP2 fingerprint. You can change the fingerprint using the `f` output option as follows:
```
babel mymols.sdf -ofpt -xfFP3
```
The `-s` option of **babel** is used to filter by SMARTS string. If you wanted to know the similarity only to the substituted bromobenzenes in `mymols.sdf` then you might combine commands like this (note: if the query molecule does not match the SMARTS string this will not work as expected, as the first molecule in the database that matches the SMARTS string will instead be used as the query):
```
babel mysmiles.smi mymols.sdf -ofpt -s c1ccccc1Br
MOL_00000067   Tanimoto from first mol = 0.0888889
MOL_00000083   Tanimoto from first mol = 0.0869565
MOL_00000105   Tanimoto from first mol = 0.0888889
```
If you don’t specify a query file, **babel** will just use the first molecule in the database as the query:
```
babel mymols.sdf -ofpt
MOL_00000067
MOL_00000083   Tanimoto from MOL_00000067 = 0.810811
MOL_00000105   Tanimoto from MOL_00000067 = 0.833333
MOL_00000296   Tanimoto from MOL_00000067 = 0.425926
MOL_00000320   Tanimoto from MOL_00000067 = 0.534884
MOL_00000328   Tanimoto from MOL_00000067 = 0.511111
MOL_00000338   Tanimoto from MOL_00000067 = 0.522727
MOL_00000354   Tanimoto from MOL_00000067 = 0.534884
MOL_00000378   Tanimoto from MOL_00000067 = 0.489362
MOL_00000391   Tanimoto from MOL_00000067 = 0.489362
10 molecules converted
```
##### Large datasets[¶](#large-datasets)
On larger datasets it is necessary to first build a fastsearch index. This is a new file that stores a database of fingerprints for the files indexed. You will still need to keep both the new .fs fastsearch index and the original files. However, the new index will allow significantly faster searching and similarity comparisons. The index is created with the following command:
```
babel mymols.sdf -ofs
```
This builds `mymols.fs` with the default fingerprint (unfolded). The following command uses the index to find the 5 most similar molecules to the molecule in `query.mol`:
```
babel mymols.fs results.sdf -squery.mol -at5
```
or to get the matches with Tanimoto>0.6 to 1,2-dicyanobenzene:
```
babel mymols.fs results.sdf -sN#Cc1ccccc1C#N -at0.6
```
#### Substructure searching[¶](#substructure-searching)
##### Small datasets[¶](#id1)
This command will find all molecules containing 1,2-dicyanobenzene and return the results as SMILES strings:
```
babel mymols.sdf -sN#Cc1ccccc1C#N results.smi
```
If all you want output are the molecule names then adding `-xt` will return just the molecule names:
```
babel mymols.sdf -sN#Cc1ccccc1C#N results.smi -xt
```
The parameter of the `-s` option in these examples is actually SMARTS, which allows a richer matching specification, if required. It does mean that the aromaticity of atoms and bonds is significant; use `[#6]` rather than `C` to match both aliphatic and aromatic carbon.
The `-s` option’s parameter can also be a file name with an extension. The file must contain a molecule, which means only substructure matching is possible (rather than full SMARTS). The matching is also slightly more relaxed with respect to aromaticity.
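For example, a sketch where the query structure is stored in a hypothetical `query.mol` file rather than given as SMARTS:
```
babel mymols.sdf -s query.mol results.smi
```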
##### Large datasets[¶](#id2)
First of all, you need to create a fastsearch index (see above). The index is created with the following command:
```
babel mymols.sdf -ofs
```
Substructure searching is as for small datasets, except that the fastsearch index is used instead of the original file. This command will find all molecules containing 1,2-dicyanobenzene and return the results as SMILES strings:
```
babel mymols.fs -ifs -sN#Cc1ccccc1C#N results.smi
```
If all you want output are the molecule names then adding `-xt` will return just the molecule names:
```
babel mymols.fs -ifs -sN#Cc1ccccc1C#N results.smi -xt
```
#### Case study: Search ChEMBLdb[¶](#case-study-search-chembldb)
This case study uses a combination of the techniques described above for similarity searching using large databases and using small databases. Note that we are using the default fingerprint for all of these analyses. The default fingerprint is FP2, a path-based fingerprint (somewhat similar to the Daylight fingerprints).
1. Download Version 2 of ChEMBLdb from <ftp://ftp.ebi.ac.uk/pub/databases/chembl/ChEMBLdb/releases/>.
2. After unzipping it, make a fastsearch index (this took 18 minutes on my machine for the 500K+ molecules):
```
babel chembl_02.sdf -ofs
```
3. Let’s use the first molecule in the sdf file as a query. Using Notepad (or on Linux, `head -79 chembl_02.sdf`) extract the first molecule and save it as `first.sdf`. Note that the molecules in the ChEMBL sdf do not have titles; instead, their IDs are stored in the “chebi_id” property field.
4. This first molecule is 100183. Check its [ChEMBL page](http://www.ebi.ac.uk/chembldb/index.php/compound/inspect/100183). It’s pretty weird, but is there anything similar in ChEMBLdb? Let’s find the 5 most similar molecules:
```
babel chembl_02.fs mostsim.sdf -s first.sdf -at5
```
5. The results are stored in `mostsim.sdf`, but how similar are these molecules to the query?:
```
babel first.sdf mostsim.sdf -ofpt
>
> Tanimoto from first mol = 1 Possible superstructure of first mol
> Tanimoto from first mol = 0.986301
> Tanimoto from first mol = 0.924051 Possible superstructure of first mol
> Tanimoto from first mol = 0.869048 Possible superstructure of first mol
> Tanimoto from first mol = 0.857143
6 molecules converted
76 audit log messages
```
6. That’s all very well, but it would be nice to show the ChEBI IDs. Let’s set the title field of `mostsim.sdf` to the content of the “chebi_id” property field, and repeat step 5:
```
babel mostsim.sdf mostsim_withtitle.sdf --append "chebi_id"
babel first.sdf mostsim_withtitle.sdf -ofpt
>
>100183 Tanimoto from first mol = 1 Possible superstructure of first mol
>124893 Tanimoto from first mol = 0.986301
>206983 Tanimoto from first mol = 0.924051 Possible superstructure of first mol
>207022 Tanimoto from first mol = 0.869048 Possible superstructure of first mol
>607087 Tanimoto from first mol = 0.857143
6 molecules converted
76 audit log messages
```
7. Here are the ChEMBL pages for these molecules: [100183](http://www.ebi.ac.uk/chembldb/index.php/compound/inspect/100183), [124893](http://www.ebi.ac.uk/chembldb/index.php/compound/inspect/124893), [206983](http://www.ebi.ac.uk/chembldb/index.php/compound/inspect/206983), [207022](http://www.ebi.ac.uk/chembldb/index.php/compound/inspect/207022), [607087](http://www.ebi.ac.uk/chembldb/index.php/compound/inspect/607087). I think it is fair to say that they are pretty similar. In particular, the output states that 206983 and 207022 are possible superstructures of the query molecule, and that is indeed true.
8. How many of the molecules in the dataset are superstructures of the molecule in `first.sdf`? To do this and to visualize the large numbers of molecules produced, we can output to SVG format (see [SVG 2D depiction (svg)](index.html#svg-2d-depiction)):
```
obabel chembl_02.fs -O out.svg -s first.sdf
```
> Note that **obabel** has been used here because of its more flexible option handling.
> This command does a substructure search and puts the 47 matching structures in the file `out.svg`. This can be viewed in a browser like Firefox, Opera or Chrome (but not Internet Explorer). The display will give an overall impression of the set of molecules but details can be seen by zooming in with the mousewheel and panning by dragging with a mouse button depressed.
9. The substructure that is being matched can be highlighted in the output molecules by adding another parameter to the `-s` option. Just for variety, the display is also changed to a black background, ‘uncolored’ (no element-specific coloring), and terminal carbon not shown explicitly. (Just refresh your browser to see the modified display.)
```
obabel chembl_02.fs -O out.svg -s first.sdf green -xb -xu -xC
```
> This highlighting option also works when the `-s` option is used without fastsearch on small datasets.
10. The substructure search here has two stages. The indexed fingerprint search quickly produces 62 matches from the 500K+ molecules in the dataset. Each of these is then checked by a slow detailed isomorphism check. There are 15 false positives from the fingerprint stage. These are of no significance, but you can see them using:
```
obabel chembl_02.fs -O out.svg -s ~first.sdf
```
> The fingerprint search is unaffected but the selection in the second stage is inverted.
### Spectrophores™[¶](#spectrophorestrade)
#### Introduction[¶](#introduction)
Spectrophores[[1]](#trademark) are one-dimensional descriptors generated from the property fields surrounding the molecules. This technology allows the accurate description of molecules in terms of their surface properties or fields. Comparison of molecules’ property fields provides a robust structure-independent method of aligning actives from different chemical classes. When applied to molecules such as ligands and drugs, Spectrophores can be used as powerful molecular descriptors in the fields of chemoinformatics, virtual screening, and QSAR modeling.
The computation of Spectrophores is independent of the position and orientation of the molecule and this enables easy and fast comparison of Spectrophores between different molecules. Molecules having similar three-dimensional properties and shapes always yield similar Spectrophores.
A Spectrophore is calculated by surrounding the three-dimensional conformation of the molecule by a three-dimensional arrangement of points,
followed by calculating the interaction between each of the atom properties and the surrounding points. The three-dimensional arrangement of the points surrounding the molecule can be regarded as an ‘artificial’ cage or receptor,
and the interaction calculated between the molecule and the cage can be regarded as an artificial representation of an affinity value between molecule and cage.
Because the calculated interaction is dependent on the relative orientation of the molecule within the cage, the molecule is rotated in discrete angles and the most favorable interaction value is kept as final result. The angular stepsize at which the molecule is rotated along its three axis can be specified by the user and influences the accuracy of the method.
The Spectrophore code was developed by Silicos NV, and donated to the OpenBabel project in July 2010 (see sidebar for information on commercial support). Spectrophores can be generated either using the command-line application **obspectrophore** (see next section) or through the API (the `OBSpectrophore` class, as described in the API documentation).
#### obspectrophore[¶](#obspectrophore)
Usage
`obspectrophore -i <input file> [options]`
Parameter details
* `-i <input file>`: *Specify the input file*. Spectrophores will be calculated for each molecule in the input file. The filetype is automatically detected from the file extension.
* `-n <type>`: *The type of normalization that should be performed*. Valid values are (without quotes): No (default), ZeroMean, UnitStd, ZeroMeanAndUnitStd.
* `-a <accuracy>`: *The required accuracy expressed as the angular stepsize*. Only the following discrete values are allowed: 1, 2, 5, 10, 15, 20 (default), 30, 36, 45, 60.
* `-s <type>`: *The kind of cages that should be used*. The cage type is specified in terms of the underlying pointgroup: P1 or P-1. Valid values are (without quotes): No (default), Unique, Mirror, All.
* `-r <resolution>`: *The required resolution expressed as a real positive number*. The default value is 3.0 Angstrom. Negative values or a value of 0 generates an error message.
* `-h`: *Displays help*.
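For example, a minimal sketch that calculates zero-mean-normalized Spectrophores for each molecule in a hypothetical `mymols.sdf`, writing the vectors to standard output:
```
obspectrophore -i mymols.sdf -n ZeroMean
```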
#### Implementation[¶](#implementation)
##### Atomic properties[¶](#atomic-properties)
The calculation of a Spectrophore™ starts by calculating the atomic contributions of each property from which one wants to calculate a Spectrophore. In the current implementation, four atomic properties are converted into a Spectrophore; these four properties include the atomic partial charges, the atomic lipophilicities, the atomic shape deviations and the atomic electrophilicities. The atomic partial charges and atomic electrophilicity properties are calculated using the electronegativity equalisation method (EEM)
as described by Bultinck and coworkers [[bll2002]](index.html#bll2002) [[blc2003]](index.html#blc2003).
Atomic lipophilic potential parameters are calculated using a rule-based method. Finally, the atomic shape deviation is generated by calculating, for each atom, the atom’s deviation from the average molecular radius. This is done in a four step process:
* The molecular center of geometry (COG) is calculated
* The distances between each atom and the molecular COG are calculated
* The average molecular radius is calculated by averaging all the atomic distances
* The distances between each atom and the COG are then divided by the average molecular radius and centered on zero
##### Interaction between the atoms and cage points[¶](#interaction-between-the-atoms-and-cage-points)
Following the calculation of all required atomic properties, the next step in the calculation of a Spectrophore consists of determining the total interaction value V(c,p) between each of the atomic contributions of property p with a set of interaction points on an artificial cage c surrounding the molecular conformation.
**Schematic representation of a molecule surrounded by the artificial cage**
For this purpose, each of these interaction points i on cage c is assigned a value P(c,i)
which is either +1 or -1, with the constraint that the sum of all interaction points on a particular cage should be zero. In a typical Spectrophore calculation, a cage is represented as a rectangular box encompassing the molecular conformation in all three dimensions, with the centers of the box edges being the interaction points. Such a configuration gives twelve interaction points per cage, and, in the case of a non-stereospecific distribution of the interaction points, leads to 12 different cages. Although there are no particular requirements as to the dimensions of the rectangular cage, the distance between the interaction points and the geometrical extremes of the molecule should be such that a meaningful interaction value between each cage point and the molecular entity can be calculated. In this respect, the default dimensions of the cage are constantly adjusted to enclose the molecule at a minimum distance of 3 A along all dimensions. This cage size can be modified by the user and influences the resolution of the Spectrophore.
The total interaction value V(c,p) between the atomic contribution values A(j,p) of property p for a given molecular conformation and the cage interaction values P(c,i) for a given cage c is calculated according a standard interaction energy equation. It takes into account the Euclidean distance between each atom and each cage point. This total interaction V(c,p) for a given property p and cage c for a given molecular conformation is minimized by sampling the molecular orientation along the three axis in angular steps and the calculation of the interaction value for each orientation within the cage.
The final total interaction V(c,p) for a given cage c and property p corresponds to the lowest interaction value obtained this way, and corresponds to the c’th value in the one-dimensional Spectrophore vector calculated for molecular property p. As a result, a Spectrophore is organized as a vector of minimized interaction values V, each of these organized in order of cages and property values. Since for a typical Spectrophore implementation twelve different cages are used, the total length of a Spectrophore vector equals to 12 times the number of properties. Since four different properties are used in the current implementation (electrostatic, lipophilic, electrophilic potentials, and an additional shape index as described before), this leads to a total Spectrophore length of 48 real values per molecular conformation.
Since Spectrophore descriptors are dependent on the actual three-dimensional conformation of the molecule, a typical analysis includes the calculation of Spectrophores from a reasonable set of different conformations. It is then up to the user to decide on the most optimal strategy for processing the different Spectrophore vectors. In a typical virtual screening application, calculating the average Spectrophore vector from all conformations of a single molecule may be a good strategy; other applications may benefit from calculating a weighted average or the minimal values.
For each molecule in the input file, a Spectrophore is calculated and printed to standard output as a vector of 48 numbers (in the case of a non-stereospecific Spectrophore). The 48 doubles are organised into 4 sets of 12 doubles each:
* numbers 01-12: Spectrophore values calculated from the atomic partial charges;
* numbers 13-24: Spectrophore values calculated from the atomic lipophilicity properties;
* numbers 25-36: Spectrophore values calculated from the atomic shape deviations;
* numbers 37-48: Spectrophore values calculated from the atomic electrophilicity properties;
#### Choice of Parameters[¶](#choice-of-parameters)
##### Accuracy[¶](#accuracy)
As already mentioned, the total interaction between cage and molecule for a given property is minimized by sampling the molecular orientation in angular steps of a certain magnitude. As a typical angular step size, 20 degrees was found to be the best compromise between accuracy and computer speed. Larger step sizes are faster to calculate but have the risk of missing the global interaction energy minimum, while smaller angular step sizes do sample the rotational space more thoroughly but at a significant computational cost. The accuracy can be specified by the user using the `-a` option.
##### Resolution[¶](#resolution)
Spectrophores capture information about the property fields surrounding the molecule, and the amount of detail that needs to be captured can be regulated by the user. This is done by altering the minimal distance between the molecule and the surrounding cage. The resolution can be specified by the user with the
`-r` option. The default distance along all dimensions is 3.0 Angstrom.
The larger the distance, the lower the resolution.
With a higher resolution,
more details of the property fields surrounding the molecule are contained by the Spectrophore. On the other hand, low resolution settings may lead to a more general representation of the property fields, with little or no emphasis on small local variations within the fields. Using a low resolution can be the method of choice during the initial virtual screening experiments in order to get an initial, but not so discriminative, first selection. This initial selection can then further be refined during subsequent virtual screening steps using a higher resolution. In this setting, small local differences in the fields between pairs of molecules will be picked up much more easily.
The absolute values of the individual Spectrophore data points are dependent on the used resolution. Low resolution values lead to small values of the calculated individual Spectrophore data points, while high resolutions will lead to larger data values. It is therefore only meaningful to compare Spectrophores that have been generated using the same resolution settings or after some kind of normalization is performed.
Computation time is not influenced by the specified resolution and hence is identical for all different resolution settings.
##### Stereospecificity[¶](#stereospecificity)
Some of the cages that are used to calculate Spectrophores have a stereospecific distribution of the interaction points. The interaction values resulting from these cages are therefore sensitive to the enantiomeric configuration of the molecule within the cage. The fact that both stereoselective as well as stereo non-selective cages can be used makes it possible to include or exclude stereospecificity in the virtual screening search. Depending on the desired output, the stereospecificity of Spectrophores can be specified by the user using the `-s` option:
* No stereospecificity (default):
Spectrophores are generated using cages that are not stereospecific. For most applications, these Spectrophores will suffice.
* Unique stereospecificity:
Spectrophores are generated using unique stereospecific cages.
* Mirror stereospecificity:
Mirror stereospecific Spectrophores are Spectrophores resulting from the mirror enantiomeric form of the input molecules.
The differences between the corresponding data points of unique and mirror stereospecific Spectrophores are very small and require very long calculation times to obtain a sufficiently high quality level. This increased quality level is triggered by the accuracy setting and will result in calculation times being increased by at least a factor of 100. As a consequence, it is recommended to apply this increased accuracy only in combination with a limited number of molecules, and when the small differences between the stereospecific Spectrophores are really critical. However, for the vast majority of virtual screening applications, this increased accuracy is not required as long as it is not the intention to draw conclusions about differences in the underlying molecular stereoselectivity. Non-stereospecific Spectrophores will therefore suffice for most applications.
##### Normalisation[¶](#normalisation)
It may sometimes be desirable to focus on the relative differences between the Spectrophore data points rather than on their absolute values.
In these cases, normalization of Spectrophores may be required. The current implementation offers, via the `-n` option, the possibility to normalize in four different ways:
* No normalization (default)
* Normalization towards zero mean
* Normalization towards standard deviation
* Normalization towards zero mean and unit standard deviation
In all these cases, normalization is performed on a ‘per-property’ basis, which means that the data points belonging to the same property set are treated as a single set and that normalization is only performed on the data points within each of these sets and not across all data points.
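As a rough illustration of what this ‘per-property’ treatment means (this is only a sketch, not the Open Babel implementation; it assumes the 48 values are ordered as four blocks of 12 values, one block per property):
```
def normalize_spectrophore(values, zero_mean=True, unit_std=True):
    """Normalize a 48-value Spectrophore per property (4 blocks of 12 cage values)."""
    normalized = []
    for start in range(0, len(values), 12):
        block = values[start:start + 12]
        mean = sum(block) / len(block)
        std = (sum((v - mean) ** 2 for v in block) / len(block)) ** 0.5
        for v in block:
            if zero_mean:
                v -= mean           # normalization towards zero mean
            if unit_std and std > 0:
                v /= std            # normalization towards unit standard deviation
            normalized.append(v)
    return normalized
```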
Normalization may be important when comparing the Spectrophores of charged molecules with those of neutral molecules. For molecules carrying a global positive charge, the resulting Spectrophore data points of the charge and electrophilicity properties will both be shifted in absolute value compared to the corresponding data points of the respective neutral species. Normalization of the Spectrophores removes the original magnitude differences for the data points corresponding to the charge and electrophilicity properties of charged and neutral species. Therefore, if the emphasis of the virtual screening consists of the identification of molecules with similar property fields without taking into account differences in absolute charge, then Spectrophores should be normalized towards zero mean. However, if absolute charge differences should be taken into account to differentiate between molecules, unnormalized Spectrophores are recommended.
| [[bll2002]](#id4) | <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, and <NAME>.
**The Electronegativity Equalization Method II: Applicability of Different Atomic Charge Schemes.**
*J. Phys. Chem. A* **2002**, *106*, 7895-7901.
[[Link](https://doi.org/10.1021/jp020547v)] |
| [[blc2003]](#id5) | <NAME>, <NAME>, <NAME>, and <NAME>.
**Fast Calculation of Quantum Chemical Molecular Descriptors from the Electronegativity Equalization Method.**
*J. Chem. Inf. Comput. Sci.* **2003**, *43*, 422-428.
[[Link](https://doi.org/10.1021/ci0255883)] |
Footnotes
| [[1]](#id1) | Spectrophore is a registered trademark of Silicos NV. |
obabel vs Chemistry Toolkit Rosetta[¶](#obabel-vs-chemistry-toolkit-rosetta)
---
The [Chemistry Toolkit Rosetta](http://ctr.wikia.com/wiki/Chemistry_Toolkit_Rosetta_Wiki) is the brainchild of Andrew Dalke. It is a website that illustrates how to program various chemical toolkits to do a set of tasks. To make it easily understandable, these tasks are probably on the simpler side of those in the real world. The Rosetta already contains several examples of using the Open Babel Python bindings to carry out tasks.
Here we focus on the use of the command line application **obabel** to accomplish the tasks listed in the Rosetta. Inevitably we will struggle with more complicated tasks; however this section is intended to show how far you can go simply using **obabel**, and to illustrate some of its less common features. Some of the tasks cannot be done exactly as specified, but they are usually close enough to be useful.
Note that except for the examples involving piping, the GUI could also be used. Also the copy output format at present works only for files with Unix line endings.
### Heavy atom counts from an SD file[¶](#heavy-atom-counts-from-an-sd-file)
> *For each record from the benzodiazepine file, print the total number of heavy atoms in each record (that is, exclude hydrogens). The output is one output line per record, containing the count as an integer. If at all possible, show how to read directly from the gzip’ed input SD file.*
```
obabel benzodiazepine.sdf.gz -otxt --title "" --append atoms -d -l5
```
The [txt format](index.html#title-format) outputs only the title but we set that to nothing and then append the result. The *atoms* descriptor counts the number of atoms after the `-d` option has removed the hydrogens. The `-l5` limits the output to the first 5 molecules, in case you really didn’t want to print out results for all 12386 molecules.
### Convert a SMILES string to canonical SMILES[¶](#convert-a-smiles-string-to-canonical-smiles)
> *Parse two SMILES strings and convert them to canonical form. Check that the results give the same string.*
```
obabel -:"CN2C(=O)N(C)C(=O)C1=C2N=CN1C" -:"CN1C=NC2=C1C(=O)N(C)C(=O)N2C" -ocan
```
giving:
```
Cn1cnc2c1c(=O)n(C)c(=O)n2C
Cn1cnc2c1c(=O)n(C)c(=O)n2C
2 molecules converted
```
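The task also asks for an explicit check that the two canonical strings are identical. For comparison, a minimal Pybel sketch doing the same check from Python:
```
from openbabel import pybel

can1 = pybel.readstring("smi", "CN2C(=O)N(C)C(=O)C1=C2N=CN1C").write("can").split()[0]
can2 = pybel.readstring("smi", "CN1C=NC2=C1C(=O)N(C)C(=O)N2C").write("can").split()[0]
print(can1 == can2)  # True when both SMILES describe the same molecule
```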
### Report how many SD file records are within a certain molecular weight range[¶](#report-how-many-sd-file-records-are-within-a-certain-molecular-weight-range)
> *Read the benzodiazepine file and report the number of records which contain a molecular weight between 300 and 400.*
```
obabel benzodiazepine.sdf.gz -onul --filter "MW>=300 MW<=400"
3916 molecules converted
```
### Convert SMILES file to SD file[¶](#convert-smiles-file-to-sd-file)
> *Convert a SMILES file into an SD file. The conversion must do its best to use the MDL conventions for the SD file, including aromaticity perception. Note that the use of aromatic bond types in CTABs is only allowed for queries, so aromatic structures must be written in a Kekule form. Because the stereochemistry of molecules in SD files is defined solely by the arrangement of atoms, it is necessary to assign either 2D or 3D coordinates to the molecule before generating output. The coordinates do not have to be reasonable (i.e. it’s ok if they would make a chemist scream in horror), so long as the resulting structure is chemically correct.*
```
obabel infile.smi -O outfile.sdf --gen3D
```
### Report the similarity between two structures[¶](#report-the-similarity-between-two-structures)
> *Report the similarity between “CC(C)C=CCCCCC(=O)NCc1ccc(c(c1)OC)O” (PubChem CID 1548943) and “COC1=C(C=CC(=C1)C=O)O” (PubChem CID 1183).*
Two types of fingerprint are used: the default FP2 path-based one, and FP4 which is structure key based:
```
obabel -:"CC(C)C=CCCCCC(=O)NCc1ccc(c(c1)OC)O" -:"COC1=C(C=CC(=C1)C=O)O" -ofpt Tanimoto from first mol = 0.360465
obabel -:"CC(C)C=CCCCCC(=O)NCc1ccc(c(c1)OC)O" -:"COC1=C(C=CC(=C1)C=O)O" -ofpt
-xfFP4 Tanimoto from first mol = 0.277778
```
### Find the 10 nearest neighbors in a data set[¶](#find-the-10-nearest-neighbors-in-a-data-set)
> *The data will come from the gzip’ed SD file of the benzodiazepine data set. Use the first structure as the query structure, and use the rest of the file as the targets to find the 10 most similar structures. The output is sorted by similarity, from most similar to least. Each target match is on its own line, and the line contains the similarity score in the first column in the range 0.00 to 1.00 (preferably to 2 decimal places), then a space, then the target ID, which is the title line from the SD file.*
A fastsearch index, using the default FP2 fingerprint, is prepared first:
```
obabel benzodiazepine.sdf -ofs
```
The query molecule (first in the file) is extracted:
```
obabel benzodiazepine.sdf -O first.sdf -l1
```
The similarity search of the index file for the 10 most similar molecules is done. The output is to [Title format (txt)](index.html#title-format), with the `-aa` option of [Fastsearch format (fs)](index.html#fastsearch-format) adding the Tanimoto score:
```
obabel benzodiazepine.fs -otxt -s first.sdf -at 10 -aa
623918   1
450820   1
1688   1
20351792   0.993007
9862446   0.986111
398658   0.97931
398657   0.97931
6452650   0.978873
450830   0.978873
3016   0.978873
10 molecules converted
```
The Tanimoto coefficient comes second, rather than first as requested and is not formatted to two decimal places, but the information is still there.
### Depict a compound as an image[¶](#depict-a-compound-as-an-image)
> *Depict the SMILES “CN1C=NC2=C1C(=O)N(C(=O)N2C)C” as an image of size 200x250 pixels. The image should be in PNG format if possible, otherwise in GIF format. If possible, give it the title “Caffeine”. It should display the structure on a white background.*
Open Babel can output 2D structures as [PNG](index.html#png-2d-depiction). The `-d` makes hydrogen implicit. Width and height are set with the -xw and -xh options.:
```
obabel -:"CN1C=NC2=C1C(=O)N(C(=O)N2C)C Caffeine" -O out.png -xw 200 -xh 250 -d
```
Open Babel also supports outputting [SVG](index.html#svg-2d-depiction), which is resolution independent as a vector format.:
```
obabel -:"CN1C=NC2=C1C(=O)N(C(=O)N2C)C Caffeine" -O out.svg -d
```
### Highlight a substructure in the depiction[¶](#highlight-a-substructure-in-the-depiction)
> *Read record 3016 from the benzodiazepine SD file. Find all atoms which match the SMARTS “c1ccc2c(c1)C(=NCCN2)c3ccccc3” and highlight them in red. All other atoms must be drawn in black.*
> *The resulting image should be 200x250 pixels and on a white background. The resulting image file should be in PNG (preferred) or GIF format.*
```
obabel benzodiazepine.sdf.gz -O out.png --filter "title=3016"
-s "c1ccc2c(c1)C(=NCCN2)c3ccccc3 red" -xu -xw 200 -xh 250 -d
```
Open Babel can output 2D structures as [PNG](index.html#png-2d-depiction). The compressed data file can be used as input. The `-d` makes hydrogen implicit and the `-xu` removes the element-specific coloring. Width and height are set with the -xw and -xh options.
This is slow (about a minute) because each molecule is fully interpreted, although in most cases only the title is required. The task can be done 10 times faster by using the uncompressed file, converting only the title (the `-aT` option) and copying the SD text to standard out when a match occurs. This is piped to a second command which outputs the structure.:
```
obabel benzodiazepine.sdf -ocopy --filter "title=3016" -aT |
obabel -isdf -O out.png -s "c1ccc2c(c1)C(=NCCN2)c3ccccc3 red" -xu -xw 200 -xh 250 -d
```
Open Babel also supports outputting [SVG](index.html#svg-2d-depiction), which is resolution independent as a vector format.:
```
obabel benzodiazepine.sdf.gz -O out.svg --filter "title=3016"
-s "c1ccc2c(c1)C(=NCCN2)c3ccccc3 red" -xu -d
obabel benzodiazepine.sdf -ocopy --filter "title=3016" -aT |
obabel -isdf -O out.svg -s "c1ccc2c(c1)C(=NCCN2)c3ccccc3 red" -xu -d
```
### Align the depiction using a fixed substructure[¶](#align-the-depiction-using-a-fixed-substructure)
> *Use the first 16 structures of the benzodiazepine SD file to make a 4x4 grid of depictions as a single image. The first structure is in the upper-left corner, the second is to its right, and so on. Each depiction should include the title field of the corresponding record, which in this case is the PubChem identifier.*
> *Use “[#7]~1~[#6]~[#6]~[#7]~[#6]~[#6]~2~[#6]~[#6]~[#6]~[#6]~[#6]12” as the common SMARTS substructure. This is the fused ring of the benzodiazepine system but without bond type or atom aromaticity information. Use the first molecule as the reference depiction. All other depictions must have the depiction of their common substructure aligned to the reference.*
Since Open Babel 2.3.1 this can be done in one line:
```
obabel benzodiazepine.sdf.gz -O out.png -l16 --align -d -xu -xw 400 -xh 400
-s"[#7]~1~[#6]~[#6]~[#7]~[#6]~[#6]~2~[#6]~[#6]~[#6]~[#6]~[#6]12 green"
```
The depiction has some cosmetic tweaks: the substructure is highlighted in green; `-d` removes hydrogen; `-xu` removes the element specific coloring.
Open Babel also supports outputting [SVG](index.html#svg-2d-depiction), which is resolution independent as a vector format.:
```
obabel benzodiazepine.sdf.gz -O out.svg -l16 --align -d -xu
-s"[#7]~1~[#6]~[#6]~[#7]~[#6]~[#6]~2~[#6]~[#6]~[#6]~[#6]~[#6]12 green"
```
In earlier versions the **obfit** program can be used. First extract the first molecule for the reference and the first 16 to be displayed:
```
obabel benzodiazepine.sdf.gz -O firstbenzo.sdf -l1
obabel benzodiazepine.sdf.gz -O sixteenbenzo.sdf -l16
```
Then use the program **obfit**, which is distributed with Open Babel:
```
obfit "[#7]~1~[#6]~[#6]~[#7]~[#6]~[#6]~2~[#6]~[#6]~[#6]~[#6]~[#6]12"
firstbenzo.sdf sixteenbenzo.sdf > 16out.sdf
```
Display the 16 molecules (with implicit hydrogens) as [SVG](index.html#svg-2d-depiction) (earlier versions of Open Babel do not support [PNG](index.html#png-2d-depiction)):
```
obabel 16out.sdf -O out.svg -d
```
### Perform a substructure search on an SDF file and report the number of false positives[¶](#perform-a-substructure-search-on-an-sdf-file-and-report-the-number-of-false-positives)
> *The sample database will be gzip’ed SD file of the benzodiazepine data set. The query structure will be defined as “C1C=C(NC=O)C=CC=1”.*
The default FP2 fingerprint is sensitive to whether a bond is aromatic or not. So this Kekule structure needs to be converted to its aromatic form. As this happens automatically on conversion, the easiest way is to store the SMILES string in a file, and use this file to specify the search pattern.
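How `substruct.smi` is produced is up to you; a minimal Pybel sketch that reads the Kekule query and writes it back out (and therefore in its aromatic form) might look like this:
```
from openbabel import pybel

query = pybel.readstring("smi", "C1C=C(NC=O)C=CC=1")
query.write("smi", "substruct.smi", overwrite=True)  # written in its aromatic (lowercase) form
```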
Prepare an index (of the unzipped data file):
```
obabel benzodiazepine.sdf -ofs
```
Do the substructure search. A very large number of molecules match the query, so the maximum number of hits has to be increased with the `-al 9000` option. By virtue of the `~` it is the false positives that are output (to nowhere) but their number is reported:
```
obabel benzodiazepine.fs -onul -s ~substruct.smi -al 9000
8531 candidates from fingerprint search phase
12 molecules converted
```
### Calculate TPSA[¶](#calculate-tpsa)
> *The goal of this task is get an idea of how to do a set of SMARTS matches when the data comes in from an external table.*
> *Write a function or method named “TPSA” which gets its data from the file “tpsa.tab”. The function should take a molecule record as input, and return the TPSA value as a float. Use the function to calculate the TPSA of “CN2C(=O)N(C)C(=O)C1=C2N=CN1C”. The answer should be 61.82, which agrees exactly with Ertl’s online TPSA tool but not with PubChem’s value of 58.4.*
Open Babel’s command line cannot parse tables with custom formats. But the TPSA descriptor, defined by a table in the file `psa.txt`, is already present and can be used as follows:
```
obabel -:CN2C(=O)N(C)C(=O)C1=C2N=CN1C -osmi --append TPSA
```
giving:
```
Cn1c(=O)n(C)c(=O)c2c1ncn2C 61.82
1 molecule converted
```
The table in `tpsa.tab` and Open Babel’s `psa.txt` have the same content but different formats. The first few rows of `tpsa.tab` are:
```
psa SMARTS description
23.79 [N+0;H0;D1;v3] N#
23.85 [N+0;H1;D1;v3] [NH]=
26.02 [N+0;H2;D1;v3] [NH2]-
```
and the equivalent lines from Open Babel’s `psa.txt`:
```
[N]#* 23.79
[NH]=* 23.85
[NH2]-* 26.02
```
It is possible to add new descriptors without having to recompile. If another property, *myProp*, could be calculated using a table in `myprop.txt` with the same format as `psa.txt`, then a descriptor could set up by adding the following item to `plugindefines.txt`:
```
OBGroupContrib
myProp         # name of descriptor
myprop.txt     # data file
Coolness index # brief description
```
The following would then output molecules in increasing order of *myProp* with the value added to the title:
```
obabel infile.smi -osmi --sort myProp+
```
### Working with SD tag data[¶](#working-with-sd-tag-data)
> *The input file is SD file from the benzodiazepine data set. Every record contains the tags PUBCHEM_CACTVS_HBOND_DONOR, PUBCHEM_CACTVS_HBOND_ACCEPTOR and PUBCHEM_MOLECULAR_WEIGHT, and most of the records contain the tag PUBCHEM_XLOGP3.*
> *The program must create a new SD file which is the same as the input file but with a new tag data field named “RULE5”. This must be “1” if the record passes Lipinski’s rule, “0” if it does not, and “no logP” if the PUBCHEM_XLOGP3 field is missing.*
This exercise is a bit of a stretch for the Open Babel command-line. However, the individual lines may be instructional, since they are more like the sort of task that would normally be attempted.
```
obabel benzodiazepine.sdf.gz -O out1.sdf --filter "PUBCHEM_CACTVS_HBOND_DONOR<=5 &
PUBCHEM_CACTVS_HBOND_ACCEPTOR<=10 & PUBCHEM_MOLECULAR_WEIGHT<=500 &
PUBCHEM_XLOGP3<=5"
--property "RULE5" "1"
obabel benzodiazepine.sdf.gz -O out2.sdf --filter "!PUBCHEM_XLOGP3"
--property "RULE5" "no logP"
obabel benzodiazepine.sdf.gz -O out3.sdf --filter "!PUBCHEM_XLOGP3 &
!(PUBCHEM_CACTVS_HBOND_DONOR<=5 & PUBCHEM_CACTVS_HBOND_ACCEPTOR<=10 &
PUBCHEM_MOLECULAR_WEIGHT<=500 & PUBCHEM_XLOGP3<=5)"
--property "RULE5" "0"
```
The first command converts only molecules passing Lipinski’s rule, putting them in `out1.sdf`, and adding an additional property, *RULE5*, with a value of `1`.
The second command converts only molecules that do not have a property *PUBCHEM_XLOGP3*.
The third command converts only molecules that do have a *PUBCHEM_XLOGP3* and which fail Lipinski’s rule.
Use **cat** or **type** at the command prompt to concatenate the three files `out1.sdf`, `out2.sdf`, `out3.sdf`.
These operations are slow because the chemistry of each molecule is fully converted. As illustrated below, the filtering alone could have been done more quickly using the uncompressed file and the `-aP` option, which restricts the reading of the SDF file to the title and properties only, and then copying the molecule’s SDF text verbatim with `-o copy`. But adding the additional property is not then possible:
```
obabel benzodiazepine.sdf -o copy -O out1.sdf -aP --filter
"PUBCHEM_CACTVS_HBOND_DONOR<=5 & PUBCHEM_CACTVS_HBOND_ACCEPTOR<=10 &
PUBCHEM_MOLECULAR_WEIGHT<=500 & PUBCHEM_XLOGP3<=5"
```
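Alternatively, this is the kind of task that is more naturally handled with the Open Babel library (see the next chapter). A hedged Pybel sketch, assuming the PubChem tags parse as numbers and a build that reads `.gz` files directly:
```
from openbabel import pybel

out = pybel.Outputfile("sdf", "out.sdf", overwrite=True)
for mol in pybel.readfile("sdf", "benzodiazepine.sdf.gz"):
    data = mol.data
    if "PUBCHEM_XLOGP3" not in data:
        rule5 = "no logP"
    else:
        passes = (float(data["PUBCHEM_CACTVS_HBOND_DONOR"]) <= 5 and
                  float(data["PUBCHEM_CACTVS_HBOND_ACCEPTOR"]) <= 10 and
                  float(data["PUBCHEM_MOLECULAR_WEIGHT"]) <= 500 and
                  float(data["PUBCHEM_XLOGP3"]) <= 5)
        rule5 = "1" if passes else "0"
    data["RULE5"] = rule5   # add the new SD tag
    out.write(mol)
out.close()
```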
### Unattempted tasks[¶](#unattempted-tasks)
A number of the Chemical Toolkit Rosetta tasks cannot be attempted as the **obabel** tool does not (currently!) have the necessary functionality. These include the following:
* Detect and report SMILES and SDF parsing errors
* Ring counts in a SMILES file
* Unique SMARTS matches against a SMILES string
* Find the graph diameter
* Break rotatable bonds and report the fragments
* Change stereochemistry of certain atoms in SMILES file
To handle these tasks, you need to use the Open Babel library directly. This is the subject of the next section.
2D Depiction[¶](#d-depiction)
---
As the old Chinese proverb has it, a molecular depiction is worth a thousand words. This chapter covers everything relevant to using Open Babel to generate or read/write a 2D depiction, expected by most chemists for print or website purposes.
When we talk about a depiction in cheminformatics, there are really two different concepts covered by this term:
1. Graphical display of a molecule’s structure as a 2D image (such as the PNG and SVG formats). Here is an example:
```
obabel -:C(=O)Cl -O acidchloride.png
```
2. Storage of the 2D coordinates (and associated stereo symbols) associated with Concept 1 (using formats such as Mol and Mol2). Here is the connection table from the corresponding Mol file for the above depiction:
```
3 2 0 0 0 0 0 0 0 0999 V2000
0.8660 -0.5000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
1.7321 -0.0000 0.0000 O 0 0 0 0 0 0 0 0 0 0 0 0
0.0000 0.0000 0.0000 Cl 0 0 0 0 0 0 0 0 0 0 0 0
1 2 2 0 0 0 0
1 3 1 0 0 0 0
```
Note
The focus in this chapter is on 2D depiction and not 3D. It is of course possible to generate and store 3D coordinates in many of the file formats supported by Open Babel, but the only support for depiction is the Povray format, used to create ray-traced ball-and-stick diagrams of molecules.
Other Open Source chemistry projects such as [Avogadro](http://avogadro.sf.net), [PyMOL](http://pymol.org), and [Jmol](http://jmol.org) cover this area very well.
### Molecular graphics[¶](#molecular-graphics)
As of Open Babel 2.3.2, there are three output formats for displaying a 2D image:
1. PNG format: This is a bitmap format used to create images of a certain pixel width. These images can be inserted into Word documents or displayed on web pages.
2. SVG format: This is a vector format, which can be scaled to generate images of any size without loss of quality. In particular, Open Babel’s SVG images can be interactively zoomed and panned using a modern web browser.
3. ASCII format: This is a depiction of a molecule using ASCII text. This can be useful if you are logged into a remote server, or are working at the command-line, and just need a basic check of the identity of a molecule.
All of these formats support multimolecule files. The PNG and SVG formats arrange the molecules into rows and columns (you can specify the number of rows or columns if you wish), while the ASCII format just uses a single column. The remainder of this chapter will concentrate on the PNG and SVG formats; for more information on the ASCII format, see the format description [ref].
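For completeness, the same kind of depiction can also be produced from Python; a minimal Pybel sketch (which output formats are available depends on how Open Babel was built):
```
from openbabel import pybel

mol = pybel.readstring("smi", "C(=O)Cl")
mol.title = "acid chloride"
mol.write("svg", "acidchloride.svg", overwrite=True)  # write a 2D depiction as SVG
```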
3D Structure Generation[¶](#d-structure-generation)
---
Open Babel provides support for generating a reasonable 3D structure just given connectivity information. It also has the ability to generate multiple conformers for each molecule. These topics are discussed below.
### Generate a single conformer[¶](#generate-a-single-conformer)
There are several steps involved in generating a low-energy conformer from a 0D or 2D structure.
#### OBBuilder[¶](#obbuilder)
The [:obapi:`OBBuilder`](#id1) class is the part of Open Babel that can take a 2D or 0D structure and generate a 3D structure. The 3D structure is made very quickly using a combination of rules (e.g. sp3 atoms should have four bonds arranged in a tetrahedron) and common fragments (e.g. cyclohexane is shaped like a chair).
The 3D structures that come straight out of OBBuilder may be useful for some purposes, but most people will want to “clean them up”, since they may contain atom clashes or strained, high-energy geometries. The conformer search or geometry optimization methods described below are typically used after calling OBBuilder.
Full discussion of the methods for coordinate generation is available in ‘Fast, efficient fragment-based coordinate generation for Open Babel’ *J. Cheminf.* (2019) **11**, Art. 49.<https://doi.org/10.1186/s13321-019-0372-5>. Please cite this paper if you use the coordinate generation features in Open Babel.
The functionality of OBBuilder is not directly available through **obabel** but it is used as the necessary first step of the Gen3D operation discussed below.
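From the library, however, OBBuilder can be called directly; a minimal Python sketch:
```
from openbabel import openbabel, pybel

mol = pybel.readstring("smi", "C1CCCCC1CO")   # a 0D structure
obmol = mol.OBMol
builder = openbabel.OBBuilder()
builder.Build(obmol)      # rules + fragment templates -> rough 3D coordinates
obmol.AddHydrogens()      # hydrogens are typically added after building
print(mol.write("sdf"))
```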
#### Conformer searching[¶](#conformer-searching)
Given a 3D structure, the goal of conformer searching is to find a low energy conformation. This may be useful as a “clean-up” procedure after an initial 3D structure generation. Note that conformer searching does not alter stereochemistry.
The Open Babel library provides access to several algorithms for conformer searching. All of these algorithms adopt the torsion-driving approach; that is, conformations are generated by setting torsion angles to one of a number of allowed values. The allowed values are listed in the data file `torlib.txt`; for example, C-C bonds in alkanes have three allowed values: -60, 60 and 180.
1. [:obapi:`Systematic Rotor Search <SystematicRotorSearch>`](#id3): Systematically iterate through all possible conformers according to Open Babel’s torsion library.
This approach is thorough and will find the global minimum. However, since the number of conformations multiplies with each additional rotatable bond, it can take quite a while even for molecules with just 7 rotatable bonds. In other words, this approach scales exponentially with the number of rotatable bonds.
2. [:obapi:`Fast Rotor Search <FastRotorSearch>`](#id5): This iterates through the same conformer space as the SystematicRotorSearch but it greedily optimises the torsion angle at each rotatable bond in turn, starting from the most central. Thus it scales linearly with the number of rotatable bonds.
3. [:obapi:`Random Rotor Search <RandomRotorSearch>`](#id7): Conformations are generated by randomly choosing from the allowed torsion angles.
4. [:obapi:`Weighted Rotor Search <WeightedRotorSearch>`](#id9): This method uses an iterative procedure to find a global minimum. As with the Random Rotor Search, it randomly chooses from the allowed torsion angles but the choice is reweighted based on the energy of the generated conformer. Over time, the conformers generated at each step should become increasingly better.
For each of these methods, the lowest energy conformation found is selected. In some cases, the entire set of conformations generated is also available. Many of these methods include an option to optimize the geometry of conformations during the search. This greatly slows down the procedure but may produce more accurate results.
The choice of which algorithm to use depends on the speed/accuracy tradeoff with which you are happy, and also on the number of rotatable bonds in the molecule.
Are you looking for a reasonable structure for 3D display? Or are you looking for a structure close to the global minimum?
To use from **obabel**, see the help for the conformer operation (`obabel -L conformer`). This operation is used both for conformer searching and for the genetic algorithm conformer generation described below.
Here is an example of use from Python:
```
>>> from openbabel import openbabel as ob
>>> ff = ob.OBForceField.FindForceField("mmff94")
>>> ff.Setup(obmol)
True
>>> print(ff.Energy())
15.179054202
>>> ff.SystematicRotorSearch(100)
>>> print(ff.Energy())
10.8861155747
```
#### Gen3D[¶](#gen3d)
To illustrate how some of the above methods might be used in practice, consider the **gen3d** operation. This operation (invoked using `--gen3d` at the commandline) generates 3D structures for 0D or 2D structures using the following series of steps, all of which have been described above:
1. Use the OBBuilder to create a 3D structure using rules and fragment templates
2. Do 250 steps of a steepest descent geometry optimization with the MMFF94 forcefield
3. Do 200 iterations of a Weighted Rotor conformational search (optimizing each conformer with 25 steps of a steepest descent)
4. Do 250 steps of a conjugate gradient geometry optimization
Taken together, all of these steps ensure that the generated structure is likely to be the global minimum energy conformer. However, for many applications where 100s if not 1000s of molecules need to be processed, gen3d is rather slow. For such cases, the following options allow the speed/quality trade-off to be adjusted:
> 1. `--fastest` only generate coordinates, no force field or conformer search
> 2. `--fast` perform quick forcefield optimization
> 3. `--medium` **(default)** forcefield optimization + fast conformer search
> 4. `--better` more optimization + fast conformer search
> 5. `--best` more optimization + significant conformer search
Details on some of the trade-offs involved are outlined in ‘Fast, efficient fragment-based coordinate generation for Open Babel’ *J. Cheminf.* (2019) **11**, Art. 49.<https://doi.org/10.1186/s13321-019-0372-5>. If you use the 3D coordinate generation, please cite this paper.
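To make the individual steps concrete, here is a hedged Python sketch that mirrors steps 1-4 above using the library classes described in this chapter (the step counts are those quoted for the default behaviour):
```
from openbabel import openbabel, pybel

mol = pybel.readstring("smi", "CC(=O)Nc1ccc(O)cc1")   # 0D input
obmol = mol.OBMol

builder = openbabel.OBBuilder()
builder.Build(obmol)                 # step 1: rules and fragment templates
obmol.AddHydrogens()

ff = openbabel.OBForceField.FindForceField("mmff94")
ff.Setup(obmol)
ff.SteepestDescent(250)              # step 2: initial geometry optimization
ff.WeightedRotorSearch(200, 25)      # step 3: weighted rotor conformer search
ff.ConjugateGradients(250)           # step 4: final geometry optimization
ff.GetCoordinates(obmol)             # copy the optimized coordinates back

mol.write("sdf", "out.sdf", overwrite=True)
```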
### Generate multiple conformers[¶](#generate-multiple-conformers)
In contrast to conformer searching, the goal of conformer generation is not simply to find a low energy conformation but to generate several different conformations. Such conformations may be required by another piece of software such as some protein-ligand docking and pharmacophore programs. They may also be useful if considering writing some sort of shape comparison method.
Open Babel has two distinct conformer generating codes:
1. Confab: A systematic conformer generator that generates all diverse low-energy conformers.
2. Genetic algorithm: This is a stochastic conformer generator that generates diverse conformers either on an energy or RMSD basis
#### Genetic algorithm[¶](#genetic-algorithm)
A genetic algorithm is a general computational method for finding a globally optimal solution to a multiparameter problem. Here it involves a population of conformers which, over a series of generations, iteratively converges on an optimal solution in terms of either RMSD diversity or energy.
Information about using this method is available at the command-line using: `obabel -L conformer`. Although labelled as “Conformer Searching”, if you choose the genetic algorithm method (which is the default) then you can save the conformers in the final generation using `--writeconformers`. For example, the following line creates 30 conformers optimized for RMSD diversity:
```
obabel startingConformer.mol -O ga_conformers.sdf --conformer --nconf 30
--score rmsd --writeconformers
```
In this case `--score rmsd` was not strictly necessary as RMSD diversity was the default in any case.
#### Confab[¶](#confab)
Confab systematically generates all diverse low-energy conformers for molecules. To run Confab use the `--confab` operation, and to assess the results by calculating RMSDs to reference structures, use the **confabreport** output format.
confab operator
* `obabel <inputfile> -O <outputfile> --confab [confab options]` for typical usage
* `obabel -L confab` for help text
The *inputfile* should contain one or more 3D structures (note that 2D structures will generate erroneous results). Generated conformers are written to the *outputfile*. All of the conformers for a particular molecule will have the same title as the original molecule.
* `--rcutoff <rmsd>`: RMSD cutoff (default 0.5 Angstrom)
* `--ecutoff <energy>`: Energy cutoff (default 50.0 kcal/mol)
* `--conf <#confs>`: Max number of conformers to test (default is 1 million)
* `--original`: Include the input conformation as the first conformer
* `--verbose`: Verbose - display information on torsions found
confabreport format
* `obabel <inputfile> [-O <outputfile>] -o confabreport -xf <reference_file> [-xr <rmsd>]` for typical usage
* `obabel -L confabreport` for help text
Once a file containing conformers has been generated by Confab, the result can be compared to the original input structures or a set of reference structures using this output format. Conformers are matched with reference structures using the molecule title. For every conformer, there should be a reference structure (but not necessarily *vice versa*).
* `-f <filename>`: File containing reference structures
* `-r <rmsd>`: RMSD cutoff (default 0.5 Angstrom). The number of structures with conformers within this RMSD cutoff of the reference will be reported.
Example
The example file, [bostrom.sdf](../_static/bostrom.sdf), contains 36 molecules which have from 1 to 11 rotatable bonds (see *Bostrom, Greenwood, Gottfries, J Mol Graph Model, 2003, 21, 449*).
We can generate and test up to 100K conformers using Confab as follows:
```
> obabel bostrom.sdf -O confs.sdf --confab --conf 100000
**Starting Confab 1.1.0
**To support, cite Journal of Cheminformatics, 2011, 3, 8.
..Input format = sdf
..Output format = sdf
..RMSD cutoff = 0.5
..Energy cutoff = 50
..Conformer cutoff = 1000000
..Write input conformation? False
..Verbose? False
**Molecule 1
..title = 1a28_STR_1_A_1__C__
..number of rotatable bonds = 1
..tot conformations = 12
..tot confs tested = 12
..below energy threshold = 10
..generated 3 conformers
... etc, etc
0 molecules converted
```
To check how many of the generated conformers are within 1.0 A RMSD of the original structures, we can use the confabreport format as follows:
```
> obabel confs.sdf -oconfabreport -xf bostrom.sdf -xr 1.0
**Generating Confab Report
..Reference file = bostrom.sdf
..Conformer file = confs.sdf
..Molecule 1
..title = 1a28_STR_1_A_1__C__
..number of confs = 3
..minimum rmsd = 0.0644801
..confs less than cutoffs: 0.2 0.5 1 1.5 2 3 4 100
..1 1 3 3 3 3 3 3
..cutoff (1) passed = Yes
... etc, etc
**Summary
..number of molecules = 36
..less than cutoff(1) = 35
52271 molecules converted
```
Molecular Mechanics and Force Fields[¶](#molecular-mechanics-and-force-fields)
---
Open Babel provides support for a variety of all-atom molecular mechanics force fields, which are used by a number of features such as 3D coordinate generation and conformer searching. The key idea is to use classical mechanics to rapidly simulate molecular systems.
Each force field method is parameterized for a set of possible molecules (e.g., proteins, organic molecules, etc.), building in assumptions about how various aspects of the molecules contribute to the overall potential energy.
The total potential energy of the system is usually given as a sum of multiple components, including some or all of (but not limited to):
> * Bond stretching
> * Angle bending
> * Dihedral torsions
> * Out-of-plane bending
> * Van der Waals repulsion
> * Atomic partial charges (electrostatic)
Open Babel supports several force field methods. In general, we recommend use of either the [Generalized Amber Force Field (gaff)](index.html#generalized-amber-force-field) or
[MMFF94 Force Field (mmff94)](index.html#mmff94-force-field) for organic molecules, and the
[Universal Force Field (uff)](index.html#universal-force-field) for other types of molecules.
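A force field is selected by name; here is a minimal Python sketch that evaluates the energy of the same molecule with each of these methods (assuming a 3D structure has first been generated):
```
from openbabel import openbabel, pybel

mol = pybel.readstring("smi", "CCO")
mol.make3D()   # force fields need 3D coordinates

for name in ("gaff", "mmff94", "uff"):
    ff = openbabel.OBForceField.FindForceField(name)
    if ff and ff.Setup(mol.OBMol):
        print(name, ff.Energy(), ff.GetUnit())
```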
### Generalized Amber Force Field (gaff)[¶](#generalized-amber-force-field-gaff)
The [AMBER force field](http://en.wikipedia.org/wiki/AMBER) (or, more accurately, the family of force fields used with the [AMBER software](http://ambermd.org/)) is designed mainly for biomolecules (i.e., proteins, DNA, RNA, carbohydrates, etc.).
[GAFF](http://ambermd.org/antechamber/gaff.html) provides a general set of parameters for small organic molecules, allowing simulations of drugs and small-molecule ligands in conjunction with biomolecules. Parameters exist for almost all molecules made of C, N, O, H, S, P, F, Cl, Br, and I, and are compatible with the AMBER functional forms.
Typically, GAFF expects partial charges assigned using quantum chemistry (i.e., HF/6-31G* RESP charges or AM1-BCC). The Open Babel implementation can use other partial charges as available, although with lower resulting accuracy.
In general, GAFF is expected to provide accuracy (in terms of geometry and energies) on par or better than the [MMFF94 Force Field (mmff94)](index.html#mmff94-force-field).
Note
If you use GAFF, you should cite the appropriate paper:
<NAME>., <NAME>.; <NAME>.;<NAME>.;
<NAME>. “Development and testing of a general AMBER force field”. *Journal of Computational Chemistry,* **2004**
v. 25, 1157-1174.
### Ghemical Force Field (ghemical)[¶](#ghemical-force-field-ghemical)
The Ghemical force field matches that of an existing open source package, which provided a force field for geometry optimization and molecular dynamics similar to the (proprietary) Tripos-5.2 force field. It performs acceptably at reproducing the geometries of organic-like molecules.
We recommend use of either the [Generalized Amber Force Field (gaff)](index.html#generalized-amber-force-field) or
[MMFF94 Force Field (mmff94)](index.html#mmff94-force-field) for organic molecules, and the
[Universal Force Field (uff)](index.html#universal-force-field) for other types of molecules.
### MMFF94 Force Field (mmff94)[¶](#mmff94-force-field-mmff94)
The MMFF94 force field (and the related MMFF94s) were developed by Merck and are sometimes called the Merck Molecular Force Field,
although MMFF94 is no longer considered an acronym.
The method provides good accuracy across a range of organic and
*drug-like* molecules. The core parameterization was provided by high-quality quantum calculations, rather than experimental data,
across ~500 test molecular systems.
The method includes parameters for a wide range of atom types including the following common organic elements: C, H, N, O, F, Si, P,
S, Cl, Br, and I. It also supports the following common ions: Fe+2, Fe+3, F-, Cl-, Br-, Li+, Na+, K+, Zn+2, Ca+2, Cu+1, Cu+2,
and Mg+2. The Open Babel implementation should automatically perform atom typing and recognize these elements.
MMFF94 performs well at optimizing geometries, bond lengths, angles,
etc. and includes electrostatic and hydrogen-bonding effects.
Note
If you use MMFF94 you should cite the appropriate papers:
1. <NAME>, *J. Comput. Chem.,* 17, 490-519 **(1996).**
2. <NAME>, *J. Comput. Chem.,* 17, 520-552 **(1996).**
3. <NAME>, *J. Comput. Chem.,* 17, 553-586 **(1996).**
4. <NAME> and <NAME>, *J. Comput. Chem.,* 17, 587-615 **(1996).**
5. <NAME>, *J. Comput. Chem.,* 17, 616-641 **(1996).**
Some experiments and most theoretical calculations show significant pyramidal “puckering” at nitrogens in isolated structures. The MMFF94s
(static) variant has slightly different out-of-plane bending and dihedral torsion parameters to planarize certain types of delocalized trigonal N atoms, such as aromatic aniline. This provides a better match to the time-average molecular geometry in solution or crystal structures.
If you are comparing force-field optimized molecules to crystal structure geometries, we recommend using the MMFF94s variant for this reason. All other parameters are identical.
However, if you are performing “docking” simulations, considering active solution conformations, or carrying out other types of computational studies, we recommend using the MMFF94 variant, since one form or another of the N geometry will predominate.
Note
If you use MMFF94s, you should also cite the following paper that details that method:
6. <NAME>, *J. Comput. Chem.*, 20, 720-729 **(1999).**
### Universal Force Field (uff)[¶](#universal-force-field-uff)
One problem with traditional force fields is a limited set of elements and atom types. The Universal Force Field (UFF) was developed to provide a set of rules and procedures for producing appropriate parameters across the entire periodic table.
While some implementations of UFF use the QEq partial charge model,
the original manuscript and authors of UFF determined the parameterization without an electrostatic model. Consequently, by default the Open Babel implementation does not use electrostatic interactions.
Note
If you use UFF, you should cite the appropriate paper:
<NAME>.; <NAME>.; <NAME>.;
<NAME>; <NAME>.; “UFF, a full periodic table force field for molecular mechanics and molecular dynamics simulations.” *J Am Chem Soc*, **1992** v. 114,
10024-10039.
Write software using the Open Babel library[¶](#write-software-using-the-open-babel-library)
---
Behind the **obabel** command line program lies a complete cheminformatics toolkit, the Open Babel library. Using this library, you can write your own custom scripts and software for yourself or others.
Note
Any software that uses the Open Babel library must abide by the terms of the [GNU General Public License, version 2](http://www.gnu.org/licenses/gpl-2.0.html). This includes all of the supporting language bindings (for example, Python scripts) as well as C++ programs. To summarise, if you are considering distributing your software to other people, you must make your source code available to them on request.
Open Babel is a C++ library and can easily be used from C++. In addition it can be accessed from Python, Perl, Ruby, CSharp and Java. These are referred to as language bindings (the Python bindings, etc.) and they were automatically generated from the C++ library using [SWIG](http://swig.org). For Python we also provide a module (Pybel) that makes it easier to access features of the bindings.
### The Open Babel API[¶](#the-open-babel-api)
The API (Application Programming Interface) is the set of classes, methods and variables that a programming library provides to the user. The Open Babel API is implemented in C++, but the same set of classes, methods and variables are accessed through the various language bindings.
The API documentation is automatically generated from the source code using the Doxygen tool. The following links point to the various versions of the documentation:
* API for the [current release](http://openbabel.org/api/)
* API for the [development version](http://openbabel.org/dev-api/) (updated nightly, with [error report](http://openbabel.org/dev-api/docbuild.out) showing errors in documentation)
* API for specific versions: [2.0](http://openbabel.org/api/2.0/), [2.1](http://openbabel.org/api/2.1/), [2.2](http://openbabel.org/api/2.2/), [2.3](http://openbabel.org/api/2.3/)
The Open Babel toolkit uses a version numbering that indicates how the API has changed over time:
* Bug fix releases (e.g., 2.0.**0**, vs. 2.0.**1**) do not change API at all.
* Minor versions (e.g., 2.**0** vs. 2.**1**) will add function calls, but will be otherwise backwards-compatible.
* Major versions (e.g. **2** vs **3**) are not backwards-compatible, and have changes in the API.
Overall, our goal is for the Open Babel API to remain stable over as long a period as possible. This means that users can be confident that their code will continue to work despite the release of new versions with additional features, file formats and bug fixes. For example, at the time of writing we have been on the version 2 series for almost five years (since November 2005). In other words, a program written using Open Babel almost five years ago still works with the latest release.
### C++[¶](#c)
#### Quickstart example[¶](#quickstart-example)
Here’s an example C++ program that uses the Open Babel toolkit to convert between two chemical file formats:
```
#include <iostream>
#include <fstream>
#include <openbabel/obconversion.h>
using namespace std;
int main(int argc,char **argv)
{
if(argc<3)
{
cout << "Usage: ProgrameName InputFileName OutputFileName\n";
return 1;
}
ifstream ifs(argv[1]);
if(!ifs)
{
cout << "Cannot open input file\n";
return 1;
}
ofstream ofs(argv[2]);
if(!ofs)
{
cout << "Cannot open output file\n";
return 1;
}
OpenBabel::OBConversion conv(&ifs, &ofs);
if(!conv.SetInAndOutFormats("CML","MOL"))
{
cout << "Formats not available\n";
return 1;
}
int n = conv.Convert();
cout << n << " molecules converted\n";
return 0;
}
```
Next, we’ll look at how to compile this.
#### How to compile against the Open Babel library[¶](#how-to-compile-against-the-open-babel-library)
##### Using Makefiles[¶](#using-makefiles)
The following Makefile can be used to compile the above example, assuming that it’s saved as `example.cpp`. You need to have already installed Open Babel somewhere. If the include files or the library are not automatically found when running **make**, you can specify the location as shown by the commented out statements in CFLAGS and LDFLAGS below.
```
CC = g++
CFLAGS = -c # -I /home/user/Tools/openbabel/install/include/openbabel-2.0
LDFLAGS = -lopenbabel # -L /home/user/Tools/openbabel/install/lib
all: example
example: example.o
$(CC) $(LDFLAGS) example.o -o example
example.o: example.cpp
$(CC) $(CFLAGS) $(LDFLAGS) example.cpp
clean:
rm -rf example.o example
```
##### Using CMake[¶](#using-cmake)
Rather than create a Makefile yourself, you can get CMake to do it for you. The nice thing about using CMake is that it can generate not only Makefiles, but also project files for MSVC++, KDevelop and Eclipse (among others). The following `CMakeLists.txt` can be used to generate any of these. The commented out lines can be used to specify the location of the Open Babel library and include files if necessary.
```
cmake_minimum_required(VERSION 2.6)
add_executable(example example.cpp)
target_link_libraries(example openbabel)
# target_link_libraries(example /home/user/Tools/openbabel/install/lib/libopenbabel.so)
# include_directories(/home/user/Tools/openbabel/install/include/openbabel-2.0)
```
#### Further examples[¶](#further-examples)
##### Output Molecular Weight for a Multi-Molecule SDF File[¶](#output-molecular-weight-for-a-multi-molecule-sdf-file)
Let’s say we want to print out the molecular weights of every molecule in an SD file. Why? Well, we might want to plot a histogram of the distribution, or see whether the average of the distribution is significantly different (in the statistical sense) compared to another SD file.
```
#include <iostream>
#include <openbabel/obconversion.h>
#include <openbabel/mol.h>

using namespace OpenBabel;

int main(int argc,char **argv)
{
OBConversion obconversion;
obconversion.SetInFormat("sdf");
OBMol mol;
bool notatend = obconversion.ReadFile(&mol,"../xsaa.sdf");
while (notatend)
{
std::cout << "Molecular Weight: " << mol.GetMolWt() << std::endl;
mol.Clear();
notatend = obconversion.Read(&mol);
}
return(0);
}
```
##### Properties from SMARTS Matches[¶](#properties-from-smarts-matches)
Let’s say that we want to get the average bond length or dihedral angle over particular types of atoms in a large molecule. So we’ll use SMARTS to match a set of atoms and loop through the matches. The following example does this for sulfur-carbon-carbon-sulfur dihedral angles in a polymer and the carbon-carbon bond lengths between the monomer units:
```
#include <iostream>
#include <string>
#include <vector>
#include <cmath>
#include <openbabel/obconversion.h>
#include <openbabel/mol.h>
#include <openbabel/bond.h>
#include <openbabel/parsmart.h>

using namespace std;
using namespace OpenBabel;

int main(int argc, char **argv)
{
OBMol obMol;
OBBond *b1;
OBConversion obConversion;
OBFormat *inFormat;
OBSmartsPattern smarts;
smarts.Init("[#16D2r5][#6D3r5][#6D3r5][#16D2r5]");
string filename;
vector< vector <int> > maplist;
vector< vector <int> >::iterator matches;
double dihedral, bondLength;
for (int i = 1; i < argc; i++)
{
obMol.Clear();
filename = argv[i];
inFormat = obConversion.FormatFromExt(filename.c_str());
obConversion.SetInFormat(inFormat);
obConversion.ReadFile(&obMol, filename);
if (smarts.Match(obMol))
{
dihedral = 0.0;
bondLength = 0.0;
maplist = smarts.GetUMapList();
for (matches = maplist.begin(); matches != maplist.end(); matches++)
{
dihedral += fabs(obMol.GetTorsion((*matches)[0],
(*matches)[1],
(*matches)[2],
(*matches)[3]));
b1 = obMol.GetBond((*matches)[1], (*matches)[2]);
bondLength += b1->GetLength();
}
cout << filename << ": Average Dihedral " << dihedral / maplist.size()
<< " Average Bond Length " << bondLength / maplist.size()
<< " over " << maplist.size() << " matches\n";
}
}
return 0;
}
```
### Python[¶](#python)
#### Introduction[¶](#introduction)
The Python interface to Open Babel is perhaps the most popular of the several languages that Open Babel supports. We provide two Python modules that can be used to access the functionality of Open Babel toolkit:
1. The *openbabel* module:
> This contains the standard Python bindings automatically generated using SWIG from the C++ API. See [The openbabel module](index.html#openbabel-python-module).
>
2. The *Pybel* module:
> This is a light-weight wrapper around the classes and methods in the *openbabel* module. Pybel provides more convenient and Pythonic ways to access the Open Babel toolkit. See [Pybel](index.html#pybel-module).
You don’t have to choose between them though - they can be used together.
#### Install Python bindings[¶](#install-python-bindings)
##### Windows[¶](#windows)
###### Install the bindings[¶](#install-the-bindings)
1. First you need to download and install the main Open Babel executable and library as described in [Install a binary package](index.html#install-binaries).
2. Next, use `pip` to install the Python bindings:
```
pip install -U openbabel
```
**Note**: Python is available as either a 32-bit or 64-bit version. You need to install the corresponding version of Open Babel in step 1.
###### Install Pillow (optional)[¶](#install-pillow-optional)
If you want to display 2D depictions using Pybel (rather than just write to a file), you need to install the Pillow library:
```
pip install -U pillow
```
###### Test the installation[¶](#test-the-installation)
Open a Windows command prompt, and type the following commands to make sure that everything is installed okay. If you get an error message, there’s something wrong and you should email the mailing list with the output from these commands.
```
C:\Documents and Settings\Noel> obabel -V
Open Babel 3.0.0 -- Oct  7 2019 -- 20:18:16
C:\Documents and Settings\Noel> obabel -Hsdf
sdf  MDL MOL format
Reads and writes V2000 and V3000 versions
Read Options, e.g. -as
s determine chirality from atom parity flags
...
...
C:\Documents and Settings\Noel> dir "%BABEL_DATADIR%"\mr.txt
Volume in drive C has no label.
Volume Serial Number is 68A3-3CC9
Directory of C:\Users\Noel\AppData\Roaming\OpenBabel-3.0.0\data
06/10/2019 16:37 4,295 mr.txt
1 File(s) 4,295 bytes
0 Dir(s) 58,607,575,040 bytes free
C:\Documents and Settings\Noel> py
Python 2.7.16 (v2.7.16:413a49145e, Mar  4 2019, 01:37:19) [MSC v.1500 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> from openbabel import pybel
>>> mol = pybel.readstring("smi", "CC(=O)Br")
>>> mol.make3D()
>>> print(mol.write("sdf"))
OpenBabel01010918183D
7 6 0 0 0 0 0 0 0 0999 V2000
1.0166 -0.0354 -0.0062 C 0 0 0 0 0
2.5200 -0.1269 0.0003 C 0 0 0 0 0
3.0871 -1.2168 0.0026 O 0 0 0 0 0
3.2979 1.4258 0.0015 Br 0 0 0 0 0
0.6684 1.0007 0.0052 H 0 0 0 0 0
0.6255 -0.5416 0.8803 H 0 0 0 0 0
0.6345 -0.5199 -0.9086 H 0 0 0 0 0
1 2 1 0 0 0
1 5 1 0 0 0
1 6 1 0 0 0
1 7 1 0 0 0
2 4 1 0 0 0
2 3 2 0 0 0
M  END
$$$$
>>> mol.draw() # If you installed PIL, this will display its structure
>>> (Hit CTRL+Z followed by Enter to exit)
```
##### Linux and MacOSX[¶](#linux-and-macosx)
See [Compile language bindings](index.html#compile-bindings) for information on how to configure CMake to compile the Python bindings. This can be done either globally or locally.
You may need to add the location of `libopenbabel.so` (on my system, the location is `/usr/local/lib`) to the environment variable LD_LIBRARY_PATH if you get the following error when you try to import the OpenBabel library at the Python prompt:
```
$ python
>>> from openbabel import openbabel
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib/python2.4/site-packages/openbabel.py", line 9, in <module>
    import _openbabel
ImportError: libopenbabel.so.3: cannot open shared object file: No such file or directory
```
###### Install Pillow (optional)[¶](#id1)
If you want to display 2D depictions using Pybel (rather than just write to a file), you need the Pillow library, and the Python Tkinter library (part of the standard library).
These should be available through your package manager, e.g. on Ubuntu, Pillow is provided by ‘python-pil’ and
‘python-pil.imagetk’, while Tkinter is provided by ‘python-tk’.
#### The openbabel module[¶](#the-openbabel-module)
The **openbabel** module provides direct access to the C++ Open Babel library from Python. This binding is generated using the SWIG package and provides access to almost all of the Open Babel interfaces via Python, including the base classes OBMol, OBAtom,
OBBond, and OBResidue, as well as the conversion framework OBConversion. As such, essentially any call in the C++ API is available to Python scripts with very little difference in syntax.
As a result, the principal documentation is the
[C++ API documentation](index.html#api).
##### Examples[¶](#examples)
Here we give some examples of common Python syntax for the
`openbabel` module and pointers to the appropriate sections of the API documentation.
The example script below creates atoms and bonds one-by-one using the
[:obapi:`OBMol`](#id1), [:obapi:`OBAtom`](#id3), and [:obapi:`OBBond`](#id5) classes.
```
from openbabel import openbabel
mol = openbabel.OBMol()
print(mol.NumAtoms()) #Should print 0 (atoms)
a = mol.NewAtom()
a.SetAtomicNum(6)          # carbon atom
a.SetVector(0.0, 1.0, 2.0) # coordinates
b = mol.NewAtom()
mol.AddBond(1, 2, 1)       # atoms indexed from 1
print(mol.NumAtoms())      # Should print 2 (atoms)
print(mol.NumBonds())      # Should print 1 (bond)
mol.Clear()
```
More commonly, Open Babel can be used to read in molecules using the [:obapi:`OBConversion`](#id7)
framework. The following script reads in molecular information (a SMI file) from a string, adds hydrogens, and writes out an MDL file as a string.
```
from openbabel import openbabel
obConversion = openbabel.OBConversion()
obConversion.SetInAndOutFormats("smi", "mdl")
mol = openbabel.OBMol()
obConversion.ReadString(mol, "C1=CC=CS1")
print(mol.NumAtoms()) #Should print 5 (atoms)
mol.AddHydrogens()
print(mol.NumAtoms()) # Should print 9 (atoms) after adding hydrogens
outMDL = obConversion.WriteString(mol)
```
The following script writes out a file using a filename, rather than reading and writing to a Python string.
```
from openbabel import openbabel
obConversion = openbabel.OBConversion()
obConversion.SetInAndOutFormats("pdb", "mol2")
mol = openbabel.OBMol()
obConversion.ReadFile(mol, "1ABC.pdb.gz") # Open Babel will uncompress automatically
mol.AddHydrogens()
print(mol.NumAtoms())
print(mol.NumBonds())
print(mol.NumResidues())
obConversion.WriteFile(mol, '1abc.mol2')
```
##### Using iterators[¶](#using-iterators)
A number of Open Babel toolkit classes provide iterators over various objects; these classes are identifiable by the suffix
“Iter” in the
[list of toolkit classes](http://openbabel.sourceforge.net/api/current/annotated.shtml)
in the API:
* [OBAtomAtomIter](http://openbabel.sourceforge.net/api/current/classOpenBabel_1_1OBAtomAtomIter.shtml)
and
[OBAtomBondIter](http://openbabel.sourceforge.net/api/current/classOpenBabel_1_1OBAtomBondIter.shtml)
- given an OBAtom, iterate over all neighboring OBAtoms or OBBonds
* [OBMolAtomIter](http://openbabel.sourceforge.net/api/current/classOpenBabel_1_1OBMolAtomIter.shtml),
[OBMolBondIter](http://openbabel.sourceforge.net/api/current/classOpenBabel_1_1OBMolBondIter.shtml),
[OBMolAngleIter](http://openbabel.sourceforge.net/api/current/classOpenBabel_1_1OBMolAngleIter.shtml),
[OBMolTorsionIter](http://openbabel.sourceforge.net/api/current/classOpenBabel_1_1OBMolTorsionIter.shtml),
[OBMolRingIter](http://openbabel.sourceforge.net/api/current/classOpenBabel_1_1OBMolRingIter.shtml)
- given an OBMol, iterate over all OBAtoms, OBBonds, OBAngles,
OBTorsions or OBRings.
* [OBMolAtomBFSIter](http://openbabel.sourceforge.net/api/current/classOpenBabel_1_1OBMolAtomBFSIter.shtml)
- given an OBMol and the index of an atom, OBMolAtomBFSIter iterates over all the neighbouring atoms in a breadth-first manner.
It differs from the other iterators in that it returns two values -
an OBAtom, and the ‘depth’ of the OBAtom in the breadth-first search (this is useful, for example, when creating circular fingerprints)
* [OBMolPairIter](http://openbabel.sourceforge.net/api/current/classOpenBabel_1_1OBMolPairIter.shtml)
- given an OBMol, iterate over all pairs of OBAtoms separated by more than three bonds
* [OBResidueIter](http://openbabel.sourceforge.net/api/current/classOpenBabel_1_1OBResidueIter.shtml)
- given an OBMol representing a protein, iterate over all OBResidues
* [OBResidueAtomIter](http://openbabel.sourceforge.net/api/current/classOpenBabel_1_1OBResidueAtomIter.shtml)
- given an OBResidue, iterate over all OBAtoms
These iterator classes can be used using the typical Python syntax for iterators:
```
for obatom in openbabel.OBMolAtomIter(obmol):
print(obatom.GetAtomicMass())
```
Note that OBMolTorsionIter returns atom IDs which are off by one.
That is, you need to add one to each ID to get the correct ID.
Also, if you add or remove atoms, you will need to delete the existing TorsionData before using OBMolTorsionIter. This is done as follows:
```
mol.DeleteData(openbabel.TorsionData)
```
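For example, here is a short sketch (assuming `obmol` is an existing OBMol, as in the scripts above) that iterates over all torsions and corrects the off-by-one IDs:
```
for torsion in openbabel.OBMolTorsionIter(obmol):
    a, b, c, d = [idx + 1 for idx in torsion]   # correct the off-by-one atom IDs
    print(obmol.GetTorsion(a, b, c, d))         # torsion angle in degrees
```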
##### Calling a method requiring an array of C doubles[¶](#calling-a-method-requiring-an-array-of-c-doubles)
Some Open Babel toolkit methods, for example [:obapi:`OBMol::Rotate() <OpenBabel::OBMol::Rotate>`](#id9),
require an array of doubles. It’s not possible to directly use a list of floats when calling such a function from Python. Instead,
you need to first explicitly create a C array using the
*double_array()* function:
```
obMol.Rotate([1.0, -54.7, 3])
# Error!
myarray = openbabel.double_array([1.0, -54.7, 3])
obMol.Rotate(myarray)
# Works!
```
##### Accessing OBPairData, OBUnitCell and other OBGenericData[¶](#accessing-obpairdata-obunitcell-and-other-obgenericdata)
If you want to access any subclass of OBGenericData (such as [:obapi:`OBPairData`](#id11)
or [:obapi:`OBUnitCell`](#id13))
associated with a molecule, you need to ‘cast’ the [:obapi:`OBGenericData`](#id15)
returned by [:obapi:`OBMol.GetData() <OpenBabel::OBMol::GetData>`](#id17) using the *toPairData()*, *toUnitCell()* (etc.)
functions:
```
pairdata = [openbabel.toPairData(x) for x in obMol.GetData()
if x.GetDataType()==openbabel.PairData]
print(pairdata[0].GetAttribute(), pairdata[0].GetValue())
unitcell = openbabel.toUnitCell(obMol.GetData(openbabel.UnitCell))
print(unitcell.GetAlpha(), unitcell.GetSpaceGroup())
```
##### Using FastSearch from Python[¶](#using-fastsearch-from-python)
Rather than use the [:obapi:`FastSearch`](#id19) class directly, it’s easiest to use the [:obapi:`OpenInAndOutFiles() <OpenBabel::OBConversion::OpenInAndOutFiles>`](#id21) method as follows:
```
>>> from openbabel import openbabel
>>> conv=openbabel.OBConversion()
>>> conv.OpenInAndOutFiles("1200mols.smi","index.fs")
True
>>> conv.SetInAndOutFormats("smi","fs")
True
>>> conv.Convert()
This will prepare an index of 1200mols.smi and may take some time...
It took 6 seconds 1192
>>> conv.CloseOutFile()
>>> conv.OpenInAndOutFiles("index.fs","results.smi")
True
>>> conv.SetInAndOutFormats("fs","smi")
True
>>> conv.AddOption("s",conv.GENOPTIONS,"C=CC#N")
>>> conv.Convert()
10 candidates from fingerprint search phase 1202
>>> f=open("results.smi")
>>> f.read()
'OC(=O)C(=Cc1ccccc1)C#N\t298\nN#CC(=Cc1ccccc1)C#N\t490\nO=N(=O)c1cc(ccc1)C=C(C#N)C#N\t491\nClc1ccc(cc1)C=C(C#N)C#N\t492\nClc1ccc(c(c1)Cl)C=C(C#N)C#N\t493\nClc1ccc(cc1Cl)C=C(C#N)C#N\t494\nBrc1ccc(cc1)C=C(C#N)C#N\t532\nClc1ccccc1C=C(C#N)C#N\t542\nN#CC(=CC=Cc1occc1)C#N\t548\nCCOC(=O)C(C#N)=C(C)C\t1074\n'
```
##### Combining numpy with Open Babel[¶](#combining-numpy-with-open-babel)
If you are using the Python numerical extension, numpy, and you try to pass values from a numpy array to Open Babel, it may not work unless you convert the values to Python built-in types first:
```
import numpy
from openbabel import openbabel
mol = openbabel.OBMol()
atom = mol.NewAtom()
coord = numpy.array([1.2, 2.3, 4.6], "float32")
atom.SetVector(coord[0], coord[1], coord[2])
# Error
atom.SetVector(float(coord[0]), float(coord[1]), float(coord[2]))
# No error
coord = numpy.array([1.2, 2.3, 4.6], "float64")
atom.SetVector(coord[0], coord[1], coord[2])
# No error either - not all numpy arrays will cause an error
```
#### Pybel[¶](#pybel)
Pybel provides convenience functions and classes that make it simpler to use the Open Babel libraries from Python, especially for file input/output and for accessing the attributes of atoms and molecules. The Atom and Molecule classes used by Pybel can be converted to and from the OBAtom and OBMol used by the
`openbabel` module. These features are discussed in more detail below.
The rationale and technical details behind Pybel are described in O’Boyle et al [[omh2008]](index.html#omh2008). To support further development of Pybel, please cite this paper if you use Pybel to obtain results for publication.
Information on the Pybel API can be found at the interactive Python prompt using the `help()` function. The full API is also listed in the next section (see [Pybel API](index.html#pybel-api)).
To use Pybel, use `from openbabel import pybel`.
| [[omh2008]](#id1) | <NAME>, <NAME> and <NAME>.
**Pybel: a Python wrapper for the OpenBabel cheminformatics toolkit.**
*Chem. Cent. J.* **2008**, *2*, 5.
[[Link](https://doi.org/10.1186/1752-153X-2-5)] |
##### Atoms and Molecules[¶](#atoms-and-molecules)
A
[`Molecule`](index.html#pybel.Molecule)
can be created in any of three ways:
1. From an [:obapi:`OBMol`](#id2), using `Molecule(myOBMol)`
2. By reading from a file (see [Input/Output](#id14)
below)
3. By reading from a string (see [Input/Output](#id14)
below)
An [`Atom`](index.html#pybel.Atom) can be created in two different ways:
1. From an [:obapi:`OBAtom`](#id4), using `Atom(myOBAtom)`
2. By accessing the [`atoms`](index.html#pybel.Molecule.atoms) attribute of a [`Molecule`](index.html#pybel.Molecule)
Molecules have the following attributes: [`atoms`](index.html#pybel.Molecule.atoms), [`charge`](index.html#pybel.Molecule.charge), [`data`](index.html#pybel.Molecule.data), [`dim`](index.html#pybel.Molecule.dim),
[`energy`](index.html#pybel.Molecule.energy), [`exactmass`](index.html#pybel.Molecule.exactmass), [`formula`](index.html#pybel.Molecule.formula), [`molwt`](index.html#pybel.Molecule.molwt), [`spin`](index.html#pybel.Molecule.spin), [`sssr`](index.html#pybel.Molecule.sssr), [`title`](index.html#pybel.Molecule.title)
and [`unitcell`](index.html#pybel.Molecule.unitcell) (if crystal data). The [`atoms`](index.html#pybel.Molecule.atoms) attribute provides a list of the Atoms in a Molecule. The [`data`](index.html#pybel.Molecule.data) attribute returns a dictionary-like object for accessing and editing the data fields associated with the molecule (technically, it’s a
[`MoleculeData`](index.html#pybel.MoleculeData)
object, but you can use it like it’s a regular dictionary). The
[`unitcell`](index.html#pybel.Molecule.unitcell) attribute gives access to any unit cell data associated with the molecule (see
[:obapi:`OBUnitCell`](#id6)).
The remaining attributes correspond directly to attributes of OBMols: e.g. [`formula`](index.html#pybel.Molecule.formula) is equivalent to
[:obapi:`OBMol::GetFormula() <OpenBabel::OBMol::GetFormula>`](#id8). For more information on what these attributes are, please see the Open Babel C++ documentation for
[:obapi:`OBMol`](#id10).
For example, let’s suppose we have an SD file containing descriptor values in the data fields:
```
>>> mol = next(readfile("sdf", "calculatedprops.sdf")) # (readfile is described below)
>>> print(mol.molwt)
100.1
>>> print(len(mol.atoms))
16
>>> print(mol.data.keys())
{'Comment': 'Created by CDK', 'NSC': 1, 'Hydrogen Bond Donors': 3,
'Surface Area': 342.43, .... }
>>> print(mol.data['Hydrogen Bond Donors'])
3
>>> mol.data['Random Value'] = random.randint(0,1000) # Add a descriptor containing noise
```
Molecules have a [`write()`](index.html#pybel.Molecule.write)
method that writes a representation of a Molecule to a file or to a string. See [Input/Output](#Input.2FOutput) below. They also have a [`calcfp()`](index.html#pybel.Molecule.calcfp)
method that calculates a molecular fingerprint. See [Fingerprints](#fingerprints-pybel)
below.
The [`draw()`](index.html#pybel.Molecule.draw)
method of a Molecule generates 2D coordinates and a 2D depiction of a molecule. It uses the
[OASA library](http://bkchem.zirael.org/oasa_en.html) by <NAME> to do this. The default options are to show the image on the screen (`show=True`), not to write to a file (`filename=None`), to calculate 2D coordinates
(`usecoords=False`) but not to store them (`update=False`).
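For example, the following sketch (the SMILES string and output filename are only placeholders, and the OASA backend must support the chosen file type) writes a depiction to a file instead of displaying it on screen:
```
from openbabel import pybel

mol = pybel.readstring("smi", "c1ccccc1")
mol.draw(show=False, filename="benzene.png")   # calculate 2D coordinates and write an image
```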
The [`addh()`](index.html#pybel.Molecule.addh)
and [`removeh()`](index.html#pybel.Molecule.removeh)
methods allow hydrogens to be added and removed.
If a molecule does not have 3D coordinates, they can be generated using the [`make3D()`](index.html#pybel.Molecule.make3D)
method. By default, this includes 50 steps of a geometry optimisation using the MMFF94 forcefield. The list of available forcefields is stored in the
[`forcefields`](index.html#pybel.forcefields)
variable. To further optimise the structure, you can use the
[`localopt()`](index.html#pybel.Molecule.localopt)
method, which by default carries out 500 steps of an optimisation using MMFF94. Note that hydrogens need to be added before calling
`localopt()`.
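Putting these together, a minimal sketch for building a 3D structure from a SMILES string (the SMILES and output filename are only illustrative) might look like:
```
from openbabel import pybel

mol = pybel.readstring("smi", "CC(=O)Oc1ccccc1C(=O)O")
mol.make3D()                                     # adds hydrogens, then a short MMFF94 optimisation
mol.localopt(forcefield="mmff94", steps=500)     # refine the geometry further
mol.write("sdf", "output3d.sdf", overwrite=True)
```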
The [`calcdesc()`](index.html#pybel.Molecule.calcdesc)
method of a Molecule returns a dictionary containing descriptor values for LogP, Polar Surface Area (“TPSA”) and Molar Refractivity
(“MR”). A list of the available descriptors is contained in the variable [`descs`](index.html#pybel.descs).
If only one or two descriptor values are required, you can specify the names as follows: `calcdesc(["LogP", "TPSA"])`. Since the
[`data`](index.html#pybel.Molecule.data) attribute of a Molecule is also a dictionary, you can easily add the result of `calcdesc()` to an SD file (for example)
as follows:
```
mol = next(readfile("sdf", "without_desc.sdf"))
descvalues = mol.calcdesc()
# In Python, the update method of a dictionary allows you
# to add the contents of one dictionary to another
mol.data.update(descvalues)
output = Outputfile("sdf", "with_desc.sdf")
output.write(mol)
output.close()
```
For convenience, a Molecule provides an iterator over its Atoms.
This is used as follows:
```
for atom in myMolecule:
# do something with atom
```
Atoms have the following attributes: [`atomicmass`](index.html#pybel.Atom.atomicmass), [`atomicnum`](index.html#pybel.Atom.atomicnum),
[`coords`](index.html#pybel.Atom.coords), [`exactmass`](index.html#pybel.Atom.exactmass), [`formalcharge`](index.html#pybel.Atom.formalcharge), [`heavyvalence`](index.html#pybel.Atom.heavyvalence),
[`heterovalence`](index.html#pybel.Atom.heterovalence), [`hyb`](index.html#pybel.Atom.hyb), [`idx`](index.html#pybel.Atom.idx), [`implicitvalence`](index.html#pybel.Atom.implicitvalence), [`isotope`](index.html#pybel.Atom.isotope),
[`partialcharge`](index.html#pybel.Atom.partialcharge), [`spin`](index.html#pybel.Atom.spin), [`type`](index.html#pybel.Atom.type), [`valence`](index.html#pybel.Atom.valence), [`vector`](index.html#pybel.Atom.vector). The `.coords`
attribute provides a tuple (x, y, z) of the atom’s coordinates. The remaining attributes are as for the *Get* methods of
[:obapi:`OBAtom`](#id12).
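For example, the element, atom type and coordinates of each atom can be listed with a small sketch like the following (for a molecule read from SMILES, the coordinates will simply be (0.0, 0.0, 0.0)):
```
from openbabel import pybel

mol = pybel.readstring("smi", "CCO")
for atom in mol:
    print(atom.atomicnum, atom.type, atom.coords)
```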
##### Input/Output[¶](#id14)
One of the strengths of Open Babel is the number of chemical file formats that it can handle (see [Supported File Formats and Options](index.html#file-formats)). Pybel provides a dictionary of the input and output formats in the variables [`informats`](index.html#pybel.informats)
and [`outformats`](index.html#pybel.outformats)
where the keys are the three-letter codes for each format (e.g.
`pdb`) and the values are the descriptions (e.g. `Protein Data Bank format`).
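For example, the descriptions can be looked up directly from these dictionaries (a small sketch):
```
from openbabel import pybel

print(pybel.informats["smi"])                      # description of the SMILES input format
print(len(pybel.outformats), "output formats available")
```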
Pybel greatly simplifies the process of reading and writing molecules to and from strings or files. There are two functions for reading Molecules:
1. [`readstring()`](index.html#pybel.readstring) reads a Molecule from a string
2. [`readfile()`](index.html#pybel.readfile) provides an iterator over the Molecules in a file
Here are some examples of their use. Note in particular the use of
`next()` to access the first (and possibly only) molecule in a file:
```
>>> mymol = readstring("smi", "CCCC")
>>> print(mymol.molwt)
58
>>> for mymol in readfile("sdf", "largeSDfile.sdf"):
... print(mymol.molwt)
>>> singlemol = next(readfile("pdb", "1CRN.pdb"))
```
If a single molecule is to be written to a file or string, the
[`write()`](index.html#pybel.Molecule.write)
method of the Molecule should be used:
1. `mymol.write(format)` returns a string
2. `mymol.write(format, filename)` writes the Molecule to a file.
An optional additional parameter, `overwrite`, should be set to
`True` if you wish to overwrite an existing file.
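For example (a minimal sketch, reusing the filename from the examples below):
```
mymol.write("smi", "outputfile.txt", overwrite=True)
```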
For files containing multiple molecules, the
[`Outputfile`](index.html#pybel.Outputfile)
class should be used instead. This is initialised with a format and filename (and optional `overwrite` parameter). To write a Molecule to the file, the
[`write()`](index.html#pybel.Outputfile.write)
method of the Outputfile is called with the Molecule as a parameter. When all molecules have been written, the
[`close()`](index.html#pybel.Outputfile.close)
method of the Outputfile should be called.
Here are some examples of output using the Pybel methods and classes:
```
>>> print(mymol.write("smi"))
'CCCC'
>>> mymol.write("smi", "outputfile.txt")
>>> largeSDfile = Outputfile("sdf", "multipleSD.sdf")
>>> largeSDfile.write(mymol)
>>> largeSDfile.write(myothermol)
>>> largeSDfile.close()
```
##### Fingerprints[¶](#fingerprints)
A [`Fingerprint`](index.html#pybel.Fingerprint)
can be created in either of two ways:
1. From a vector returned by the OpenBabel GetFingerprint() method,
using `Fingerprint(myvector)`
2. By calling the [`calcfp()`](index.html#pybel.Molecule.calcfp)
method of a Molecule
The [`calcfp()`](index.html#pybel.Molecule.calcfp) method takes an optional argument, `fptype`,
which should be one of the fingerprint types supported by OpenBabel
(see [Molecular fingerprints and similarity searching](index.html#fingerprints)). The list of supported fingerprints is stored in the variable
[`fps`](index.html#pybel.fps).
If unspecified, the default fingerprint (`FP2`) is calculated.
Once created, the Fingerprint has two attributes: `fp` gives the original OpenBabel vector corresponding to the fingerprint, and
[`bits`](index.html#pybel.Fingerprint.bits) gives a list of the bits that are set.
The Tanimoto coefficient of two Fingerprints can be calculated using the `|` operator.
Here is an example of its use:
```
>>> from openbabel import pybel
>>> smiles = ['CCCC', 'CCCN']
>>> mols = [pybel.readstring("smi", x) for x in smiles] # Create a list of two molecules
>>> fps = [x.calcfp() for x in mols] # Calculate their fingerprints
>>> print(fps[0].bits, fps[1].bits)
[261, 385, 671] [83, 261, 349, 671, 907]
>>> print(fps[0] | fps[1]) # Print the Tanimoto coefficient
0.3333
```
##### SMARTS matching[¶](#smarts-matching)
Pybel also provides a simplified API to the Open Babel SMARTS pattern matcher. A
[`Smarts`](index.html#pybel.Smarts)
object is created, and the
[`findall()`](index.html#pybel.Smarts.findall)
method is then used to return a list of the matches to a given Molecule.
Here is an example of its use:
```
>>> mol = readstring("smi","CCN(CC)CC") # triethylamine
>>> smarts = Smarts("[#6][#6]") # Matches an ethyl group
>>> print(smarts.findall(mol))
[(1, 2), (4, 5), (6, 7)]
```
##### Combining Pybel with `openbabel.py`[¶](#combining-pybel-with-openbabel-py)
It is easy to combine the ease of use of Pybel with the comprehensive coverage of the Open Babel toolkit that
`openbabel.py` provides. Pybel is really a wrapper around
`openbabel.py`, with the result that the OBAtom and OBMol used by
`openbabel.py` can be interconverted to the Atom and Molecule used by Pybel.
The following example shows how to read a molecule from a PDB file using Pybel, and then how to use `openbabel.py` to add hydrogens. It also illustrates how to find out information on what methods and classes are available, while at the interactive Python prompt.
```
>>> from openbabel import pybel
>>> mol = next(pybel.readfile("pdb", "1PYB"))
>>> help(mol)
Help on Molecule in module pybel object:
...
| Attributes:
| atoms, charge, dim, energy, exactmass, flags, formula,
| mod, molwt, spin, sssr, title.
...
| The original Open Babel molecule can be accessed using the attribute:
| OBMol
...
>>> print(len(mol.atoms), mol.molwt)
3430 49315.2
>>> dir(mol.OBMol) # Show the list of methods provided by openbabel.py
['AddAtom', 'AddBond', 'AddConformer', 'AddHydrogens', 'AddPolarHydrogens', ... ]
>>> mol.OBMol.AddHydrogens()
>>> print(len(mol.atoms), mol.molwt)
7244 49406.0
```
The next example is an extension of one of the `openbabel.py`
examples at the top of this page. It shows how a molecule could be created using `openbabel.py`, and then written to a file using Pybel:
```
from openbabel import openbabel, pybel
mol = openbabel.OBMol()
a = mol.NewAtom()
a.SetAtomicNum(6)          # carbon atom
a.SetVector(0.0, 1.0, 2.0) # coordinates
b = mol.NewAtom()
mol.AddBond(1, 2, 1) # atoms indexed from 1
pybelmol = pybel.Molecule(mol)
pybelmol.write("sdf", "outputfile.sdf")
```
#### Pybel API[¶](#pybel-api)
**pybel** - A Python module that simplifies access to the Open Babel API
Global variables:
[`informats`](#pybel.informats), [`outformats`](#pybel.outformats), [`descs`](#pybel.descs),
[`fps`](#pybel.fps), [`forcefields`](#pybel.forcefields), [`operations`](#pybel.operations)
Functions:
[`readfile()`](#pybel.readfile), [`readstring()`](#pybel.readstring)
Classes:
[`Atom`](#pybel.Atom), [`Molecule`](#pybel.Molecule), [`Outputfile`](#pybel.Outputfile), [`Fingerprint`](#pybel.Fingerprint),
[`Smarts`](#pybel.Smarts), [`MoleculeData`](#pybel.MoleculeData)
Note
The `openbabel.py` module can be accessed through the `ob` global variable.
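In other words, the full `openbabel` API is reachable from Pybel without a separate import (a small sketch):
```
from openbabel import pybel

obmol = pybel.ob.OBMol()   # pybel.ob is the underlying openbabel module
```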
*class* `pybel.``Atom`(*OBAtom*)[¶](#pybel.Atom)
Represent an atom.
Required parameter:
**OBAtom** – an Open Babel [:obapi:`OBAtom`](#id2)
Attributes:
[`atomicmass`](#pybel.Atom.atomicmass), [`atomicnum`](#pybel.Atom.atomicnum), [`coords`](#pybel.Atom.coords),
[`exactmass`](#pybel.Atom.exactmass),
[`formalcharge`](#pybel.Atom.formalcharge), [`heavyvalence`](#pybel.Atom.heavyvalence), [`heterovalence`](#pybel.Atom.heterovalence),
[`hyb`](#pybel.Atom.hyb), [`idx`](#pybel.Atom.idx),
[`implicitvalence`](#pybel.Atom.implicitvalence), [`isotope`](#pybel.Atom.isotope), [`partialcharge`](#pybel.Atom.partialcharge),
[`spin`](#pybel.Atom.spin), [`type`](#pybel.Atom.type),
[`valence`](#pybel.Atom.valence), [`vector`](#pybel.Atom.vector).
The underlying Open Babel [:obapi:`OBAtom`](#id4) can be accessed using the attribute:
> `OBAtom`
`atomicmass`[¶](#pybel.Atom.atomicmass)
Atomic mass
`atomicnum`[¶](#pybel.Atom.atomicnum)
Atomic number
`coords`[¶](#pybel.Atom.coords)
Coordinates of the atom
`exactmass`[¶](#pybel.Atom.exactmass)
Exact mass
`formalcharge`[¶](#pybel.Atom.formalcharge)
Formal charge
`heavyvalence`[¶](#pybel.Atom.heavyvalence)
Number of non-hydrogen atoms attached
`heterovalence`[¶](#pybel.Atom.heterovalence)
Number of heteroatoms attached
`hyb`[¶](#pybel.Atom.hyb)
The hybridization of this atom: 1 for sp, 2 for sp2, 3 for sp3, …
For further details see [:obapi:`OBAtom::GetHyb() <OpenBabel::OBAtom::GetHyb>`](#id6)
`idx`[¶](#pybel.Atom.idx)
The index of the atom in the molecule (starts at 1)
`implicitvalence`[¶](#pybel.Atom.implicitvalence)
The maximum number of connections expected for this atom
`isotope`[¶](#pybel.Atom.isotope)
The isotope for this atom if specified; 0 otherwise.
`partialcharge`[¶](#pybel.Atom.partialcharge)
Partial charge
`spin`[¶](#pybel.Atom.spin)
Spin multiplicity
`type`[¶](#pybel.Atom.type)
Atom type
`valence`[¶](#pybel.Atom.valence)
Number of explicit connections
`vector`[¶](#pybel.Atom.vector)
Coordinates as a [:obapi:`vector3`](#id8) object.
*class* `pybel.``Fingerprint`(*fingerprint*)[¶](#pybel.Fingerprint)
A molecular fingerprint.
Required parameters:
**fingerprint** – a vector calculated by [:obapi:`OBFingerprint::FindFingerprint() <OpenBabel::OBFingerprint::FindFingerprint>`](#id10)
Attributes:
[`bits`](#pybel.Fingerprint.bits)
Methods:
The `|` operator can be used to calculate the Tanimoto coefficient. For example, given two Fingerprints a and b,
the Tanimoto coefficient is given by:
```
tanimoto = a | b
```
The underlying fingerprint object can be accessed using the attribute `fp`.
`bits`[¶](#pybel.Fingerprint.bits)
A list of bits set in the fingerprint
*class* `pybel.``Molecule`(*OBMol*)[¶](#pybel.Molecule)
Represent a Pybel Molecule.
Required parameter:
**OBMol** – an Open Babel [:obapi:`OBMol`](#id12) or any type of Cinfony Molecule Attributes:
[`atoms`](#pybel.Molecule.atoms), [`charge`](#pybel.Molecule.charge), [`conformers`](#pybel.Molecule.conformers), [`data`](#pybel.Molecule.data),
[`dim`](#pybel.Molecule.dim), [`energy`](#pybel.Molecule.energy), [`exactmass`](#pybel.Molecule.exactmass), [`formula`](#pybel.Molecule.formula),
[`molwt`](#pybel.Molecule.molwt), [`spin`](#pybel.Molecule.spin), [`sssr`](#pybel.Molecule.sssr), [`title`](#pybel.Molecule.title), [`unitcell`](#pybel.Molecule.unitcell).
Methods:
[`addh()`](#pybel.Molecule.addh), [`calcfp()`](#pybel.Molecule.calcfp), [`calcdesc()`](#pybel.Molecule.calcdesc), [`draw()`](#pybel.Molecule.draw),
[`localopt()`](#pybel.Molecule.localopt), [`make3D()`](#pybel.Molecule.make3D), [`removeh()`](#pybel.Molecule.removeh), [`write()`](#pybel.Molecule.write)
The underlying Open Babel [:obapi:`OBMol`](#id14) can be accessed using the attribute:
`OBMol`
An iterator (`__iter__()`) is provided that iterates over the atoms of the molecule. This allows constructions such as the following:
```
for atom in mymol:
print(atom)
```
`addh`()[¶](#pybel.Molecule.addh)
Add hydrogens.
`atoms`[¶](#pybel.Molecule.atoms)
A list of atoms of the molecule
`calcdesc`(*descnames=[]*)[¶](#pybel.Molecule.calcdesc)
Calculate descriptor values.
Optional parameter:
**descnames** – a list of names of descriptors See the [`descs`](#pybel.descs) variable for a list of available descriptors.
If descnames is not specified, all available descriptors are calculated.
`calcfp`(*fptype='FP2'*)[¶](#pybel.Molecule.calcfp)
Calculate a molecular fingerprint.
Optional parameters:
**fptype** – the fingerprint type (default is `FP2`).
See the [`fps`](#pybel.fps) variable for a list of available fingerprint types.
`charge`[¶](#pybel.Molecule.charge)
The charge on the molecule
`conformers`[¶](#pybel.Molecule.conformers)
Conformers of the molecule
`data`[¶](#pybel.Molecule.data)
Access the molecule’s data through a dictionary-like object,
[`MoleculeData`](#pybel.MoleculeData).
`dim`[¶](#pybel.Molecule.dim)
Are the coordinates 2D, 3D or 0D?
`draw`(*show=True*, *filename=None*, *update=False*, *usecoords=False*)[¶](#pybel.Molecule.draw)
Create a 2D depiction of the molecule.
Optional parameters:
> **show** – display on screen (default is `True`)
> **filename** – write to file (default is `None`)
> **update** – update the coordinates of the atoms
> This sets the atom coordinates to those
> determined by the structure diagram generator
> (default is `False`)
> **usecoords** – use the current coordinates
> This causes the current coordinates to be used
> instead of calculating new 2D coordinates
> (default is `False`)
OASA is used for 2D coordinate generation and depiction. Tkinter and Python Imaging Library are required for image display.
`energy`[¶](#pybel.Molecule.energy)
The molecule’s energy
`exactmass`[¶](#pybel.Molecule.exactmass)
The exact mass
`formula`[¶](#pybel.Molecule.formula)
The molecular formula
`localopt`(*forcefield='mmff94'*, *steps=500*)[¶](#pybel.Molecule.localopt)
Locally optimize the coordinates.
Optional parameters:
**forcefield** – default is `mmff94`.
See the [`forcefields`](#pybel.forcefields) variable for a list of available forcefields.
**steps** – default is 500
If the molecule does not have any coordinates, [`make3D()`](#pybel.Molecule.make3D) is called before the optimization. Note that the molecule needs to have explicit hydrogens. If not, call [`addh()`](#pybel.Molecule.addh).
`make3D`(*forcefield='mmff94'*, *steps=50*)[¶](#pybel.Molecule.make3D)
Generate 3D coordinates.
Optional parameters:
**forcefield** – default is `mmff94`.
See the [`forcefields`](#pybel.forcefields) variable for a list of available forcefields.
**steps** – default is `50`
Once coordinates are generated, hydrogens are added and a quick local optimization is carried out with 50 steps and the MMFF94 forcefield. Call [`localopt()`](#pybel.Molecule.localopt) if you want to improve the coordinates further.
`molwt`[¶](#pybel.Molecule.molwt)
The molecular weight
`removeh`()[¶](#pybel.Molecule.removeh)
Remove hydrogens.
`spin`[¶](#pybel.Molecule.spin)
The spin multiplicity
`sssr`[¶](#pybel.Molecule.sssr)
The Smallest Set of Smallest Rings (SSSR)
`title`[¶](#pybel.Molecule.title)
The molecule title
`unitcell`[¶](#pybel.Molecule.unitcell)
Access any unit cell data
`write`(*format='smi'*, *filename=None*, *overwrite=False*)[¶](#pybel.Molecule.write)
Write the molecule to a file or return a string.
Optional parameters:
**format** – chemical file format See the [`outformats`](#pybel.outformats) variable for a list of available output formats (default is `smi`)
**filename** – default is `None`
**overwrite** – overwrite the output file if it already exists?
Default is `False`.
If a filename is specified, the result is written to a file.
Otherwise, a string is returned containing the result.
To write multiple molecules to the same file you should use the [`Outputfile`](#pybel.Outputfile) class.
*class* `pybel.``MoleculeData`(*obmol*)[¶](#pybel.MoleculeData)
Store molecule data in a dictionary-type object
Required parameters:
obmol – an Open Babel [:obapi:`OBMol`](#id16)
Methods and accessor methods are like those of a dictionary except that the data is retrieved on-the-fly from the underlying [:obapi:`OBMol`](#id18).
Example:
```
>>> mol = readfile("sdf", 'head.sdf').next() # Python 2
>>> # mol = next(readfile("sdf", 'head.sdf')) # Python 3
>>> data = mol.data
>>> print(data)
{'Comment': 'CORINA 2.61 0041 25.10.2001', 'NSC': '1'}
>>> print(len(data), data.keys(), data.has_key("NSC"))
2 ['Comment', 'NSC'] True
>>> print(data['Comment'])
CORINA 2.61 0041 25.10.2001
>>> data['Comment'] = 'This is a new comment'
>>> for k,v in data.items():
... print(k, "-->", v)
Comment --> This is a new comment
NSC --> 1
>>> del data['NSC']
>>> print(len(data), data.keys(), data.has_key("NSC"))
1 ['Comment'] False
```
`clear`()[¶](#pybel.MoleculeData.clear)
`has_key`(*key*)[¶](#pybel.MoleculeData.has_key)
`items`()[¶](#pybel.MoleculeData.items)
`iteritems`()[¶](#pybel.MoleculeData.iteritems)
`keys`()[¶](#pybel.MoleculeData.keys)
`update`(*dictionary*)[¶](#pybel.MoleculeData.update)
`values`()[¶](#pybel.MoleculeData.values)
*class* `pybel.``Outputfile`(*format*, *filename*, *overwrite=False*, *opt=None*)[¶](#pybel.Outputfile)
Represent a file to which *output* is to be sent.
Although it’s possible to write a single molecule to a file by calling the [`write()`](#pybel.Molecule.write) method of a [`Molecule`](#pybel.Molecule), if multiple molecules are to be written to the same file you should use the [`Outputfile`](#pybel.Outputfile) class.
Required parameters:
**format** – chemical file format See the [`outformats`](#pybel.outformats) variable for a list of available output formats
**filename**
Optional parameters:
**overwrite** – overwrite the output file if it already exists?
Default is `False`
**opt** – a dictionary of format-specific options For format options with no parameters, specify the value as None.
Methods:
[`write()`](#pybel.Outputfile.write), [`close()`](#pybel.Outputfile.close)
`close`()[¶](#pybel.Outputfile.close)
Close the output file to further writing.
`write`(*molecule*)[¶](#pybel.Outputfile.write)
Write a molecule to the output file.
Required parameters:
**molecule** – A [`Molecule`](#pybel.Molecule)
*class* `pybel.``Smarts`(*smartspattern*)[¶](#pybel.Smarts)
A Smarts Pattern Matcher
Required parameters:
**smartspattern** - A SMARTS pattern Methods:
[`findall()`](#pybel.Smarts.findall)
Example:
```
>>> mol = readstring("smi","CCN(CC)CC") # triethylamine
>>> smarts = Smarts("[#6][#6]") # Matches an ethyl group
>>> print(smarts.findall(mol))
[(1, 2), (4, 5), (6, 7)]
```
The numbers returned are the indices (starting from 1) of the atoms that match the SMARTS pattern. In this case, there are three matches for each of the three ethyl groups in the molecule.
`findall`(*molecule*)[¶](#pybel.Smarts.findall)
Find all matches of the SMARTS pattern to a particular molecule.
Required parameters:
**molecule** - A [`Molecule`](#pybel.Molecule)
`pybel.``descs` *= ['LogP', 'MR', 'TPSA']*[¶](#pybel.descs)
A list of supported descriptors
`pybel.``forcefields` *= ['uff', 'mmff94', 'ghemical']*[¶](#pybel.forcefields)
A list of supported forcefields
`pybel.``fps` *= ['FP2', 'FP3', 'FP4']*[¶](#pybel.fps)
A list of supported fingerprint types
`pybel.``informats` *= {}*[¶](#pybel.informats)
A dictionary of supported input formats
`pybel.``operations` *= ['Gen3D']*[¶](#pybel.operations)
A list of supported operations
`pybel.``outformats` *= {}*[¶](#pybel.outformats)
A dictionary of supported output formats
`pybel.``readfile`(*format*, *filename*, *opt=None*)[¶](#pybel.readfile)
Iterate over the molecules in a file.
Required parameters:
**format** – chemical file format See the [`informats`](#pybel.informats) variable for a list of available input formats
**filename**
Optional parameters:
**opt** – a dictionary of format-specific options For format options with no parameters, specify the value as None.
You can access the first molecule in a file using the next() method of the iterator (Python 2) or the next() built-in function (Python 3):
```
mol = readfile("smi", "myfile.smi").next() # Python 2 mol = next(readfile("smi", "myfile.smi")) # Python 3
```
You can make a list of the molecules in a file using:
```
mols = list(readfile("smi", "myfile.smi"))
```
You can iterate over the molecules in a file as shown in the following code snippet:
```
>>> atomtotal = 0
>>> for mol in readfile("sdf", "head.sdf"):
... atomtotal += len(mol.atoms)
...
>>> print(atomtotal)
43
```
`pybel.``readstring`(*format*, *string*, *opt=None*)[¶](#pybel.readstring)
Read in a molecule from a string.
> Required parameters:
> **format** – chemical file format
> See the [`informats`](#pybel.informats) variable for a list
> of available input formats
> **string**
Optional parameters:
> **opt** – a dictionary of format-specific options
> For format options with no parameters, specify the
> value as None.
Example:
```
>>> input = "C1=CC=CS1"
>>> mymol = readstring("smi", input)
>>> len(mymol.atoms)
5
```
#### Examples[¶](#examples)
##### Output Molecular Weight for a Multi-Molecule SDF File[¶](#output-molecular-weight-for-a-multi-molecule-sdf-file)
Let’s say we want to print out the molecular weights of every molecule in an SD file. Why? Well, we might want to plot a histogram of the distribution, or see whether the average of the distribution is significantly different (in the statistical sense)
compared to another SD file.
openbabel.py
```
from openbabel import openbabel as ob
obconversion = ob.OBConversion()
obconversion.SetInFormat("sdf")
obmol = ob.OBMol()
notatend = obconversion.ReadFile(obmol,"../xsaa.sdf")
while notatend:
print(obmol.GetMolWt())
obmol = ob.OBMol()
notatend = obconversion.Read(obmol)
```
Pybel
```
from openbabel import pybel
for molecule in pybel.readfile("sdf","../xsaa.sdf"):
print(molecule.molwt)
```
##### Find information on all of the atoms and bonds connected to a particular atom[¶](#find-information-on-all-of-the-atoms-and-bonds-connected-to-a-particular-atom)
First of all, look at all of the classes in the [Open Babel API](index.html#api) that end with “Iter”. You should use these whenever you need to do something like iterate over all of the atoms or bonds connected to a particular atom, iterate over all the atoms in a molecule,
iterate over all of the residues in a protein, and so on.
As an example, let’s say we want to find information on all of the bond orders and atoms connected to a particular OBAtom called
‘obatom’. The idea is that we iterate over the neighbouring atoms using OBAtomAtomIter, and then find the bond between the neighbouring atom and ‘obatom’. Alternatively, we could have iterated over the bonds (OBAtomBondIter), but we would need to look at the indices of the two atoms at the ends of the bond to find out which is the neighbouring atom:
```
for neighbour_atom in ob.OBAtomAtomIter(obatom):
print(neighbour_atom.GetAtomicNum())
bond = obatom.GetBond(neighbour_atom)
print(bond.GetBondOrder())
```
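For completeness, here is a rough sketch of the alternative approach mentioned above, iterating over the bonds and using OBBond's GetNbrAtom() to recover the neighbouring atom:
```
for bond in ob.OBAtomBondIter(obatom):
    neighbour_atom = bond.GetNbrAtom(obatom)   # the atom at the other end of this bond
    print(neighbour_atom.GetAtomicNum(), bond.GetBondOrder())
```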
##### Examples from around the web[¶](#examples-from-around-the-web)
* Noel O’Blog -
[Hack that SD file](http://baoilleach.blogspot.com/2007/07/pybel-hack-that-sd-file.html),
Just How Unique are your Molecules
[Part I](http://baoilleach.blogspot.com/2007/07/pybel-just-how-unique-are-your.html)
and
[Part II](http://baoilleach.blogspot.com/2007/07/pybel-just-how-unique-are-your_12.html),
[Calculate circular fingerprints with Pybel](http://baoilleach.blogspot.com/2008/02/calculate-circular-fingerprints-with.html),
[Molecular Graph-ics with Pybel](http://baoilleach.blogspot.com/2008/10/molecular-graph-ics-with-pybel.html),
and
[Generating InChI’s Mini-Me, the InChIKey](http://baoilleach.blogspot.com/2008/10/generating-inchis-mini-me-inchikey.html).
* [Filter erroneous structures from the ZINC database](http://blur.compbio.ucsf.edu/pipermail/zinc-fans/2007-September/000293.html)
* Quantum Pharmaceuticals -
[Investigation of datasets for hERG binding](http://drugdiscoverywizzards.blogspot.com/2007/12/how-good-are-biological-experiments.html)
* cclib - Given the coordinates, charge, and multiplicity,
[how to create the corresponding OBMol](http://cclib.svn.sourceforge.net/viewvc/cclib/tags/cclib-0.8/src/cclib/bridge/cclib2openbabel.py?view=markup)
* <NAME> wrote an implementation of [Murcko fragments](http://flo.nigsch.com/?p=29) using Pybel
* <NAME>’s [Chemical Toolkit Rosetta](http://ctr.wikia.com/wiki/Chemistry_Toolkit_Rosetta_Wiki) contains several examples of Python code using openbabel.py and pybel
##### Split an SDF file using the molecule titles[¶](#split-an-sdf-file-using-the-molecule-titles)
The following was a request on the
[CCL.net](http://ccl.net/cgi-bin/ccl/message-new?2009+10+22+002)
list:
> Hi all, Does anyone have a script to split an SDFfile into single
> sdfs named after each after each individual molecule as specified
> in first line of parent multi file?
The solution is simple…
```
from openbabel import pybel

for mol in pybel.readfile("sdf", "bigmol.sdf"):
mol.write("sdf", "%s.sdf" % mol.title)
```
##### An implementation of RECAP[¶](#an-implementation-of-recap)
<NAME> (of [gNova](http://www.gnova.com/)) has written an implementation of the RECAP fragmentation algorithm in 130 lines of Python. The code is at [[1]](http://gist.github.com/95387).
TJ’s book,
“[Design and Use of Relational Databases in Chemistry](http://www.amazon.com/Design-Use-Relational-Databases-Chemistry/dp/1420064428/ref=sr_1_1?ie=UTF8&s=books&qid=1221754435&sr=1-1)”,
also contains examples of Python code using Open Babel to create and query molecular databases (see for example the link to Open Babel code in the [Appendix](http://www.gnova.com/book/)).
### Java[¶](#java)
The `openbabel.jar` file in the Open Babel distribution allows you to use the Open Babel C++ library from Java or any of the other JVM languages (Jython, JRuby, BeanShell, etc.).
#### Quickstart Example[¶](#quickstart-example)
Let’s begin by looking at an example program that uses Open Babel. The following program carries out file format conversion, iteration over atoms and SMARTS pattern matching:
```
import org.openbabel.*;
public class Test {
public static void main(String[] args) {
// Initialise
System.loadLibrary("openbabel_java");
// Read molecule from SMILES string
OBConversion conv = new OBConversion();
OBMol mol = new OBMol();
conv.SetInFormat("smi");
conv.ReadString(mol, "C(Cl)(=O)CCC(=O)Cl");
// Print out some general information
conv.SetOutFormat("can");
System.out.print("Canonical SMILES: " +
conv.WriteString(mol));
System.out.println("The molecular weight is "
+ mol.GetMolWt());
for(OBAtom atom : new OBMolAtomIter(mol))
System.out.println("Atom " + atom.GetIdx()
+ ": atomic number = " + atom.GetAtomicNum()
+ ", hybridisation = " + atom.GetHyb());
// What are the indices of the carbon atoms
// of the acid chloride groups?
OBSmartsPattern acidpattern = new OBSmartsPattern();
acidpattern.Init("C(=O)Cl");
acidpattern.Match(mol);
vectorvInt matches = acidpattern.GetUMapList();
System.out.println("There are " + matches.size() +
" acid chloride groups");
System.out.print("Their C atoms have indices: ");
for(int i=0; i<matches.size(); i++)
System.out.print(matches.get(i).get(0) + " ");
}
}
```
Output:
```
Canonical SMILES: ClC(=O)CCC(=O)Cl
The molecular weight is 154.9793599
Atom 1: atomic number = 6, hybridisation = 2
Atom 2: atomic number = 17, hybridisation = 0
Atom 3: atomic number = 8, hybridisation = 2
Atom 4: atomic number = 6, hybridisation = 3
Atom 5: atomic number = 6, hybridisation = 3
Atom 6: atomic number = 6, hybridisation = 2
Atom 7: atomic number = 8, hybridisation = 2
Atom 8: atomic number = 17, hybridisation = 0
There are 2 acid chloride groups
Their C atoms have indices: 1 6
```
#### Installation[¶](#installation)
##### Windows[¶](#windows)
`openbabel.jar` is installed along with the OpenBabelGUI on Windows, typically in `C:/Program Files (x86)/OpenBabel-2.3.2`. As an example of how to use `openbabel.jar`, download [OBTest.java](http://openbabel.svn.sf.net/viewvc/openbabel/openbabel/tags/openbabel-2-2-1/scripts/java/OBTest.java?revision=2910) and compile and run it as follows:
```
C:\> set CLASSPATH=C:\Program Files (x86)\OpenBabel-2.3.2\openbabel.jar;.
C:\> "C:\Program Files\Java\jdk1.5.0_16\bin\javac.exe" OBTest.java C:\> "C:\Program Files\Java\jdk1.5.0_16\bin\java.exe" OBTest Running OBTest...
Benzene has 6 atoms.
C:\>
```
##### MacOSX and Linux[¶](#macosx-and-linux)
The following instructions describe how to compile and use these bindings on MacOSX and Linux:
> 1. `openbabel.jar` is included in the Open Babel source distribution in `scripts/java`. To compile a Java application that uses this (e.g. the example program shown above), use a command similar to the following:
> ```
> javac Test.java -cp ../openbabel-2.3.2/scripts/java/openbabel.jar
> ```
> 2. To run the resulting `Test.class` on MacOSX or Linux you first need to compile the Java bindings as described in the section [Compile language bindings](index.html#compile-bindings). This creates `lib/libopenbabel_java.so` in the build directory.
> 3. Add the location of `openbabel.jar` to the environment variable CLASSPATH, not forgetting to append the location of `Test.class` (typically “.”):
> ```
> export CLASSPATH=/home/user/Tools/openbabel-2.3.2/scripts/java/openbabel.jar:.
> ```
> 4. Add the location of `libopenbabel_java.so` to the environment variable LD_LIBRARY_PATH. Additionally, if you have not installed Open Babel globally you should set BABEL_LIBDIR to the location of the Open Babel library and BABEL_DATADIR to the `data` directory.
> 5. Now, run the example application. The output should be as shown above.
#### API[¶](#api)
`openbabel.jar` provides direct access to the C++ Open Babel library from Java through the namespace **org.openbabel**. This binding is generated using the SWIG package and provides access to almost all of the Open Babel interfaces from Java, including the base classes [:obapi:`OBMol`](#id1), [:obapi:`OBAtom`](#id3), [:obapi:`OBBond`](#id5), and [:obapi:`OBResidue`](#id7), as well as the conversion framework [:obapi:`OBConversion`](#id9).
Essentially any call in the C++ API is available to Java programs with very little difference in syntax. As a result, the principal documentation is the [Open Babel C++ API documentation](index.html#api). A few differences exist, however:
* Global variables, global functions and constants in the C++ API can be found in **org.openbabel.openbabel_java**. The variables are accessible through get methods.
* When accessing various types of [:obapi:`OBGenericData`](#id11), you will need to cast them to the particular subclass using the global functions, *toPairData*, *toUnitCell*, etc.
* The Java versions of the iterator classes in the C++ API (that is, all those classes ending in *Iter*) implement the *Iterator* and *Iterable* interfaces. This means that the following *foreach* loop is possible:
```
for(OBAtom atom : new OBMolAtomIter(mol)) {
System.out.println(atom.GetAtomicNum());
}
```
* To facilitate use of the [:obapi:`OBMolAtomBFSIter`](#id13), *OBAtom* has been extended to incorporate a *CurrentDepth* value, accessible through a get method:
```
for(OBAtom atom : new OBMolAtomBFSIter(mol)) {
System.out.println(atom.GetCurrentDepth());
}
```
### Perl[¶](#perl)
#### Installation[¶](#installation)
The Perl bindings are available only on MacOSX and Linux. (We could not get them to work on Windows.) See [Compile language bindings](index.html#compile-bindings) for information on how to configure CMake to compile and install the Perl bindings.
#### Using Chemistry::OpenBabel[¶](#using-chemistry-openbabel)
The Chemistry::OpenBabel module is designed to allow Perl scripts to use the C++ Open Babel library. The bindings are generated using the SWIG package and provides access to almost all of the Open Babel interfaces via Perl, including the base classes OBMol,
OBAtom, OBBond, and OBResidue, as well as the conversion framework OBConversion.
As such, essentially any call in the C++ API is available to Perl access with very little difference in syntax. This guide is designed to give examples of common Perl syntax for Chemistry::OpenBabel and pointers to the appropriate sections of the [API documentation](index.html#api).
The example script below creates atoms and bonds one-by-one using the OBMol, OBAtom, and OBBond classes.
```
#!/usr/bin/perl
use Chemistry::OpenBabel;
my $obMol = new Chemistry::OpenBabel::OBMol;
$obMol->NewAtom();
$numAtoms = $obMol->NumAtoms(); # now 1 atom
my $atom1 = $obMol->GetAtom(1); # atoms indexed from 1
$atom1->SetVector(0.0, 1.0, 2.0);
$atom1->SetAtomicNum(6); # carbon atom
$obMol->NewAtom();
$obMol->AddBond(1, 2, 1); # bond between atoms 1 and 2 with bond order 1
$numBonds = $obMol->NumBonds(); # now 1 bond
$obMol->Clear();
```
More commonly, Open Babel can be used to read in molecules using the OBConversion framework. The following script reads in molecular information from a SMILES string, adds hydrogens, and writes out an MDL file as a string.
```
#!/usr/bin/perl
use Chemistry::OpenBabel;
my $obMol = new Chemistry::OpenBabel::OBMol;
my $obConversion = new Chemistry::OpenBabel::OBConversion;
$obConversion->SetInAndOutFormats("smi", "mdl");
$obConversion->ReadString($obMol, "C1=CC=CS1");
$numAtoms = $obMol->NumAtoms(); # now 5 atoms
$obMol->AddHydrogens();
$numAtoms = $obMol->NumAtoms(); # now 9 atoms
my $outMDL = $obConversion->WriteString($obMol);
```
The following script writes out a file using a filename, rather than reading and writing to a Perl string.
```
#!/usr/bin/perl
use Chemistry::OpenBabel;
my $obMol = new Chemistry::OpenBabel::OBMol;
my $obConversion = new Chemistry::OpenBabel::OBConversion;
$obConversion->SetInAndOutFormats("pdb", "mol2");
$obConversion->ReadFile($obMol, "1ABC.pdb");
$obMol->AddHydrogens();
print "# of atoms: $obMol->NumAtoms()";
print "# of bonds: $obMol->NumBonds()";
print "# of residues: $obMol->NumResidues()";
$obConversion->WriteFile($obMol, "1abc.mol2");
```
#### Examples[¶](#examples)
##### Output Molecular Weight for a Multi-Molecule SDF File[¶](#output-molecular-weight-for-a-multi-molecule-sdf-file)
Let’s say we want to print out the molecular weights of every molecule in an SD file. Why? Well, we might want to plot a histogram of the distribution, or see whether the average of the distribution is significantly different (in the statistical sense) compared to another SD file.
```
use Chemistry::OpenBabel;
my $obconversion = new Chemistry::OpenBabel::OBConversion;
$obconversion->SetInFormat("sdf");
my $obmol = new Chemistry::OpenBabel::OBMol;
my $notatend = $obconversion->ReadFile($obmol, "../xsaa.sdf");
while ($notatend) {
print $obmol->GetMolWt(), "\n";
$obmol->Clear();
$notatend = $obconversion->Read($obmol);
}
```
##### Add and Delete Atoms[¶](#add-and-delete-atoms)
This script shows an example of deleting and modifying atoms to transform one structure to a related one. It operates on a set of substituted thiophenes, deletes the sulfur atom (note that R1 and R2 may contain sulfur, so the SMARTS pattern is designed to constrain to the ring sulfur), etc. The result is a substituted ethylene, as indicated in the diagrams.
```
use Chemistry::OpenBabel;
my $obMol = new Chemistry::OpenBabel::OBMol;
my $obConversion = new Chemistry::OpenBabel::OBConversion;
my $filename = shift @ARGV;
$obConversion->SetInAndOutFormats("xyz", "mol");
$obConversion->ReadFile($obMol, $filename);
for (1..$obMol->NumAtoms()) {
$atom = $obMol->GetAtom($_);
# look to see if this atom is a thiophene sulfur atom
if ($atom->MatchesSMARTS("[#16D2]([#6D3H1])[#6D3H1]")) {
$sulfurIdx = $atom->GetIdx();
# see if this atom is one of the carbon atoms bonded to a thiophene sulfur
} elsif ($atom->MatchesSMARTS("[#6D3H1]([#16D2][#6D3H1])[#6]") ) {
if ($c2Idx == 0) { $c2Idx = $atom->GetIdx(); }
else {$c5Idx = $atom->GetIdx(); }
}
}
# Get the actual atom objects -- indexing will change as atoms are added and deleted!
$sulfurAtom = $obMol->GetAtom($sulfurIdx);
$c2Atom = $obMol->GetAtom($c2Idx);
$c5Atom = $obMol->GetAtom($c5Idx);
$obMol->DeleteAtom($sulfurAtom);
$obMol->DeleteHydrogens($c2Atom);
$obMol->DeleteHydrogens($c5Atom);
$c2Atom->SetAtomicNum(1);
$c5Atom->SetAtomicNum(1);
$obConversion->WriteFile($obMol, "$filename.mol");
```
### CSharp and OBDotNet[¶](#csharp-and-obdotnet)
**OBDotNet** is a compiled assembly that allows Open Babel to be used from the various .NET languages (e.g. Visual Basic, C#, IronPython, IronRuby, and J#) on Windows, Linux and MacOSX. The current version is OBDotNet 0.4.
#### Installation[¶](#installation)
##### Windows[¶](#windows)
The `OBDotNet.dll` assembly provided on Windows was compiled using the .NET framework v3.5 for the x86 platform. To use it, you will need to compile your code using .NET v3.5 or newer and you will also need to target x86 (`/platform:x86`).
The following instructions describe how to compile a simple C# program that uses OBDotNet:
> 1. First you need to download and install the **OpenBabelGUI version 2.3.2**
> 2. Next create an example CSharp program that uses the Open Babel API (see below for one or use [this link](http://openbabel.svn.sf.net/viewvc/openbabel/openbabel/tags/openbabel-2-2-1/scripts/csharp/example.cs?revision=2910)). Let’s call this `example.cs`.
> 3. Copy `OBDotNet.dll` from the Open Babel installation into the same folder as `example.cs`.
> 4. Open a command prompt at the location of `example.cs` and compile it as follows:
> ```
> C:\Work> C:\Windows\Microsoft.NET\Framework\v3.5\csc.exe
> /reference:OBDotNet.dll /platform:x86 example.cs
> ```
> 5. Run the created executable, **example.exe**, to discover the molecule weight of propane:
> ```
> C:\Work> example.exe
> 44.09562
> ```
If you prefer to use the MSVC# GUI, note that the Express edition does not have the option to choose x86 as a target. This will be a problem if you are using a 64-bit operating system. There’s some information at [Coffee Driven Development](http://coffeedrivendevelopment.blogspot.com/2008/06/hacking-vs-c-2008-express.html) on how to get around this.
##### MacOSX and Linux[¶](#macosx-and-linux)
On Linux and MacOSX you need to use Mono, the open source implementation of the .NET framework, to compile the bindings. The following instructions describe how to compile and use these bindings:
> 1. `OBDotNet.dll` is included in the Open Babel source distribution in `scripts/csharp`. To compile a CSharp application that uses this (e.g. the example program shown below), use a command similar to the following:
> ```
> gmcs example.cs /reference:../openbabel-2.3.2/scripts/csharp/OBDotNet.dll
> ```
> 2. To run this on MacOSX or Linux you need to compile the CSharp bindings as described in the section [Compile language bindings](index.html#compile-bindings). This creates `lib/libopenbabel_csharp.so` in the build directory.
> 3. Add the location of `OBDotNet.dll` to the environment variable MONO_PATH. Add the location of `libopenbabel_csharp.so` to the environment variable LD_LIBRARY_PATH. Additionally, if you have not installed Open Babel globally you should set BABEL_LIBDIR to the location of the Open Babel library and BABEL_DATADIR to the `data` directory.
> 4. Run `example.exe`:
> ```
> $ ./example.exe
> 44.09562
> ```
#### OBDotNet API[¶](#obdotnet-api)
The API is almost identical to the Open Babel [C++ API](index.html#api). Differences are described here.
Using iterators
In OBDotNet, iterators are provided as methods of the relevant class. The full list is as follows:
* **OBMol** has `.Atoms()`, `.Bonds()`, `.Residues()`, and `.Fragments()`. These correspond to [:obapi:`OBMolAtomIter`](#id1), [:obapi:`OBMolBondIter`](#id3), [:obapi:`OBResidueIter`](#id5) and [:obapi:`OBMolAtomDFSIter`](#id7) respectively.
* **OBAtom** has `.Bonds()` and `.Neighbours()`. These correspond to [:obapi:`OBAtomBondIter`](#id9) and [:obapi:`OBAtomAtomIter`](#id11) respectively.
Such iterators are used as follows:
```
foreach (OBAtom atom in myobmol.Atoms())
System.Console.WriteLine(atom.GetAtomType());
```
Other iterators in the C++ API not listed above can still be used through their IEnumerator methods.
Handling OBGenericData
To cast [:obapi:`OBGenericData`](#id13) to a specific subclass, you should use the `.Downcast <T>` method, where `T` is a subclass of **OBGenericData**.
Open Babel Constants
Open Babel constants are available in the class `openbabelcsharp`.
#### Examples[¶](#examples)
The following sections show how the same example application would be programmed in C#, Visual Basic and IronPython. The programs print out the molecular weight of propane (represented by the SMILES string “CCC”).
C#
```
using System;
using OpenBabel;
namespace MyConsoleApplication
{
class Program
{
static void Main(string[] args)
{
OBConversion obconv = new OBConversion();
obconv.SetInFormat("smi");
OBMol mol = new OBMol();
obconv.ReadString(mol, "CCC");
System.Console.WriteLine(mol.GetMolWt());
}
}
}
```
Visual Basic
```
Imports OpenBabel
Module Module1
Sub Main()
Dim OBConv As New OBConversion()
Dim Mol As New OBMol()
OBConv.SetInFormat("smi")
OBConv.ReadString(Mol, "CCC")
System.Console.Write("The molecular weight of propane is " & Mol.GetMolWt())
End Sub
End Module
```
IronPython
```
import clr
clr.AddReference("OBDotNet.dll")
import OpenBabel as ob
conv = ob.OBConversion()
conv.SetInFormat("smi")
mol = ob.OBMol()
conv.ReadString(mol, "CCC")
print(mol.GetMolWt())
```
### Ruby[¶](#ruby)
As with the other language bindings, just follow the instructions at [Compile language bindings](index.html#compile-bindings) to build the Ruby bindings.
Like any Ruby module, the Open Babel bindings can be used from a Ruby script or interactively using **irb** as follows:
```
$ irb
irb(main):001:0> require 'openbabel'
=> true
irb(main):002:0> c=OpenBabel::OBConversion.new
=> #<OpenBabel::OBConversion:0x2acedbadd020>
irb(main):003:0> c.set_in_format 'smi'
=> true
irb(main):004:0> benzene=OpenBabel::OBMol.new
=> #<OpenBabel::OBMol:0x2acedbacfa10>
irb(main):005:0> c.read_string benzene, 'c1ccccc1'
=> true
irb(main):006:0> benzene.num_atoms
=> 6
```
### Updating to Open Babel 3.0 from 2.x[¶](#updating-to-open-babel-3-0-from-2-x)
Open Babel 3.0 breaks the API in a number of cases, and introduces some new behavior behind-the-scenes. These changes were necessary to fix some long standing issues impacting chemical accuracy as well as performance.
Here we describe the main changes, and how to change existing code to adapt.
#### Removal of babel[¶](#removal-of-babel)
The `babel` executable has been removed, and `obabel` should be used instead. Essentially **obabel** is a modern version of **babel** with additional capabilities and a more standard interface. Typically the only change needed is to place `-O` before the output filename:
```
$ babel -ismi tmp.smi -omol out.mol
$ obabel -ismi tmp.smi -omol -O out.mol
```
Specifically, the differences are as follows:
* **obabel** requires that the output file be specified with a `-O` option. This is closer to the normal Unix convention for commandline programs, and prevents users accidentally overwriting the input file.
* **obabel** is more flexible when the user needs to specify parameter values on options. For instance, the `--unique` option can be used with or without a parameter (specifying the criteria used). With **babel**, this only works when the option is the last on the line; with **obabel**, no such restriction applies. Because of the original design of **babel**, it is not possible to add this capability in a backwards-compatible way.
* **obabel** has a shortcut for entering SMILES strings. Precede the SMILES by -: and use in place of an input file. The SMILES string should be enclosed in quotation marks. For example:
```
obabel -:"O=C(O)c1ccccc1OC(=O)C" -ocan
```
More than one can be used, and a molecule title can be included if enclosed in quotes:
```
obabel -:"O=C(O)c1ccccc1OC(=O)C aspirin" -:"Oc1ccccc1C(=O)O salicylic acid"
-ofpt
```
* **obabel** cannot use concatenated single-character options.
#### Python module[¶](#python-module)
In OB 3.x, both `openbabel.py` and `pybel.py` live within the `openbabel` module:
```
# OB 2.x
import openbabel as ob
import pybel

# OB 3.0
from openbabel import openbabel as ob
from openbabel import pybel
```
While more verbose, the new arrangement is in line with standard practice and helps avoid conflict with a different Python project, PyBEL.
#### Handling of elements and related information[¶](#handling-of-elements-and-related-information)
The API for interconverting atomic numbers and element symbols has been replaced for performance reasons. The `OBElementTable` class has been removed and its associated functions are now available through the `OBElements` namespace:
```
// OB 2.x
OBElementTable etab;
const char *elem = etab.GetSymbol(6);
unsigned int atomic_num = etab.GetAtomicNum(elem);

// OB 3.0
#include <openbabel/elements.h>
const char *elem = OBElements::GetSymbol(6);
unsigned int atomic_num = OBElements::GetAtomicNum(elem);
```
Furthermore, the OBAtom API convenience functions for testing for particular elements (e.g. `IsHydrogen()`) have been removed. Instead, `OBAtom::GetAtomicNum()` should be used along with an element constant or atomic number:
```
// OB 2.x
if (atom->IsCarbon()) {...

// OB 3.0
if (atom->GetAtomicNum() == OBElements::Carbon) {...
// or
if (atom->GetAtomicNum() == 6) {...
```
Handling of isotope information no longer uses `OBIsotopeTable` but is instead accessed through the `OBElements` namespace:
```
// OB 2.x
OBIsotopeTable isotab;
isotab.GetExactMass(6, 14);

// OB 3.0
double exact = OBElements::GetExactMass(OBElements::Carbon, 14);
```
#### Atom classes[¶](#atom-classes)
In OB 2.x, atom class information was stored as part of an `OBAtomClassData` object attached to an `OBMol` and accessed via `OBMol.GetData("Atom Class")`. In OB 3.0, atom class information is instead stored as an `OBPairInteger` associated with an `OBAtom` and accessed via `OBAtom.GetData("Atom Class")`.
#### OBAtom valence and degree methods[¶](#obatom-valence-and-degree-methods)
OB 2.x referred to the function that returned the explicit degree of an atom as `GetValence()`. This was confusing, at best. To find the explicit valence, the `BOSum()` method was required. OB 3.0 avoids this confusion by renaming methods associated with degree or valence:
* `OBAtom::GetExplicitValence()` (OB 2.x `BOSum()`)
* `OBAtom::GetExplicitDegree()` (OB 2.x `GetValence()`)
* `OBAtom::GetHvyDegree()` (OB 2.x `GetHvyValence()`)
* `OBAtom::GetHeteroDegree()` (OB 2.x `GetHeteroValence()`)
#### Molecule, atom and bond flags[¶](#molecule-atom-and-bond-flags)
The “Unset” methods for molecule, atom and bond flags have been removed. Instead, a value of `false` should be passed to the corresponding “Set” method. For example, `OBMol::UnsetAromaticPerceived()` in OB 2.x is now `OBMol::SetAromaticPerceived(false)`.
#### Removal of deprecated methods[¶](#removal-of-deprecated-methods)
Several deprecated methods have been removed. For the most part, an equivalent function with a different name is present in the API:
* `OBBond::GetBO()`/`SetBO()` removed. `OBBond::GetBondOrder()`/`SetBondOrder()` should be used instead.
* `OBAtom::GetCIdx()` removed. `OBAtom::GetCoordinateIdx()` should be used instead.
* `OBBitVec::Empty()` removed. `OBBitVec::IsEmpty()` should be used instead.
* `OBBitVec::BitIsOn()` removed. `OBBitVec::BitIsSet()` should be used instead.
#### Handling of implicit hydrogens[¶](#handling-of-implicit-hydrogens)
With OB 3.0, the number of implicit hydrogens is stored as a property of the atom. This value can be interrogated and set independently of any other property of the atom. This is how other mature cheminformatics toolkits handle implicit hydrogens. In contrast, in OB 2.x this was a derived property worked out from valence rules and some additional flags set on an atom to indicate non-standard valency.
From the point of view of the user, the advantage of the 2.x approach was that the user never needed to consider the implicit hydrogens; their count was calculated based on the explicit atoms (a behavior known as “floating valence”). The disadvantage was that it was difficult for the user to specify non-standard valencies, may have papered-over problems with the data, gave rise to subtle bugs which were not easily addressed and had poorer performance.
As an example of how the behavior has changed, let’s look at creating a bond. If we read the SMILES string `C.C`, create a bond between the two atoms and write out the SMILES string, we get different answers for OB 2.x (`CC`) versus OB 3.0 (`[CH4][CH4]`). OB 2.x just works out the count based on standard valence rules. With OB 3.0, there were four implicit hydrogens on each carbon before we made the bond, and there still are four - they didn’t go anywhere and weren’t automatically adjusted.
While this may seem like a major change, adapting code to handle the change should be straightforward: adding or removing a bond should be accompanied by incrementing or decrementing the implicit hydrogen count by the bond order. This also applies to deleting an atom, since this deletes any bonds connected to it. Note that care should be taken not to set the hydrogen count to a negative value when decrementing.
```
// Create a single bond between atoms 1 and 2
unsigned int bondorder = 1;
mol->AddBond(1, 2, bondorder);
// Decrement the implicit hydrogen count at each end of the new bond
// by the bond order, taking care not to go below zero
OBAtom* start = mol->GetAtom(1);
unsigned int hcount = start->GetImplicitHCount();
start->SetImplicitHCount(bondorder >= hcount ? 0 : hcount - bondorder);
OBAtom* end = mol->GetAtom(2);
hcount = end->GetImplicitHCount();
end->SetImplicitHCount(bondorder >= hcount ? 0 : hcount - bondorder);
```
For the particular case of creating a new atom, it is worth noting that the implicit hydrogen count defaults to zero and that users must set it themselves if necessary. To help with this situation a convenience function has been added to OBAtom that sets the implicit hydrogen count to be consistent with normal valence rules. TODO
Regarding specific API functions, the following have been removed:
* `OBAtom::SetImplicitValence()`, `GetImplicitValence()`
* `OBAtom::IncrementImplicitValence()`, `DecrementImplicitValence()`
* `OBAtom::ForceNoH()`, `HasNoHForce()`, `ForceImplH()`, `HasImplHForced()`
* `OBAtom::ImplicitHydrogenCount()`
The following have been added:
* `OBAtom::SetImplicitHCount()`, `GetImplicitHCount()`
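A minimal Python sketch of the new accessors (the ethane example and the exact output SMILES are illustrative):

```
from openbabel import pybel

mol = pybel.readstring("smi", "CC")
atom = mol.OBMol.GetAtom(1)
print(atom.GetImplicitHCount())   # 3 for a methyl carbon of ethane

# Setting a non-standard count is now trivial; the atom is then written
# with an explicit hydrogen count, e.g. [CH2]C
atom.SetImplicitHCount(2)
print(mol.write("smi"))
```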
#### Handling of aromaticity[¶](#handling-of-aromaticity)
Molecule modification no longer clears the aromaticity perception flag. If the user wishes to force reperception after modification, then they should call `OBMol::SetAromaticPerceived(false)`.
Cheminformatics 101[¶](#cheminformatics-101)
---
**An introduction to the computer science and chemistry of chemical information systems**
**Copyright © 2009 by <NAME>, *eMolecules, Inc.***
The original version of this introduction to cheminformatics can be found on the [eMolecules website](http://www.emolecules.com/doc/cheminformatics-101.php). It is included here with the permission of the author.
### Cheminformatics Basics[¶](#cheminformatics-basics)
#### What is Cheminformatics?[¶](#what-is-cheminformatics)
*Cheminformatics* is a cross between Computer Science and Chemistry – the process of storing and retrieving information about chemical compounds.
*Information Systems* are concerned with storing, retrieving, and searching information, and with storing *relationships* between bits of data. For example:
| Operation | Classical Information System | | Chemical Information System | |
| --- | --- | --- | --- | --- |
| Store | Name = ‘<NAME>’ | Stores text, numbers, dates, … | *(structure)* | Stores chemical compounds and information about them |
| Retrieve | Find record #13282 | Retrieves ‘<NAME>’ | Find CC(=O)C4CC3C2CC(C)C1=C(C)…C(=O)CC(O)C1C2CCC3(C)C4 | Retrieves: *(structure)* |
| Search | Find Presidents named ‘Bush’ | George Bush and George W. Bush | Find molecules containing *(substructure)* | Retrieves: *(matching structures)* |
| Relationship | Year Carter was elected | Answer: Elected in 1976 | What’s the logP(o/w) of *(structure)*? | Answer: logP(o/w) = 2.62 |
#### How is Cheminformatics Different?[¶](#how-is-cheminformatics-different)
There are four key problems a cheminformatics system solves:
1. **Store a Molecule**
Computer scientists usually use the *valence model* of chemistry to represent compounds. The next section,
[Representing Molecules](index.html#representing-molecules),
discusses this at length.
2. **Find exact molecule**
If you ask, “Is <NAME> in the database?” it’s not hard to find the answer. But, given a specific molecule, is it in the database? What do we know about it? This may seem simple at first glance, but it’s not, as we’ll see when we discuss tautomers,
stereochemistry, metals, and other “flaws” in the valence model of chemistry.
3. **Substructure search**
If you ask, “Is anyone named Lincoln in the database?” you usually expect to find the former President and a number of others - this is called a *search* rather than a *lookup*. For a chemical informatics system, we have a *substructure search*: Find all molecules containing a partial molecule (the “substructure”) drawn by the user. The substructure is usually a functional group,
“scaffold”, or core structure representing a class of molecules.
This too is a hard problem, *much* harder than most text searches,
for reasons that go to the very root of mathematics and the theory of computability.
4. **Similarity search**
Some databases can find similar-sounding or misspelled words, such as “Find Lincon” or “find Cincinati”, which respectively might find <NAME> and Cincinnati. Many chemical information systems can find molecules similar to a given molecule, ranked by similarity. There are several ways to measure molecular similarity, discussed further in the section on [Molecular Similarity](index.html#molecular-similarity).
### Representing Molecules[¶](#representing-molecules)
#### What is a Molecule?[¶](#what-is-a-molecule)
One of the greatest achievements in chemistry was the development of the *valence model* of chemistry, where a molecule is represented as *atoms* joined by semi-rigid *bonds* that can be single, double, or triple. This simple mental model has little resemblance to the underlying quantum-mechanical reality of electrons, protons and neutrons, yet it has proved to be a remarkably useful approximation of how atoms behave in close proximity to one another, and has been the foundation of chemical instruction for well over a century.
The valence model is also the foundation of modern chemical information systems. When a Computer Scientist approaches a problem, the first task is to figure out a *datamodel* that represents the problem to be solved as *information*. To the Computer Scientist, the valence model naturally transforms into a
*graph*, where the *nodes* are atoms and the *edges* are bonds.
Computer Scientists know how to manipulate graphs - mathematical graph theory and computer science have been closely allied since the invention of the digital computer.
> *There are atoms and space. Everything else is opinion.*
> —Democritus
However, the valence model of chemistry has many shortcomings. The most obvious is aromaticity, which quickly required adding the concept of a non-integral, distributed “aromatic” bond to the single/double/triple bonds of the simple valence model. And that was just the start - tautomers, ferrocenes, charged molecules and a host of other common molecules simply don’t fit the valence model well.
These shortcomings complicate life for the computer scientist. As we shall see,
they are the source of most of the complexity of modern cheminformatics systems.
#### Older systems: Connection Tables[¶](#older-systems-connection-tables)
Most of the early (and some modern) representations of molecules were in a *connection table*, literally, a table enumerating the atoms, and a table enumerating the bonds and which atoms each bond connected. Here is an example of connection-table (CTAB) portion of an MDL “SD” file (the data portion is not shown here):
```
MOLCONV
3 2 0 0 1 0 1 V2000
5.9800 -0.0000 -0.0000 Br 0 0 0 0 0 0
4.4000 -0.6600 0.8300 C 0 0 0 0 0 0
3.5400 -1.3500 -0.1900 C 0 0 0 0 0 0
1 2 1 0
2 3 1 0
```
This simple example illustrates most of the key features. The molecule has three atoms, two bonds, and is provided with three-dimensional (x,y,z) coordinates. MDL provides
[extensive documentation](http://www.mdli.com/downloads/downloadable/index.jsp)
for their various CTFile formats if you are interested in the details.
Connection tables can capture the valence model of chemistry fairly well, but they suffer from three problems:
1. They are very inefficient, taking on the order of a dozen or two bytes of data per atom and per bond. Newer line notations
(discussed below) represent a molecule with an average of 1.2 to 1.5 bytes per atom, or 6-8 bytes per atom if coordinates are added.
2. Many suffered from lack of specificity. For example, since hydrogens are often not specified, there can be ambiguity as to the electronic state of some molecules, because the connection-table format does not explicitly state the valence assumptions.
3. Most mix the concept of *connectivity* (what the atoms are and how they are connected) with other data such as 2D and 3D coordinates.
For example, if you had two different conformers of a molecule,
most connection tables would require you to specify the entire molecule twice, even though the connection table is identical in both.
#### Line Notations: InChI, SMILES, WLN and others[¶](#line-notations-inchi-smiles-wln-and-others)
A *line notation* represents a molecule as a single-line string of characters.
> **WLN - Wisswesser Line Notation**
> WLN, invented by <NAME> in the early 1950’s, was the
> first comprehensive line notation, capable of representing
> arbitrarily complex molecules correctly and compactly.
> ```
> 1H = CH4 Methane
> 2H = CH3-CH3 Ethane
> 3H = CH3-CH2-CH3 Propane
> QVR BG CG DG EG FG = C7HCl5O2 Pentachlorbenzoate
> ```
> WLN was the first line notation to feature a *canonical form*, that
> is, the rules for WLN meant there was only one “correct” WLN for
> any particular molecule. Those versed in WLN were able to write
> molecular structure in a line format, communicate molecular
> structure to one another and to computer programs. Unfortunately,
> WLN’s complexity prevented widespread adoption. The rules for
> correct specification of WLN filled a small book, encoding those
> rules into a computer proved difficult, and the rules for the
> [canonicalization](#canonicalization) were computationally
> intractable.
> **SMILES - Simplified Molecular Input Line Entry System**
> The best-known line notation today is SMILES. It was created by Arthur and
> <NAME> in response to a need for a simpler, more “human
> accessible” notation than WLN. While SMILES is not trivial to learn
> and write, most chemists can create correct SMILES with just a few
> minutes training, and the entire SMILES language can be learned in
> an hour or two. You can
> [read more details here](http://www.opensmiles.org/spec/open-smiles.html).
> Here are some examples:
> ```
> C methane
> CC ethane
> C=C ethene
> Oc1ccccc1 phenol
> ```
> SMILES, like WLN, has a *canonical form*, but unlike WLN, Weininger
> relied on the computer, rather than the chemist, to convert a
> non-canonical SMILES to a canonical SMILES. This important
> separation of duties was key to making SMILES easy to enter. (Read
> more about canonicalization below.)
> **InChI**
> InChI is the latest and most modern of the line notations. It
> resolves many of the chemical ambiguities not addressed by SMILES,
> particularly with respect to stereo centers, tautomers and other of
> the “valence model problems” mentioned
> [above](#what-is-a-molecule).
> You can read more about InChI at the
> [Official Web Site](http://www.iupac.org/projects/2000/2000-025-1-800.html),
> or on the
> [Unofficial InChI FAQ page](http://wwmm.ch.cam.ac.uk/inchifaq/index.html).
#### Canonicalization[¶](#id3)
A critical feature of line notations is *canonicalization* - the ability to choose one “blessed” representation from among the many.
Consider:
```
OCC    ethanol
CCO    ethanol
```
Both of these SMILES represent the same molecule. If we could all agree that one of these was the “correct” or “canonical” SMILES for ethanol, then we would *always store it the same way* in our database. More importantly, if we want to ask, “Is ethanol in our database” we know that it will only be there once, and that we can generate the canonical SMILES for ethanol and look it up.
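A minimal sketch using Open Babel's canonical SMILES writer (the `can` output format):

```
from openbabel import pybel

# Two different SMILES for ethanol give the same canonical SMILES
for smi in ("OCC", "CCO"):
    print(pybel.readstring("smi", smi).write("can").strip())
```

Both lines of output are identical, so the canonical string can be used directly as a database key.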
(Note that in theory one can create a canonical connection table,
too, but it’s not as useful since informatics systems usually have trouble indexing BLOBs - large objects.)
#### Line Notation versus Connection Tables: A practical matter[¶](#line-notation-versus-connection-tables-a-practical-matter)
Why are line notations preferred over connection-table formats? In theory, either could express the same information. But there are practical differences, mostly related to the complexity of “parsing”
a connection table. If you know that the whole molecule is on one line of a file, it’s easy to parse.
Line notations are also very nice for database applications.
Relational databases have datatypes that, roughly speaking, are divided into numbers, text, and “everything else”, also known as
“BLOBs” (Binary Large OBjects). You can store line notations in the
“text” fields much more easily than connection tables.
Line notations also have pragmatic advantages. Modern Unix-like systems (such as UNIX, Linux and Cygwin) have a number of very powerful “filter” text-processing programs that can be “piped”
together (connected end-to-end) to perform important tasks. For example, to count the number of molecules containing aliphatic nitrogen in a SMILES file, I can simply:
```
grep N file.smi | wc
```
(**grep** looks for a particular expression, in this case `N`, and prints any line that contains it, and **wc** (“word count”) counts the number of words and lines.)
This is just a simple example of the power available via “script”
programs using “filters” on Unix-like systems. Unix filters are much less useful for connection-table formats, because each molecule is spread over many lines.
#### Query Languages: SMARTS[¶](#query-languages-smarts)
In addition to a typographical way to represent molecules, we also need a way to enter *queries* about molecules, such as, “Find all molecules that contain a phenol.”
With text, we’re familiar with the concept of typing a partial word, such as “ford” to find “<NAME>” as well as “<NAME>”. For chemistry, we can also specify partial structures,
and find anything that contains them. For example:
| Query | Database | Matches? |
| --- | --- | --- |
| *(substructure query)* | *(molecule)* | **YES** (matched portion highlighted in blue) |
| *(substructure query)* | *(molecule)* | **NO** (double bond indicated doesn’t match) |
The simplest query language for chemistry is SMILES itself: Just specify a structure, such as `Oc1ccccc1`, and search. This is how eMolecules’ basic searching works. It’s simple and, because of the high-performance indexes in eMolecules, is also very fast.
However, for general-purpose cheminformatics, one needs more power.
What if the substructure you’re looking for isn’t a valid molecule?
For example `ClccBr` (1,2-substitution on an aromatic ring) isn’t a whole molecule, since the concept of aromaticity is only sensible in the context of a whole ring system.
Or what if the thing we’re looking for isn’t a simple atom such as Br, but rather a concept like “Halogen”? Or, “A terminal methyl”?
To address this, cheminformatics systems have special
*query languages*, such as SMARTS (SMiles ARbitrary Target Specification). SMARTS is a close cousin to SMILES, but it has
*expressions* instead of simple atoms and bonds. For example, `[C,N]`
will find an atom that is either carbon or nitrogen.
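As a sketch of running such a query with Open Babel's SMARTS matcher from Python (the paracetamol example is just illustrative):

```
from openbabel import pybel

mol = pybel.readstring("smi", "CC(=O)Nc1ccc(O)cc1")   # paracetamol
pattern = pybel.Smarts("[C,N]")                        # any carbon or nitrogen
print(pattern.findall(mol))                            # tuples of matching atom indices
```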
#### IUPAC Names, Trade Names, Common Names[¶](#iupac-names-trade-names-common-names)
Chemistry also has three other important name systems:
**IUPAC Names**
[IUPAC](http://www.iupac.org/dhtml_home.html)
(the International Union of Pure and Applied Chemistry) established a
[naming convention](http://www.chem.qmul.ac.uk/iupac/) that is widely used throughout chemistry. Any chemical can be named, and all IUPAC names are unambiguous. This textual representation is aimed at humans, not computers: Chemists versed in IUPAC nomenclature (which is widely taught) can read an IUPAC name and visualize or draw the molecule.
**Trade Names**
Names such as Tylenol™ and Valium™ are given to compounds and formulations by manufacturers for marketing and sales purposes,
and for regulatory purposes.
**Common names**
Names such as “aspirin” or “alcohol” for substances that are in widespread use.
### Substructure Searching with Indexes[¶](#substructure-searching-with-indexes)
#### What is Indexing?[¶](#what-is-indexing)
Indexing is pre-computing the answers to portions of expected questions *before* they’re asked, so that when the question comes,
it can be answered quickly.
Take your favorite search engine (AOL, Yahoo!, Google, MSN, …)
for example. Without indexing, they might wait until you ask for
“<NAME>”, then start searching the web, and in a year or two find all the web pages about the deceased banjo/fiddle player and steamboat captain. That would probably not impress you.
Instead, these search engines search the web *before* you ask your question, and build an *index* of the words they find. When you type in “Bluegrass <NAME>”, they already know all of the pages that have “John”, all of the pages with “Hartford”, and all of the pages with “Bluegrass”. Instead of searching, they examine their index, and find pages that are on *all three* lists, and quickly find your results. (NB: It’s actually a lot more complex,
but this illustrates the main idea of indexing.)
#### Indexes for Chemicals[¶](#indexes-for-chemicals)
Instead of indexing words, cheminformatics systems index
*substructures*. Although there are many schemes for doing this,
cheminformatics systems all use the same fundamental principle:
they *decompose the molecule* into smaller bits, and index those.
Roughly speaking, a cheminformatics system will index each of the substructures (fragments) produced by this decomposition, so that every molecule that contains each fragment is known.
When a query is entered, the cheminformatics system breaks apart the query using the same technique, to find all of the fragments in the query. It then checks its index for each fragment, and combines the lists it finds to get only those molecules that have *all* of those fragments.
This doesn’t mean that all molecules returned by the index actually are matches. In the language of databases, we say the index will return *false positives*, candidate molecules that don’t actually match the substructure search.
Consider our example of searching for “<NAME>” - the index might return many pages that have both “John” and “Hartford”, yet have nothing to do with bluegrass music or steamboats. For example,
it might return a page containing, “President <NAME> visited Hartford, Connecticut today…”. To confirm that the search system has found something relevant, it must check the pages return from the index to ensure that the specific phrase “<NAME>”
is present. However, notice that this is *much* faster than searching every page, since the overwhelming majority of web pages were instantly rejected because they have neither “John” nor
“Hartford” on them.
Similarly, a chemical fragment index serves to find only the most
*likely* molecules for our substructure match - anything that the index didn’t find is definitely not a match. But we still have to examine each of the molecules returned by the indexing system and verify that the complete substructure for which we are searching is present.
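The sketch below illustrates this screen-then-verify principle, using Open Babel's path-based FP2 fingerprints as a crude stand-in for a database index (a real system would build its index up front rather than computing fingerprints on the fly; the molecules are illustrative):

```
from openbabel import pybel

query = pybel.readstring("smi", "Oc1ccccc1")                    # phenol substructure
targets = [pybel.readstring("smi", s)
           for s in ("CC(=O)Oc1ccccc1C(=O)O", "CCO")]           # aspirin, ethanol

querybits = set(query.calcfp("FP2").bits)
matcher = pybel.Smarts("Oc1ccccc1")

for target in targets:
    # Screening step: every fragment bit of the query must also be set in the target
    if querybits <= set(target.calcfp("FP2").bits):
        # Verification step: run the full (expensive) substructure match
        found = matcher.findall(target)
        print(target.write("smi").strip(), "match" if found else "false positive")
```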
#### NP-Complete - A Little about Computability[¶](#np-complete-a-little-about-computability)
Searching through a page of text for the words “<NAME>” is pretty easy for a modern computer. Although false positives returned by the index are a nuisance and impair performance, they are not a catastrophe. Not so for substructure matching.
Unfortunately, substructure matching falls into a category of
“hard” mathematical problems, which means false positives from the index are a big problem.
Substructure matching (finding a certain functional group within a molecule) is an example of what mathematicians call
[graph isomorphism](http://planetmath.org/?op=getobj&from=objects&id=1708)
(more precisely, subgraph isomorphism), and is in a class of problems called
[NP Complete](http://en.wikipedia.org/wiki/Np_complete).
Roughly speaking, this means the time it takes to do a substructure search is non-polynomial, i.e. exponential in the number of atoms and bonds. To see why this is a computational disaster, compare two tasks, one that takes polynomial time, k1·N^2, versus one that takes exponential time k2·2^N. Our polynomial task is bad enough: If we double N, it takes *four times* as long to solve. But the exponential task is worse:
*Every time we add an atom it doubles*. So going from one atom to two doubles the time, and going from 100 atoms to 101 atoms doubles the time. Even if we can get k2 down to a millionth of k1, we’re still in trouble - a million is just 2^20, or twenty atoms away.
It has been mathematically proven that substructure searching is in the set of NP Complete problems, so there’s no point wasting our time searching for a polynomial algorithm. The good news is that most molecules have “low connectivity”, meaning most atoms have fewer than four bonds, unlike the weird and twisted graphs that mathematicians consider. In practice, most substructure matching can be done in polynomial time around N^2 or N^3. But even with this improvement, substructure matching is an “expensive” time-consuming task for a computer.
The key point is that indexing is particularly important for cheminformatics systems. The typical modern computer can only examine a few thousand molecules per second, so examining millions of molecules one-by-one is out of the question. The indexing done by a modern cheminformatics system is the key to its performance.
### Molecular Similarity[¶](#molecular-similarity)
Substructure searching is a very powerful technique, but sometimes it misses answers for seemingly trivial differences. We saw this earlier with the following:
| Query | Target |
| --- | --- |
| *(query structure)* | *(target structure)* |
| We’re looking for steroids… | But we don’t find this one because of the double bond |
It is somewhat like searching for “221b Baker Street” and finding nothing because the database contains “221B Baker Street” and the system doesn’t consider “b” and “B” a match.
A good similarity search would find the target structure shown above, because even though it is not a substructure match, it is highly similar to our query.
There are many ways to measure similarity.
**2D topology**
The best-known and most widely used similarity metrics compare the two-dimensional topology, that is, they only use the molecule’s atoms and bonds without considering its shape.
Tanimoto similarity is perhaps the best known as it is easy to implement and fast to compute. An excellent summary of 2D similarity metrics can be found in section 5.3 of the
[Daylight Theory Manual](http://www.daylight.com/dayhtml/doc/theory/theory.finger.html).
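As a minimal sketch, pybel Fingerprint objects return the Tanimoto coefficient when combined with the `|` operator (the example molecules are illustrative):

```
from openbabel import pybel

aspirin = pybel.readstring("smi", "CC(=O)Oc1ccccc1C(=O)O").calcfp()
salicylic = pybel.readstring("smi", "OC(=O)c1ccccc1O").calcfp()
ethanol = pybel.readstring("smi", "CCO").calcfp()

print(aspirin | salicylic)   # relatively high: the two molecules share most fragments
print(aspirin | ethanol)     # much lower for an unrelated molecule
```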
**3D configuration**
One of the most important uses of similarity is in the discovery of new drugs, and a molecule’s shape is critical to its medicinal value (see [QSAR](http://en.wikipedia.org/wiki/QSAR)).
3D similarity searches compare the configuration (also called the
“conformation”) of a molecule to other molecules. The “electronic surface” of the molecule is the important bit - the part that can interact with other molecules. 3D searches compare the surfaces of two molecules, and how polarized or polarizable each bit of the surface is.
3D similarity searches are uncommon, for two reasons: It’s difficult and it’s slow. The difficulty comes from the complexity of molecular interactions - a molecule is not a fixed shape, but rather a dynamic object that changes according to its environment.
And the slowness comes from the difficulty: To get better results,
scientists employ more and more complex programs.
**Physical Properties**
The above 2D and 3D similarity are based on the molecule’s structure. Another technique compares the properties - either computed or measured or both - and declares that molecules with many properties in common are likely to have similar structure. It is the idea of QSAR taken to the database.
**Clustering**
“Clustering” is the process of differentiating a set of things into groups where each group has common features. Molecules can be clustered using a variety of techniques, such as common 2D and/or 3D features.
Note that clustering is not a similarity metric *per se* (the topic of this section), but it may use various similarity metrics when computing clusters. It is included here because it can be used as a
“cheap substitute”. That is, when someone wants to find compounds similar to a known compound, you can show them the group (the cluster) to which the compound belongs. It allows you to pre-compute the clusters, spending lots of computational time up front, and then give answers very quickly.
Many cheminformatics databases have one or more similarity searches available.
### Chemical Registration Systems[¶](#chemical-registration-systems)
Chemical Registration is the “big brother” of cheminformatics.
A cheminformatics system is primarily devoted to recording chemical structure. Chemical Registration systems are additionally concerned with:
* Structural novelty - ensure that each compound is only registered once
* Structural normalization - ensure that structures with alternative representations (such as nitro groups, ferrocenes, and tautomers) are entered in a uniform way.
* Structure drawing - ensure that compounds are drawn in a uniform fashion, so that they can be quickly recognized “by eye”.
* Maintaining relationships among related compounds. For example,
all salt forms of a compound should be recognized as being related to one another, and compounds in different solvates are also related.
* Registering mixtures, formulations and alternative structures.
* Registering compounds the structure of which is unknown.
* Roles, responsibilities, security, and company workflow.
* Updates, amendments and corrections, and controlling propagation of changes (e.g. does changing a compound change a mixture containing that compound?)
The scope of Chemical Registration Systems is far beyond the goals of this brief introduction to cheminformatics. However, to illustrate just one of the points above, let’s consider structural novelty. In real life, chemical structure can be very ambiguous.
Imagine you have five bottles of a particular compound that has a stereo center:
1. The contents of the first bottle were carefully analyzed, and found to be a single stereoisomer.
2. The contents of the second bottle were carefully analyzed and found to contain a racemic mixture of the stereoisomers.
3. The stereoisomers of the third bottle are unknown. It may be pure, or have one predominant form, or be a racemic mixture.
4. The fourth bottle was obtained by running part of the contents of bottle #2 through a chromatographic separation. It is isomerically pure, but you don’t know which stereoisomer.
5. The fifth bottle is the other fraction from the same separation of #4. It is also isomerically pure, but you don’t know which stereoisomer, *but you know it’s the opposite of #4*.
Which of these five bottles contain the same compound, and which are different? That is the essential task of a chemical registry system, which would consider all five to be different. After all,
you probably have data about each bottle (that’s why you have them), and you must be able to record it and not confuse it with the other bottles.
In this example above, consider what is known and not known:
| Bottle | Known | Not Known |
| --- | --- | --- |
| 1 | Everything | Nothing |
| 2 | Everything | Nothing |
| 3 | Compound is known | Stereochemistry |
| 4 | Compound and purity known, stereochemistry is opposite of #5 | Specific stereochemistry |
| 5 | Compound and purity known, stereochemistry is opposite of #4 | Specific stereochemistry |
A cheminformatics system has no way to record the contents of the five bottles; it is only concerned with structure. By contrast, a chemical registration system can record both *what is known* as well as *what is not known*. This is the critical difference between the two.
Stereochemistry[¶](#stereochemistry)
---
Open Babel stores stereochemistry as the relative arrangement of a set of atoms in space. For example, for a tetrahedral stereocenter, we store information like “looking from atom 2, atoms 4, 5 and 6 are arranged clockwise around atom 3”. This section describes how a user can work with or manipulate this information. This might be useful to invert a particular center, replace a substituent at a stereocenter, enumerate stereoisomers or determine the number of unspecified stereocenters.
Although Open Babel has data structures to support a variety of forms of stereochemistry, currently little use is made of any stereochemistry other than tetrahedral and cis/trans (and square planar to a certain degree).
We will look first of all at how stereochemistry information is stored, accessed, and modified. Then we describe how this information is deduced from the chemical structure. This chapter should be read in combination with the API documentation (see the Stereochemistry overview page found under “Modules”).
### Accessing stereochemistry information[¶](#accessing-stereochemistry-information)
Each record of stereochemistry information around an atom or bond is stored as StereoData associated with the OBMol. First of all, let’s look at direct access to the StereoData. The following code counts the number of tetrahedral centers with specified stereochemistry, as well as the number of double bonds with specified cis/trans stereochemistry:
```
from openbabel import pybel
ob = pybel.ob

num_cistrans = 0
num_tetra = 0
mol = pybel.readstring("smi", "F/C=C/C[C@@H](Cl)Br")
m = mol.OBMol
for genericdata in m.GetAllData(ob.StereoData):
    stereodata = ob.toStereoBase(genericdata)
    stereotype = stereodata.GetType()
    if stereotype == ob.OBStereo.CisTrans:
        cistrans = ob.toCisTransStereo(stereodata)
        if cistrans.IsSpecified():
            num_cistrans += 1
    elif stereotype == ob.OBStereo.Tetrahedral:
        tetra = ob.toTetrahedralStereo(stereodata)
        if tetra.IsSpecified():
            num_tetra += 1
```
The code above is quite verbose, and requires iteration through all of the stereo data. To make it simpler to access stereo data for a particular atom or bond, a facade class OBStereoFacade can instead be used, which provides convenience functions for these operations:
```
num_cistrans = 0
num_tetra = 0
mol = pybel.readstring("smi", "F/C=C/C[C@@H](Cl)Br")
m = mol.OBMol
facade = ob.OBStereoFacade(m)
for atom in ob.OBMolAtomIter(m):
    mid = atom.GetId()
    if facade.HasTetrahedralStereo(mid):
        tetra = facade.GetTetrahedralStereo(mid)
        if tetra.IsSpecified():
            num_tetra += 1
for bond in ob.OBMolBondIter(m):
    mid = bond.GetId()
    if facade.HasCisTransStereo(mid):
        cistrans = facade.GetCisTransStereo(mid)
        if cistrans.IsSpecified():
            num_cistrans += 1
```
Note that every time you create a new OBStereoFacade, a certain amount of work is done building up the correspondence between atoms/bonds and stereo data. For this reason, a single OBStereoFacade should be created for a molecule and reused.
### The Config() object[¶](#the-config-object)
The description of the stereochemical configuration is accessed via a Config() object associated with each StereoData. The contents of this object will be different depending on the specific type of stereochemistry, e.g. `OBCisTransStereo::Config` (`OBCisTransConfig` from Python) records the begin and end Ids of the associated bond, the Ids of the attached atoms, the spatial relationship of those atoms, and whether stereo is specified.
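For example, here is a minimal sketch of inspecting the cis/trans Config for a double bond; it assumes, as in the tetrahedral example that follows, that the Config fields (begin, end, refs, specified) are exposed directly from Python:

```
from openbabel import pybel
ob = pybel.ob

mol = pybel.readstring("smi", "F/C=C/F")
m = mol.OBMol
facade = ob.OBStereoFacade(m)
for bond in ob.OBMolBondIter(m):
    if facade.HasCisTransStereo(bond.GetId()):
        config = facade.GetCisTransStereo(bond.GetId()).GetConfig()
        print("Double bond between atom Ids", config.begin, "and", config.end)
        print("Specified?", config.specified)
        # Implicit hydrogens on the double bond appear among the refs
        # as a special 'implicit reference' value
        print("Reference atom Ids:", config.refs)
```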
Let’s read the SMILES string `F[C@@](Cl)(Br)I` and access the stereo. When we read this SMILES string, the tetrahedral center will be the second atom, that with Idx 2:
```
smi = "F[C@@](Cl)(Br)I"
mol = pybel.readstring("smi", smi).OBMol
secondatom = mol.GetAtom(2)
atomid = secondatom.GetId()
stereofacade = ob.OBStereoFacade(mol)
print("Does this atom have tet stereo info?", stereofacade.HasTetrahedralStereo(atomid))
tetstereo = stereofacade.GetTetrahedralStereo(atomid)
config = tetstereo.GetConfig()
print("The stereocenter is at atom Id {}".format(config.center))
print("Is the configuration specified? {}".format("Yes" if config.specified else "No"))
print("Looking from atom Id {0}, the atoms Ids {1} are arranged clockwise".format(config.from_or_towards, config.refs))
```
Which prints:
```
Does this atom have tet stereo info? True
The stereocenter is at atom Id 1
Is the configuration specified? Yes
Looking from atom Id 0, the atoms Ids (2, 3, 4) are arranged clockwise
```
How do I know that I’m looking from atom Id 0, and that the atom Ids are arranged clockwise? From the documentation for `OBTetrahedralStereo::GetConfig`, which states that this is the default. You may be used to thinking “How are these atoms arranged looking from here?”. With GetConfig(), you are instead making the request “Give me the atoms in clockwise order looking from here”. It follows from this that you should never need to test the value of the winding, the direction, or the from/towards atom; you provide these, and their values will be whatever you provided. For example, you could instead ask for the anticlockwise arrangement of atoms looking *towards* the atom with Id 0:
```
configB = tetstereo.GetConfig(0, ob.OBStereo.AntiClockwise, ob.OBStereo.ViewTowards)
print("Looking towards atom Id {0}, the atoms Ids {1} are arranged anticlockwise".format(configB.from_or_towards, configB.refs))
```
Which prints:
```
Looking towards atom Id 0, the atoms Ids (2, 3, 4) are arranged anticlockwise
```
To check whether two Configs represent the same stereo configuration, use the equality operator:
```
assert config == configB
```
It should be noted that the Config objects returned by GetConfig() are *copies* of the stereo configuration. That is, modifying them has no effect on the stereochemistry of the molecule (see the next section). As a result, it is straightforward to keep a copy of the stereo configuration, modify the molecule, and then check whether the modification has altered the stereochemistry using the equality operator of the Config.
### Modifying the stereochemistry[¶](#modifying-the-stereochemistry)
We discuss below the interaction between 2D and 3D structural information and how stereochemistry is perceived. For now, let’s avoid these issues by using a 0D structure and modifying its stereochemistry:
```
from openbabel import pybel
ob = pybel.ob

mol = pybel.readstring("smi", "C[C@@H](Cl)F")
print(mol.write("smi", opt={"nonewline": True}))

# Invert the stereo
m = mol.OBMol
facade = ob.OBStereoFacade(m)
tetstereo = facade.GetTetrahedralStereo(m.GetAtom(2).GetId())
config = tetstereo.GetConfig()
config.winding = ob.OBStereo.AntiClockwise
tetstereo.SetConfig(config)
print(mol.write("smi", opt={"nonewline": True}))

config.specified = False
tetstereo.SetConfig(config)
print(mol.write("smi", opt={"nonewline": True}))
```
which prints:
```
C[C@@H](Cl)F
C[C@H](Cl)F
CC(Cl)F
```
How did I know that setting the relative arrangement to anti-clockwise would invert the stereo? Again, as described above, by default GetConfig() returns the atoms in clockwise order. Another way to invert the stereo would be to swap two of the refs, or to set the direction from ‘from’ to ‘towards’.
### Stereo perception[¶](#stereo-perception)
Until now we have not mentioned where this stereo information came from; we have read a SMILES string and somehow the resulting molecule has stereo data associated with it.
Stereo perception is the identification of stereo centers from the molecule and its associated data, which may include 3D coordinates, stereobonds and existing stereo data. Passing an OBMol to the global function `PerceiveStereo` triggers stereo perception, and sets a flag marking stereo as perceived (`OBMol::SetChiralityPerceived(true)`). If, in the first place, stereo was already marked as perceived then stereo perception is not performed. Any operations that require stereo information should call PerceiveStereo before accessing stereo information.
Behind the scenes, the code for stereo perception is quite different depending on the dimensionality (`OBMol::GetDimension()`) of the molecule.
3D structures
Perhaps the most straightforward is when the structure has 3D coordinates. In this case, a symmetry analysis identifies stereogenic centers whose stereoconfigurations are then perceived from the coordinates. Some file formats such as the MOL file allow atoms and double bonds to be marked as having unspecified stereochemistry, and this information is applied to the detected stereocenters. For the specific case of the MOL file, the flag in the atom block that marks this is ignored by default (as required by the specification) but an option (`s`) is provided to read it:
```
$ obabel -:"I/C=C/C[C@@](Br)(Cl)F" --gen3d -omol | obabel -imol -osmi I/C=C/C[C@@](Br)(Cl)F
$ obabel -:"IC=CCC(Br)(Cl)F" --gen3d -omol | obabel -imol -osmi IC=CC[C@@](Br)(Cl)F
$ obabel -:"IC=CCC(Br)(Cl)F" --gen3d -omol | obabel -imol -as -osmi IC=CCC(Br)(Cl)F
```
As just described, the flow of information is from the 3D coordinates to Open Babel’s internal record of stereo centers, and this flow is triggered by calling stereo perception (which does nothing if the stereo is marked as already perceived). It follows from this that altering the coordinates *after* stereo perception (e.g. by reflecting through an axis, thereby inverting chirality) has no effect on the internal stereo data. If operations are performed on the molecule that require stereo to be reperceived, then `OBMol::SetChiralityPerceived(false)` should be called.
It should also be clear from the discussion above that changing the stereo data (e.g. using SetConfig() to invert a tetrahedral stereocenter) has no effect on the molecule’s coordinates (though it may affect downstream processing, such as the information written to a SMILES string). If this is needed, the user will have to manipulate the coordinates themselves, or generate coordinates for the whole molecule using the associated library functions (e.g. the `--gen3d` operation).
2D structures
2D structures represent a depiction of a molecule, where stereochemistry is usually indicated by wedge or hash bonds. It is sometimes indicated by adopting particular conventions (e.g. the Fischer or Haworth projection of monosaccharides). It should be noted that Open Babel does not support any of these conventions, nor does it support the use of wedge or hash bonds for perspective drawing (e.g. where a thick bond is supported by two wedges). This may change in future, of course, but it’s worth noting that Open Babel is not the only toolkit with these limitations and so what you think you are storing in your database may not be what the ‘computer’ thinks it is.
Stereo centers are identified based on a symmetry analysis, and their configuration inferred either from the geometry (for cis/trans bonds) or from bonds marked as wedge/hash (tetrahedral centers). File format readers record information about which bonds were marked as wedges or hashes and this can be accessed with OBBond::IsWedge/IsHash, where the Begin atom of the bond is considered the origin of the wedge/hash. Similar to the situation with 3D perception, changing a bond from a wedge to a hash (or vice versa) has no effect on the stereo objects once stereo has been perceived, but triggering reperception will regenerate the desired stereo data.
It should also be noted that the file writers regenerate the wedges or hashes from the stereo data at the point of writing; in other words, the particular location of the wedge/hash (or even whether it is present) may change on writing. This was done to ensure that the written structure accurately represents Open Babel’s internal view of the molecule; passing wedges/hashes through unchanged may not represent this (consider the case where a wedge bond is attached to a tetrahedral center which cannot be a stereocenter).
0D structures
A SMILES string is sometimes referred to as describing a 0.5D structure, as it can describe the relative arrangement of atoms around stereocenters. The SMILES reader simply reads and records this information as stereo data, and then the molecule is marked as having stereo perceived (unless the `S` option is passed - see below).
Being able to skip the symmetry analysis associated with stereo perception means that SMILES strings can be read quickly - a useful feature if dealing with millions of molecules. However, if you wish to identify additional stereocenters whose stereo configuration is unspecified, or if the SMILES strings come from an untrusted source and stereo may have been incorrectly specified (e.g. on a tetrahedral center with two groups the same), then you may wish to trigger perception.
Without any additional information, stereo cannot be perceived from a structure that has neither 2D nor 3D coordinates. Triggering stereo perception on such a structure will generate stereo data if stereogenic centers are present, but their configuration will be marked as unspecified. However, where existing stereo data is present (e.g. after reading a SMILES string), that data will be retained if the stereocenter is identified by the perception routine as a true stereocenter. This can be illustrated using the `S` option to the SMILES reader, which tells it not to mark the stereo as perceived on reading; as a result, reperception will occur if triggered by a writer yielding different results in the case of an erroneously specified stereocenter:
```
$ obabel -:"F[C@@](F)(F)[C@@H](I)Br" -osmi F[C@@](F)(F)[C@@H](I)Br
$ obabel -:"F[C@@](F)(F)[C@@H](I)Br" -aS -osmi FC(F)(F)[C@@H](I)Br
```
### Miscellaneous stereo functions in the API[¶](#miscellaneous-stereo-functions-in-the-api)
* `OBAtom::IsChiral` - this is a convenience function that checks whether there is any tetrahedral stereo data associated with a particular atom. OBStereoFacade should be used in preference to this.
Handling of aromaticity[¶](#handling-of-aromaticity)
---
The purpose of this section is to give an overview of how Open Babel handles aromaticity. Given that atoms can be aromatic, bonds can be aromatic, and that molecules have a flag for aromaticity perceived, it’s important to understand how these all work together.
### How is aromaticity information stored?[¶](#how-is-aromaticity-information-stored)
Like many other toolkits, Open Babel stores aromaticity information separate from bond order information. This means that there isn’t a special bond order to indicate aromatic bond. Instead, aromaticity is stored as a flag on an atom as well as a flag on a bond. You can access and set this information using the following API functions:
* OBAtom::IsAromatic(), OBAtom::SetAromatic(), OBAtom::UnsetAromatic()
* OBBond::IsAromatic(), OBBond::SetAromatic(), OBBond::UnsetAromatic()
There is a catch though, or rather a key point to note. OBMols have a flag to indicate whether aromaticity has been perceived. This is set via the following API functions:
* OBMol::SetAromaticPerceived(), OBMol::UnsetAromaticPerceived()
The value of this flag determines the behaviour of the OBAtom and OBBond IsAromatic() functions.
* If the flag is set, then IsAromatic() simply returns the corresponding value of the atom or bond flag.
* If unset, then IsAromatic() triggers aromaticity perception (from scratch), and then returns the value of the flag.
### Perception of aromaticity[¶](#perception-of-aromaticity)
It’s convenient to use the term “perception”, but what we mean is to apply an aromaticity model. Currently Open Babel only has a single aromaticity model, which is close to the Daylight aromaticity model. An aromaticity model describes how many pi electrons are contributed by each atom; if this sums to 4n+2 within a cycle, then all atoms and bonds in that cycle will be marked as aromatic.
Applying a model involves creating an instance of OBAromaticTyper(), and calling AssignAromaticFlags() passing an OBMol as a parameter. This wipes any existing flags, sets the atom and bond flags according to the model, and marks the aromaticity as perceived.
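A minimal sketch from Python, assuming OBAromaticTyper is exposed in the bindings in the same way as in C++:

```
from openbabel import pybel
ob = pybel.ob

mol = pybel.readstring("smi", "C1=CC=CC=C1")   # Kekulé benzene
m = mol.OBMol

typer = ob.OBAromaticTyper()
typer.AssignAromaticFlags(m)   # applies the default model and marks aromaticity as perceived

print(all(atom.IsAromatic() for atom in ob.OBMolAtomIter(m)))   # True
```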
If you wish (and know what you are doing), you can apply your own aromaticity model by setting various atoms and bonds as aromatic and then marking the molecule as having aromaticity perceived. Naturally, not all models will make sense chemically. Even more problematic is the situation where no Kekulé form exists that corresponds to the aromatic form. And finally, there is the philosophical question of the meaning of an aromatic atom without aromatic bonds, and vice versa.
### SMILES reading and writing[¶](#smiles-reading-and-writing)
Putting the pieces together, let’s look at the interaction between SMILES reading/writing and the handling of aromaticity.
Writing SMILES
Unless Kekulé SMILES are requested (via the `k` output option), the SMILES writer will always write an aromatic SMILES string. IsAromatic() will be called on atoms and bonds to determine whether to use lowercase letters. As described earlier, this will trigger aromaticity perception according to the default model if the molecule is not marked as having its aromaticity perceived.
Reading SMILES
The situation when reading SMILES is a bit more involved. If the SMILES string contains lowercase characters and aromatic bonds, this information is used to mark atoms and bonds as aromatic. The molecule is then kekulized to assign bond orders to aromatic bonds. Next, unless the `a` option is supplied, the molecule is marked as having its aromaticity unperceived.
That last step might seem strange. Why, after going to the trouble of reading the aromaticity and using it to kekulize, do we then effectively ignore it?
The reason is simply this: when writing an aromatic SMILES, we usually want to use our own aromaticity model and not that present in the input SMILES string. Otherwise, SMILES strings for the same molecule from different sources (that may use different aromaticity models) would not yield the same canonical SMILES string.
Of course, if the SMILES string came from Open Babel in the first place, we are doing unnecessary work when we keep reapplying the same aromaticity model. In this case, you can speed things up by using the `a` option, so that the aromaticity information present in the input is retained. The following examples show this in action:
```
$ obabel -:cc -osmi
C=C
$ obabel -:cc -osmi -aa
cc
```
### Effect of modifying the structure[¶](#effect-of-modifying-the-structure)
Perhaps surprisingly, modifying the structure has no effect on the existing aromaticity flags; deleting an atom does not mark aromaticity as unperceived, nor indeed does any other change to the structure such as changing the atomic number of an atom or setting its charge; nor does the use of Begin/EndModify() affect the aromaticity flags. The only way to ensure that aromaticity is reperceived after modifying the structure is to explicitly mark it as unperceived.
The rationale for this is that an efficient toolkit should avoid unnecessary work. The toolkit does not know if a particular modification invalidates any aromaticity already perceived, or even if it did know, it cannot know whether the user actually wishes to invalidate it. It’s up to the user to tell the toolkit. This places more responsibility in the hands of the user, but also more power.
To illustrate, let’s consider what happens when the user reads benzene from the SMILES string `c1ccccc1`, and then modifies the structure by deleting an aromatic atom.
As this is an aromatic SMILES string, the SMILES reader will mark all atoms and bonds as aromatic. Next, the molecule itself is marked as not having aromaticity perceived (see previous section). After reading, we can trigger aromaticity perception by calling IsAromatic() on an atom; now, in addition to the atoms and bonds being marked as aromatic, the molecule itself will be marked as having aromaticity perceived.
If at this point we delete a carbon and write out a SMILES string, what will the result be? You may expect something like `[CH]=CC=C[CH]` (or `C=CC=CC` if we also adjust the hydrogen count on the neighbor atoms) but instead it will be `[cH]ccc[cH]` (or `ccccc` if hydrogens were adjusted).
This follows from the discussion above - structural modifications have no effect on aromaticity flags. If instead the user wishes the SMILES writer to reperceive aromaticity, all that is necessary is to mark the molecule as not having aromaticity perceived, in which case the Kekulé form will instead be obtained.
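The following sketch walks through that benzene example from Python (the exact SMILES output may differ slightly):

```
from openbabel import pybel
ob = pybel.ob

mol = pybel.readstring("smi", "c1ccccc1")
m = mol.OBMol

# Trigger aromaticity perception; the molecule is now marked as perceived
m.GetAtom(1).IsAromatic()

# Deleting an atom leaves the aromaticity flags untouched...
m.DeleteAtom(m.GetAtom(1))
print(mol.write("smi"))   # written from the stale flags, e.g. [cH]ccc[cH]

# ...so mark aromaticity as unperceived to force reperception on writing
m.SetAromaticPerceived(False)
print(mol.write("smi"))   # now a Kekulé form
```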
Radicals and SMILES extensions[¶](#radicals-and-smiles-extensions)
---
### The need for radicals and implicit hydrogen to coexist[¶](#the-need-for-radicals-and-implicit-hydrogen-to-coexist)
Hydrogen deficient molecules, radicals, carbenes, etc., are not well catered for by chemical software aimed at pharmaceuticals. But radicals are important reaction intermediates in living systems as well as many other fields, such as polymers, paints, oils,
combustion and atmospheric chemistry. The examples given here are small molecules, relevant to the last two applications.
Chemistry software to handle radicals is complicated by the common use of implicit hydrogen when describing molecules. How is the program to know when you type “O” whether you mean an oxygen atom or water? This ambiguity leads some to say that hydrogens should always be explicit in any chemical description. But this is not the way that most chemists work. A straight paraffinic chain from which a hydrogen had been abstracted might commonly be represented by something like:
This uses implicit hydrogens and an explicit radical centre. But sometimes the hydrogens are explicit and the radical centre implicit, as when CH3 is used to represent the methyl radical.
### How Open Babel does it[¶](#how-open-babel-does-it)
Open Babel accepts molecules with explicit or implicit hydrogens and can convert between the two. It will also handle radicals (and other hydrogen-deficient species) with implicit hydrogen by using internally a property of an atom, _spinmultiplicity, modelled on the RAD property in MDL MOL files and also used in CML. This can be regarded in the present context as a measure of the hydrogen deficiency of an atom. Its value is:
* 0 for normal atoms,
* 2 for radical (missing one hydrogen) and
* 1 or 3 for carbenes and nitrenes (missing two hydrogens).
It happens that for some doubly deficient species, like carbene CH2 and oxygen atoms, the singlet and triplet species are fairly close in energy and both may be significant in certain applications such as combustion, atmospheric or preparative organic chemistry, so it is convenient that they can be described separately. There are of course an infinity of other electronic configurations of molecules but Open Babel has no special descriptors for them. However, even more hydrogen-deficient atoms are indicated by the highest possible value of spinmultiplicity (C atom has spin multiplicity of 5).
(This extends MDL’s RAD property which has a maximum value of 3.)
If the spin multiplicity of an atom is not input explicitly, it is set (in `OBMol::AssignSpinMultiplicity()`) when the input format is MOL, SMI, CML or Therm. This routine is called after all the atoms and bonds of the molecule are known. It detects hydrogen deficiency in an atom and assigns spin multiplicity appropriately. But because hydrogen may be implicit it only does this for atoms which have at least one explicit hydrogen or on atoms which have had
`OBAtom::ForceNoH()` called for them - which is effectively zero explicit hydrogens. The latter is used, for instance, when SMILES inputs `[O]`
to ensure that it is seen as an oxygen atom (spin multiplicity=3)
rather than water. Otherwise, atoms with no explicit hydrogen are assumed to have a spin multiplicity of 0, i.e. with a full complement of implicit hydrogens.
In deciding which atoms should have spin multiplicity assigned,
hydrogen atoms which have an isotope specification (D,T or even 1H)
do not count. So SMILES `N[2H]` is NH2D (spin multiplicity left at 0, so with a full content of implicit hydrogens), whereas
`N[H]` is NH (spin multiplicity=3). A deuterated radical like NHD is represented by `[NH][2H]`.
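From the API, the assigned value can be inspected or set with `OBAtom::GetSpinMultiplicity()`/`SetSpinMultiplicity()`. A minimal sketch (the first printed value assumes the assignment rules described above are applied on SMILES input):

```
from openbabel import pybel

# Methyl radical written with an explicit hydrogen count: one hydrogen missing,
# so by the rules above its spin multiplicity should be 2
mol = pybel.readstring("smi", "[CH3]")
print(mol.OBMol.GetAtom(1).GetSpinMultiplicity())

# The value can also be set directly, e.g. to ask for triplet carbene
carbene = pybel.readstring("smi", "[CH2]")
carbene.OBMol.GetAtom(1).SetSpinMultiplicity(3)
print(carbene.OBMol.GetAtom(1).GetSpinMultiplicity())   # 3
```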
### In radicals either the hydrogen or the spin multiplicity can be implicit[¶](#in-radicals-either-the-hydrogen-or-the-spin-multiplicity-can-be-implicit)
Once the spin multiplicity has been set on an atom, the hydrogens can be implicit even if it is a radical. For instance, the following mol file, with explicit hydrogens, is one way of representing the ethyl radical:
```
ethyl radical
OpenBabel04010617172D
Has explicit hydrogen and implicit spin multiplicity
7 6 0 0 0 0 0 0 0 0999 V2000
0.0000 0.0000 0.0000 C 0 0 0 0 0
0.0000 0.0000 0.0000 C 0 0 0 0 0
0.0000 0.0000 0.0000 H 0 0 0 0 0
0.0000 0.0000 0.0000 H 0 0 0 0 0
0.0000 0.0000 0.0000 H 0 0 0 0 0
0.0000 0.0000 0.0000 H 0 0 0 0 0
0.0000 0.0000 0.0000 H 0 0 0 0 0
1 2 1 0 0 0
1 3 1 0 0 0
1 4 1 0 0 0
1 5 1 0 0 0
2 6 1 0 0 0
2 7 1 0 0 0
M END
```
When read by Open Babel the spinmultiplicity is set to 2 on the C atom 2. If the hydrogens are made implicit, perhaps by the `-d`
option, and the molecule output again, an alternative representation is produced:
```
ethyl radical
OpenBabel04010617192D
Has explicit spin multiplicity and implicit hydrogen
2 1 0 0 0 0 0 0 0 0999 V2000
0.0000 0.0000 0.0000 C 0 0 0 0 0
0.0000 0.0000 0.0000 C 0 0 0 0 0
1 2 1 0 0 0
M RAD 1 2 2
M END
```
### SMILES extensions for radicals[¶](#smiles-extensions-for-radicals)
Although radical structures can be represented in SMILES by specifying the hydrogens explicitly, e.g. `[CH3]` is the methyl radical, some chemists have apparently felt the need to devise non-standard extensions that represent the radical centre explicitly. Open Babel will recognize `C[O.]` as well as `C[O]` as the methoxy radical CH3O during input, but the non-standard form is not supported in output.
By default, radical centres are output in explicit hydrogen form,
e.g. `C[CH2]` for the ethyl radical. All the atoms will be in explicit H form, i.e. `[CH3][CH2]`, if `OBMol::AddHydrogens()` or the `-h` option has been specified. The output is always standard SMILES, although other programs may not interpret radicals correctly.
Open Babel supports another SMILES extension for both input and output: the use of lower case atomic symbols to represent radical centres. (This is supported on the ACCORD Chemistry Control and maybe elsewhere.) So the ethyl radical is `Cc` and the methoxy radical is `Co`. This form is input transparently and can be output by using the `-xr` option “radicals lower case”. It is a useful shorthand in writing radicals, and in many cases is easier to read since the emphasis is on the radical centre rather than the number of hydrogens which is less chemically significant.
In addition, this extension interprets multiple lower case `c`
without ring closure as a conjugated carbon chain, so that `cccc` is input as 1,3-butadiene. Lycopene (the red in tomatoes) is
`Cc(C)cCCc(C)cccc(C)cccc(C)ccccc(C)cccc(C)cccc(C)CCcc(C)C` (without the stereochemical specifications). This conjugated chain form is not used on output - except in the standard SMILES aromatic form,
`c1ccccc1` benzene.
It is interesting to note that the lower case extension actually improves the chemical representation in a few cases. The allyl radical C3H5 would be conventionally `[CH2]=[CH][CH2]` (in its explicit H form),
but could be represented as `ccc` with the extended syntax. The latter more accurately represents the symmetry of the molecule caused by delocalisation.
This extension is not as robust or as carefully considered as standard SMILES and should be used with restraint. A structure that uses `c`
as a radical centre close to aromatic carbons can be confusing to read, and Open Babel’s SMILES parser can also be confused. For example, it recognizes `c1ccccc1c` as the benzyl radical, but it doesn’t like
`c1cc(c)ccc1`. Radical centres should not be involved in ring closure: for cyclohexyl radical `C1cCCCC1` is ok, but `c1CCCCC1` is not.
### Other Supported Extensions[¶](#other-supported-extensions)
Open Babel supports quadruple bonds `$`, e.g. `[Rh-](Cl)(Cl)(Cl)(Cl)$[Rh-](Cl)(Cl)(Cl)Cl`
and aromatic `[te]`, e.g. `Cc1[te]ccc1`. In addition, ring closures up to 5 digits `%(N)` are supported, e.g. `C%(100)CC%(100)`.
Contributing to Open Babel[¶](#contributing-to-open-babel)
---
### Overview[¶](#overview)
Open Babel is developed using open, community-oriented development made possible by an active community – developers, testers, writers, implementers and most of all users. No matter which ‘er’ you happen to be, or how much time you can provide, you can make valuable contributions.
Not sure where to start? This section aims to give you some ideas.
Provide input
You can help us by:
* helping to answer questions on our [mailing list](https://lists.sourceforge.net/lists/listinfo/openbabel-discuss)
* suggesting new [features](https://github.com/openbabel/openbabel/issues?utf8=%E2%9C%93&q=is%3Aissue+is%3Aopen+label%3Afeature) or file formats
* reporting [bugs](https://github.com/openbabel/openbabel/issues?utf8=%E2%9C%93&q=is%3Aissue+is%3Aopen+)
Spread the word
If you find Open Babel useful, there’s a chance that others will also. You can help us by:
* promoting and citing Open Babel in talks and publications
* writing blog posts about Open Babel
* helping with documentation and our website
* building your own software on Open Babel
To get started, just send an email to our [mailing list](https://lists.sourceforge.net/lists/listinfo/openbabel-discuss).
Code a storm
As an open source project, Open Babel has a very open development process. This means that many contributors have helped the project in a variety of ways – some for long periods of time, and some with small, single changes. All types of assistance have been valuable to the growth of the project over the years.
New developers are always very welcome to OpenBabel so if you’re interested, just send an email to the developer list ([join here](http://lists.sourceforge.net/lists/listinfo/openbabel-devel)) about what you would like to work on, or else we can come up with some ideas for areas where you could contribute. Here are some possibilities:
* Implement the latest algorithms described in the literature
* Add a new file format (see [How to add a new file format](index.html#add-file-format))
* Perform ‘software archaeology’ (see [Software Archaeology](index.html#software-archaeology))
* Fix some [bugs](https://github.com/openbabel/openbabel/issues?utf8=%E2%9C%93&q=is%3Aissue+is%3Aopen+)
* Add a requested [feature](https://github.com/openbabel/openbabel/issues?utf8=%E2%9C%93&q=is%3Aissue+is%3Aopen+label%3Afeature)
* Implement a feature from our roadmap
### Developing Open Babel[¶](#developing-open-babel)
Due to the open nature of its development, Open Babel contains code contributed by a wide variety of developers (see [Thanks](index.html#thanks)). This section describes some general guidelines and “best practices” for code developers.
#### Developer Resources[¶](#developer-resources)
For new and existing developers here are some useful resources:
* GitHub [project page](http://github.com/openbabel)
* Development version [API documentation](http://openbabel.org/dev-api)
* Development version [Sphinx documentation](https://open-babel.readthedocs.io/en/latest/)
#### Working with the Development Code[¶](#working-with-the-development-code)
To download and update the latest version of the Open Babel source code, you need Git. Git is the version control system used to maintain the Open Babel source code repository. There are many clients for Git, including command-line and GUI applications.
##### Keeping up to date with Git[¶](#keeping-up-to-date-with-git)
1. Check out the latest development version:
```
git clone https://github.com/openbabel/openbabel.git openbabel-dev
```
This creates a directory called `openbabel-dev`, which contains the latest source code from Open Babel.
2. Configure and compile this using CMake (see [Compiling Open Babel](index.html#compiling-open-babel)).
3. After some time passes, and you want the latest bug fixes or new features, you may want to update your source code. To do this, go into the `openbabel-dev` directory you created above, and type:
```
git pull -u
```
4. Do step (2) again.
5. If, after updating, the compilation fails please report it to the Open Babel mailing list. In the meantime, if you want to go back to a particular revision (that is, if you don’t want to use the latest one), use `git log` to find the checksum of the revision you want, and check it out as follows:
```
$ git log
...
commit 1c2916cc5e6ed31a23291524b08291c904506c3f
Author: <NAME> <EMAIL>
Date:   Mon Apr 30 07:33:17 2018 +0100

$ git checkout 1c2916cc5
```
#### Modular design of code base[¶](#modular-design-of-code-base)
Since version 2.0, Open Babel has had a modular structure. Particularly for the use of Open Babel as a chemical file format converter, it aims to:
* separate the chemistry, the conversion process and the user interfaces, reducing, as far as possible, the dependency of one on another.
* put all the code for each chemical format in one place (usually a single cpp file) and make the addition of new formats simple.
* allow the format conversion of not just molecules, but also any other chemical objects, such as reactions.
The structure of the Open Babel codebase broken down into modules
The separate parts of the OpenBabel program are:
> * The **Chemical** core, which contains OBMol etc. and has all the chemical structure description and manipulation. This bit is the heart of the application and its API can be used as a chemical toolbox. It has no input/output capabilities.
> * The **Formats**, which read and write to files of different types. These classes are derived from a common base class, OBFormat, which is in the Conversion Control module. They also make use of the chemical routines in the Chemical Core module. Each format file contains a global object of the format class. When the format is loaded the class constructor registers the presence of the class with OBConversion. This means the formats are plugins - new formats can be added without changing any framework code.
> * **Common Formats** include OBMoleculeFormats and XMLBaseFormat from which most other formats (like Format A and Format B in the diagram) are derived. Independent formats like Format C are also possible.
> * The **Conversion** control, which also keeps track of the available formats, the conversion options and the input and output streams. It can be compiled without reference to any other parts of the program. In particular, it knows nothing of the Chemical core: mol.h is not included.
> * The **User interface**, which may be a command line (in main.cpp), a Graphical User Interface (GUI), especially suited to Windows users and novices, or may be part of another program which uses OpenBabel’s input and output facilities. This depends only on the Conversion control module (obconversion.h is included), but not on the Chemical core or on any of the Formats.
> * The **Fingerprint API**, as well as being usable in external programs, is employed by the fastsearch and fingerprint formats.
> * The **Fingerprints**, which are bit arrays which describe an object and which facilitate fast searching. They are also built as plugins, registering themselves with their base class OBFingerprint which is in the Fingerprint API.
> * The **Error handling** can be used throughout the program to log and display errors and warnings (see below).
It is possible to build each box in the diagram as a separate DLL or shared library, and the restricted dependencies can help to limit the amount of recompilation. When formats or fingerprints are built in this way, only those whose DLL or .so files are present when the program starts can be used. Several formats or fingerprints may be present in a single dynamic library.
Alternatively, and most commonly, the same source code can be built into a single executable. The restricted dependencies still provide easier program maintenance.
This segregation means that a module can directly call code only in other modules connected to it by forward arrows. So some discipline is needed when adding new code, and sometimes non-obvious work-arounds are necessary. For instance, since the user interface doesn’t know about the Chemical Core, if it were necessary to set any parameters in it, then this would have to be done through a pseudo format OBAPIInterface.
Sometimes one format needs to use code from another format, for example, rxnformat needs to read mol files with code from mdlformat. The calling format should not use the other format’s code directly but should do it through an OBConversion object configured with the appropriate helper format.
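As a rough sketch of this pattern (the variable `molBlock` is an assumption standing in for the embedded MOL text; error handling is omitted), the calling format might do something along these lines:

```
// Sketch only: delegate parsing of an embedded MOL block to the MDL format
// through a separate OBConversion object, instead of calling mdlformat directly.
OBConversion helperConv;
if (helperConv.SetInFormat("mol"))
{
  OBMol component;
  helperConv.ReadString(&component, molBlock); // molBlock: assumed std::string
  // ... use the parsed component ...
}
```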
The objects passed between the modules in the diagram above are polymorphic [:obapi:`OBBase`](#id1) pointers. This means that the conversion framework can be used by any object derived from OBBase (which essentially means anything - chemical or not). Most commonly these refer to OBMol objects, less commonly to OBReaction objects, but could be extended to anything else without needing to change any existing code.
#### Error Handling and Warnings[¶](#error-handling-and-warnings)
The general philosophy of the Open Babel project is to attempt to gracefully recover from error conditions. Depending on the severity of the error, a message may or may not be sent to the user – users can filter out developer debugging messages and minor errors, but should be notified of significant problems.
Errors and warnings in Open Babel are handled internally by a flexible system motivated by a few factors:
* End users often do not wish to be deluged by debugging or other messages during operation.
* Other developers may wish to redirect or filter error/warning output (e.g., in a GUI).
* The operation of Open Babel should be open to developers and users alike to monitor an “audit trail” of operations on files and molecules, and debug the program and library itself when the need arises.
Multiple error/warning levels exist and should be used by code. These are defined in the [:obapi:`obMessageLevel`](#id3) enum as follows:
* `obError` – for critical errors (e.g., cannot read a file)
* `obWarning` – for non-critical problems (e.g., molecule appears empty)
* `obInfo` – for informative messages (e.g., file is a non-standard format)
* `obAuditMsg` – for messages auditing methods which destroy or perceive molecular data (e.g., kekulization, atom typing, etc.)
* `obDebug` – for messages only useful for debugging purposes
The default filter level is set to `obWarning`, which means that users are told of critical errors, but not non-standard formatting of input files.
A global error handler [:obapi:`obErrorLog`](#id5) (an instance of [:obapi:`OBMessageHandler`](#id7)) is defined and should be used as shown in the API documentation for the [:obapi:`OBMessageHandler`](#id9) class.
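For illustration, a minimal sketch of how library code might report a non-critical problem through the global handler (the message text is invented):

```
#include <openbabel/oberror.h>

using namespace OpenBabel;

void WarnEmptyMolecule()
{
  // Route a warning through the global error handler; whether the user
  // sees it depends on the current output level (obWarning by default).
  obErrorLog.ThrowError(__FUNCTION__,
                        "Molecule appears to contain no atoms.", obWarning);
}
```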
#### Lazy Evaluation[¶](#lazy-evaluation)
The [:obapi:`OBMol::BeginModify() <OpenBabel::OBMol::BeginModify>`](#id12) and [:obapi:`OBMol::EndModify() <OpenBabel::OBMol::EndModify>`](#id14) calls are part of Open Babel’s lazy evaluation mechanism.
In some cases, code may desire to make a large number of changes to an OBMol object at once. Ideally, this should all happen without triggering unintended perception routines. Therefore, the `BeginModify()` call marks the beginning of such code, and `EndModify()` triggers any needed updates of lazy evaluation methods.
For example:
```
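// Excerpt from a typical format reader: the input stream ifs, the char
// array buffer (of size BUFF_SIZE), the atom count natoms and the loop
// counter i are assumed to be declared by the surrounding code.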
mol.BeginModify();

double x,y,z;
OBAtom *atom;
vector<string> vs;

for (i = 1; i <= natoms; i++)
{
  if (!ifs.getline(buffer,BUFF_SIZE))
    return(false);
  tokenize(vs,buffer);
  if (vs.size() != 4)
    return(false);

  atom = mol.NewAtom();
  x = atof((char*)vs[1].c_str());
  y = atof((char*)vs[2].c_str());
  z = atof((char*)vs[3].c_str());
  atom->SetVector(x,y,z); //set coordinates
  atom->SetAtomicNum(atoi(vs[0].c_str())); // set atomic number
}

mol.ConnectTheDots();
mol.PerceiveBondOrders();
mol.EndModify();
```
This code reads in a list of atoms with XYZ coordinates and the atomic number in the first column (`vs[0]`). Since hundreds or thousands of atoms could be added to a molecule, followed by creating bonds, the code is enclosed in a `BeginModify()`/`EndModify()` pair.
### Documentation[¶](#documentation)
Documenting Open Babel is an important and ongoing task. As an open source project, code must be documented, both for other developers to use the API and for others to follow your code. This includes clear documentation on the interfaces of particular classes and methods (that is, the [API](http://openbabel.org/api) documentation) but also tutorials and examples of using the Open Babel library to accomplish clear tasks.
Beyond the documentation described above, as an open-source project involving many, many contributors, the internal code should be clearly commented and easy to read (in English, preferably, since this is the common language of developers on the project).
#### Adding New Code[¶](#adding-new-code)
The golden rule is **write the documentation, then code to the specs**.
You should never, ever start writing code unless you’ve specified, clearly and exactly, what your code will do. This makes life easier for you (i.e., you know exactly what the code should do), and for others reading your code.
This mantra also facilitates writing tests (see [Adding a new test](index.html#testing)).
#### Modifying Old Code[¶](#modifying-old-code)
When modifying old code, please take a little time to improve the documentation of the function.
Even an “obvious” function must be documented, if for no other reason than to say, “This function does what you think, and has no side effects.”
Take [:obapi:`OBAtom::SetAtomicNum() <OpenBabel::OBAtom::SetAtomicNum>`](#id2) - should be “obvious”, right? Wrong.
> * Does it affect the charge?
> * The spin multiplicity?
> * The implicit valence?
> * The hybridization?
> * What happens if I do SetHybridization(3) and then SetAtomicNum(1)?
> * Does the molecule have to be in the modify state?
> * If the molecule is not in the modify state, is it put into the modify state by SetAtomicNum()?
> * Does SetAtomicNum() cause a recomputation of aromaticity?
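As a sketch of the level of detail to aim for, a Doxygen-style comment answering such questions might look like the following (the wording is purely illustrative, not the actual documentation of this method):

```
//! \brief Set the atomic number of this atom.
//! \param atomicnum  The new atomic number (e.g. 6 for carbon).
//!
//! Illustrative only: a good comment states explicitly which related
//! properties (formal charge, spin multiplicity, hybridization,
//! aromaticity) are left untouched, whether the parent molecule must
//! be in the modify state, and what perception, if any, is triggered.
void SetAtomicNum(int atomicnum);
```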
#### User documentation and tutorials[¶](#user-documentation-and-tutorials)
There’s no point spending time adding new features to Open Babel unless you describe how to use them and give examples. The best place to do this is in the user documentation…which you’re reading right now.
This documentation is automatically generated from text files in a simple markup language (*reStructuredText*) using the [Sphinx](http://sphinx.pocoo.org/) documentation system. This allows us to generate web pages, PDF files, and even ePub eBooks all from the same source (which is currently maintained at [BitBucket](http://bitbucket.org/baoilleach/openbabel-user-docs)).
If you notice any errors or feel like adding a section, please let us know at [openbabel-devel](https://lists.sourceforge.net/lists/listinfo/openbabel-devel).
### Adding a new test[¶](#adding-a-new-test)
Tests allow us to maintain code quality, ensure that code is working, prevent regressions, and facilitate refactoring. Personally, I find that there is no better motivation for writing tests than knowing that that bug I fixed will stay fixed, and that feature I implemented will not be broken by others. As an open source developer, I never have enough time; tests ensure that what time I have is not wasted.
We can divide the existing tests into three classes, based on how they test the Open Babel codebase:
1. Tests written in C++ that test the public API
2. Tests written in Python that use the SWIG bindings to test the public API
3. Tests written in Python that use the command-line executables for testing
Which type of test should you write? It doesn’t really matter - it’s more important that you write *some* type of test. Personally, I can more quickly test more if I write the test in Python, so generally I write and check-in tests of type (2) above; when I need to run a testcase in a debugger, I write a short test of type (1) so that I can step through and see what’s happening.
#### Running tests[¶](#running-tests)
To begin with, we need to configure CMake to enable tests: `-DENABLE_TESTS=ON`. This adds the `make test` target and builds the C++ tests. For tests of type 3 (above), you will also need to enable the Python bindings: `-DPYTHON_BINDINGS=ON -DRUN_SWIG=ON`. Some tests require optional dependencies; if you don’t build with support for these, the corresponding tests will not be run.
To actually run the tests, you can run the entire test suite in one go or run individual tests. To run the entire suite, use `make test` or `ctest` (note that you can use the `-j` option to speed up ctest). The ctest command also allows a single test or a list of tests to be specified, and in combination with `-VV` (verbose) may be useful to run an individual test. However, I find it more useful to run individual tests directly. Here is an example of how to run an individual test for each of the three types discussed earlier:
1. `test_runner regressionstest 1`
This will run test number 1 in `regressionstest.cpp`. Nothing will happen…unless the test fails. (test_runner is a testing harness generated by CMake.)
2. `python test\testbindings.py TestSuite.testAsterisk`
This will run the testAsterisk test in `testbindings.py`. This will write out a single dot, and some summary information.
3. `python test\testbabel.py testOBabel.testSMItoInChI`
This will run the testSMItoInChI test in `testbabel.py`.
The next few sections describe adding a new test of types 1 to 3. The same test will be added, a test to ensure that the molecular weight of ethanol is reported as 46.07.
#### Test using C++[¶](#test-using-c)
The easiest place to add new tests is into `test/regressionstest.cpp`. Look at the switch statement at the end of the file and pick a number for the test. Let’s say 260. Add the following:
```
case 260:
test_Ethanol_MolWt();
break;
```
Now add the value of 260 to `test/CMakeLists.txt` so that it will be run as part of the testsuite:
```
set (regressions_parts 1 221 222 223 224 225 226 227 260)
```
Now let’s add the actual test somewhere near the top of the file:
```
void test_Ethanol_MolWt()
{
  OBMol mol;
  OBConversion conv;
  OB_REQUIRE( conv.SetInFormat("smi") );
  conv.ReadString(&mol, "CCO");

  double molwt = mol.GetMolWt();
  OB_ASSERT( fabs(molwt - 46.07) < 0.01 );
}
```
The various assert methods are listed in `obtest.h` and are as follows:
* **OB_REQUIRE(exp)** - This must evaluate to `true` or else the test will be marked as failing and will terminate. This is useful for conditions that *must* be true or else the remaining tests cannot be run (e.g. was the necessary OBFormat found?).
* **OB_ASSERT(exp)** - This must evaluate to `true` or else the test will be marked as failing. In contrast to OB_REQUIRE, the test does not terminate in this case, but continues to run. This feature can be useful because it lets you know (based on the output) how many and which OB_ASSERT statements failed.
* **OB_COMPARE(expA, expB)** - Expressions A and B must be equal or else the test fails (but does not terminate).
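For instance (the values here are arbitrary and only show the shape of each call):

```
OB_REQUIRE( conv.SetInFormat("smi") );  // abort this test if the SMILES format is missing
OB_ASSERT( mol.NumAtoms() == 3 );       // record a failure, but keep running
OB_COMPARE( mol.GetTotalCharge(), 0 );  // fail (without terminating) if the two values differ
```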
It is often useful to write a test that uses a checked-in testfile. Let’s do this for our example testcase. If you place a file `ethanol.smi` into `test/files`, then the following will read it using a convenience function provided by `obtest.h`:
```
void test_Ethanol_MolWt()
{
  OBMolPtr mol = OBTestUtil::ReadFile("ethanol.smi");

  double molwt = mol->GetMolWt();
  OB_ASSERT( fabs(molwt - 46.07) < 0.01 );
}
```
As well as `ReadFile` (which is convenient for single molecules), the `OBTestUtil` struct provides `GetFilename` which will return the full path to the testfile, if you wish to open it yourself.
#### Test using a command-line executable[¶](#test-using-a-command-line-executable)
At the command-line we can calculate the molecular weight of ethanol as shown below. We are going to do something similar using the Python test framework:
```
> obabel -:CCO --append MW -otxt
46.0684
```
Open `test/testbabel.py` in an editor. I have grouped tests related to the `obabel` executable into a class testOBabel, so let’s add a new test there. Somewhere in that class (for example, at the end), add a function such as the following (note: it must begin with the word “test”):
```
def testMolWtEthanol(self):
    """Check the molecular weight of ethanol"""
    self.canFindExecutable("obabel")
    answers = [
        ("CCO", 46.07),
        ("[H]", 1.01),
        ("[2H]", 2.01),
        ]
    for smi, molwt in answers:
        output, error = run_exec('obabel -:%s --append mw -otxt' % smi)
        my_molwt = round(float(output), 2)
        self.assertEqual(my_molwt, molwt)
```
We provide a few convenience functions to help write these tests. The most important of these is `run_exec(command)`, which runs the command-line executable and returns a tuple of stdout and stderr. Behind the scenes, it adds the full path to the named executable. In the example above, `run_exec` took a single argument; the next example will show its use with two arguments - the additional argument is a string which is treated as stdin and piped through the executable.
In the previous example, each SMILES string was passed in one-at-a-time. However, it is more efficient to do them all in one go as in the following example:
```
def testMolWtEthanol(self):
    """Check the molecular weight of ethanol"""
    self.canFindExecutable("obabel")
    smifile = """CCO
[H]
[2H]
"""
    answers = [46.07, 1.01, 2.01]
    output, error = run_exec(smifile, 'obabel -ismi --append mw -otxt')
    for ans, my_ans in zip(answers, output.split("\n")):
        self.assertEqual(ans, round(float(my_ans), 2))
```
To use a testfile placed in `test/files`, the `getTestFile()` member function is provided:
```
def testMolWtEthanol(self):
    """Check the molecular weight of ethanol"""
    self.canFindExecutable("obabel")
    answers = [46.07, 1.01, 2.01]
    smifile = self.getTestFile("ethanol.smi")
    output, error = run_exec('obabel %s --append mw -otxt' % smifile)
    for ans, my_ans in zip(answers, output.split("\n")):
        self.assertEqual(ans, round(float(my_ans), 2))
```
The full list of provided convenience functions is:
* **run_exec(command)**, **run_exec(stdin, command)** - see above
* **BaseTest.getTestFile(filename)** - returns the full path to a testfile
* **BaseTest.canFindExecutable(executable)** - checks whether the executable exists in the expected location
* **BaseTest.assertConverted(stderr, N)** - An assert statement that takes the stderr from run_exec and will check whether the number of molecules reported as converted matches N
#### Test the API using Python[¶](#test-the-api-using-python)
The easiest place to add new tests is into `test/testbindings.py`. Classes are used to organise the tests, but for a single ‘miscellaneous’ test a good place is the TestSuite class. Somewhere in that class add the following function:
```
def testMolWtEthanol(self):
    """Check the molecular weight of ethanol"""
    answers = [
        ("CCO", 46.07),
        ("[H]", 1.01),
        ("[2H]", 2.01),
        ]
    for smi, molwt in answers:
        my_molwt = round(pybel.readstring("smi", smi).molwt, 2)
        self.assertEqual(my_molwt, molwt)
```
The variable `here` is defined in `testbindings.py` and may be used to find the path to testfiles. For example, given the file `test/ethanol.smi`, the following may be used to read it:
```
def testMolWtEthanol(self):
    """Check the molecular weight of ethanol"""
    answers = [46.07, 1.01, 2.01]
    testfile = os.path.join(here, "test", "ethanol.smi")
    for mol, answer in zip(pybel.readfile("smi", testfile), answers):
        my_molwt = round(mol.molwt, 2)
        self.assertEqual(my_molwt, answer)
```
The tests use the standard unittest framework. One thing to note, which is not obvious, is how to test for exceptions. A typical case is checking that a dodgy SMILES is rejected on reading; in this instance, `pybel.readstring()` will raise an IOError. To assert that this is the case, rather than use try/except, the following syntax is required:
```
self.assertRaises(IOError, pybel.readstring, "smi", "~*&*($")
```
If you have multiple tests to add on a single ‘topic’, you will probably want to add your own class either into `testbindings.py` or a new Python file. Note that if you create a new Python file, it should start with the word `test` and you will need to add the rest of the name to the `pybindtest` list in `test/CMakeLists.txt`.
#### Some final comments[¶](#some-final-comments)
Some thoughts on the topic of the perfect test:
* When adding a regression test for a bug fix, the test should fail without the fix, but pass afterwards.
* When adding a test for a new feature, the test should provide complete coverage for all code paths.
* Test not just perfect input, but all sorts of dodgy input like molecules with no atoms, empty strings, and so forth.
* Don’t be afraid to add tests for things which you already (think you) know will pass; such tests may surprise you, and even if they don’t they will prevent regressions.
Potential problems/gotchas:
* Why isn’t your Python test being run? Test function names must begin with the word `test`.
* If your new test passes first time, check that it is actually running correctly, by changing your asserts so that they should fail.
* The C++ tests will be marked as failing if the test writes any of the following to stdout: `ERROR`, `FAIL`, `Test failed`. This is actually how the assert methods work.
* It’s best to avoid writing to disk, and instead write to a variable or stdout and capture it (as in the examples above).
### Software Archaeology[¶](#software-archaeology)
In any large software project, some parts of the code are revised and kept up-to-date more than others.
Conversely, some parts of the code begin to fall behind – the code may be poorly tested, poorly documented, and not always up to best practices.
With that in mind, the following sections describe the important task of software archaeology – diving in to older parts of code and bringing them up to date. Whenever editing a file, please keep these in mind.
#### Documentation and Code Readability[¶](#documentation-and-code-readability)
* Add clear documentation for every public function (see [Documentation](index.html#documentation)).
* Add clear comments on the internal operation of functions so that anyone can read through the code quickly.
> + If you’re not sure what a function does, e-mail the [openbabel-devel](https://lists.sourceforge.net/lists/listinfo/openbabel-devel) list and it can be worked out.
>
* Mark functions which should be publicly visible and functions which are only useful internally. Many methods are not particularly useful except inside the library itself.
* Improve code indentation
> + It seems like a minor point, but the format of your code is important. As open source software, your code is read by many, many people.
> + Different contributions have often had different indentation styles. Simply making the code indentation consistent across an entire file makes the code easier to read.
> + The current accepted scheme for Open Babel is a default indent of two spaces, and use of spaces instead of tabs.
> + For tips on changing your editor to use this indentation style, please e-mail the [openbabel-devel](https://lists.sourceforge.net/lists/listinfo/openbabel-devel) list.
>
* Delete code which is commented out. The version control system (Git) maintains history, so if we need it later, we can go back and get that code. Dead code like this simply makes it harder to read the important code!
* Mark areas of code which use [:obapi:`OBAtom::GetIdx() <OpenBabel::OBAtom::GetIdx>`](#id2) or other accesses to atom indexes, which may break when atom indexing changes.
#### Code Maintenance[¶](#code-maintenance)
* Minimize `#if`/`#endif` conditional compilation. Some is required for portability, but these should be minimized where possible. If there seems to be some magic #define which accesses parts of the file, it’s probably dead code. As above, dead code makes it harder to maintain and read everything else.
* Remove calls to cout, cerr, STDOUT, perror etc. These should use the global error reporting code.
* Minimize warnings from compilers (e.g., GCC flags `-Wextra -Wall`). Sometimes these are innocuous, but it’s usually better to fix the problems before they become bugs.
* Use static code analysis tools to find potential bugs in the code and remove them.
* Ensure proper use of atom and bond iterators, e.g., `FOR_ATOMS_OF_MOL` rather than atom or bond index access, which will break if indexing changes.
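For example, a minimal sketch of the iterator style (assuming an existing OBMol):

```
#include <openbabel/mol.h>
#include <openbabel/obiter.h>

using namespace OpenBabel;

double SumAtomicMasses(OBMol &mol)
{
  double sum = 0.0;
  // Iterate with FOR_ATOMS_OF_MOL instead of looping over atom indices,
  // so the code keeps working even if atom indexing changes.
  FOR_ATOMS_OF_MOL(atom, mol)
    sum += atom->GetAtomicMass();
  return sum;
}
```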
Patches and contributions towards any of these tasks will be greatly appreciated.
Adding plugins[¶](#adding-plugins)
---
Open Babel uses a plugin architecture for file formats, ‘operations’, charge models, forcefields, fingerprints and descriptors. The general idea behind plugins is described on [Wikipedia](http://en.wikipedia.org/wiki/Plug-in_%28computing%29). When you start an application that uses the Open Babel library, it searches for available plugins and loads them. This means, for example, that plugins could be distributed separately from the Open Babel distribution.
In fact, even the plugin types are themselves plugins; this makes it easy to add new categories of plugin. The different types of plugins can be listed using:
```
C:\>babel -L
charges
descriptors
fingerprints
forcefields
formats
loaders
ops
```
To list the plugins of a particular type, for example, charge models, just specify the plugin type:
```
C:\>babel -L charges
gasteiger    Assign Gasteiger-Marsili sigma partial charges
mmff94       Assign MMFF94 partial charges
qeq          Assign QEq (charge equilibration) partial charges (Rappe and Goddard, 1991)
qtpie        Assign QTPIE (charge transfer, polarization and equilibration) partial charges (Chen and Martinez, 2007)
```
To add a new plugin of any type, the general method is very simple:
1. Make a copy of an existing plugin .cpp file
2. Edit it so that it does what you want
3. Add the name of the .cpp file to the appropriate `CMakeLists.txt`.
The following sections describe in depth how to add support for a new file format or operation to Open Babel. Remember that if you do add a new plugin, please contribute the code back to the Open Babel project.
### How to add a new file format[¶](#how-to-add-a-new-file-format)
Adding support for a new file format is a relatively easy process, particularly with Open Babel 2.3 and later. Here are several important steps to remember when developing a format translator:
> 1. Create a file for your format in `src/formats/` or `src/formats/xml/` (for XML-based formats). Ideally, this file is self-contained, although several format modules are compiled across multiple source code files.
> 2. Add the name of the new .cpp file to an appropriate place in `src/formats/CMakeLists.txt`. It will now be compiled as part of the build process.
> 3. Take a look at other file format code, particularly `exampleformat.cpp`, which contains a heavily-annotated description of writing a new format. XML formats need to take a different approach; see the code in `xcmlformat.cpp` or `pubchemformat.cpp`.
> 4. When reading in molecules (and thus performing a lot of molecular modifications) call [:obapi:`OBMol::BeginModify() <OpenBabel::OBMol::BeginModify>`](#id1) at the beginning and [:obapi:`OBMol::EndModify() <OpenBabel::OBMol::EndModify>`](#id3) at the end. This will ensure that perception routines do not run while you read in a molecule and are reset after your code finishes (see [Lazy Evaluation](index.html#lazy-evaluation)).
> 5. Currently, lazy perception does not include connectivity and bond order assignment. If your format does not include bonds, make sure to call [:obapi:`OBMol::ConnectTheDots() <OpenBabel::OBMol::ConnectTheDots>`](#id5) and [:obapi:`OBMol::PerceiveBondOrders() <OpenBabel::OBMol::PerceiveBondOrders>`](#id7) after [:obapi:`OBMol::EndModify() <OpenBabel::OBMol::EndModify>`](#id9) to ensure bonds are assigned.
> 6. Consider various input and output options that users can set from the command-line or GUI. For example, many quantum mechanics formats (as well as other formats which do not recognize bonds) offer the following options:
> `-as` Call only [:obapi:`OBMol::ConnectTheDots() <OpenBabel::OBMol::ConnectTheDots>`](#id11) (single bonds only)
> `-ab` No bond perception
> 7. Make sure to use generic data classes like [:obapi:`OBUnitCell`](#id13) and others as appropriate. If your format stores any sort of common data types, consider adding a subclass of [:obapi:`OBGenericData`](#id15) for use by other formats and user code.
> 8. Please make sure to add several example files to the test set repository. Ideally, these should exercise several areas of your import code – in the end, the more robust the test set, the more stable and useful Open Babel will be. The test files should include at least one example of a correct file and one example of an invalid file (i.e., something which will properly be ignored and not crash **babel**).
> 9. Make sure to document your format using the string returned by `Description()`. At the minimum this should include a description of all options, along with examples. However, the more information you add (e.g. unimplemented features, applications of the format, and so forth) the more confident users will be in using it.
> 10. That’s it! Contact the [openbabel-discuss](http://lists.sourceforge.net/lists/listinfo/openbabel-discuss) mailing list with any questions, comments, or to contribute your new format code.
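For orientation, here is a heavily stripped-down sketch of the boilerplate involved; `exampleformat.cpp` remains the authoritative, fully annotated template, and the class and format names below are invented:

```
#include <openbabel/obmolecformat.h>

using namespace OpenBabel;

class MyNewFormat : public OBMoleculeFormat
{
public:
  MyNewFormat()
  {
    // Registering the format makes it a plugin known to OBConversion
    OBConversion::RegisterFormat("mynew", this);
  }

  virtual const char* Description()
  {
    return "My new format\n"
           "Read Options e.g. -as\n"
           "  s  Output single bonds only\n";
  }

  virtual bool ReadMolecule(OBBase* pOb, OBConversion* pConv)
  {
    OBMol* pmol = dynamic_cast<OBMol*>(pOb);
    if (!pmol) return false;

    pmol->BeginModify();   // suspend lazy perception while building the molecule
    // ... parse atoms and bonds from pConv->GetInStream() here ...
    pmol->EndModify();
    return true;
  }
};

// Global instance: constructing it registers the format at load time
MyNewFormat theMyNewFormat;
```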
### Adding new operations and options[¶](#adding-new-operations-and-options)
The **babel** command line has the form:
```
babel inputfile [outputfile] [options]
```
There are several types of options:
> Options that control the conversion process
> For example `-i`, `-o` and `-m`
> Options specific to particular input or output formats
> These are specified with the `-a` and `-x` prefixes
> General options
> These usually operate on a molecule after it has been read by the input format and before it has been written by the output format.
The ones of interest here are the general options. These can be single letter options like `-c` (which centers coordinates), or multi-character options like `--separate` (which makes separate molecules from disconnected fragments). The ones mentioned are hardwired into the code, but it is possible to define new options that work in a similar way. This is done using the [:obapi:`OBOp`](#id1) class.
#### The OBOp class[¶](#the-obop-class)
The name [:obapi:`OBOp`](#id3) is intended to imply an operation as well as an option. This is a plugin class, which means that new ops are easily added without a need to alter any existing code.
The ops that are installed can be found using:
```
babel -L ops
```
or in the plugins menu item in the GUI. An example is the `--gen3D` option, which adds 3D coordinates to a molecule:
```
 1  class OpGen3D : public OBOp
 2  {
 3  public:
 4    OpGen3D(const char* ID) : OBOp(ID, false){};
 5    const char* Description(){ return "Generate 3D coordinates"; }
 6
 7    virtual bool WorksWith(OBBase* pOb)const
 8      { return dynamic_cast<OBMol*>(pOb)!=NULL; }
 9    virtual bool Do(OBBase* pOb, OpMap* pmap, const char* OptionText);
10  };
11
12  OpGen3D theOpGen3D("gen3D");
13
14  bool OpGen3D::Do(OBBase* pOb, OpMap* pmap, const char* OptionText)
15  {
16    OBMol* pmol = dynamic_cast<OBMol*>(pOb);
17    if(!pmol)
18      return false;
19
20    OBBuilder builder;
21    builder.Build(*pmol);
22    pmol->SetDimension(3);
23
24    return true;
25  }
```
The real work is done in the *Do* function, but there is a bit of boilerplate code that is necessary.
Line **4**: The constructor calls the base class constructor, which registers the class with the system. There could be additional parameters on the constructor if necessary, provided the base constructor is called in this way. (The `false` parameter value is to do with setting a default instance which is not relevant here.)
Line **5**: It is necessary to provide a description. The first line is used as a caption for the GUI checkbox. Subsequent lines are shown when listed with the verbose option.
Line **7**: *WorksWith()* identifies the type of object. Usually this is a molecule (*OBMol*) and the line is used as shown. The function is used by the GUI to display the option only when it is relevant.
> The *OBOp* base class doesn’t know about *OBMol* or *OBConversion* and so it can be used with any kind of object derived from *OBBase* (essentially anything). Although this means that the dependencies between one bit of the program and another are reduced, it does lead to some compromises, such as having to code *WorksWith()* explicitly rather than as a base class default.
Line **12**: This is a global instance which defines the Id of the class. This is the option name used on the command line, preceded by `--`.
Line **14**: The *Do()* function carries out the operation on the target object. It should normally return `true`. Returning `false` prevents the molecule being sent to the output format. Although this means that it is possible to use an *OBOp* class as a filter, it is better to do this using the `--filter` option.
Any other general options specified on the command line (or the GUI) can be accessed by calling *find* on the parameter *pmap*. For example, to determine whether the `-c` option was also specified:
```
OpMap::const_iterator iter = pmap->find("c");
if(iter!=pmap->end())
do something;
```
### How to add a new descriptor[¶](#how-to-add-a-new-descriptor)
Like formats, descriptors are plugins, so new ones can be added without changing the existing code. The following section describes how to add one common type of descriptor, a group contribution descriptor, without even needing to recompile Open Babel.
#### Add a new group contribution descriptor[¶](#add-a-new-group-contribution-descriptor)
Group contribution descriptors are a common type of molecular descriptor whose value is a sum of contributions from substructures of the molecule. Such a descriptor can easily be added to Open Babel without the need to recompile the code. All you need is a set of SMARTS strings for each group, and their corresponding contributions to the descriptor value.
The following example shows how to add a new descriptor, *hellohalo*, whose value increments by 1, 2, 3 or 4 for each F, Cl, Br, and I (respectively) in the molecule.
1. Create a working directory, for example `C:\Work`.
2. Copy the plugin definition file, `plugindefines.txt`, to the working directory. This file can be found in the Open Babel data directory (typically `/usr/share/openbabel` on Linux systems, or `C:\Users\username\AppData\Roaming\OpenBabel-2.3.2\data` on Windows).
3. For the *hellohalo* descriptor, add the following to the end of `plugindefines.txt` (make sure to include a blank line between it and other descriptor definitions):
```
OBGroupContrib
hellohalo            # name of descriptor
hellohalo_smarts.txt # data file
Count up the number of halogens (sort of)\n  # brief description
This descriptor is not correlated with any\n # longer description
known property, living or dead.
```
4. Now create a file `hellohalo_smarts.txt`, again in the working directory, containing the following SMARTS definitions and contribution values:
```
# These are the SMARTS strings and contribution values
# for the 'hellohalo' group contribution descriptor.
;heavy
F  1  # This is for fluorines
Cl 2  # And this is for chlorines
Br 3  # Etc.
I  4  # Ditto
```
That’s it!
Now let’s test it. Open a command prompt, and change directory to the working directory. We can find information on the new descriptor using **obabel**’s `-L` option:
```
C:\Work>obabel -L descriptors
abonds      Number of aromatic bonds
atoms       Number of atoms
...
hellohalo   Count up the number of halogens (sort of)
...
title       For comparing a molecule's title
TPSA        topological polar surface area

C:\Work>obabel -L hellohalo
One of the descriptors
hellohalo    Count up the number of halogens (sort of)
This descriptor is not correlated with any known property, living or dead.
Datafile: hellohalo_smarts.txt
OBGroupContrib is definable
```
An easy way to test the descriptor is to use the title output format, and append the descriptor value to the title:
```
C:\Work>obabel -:C(Cl)(Cl)I -otxt --append hellohalo
8
1 molecule converted
```
There are a couple of points to note about the pattern file:
1. Although a SMARTS string may match a substructure of a molecule, the descriptor contribution is only assigned to the first atom of the match.
2. Where several SMARTS strings assign values to the same atom, only the final assignment is retained. As an example, the following set of patterns will assign a contribution of 0.4 to all atoms except for carbon atoms, which have a value of 1.0:
```
;heavy
[*] 0.4 # All atoms
[#6] 1.0 # All carbon atoms
```
3. If you wish to take into account contributions from hydrogen atoms, you should precede the `;heavy` section by a `;hydrogen` section. The values for the contributions in the latter section are multiplied by the number of hydrogens attached to the matching atom. For example, consider the following set of patterns:
```
;hydrogen
[*] 0.2   # Hydrogens attached to all atoms
C   1.0   # Hydrogens attached to an aliphatic carbon

;heavy
C   10.0  # An aliphatic carbon
```
For ethanol, this gives a value of 25.2: two carbons (20.0), five hydrogens attached to a carbon (5.0), and one other hydrogen (0.2).
For further inspiration, check out `psa.txt`, `mr.txt` and `logp.txt` in the `data` directory. These are the group contribution descriptions for Polar Surface Area, Molar Refractivity and LogP.
Supported File Formats and Options[¶](#supported-file-formats-and-options)
---
Chemists are a very imaginative group. They keep thinking of new file formats.
Indeed, these are not just simple differences in how chemical data is stored, but often completely different views on molecular representations. For example, some file formats ignore hydrogen atoms as “implicit,” while others do not store bonding information. This is,
in fact, a key reason for Open Babel’s existence.
OpenBabel has support for 146 formats in total. It can read 108 formats and can write 107 formats. These formats are identified by a name (for example, `ShelX format`) and one or more short codes (in this case, `ins` or `res`). The titles of each section provide this information (for example, [ShelX format (ins, res)](index.html#shelx-format)).
The short code is used when using **obabel** or **babel** to convert files from one format to another:
```
obabel -iins myfile.ins -ocml
```
converts from ShelX format to Chemical Markup Language (in this case, no output file is specified and the output will be written to screen [stdout]). In fact, if the filename extension is the same as the file format code, then there is no need to specify the code. In other words, the following command will behave identically:
```
babel myfile.ins -ocml
```
As well as the general conversion options described elsewhere (see [Options](index.html#babel-options)), each format may have its own options for either reading or writing. For example, the ShelX format has two options that affect reading of files, `s` and `b`. To set a file format option:
* For **Read Options**, precede the option with `-a` at the command line
* For **Write Options**, precede the option with `-x`
For example, if we wanted to set all bonds to single bonds when reading a ShelX format file, we could specify the `s` option:
```
babel -iins myfile.ins -ocml -as
```
More than one read (or write) option can be specified (e.g. `-ax -ay -az`). **babel** (but not **obabel**) also allows you to specify several options together (e.g. as `-axyz`).
**Developer Note**
To set the file formats for an `OBConversion` object, use `SetInAndOutFormats(InCode, OutCode)`. To set a Read Option `s`, use `SetOptions("s", OBConversion::INOPTIONS)`.
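For example, a minimal sketch of a programmatic conversion (the file names are illustrative):

```
#include <openbabel/mol.h>
#include <openbabel/obconversion.h>
#include <fstream>

using namespace OpenBabel;

int main()
{
  std::ifstream ifs("myfile.ins");   // ShelX input
  std::ofstream ofs("myfile.cml");   // CML output

  OBConversion conv(&ifs, &ofs);
  conv.SetInAndOutFormats("ins", "cml");
  conv.SetOptions("s", OBConversion::INOPTIONS);  // Read Option 's': single bonds only

  OBMol mol;
  if (conv.Read(&mol))
    conv.Write(&mol);
  return 0;
}
```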
### Common cheminformatics formats[¶](#common-cheminformatics-formats)
#### Canonical SMILES format (can)[¶](#canonical-smiles-format-can)
**A canonical form of the SMILES linear text format**
The SMILES format is a linear text format which can describe the connectivity and chirality of a molecule. Canonical SMILES gives a single
‘canonical’ form for any particular molecule.
See also
The “regular” [SMILES format (smi, smiles)](index.html#smiles-format) gives faster output, since no canonical numbering is performed.
##### Write Options[¶](#write-options)
| `-a` | *Output atomclass like [C:2], if available* |
| `-h` | *Output explicit hydrogens as such* |
| `-i` | *Do not include isotopic or chiral markings* |
| `-n` | *No molecule name* |
| `-r` | *Radicals lower case eg ethyl is Cc* |
| `-t` | *Molecule name only* |
| `-F <atom numbers>` |
| | *Generate Canonical SMILES for a fragment*
The atom numbers should be specified like “1 2 4 7”. |
| `-f <atomno>` | *Specify the first atom*
This atom will be used to begin the SMILES string. |
| `-l <atomno>` | *Specify the last atom*
The output will be rearranged so that any additional SMILES added to the end will be attached to this atom.
See the [SMILES format (smi, smiles)](index.html#smiles-format) for more information. |
#### Chemical Markup Language (cml, mrv)[¶](#chemical-markup-language-cml-mrv)
**An XML format for interchange of chemical information.**
This format writes and reads CML XML files. To write CML1 format rather than the default CML2, use the `-x1` option. To write the array form use `-xa`
and to specify all hydrogens using the hydrogenCount attribute on atoms use
`-xh`.
Crystal structures are written using the <crystal>, <xfract> (,…etc.)
elements if the OBMol has OBGenericDataType::UnitCell data.
All these forms are handled transparently during reading. Only a subset of CML elements and attributes are recognised, but these include most of those which define chemical structure, see below.
The following are read:
* Elements:
+ molecule, atomArray, atom, bondArray, bond, atomParity, bondStereo
+ name, formula, crystal, scalar (contains crystal data)
+ string, stringArray, integer, integerArray, float, floatArray, builtin
* Attributes:
+ On <molecule>: id, title, ref(in CMLReact)
+ On <atom>: id, atomId, atomID, elementType, x2, y2, x3, y3, z3, xy2, xyz3,
xFract, yFract, zFract, xyzFract, hydrogenCount, formalCharge, isotope,
isotopeNumber, spinMultiplicity, radical(from Marvin),
atomRefs4 (for atomParity)
+ On <bond>: atomRefs2, order, CML1: atomRef, atomRef1, atomRef2
Atom classes are also read and written. This is done using a specially formed atom id. When reading, if the atom id is of the form aN_M (where N and M are positive integers), then M is interpreted as the atom class.
Such atom ids are automatically generated when writing an atom with an atom class.
##### Read Options[¶](#read-options)
| `-2` | *read 2D rather than 3D coordinates if both provided* |
##### Write Options[¶](#write-options)
| `-1` | *write CML1 (rather than CML2)* |
| `-a` | *write array format for atoms and bonds* |
| `-A` | *write aromatic bonds as such, not Kekule form* |
| `-m` | *write metadata* |
| `-x` | *omit XML and namespace declarations* |
| `-c` | *continuous output: no formatting* |
| `-p` | *write properties* |
| `-N <prefix>` | *add namespace prefix to elements* |
##### Comments[¶](#comments)
In the absence of hydrogenCount and any explicit hydrogen on an atom, implicit hydrogen is assumed to be present appropriate to the radical or spinMultiplicity attributes on the atom or its normal valency if they are not present.
The XML formats require the XML text to be well formed but generally interpret it fairly tolerantly. Unrecognised elements and attributes are ignored and there are rather few error messages when any required structures are not found. This laxity allows, for instance, the reactant and product molecules to be picked out of a CML React file using CML. Each format has an element which is regarded as defining the object that OpenBabel will convert. For CML this is
<molecule>. Files can have multiple objects and these can be treated the same as with other multiple object formats like SMILES and MDL Molfile. So conversion can start at the nth object using the `-fn` option and finish before the end using the `-ln` option. Multiple object XML files also can be indexed and searched using FastSearch, although this has not yet been extensively tested.
#### InChI format (inchi)[¶](#inchi-format-inchi)
**IUPAC/NIST molecular identifier**
##### Read Options[¶](#read-options)
| `-X <Option string>` |
| | *List of InChI options* |
| `-n` | *molecule name follows InChI on same line* |
| `-a` | *add InChI string to molecule name* |
##### Write Options[¶](#write-options)
> Standard InChI is written unless certain InChI options are used
| `-K` | *output InChIKey only* |
| `-t` | *add molecule name after InChI* |
| `-w` | *ignore less important warnings*
These are:
‘Omitted undefined stereo’
‘Charges were rearranged’
‘Proton(s) added/removed’
‘Metal was disconnected’ |
| `-a` | *output auxiliary information* |
| `-l` | *display InChI log* |
| `-r` | *recalculate InChI; normally an input InChI is reused* |
| `-s` | *recalculate wedge and hash bonds(2D structures only)*
**Uniqueness options** (see also `--unique` and `--sort` which are more versatile) |
| `-u` | *output only unique molecules* |
| `-U` | *output only unique molecules and sort them* |
| `-e` | *compare first molecule to others*
This can also be done with [InChICompare format](index.html#compare-molecules-using-inchi):
```
babel first.smi second.mol third.cml -ok
```
|
| `-T <param>` | *truncate InChI according to various parameters*
See below for possible truncation parameters. |
| `-X <Option string>` |
| | *Additional InChI options*
See InChI documentation.
These options should be space delimited in a single quoted string.
* Structure perception (compatible with stdInChI): `NEWPSOFF`, `DoNotAddH`, `SNon`
* Stereo interpretation (produces non-standard InChI): `SRel`, `SRac`, `SUCF`, `ChiralFlagON`, `ChiralFlagOFF`
* InChI creation options (produces non-standard InChI): `SUU`, `SLUUD`, `FixedH`, `RecMet`, `KET`, `15T`

The following options are for convenience, e.g. `-xF`, but produce non-standard InChI. |
| `-F` | *include fixed hydrogen layer* |
| `-M` | *include bonds to metal* |
##### Comments[¶](#comments)
Truncation parameters used with `-xT`:
| `/formula` | formula only |
| `/connect` | formula and connectivity only |
| `/nostereo` | ignore E/Z and sp3 stereochemistry |
| `/nosp3` | ignore sp3 stereochemistry |
| `/noEZ` | ignore E/Z stereochemistry |
| `/nochg` | ignore charge and protonation |
| `/noiso` | ignore isotopes |
Note that these can also be combined, e.g. `/nochg/noiso`
#### InChIKey (inchikey)[¶](#inchikey-inchikey)
**A hashed representation of the InChI.**
The InChIKey is a fixed-length (27-character) condensed digital representation of an InChI, developed to make it easy to perform web searches for chemical structures.
An InChIKey consists of 14 characters (derived from the connectivity layer in the InChI), a hyphen, 9 characters (derived from the remaining layers), a character indicating the InChI version, a hyphen and a final checksum character. Contrast the InChI and InChIKey of the molecule represented by the SMILES string CC(=O)Cl:
```
obabel -:CC(=O)Cl -oinchi
InChI=1S/C2H3ClO/c1-2(3)4/h1H3

obabel -:CC(=O)Cl -oinchikey
WETWJCDKMRHUPV-UHFFFAOYSA-N
```
This is the same as using `-oinchi -xK` and can take the same options as the InChI format (see [InChI format (inchi)](index.html#inchi-format)):
```
obabel -:CC(=O)Cl -oinchi -xK
WETWJCDKMRHUPV-UHFFFAOYSA-N
```
Note that while a molecule with a particular InChI will always give the same InChIKey, the reverse is not true: molecules with different InChIs may yield the same InChIKey.
Note
This is a write-only format.
#### MDL MOL format (mdl, mol, sd, sdf)[¶](#mdl-mol-format-mdl-mol-sd-sdf)
**Reads and writes V2000 and V3000 versions**
Open Babel supports an extension to the MOL file standard that allows cis/trans and tetrahedral stereochemistry to be stored in 0D MOL files. The tetrahedral stereochemistry is stored as the atom parity, while the cis/trans stereochemistry is stored using Up and Down bonds similar to how it is represented in a SMILES string. Use the `S` option when reading or writing if you want to avoid storing or interpreting stereochemistry in 0D MOL files.
##### Read Options[¶](#read-options)
| `-s` | *determine chirality from atom parity flags*
The default setting for 2D and 3D is to ignore atom parity and work out the chirality based on the bond stereochemistry (2D) or coordinates (3D).
For 0D the default is already to determine the chirality from the atom parity. |
| `-S` | *do not read stereochemistry from 0D MOL files*
Open Babel supports reading and writing cis/trans and tetrahedral stereochemistry to 0D MOL files.
This is an extension to the standard which you can turn off using this option. |
| `-T` | *read title only* |
| `-P` | *read title and properties only*
When filtering an sdf file on title or properties only, avoid lengthy chemical interpretation by using the `T` or `P` option together with the
[copy format](index.html#copy-raw-text). |
##### Write Options[¶](#write-options)
| `-3` | *output V3000 not V2000 (used for >999 atoms/bonds)* |
| `-a` | *write atomclass if available* |
| `-m` | *write no properties* |
| `-w` | *use wedge and hash bonds from input (2D only)* |
| `-v` | *always specify the valence in the valence field*
The default behavior is to only specify the valence if it is not consistent with the MDL valence model.
So, for CH4 we don’t specify it, but we do for CH3.
This option may be useful to preserve the correct number of implicit hydrogens if a downstream tool does not correctly implement the MDL valence model (but does honor the valence field). |
| `-S` | *do not store cis/trans stereochemistry in 0D MOL files* |
| `-A` | *output in Alias form, e.g. Ph, if present* |
| `-E` | *add an ASCII depiction of the molecule as a property* |
| `-H` | *use HYD extension (always on if mol contains zero-order bonds)* |
#### Protein Data Bank format (ent, pdb)[¶](#protein-data-bank-format-ent-pdb)
##### Read Options[¶](#read-options)
| `-s` | *Output single bonds only* |
| `-b` | *Disable bonding entirely* |
| `-c` | *Ignore CONECT records* |
##### Write Options[¶](#write-options)
| `-n` | *Do not write duplicate CONECT records to indicate bond order* |
| `-o` | *Write origin in space group label (CRYST1 section)* |
#### SMILES format (smi, smiles)[¶](#smiles-format-smi-smiles)
**A linear text format which can describe the connectivity and chirality of a molecule**
Open Babel implements the [OpenSMILES specification](http://opensmiles.org).
It also implements an extension to this specification for radicals.
Note that the `l <atomno>` option, used to specify a “last” atom, is intended for the generation of SMILES strings to which additional atoms will be concatenated. If the atom specified has an explicit H within a bracket
(e.g. `[nH]` or `[C@@H]`) the output will have the H removed along with any associated stereo symbols.
See also
The [Canonical SMILES format (can)](index.html#canonical-smiles-format) produces a canonical representation of the molecule in SMILES format. This is the same as the `c` option below but may be more convenient to use.
##### Read Options[¶](#read-options)
| `-a` | *Preserve aromaticity present in the SMILES*
This option should only be used if reading aromatic SMILES generated by the same version of Open Babel. Any other use will lead to undefined behavior. The advantage of this option is that it avoids aromaticity perception, thus speeding up reading SMILES. |
| `-S` | *Clean stereochemistry*
By default, stereochemistry is accepted as given. If you wish to clean up stereochemistry (e.g. by removing tetrahedral stereochemistry where two of the substituents are identical)
then specifying this option will reperceive stereocenters. |
##### Write Options[¶](#write-options)
| `-a` | *Output atomclass like [C:2], if available* |
| `-c` | *Output in canonical form* |
| `-U` | *Universal SMILES* |
| `-I` | *Inchified SMILES* |
| `-h` | *Output explicit hydrogens as such* |
| `-i` | *Do not include isotopic or chiral markings* |
| `-k` | *Create Kekule SMILES instead of aromatic* |
| `-n` | *No molecule name* |
| `-r` | *Radicals lower case eg ethyl is Cc* |
| `-t` | *Molecule name only* |
| `-x` | *append X/Y coordinates in canonical-SMILES order* |
| `-C` | *‘anti-canonical’ random order (mostly for testing)* |
| `-o <ordering>` | *Output in user-specified order*
Ordering should be specified like 4-2-1-3 for a 4-atom molecule.
This gives canonical labels 1,2,3,4 to atoms 4,2,1,3 respectively,
so that atom 4 will be visited first and the remaining atoms visited in a depth-first manner following the lowest canonical labels. |
| `-O` | *Store the SMILES atom order as a space-separated string*
The string is stored as an OBPairData wth the name
‘SMILES Atom Order’. |
| `-F <atom numbers>` |
| | *Generate SMILES for a fragment*
The atom numbers should be specified like “1 2 4 7”. |
| `-R` | *Do not reuse bond closure symbols* |
| `-f <atomno>` | *Specify the first atom*
This atom will be used to begin the SMILES string. |
| `-l <atomno>` | *Specify the last atom*
The output will be rearranged so that any additional SMILES added to the end will be attached to this atom. |
| `-T <max seconds>` |
| | *Specify the canonicalization timeout*
Canonicalization can take a while for symmetric molecules and a timeout is used. The default is 5 seconds. |
#### SMILES format using Smiley parser (smy)[¶](#smiles-format-using-smiley-parser-smy)
The Smiley parser presents an alternative to the standard SMILES parser
([SMILES format (smi, smiles)](index.html#smiles-format)). It was written to be strictly compatible with the OpenSMILES standard (<http://opensmiles.org>). In comparison, the standard parser is more forgiving to erroneous input, and also supports some extensions such as for radicals.
In addition, the Smiley parser returns detailed error messages when problems arise parsing or validating the SMILES, whereas the standard parser seldom describes the specific problem. For a detailed description of the OpenSMILES semantics, the specification should be consulted. In addition to syntactical and grammatical correctness, the Smiley parser also verifies some basic semantics.
Here are some examples of the errors reported:
```
SyntaxError: Bracket atom expression contains invalid trailing characters.
F.FB(F)F.[NH2+251][C@@H](CP(c1ccccc1)c1ccccc1)C(C)(C)C 31586112
^^
SyntaxError: Unmatched branch opening.
CC(CC
^^^
SyntaxError: Unmatched branch closing.
CC)CC
^^^
SemanticsError: Unmatched ring bond.
C1CCC
^
SemanticsError: Conflicting ring bonds.
C-1CCCCC=1
```
##### Hydrogen with Hydrogen Count[¶](#hydrogen-with-hydrogen-count)
Hydrogen atoms can not have a hydrogen count. Hydrogen bound to a hydrogen atom should be specified by two bracket atom expressions.
Examples:
```
[HH] invalid
[HH1] invalid (same as [HH])
[HH3] invalid
[HH0] valid (same as [H])
[H][H] valid
```
##### Unmatched Ring Bond[¶](#unmatched-ring-bond)
Report unmatched ring bonds.
Example:
```
C1CCC
```
##### Conflicting Ring Bonds[¶](#conflicting-ring-bonds)
When the bond type for ring bonds are explicitly specified at both ends,
these should be the same.
Example:
```
C-1CCCCCC=1
```
##### Invalid Ring Bonds[¶](#invalid-ring-bonds)
There are two types of invalid ring bonds. The first is when two atoms both have the same two ring bonds. This would mean adding a parallel edge in the graph which is not allowed. The second type is similar but results in a self-loop by having a ring bond number twice.
Examples:
```
C12CCCC12    parallel bond
C11          self-loop bond
```
##### Invalid Chiral Valence[¶](#invalid-chiral-valence)
When an atom is specified as being chiral, it should have the correct number of neighboring atoms (possibly including an implicit H inside the bracket).
The valid valences are:
```
Tetrahedral (TH)          : 4
Allene (AL)               : 4 (*)
Square Planar (SP)        : 4
Trigonal Bipyramidal (TB) : 5
Octahedral (OH)           : 6
(*) The chiral atom has only 2 bonds but the neighbor's neighbors are
counted: NC(Br)=[C@AL1]=C(F)I
```
##### Invalid Chiral Hydrogen Count[¶](#invalid-chiral-hydrogen-count)
Chiral atoms can only have one hydrogen in their bracket since multiple hydrogens would make them not chiral.
Example:
```
C[C@H2]F
```
Note
This is a read-only format.
#### Sybyl Mol2 format (ml2, mol2, sy2)[¶](#sybyl-mol2-format-ml2-mol2-sy2)
##### Read Options[¶](#read-options)
| `-c` | *Read UCSF Dock scores saved in comments preceding molecules* |
##### Write Options[¶](#write-options)
| `-l` | *Output ignores residue information (only ligands)* |
| `-c` | *Write UCSF Dock scores saved in comments preceding molecules* |
| `-u` | *Do not write formal charge information in UNITY records* |
### Utility formats[¶](#utility-formats)
#### Compare molecules using InChI (k)[¶](#compare-molecules-using-inchi-k)
**A utility format that allows you to compare molecules using their InChIs**
The first molecule is compared with the rest, e.g.:
```
babel first.smi second.mol third.cml -ok
```
This is the same as using `-oinchi -xet` and can take the same options as InChI format
(see [InChI format (inchi)](index.html#inchi-format)).
Note
This is a write-only format.
#### Confab report format (confabreport)[¶](#confab-report-format-confabreport)
**Assess performance of a conformer generator relative to a set of reference structures**
Once a file containing conformers has been generated by [Confab](index.html#confab),
the result can be compared to the original input structures or a set of reference structures using this output format.
Conformers are matched with reference structures using the molecule title. For every conformer, there should be a reference structure
(but not necessarily vice versa).
Further information is available in the section describing [Confab](index.html#confab).
Note
This is a write-only format.
##### Write Options[¶](#write-options)
| `-f <filename>` | *File containing reference structures* |
| `-r <rmsd>` | *RMSD cutoff (default 0.5 Angstrom)*
The number of structures with conformers within this RMSD cutoff of the reference will be reported. |
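For example, a command along the following lines (the filenames are placeholders) compares a file of Confab-generated conformers against a set of reference structures using a 1.0 Angstrom cutoff:
```
obabel conformers.sdf -oconfabreport -xf references.sdf -xr 1.0
```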
#### Copy raw text (copy)[¶](#copy-raw-text-copy)
**A utility format for exactly copying the text of a chemical file format**
This format allows you to filter molecules from multimolecule files without the risk of losing any additional information they contain,
since no format conversion is carried out.
Warning
Currently not working correctly for files with Windows line endings.
Example:
> Extract only structures that include at least one aromatic carbon
> (by matching the SMARTS pattern `[c]`):
> ```
> babel -s '[c]' database.sdf -ocopy new.sd
> ```
Note
XML files may be missing non-object elements at the start or end and so may no longer be well formed.
Note
This is a write-only format.
#### General XML format (xml)[¶](#general-xml-format-xml)
**Calls a particular XML format depending on the XML namespace.**
This is a general XML “format” which reads a generic XML file and infers its format from the namespace as given in a xmlns attribute on an element.
If a namespace is recognised as associated with one of the XML formats in Open Babel, and the type of the object (e.g. a molecule) is appropriate to the output format then this is used to input a single object. If no namespace declaration is found the default format (currently CML) is used.
The process is repeated for any subsequent input so that it is possible to input objects written in several different schemas from the same document.
The file `CMLandPubChem.xml` illustrates this and contains molecules in both CML and PubChem formats.
This implementation uses libxml2.
Note
This is a read-only format.
##### Read Options[¶](#read-options)
| `-n` | *Read objects of first namespace only* |
#### Generic Output file format (dat, log, out, output)[¶](#generic-output-file-format-dat-log-out-output)
**Automatically detect and read computational chemistry output files**
This format can be used to read ADF, Gaussian, GAMESS, PWSCF, Q-Chem,
MOPAC, ORCA etc. output files by automatically detecting the file type.
Note
This is a read-only format.
##### Read Options[¶](#read-options)
| `-s` | *Output single bonds only* |
| `-b` | *Disable bonding entirely* |
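For example, a command of this shape (the filenames are placeholders) extracts the geometry from a computational chemistry output file and writes it as XYZ:
```
obabel calculation.out -O geometry.xyz
```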
#### MolPrint2D format (mpd)[¶](#molprint2d-format-mpd)
**An implementation of the circular fingerprint MolPrint2D**
MolPrint2D is an atom-environment fingerprint developed by Bender et al [[bmg2004]](index.html#bmg2004)
which has been used in QSAR studies and for measuring molecular similarity.
The format of the output is as follows:
```
[Molec_name]\t[atomtype];[layer]-[frequency]-[neighbour_type];
```
Example for the SMILES string `CC(=O)Cl`:
```
acid chloride 1;1-1-2;2-1-9;2-1-15; 2;1-1-1;1-1-9;1-1-15;
9;1-1-2;2-1-1;2-1-15; 15;1-1-2;2-1-1;2-1-9;
```
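Output like the above can be generated with a command along these lines; the text after the SMILES string sets the molecule title:
```
obabel -:"CC(=O)Cl acid chloride" -ompd
```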
| [[bmg2004]](#id1) | <NAME>, <NAME>, and <NAME>. **Molecular Similarity Searching Using Atom Environments, Information-Based Feature Selection, and a Naive Bayesian Classifier.**
*J. Chem. Inf. Comput. Sci.* **2004**, *44*, 170-178.
[[Link](https://doi.org/10.1021/ci034207y)] |
Note
This is a write-only format.
##### Write Options[¶](#write-options)
| `-n` | *prefix molecule names with name of file* |
| `-c` | *use XML style separators instead* |
| `-i` | *use Open Babel's internal IDX atom types* |
#### Multilevel Neighborhoods of Atoms (MNA) (mna)[¶](#multilevel-neighborhoods-of-atoms-mna-mna)
**Iteratively generated 2D descriptors suitable for QSAR**
Multilevel Neighborhoods of Atoms (MNA) descriptors are 2D molecular fragments suitable for use in QSAR modelling [[fpbg99]](index.html#fpbg99).
The format outputs a complete descriptor fingerprint per molecule. Thus, a 27-atom (including hydrogen) molecule would result in 27 descriptors, one per line.
MNA descriptors are generated recursively. Starting at the origin,
each atom is appended to the descriptor immediately followed by a parenthesized list of its neighbours. This process iterates until the specified distance from the origin, also known as the depth of the descriptor.
Elements are simplified into 32 groups. Each group has a representative symbol used to stand for any element in that group:
| Type | Elements |
| --- | --- |
| H | H |
| C | C |
| N | N |
| O | O |
| F | F |
| Si | Si |
| P | P |
| S | S |
| Cl | Cl |
| Ca | Ca |
| As | As |
| Se | Se |
| Br | Br |
| Li | Li, Na |
| B | B, Re |
| Mg | Mg, Mn |
| Sn | Sn, Pb |
| Te | Te, Po |
| I | I, At |
| Os | Os, Ir |
| Sc | Sc, Ti, Zr |
| Fe | Fe, Hf, Ta |
| Co | Co, Sb, W |
| Sr | Sr, Ba, Ra |
| Pd | Pd, Pt, Au |
| Be | Be, Zn, Cd, Hg |
| K | K, Rb, Cs, Fr |
| V | V, Cr, Nb, Mo, Tc |
| Ni | Ni, Cu, Ge, Ru, Rh, Ag, Bi |
| In | In, La, Ce, Pr, Nd, Pm, Sm, Eu |
| Al | Al, Ga, Y, Gd, Tb, Dy, Ho, Er, Tm, Yb, Lu, Tl |
| R | R, He, Ne, Ar, Kr, Xe, Rn, Ac, Th, Pa, U, Np, Pu, Am, Cm, Bk, Cf, Es, Fm, Md, No, Lr, Db, Jl |
Acyclic atoms are preceded by a hyphen “-” mark.
Here’s the multi-level neighborhood for the molecule represented by the SMILES string CC(=O)Cl:
```
# The contents of this file were derived from
# Title = acid chloride
-C(-H(-C)-H(-C)-H(-C)-C(-C-O-Cl))
-C(-C(-H-H-H-C)-O(-C)-Cl(-C))
-O(-C(-C-O-Cl))
-Cl(-C(-C-O-Cl))
-H(-C(-H-H-H-C))
-H(-C(-H-H-H-C))
-H(-C(-H-H-H-C))
```
| [[fpbg99]](#id1) | <NAME>, <NAME>, <NAME>, and <NAME>. **Chemical Similarity Assessment through Multilevel Neighborhoods of Atoms: Definition and Comparison with the Other Descriptors.** *J. Chem. Inf. Comput. Sci.* **1999**, *39*, 666-670.
[[Link](https://doi.org/10.1021/ci980335o)] |
Note
This is a write-only format.
##### Write Options[¶](#write-options)
| `-L <num>` | *Levels (default = 2)* |
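For example, the following sketch writes MNA descriptors at depth 3 instead of the default 2:
```
obabel -:"CC(=O)Cl acid chloride" -omna -xL 3
```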
#### Open Babel molecule report (molreport)[¶](#open-babel-molecule-report-molreport)
**Generates a summary of the atoms and bonds in a molecule**
Example output:
```
TITLE: Ethanol.mopout
FORMULA: C2H6O
MASS: 46.0684
ATOM: 1 C TYPE: C3 HYB: 3 CHARGE: -0.2151
ATOM: 2 C TYPE: C3 HYB: 3 CHARGE: -0.0192
ATOM: 3 O TYPE: O3 HYB: 3 CHARGE: -0.3295
ATOM: 4 H TYPE: HC HYB: 0 CHARGE: 0.0771
ATOM: 5 H TYPE: HC HYB: 0 CHARGE: 0.0873
ATOM: 6 H TYPE: HC HYB: 0 CHARGE: 0.0874
ATOM: 7 H TYPE: HC HYB: 0 CHARGE: 0.0577
ATOM: 8 H TYPE: HC HYB: 0 CHARGE: 0.0577
ATOM: 9 H TYPE: HC HYB: 0 CHARGE: 0.1966
BOND: 0 START: 8 END: 2 ORDER: 1
BOND: 1 START: 6 END: 1 ORDER: 1
BOND: 2 START: 1 END: 2 ORDER: 1
BOND: 3 START: 1 END: 4 ORDER: 1
BOND: 4 START: 1 END: 5 ORDER: 1
BOND: 5 START: 2 END: 3 ORDER: 1
BOND: 6 START: 2 END: 7 ORDER: 1
BOND: 7 START: 3 END: 9 ORDER: 1
```
See also
[Open Babel report format (report)](index.html#open-babel-report-format)
Note
This is a write-only format.
#### Open Babel report format (report)[¶](#open-babel-report-format-report)
**A detailed report on the geometry of a molecule**
The report format presents a report of various molecular information,
including:
* Filename / molecule title
* Molecular formula
* Mass
* Exact mass (i.e., for high-resolution mass spectrometry, the mass of the most abundant elements)
* Total charge (if not electrically neutral)
* Total spin (if not singlet)
* Interatomic distances
* Atomic charges
* Bond angles
* Dihedral angles
* Chirality information (including which atoms are chiral)
* Additional comments in the input file
Example for benzene:
```
FILENAME: benzene.report
FORMULA: C6H6
MASS: 78.1118
EXACT MASS: 78.0469502
INTERATOMIC DISTANCES
C 1 C 2 C 3 C 4 C 5 C 6
---
C 1 0.0000
C 2 1.3958 0.0000
C 3 2.4176 1.3958 0.0000
C 4 2.7916 2.4176 1.3958 0.0000
C 5 2.4176 2.7916 2.4176 1.3958 0.0000
C 6 1.3958 2.4176 2.7916 2.4176 1.3958 0.0000
H 7 1.0846 2.1537 3.4003 3.8761 3.4003 2.1537
H 8 2.1537 1.0846 2.1537 3.4003 3.8761 3.4003
H 9 3.4003 2.1537 1.0846 2.1537 3.4003 3.8761
H 10 3.8761 3.4003 2.1537 1.0846 2.1537 3.4003
H 11 3.4003 3.8761 3.4003 2.1537 1.0846 2.1537
H 12 2.1537 3.4003 3.8761 3.4003 2.1537 1.0846
H 7 H 8 H 9 H 10 H 11 H 12
---
H 7 0.0000
H 8 2.4803 0.0000
H 9 4.2961 2.4804 0.0000
H 10 4.9607 4.2961 2.4803 0.0000
H 11 4.2961 4.9607 4.2961 2.4803 0.0000
H 12 2.4803 4.2961 4.9607 4.2961 2.4804 0.0000
ATOMIC CHARGES
C 1 -0.1000000000
C 2 -0.1000000000
C 3 -0.1000000000
C 4 -0.1000000000
C 5 -0.1000000000
C 6 -0.1000000000
H 7 0.1000000000
H 8 0.1000000000
H 9 0.1000000000
H 10 0.1000000000
H 11 0.1000000000
H 12 0.1000000000
BOND ANGLES
7 1 2 HC Car Car 120.000
1 2 3 Car Car Car 120.000
1 2 8 Car Car HC 120.000
8 2 3 HC Car Car 120.000
2 3 4 Car Car Car 120.000
2 3 9 Car Car HC 120.000
9 3 4 HC Car Car 120.000
3 4 5 Car Car Car 120.000
3 4 10 Car Car HC 120.000
10 4 5 HC Car Car 120.000
4 5 6 Car Car Car 120.000
4 5 11 Car Car HC 120.000
11 5 6 HC Car Car 120.000
5 6 1 Car Car Car 120.000
5 6 12 Car Car HC 120.000
12 6 1 HC Car Car 120.000
6 1 2 Car Car Car 120.000
6 1 7 Car Car HC 120.000
2 1 7 Car Car HC 120.000
3 2 8 Car Car HC 120.000
4 3 9 Car Car HC 120.000
5 4 10 Car Car HC 120.000
6 5 11 Car Car HC 120.000
1 6 12 Car Car HC 120.000
TORSION ANGLES
6 1 2 3 0.026
6 1 2 8 -179.974
7 1 2 3 179.974
7 1 2 8 -0.026
1 2 3 4 -0.026
1 2 3 9 -179.974
8 2 3 4 179.974
8 2 3 9 0.026
2 3 4 5 0.026
2 3 4 10 179.974
9 3 4 5 179.974
9 3 4 10 -0.026
3 4 5 6 -0.026
3 4 5 11 179.974
10 4 5 6 -179.974
10 4 5 11 0.026
4 5 6 1 0.026
4 5 6 12 179.974
11 5 6 1 -179.974
11 5 6 12 -0.026
5 6 1 2 -0.026
5 6 1 7 -179.974
12 6 1 2 179.974
12 6 1 7 0.026
```
See also
[Open Babel molecule report (molreport)](index.html#open-babel-molecule-report)
Note
This is a write-only format.
#### Outputs nothing (nul)[¶](#outputs-nothing-nul)
Note
This is a write-only format.
#### Read and write raw text (text)[¶](#read-and-write-raw-text-text)
**Facilitates the input of boilerplate text with babel commandline**
#### Title format (txt)[¶](#title-format-txt)
**Displays and reads molecule titles**
#### XYZ cartesian coordinates format (xyz)[¶](#xyz-cartesian-coordinates-format-xyz)
**A generic coordinate format**
The “XYZ” chemical file format is widely supported by many programs, although no formal specification has been published. Consequently, Open Babel attempts to be extremely flexible in parsing XYZ format files. Similar formats include Tinker XYZ and UniChem XYZ which differ slightly in the format of the files. (Notably, UniChem XYZ uses the atomic number rather than element symbol for the first column.)
* Line one of the file contains the number of atoms in the file.
* Line two of the file contains a title, comment, or filename.
Any remaining lines are parsed for atom information. Lines start with the element symbol, followed by X, Y, and Z coordinates in angstroms separated by whitespace.
Multiple molecules / frames can be contained within one file.
On **output**, the first line written is the number of atoms in the molecule
(warning - the number of digits is limited to three for some programs,
e.g. Maestro). Line two is the title of the molecule or the filename if no title is defined. Remaining lines define the atoms in the file. The first column is the atomic symbol (right-aligned on the third character),
followed by the XYZ coordinates in “10.5” format, in angstroms. This means that all coordinates are printed with five decimal places.
Example:
```
12
benzene example
C 0.00000 1.40272 0.00000
H 0.00000 2.49029 0.00000
C -1.21479 0.70136 0.00000
H -2.15666 1.24515 0.00000
C -1.21479 -0.70136 0.00000
H -2.15666 -1.24515 0.00000
C 0.00000 -1.40272 0.00000
H 0.00000 -2.49029 0.00000
C 1.21479 -0.70136 0.00000
H 2.15666 -1.24515 0.00000
C 1.21479 0.70136 0.00000
H 2.15666 1.24515 0.00000
```
##### Read Options[¶](#read-options)
| `-s` | *Output single bonds only* |
| `-b` | *Disable bonding entirely* |
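Read options are passed with the `-a` prefix; for example, the following sketch (filenames are placeholders) reads an XYZ file without perceiving any bonds:
```
obabel snapshot.xyz -O snapshot.sdf -ab
```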
### Other cheminformatics formats[¶](#other-cheminformatics-formats)
#### Accelrys/MSI Biosym/Insight II CAR format (arc, car)[¶](#accelrys-msi-biosym-insight-ii-car-format-arc-car)
Note
This is a read-only format.
##### Read Options[¶](#read-options)
| `-s` | *Output single bonds only* |
| `-b` | *Disable bonding entirely* |
#### Accelrys/MSI Cerius II MSI format (msi)[¶](#accelrys-msi-cerius-ii-msi-format-msi)
Note
This is a read-only format.
#### Accelrys/MSI Quanta CSR format (csr)[¶](#accelrys-msi-quanta-csr-format-csr)
Note
This is a write-only format.
#### MCDL format (mcdl)[¶](#mcdl-format-mcdl)
**Modular Chemical Descriptor Language**
As described in [[gb2001]](index.html#gb2001).
| [[gb2001]](#id1) | <NAME> and <NAME>. **Modular Chemical Descriptor Language (MCDL): Composition, Connectivity and Supplementary Modules.**
*J. Chem. Inf. Comput. Sci.* **2001**, *41*, 1491-1499.
[[Link](https://doi.org/10.1021/ci000108y)] |
Here’s an example conversion from SMILES to MCDL:
```
obabel -:"CC(=O)Cl" -omcdl CHHH;COCl[2]
```
#### MSI BGF format (bgf)[¶](#msi-bgf-format-bgf)
#### PubChem format (pc)[¶](#pubchem-format-pc)
**An XML format containing information on PubChem entries.**
[PubChem](http://pubchem.ncbi.nlm.nih.gov/) is a freely-available database of chemical compounds and their properties.
OpenBabel only extracts the chemical structure information, and the potentially large amount of other information is currently ignored.
The format seems to handle multiple conformers, but only one is read
(this needs testing).
Note
This is a read-only format.
#### Wiswesser Line Notation (wln)[¶](#wiswesser-line-notation-wln)
**A chemical line notation developed by Wiswesser**
WLN was invented in 1949, by <NAME>, as one of the first attempts to codify chemical structure as a line notation, enabling collation on punched cards using automatic tabulating machines and early electronic computers. WLN was a forerunner to the SMILES notation used in modern cheminformatics systems,
which attempted to simplify the complex rules used in WLN encoding (at the expense of brevity) to come up with an algorithmic system more suitable for implementation on computers, where historically WLN was typically encoded by hand by trained registrars.
WLN encoding makes use of uppercase letters, digits, spaces and punctuation:
* E Bromine atom
* F Fluorine atom
* G Chlorine atom
* H Hydrogen atom
* I Iodine atom
* Q Hydroxyl group, -OH
* R Benzene ring
* S Sulfur atom
* U Double bond
* UU Triple bond
* V Carbonyl, -C(=O)-
* C Unbranched carbon multiply bonded to non-carbon atom
* K Nitrogen atom bonded to more than three other atoms
* L First symbol of a carbocyclic ring notation
* M Imino or imido -NH-group
* N Nitrogen atom, hydrogen free, bonded to fewer than 4 atoms
* O Oxygen atom, hydrogen-free
* T First symbol of a heterocyclic ring notation
* W Non-linear dioxo group, as in -NO2 or -SO2-
* X Carbon attached to four atoms other than hydrogen
* Y Carbon attached to three atoms other than hydrogen
* Z Amino and amido NH2 group
* <digit> Digits ‘1’ to ‘9’ denote unbranched alkyl chains
* & Sidechain terminator or, after a space, a component separator
For a more complete description of the grammar see Smith’s book [1], which more accurately reflects the WLN commonly encountered than Wiswesser’s book [2].
Additional WLN dialects include inorganic salts, and methyl contractions.
Here are some examples of WLN strings along with a corresponding SMILES string:
* WN3 [O-][N+](=O)CCC
* G1UU1G ClC#CCl
* VH3 O=CCCC
* NCCN N#CC#N
* ZYZUM NC(=N)N
* QY CC(C)O
* OV1 &-NA- CC(=O)[O-].[Na+]
* RM1R c1ccccc1NCc2ccccc2
* T56 BMJ B D - DT6N CNJ BMR BO1 DN1 & 2N1 & 1 EMV1U1 (osimertinib)
Cn1cc(c2c1cccc2)c3ccnc(n3)Nc4cc(c(cc4OC)N(C)CCN(C)C)NC(=O)C=C
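As a rough sketch, a WLN string can be converted by piping it to obabel with the input format set explicitly (the WLN string below is one of the examples above):
```
echo "WN3" | obabel -iwln -osmi
```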
This reader was contributed by <NAME> (NextMove Software). The text of this description was taken from his Bio-IT World poster [3]. Note that not all of WLN is currently supported; however, about 76% of the WLN strings found in PubChem can be interpreted.
1. <NAME>, “The Wiswesser Line-Formula Chemical Notation”,
McGraw-Hill Book Company publishers, 1968.
2. William <NAME>, “A Line-Formula Chemical Notation”, Thomas Crowell Company publishers, 1954.
3. <NAME>, <NAME>, <NAME>, <NAME>. “Open sourcing a Wiswesser Line Notation (WLN) parser to facilitate electronic lab notebook (ELN) record transfer using the Pistoia Alliance’s UDM
(Unified Data Model) standard.” BioIT World. Apr 2019.
<https://www.nextmovesoftware.com/posters/Sayle_WisswesserLineNotation_BioIT_201904.pdf>
Note
This is a read-only format.
### Computational chemistry formats[¶](#computational-chemistry-formats)
#### ABINIT Output Format (abinit)[¶](#abinit-output-format-abinit)
Note
This is a read-only format.
##### Read Options[¶](#read-options)
| `-s` | *Output single bonds only* |
| `-b` | *Disable bonding entirely* |
#### ACES input format (acesin)[¶](#aces-input-format-acesin)
**ACES is a set of programs that performs ab initio quantum chemistry calculations.**
Note
This is a write-only format.
#### ACES output format (acesout)[¶](#aces-output-format-acesout)
**ACES is a set of programs that performs ab initio quantum chemistry calculations.**
Note
This is a read-only format.
##### Read Options[¶](#read-options)
| `-s` | *Output single bonds only* |
| `-b` | *Disable bonding entirely* |
#### ADF Band output format (adfband)[¶](#adf-band-output-format-adfband)
Note
This is a read-only format.
#### ADF DFTB output format (adfdftb)[¶](#adf-dftb-output-format-adfdftb)
Note
This is a read-only format.
#### ADF cartesian input format (adf)[¶](#adf-cartesian-input-format-adf)
Note
This is a write-only format.
##### Read Options[¶](#read-options)
| `-s` | *Output single bonds only* |
| `-b` | *Disable bonding entirely* |
#### ADF output format (adfout)[¶](#adf-output-format-adfout)
Note
This is a read-only format.
##### Read Options[¶](#read-options)
| `-s` | *Output single bonds only* |
| `-b` | *Disable bonding entirely* |
#### CAChe MolStruct format (cac, cache)[¶](#cache-molstruct-format-cac-cache)
Note
This is a write-only format.
#### CASTEP format (castep)[¶](#castep-format-castep)
**The format used by CASTEP.**
Note
This is a read-only format.
#### Cacao Cartesian format (caccrt)[¶](#cacao-cartesian-format-caccrt)
##### Read Options[¶](#read-options)
| `-s` | *Output single bonds only* |
| `-b` | *Disable bonding entirely* |
#### Cacao Internal format (cacint)[¶](#cacao-internal-format-cacint)
Note
This is a write-only format.
#### Crystal 09 output format (c09out)[¶](#crystal-09-output-format-c09out)
Note
This is a read-only format.
##### Read Options[¶](#read-options)
| `-s` | *Consider single bonds only* |
| `-b` | *Disable bonding entirely* |
#### Culgi object file format (cof)[¶](#culgi-object-file-format-cof)
**Culgi format**
No options currently
#### DALTON input format (dalmol)[¶](#dalton-input-format-dalmol)
##### Read Options[¶](#read-options)
| `-s` | *Output single bonds only* |
| `-b` | *Disable bonding entirely* |
##### Write Options[¶](#write-options)
| `-a` | *write input in atomic units instead of Angstrom* |
| `-b` | *write input using the ATOMBASIS format* |
| `-k <basis>` | *specify basis set to use*
e.g. `-xk STO-3G` |
#### DALTON output format (dallog)[¶](#dalton-output-format-dallog)
Note
This is a read-only format.
##### Read Options[¶](#read-options)
| `-s` | *Output single bonds only* |
| `-b` | *Disable bonding entirely* |
#### DMol3 coordinates format (dmol, outmol)[¶](#dmol3-coordinates-format-dmol-outmol)
##### Read Options[¶](#read-options)
| `-s` | *Output single bonds only* |
| `-b` | *Disable bonding entirely* |
#### Extended XYZ cartesian coordinates format (exyz)[¶](#extended-xyz-cartesian-coordinates-format-exyz)
**A format used by ORCA-AICCM**
The “EXYZ” chemical file format is an extended version of the standard
“XYZ” chemical file format with additional keywords and information about the unit cell and virtual atoms.
* Line one of the file contains the number of atoms in the file.
* Line two of the file contains a title, comment, filename and/or the following keywords: `%PBC` or `%VIRTUAL`
Any remaining lines are parsed for atom information until a blank line. These lines start with the element symbol, followed by X, Y, and Z coordinates in angstroms separated by whitespace and - if `%VIRTUAL` is specified - the optional word `VIRTUAL` to mark virtual atoms. If `%PBC` is specified a second block will be present containing the 3 vectors for the unit cell in angstrom and the offset as shown in the example below:
```
4
%PBC
C 0.00000 1.40272 0.00000
H 0.00000 2.49029 0.00000
C -1.21479 0.70136 0.00000
H -2.15666 1.24515 0.00000

Vector1    2.445200    0.000000    0.000000
Vector2    0.000000    1.000000    0.000000
Vector3    0.000000    0.000000    1.000000
Offset     0.000000    0.000000    0.000000
```
On **output**, the first line written is the number of atoms in the molecule.
Line two is the title of the molecule or the filename if no title is defined.
Remaining lines define the atoms in the file. The first column is the atomic symbol (right-aligned on the third character), followed by the XYZ coordinates in “15.5” format separated by additional whitespace, in angstroms. This means that all coordinates are printed with five decimal places.
The next block starts with a blank line to separate the coordinates from the unit cell vectors followed by the vectors of the unit cell marked with the keywords Vector1/2/3. The vectors themselves are written in the same format as the atom coordinates. The last line contains the keyword Offset and the offset of the unit cell. The unit is always angstrom.
##### Read Options[¶](#read-options)
| `-s` | *Output single bonds only* |
| `-b` | *Disable bonding entirely* |
#### FHIaims XYZ format (fhiaims)[¶](#fhiaims-xyz-format-fhiaims)
##### Read Options[¶](#read-options)
| `-s` | *Output single bonds only* |
| `-b` | *Disable bonding entirely* |
#### Fenske-Hall Z-Matrix format (fh)[¶](#fenske-hall-z-matrix-format-fh)
Note
This is a write-only format.
#### GAMESS Input (gamin, inp)[¶](#gamess-input-gamin-inp)
##### Write Options[¶](#write-options)
| `-k <keywords>` | *Use the specified keywords for input* |
| `-f <file>` | *Read the file specified for input keywords* |
#### GAMESS Output (gam, gamess, gamout)[¶](#gamess-output-gam-gamess-gamout)
Note
This is a read-only format.
##### Read Options[¶](#read-options)
| `-s` | *Output single bonds only* |
| `-b` | *Disable bonding entirely* |
| `-c` | *Read multiple conformers* |
#### GAMESS-UK Input (gukin)[¶](#gamess-uk-input-gukin)
#### GAMESS-UK Output (gukout)[¶](#gamess-uk-output-gukout)
#### GULP format (got)[¶](#gulp-format-got)
**The format used by GULP (General Utility Lattice Program).**
Note
This is a read-only format.
#### Gaussian Input (com, gau, gjc, gjf)[¶](#gaussian-input-com-gau-gjc-gjf)
Note
This is a write-only format.
##### Write Options[¶](#write-options)
| `-b` | *Output includes bonds* |
| `-k <keywords>` | *Use the specified keywords for input* |
| `-f <file>` | *Read the file specified for input keywords* |
| `-u` | *Write the crystallographic unit cell, if present.* |
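For example, a command of this shape (the input filename and route line are placeholders) writes a Gaussian input file with user-supplied keywords:
```
obabel mymol.xyz -O mymol.com -xk "# B3LYP/6-31G(d) Opt"
```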
#### Gaussian Output (g03, g09, g16, g92, g94, g98, gal)[¶](#gaussian-output-g03-g09-g16-g92-g94-g98-gal)
Note
This is a read-only format.
##### Read Options[¶](#read-options)
| `-s` | *Output single bonds only* |
| `-b` | *Disable bonding entirely* |
#### Gaussian Z-Matrix Input (gzmat)[¶](#gaussian-z-matrix-input-gzmat)
##### Read Options[¶](#read-options)
| `-s` | *Output single bonds only* |
| `-b` | *Disable bonding entirely* |
##### Write Options[¶](#write-options)
| `-k <keywords>` | *Use the specified keywords for input* |
| `-f <file>` | *Read the file specified for input keywords* |
#### Gaussian formatted checkpoint file format (fch, fchk, fck)[¶](#gaussian-formatted-checkpoint-file-format-fch-fchk-fck)
**A formatted text file containing the results of a Gaussian calculation**
Currently supports reading molecular geometries from fchk files. More to come.
Note
This is a read-only format.
##### Read Options[¶](#read-options)
| `-s` | *Single bonds only* |
| `-b` | *No bond perception* |
#### HyperChem HIN format (hin)[¶](#hyperchem-hin-format-hin)
#### Jaguar input format (jin)[¶](#jaguar-input-format-jin)
##### Read Options[¶](#read-options)
| `-s` | *Output single bonds only* |
| `-b` | *Disable bonding entirely* |
#### Jaguar output format (jout)[¶](#jaguar-output-format-jout)
Note
This is a read-only format.
##### Read Options[¶](#read-options)
| `-s` | *Output single bonds only* |
| `-b` | *Disable bonding entirely* |
#### MOPAC Cartesian format (mop, mopcrt, mpc)[¶](#mopac-cartesian-format-mop-mopcrt-mpc)
##### Read Options[¶](#read-options)
| `-s` | *Output single bonds only* |
| `-b` | *Disable bonding entirely* |
##### Write Options[¶](#write-options)
| `-k <keywords>` | *Use the specified keywords for input* |
| `-f <file>` | *Read the file specified for input keywords* |
| `-u` | *Write the crystallographic unit cell, if present.* |
#### MOPAC Internal (mopin)[¶](#mopac-internal-mopin)
##### Write Options[¶](#write-options)
| `-k <keywords>` | *Use the specified keywords for input* |
| `-f <file>` | *Read the file specified for input keywords* |
#### MOPAC Output format (moo, mopout)[¶](#mopac-output-format-moo-mopout)
Note
This is a read-only format.
##### Read Options[¶](#read-options)
| `-s` | *Output single bonds only* |
| `-b` | *Disable bonding entirely* |
#### MPQC output format (mpqc)[¶](#mpqc-output-format-mpqc)
Note
This is a read-only format.
##### Read Options[¶](#read-options)
| `-s` | *Output single bonds only* |
| `-b` | *Disable bonding entirely* |
#### MPQC simplified input format (mpqcin)[¶](#mpqc-simplified-input-format-mpqcin)
Note
This is a write-only format.
#### Molpro input format (mp)[¶](#molpro-input-format-mp)
Note
This is a write-only format.
#### Molpro output format (mpo)[¶](#molpro-output-format-mpo)
Note
This is a read-only format.
##### Read Options[¶](#read-options)
| `-s` | *Output single bonds only* |
| `-b` | *Disable bonding entirely* |
#### NWChem input format (nw)[¶](#nwchem-input-format-nw)
Note
This is a write-only format.
#### NWChem output format (nwo)[¶](#nwchem-output-format-nwo)
Note
This is a read-only format.
##### Read Options[¶](#read-options)
| `-s` | *Output single bonds only* |
| `-f` | *Overwrite molecule if more than one*
calculation with different molecules is present in the output file
(last calculation will be prefered) |
| `-b` | *Disable bonding entirely* |
#### ORCA input format (orcainp)[¶](#orca-input-format-orcainp)
Note
This is a write-only format.
##### Write Options[¶](#write-options)
| `-k <keywords>` | *Use the specified keywords for input* |
| `-f <file>` | *Read the file specified for input keywords* |
#### ORCA output format (orca)[¶](#orca-output-format-orca)
Note
This is a read-only format.
##### Read Options[¶](#read-options)
| `-s` | *Output single bonds only* |
| `-b` | *Disable bonding entirely* |
#### PWscf format (pwscf)[¶](#pwscf-format-pwscf)
**The format used by PWscf, part of Quantum Espresso.**
Note
This is a read-only format.
#### Parallel Quantum Solutions format (pqs)[¶](#parallel-quantum-solutions-format-pqs)
#### Q-Chem input format (qcin)[¶](#q-chem-input-format-qcin)
Note
This is a write-only format.
##### Write Options[¶](#write-options)
| `-k <keywords>` | *Use the specified keywords for input* |
| `-f <file>` | *Read the file specified for input keywords* |
#### Q-Chem output format (qcout)[¶](#q-chem-output-format-qcout)
Note
This is a read-only format.
##### Read Options[¶](#read-options)
| `-s` | *Output single bonds only* |
| `-b` | *Disable bonding entirely* |
#### TurboMole Coordinate format (tmol)[¶](#turbomole-coordinate-format-tmol)
##### Read Options[¶](#read-options)
| `-s` | *Output single bonds only* |
| `-b` | *Disable bonding entirely* |
| `-a` | *Input in Angstroms* |
##### Write Options[¶](#write-options)
| `-a` | *Output Angstroms* |
#### Turbomole AOFORCE output format (aoforce)[¶](#turbomole-aoforce-output-format-aoforce)
**Read vibrational frequencies and intensities**
Note
This is a read-only format.
#### VASP format (CONTCAR, POSCAR, VASP)[¶](#vasp-format-contcar-poscar-vasp)
**Reads in data from POSCAR and CONTCAR to obtain information from VASP calculations.**
Due to limitations in Open Babel’s file handling, reading in VASP files can be a bit tricky; the client that is using Open Babel must use OBConversion::ReadFile() to begin the conversion. This change is usually trivial. Also, the complete path to the CONTCAR/POSCAR file must be provided, otherwise the other files needed will not be found.
Both VASP 4.x and 5.x POSCAR formats are supported.
By default, atoms are written out in the order they are present in the input molecule. To sort by atomic number specify `-xw`. To specify the sort order, use the `-xz` option.
##### Read Options[¶](#read-options)
| `-s` | *Output single bonds only* |
| `-b` | *Disable bonding entirely* |
##### Write Options[¶](#write-options)
| `-w` | *Sort atoms by atomic number* |
| `-z <list of atoms>` |
| | *Specify the order to write out atoms*
‘atom1 atom2 …’: atom1 first, atom2 second, etc. The remaining atoms are written in the default order or (if `-xw` is specified)
in order of atomic number. |
| `-4` | *Write a POSCAR using the VASP 4.x specification.*
The default is to use the VASP 5.x specification. |
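As a hedged example (the input filename is a placeholder), the following writes a POSCAR with atoms sorted by atomic number; `-ovasp` is given explicitly because the output filename has no extension:
```
obabel mystructure.cif -O POSCAR -ovasp -xw
```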
#### ZINDO input format (zin)[¶](#zindo-input-format-zin)
**The input format for the semiempirical quantum-mechanics program ZINDO.**
Note
This is a write-only format.
##### Write Options[¶](#write-options)
| `-c` | *Write an input file for the CNDO/INDO program.* |
### Molecular fingerprint formats[¶](#molecular-fingerprint-formats)
#### FPS text fingerprint format (Dalke) (fps)[¶](#fps-text-fingerprint-format-dalke-fps)
The FPS file format for fingerprints was developed by <NAME> to define and promote common file formats for storing and exchanging cheminformatics fingerprint data sets, and to develop tools which work with that format. For more information, see
<http://chem-fingerprints.googlecode.com>
Any molecule without a title is given its index in the file as its title.
A list of available fingerprint types can be obtained by:
```
obabel -L fingerprints
```
Note
This is a write-only format.
##### Write Options[¶](#write-options)
| `-f <id>` | *Fingerprint type* |
| `-N <num>` | *Fold to specified number of bits, 32, 64, 128, etc.* |
| `-p` | *Use full input path as source, not just filename* |
| `-t <text>` | *Use <text> as source in header* |
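For example, a command along these lines (the dataset name is a placeholder) writes FP2 fingerprints for a file of molecules in FPS format:
```
obabel dataset.sdf -O dataset.fps -xf FP2
```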
#### Fastsearch format (fs)[¶](#fastsearch-format-fs)
**Fingerprint-aided substructure and similarity searching**
Writing to the fs format makes an index of a multi-molecule datafile:
```
obabel dataset.sdf -ofs
```
This prepares an index `dataset.fs` with default parameters, and is slow
(~30 minutes for a 250,000 molecule file).
However, when reading from the fs format searches are much faster, a few seconds,
and so can be done interactively.
The search target is the parameter of the `-s` option and can be slightly extended SMILES (with `[#n]` atoms and `~` bonds) or the name of a file containing a molecule.
Several types of searches are possible:
* Identical molecule:
```
obabel index.fs -O outfile.yyy -s SMILES exact
```
* Substructure:
```
obabel index.fs -O outfile.yyy -s SMILES
or
obabel index.fs -O outfile.yyy -s filename.xxx
```
where `xxx` is a format id known to OpenBabel, e.g. sdf
* Molecular similarity based on Tanimoto coefficient:
```
obabel index.fs -O outfile.yyy -at15 -sSMILES      # best 15 molecules
obabel index.fs -O outfile.yyy -at0.7 -sSMILES     # Tanimoto >0.7
obabel index.fs -O outfile.yyy -at0.7,0.9 -sSMILES # Tanimoto >0.7 && Tanimoto < 0.9
```
The datafile plus the `-ifs` option can be used instead of the index file.
NOTE on 32-bit systems the datafile MUST NOT be larger than 4GB.
Dative bonds like -[N+][O-](=O) are indexed as -N(=O)(=O), and when searching the target molecule should be in the second form.
See also
[Molecular fingerprints and similarity searching](index.html#fingerprints)
##### Read Options[¶](#read-options)
| `-t <num>` | *Do similarity search:<num>mols or <num> as min Tanimoto* |
| `-a` | *Add Tanimoto coeff to title in similarity search* |
| `-l <num>` | *Maximum number of candidates. Default<4000>* |
| `-e` | *Exact match*
Alternative to using exact in `-s` parameter, see above |
| `-n` | *No further SMARTS filtering after fingerprint phase* |
##### Write Options[¶](#write-options)
| `-f <num>` | *Fingerprint type*
If not specified, the default fingerprint (currently FP2) is used |
| `-N <num>` | *Fold fingerprint to <num> bits* |
| `-u` | *Update an existing index* |
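For example, the following sketch (the dataset name is a placeholder) builds an index using the FP3 fingerprint instead of the default:
```
obabel dataset.sdf -ofs -xf FP3
```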
#### Fingerprint format (fpt)[¶](#fingerprint-format-fpt)
**Generate or display molecular fingerprints.**
This format constructs and displays fingerprints and (for multiple input objects) the Tanimoto coefficient, along with a note on whether each subsequent object is a superstructure of the first.
A list of available fingerprint types can be obtained by:
```
babel -L fingerprints
```
The current default type, FP2, is of the Daylight type, indexing a molecule based on the occurrence of linear fragments up to 7 atoms in length. To use a fingerprint type other than the default, use the `-xf` option, for example:
```
babel infile.xxx -ofpt -xfFP3
```
For a single molecule the fingerprint is output in hexadecimal form
(intended mainly for debugging).
With multiple molecules the hexadecimal form is output only if the `-xh`
option is specified. But in addition the Tanimoto coefficient between the first molecule and each of the subsequent ones is displayed. If the first molecule is a substructure of the target molecule a note saying this is also displayed.
The Tanimoto coefficient is defined as:
```
Number of bits set in (patternFP & targetFP) / Number of bits in (patternFP | targetFP)
```
where the boolean operations between the fingerprints are bitwise.
The Tanimoto coefficient has no absolute meaning and depends on the design of the fingerprint.
Use the `-xs` option to describe the bits that are set in the fingerprint.
The output depends on the fingerprint type. For Fingerprint FP4, each bit corresponds to a particular chemical feature, which are specified as SMARTS patterns in `SMARTS_InteLigand.txt`, and the output is a tab-separated list of the features of a molecule. For instance, a well-known molecule gives:
```
Primary_carbon: Carboxylic_acid: Carboxylic_ester: Carboxylic_acid_derivative:
Vinylogous_carbonyl_or_carboxyl_derivative: Vinylogous_ester: Aromatic:
Conjugated_double_bond: C_ONS_bond: 1,3-Tautomerizable: Rotatable_bond: CH-acidic:
```
For the path-based fingerprint FP2, the output from the `-xs` option is instead a list of the chemical fragments used to set bits, e.g.:
```
$ obabel -:"CCC(=O)Cl" -ofpt -xs -xf FP2
>
0 6 1 6 <670>
0 6 1 6 1 6 <260>
0 8 2 6 <623>
...etc
```
where the first digit is 0 for linear fragments but is a bond order for cyclic fragments. The remaining digits alternately indicate the atomic number and the bond order. Note that a bond order of 5 is used for aromatic bonds. For example, bit 623 above is the linear fragment O=C
(8 for oxygen, 2 for double bond and 6 for carbon).
Note
This is a write-only format.
##### Write Options[¶](#write-options)
| `-f <id>` | *fingerprint type* |
| `-N <num>` | *fold to specified number of bits, 32, 64, 128, etc.* |
| `-h` | *hex output when multiple molecules* |
| `-o` | *hex output only* |
| `-s` | *describe each set bit* |
| `-u` | *describe each unset bit* |
### Crystallography formats[¶](#crystallography-formats)
#### ACR format (acr)[¶](#acr-format-acr)
**CaRIne ASCII Crystal format (ACR)**
Note
This is a read-only format.
##### Read Options[¶](#read-options)
| `-s` | *Consider single bonds only* |
#### CSD CSSR format (cssr)[¶](#csd-cssr-format-cssr)
Note
This is a write-only format.
#### Crystallographic Information File (cif)[¶](#crystallographic-information-file-cif)
**The CIF file format is the standard interchange format for small-molecule crystal structures**
Fractional coordinates are converted to cartesian ones using the following convention:
* The x axis is parallel to a
* The y axis is in the (a,b) plane
* The z axis is along c*
Ref: Int. Tables for Crystallography (2006), vol. B, sec 3.3.1.1.1
(the matrix used is the 2nd form listed)
##### Read Options[¶](#read-options)
| `-s` | *Output single bonds only* |
| `-b` | *Disable bonding entirely* |
| `-B` | *Use bonds listed in CIF file from _geom_bond_etc records (overrides option b)* |
##### Write Options[¶](#write-options)
| `-g` | *Write bonds using _geom_bond_etc fields* |
#### Free Form Fractional format (fract)[¶](#free-form-fractional-format-fract)
**General purpose crystallographic format**
The “free-form” fractional format attempts to allow for input from a range of fractional / crystallography file formats. As such, it has only a few restrictions on input:
* Line one of the file contains a title or comment.
* Line two of the file contains the unit cell parameters separated by whitespace and/or commas (i.e. “a b c alpha beta gamma”).
* Any remaining lines are parsed for atom information. Lines start with the element symbol, followed by the fractional X, Y, and Z coordinates separated by whitespace.
Any numeric input (i.e., unit cell parameters, XYZ coordinates) can include designations of errors, although this is currently ignored. For example:
```
C 1.00067(3) 2.75(2) 3.0678(12)
```
will be parsed as:
```
C 1.00067 2.75 3.0678
```
When used as an **output** format, The first line written is the title of the molecule or the filename if no title is defined. If a molecule has a defined unit cell, then the second line will be formatted as:
```
a b c alpha beta gamma
```
where a, b, c are the unit cell vector lengths, and alpha, beta, and gamma are the angles between them. These numbers are formatted as “10.5”, which means that 5 decimal places will be output for all numbers. In the case where no unit cell is defined for the molecule, the vector lengths will be defined as 1.0, and the angles to 90.0 degrees.
Remaining lines define the atoms in the file. The first column is the atomic symbol, followed by the XYZ coordinates in 10.5 format (in angstroms).
Here is an example file:
```
ZnO test file
3.14 3.24 5.18 90.0 90.0 120.0
O 0.66667 0.33333 0.3750
O 0.33333 0.66667 0.8750
Zn 0.66667 0.33333 0.0000
Zn 0.33333 0.66667 0.5000
```
##### Read Options[¶](#read-options)
| `-s` | *Output single bonds only* |
| `-b` | *Disable bonding entirely* |
#### Macromolecular Crystallographic Info (mcif, mmcif)[¶](#macromolecular-crystallographic-info-mcif-mmcif)
##### Read Options[¶](#read-options)
| `-s` | *Output single bonds only* |
| `-p` | *Apply periodic boundary conditions for bonds* |
| `-b` | *Disable bonding entirely* |
| `-w` | *Wrap atomic coordinates into unit cell box* |
#### POS cartesian coordinates format (pos)[¶](#pos-cartesian-coordinates-format-pos)
**A generic coordinate format**
The “POS” file format is a modified version of the “XYZ” general format.
* Line one of the file contains the number of atoms in the file.
* Line two of the file contains a title, comment, or filename.
Note
This is a read-only format.
##### Read Options[¶](#read-options)
| `-s` | *Output single bonds only* |
| `-b` | *Disable bonding entirely* |
#### ShelX format (ins, res)[¶](#shelx-format-ins-res)
Note
This is a read-only format.
##### Read Options[¶](#read-options)
| `-s` | *Output single bonds only* |
| `-b` | *Disable bonding entirely* |
### Reaction formats[¶](#reaction-formats)
#### CML Reaction format (cmlr)[¶](#cml-reaction-format-cmlr)
**A minimal implementation of the CML Reaction format**
This implementation uses libxml2.
##### Write Options[¶](#write-options)
| `-1` | *output CML1 (rather than CML2)* |
| `-a` | *output array format for atoms and bonds* |
| `-l` | *molecules NOT in MoleculeList* |
| `-h` | *use hydrogenCount for all hydrogens* |
| `-x` | *omit XML declaration* |
| `-r` | *omit rate constant data* |
| `-N <prefix>` | *add namespace prefix to elements* |
| `-M` | *add obr prefix on non-CMLReact elements* |
| `-p` | *add properties to molecules* |
##### Comments[¶](#comments)
The implementation of this format which reads and writes to and from OBReaction objects is fairly minimal at present. (Currently the only other reaction format in OpenBabel is RXN.) During reading, only the elements <reaction>, <reactant>, <product> and <molecule> are acted upon (the last through CML). The molecules can be collected together in a list at the start of the file and referenced in the reactant and product via e.g. <molecule ref=”mol1”>.
On writing, the list format can be specified with the `-xl` option. The list containers are <moleculeList> and <reactionList> and the overall wrapper is <mechanism>. These are non-standard CMLReact element names and would have to be changed (in the code) to <list>,<list> and <cml>
if this was unacceptable.
#### MDL RXN format (rxn)[¶](#mdl-rxn-format-rxn)
**The MDL reaction format is used to store information on chemical reactions.**
##### Write Options[¶](#write-options)
| `-A` | *Output in Alias form, e.g. Ph, if present* |
| `-G <option>` | *How to handle any agents present*
One of the following options should be specified:
* agent - Treat as an agent (default). Note that some programs may not read agents in RXN files.
* reactant - Treat any agent as a reactant
* product - Treat any agent as a product
* ignore - Ignore any agent
* both - Treat as both a reactant and a product |
#### RInChI (rinchi)[¶](#rinchi-rinchi)
**The Reaction InChI**
The Reaction InChI (or RInChI) is intended to be a unique string that describes a reaction. This may be useful for indexing and searching reaction databases. As with the InChI it is recommended that you always keep the original reaction information and use the RInChI in addition.
The RInChI format is a hierarchical, layered description of a reaction with different levels based on the Standard InChI representation of each structural component participating in the reaction.
Note
This is a write-only format.
##### Write Options[¶](#write-options)
| `-e` | *Treat this reaction as an equilibrium reaction*
Layer 5 of the generated RInChI will have /d= |
#### Reaction SMILES format (rsmi)[¶](#reaction-smiles-format-rsmi)
##### Write Options[¶](#write-options)
| `-r` | *radicals lower case eg ethyl is Cc* |
### Image formats[¶](#image-formats)
#### ASCII format (ascii)[¶](#ascii-format-ascii)
**2D depiction of a single molecule as ASCII text**
This format generates a 2D depiction of a molecule using only ASCII text suitable for a command-line console, or a text file. For example:
```
obabel -:c1ccccc1C(=O)Cl -oascii -xh 20
__
__/__\_
_/__/ \__
_/_/ \__
| |
| | |
| | |
| | |
| | |
|___ _ Cl
\_\__ _/ \_ __
\_\_ __/ \__ __/
\__/ \__/
| |
| |
| |
| |
O
```
If the image appears elongated or squat, the aspect ratio should be changed from its default value of 1.5 using the `-xa <ratio>` option. To help determine the correct value, use the `-xs` option to display a square.
Note
This is a write-only format.
##### Write Options[¶](#write-options)
| `-w <characters>` |
| | *Image width in characters, default 79* |
| `-h <characters>` |
| | *Image height in characters, default is width/aspect* |
| `-a <ratio>` | *Aspect ratio of character height:width, default is 1.5* |
| `-s` | *Display a square - this is useful for correcting the aspect ratio* |
| `-t` | *Write the output molecule index and the title* |
| `-m` | *Include a margin around the depiction* |
#### PNG 2D depiction (png)[¶](#png-2d-depiction-png)
**or add/extract chemical structures from a .png file**
The PNG format has several uses. The most common is to generate a
`.png` file for one or more molecules.
2D coordinates are generated if not present:
```
obabel mymol.smi -O image.png
```
Chemical structure data can be embedded in the `.png` file
(in a `tEXt` chunk):
```
obabel mymol.mol -O image.png -xO molfile
```
The parameter of the `-xO` option specifies the format (“file” can be added).
Note that if you intend to embed a 2D or 3D format, you may have to call
`--gen2d` or `--gen3d` to generate the required coordinates if they are not present in the input.
Molecules can also be embedded in an existing PNG file:
```
obabel existing.png mymol1.smi mymol2.mol -O augmented.png -xO mol
```
Reading from a PNG file will extract any embedded chemical structure data:
```
obabel augmented.png -O contents.sdf
```
##### Read Options[¶](#read-options)
| `-y <additional chunk ID>` |
| | *Look also in chunks with specified ID* |
##### Write Options[¶](#write-options)
| `-p <pixels>` | *image size, default 300* |
| `-w <pixels>` | *image width (or from image size)* |
| `-h <pixels>` | *image height (or from image size)* |
| `-c <num>` | *number of columns in table* |
| `-r <num>` | *number of rows in table* |
| `-N <num>` | *max number objects to be output* |
| `-u` | *no element-specific atom coloring*
Use this option to produce a black and white diagram |
| `-U` | *do not use internally-specified color*
e.g. atom color read from cml or generated by internal code |
| `-b <color>` | *background color, default white*
e.g `-xb yellow` or `-xb #88ff00` `-xb none` is transparent.
Just `-xb` is black with white bonds.
The atom symbol colors work with black and white backgrounds,
but may not with other colors. |
| `-B <color>` | *bond color, default black*
e.g `-xB` yellow or `-xB #88ff00` |
| `-C` | *do not draw terminal C (and attached H) explicitly*
The default is to draw all hetero atoms and terminal C explicitly,
together with their attached hydrogens. |
| `-a` | *draw all carbon atoms*
So propane would display as H3C-CH2-CH3 |
| `-d` | *do not display molecule name* |
| `-m` | *do not add margins to the image*
This only applies if there is a single molecule to depict.
Implies -xd. |
| `-s` | *use asymmetric double bonds* |
| `-t` | *use thicker lines* |
| `-A` | *display aliases, if present*
This applies to structures which have an alternative, usually shorter, representation already present. This might have been input from an A or S superatom entry in an sd or mol file, or can be generated using the `--genalias` option. For example:
```
obabel -:"c1cc(C=O)ccc1C(=O)O" -O out.png
--genalias -xA
```
would add the aliases COOH and CHO to represent the carboxyl and aldehyde groups and would display them as such in the diagram.
The aliases which are recognized are in data/superatom.txt, which can be edited. |
| `-O <format ID>` | *Format of embedded text*
For example, `molfile` or `smi`.
If there is no parameter, input format is used. |
| `-y <additional chunk ID>` |
| | *Write to a chunk with specified ID* |
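For example, a command of this shape (the filenames are placeholders) depicts a set of molecules in a 500 px image arranged in four columns:
```
obabel mols.smi -O grid.png -xp 500 -xc 4
```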
##### Comments[¶](#comments)
If Cairo was not found when Open Babel was compiled, then the 2D depiction will be unavailable. However, it will still be possible to extract and embed chemical data in `.png` files.
See also
[PNG 2D depiction (png)](#png-2d-depiction)
#### POV-Ray input format (pov)[¶](#pov-ray-input-format-pov)
**Generate an input file for the open source POV-Ray ray tracer.**
The POV-Ray file generated by Open Babel should be considered a starting point for the user to create a rendered image of a molecule. Although care is taken to center the camera on the molecule, the user will probably want to adjust the viewpoint, change the lighting, textures, etc.
The file `babel_povray3.inc` is required to render the povray file generated by Open Babel. This file is included in the Open Babel distribution, and it should be copied into the same directory as the
`.pov` file before rendering. By editing the settings in
`babel_povray3.inc` it is possible to tune the appearance of the molecule.
For example, the image below was generated by rendering the output from the following command after setting the reflection of non-metal atoms to 0
(line 121 in `babel_povray3.inc`):
```
obabel -:"CC(=O)Cl acid chloride" --gen3d -O chloride.pov -xc -xf -xs -m SPF
```
Note
This is a write-only format.
##### Write Options[¶](#write-options)
| `-c` | *Add a black and white checkerboard* |
| `-f` | *Add a mirror sphere* |
| `-m <model-type>` |
| | *BAS (ball-and-stick), SPF (space-fill) or CST (capped sticks)*
The default option is ball-and-stick. To choose space-fill, you would use the following command line:
```
obabel aspirin.mol -O aspirin.pov -xm SPF
```
|
| `-s` | *Add a sky (with clouds)* |
| `-t` | *Use transparent textures* |
#### Painter format (paint)[¶](#painter-format-paint)
**Commands used to generate a 2D depiction of a molecule**
This is a utility format that is useful if you want to generate a depiction of a molecule yourself, for example by drawing on a Graphics2D canvas in Java. The format writes out a list of drawing commands as shown in the following example:
```
obabel -:CC(=O)Cl -opaint
NewCanvas 149.3 140.0
SetPenColor 0.0 0.0 0.0 1.0 (rgba)
DrawLine 109.3 100.0 to 74.6 80.0
SetPenColor 0.0 0.0 0.0 1.0 (rgba)
DrawLine 71.6 80.0 to 71.6 53.0
DrawLine 77.6 80.0 to 77.6 53.0
SetPenColor 0.0 0.0 0.0 1.0 (rgba)
DrawLine 74.6 80.0 to 51.3 93.5
SetPenColor 0.4 0.4 0.4 1.0 (rgba)
SetPenColor 0.4 0.4 0.4 1.0 (rgba)
SetPenColor 1.0 0.1 0.1 1.0 (rgba)
SetFontSize 16
SetFontSize 16
SetFontSize 16
DrawText 74.6 40.0 "O"
SetPenColor 0.1 0.9 0.1 1.0 (rgba)
SetFontSize 16
SetFontSize 16
SetFontSize 16
SetFontSize 16
DrawText 40.0 100.0 "Cl"
```
Note that the origin is considered to be in the top left corner.
The following image was drawn using the information in this format as described at
<http://baoilleach.blogspot.co.uk/2012/04/painting-molecules-your-way-introducing.html>:
Note
This is a write-only format.
##### Write Options[¶](#write-options)
| `-M` | *Do not include a margin around the depiction* |
#### SVG 2D depiction (svg)[¶](#svg-2d-depiction-svg)
**Scalable Vector Graphics 2D rendering of molecular structure.**
When called from commandline or GUI or otherwise via Convert(),
single molecules are displayed at a fixed scale, as in normal diagrams,
but multiple molecules are displayed in a table which expands to fill the containing element, such as a browser window.
When WriteMolecule() is called directly, without going through WriteChemObject, e.g. via OBConversion::Write(), a fixed size image (by default 200 x 200px) containing a single molecule is written. The size can be specified by the P output option.
Multiple molecules are displayed in a grid of dimensions specified by the `-xr` and `-xc` options (number of rows and columns respectively and `--rows`, `--cols` with babel).
When displayed in most modern browsers, like Firefox, there is javascript support for zooming (with the mouse wheel)
and panning (by dragging with the left mouse button).
If both `-xr` and `-xc` are specified, they define the maximum number of molecules that are displayed.
If only one of them is specified, then the other is calculated so that ALL the molecules are displayed.
If neither are specified, all the molecules are output in an approximately square table.
By default, 2D atom coordinates are generated (using gen2D) unless they are already present. This can be slow with a large number of molecules.
(3D coordinates are ignored.) Include `--gen2D` explicitly if you wish any existing 2D coordinates to be recalculated.
Note
This is a write-only format.
##### Write Options[¶](#write-options)
| `-u` | *no element-specific atom coloring*
Use this option to produce a black and white diagram |
| `-U` | *do not use internally-specified color*
e.g. atom color read from cml or generated by internal code |
| `-b <color>` | *background color, default white*
e.g `-xb yellow` or `-xb #88ff00` `-xb none` is transparent.
Just `-xb` is black with white bonds.
The atom symbol colors work with black and white backgrounds,
but may not with other colors. |
| `-B <color>` | *bond color, default black*
e.g `-xB` yellow or `-xB #88ff00` |
| `-C` | *do not draw terminal C (and attached H) explicitly*
The default is to draw all hetero atoms and terminal C explicitly,
together with their attached hydrogens. |
| `-a` | *draw all carbon atoms*
So propane would display as H3C-CH2-CH3 |
| `-d` | *do not display molecule name* |
| `-s` | *use asymmetric double bonds* |
| `-t` | *use thicker lines* |
| `-e` | *embed molecule as CML*
OpenBabel can read the resulting svg file as a cml file. |
| `-p <num>` | *px Scale to bond length (single mol only)* |
| `-P <num>` | *px Single mol in defined size image*
The general option `--px #` is an alternative to the above. |
| `-c <num>` | *number of columns in table* |
| `-cols <num>` | *number of columns in table (not displayed in GUI)* |
| `-r <num>` | *number of rows in table* |
| `-rows <num>` | *number of rows in table (not displayed in GUI)* |
| `-N <num>` | *max number objects to be output* |
| `-l` | *draw grid lines* |
| `-h <condition>` | *highlight mol if condition is met*
The condition can use descriptors and properties,
See documentation on `--filter` option for details.
To highlight in a particular color, follow the condition by a color. |
| `-i` | *add index to each atom*
These indices are those in sd or mol files and correspond to the order of atoms in a SMILES string. |
| `-j` | *do not embed javascript*
Javascript is not usually embedded if there is only one molecule,
but it is if the rows and columns have been specified as 1: `-xr1 -xc1` |
| `-x` | *omit XML declaration (not displayed in GUI)*
Useful if the output is to be embedded in another xml file. |
| `-X` | *All atoms are explicitly declared*
Useful if we don’t want any extra hydrogens drawn to fill the valence. |
| `-A` | *display aliases, if present*
This applies to structures which have an alternative, usually shorter, representation already present. This might have been input from an A or S superatom entry in an sd or mol file, or can be generated using the `--genalias` option. For example:
```
obabel -:"c1cc(C=O)ccc1C(=O)O" -O out.svg
--genalias -xA
```
would add the aliases COOH and CHO to represent the carboxyl and aldehyde groups and would display them as such in the svg diagram.
The aliases which are recognized are in data/superatom.txt, which can be edited. |
| `-S` | *Ball and stick depiction of molecules*
Depicts the molecules as balls and sticks instead of the normal line style. |
##### Comments[¶](#comments)
If the input molecule(s) contain explicit hydrogens, you could consider improving the appearance of the diagram by adding the general option `-d` (delete hydrogens) to make them implicit. Hydrogen on hetero atoms and on explicitly drawn C is always shown.
For example, if input.smi had 10 molecules:
```
obabel input.smi -O out.svg -xb -xC -xe
```
would produce an svg file with a black background, with no explicit terminal carbon, and with an embedded cml representation of each molecule. The structures would be in two rows of four and one row of two.
### 2D drawing formats[¶](#d-drawing-formats)
#### ChemDraw CDXML format (cdxml)[¶](#chemdraw-cdxml-format-cdxml)
**Minimal support of chemical structure information only.**
#### ChemDraw Connection Table format (ct)[¶](#chemdraw-connection-table-format-ct)
#### ChemDraw binary format (cdx)[¶](#chemdraw-binary-format-cdx)
**Read only**
The whole file is read in one call.
Note that a file may contain a mixture of reactions and molecules.
With the -ad option, a human-readable representation of the CDX tree structure is output as an OBText object. Use textformat to view it:
```
obabel input.cdx -otext -ad
```
Many reactions in CDX files are not fully specified with reaction data structures, and may not be completely interpreted by this parser.
Note
This is a read-only format.
##### Read Options[¶](#read-options)
| `-m` | *read molecules only; no reactions* |
| `-d` | *output CDX tree to OBText object* |
| `-o` | *display only objects in tree output* |
#### Chemical Resource Kit diagram(2D) (crk2d)[¶](#chemical-resource-kit-diagram-2d-crk2d)
#### Chemtool format (cht)[¶](#chemtool-format-cht)
Note
This is a write-only format.
### 3D viewer formats[¶](#d-viewer-formats)
#### Ball and Stick format (bs)[¶](#ball-and-stick-format-bs)
#### Chem3D Cartesian 1 format (c3d1)[¶](#chem3d-cartesian-1-format-c3d1)
#### Chem3D Cartesian 2 format (c3d2)[¶](#chem3d-cartesian-2-format-c3d2)
#### Chemical Resource Kit 3D format (crk3d)[¶](#chemical-resource-kit-3d-format-crk3d)
#### Ghemical format (gpr)[¶](#ghemical-format-gpr)
**Open source molecular modelling**
#### Maestro format (mae, maegz)[¶](#maestro-format-mae-maegz)
**File format of Schrödinger Software**
#### Molden format (mold, molden, molf)[¶](#molden-format-mold-molden-molf)
##### Read Options[¶](#read-options)
| `-b` | *no bonds* |
| `-s` | *no multiple bonds* |
#### PCModel Format (pcm)[¶](#pcmodel-format-pcm)
#### UniChem XYZ format (unixyz)[¶](#unichem-xyz-format-unixyz)
##### Read Options[¶](#read-options)
| `-s` | *Output single bonds only* |
| `-b` | *Disable bonding entirely* |
#### ViewMol format (vmol)[¶](#viewmol-format-vmol)
##### Read Options[¶](#read-options)
| `-s` | *Output single bonds only* |
| `-b` | *Disable bonding entirely* |
#### XCrySDen Structure Format (axsf, xsf)[¶](#xcrysden-structure-format-axsf-xsf)
Note
This is a read-only format.
##### Read Options[¶](#read-options)
| `-s` | *Output single bonds only* |
| `-b` | *Disable bonding entirely* |
#### YASARA.org YOB format (yob)[¶](#yasara-org-yob-format-yob)
**The native YASARA format.**
### Kinetics and Thermodynamics formats[¶](#kinetics-and-thermodynamics-formats)
#### ChemKin format (ck)[¶](#chemkin-format-ck)
##### Read Options[¶](#read-options)
| `-f <file>` | *File with standard thermo data: default therm.dat* |
| `-z` | *Use standard thermo only* |
| `-L` | *Reactions have labels (Usually optional)* |
##### Write Options[¶](#write-options)
| `-s` | *Simple output: reactions only* |
| `-t` | *Do not include species thermo data* |
| `-0` | *Omit reactions with zero rates* |
#### Thermo format (tdd, therm)[¶](#thermo-format-tdd-therm)
**Reads and writes old-style NASA polynomials in original fixed format**
##### Read Options[¶](#read-options)
| `-e` | *Terminate on “END”* |
### Molecular dynamics and docking formats[¶](#molecular-dynamics-and-docking-formats)
#### Amber Prep format (prep)[¶](#amber-prep-format-prep)
Note
This is a read-only format.
##### Read Options[¶](#read-options)
| `-s` | *Output single bonds only* |
| `-b` | *Disable bonding entirely* |
#### AutoDock PDBQT format (pdbqt)[¶](#autodock-pdbqt-format-pdbqt)
**Reads and writes AutoDock PDBQT (Protein Data Bank, Partial Charge (Q), & Atom Type (T)) format**
Note that the torsion tree is written by default. Use the `r` write option to prevent this.
##### Read Options[¶](#read-options)
| `-b` | *Disable automatic bonding* |
| `-d` | *Input file is in dlg (AutoDock docking log) format* |
##### Write Options[¶](#write-options)
| `-b` | *Enable automatic bonding* |
| `-r` | *Output as a rigid molecule (i.e. no branches or torsion tree)* |
| `-c` | *Combine separate molecular pieces of input into a single rigid molecule (requires “r” option or will have no effect)* |
| `-s` | *Output as a flexible residue* |
| `-p` | *Preserve atom indices from input file (default is to renumber atoms sequentially)* |
| `-h` | *Preserve hydrogens* |
| `-n` | *Preserve atom names* |
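As a hedged sketch (all filenames are placeholders), a ligand might be written as a rigid PDBQT molecule using the `r` write option, and docked poses might be read from an AutoDock dlg file using the `d` read option:
```
obabel ligand.mol2 -O ligand.pdbqt -xr
obabel results.dlg -ipdbqt -ad -O poses.sdf
```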
#### DL-POLY CONFIG (CONFIG)[¶](#dl-poly-config-config)
#### DL-POLY HISTORY (HISTORY)[¶](#dl-poly-history-history)
Note
This is a read-only format.
#### Dock 3.5 Box format (box)[¶](#dock-3-5-box-format-box)
#### GRO format (gro)[¶](#gro-format-gro)
**This is GRO file format as used in Gromacs.**
Right now there is only limited support for element perception. It works for elements with one-letter symbols if the atom type starts with the same letter.
##### Read Options[¶](#read-options)
| `-s` | *Consider single bonds only* |
| `-b` | *Disable bonding entirely* |
#### GROMOS96 format (gr96)[¶](#gromos96-format-gr96)
Note
This is a write-only format.
##### Write Options[¶](#write-options)
| `-n` | *output nm (not Angstroms)* |
#### LPMD format (lpmd)[¶](#lpmd-format-lpmd)
**Read and write LPMD’s atomic configuration file**
##### Read Options[¶](#read-options)
| `-s` | *Output single bonds only* |
| `-b` | *Disable bonding entirely* |
##### Write Options[¶](#write-options)
| `-f <num>` | *Indicate the level of the output file: 0 (default), 1 or 2.* |
| `-m <num>` | *Indicate the mode for level 2 output files*
0 (default) is for accelerations and 1 for forces |
| `-c <vectorcells>` | *Set the cell vectors if not present*
Example: `-xc 10.0,0,0,0.0,10.0,0.0,0.0,0.0,20.0` |
| `-e` | *Add the charge to the output file* |
#### MDFF format (CONTFF, MDFF, POSFF)[¶](#mdff-format-contff-mdff-posff)
**The format used in the POSFF and CONTFF files used by MDFF**
POSFF and CONTFF are read to obtain information from MDFF calculations.
The program will try to read the IONS.POT file if the name of the input file is POSFF or CONTFF.
##### Write Options[¶](#write-options)
| `-w` | *Sort atoms by atomic number* |
| `-u <elementlist>` | *Sort atoms by list of element symbols provided in comma-separated string w/o spaces* |
| `-i` | *Write IONS.POT file* |
#### MacroModel format (mmd, mmod)[¶](#macromodel-format-mmd-mmod)
#### SIESTA format (siesta)[¶](#siesta-format-siesta)
**The format used by SIESTA (Spanish Initiative for Electronic Simulations with Thousands of Atoms).**
Note
This is a read-only format.
#### The LAMMPS data format (lmpdat)[¶](#the-lammps-data-format-lmpdat)
LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator.
Note
This is a write-only format.
##### Write Options[¶](#write-options)
| `-q <water-model>` | *Set atomic charges for water.*
There are two options: SPC (default) or SPCE |
| `-d <length>` | *Set the length of the boundary box around the molecule.*
The default is to make a cube around the molecule adding 50% to the most positive and negative cartesian coordinate. |
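For instance (a sketch; `water.pdb` and `system.lmpdat` are placeholder filenames), a data file using SPC/E water charges and a 10 Å boundary box might be written with:
```
obabel water.pdb -O system.lmpdat -xq SPCE -xd 10.0
```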
#### Tinker XYZ format (txyz)[¶](#tinker-xyz-format-txyz)
**The cartesian XYZ file format used by the molecular mechanics package TINKER.**
By default, the MM2 atom types are used for writing files but MM3 atom types are provided as an option. Another option provides the ability to take the atom type from the atom class (e.g. as used in SMILES, or set via the API).
##### Read Options[¶](#read-options)
| `-s` | *Generate single bonds only* |
##### Write Options[¶](#write-options)
| `-m` | *Write an input file for the CNDO/INDO program.* |
| `-c` | *Write atom types using custom atom classes, if available* |
| `-3` | *Write atom types for the MM3 forcefield.* |
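As a brief illustration (placeholder filenames), a Tinker file with MM3 rather than the default MM2 atom types could be written with:
```
obabel input.sdf -O output.txyz -x3
```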
#### XTC format (xtc)[¶](#xtc-format-xtc)
**A portable format for trajectories (gromacs)**
Note
This is a read-only format.
### Volume data formats[¶](#volume-data-formats)
#### ADF TAPE41 format (t41)[¶](#adf-tape41-format-t41)
Currently the ADF Tape41 support reads grids from TAPE41 text files. To generate an ASCII version from the default binary, use the dmpkf program.
Note
This is a read-only format.
##### Read Options[¶](#read-options)
| `-s` | *Output single bonds only* |
| `-b` | *Disable bonding entirely* |
#### Gaussian cube format (cub, cube)[¶](#gaussian-cube-format-cub-cube)
**A grid format for volume data used by Gaussian**
Open Babel supports reading and writing Gaussian cubes, including multiple grids in one file.
##### Read Options[¶](#read-options)
| `-b` | *no bonds* |
| `-s` | *no multiple bonds* |
#### OpenDX cube format for APBS (dx)[¶](#opendx-cube-format-for-apbs-dx)
**A volume data format for IBM’s Open Source visualization software**
The OpenDX support is currently designed to read the OpenDX cube files from APBS.
#### Point cloud on VDW surface (pointcloud)[¶](#point-cloud-on-vdw-surface-pointcloud)
**Generates a point cloud on the VDW surface around the molecule**
The surface location is calculated by adding the probe atom radius (if specified) to the van der Waals radius of the particular atom multiplied by the specified multiple (1.0 if unspecified). Output is a list of {x,y,z} tuples in Angstrom. Alternatively, if the `x`
option is specified, the [XYZ cartesian coordinates format (xyz)](index.html#xyz-cartesian-coordinates-format) is used instead.
Note
This is a write-only format.
##### Write Options[¶](#write-options)
| `-r <radii>` | *create a surface for each VDW radius (default 1.0)*
A comma-separated list of VDW radius multiples |
| `-d <densities>` | *for each surface, specify the point density (default 1.0 Angstrom^2)*
A comma-separated list of densities |
| `-p <radius>` | *radius of the probe atom in Angstrom (default 0.0)* |
| `-x` | *output in xyz format* |
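A minimal sketch (placeholder filenames): points on two surfaces at 1.0 and 1.2 times the VDW radii, using a water-sized probe of 1.4 Å, might be generated with:
```
obabel mol.sdf -O surface.pointcloud -xr 1.0,1.2 -xp 1.4
```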
#### STL 3D-printing format (stl)[¶](#stl-3d-printing-format-stl)
**The STereoLithography format developed by 3D Systems**
Note
This is a write-only format.
##### Write Options[¶](#write-options)
| `-p <radius>` | *radius for probe particle (default 0.0 A)* |
| `-s <scale>` | *scale-factor for VDW radius (default 1.0 A)* |
| `-c` | *add CPK colours* |
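For example (a sketch with placeholder filenames), a printable mesh with CPK colours and a slightly inflated VDW surface might be produced with:
```
obabel caffeine.sdf -O caffeine.stl -xc -xs 1.2
```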
### JSON formats[¶](#json-formats)
#### ChemDoodle JSON (cdjson)[¶](#chemdoodle-json-cdjson)
**The native way to present data to the ChemDoodle Web Components**
##### Read Options[¶](#read-options)
| `-c <num>` | *coordinate multiplier (default: 20)* |
##### Write Options[¶](#write-options)
| `-c <num>` | *coordinate multiplier (default: 20)* |
| `-m` | *minified output formatting, with no line breaks or indents* |
| `-v` | *verbose output (include default values)* |
| `-w` | *use wedge/hash bonds from input instead of perceived stereochemistry* |
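As an illustration (placeholder filenames), minified output with a larger coordinate multiplier might be written with:
```
obabel mol.sdf -O mol.cdjson -xm -xc 50
```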
#### PubChem JSON (pcjson)[¶](#pubchem-json-pcjson)
**The JSON format returned by the PubChem PUG REST service**
The data contained in this format closely resembles PubChem’s internal data structure.
##### Read Options[¶](#read-options)
| `-s` | *disable stereo perception and just read stereo information from input* |
##### Write Options[¶](#write-options)
| `-m` | *minified output, with no line breaks or indents* |
| `-w` | *use bond styles from input instead of perceived stereochemistry* |
### Miscellaneous formats[¶](#miscellaneous-formats)
#### M.F. Sanner’s MSMS input format (msms)[¶](#m-f-sanner-s-msms-input-format-msms)
**Generates input to the MSMS (Michael Sanner Molecular Surface) program to compute solvent surfaces.**
Note
This is a write-only format.
##### Write Options[¶](#write-options)
| `-a` | *output atom names* |
### Biological data formats[¶](#biological-data-formats)
#### FASTA format (fa, fasta, fsa)[¶](#fasta-format-fa-fasta-fsa)
**A file format used to exchange information between genetic sequence databases**
##### Read Options[¶](#read-options)
| `-1` | *Output single-stranded DNA* |
| `-t <turns>` | *Use the specified number of base pairs per turn (e.g., 10)* |
| `-s` | *Output single bonds only* |
| `-b` | *Disable bonding entirely* |
##### Write Options[¶](#write-options)
| `-n` | *Omit title and comments* |
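For example (a sketch; `gene.fasta` and `gene.pdb` are placeholder filenames), a single-stranded DNA model with 10 base pairs per turn might be built with:
```
obabel gene.fasta -O gene.pdb -a1 -at 10
```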
#### PQR format (pqr)[¶](#pqr-format-pqr)
##### Read Options[¶](#read-options)
| `-s` | *Output single bonds only* |
| `-b` | *Disable bonding entirely* |
### Obscure formats[¶](#obscure-formats)
#### Alchemy format (alc)[¶](#alchemy-format-alc)
#### CCC format (ccc)[¶](#ccc-format-ccc)
Note
This is a read-only format.
#### Feature format (feat)[¶](#feature-format-feat)
##### Read Options[¶](#read-options)
| `-s` | *Output single bonds only* |
| `-b` | *Disable bonding entirely* |
#### SMILES FIX format (fix)[¶](#smiles-fix-format-fix)
Note
This is a write-only format.
#### XED format (xed)[¶](#xed-format-xed)
Note
This is a write-only format.
Descriptors[¶](#descriptors)
---
### Numerical descriptors[¶](#numerical-descriptors)
Number of atoms (atoms)
Add or remove hydrogens to count total or heavy atoms. SMARTS: *
Number of bonds (bonds)
Add or remove hydrogens to count total bonds or bonds between heavy atoms. SMARTS: *~*
Number of Hydrogen Bond Donors (JoelLib) (HBD)
SMARTS: [!#6;!H0]
Number of Hydrogen Bond Acceptors 1 (JoelLib) (HBA1)
Identification of Biological Activity Profiles Using Substructural Analysis and Genetic Algorithms – <NAME> and Bradshaw,
U. of Sheffield and Glaxo Wellcome.
Presented at Random & Rational: Drug Discovery via Rational Design and Combinatorial Chemistry, Strategic Research Institute, Princeton NJ, Sept. 1995. SMARTS: [$([!#6;+0]);!$([F,Cl,Br,I]);!$([o,s,nX3]);!$([Nv5,Pv5,Sv4,Sv6])]
Number of Hydrogen Bond Acceptors 2 (JoelLib) (HBA2)
SMARTS: [$([$([#8,#16]);!$(*=N~O);!$(*~N=O);X1,X2]),$([#7;v3;!$([nH]);!$(*(-a)-a)])]
Number of Fluorine Atoms (nF)
SMARTS: F
octanol/water partition coefficient (logP)
Datafile: logp.txt
Molecular Weight filter (MW)
Number of triple bonds (tbonds)
SMARTS: *#*
molar refractivity (MR)
Datafile: mr.txt
Number of aromatic bonds (abonds)
SMARTS: *:*
Number of single bonds (sbonds)
SMARTS: *-*
Number of double bonds (dbonds)
SMARTS: *=*
topological polar surface area (TPSA)
Datafile: psa.txt
Rotatable bonds filter (rotors)
Melting point (MP)
This is a melting point descriptor developed by <NAME>. For details see:
<http://onschallenge.wikispaces.com/MeltingPointModel011>
Datafile: mpC.txt
### Textual descriptors[¶](#textual-descriptors)
Canonical SMILES (cansmi)
Canonical SMILES without isotopes or stereo (cansmiNS)
IUPAC InChI identifier (InChI)
InChIKey (InChIKey)
Chemical formula (formula)
For comparing a molecule’s title (title)
### Descriptors for filtering[¶](#descriptors-for-filtering)
Lipinski Rule of Five (L5)
HBD<5 HBA1<10 MW<500 logP<5
SMARTS filter (smarts)
SMARTS filter (s)
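As a hedged illustration of how these descriptor names are typically used (placeholder filenames; `--filter` takes a space-separated list of conditions and `--append` a list of descriptor names):
```
obabel library.sdf -osmi --filter "MW<500 logP<5"
obabel library.sdf -O annotated.sdf --append "logP TPSA"
```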
Charge models[¶](#charge-models)
---
### Cheminformatics charge models[¶](#cheminformatics-charge-models)
Assign Gasteiger-Marsili sigma partial charges (gasteiger)
Assign MMFF94 partial charges (mmff94)
### Special charge models[¶](#special-charge-models)
Assign Electronegativity Equalization Method (EEM) atomic partial charges (eem)
Assign QEq (charge equilibration) partial charges (Rappe and Goddard, 1991) (qeq)
Assign QTPIE (charge transfer, polarization and equilibration) partial charges (Chen and Martinez, 2007) (qtpie)
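For instance (a sketch; `mol.sdf` is a placeholder filename), a charge model can be selected with the general `--partialcharge` option, and `--print` writes the computed partial charges to standard output:
```
obabel mol.sdf -O charged.mol2 --partialcharge mmff94
obabel mol.sdf -osmi --partialcharge eem --print
```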
Release Notes[¶](#release-notes)
---
### Open Babel 3.0.0[¶](#open-babel-3-0-0)
Released on 2019-10-10.
This is a major release. It fixes some long-standing issues affecting performance in terms of chemical accuracy and speed, and all users are recommended to upgrade. It also removes deprecated components and breaks the API in a few places. For information on migrating from the previous version, please see [Updating to Open Babel 3.0 from 2.x](index.html#migrating-to-3-0).
#### Notable changes[¶](#notable-changes)
* The babel program has been removed, and the replacement obabel should be used instead. The obabel program fixes some flaws with the original babel (not least that the user could accidentally overwrite an input file) and so has been preferred for many years.
* The Python bindings are now accessed via “from openbabel import pybel” or “from openbabel import openbabel”.
* Under the hood, the code for handling implicit hydrogens and kekulization has been entirely replaced in order to address problems with the original approach that had resulted in multiple bug reports over the years. As well as being accurate, the new approach is much faster.
* The speed of reading and writing SMILES has been improved by more than 50-fold.
* A faster and more accurate fragment-based 3D coordinate generation code has been added, part of Google Summer of Code 2018 and 2019, detailed in *J. Cheminf.* (2019) **11**, Art. 49. <https://doi.org/10.1186/s13321-019-0372-5>
* New functionality in the API:
+ A new class for managing reactions stored as OBMols (OBReactionFacade)
+ A new method to copy part of an OBMol into a new OBMol (OBMol::CopySubstructure)
#### New file formats[¶](#new-file-formats)
* Add basic support for RInChI (Reaction InChI) (by baoilleach, PR#1667)
* Added basic ADF Band and ADF DFTB readers (by psavery, PR#1793)
* Add support for COF format (Culgi object file) plus tests (by pbecherer, PR#1944)
* Add maeparser support to openbabel (by lorton, PR#1993)
#### New file format capabilities and options[¶](#new-file-format-capabilities-and-options)
* Improve svg ball and stick (by ghutchis, PR#360)
* Add an option to the canonical SMILES format to specify the timeout. (by timvdm, PR#386)
* Allow to set space group origin in PDB CRYST1 section (by afonari, PR#1558)
* Parse _space_group_symop_operation_xyz in mmcif (by afonari, PR#1578)
* Improve performance of SMILES parser (by baoilleach, PR#1589)
* Handle undervalent atoms and radicals in Mol files and Smiles (by baoilleach, PR#1626)
* Add support for agents to RXN file format (by baoilleach, PR#1656)
* Allow RSMI format to read partial reactions (by baoilleach, PR#1660)
* Add support for %(NNN) notation for SMILES ring closures (by baoilleach, PR#1677)
* By default, don’t perceive stereo when reading SMILES, but have the option (by baoilleach, PR#1696)
* Speed up the SMILES writer (by baoilleach, PR#1699)
* Faster SMILES: Replace std::endl by "\n" (by baoilleach, PR#1706)
* Speed up SMILES writer by replacement of SSSR in SMILES writer with a bounded BFS (by baoilleach, PR#1715)
* Speed up SMILES reading: don’t pre-scan the SMILES string for illegal characters (by baoilleach, PR#1716)
* Minor speedup in SMILES: avoid repeated calls to IsOption by caching some options (by baoilleach, PR#1718)
* Read reaction map from ChemDraw CDX files (by CamAnNguyen, PR#1720)
* Two minor SMILES speed improvements (by baoilleach, PR#1725)
* Speed up SMILES reading: Moved more inside the switch statement for SMILES parsing (by baoilleach, PR#1727)
* Speed up SMILES reading: In the SMILES reader, avoid allocating a BUFSIZE buffer, and the associated string copy (by baoilleach, PR#1728)
* Speed up SMILES writing: Make generation of the SMILES atom order vector optional (by baoilleach, PR#1712)
* Add support for using atom classes as Tinker atom types. (by ghutchis, PR#1734)
* Gaussformat reading electrostatic potentials (by mmghahremanpour, PR#1748)
* Reading Exact Polarizability from Gaussian log file (by mmghahremanpour, PR#1751)
* Gaussformat reading multiple charge models (by mmghahremanpour, PR#1752)
* Write atom occupancy (if present) to PDB (by afonari, PR#1799)
* Update reaction support in ChemDraw (by baoilleach, PR#1878)
* ADF DFTB: Add new detection string for ADF 2018 (by psavery, PR#1888)
* Update Gaussian format (by e-kwsm, PR#1969)
* Update URLs of specification of gromacs (by e-kwsm, PR#1974)
* Update URL of specification of MDL MOL (by e-kwsm, PR#1980)
* Add SMILES support for elements specified by 3-digit number, e.g. [#101] (by baoilleach, PR#1997)
#### Other new features and improvements[¶](#other-new-features-and-improvements)
* Include original when there are zero rotatable bonds in confab (by cowsandmilk, PR#370)
* Improve thread safety for global objects (by baoilleach, PR#381)
* Change the OBAromTyper from using SMARTS patterns to a switch statement (rebased) (by baoilleach, PR#1545)
* Keep count of implicit hydrogens instead of inferring them (by baoilleach, PR#1576)
* Obthermo update patch (by mmghahremanpour, PR#1598)
* Improve performance of element handling (by baoilleach, PR#1601)
* Implement the Daylight aromaticity model as described by <NAME> (by baoilleach, PR#1638)
* Allow multiple agents in OBReaction (by baoilleach, PR#1640)
* Clarify python examples (by theavey, PR#1657)
* Add support for wrapping GetRGB() call to return r, g, b params. (by ghutchis, PR#1670)
* Adding missing manpages (by merkys, PR#1678)
* Expose obfunctions api through python bindings (by cstein, PR#1697)
* Avoid logging messages that are taking time (by baoilleach, PR#1714)
* warning/error messages for fastindex when the structure file is compressed (by adalke, PR#1733)
* Refactor atom class to being data on an atom rather than on a molecule (by baoilleach, PR#1741)
* Add Molecule.make2D function (by eloyfelix, PR#1765)
* Change the behavior of OBMol.Separate so that it preserves atom order (by baoilleach, PR#1773)
* When calling OBMol.Separate, preserve whether aromaticity has been perceived (by baoilleach, PR#1800)
* Add OBMol::CopySubstructure (by baoilleach, PR#1811)
* Add OBMol::SetChainsPerceived(false) (by baoilleach, PR#1813)
* Add stereo + obfunctions + kekulize to ruby binding (by CamAnNguyen, PR#1824)
* Generate useful error messages if plugins can’t be found. (by dkoes, PR#1826)
* Allow public access to retrieve gradients (by ghutchis, PR#1833)
* Re-enable vector.clear() to allow wrapped std::vectors to be reused (by baoilleach, PR#1834)
* Implement reaction handling as part of OBMol (by baoilleach, PR#1836)
* Added rotors as a descriptor/filter. (by ghutchis, PR#1846)
* Keep aromaticity in EndModify() (by baoilleach, PR#1847)
* Fragment-based coordinate generation (by n-yoshikawa, PR#1850)
* Rebuild OBMM tool for interactive MM optimization (by ghutchis, PR#1873)
* Update fragment based builder (by n-yoshikawa, PR#1931)
* Refactor python bindings so that openbabel.py and pybel.py are within an openbabel folder (by baoilleach, PR#1946)
* Tidy setting/unsetting of molecule perception flags (by baoilleach, PR#1951)
* Remove outdated stereo code (by baoilleach, PR#1967)
* Remove OBBond::GetBO() and SetBO() (by baoilleach, PR#1953)
* Remove OBRandom from the public API (by baoilleach, PR#1954)
* Remove miscellanous headers from mol.h, atom.h and bond.h (by baoilleach, PR#1958)
* enhancements to obrms to support optimization of pose alignment (by dkoes, PR#1961)
* Remove GetGenericValueDef from OBGenericData (by baoilleach, PR#1964)
* Remove low-hanging deprecated methods (by baoilleach, PR#1968)
* Improve python script (by e-kwsm, PR#1970)
* Make pybel.Outputfile compatible with the `with` statement (by yishutu, PR#1971)
* Obrms enhancement (by dkoes, PR#1978)
* Move to a single function for setting/unsetting bond and atom flags (by baoilleach, PR#1965)
* Rename/add valence and degree methods (by baoilleach, PR#1975)
* Do not stroke around the (svg) text (by Artoria2e5, PR#2012)
* Add a warning message when both -p and -h options are set (by yishutu, PR#2031)
* “Bye bye babel” - remove the babel binary (by baoilleach, PR#1976)
* Add force field support for dielectric constants in charge terms. (by ghutchis, PR#2022)
#### Development/Build/Install Improvements[¶](#development-build-install-improvements)
* Change default build type to RELEASE and add -O3 switch (by baoilleach, PR#352)
* Add a default issue template for Open Babel - Suggestions welcome (by ghutchis, PR#383)
* Compile position independent code for shared libraries. (by susilehtola, PR#1575)
* Introduce std:isnan for older versions of MSVC (by mwojcikowski, PR#1586)
* Prepend to LD_LIBRARY_PATH instead of overwrite (by barrymoo, PR#1588)
* Changes needed to compile with C++17 (by arkose, PR#1619)
* Compiler version parsing and comparison from CMake 2.8 (by cowsandmilk, PR#1630)
* Create CODE_OF_CONDUCT.md (by ghutchis, PR#1671)
* Clarify option needed to generate SWIG bindings. (by jeffjanes, PR#1686)
* Correct spelling of file name for Perl bindings (by jeffjanes, PR#1687)
* In the Python bindings, avoid adding methods from the iterated object to the iterator itself (by baoilleach, PR#1729)
* Ensure portability to ARM platforms (by baoilleach, PR#1744)
* Switch to rapidjson library for JSON parsing/writing (by mcs07, PR#1776)
* Fix linking of python bindings on Mac (by mcs07, PR#1807)
* Using pillow instead of PIL (by hille721, PR#1822)
* Ignore compile warnings on inchi directory. (by ghutchis, PR#1864)
* Compile project in Cygwin without xtcformat (by bbucior, PR#1894)
* Hyperlink DOIs to preferred resolver (by katrinleinweber, PR#1909)
* For Travis builds, include output for build failures (by baoilleach, PR#1959)
* Add __init__.py to gitignore (by yishutu, PR#1972)
* Ignore in-source installation (by RMeli, PR#2027)
* Add a GitHub funding link to the open collective page. (by ghutchis, PR#2042)
#### Bug Fixes[¶](#bug-fixes)
* Fix for missing ZLIB on win32 (by philthiel, PR#357)
* Depict headers were missing in the installation (by tgaudin, PR#359)
* Avoid IndexError for plugins with empty names (by langner, PR#361)
* Fixed a few errors in space-groups.txt (by psavery, PR#367)
* SF #909 - Fix segfault when ReadMolecule() called with PubChem document but file extension was generic .xml (by derekharmon, PR#369)
* Preserve triple bond when reading SMILES with a triple bond in an aromatic ring (by baoilleach, PR#371)
* Fix bug #368: Python3.6 openbabel: No module named ‘DLFCN’ (by hseara, PR#372)
* Fastsearch 64 fix (by dkoes, PR#1546)
* Don’t try to install aromatic.txt as it is no longer present (by baoilleach, PR#1547)
* Make sure to add conformers *after* performing bond perception. (by ghutchis, PR#1549)
* Set default coordinates before doing bond perception. (by ghutchis, PR#1550)
* Ignore some non-functioning python SWIG bindings. (by djhogan, PR#1554)
* Remove delete statement. (by djhogan, PR#1556)
* Link libinchi with math library (by nsoranzo, PR#1564)
* Fix segfault in OBMol::GetSpacedFormula (by bbucior, PR#1565)
* Fix regression + minor cppcheck report (by serval2412, PR#1567)
* Convert tabs to spaces in testpdbformat.py (by adamjstewart, PR#1568)
* cppcheck: Condition ‘1==0’ is always false (by serval2412, PR#1572)
* UFF: Fix conversion constant (by aandi, PR#1579)
* Remove the change in resonance structure from the vinylogous carboxylic acid pH model (by kyle-roberts-arzeda, PR#1580)
* Fix wedge/hash in cyclopropyl (by fredrikw, PR#1582)
* Fix multifragment depiction (by fredrikw, PR#1585)
* Fix wrong spin multiplicity assignment (by nakatamaho, PR#1592)
* Change silicon to correct MM3 atom type (by keipertk, PR#1593)
* Fix pubchem JSON handling of enum types as ints (by mcs07, PR#1596)
* Correct MM3 carboxyl oxygen atom type definition (by keipertk, PR#1599)
* Fix for calculating implicit H count when reading SMILES (by baoilleach, PR#1606)
* Fix some small misspellings in the csharp bindings (by cmanion, PR#1608)
* Tweak the handling of implicit Hs when reading SMILES (by baoilleach, PR#1609)
* Fix underflow causing a noticeable delay when e.g. writing a molfile (by baoilleach, PR#1610)
* Fix install regression with element data (by bbucior, PR#1617)
* Added some missing formats to the static build (by psavery, PR#1622)
* In SiestaFormat, print warnings to cerr (by psavery, PR#1623)
* For SIESTA format, use obErrorLog instead of cerr (by psavery, PR#1627)
* Correct the spelling of the Frerejacque number in a comment (by baoilleach, PR#1629)
* Lowercase second element letter in PDB and test (by cowsandmilk, PR#1631)
* Remove erroneous -1 in switch statement (by baoilleach, PR#1632)
* Make sure to handle molecular total charge by default for keywords (by ghutchis, PR#1634)
* Added fix for OBMolAtomBFSIter in Python3 (by oititov, PR#1637)
* space-groups.txt: correct Hall symbol for C -4 2 b (by wojdyr, PR#1645)
* Reset path to empty in kekulization code (potential segfault) (by baoilleach, PR#1650)
* Correct handling of stereo when writing InChIs (by baoilleach, PR#1652)
* ECFP Fixup (by johnmay, PR#1653)
* Fix “folding” for fingerprints to larger bit sizes - #1654. (by ghutchis, PR#1658)
* Fix reading atom symbols from XSF file (by sencer, PR#1663)
* Minor fixes in the nwchem format reader (by xomachine, PR#1666)
* use isinstance to test if filename is bytes (by cowsandmilk, PR#1673)
* Fix bug found due to MSVC warning (by baoilleach, PR#1674)
* Fix MSVC warning about unused variable (by baoilleach, PR#1675)
* Correct handling of atom maps (by baoilleach, PR#1698)
* Fix #1701 - a GCC compiler error (by baoilleach, PR#1704)
* Remove some audit messages (by baoilleach, PR#1707)
* Fix bug when copying stereo during obmol += obmolB (by baoilleach, PR#1719)
* Fix uninitialized read in kekulize.cpp found by Dr Memory. (by baoilleach, PR#1721)
* Fixes for ring closure parsing (by baoilleach, PR#1723)
* Make sure that OBAtom::IsInRing always triggers ring perception if not set as perceived (by baoilleach, PR#1724)
* Fix code error found from @baoilleach compiler warnings (by ghutchis, PR#1736)
* Fix Python3 compatibility (by ghutchis, PR#1737)
* Fix ChemDraw CDX incremental value (by CamAnNguyen, PR#1743)
* Fix error in VASPformat found by static code analysis (by baoilleach, PR#1745)
* Fix for 1731. Store atom classes in CML atomids by appending _ATOMCLASS. (by baoilleach, PR#1746)
* Fix GCC warnings (by baoilleach, PR#1747)
* Fix warning in fastsearch substructure fingerprint screen (by baoilleach, PR#1749)
* Fix #1684 - string comparison does not work with numeric sd titles (by cowsandmilk, PR#1750)
* Fixing minor things for reading ESP from log files (by mmghahremanpour, PR#1753)
* Fix #1569 - OB 2.4.1 loses the second molecule in a HIN file (by yishutu, PR#1755)
* Fix TESTDIR definition to allow space in path (by mcs07, PR#1757)
* Fix regression. Ensure that asterisk is unbracketed when writing a SMILES string (by baoilleach, PR#1759)
* Fix MSVC warning about type conversion (by baoilleach, PR#1762)
* Fix SMILES parsing fuzz test failures from AFL (by baoilleach, PR#1770)
* Fix warning about size_t versus int cast (by baoilleach, PR#1771)
* A small improvement of a bugfix solving segfault when reading GAMESS output with vibrations (by boryszef, PR#1772)
* In the Python bindings, reset the DL open flags after importing _openbabel (by baoilleach, PR#1775)
* fix cdxml stereo bonds (by JasonYCHuang, PR#1777)
* Install obabel target if using static build (by torcolvin, PR#1779)
* Fix #1769 by correctly handling the mass difference field in MDL mol files (by baoilleach, PR#1784)
* Kekulize hypervalent aromatic N and S (by baoilleach, PR#1787)
* Pdbqt fix (by dkoes, PR#1790)
* Raise a warning when coordinate is NaN (by n-yoshikawa, PR#1792)
* Use the InChI values for the average atomic mass when reading/writing isotopes (by baoilleach, PR#1795)
* Fix compile failure after recent Molden commit (by baoilleach, PR#1796)
* Fix segfault due to running off the start of an iterator in PDBQT format (by baoilleach, PR#1797)
* Fix#1768: Segfault upon reading GAMESS outputs of DFTB3 calculations (by serval2412, PR#1798)
* Always ensure hybridization (by ghutchis, PR#1801)
* Fix #1786 by changing the return value of OBResidue::GetNum() (by baoilleach, PR#1804)
* Apply fixes from <NAME> to address int/double type warnings. (by baoilleach, PR#1806)
* Fix#1607: check dynamic cast return (by serval2412, PR#1815)
* Fixes #1282: check format input is provided (by serval2412, PR#1818)
* Fix#1331: avoid crash with Q-Chem fragment (by serval2412, PR#1820)
* Set default to read CIFs with specified coordinates, no wrapping. (by ghutchis, PR#1823)
* Fix#1056: remove a debug output (by serval2412, PR#1825)
* Get ECFP working (by baoilleach, PR#1829)
* Fix cdxml upside down format (by JasonYCHuang, PR#1831)
* Fix to CopySubstructure found when running over ChEMBL (by baoilleach, PR#1832)
* Fix#192: parse and use ‘-a’ flag for obrotate (by serval2412, PR#1835)
* Ensure carbonyl groups are checked at both 0 and 180. (by ghutchis, PR#1845)
* Ensure that the check for OBBond::IsInRing obeys the OBMol perception flags (by baoilleach, PR#1848)
* Simplify/fix behavior of OBAtom::GetResidue so that it behaves like other lazy properties (by baoilleach, PR#1849)
* Fixes #1851: check some limits when converting smi to sdf using –gen2D (by serval2412, PR#1852)
* Modify cleaning blank line behaviors (by yishutu, PR#1855)
* Ring membership of atoms and bonds was not being reset during perception (by baoilleach, PR#1856)
* Update qeq.txt (by mkrykunov, PR#1882)
* Support lone pair stereo on nitrogen as well as sulfur (by baoilleach, PR#1885)
* Changed indexing of fragments, should fix #1889 (by fredrikw, PR#1890)
* Avoid out-of-range access in OBMolBondBFSIter (by baoilleach, PR#1892)
* Fix OBChemTsfm wrapping of implicit H counts (by baoilleach, PR#1896)
* Updated the coordinate generation from templates. (by fredrikw, PR#1902)
* Fix incorrect use of memcpy. (by sunoru, PR#1908)
* Add SetChainsPerceived() after EndModify() in formats that add residues (by baoilleach, PR#1914)
* define isfinite removed. (by orex, PR#1928)
* Teach the isomorphism mapper to respect atom identity (by johnmay, PR#1939)
* Fix memory leak in OBSmartsPattern::Init() (by n-yoshikawa, PR#1945)
* Address CMake build warning about policy CMP0005 being set to OLD (by baoilleach, PR#1948)
* Fix clang warning about in-class init of a non-static data member (by baoilleach, PR#1949)
* Update bindings for changes to headers (by baoilleach, PR#1963)
* Fix randomly failing Python gradient test (by baoilleach, PR#1966)
* Exit with non-zero if an error occurs (by e-kwsm, PR#1973)
* Avoid non-finite bond vectors (by dkoes, PR#1981)
* Include babelconfig in vector3.h (by dkoes, PR#1985)
* Fix #1987: CMake failing at FindRapidJSON (by RMeli, PR#1988)
* fpsformat.cpp: compile bugfix header added. (by orex, PR#1991)
* Address Ubuntu bug in defining python install dir (by dkoes, PR#1992)
* PDB and PDBQT Insertion Code Fixes (by RMeli, PR#1998)
* Make pybel compatible with #1975 (by yishutu, PR#2005)
* H vector fix (by dkoes, PR#2010)
* Change forcefield.cpp so that steepest descent and conjugate gradient update maxgrad (by PeaWagon, PR#2017)
* Update coordinates in the fast option of obabel (by n-yoshikawa, PR#2026)
* Update the CSharp bindings (by baoilleach, PR#2032)
* Don’t make kekule SMILES the default in the GUI (by baoilleach, PR#2039)
* Bumping the major version requires more changes throughout the library. (by baoilleach, PR#2036)
* Fix reading of uninitialized data. (by dkoes, PR#2038)
* Remove minor version from some names (by baoilleach, PR#2040)
* Fixed alias expansion for files with multiple aliases (by fredrikw, PR#2035)
* Update doc (by e-kwsm, PR#1979)
* Fix compilation with GCC 4.8 (standard compiler on CentOS 7.5) (by baoilleach, PR#2047)
* Some tests (by dkoes, PR#2008)
#### Cast of contributors[¶](#cast-of-contributors)
aandi, adalke (<NAME>), adamjstewart (<NAME>), afonari (<NAME>), artoria2e5 (<NAME>), baoilleach (<NAME>), barrymoo (<NAME>), bbucior (<NAME>), boryszef (<NAME>), camannguyen (<NAME>), cmanion (<NAME>), cowsandmilk (<NAME>), cstein (<NAME>), derekharmon (<NAME>), djhogan (<NAME>), dkoes (<NAME>), e-kwsm (<NAME>), eloyfelix (<NAME>), fredrikw (<NAME>), ghutchis (<NAME>), hille721 (<NAME>), hseara (<NAME>), jasonychuang (<NAME>), jeffjanes (<NAME>), johnmay (<NAME>), katrinleinweber (<NAME>), keipertk (<NAME>), kyle-roberts-arzeda, langner (<NAME>), lorton (<NAME>), mcs07 (<NAME>), merkys (<NAME>), mkrykunov, mmghahremanpour (<NAME>), mwojcikowski (<NAME>), n-yoshikawa (Naruki Yoshikawa), nakatamaho (Nakata Maho), nsoranzo (<NAME>), oititov (<NAME>), orex (<NAME>), pbecherer (<NAME>), peawagon (Jen), philthiel (<NAME>), psavery (<NAME>), rmeli (<NAME>), serval2412 (<NAME>), sunoru, susilehtola (<NAME>), tgaudin (<NAME>), theavey (<NAME>), timvdm (<NAME>), torcolvin (<NAME>), wojdyr (<NAME>), xomachine (<NAME>), yishutu (<NAME>)
### Open Babel 2.4.0[¶](#open-babel-2-4-0)
Released on 2016-09-21.
Note that this release deprecates the babel executable in favor of obabel. A future release will remove babel entirely. For information on the differences, please see <http://openbabel.org/docs/current/Command-line_tools/babel.html>.
#### New file formats[¶](#new-file-formats)
* DALTON output files (read only) and DALTON input files (read/write) (<NAME>)
* JSON format used by ChemDoodle (read/write) (Matt Swain)
* JSON format used by PubChem (read/write) (Matt Swain)
* LPMD’s atomic configuration file (read/write) (<NAME>)
* The format used by the CONTFF and POSFF files in MDFF (read/write) (Kirill Okhotnikov)
* ORCA output files (read only) and ORCA input files (write only) (Dagmar Lenk)
* ORCA-AICCM’s extended XYZ format (read/write) (Dagmar Lenk)
* Painter format for custom 2D depictions (write only) (<NAME>)
* Siesta output files (read only) (<NAME>)
* Smiley parser for parsing SMILES according to the OpenSMILES specification (read only) (Tim Vandermeersch)
* STL 3D-printing format (write only) (Matt Harvey)
* Turbomole AOFORCE output (read only) (<NAME>)
* A representation of the VDW surface as a point cloud (write only) (Matt Harvey)
#### New file format capabilities and options[¶](#new-file-format-capabilities-and-options)
* AutoDock PDBQT: Options to preserve hydrogens and/or atom names (<NAME>)
* CAR: Improved space group support in .car files (kartlee)
* CDXML: Read/write isotopes (Roger Sayle)
* CIF: Extract charges (Kirill Okhotnikov)
* CIF: Improved support for space-groups and symmetries (<NAME>)
* DL_Poly: Cell information is now read (Kirill Okhotnikov)
* Gaussian FCHK: Parse alpha and beta orbitals (<NAME>)
* Gaussian out: Extract true enthalpy of formation, quadrupole, polarizability tensor, electrostatic potential fitting points and potential values, and more (<NAME> der Spoel)
* MDL Mol: Read in atom class information by default and optionally write it out (Roger Sayle)
* MDL Mol: Support added for ZBO, ZCH and HYD extensions (Matt Swain)
* MDL Mol: Implement the MDL valence model on reading (Roger Sayle)
* MDL SDF: Option to write out an ASCII depiction as a property (Noel O’Boyle)
* mmCIF: Improved mmCIF reading (<NAME>)
* mmCIF: Support for atom occupancy and atom_type (Kirill Okhotnikov)
* Mol2: Option to read UCSF Dock scores (<NAME>owski)
* MOPAC: Read z-matrix data and parse (and prefer) ESP charges (<NAME>)
* NWChem: Support sequential calculations by optionally overwriting earlier ones (Dmitriy Fomichev)
* NWChem: Extract info on MEP(IRC), NEB and quadrupole moments (Dmitriy Fomichev)
* PDB: Read/write PDB insertion codes (Steffen Möller)
* PNG: Options to crop the margin, and control the background and bond colors (<NAME>)
* PQR: Use a stored atom radius (if present) in preference to the generic element radius (Zhixiong Zhao)
* PWSCF: Extend parsing of lattice vectors (<NAME>)
* PWSCF: Support newer versions, and the ‘alat’ term (<NAME>)
* SVG: Option to avoid addition of hydrogens to fill valence (Lee-Ping)
* SVG: Option to draw as ball-and-stick (Jean-Noël Avila)
* VASP: Vibration intensities are calculated (<NAME>, <NAME>)
* VASP: Custom atom element sorting on writing (Kirill Okhotnikov)
#### Other new features and improvements[¶](#other-new-features-and-improvements)
* 2D layout: Improved the choice of which bonds to designate as hash/wedge bonds around a stereo center (<NAME>)
* 3D builder: Use bond length corrections based on bond order from Pyykko and Atsumi (<https://doi.org/10.1002/chem.200901472>) (Geoff Hutchison)
* 3D generation: “–gen3d”, allow user to specify the desired speed/quality (Geoff Hutchison)
* Aromaticity: Improved detection (Geoff Hutchison)
* Canonicalisation: Changed behaviour for multi-molecule SMILES. Now each molecule is canonicalized individually and then sorted. (Ge<NAME>/<NAME>)
* Charge models: “–print” writes the partial charges to standard output after calculation (Geoff Hutchison)
* Conformations: Confab, the systematic conformation generator, has been incorporated into Open Babel (David Hall/Noel O’Boyle)
* Conformations: Initial support for ring rotamer sampling (Geoff Hutchison)
* Conformer searching: Performance improvement by avoiding gradient calculation and optimising the default parameters (Geoff Hutchison)
* EEM charge model: Extend to use additional params from <https://doi.org/10.1186/s13321-015-0107-1> (<NAME>ek)
* FillUnitCell operation: Improved behavior (<NAME>)
* Find duplicates: The “–duplicate” option can now return duplicates instead of just removing them (<NAME>)
* GAFF forcefield: Atom types updated to match Wang et al. J. Comp. Chem. 2004, 25, 1157 (<NAME>)
* New charge model: EQeq crystal charge equilibration method (a speed-optimized crystal-focused charge estimator, <http://pubs.acs.org/doi/abs/10.1021/jz3008485>) (<NAME>)
* New charge model: “fromfile” reads partial charges from a named file (<NAME>)
* New conversion operation: “changecell”, for changing cell dimensions (<NAME>)
* New command-line utility: “obthermo”, for extracting thermochemistry data from QM calculations (<NAME>)
* New fingerprint: ECFP (<NAME>/<NAME>/<NAME>)
* OBConversion: Improvements and API changes to deal with a long-standing memory leak (<NAME>)
* OBAtom::IsHBondAcceptor(): Definition updated to take into account the atom environment (<NAME>)
* Performance: Faster ring-finding algorithm (<NAME>)
* Performance: Faster fingerprint similarity calculations if compiled with -DOPTIMIZE_NATIVE=ON (<NAME>/<NAME>)
* SMARTS matching: The “-s” option now accepts an integer specifying the number of matches required (<NAME>)
* UFF: Update to use traditional Rappe angle potential (<NAME>)
#### Language bindings[¶](#language-bindings)
* Bindings: Support compiling only the bindings against system libopenbabel (Reinis Danne)
* Java bindings: Add example Scala program using the Java bindings (Reinis Danne)
* New bindings: PHP (<NAME>)
* PHP bindings: BaPHPel, a simplified interface (<NAME>)
* Python bindings: Add 3D depiction support for Jupyter notebook (<NAME>)
* Python bindings, Pybel: calccharges() and convertdbonds() added (<NAME>, <NAME>)
* Python bindings, Pybel: compress output if filename ends with .gz (Maciej Wójcikowski)
* Python bindings, Pybel: Residue support (Maciej Wójcikowski)
#### Development/Build/Install Improvements[¶](#development-build-install-improvements)
* Version control: move to git and GitHub from subversion and SourceForge
* Continuous integration: Travis for Linux builds and Appveyor for Windows builds (David Lonie and Noel O’Boyle)
* Python installer: Improvements to the Python setup.py installer and “pip install openbabel” (David Hall, M<NAME>, Joshua Swamidass)
* Compilation speedup: Speed up compilation by combining the tests (Noel O’Boyle)
* MacOSX: Support compiling with libc++ on MacOSX (Matt Swain)
#### Cast of contributors[¶](#cast-of-contributors)
<NAME>, <NAME>, <NAME>, arkose, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, Lee-Ping, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>
### Open Babel 2.3.1[¶](#open-babel-2-3-1)
Released on 2011-10-14.
This release represents a major bug-fix release and is a stable upgrade, strongly recommended for all users of Open Babel. Many bugs and enhancements have been added since the 2.3.0 release.
After 10 years, we finally published a paper discussing Open Babel. Please consider citing this work if you publish work which used Open Babel: <NAME>, <NAME>, <NAME>, <NAME>, <NAME> and <NAME>. “Open Babel: An open chemical toolbox.” Journal of Cheminformatics 2011, 3:33. <http://www.jcheminf.com/content/3/1/33>
#### What’s new from 2.3.0[¶](#what-s-new-from-2-3-0)
* Better support for unknown stereochemistry, including a “wobbly” bond in 2D depiction.
* Many fixes for rare bugs with stereochemical conversions, including unusual valences.
* Significantly improved 2D depiction code, improving performance and cis/trans stereochemical accuracy
* Added support for direct 2D depiction to PNG files using the Cairo library, if available.
* PNG files from Open Babel contain molecular information and can be read to give the MDL Molfile.
* SVG files with 2D depiction can now include a grid of molecules with embedded JavaScript to zoom and scroll.
* Molecular formulas now include the total charge (e.g., HCO2-)
* Added the EEM partial charge model from Bultinck et al.
* Fixed problems with FastSearch databases larger than 4GB, now checking for large files.
* Improved performance with force field minimization, particularly the UFF and GAFF methods.
* Several MMFF94 atom typing bugs fixed.
* Updated GAFF parameters from the AmberTools distribution.
* Improvements in 3D coordinate generation, particularly more accurate sp3 bond angles
* Fixed tests for auto-typing molecules with force fields when running through different isomers.
* Improvements in scripting bindings, particularly Python, Ruby, and Java
* Pybel now uses the built-in 2D depiction, and no longer needs OASA.
* Added initial support for MM3 atom typing with the Tinker package
* Significant bug fixes for the PDBQT format.
* Reading FASTA files can now generate 3D coordinates for single-stranded DNA in addition to the default double-strand.
* Support for reading/writing unit cell information from MOPAC files.
* Support for re-numbering SMILES by specifying the first and last atoms with -xf and -xl flags.
* Better support for InChI -> InChI key generation by direct conversion, rather than re-perception of the InChI.
* Fix for rare stack overflow crash in SMARTS perception.
* Improved UNIX man pages.
* Many bug fixes and small enhancements
#### New File Formats[¶](#new-file-formats)
* Import and Export:
  - Gromacs GRO
* Import:
  - ABINIT
  - XCrySDen XSF
* Export:
  - InChI Key
### Open Babel 2.3.0[¶](#open-babel-2-3-0)
Released on 2010-10-23.
This release represents a major update and should be a stable upgrade,
strongly recommended for all users of Open Babel. Highlights include a completely rewritten stereochemistry engine, Spectrophore fingerprint generation, 2D depiction, improved 3D coordinate generation, conformer searching, and more. Many formats are improved or added, including CIF, PDBQT, SVG, and more. Improved developer API and scripting support and many, many bug fixes are also included.
#### What’s new from 2.2.3[¶](#what-s-new-from-2-2-3)
* Completely rewritten stereochemistry perception, including support for tetrahedral, square planar, and higher-order stereochemistry.
* Dramatically improved canonicalization algorithm (Note that in general, canonical SMILES have changed since the 2.2.x release.)
* 2D depiction, including SVG vector graphics generation using code from MCDL.
* New Spectrophore generation, contributed by Silicos NV.
* New ChargeMethod API including support for partial charge assignment from Gasteiger, MMFF94, QEq, QTPIE methods and plugin interface for adding more.
* Improved 3D coordinate generation.
* New conformer generation framework, including support for diverse conformer generation and genetic algorithm lowest-energy searching.
* Improved user documentation.
* Improved aromaticity / Kekule bond assignment.
* Improved unit test suite using the CMake-based CTest program.
* Improved support for crystallographic unit cells (e.g., in CIF format).
* Improved UFF force field method, including hypervalent 5, 6, 7 and higher coordination numbers.
* Support for the GAFF (Generalized Amber Force Field) method.
* Support for reading geometry optimizations as multiple conformers from Gaussian, GAMESS-US, and other quantum chemistry packages.
* Support for reading molecular orbital energies from quantum chemistry formats.
* Several memory leaks fixed.
* Fixed many compiler warnings.
* Fixed support for MinGW and Cygwin environments.
* Fixed bugs with Gaussian 09 output files.
* Latest released version of the InChI library (1.0.3) generating standard InChI.
* Many more bug fixes and small feature improvements.
#### New Command-Line Operations[¶](#new-command-line-operations)
* --canonical: Output atoms in canonical order for any file format (i.e., not just SMILES)
* --conformer: Run a conformer search on the input molecules (has many options)
* --gen2D: Generate a 2D depiction of the molecule
* --partialcharge <model>: Use the partial charge model supplied to generate charges (i.e., instead of default Gasteiger sigma model)
* --sort <descriptor>: Sort molecules by a specified descriptor
* --unique: Only output unique molecules (as determined by InChI generation)
#### New File Formats[¶](#new-file-formats)
Import & Export:
- DL-POLY CONFIG
- FHIaims XYZ
- PDBQT
Import only:
- DL-POLY HISTORY
- GULP output
- PWscf output
- Text
Export only:
- MNA (Multilevel Neighborhoods of Atoms)
- SVG vector graphics
### Open Babel 2.2.3[¶](#open-babel-2-2-3)
Released on 2009-07-31.
#### What’s new from 2.2.2[¶](#what-s-new-from-2-2-2)
This release represents an important bug-fix upgrade, strongly recommended for all users of Open Babel.
* Fixed bug in fingerprints in 2.2.2, where the default fingerprint format and bit length was changed inadvertently.
* Fixed detection of shared_ptr in tr1/memory.
* Fixed additional aromaticity / Kekule assignment bugs.
* Fixed several bugs in the MMCIF format.
* Additional bug fixes.
### Open Babel 2.2.2[¶](#open-babel-2-2-2)
Released on 2009-07-04.
#### What’s new from 2.2.1[¶](#what-s-new-from-2-2-1)
This release represents a major bug-fix release and is a stable upgrade, strongly recommended for all users of Open Babel. While there may not be many new features, many crashes and other bugs have been fixed since 2.2.1.
* Upgraded to the new InChI 1.02 release to produce standardized InChI and InChIKey output.
* Fixed many stereochemistry errors when reading/writing SMILES. This is part of a larger project which will be finished in the 2.3 release.
* Fixed compilation and installation on Cygwin and MinGW platforms.
* Significantly improved aromaticity and Kekule bond assignment.
* Improved 2D -> 3D coordinate generation
* Improved coordinate generation using the --gen3d command-line operation
* Improved performance for coordinate generation.
* New --fillUC command-line operation for babel.
* Fixes to pH-dependent hydrogen addition.
* Added support for reading vibrational data from Molden, Molpro, and NWChem output files.
* Updated atomic radii from recent theoretical calculations.
* Fixed bug when reading gzip-compressed Mol2 or XML files.
* Close files after an error. Fixes a bug with Pybel where files would remain open.
* Many more bug fixes and small feature improvements.
#### New File Formats[¶](#new-file-formats)
Import & Export:
- Molpro input and output.
- VASP coordinate files (CONTCAR and POSCAR).
### Open Babel 2.2.1[¶](#open-babel-2-2-1)
Released on 2009-03-01.
#### What’s new from 2.2.0[¶](#what-s-new-from-2-2-0)
This release represents a major bug-fix release and is a stable upgrade, strongly recommended for all users of Open Babel. While there may not be many new features, many crashes and other bugs have been fixed since 2.2.0.
* Improved scripting interfaces, including Python 3 support and improved Java and C# support.
* Added support for MACCS fingerprints. Thanks to the RDKit project.
* Many fixes and enhancements to the force field code. In particular,
the UFF force field implementation should handle many more molecules.
* Improved 3D coordinate generation, particularly with ring fragments. You can give this a try with the obgen utility.
* Fixed a variety of PDB import errors with atom types.
* Added support for reading charges and radii from PQR file formats.
* Added support for reading and writing unit cells in PDB formats.
* New “output” file format for taking generic “.out”, “.log”, and
“.dat” files and reading with appropriate file type based on contents. Currently works extremely well for quantum chemistry packages.
* Added improved error handling and reporting when unable to load file formats.
* Improved CIF file format support.
* Many, many, many additional bug fixes and small enhancements.
### Open Babel 2.2.0[¶](#open-babel-2-2-0)
Released on 2008-07-04.
#### What’s new from 2.1.1[¶](#what-s-new-from-2-1-1)
* New support for 3D coordinate generation using the OBBuilder class.
Note that this code directly supports non-chiral compounds; stereochemistry may or may not be supported in this release.
* Significantly faster force fields (up to 200x faster) and support for constrained optimization.
* New force fields, including complete UFF, MMFF94, and MMFF94s implementations.
* Monte Carlo conformer search support, including a new obconformer tool.
* Unified framework for plugin classes, including easy-to program file formats, descriptors, filters, force fields, fingerprints, etc.
* A new “descriptor” plugin framework for QSAR descriptors, etc.
Initial descriptors include hydrogen-bond donors, acceptors,
octanol/water partition, topological polar surface area, molar refractivity, molecular weight, InChI, SMARTS, titles, Lipinski Rule of Five, etc.
* A new “filter” plugin framework for selecting molecules by title,
molecular weight, etc.
* Facility to add new “ops”, command-line options or operations on the conversion process, as plugin code.
Initial operations include 3D coordinate generation, tautomer standardization, and addition of polar hydrogens.
* Code for integrating Open Babel and the BOOST graph library.
* Improved scripting support, including new bindings for C# and improved Java, Ruby, Python, and Perl bindings.
* Space group support and thoroughly revised and improved CIF format.
* Initial support for 3D point group symmetry perception.
* Improved support for “grids” or “cubes” of molecular data, such as from quantum mechanics programs. (See below for supported file formats.)
* Initial support for reading trajectories and animations.
* Improved support for reaction formats, including CML, RXN, and Reaction SMILES.
* Improved residue handling in PDB and Mol2 formats.
* Improved pH-dependent hydrogen addition.
* Latest released version of the InChI library, including use of the latest “preferred” options for InChI generation.
* Support for the cross-platform CMake build system.
* File format modules are now installed in a version-specific directory on unix, preventing problems between 2.2.x and 2.1.x (or older) plugin libraries.
* Framework to support “aliases” for group abbreviations, partially implemented for MDL formats.
* Many more bug fixes and small feature improvements.
#### New File Formats[¶](#new-file-formats)
Import & Export:
- Chemkin
- Gaussian Cube
- Gaussian Z-matrix
- GROMACS xtc trajectories
- MCDL
- mmCIF
- OpenDX cube (e.g., from APBS)
- Reaction SMILES
Import only:
- Accelrys/MSI Cerius II MSI text format
- ADF output
- ADF Tape41 ASCII data
- GAMESS-UK input and output
- Molden structure
- PNG (for embedded chemical data)
- PQR
Export only:
- MSMS input
- ADF input
- InChI Keys
### Open Babel 2.1.1[¶](#open-babel-2-1-1)
Released on 2007-07-07.
#### What’s new from 2.1.0[¶](#what-s-new-from-2-1-0)
* Improved scripting support, including dictionary-support for OBGenericData in pybel, casting from OBUnitCell, etc. Improved access to OBRings from OBMol.GetSSSR().
* Added support for descriptors (e.g., PSA, logP) from scripting interfaces.
* Added support for reading all PDB records (beyond current atom and bond connections). Records not handled directly by Open Babel are added as key/value pairs through OBPairData.
* Added a new configure flag --with-pkglibdir to allow Linux package distributors to define version-specific directories for file format plugins.
* Fixed a bug where chirality information was not output for canonical SMILES generated from 3D files.
* Fixed problems with new line-ending code. Now correctly reads DOS and old Mac OS files with non-UNIX line endings.
* Correctly rejects SMILES with incorrect ring closures. Thanks to <NAME> for the report.
* Fixed a crash when outputting canonical SMILES.
* Fixed a crash when converting from SMILES to InChI.
* Fixed a crash when reading some PDB files on Windows.
* Fixed a crash when reading invalid MDL/SDF files.
* Fixed a bug which made it impossible to read some GAMESS files.
* Fixed a problem when reading ChemDraw CDX files on Mac OS X.
* A large number of additional fixes, including some rare crashes.
### Open Babel 2.1.0[¶](#open-babel-2-1-0)
Released on 2007-04-07.
#### What’s new from 2.0.2[¶](#what-s-new-from-2-0-2)
* Now handles molecules with >65536 atoms or bonds. Some PDB entries,
in particular, have such large molecular systems.
* New features for molecular mechanics force fields, including energy evaluation and geometry optimization. Ultimately, this will enable coordinate generation and refinement for SMILES and other formats.
(A flexible force field framework is available for developers.)
* Implementation of the open source Ghemical all atom force field.
* Framework for canonical atom numbering, including a new canonical SMILES format.
* New support for Ruby and Java interfaces to the Open Babel library.
* Improved scripting interfaces through Perl and Python, including the new “pybel”
module with a more Python-like syntax.
* Automatically handles reading from text files with DOS or Mac OS 9 line endings.
* Many enhancements to the Open Babel API: See the Developers API Notes for more information.
* New obenergy tool - evaluate the energy of a molecule using molecular mechanics.
* New obminimize tool - optimize the geometry of structures using molecular mechanics.
* Improved obprop tool - outputs a variety of molecular properties including Topological Polar Surface Area (TPSA), Molar Refractivity (MR), and logP.
* The babel tool can now set program keywords for some quantum mechanics formats from the command-line, including: GAMESS, Gaussian, Q-Chem, and MOPAC. (This feature can also be accessed by developers and expanded to other formats.)
* New options for babel tool, including:
-e for continuing after errors
-k for translating computational keywords (e.g., GAMESS, Gaussian, etc.)
--join to join all input molecules into a single output
--separate to separate disconnected fragments into separate molecular records
-C (combine mols in first file with others having the same name)
--property to add or replace a property (e.g., in an MDL SD file)
--title to add or replace the molecule title
--addtotitle to append text to the current molecule title
--addformula to append the molecular formula to the current title
* Many more bug fixes and small feature improvements.
#### New File Formats[¶](#new-file-formats)
> Import & Export:
> Carine’s ASCII Crystal (ACR)
> ChemDraw CDX & CDXML
> Crystallographic Interchange Format (CIF)
> Fasta Sequence
> Thermo Format
> Import:
> Gaussian fchk
> InChI
> Export:
> Open Babel MolReport
> Titles
### Open Babel 2.0.2[¶](#open-babel-2-0-2)
Released on 2006-07-24.
#### What’s new from 2.0.1[¶](#what-s-new-from-2-0-1)
* Substantial fixes to the SMILES and SMARTS parsing support, thanks to a variety of bug reports.
* A variety of fixes to aromaticity perception and Kekule form assignment.
* Fixed gzip support, broken in version 2.0.1 inadvertently.
* Output a warning when a multi-molecule file is converted to a single-molecule format.
* Better support for command-line tools such as obgrep on Cygwin.
* Fixed a variety of crashes.
* Countless other bug fixes.
### Open Babel 2.0.1[¶](#open-babel-2-0-1)
Released on 2006-04-17.
#### What’s new from 2.0.0[¶](#what-s-new-from-2-0-0)
* Support for dynamic building on the Cygwin environment. This fixes a long-standing problem that made Open Babel useless to Cygwin users.
* Fixed a variety of memory leaks and improved overall memory use.
More work to reduce memory consumption is underway for the 2.1 release.
* Improved Perl and Python scripting wrappers, including many bug-fixes.
* Fixes to the “make check” test suite, which should prevent problems running before babel is installed.
* Fixed compilation problems with AIX, Fedora Core 4, and the newly-released GCC-4.1.
* Fixed several reported compilation problems with Windows builds using VisualC++.
* Fixed several reported crashes.
* Fixed problems with the Turbomole format, thanks to <NAME>.
* Fixed a bug with PDB files with coordinates < -1000 Ang.
* Improved support for the Sybyl mol2 format, thanks to <NAME>.
* Fixed a variety of typos in the API documentation.
* Countless bug fixes.
### Open Babel 2.0[¶](#open-babel-2-0)
Released on 2005-11-26.
#### What’s new from 1.100.2[¶](#what-s-new-from-1-100-2)
This release represents Open Babel’s fourth “birthday” and a milestone for a stable, flexible interface for developers and users alike.
* New conversion framework. The new framework allows dynamic loading/unloading of file translator modules (i.e., shared libraries, DLLs, DSO, etc.). More importantly, it facilitates adding new formats, since each format is self-contained and no editing of other files is required.
* Improved support for XML chemistry formats, including CML and PubChem XML.
* Support for fingerprinting and calculation of Tanimoto coefficients for similarity consideration.
(A flexible fingerprint framework is available for developers.)
* New support for Perl and Python “wrappers” of the Open Babel library.
* Many enhancements to the Open Babel API: See the Developers API Notes for more information. Some code will require updating, see the Developer’s Migration Guide for details.
* Support for automatically reading .gz compressed files.
(e.g., 1abc.pdb.gz is uncompressed and treated as a PDB file)
Use of the -z flag creates gzip-compressed output files.
* Support for the new IUPAC InChI identifiers.
* Improved bond order typing, including flexible SMARTS matching in bondtyp.txt.
* New Kekulization routine – improves aromaticity detection in aromatic amines like pyrroles, porphyrins, etc.
* Improved support for radicals and spin multiplicity, including assignment of hydrogens to radicals.
* Improved support for 2D vs. 3D file formats.
* New error logging framework keeps an “audit log” of changes to files
(hydrogen addition, bond order assignment) and different levels of error reporting / debugging.
Use the "--errorlevel 4" flag to access this information.
* Improved atom typing and hydrogen addition rules.
* Improved obfit utility will output RMSD and find matches with the best RMSD.
* Updated isotope data from 2003 IUPAC standard.
* Updated elemental data from the Blue Obelisk Data Repository.
(project started, in part, to validate the old Open Babel data)
* Improved z-matrix code (CartesianToInternal / InternalToCartesian).
* Countless bug fixes.
#### New File Formats[¶](#new-file-formats)
* Import & Export:
  ChemDraw CT (Connection Table)
  CML Reaction files
  MDL Molfile V3000
  MDL Rxn files
  Open Babel free-form fractional (crystallographic coordinates)
  Open Babel fastsearch database format
  Open Babel fingerprint formats
  PCModel format
  YASARA.org YOB format
  Turbomole
  Improved CML support
  Improved Gaussian 98/03 support
  Improved SMILES import / export
* Import-Only:
  PubChem XML
* Export-Only:
  MPQC input
  Open Babel "copy" format (i.e., copy the raw input file)
  Sybyl MPD descriptor format
  IUPAC InChI descriptor
* Changed formats:
+ MMADS - eliminated
+ bin - OpenEye binary v 1, eliminated
+ GROMOS96 - changed from separate g96a & g96nm types to a
unified g96 type. Defaults to output Angstroms, Use -xn
to output nm.
+ Titles - eliminated – can be produced with SMILES -xt
### Open Babel 1.100.2[¶](#open-babel-1-100-2)
Released on 2004-02-22.
#### What’s new from 1.100.1[¶](#what-s-new-from-1-100-1)
> * Shared library (version 0:0:0) built by default on POSIX systems
> (e.g. Linux, BSD, Mac OS X…)
> * Fixed installation of header files. The headers in the math/
> subdirectory were not installed alongside the other headers.
> * Added tools/ directory with small examples of using libopenbabel:
> * obgrep: Use SMARTS patterns to grep through multi-molecule files.
> * obfit: Use SMARTS patterns to align molecules on substructures.
> * obrotate: Rotate a torsional bond matching a SMARTS pattern.
> * Improved PDB support: uses HETATM records more appropriately, attempts to
> determine chain/residue information if not available.
> * Fixed a variety of bugs in ShelX support.
> * Added support for handling atom and molecule spin multiplicity.
> * Updated documentation – not yet complete, but significantly improved.
> * Fixed major omissions in CML readers and writers. All versions of CML are now
> supported (CML1/2 and array/nonArray). Also added *.bat
> file for roundtripping between these formats for both 2- and 3-D data.
> Fixed bugs in test/cmltest/cs2a.mol.cml.
> * Building and running the test-suite in a build-directory other than the
> source-directory is now fully supported.
> * Support for the Intel C++ Compiler on GNU/Linux.
> * Miscellaneous fixes to make it easier to compile on non-POSIX machines.
#### New File Formats[¶](#new-file-formats)
> -Export: Chemtool
> Chemical Resource Kit (CRK) 2D and 3D
> Parallel Quantum Solutions (PQS)
> -Import: CRK 2D and 3D
> PQS
### Open Babel 1.100.1[¶](#open-babel-1-100-1)
Released on 2003-6-24.
#### What’s new from 1.100.0[¶](#what-s-new-from-1-100-0)
> * Much better bond typing overall for files converted from formats
> without bond information (e.g. XYZ, QM codes). Fixed some bugs in
> 1.100.1 and added additional improvements.
> * Support for the command-line “babel” program to convert some or
> all structures in a file with multiple molecules. By default this
> version will convert all molecules in a file. To change this, use
> the -f and -l command-line options as documented in the man page.
> * Isotope support, including exact masses in the “report” file
> format and SMILES data.
> * Updated API documentation.
> * Support for the Borland C++ compiler.
> * Fixed a variety of bugs in the PDB file format support, including
> better bond typing.
> * Support for output of residue information in the Sybyl Mol2 file
> format.
> * Some support for conversion of unit cell information, both in the
> library and in some file formats (i.e. DMol3, Cacao).
> * Coordinates now use double-precision floating point libraries for
> greater accuracy in conversions.
> * Fixed a variety of bugs uncovered in roundtrip testing.
> * Fixed a bug when attempting to perceive bond information on 2D
> structures.
> * Fixed several rare bugs that could cause segmentation faults.
#### New File Formats[¶](#new-file-formats)
> -Import: ShelX
> -Export: ZINDO input
### Open Babel 1.100.0[¶](#open-babel-1-100-0)
Released on 2002-12-12.
#### What’s new from 1.99[¶](#what-s-new-from-1-99)
> * Bond order typing is performed when importing from formats with no notion of
> bonds (quantum chemistry programs, XYZ, etc.).
> * Now better conforms to the ISO C++ standard, should compile on most modern C++ compilers.
> * Improved test suite, including “roundtrip” testing, ensuring more accurate translations.
> * Support for the Chemical Markup Language (CML) and other file formats. (see below)
> * Improved PDB support – should read PDB files more accurately and hew closer to the current PDB standard for export.
> * Improved Gaussian input generation.
> * Added support for the Chemical MIME standards, including command-line switches.
> * Added support for using the babel program as a pipe for a “translation filter” for other programs.
> * Can add hydrogen atoms based on pH.
> * Fixed a variety of memory leaks, sometimes causing other bugs.
> * Fixed a wide variety of bugs in various file formats.
> * Faster SMARTS matching and some overall speedups across the program.
> * API documentation using the Doxygen system.
> * Of course there are *many* other bug-fixes and improvements.
#### New File Formats[¶](#new-file-formats)
> -Import: NWChem Output
> -Export: POV-Ray, NWChem Input
> -Both: CML, ViewMol, Chem3D
### Open Babel 1.99[¶](#open-babel-1-99)
Released on 2002-1-29.
The Open Babel team is pleased to announce the release of Open Babel 1.99, a first beta release for the 2.0 version of the free, open-source replacement for the Babel chemistry file translation program.
At the moment, the beta release is not a drop-in replacement for babel as some file formats are not implemented and bond orders are not calculated for QM file formats.
Open Babel includes two components, a command-line utility and a C++ library. The command-line utility is intended to be used as a replacement for the original babel program, to translate between various chemical file formats. The C++ library includes all of the file-translation code as well as a wide variety of utilities to foster development of other open source chemistry software.
github.com/1Password/onepassword-operator | go | Go | README
---
![](https://blog.1password.com/posts/2021/secrets-automation-launch/header.svg)
1Password Connect Kubernetes Operator
===
Integrate [1Password Connect](https://developer.1password.com/docs/connect) with your Kubernetes Infrastructure
[![Get started](https://user-images.githubusercontent.com/45081667/226940040-16d3684b-60f4-4d95-adb2-5757a8f1bc15.png)](https://github.com/1Password/onepassword-operator#-get-started)
---
The 1Password Connect Kubernetes Operator provides the ability to integrate Kubernetes Secrets with 1Password. The operator also handles autorestarting deployments when 1Password items are updated.
### ✨ Get started
#### 🚀 Quickstart
1. Add the [1Password Helm Chart](https://github.com/1Password/connect-helm-charts) to your repository.
2. Run the following command to install Connect and the 1Password Kubernetes Operator in your infrastructure:
```
helm install connect 1password/connect --set-file connect.credentials=1password-credentials-demo.json --set operator.create=true --set operator.token.value=<your connect token>
```
3. Create a Kubernetes Secret from a 1Password item:
```
apiVersion: onepassword.com/v1
kind: OnePasswordItem
metadata:
  name: <item_name> # this name will also be used for naming the generated kubernetes secret
spec:
  itemPath: "vaults/<vault_id_or_title>/items/<item_id_or_title>"
```
Deploy the OnePasswordItem to Kubernetes:
```
kubectl apply -f <your_item>.yaml
```
Check that the Kubernetes Secret has been generated:
```
kubectl get secret <secret_name>
```
#### 📄 Usage
Refer to the [Usage Guide](https://github.com/1Password/onepassword-operator/blob/v1.8.0/USAGEGUIDE.md) for documentation on how to deploy and use the 1Password Operator.
### 💙 Community & Support
* File an [issue](https://github.com/1Password/onepassword-operator/issues) for bugs and feature requests.
* Join the [Developer Slack workspace](https://join.slack.com/t/1password-devs/shared_invite/zt-1halo11ps-6o9pEv96xZ3LtX_VE0fJQA).
* Subscribe to the [Developer Newsletter](https://1password.com/dev-subscribe/).
### 🔐 Security
1Password requests you practice responsible disclosure if you discover a vulnerability.
Please file requests via [**BugCrowd**](https://bugcrowd.com/agilebits).
For information about security practices, please visit the [1Password Bug Bounty Program](https://bugcrowd.com/agilebits).
sheetsync | readthedoc | Unknown | SheetSync Documentation
Release 0.2.3
<NAME>
Oct 03, 2017
Contents

1.1 Installation
1.2 Setting up OAuth 2.0 access
    1.2.1 New Project
    1.2.2 Create a new Client ID
    1.2.3 Enable Drive API
1.3 Injecting data to a Google sheet
2.1 Customizing the spreadsheet
    2.1.1 Key Column Headers
    2.1.2 Templates for Formatting
    2.1.3 Folders
    2.1.4 Formulas
    2.1.5 Synchronizing data
2.2 Taking backups
2.3 Debugging
3.1 Sheet
3.2 UpdateResults
3.3 ia_credentials_helper

A python library to create, update and delete rows of data in a google spreadsheet.
CHAPTER 1

Getting Started

SheetSync is a python library to create, update and delete rows of data in a google spreadsheet.

Installation

Install from PyPi using pip:

    pip install sheetsync

Or you can clone the git repo and install from the code:

    git clone git@github.com:mbrenig/sheetsync.git LocalSheetSync
    pip install LocalSheetSync

Note, you may need to run the commands above with sudo.
Setting up OAuth 2.0 access

In May 2015 Google retired old API access methods, and recommended users migrate to OAuth 2.0. OAuth 2.0 is better for security and privacy but it means getting started with sheetsync involves a bit of extra configuration.

The steps below (written in 2015) guide you through API configuration and a simple script to manipulate a Google sheet. They will take around 20 minutes to complete.

Warning: This tutorial is designed to get you using sheetsync quickly. It is insecure because your client secret is stored in plain text. If someone obtains your client secret, they could use it to consume your quota, incur charges or request access to user data.

Before using sheetsync in production you should learn about Client IDs and replace the ia_credentials_helper() function with your own function that manages authentication and creates an OAuth2Credentials object.
New Project

Start by setting up a new project via Google's developer console, console.developers.google.com.

Pick a project name.

Create a new Client ID

From your new project's configuration panel, in the console, select "Credentials" from the lefthand menu and then "Create new Client ID" for OAuth.

For this tutorial, choose the type Installed application.

The consent screen is what users will see when the sheetsync script asks for access to their Google drive.

Finally select "Other" for Installed application type.

The steps above should have got you to a page that displays your new Client ID and Client Secret.

Enable Drive API

Next we need to associate Drive API access with these OAuth credentials. From the lefthand menu choose API and search for Drive.

Click through to the Drive API and "Enable API".

You're now ready to start using this Client ID information with sheetsync.
Injecting data to a Google sheet

sheetsync works with data in a dictionary of dictionaries. Each row is represented by a dictionary, and these are themselves stored in a dictionary indexed by a row-specific key. For example this dictionary represents two rows of data each with columns "Color" and "Performer":

    data = { "Kermit": {"Color" : "Green", "Performer" : "<NAME>"},
             "<NAME>" : {"Color" : "Pink", "Performer" : "<NAME>"} }
To insert this data (add or update rows) into a target worksheet in a google spreadsheet doc use this code:
     1 import logging
     2 from sheetsync import Sheet, ia_credentials_helper
     3 # Turn on logging so you can see what sheetsync is doing.
     4 logging.getLogger('sheetsync').setLevel(logging.DEBUG)
     5 logging.basicConfig()
     6
     7 # Create OAuth2 credentials, or reload them from a local cache file.
     8 CLIENT_ID = '171566521677-3ppd15g5u4lv93van0eri4tbk4fmaq2c.apps.googleusercontent.com'
     9 CLIENT_SECRET = '<KEY>'
    10 creds = ia_credentials_helper(CLIENT_ID, CLIENT_SECRET,
    11                               credentials_cache_file='cred_cache.json')
    12
    13 data = { "Kermit": {"Color" : "Green", "Performer" : "<NAME>"},
    14          "<NAME>" : {"Color" : "Pink", "Performer" : "<NAME>"} }
    15
    16 # Find or create a spreadsheet, then inject data.
    17 target = Sheet(credentials=creds, document_name="sheetsync Getting Started")
    18 target.inject(data)
    19 print "Spreadsheet created here: %s" % target.document_href
The first part of this script (lines 1-11) imports the Sheet object and ia_credentials_helper function. This function is included to help you quickly generate an OAuth2Credentials object using your Client ID and Secret.

When the ia_credentials_helper function runs it will print a URL to allow you to grant the script access.

From this URL (you may have to log in to a Google Drive account) you will be prompted to give the API Client you set up in section 1.2 access to your documents.

After accepting, you're presented with a verification code that you must paste back into the script.

At this point ia_credentials_helper also caches the credentials - so that you don't need to repeat this step on future runs of the script.

The later code defines the table data (lines 13, 14), then line 17 creates a new spreadsheet document in your google drive. Finally line 18 inserts the data.

It also prints the URL of the google sheet so you can view the result for yourself.

Since you'll probably want to update this spreadsheet, take note of the spreadsheet's document key from the URL, and then you can inject new data to the existing document by initializing the sheet as follows:

    target = Sheet(credentials=creds,
                   document_key="<KEY>",
                   worksheet_name="Sheet1")

Note: The 'inject' method only adds or updates rows. If you want to delete rows from the spreadsheet to keep it in sync with the input data then use the 'sync' method described in the next section.
CHAPTER 2

Tutorial

Let's extend the example from Getting Started, and use more of sheetsync's features. (With apologies in advance to the Muppets involved).

Customizing the spreadsheet

Key Column Headers

The first thing we'll fix is that top-left cell with the value 'Key'. The keys for our data are Names and the column header should reflect that. This is easy enough to do with the key_column_headers field:

    target = sheetsync.Sheet(credentials=creds,
                             document_name="Muppet Show Tonight",
                             key_column_headers=["Name"])

Templates for Formatting

Google's spreadsheet API doesn't currently allow control over cell formatting, but you can specify a template spreadsheet that has the formatting you want - and use sheetsync to add data to a copy of the template. Here's a template spreadsheet created to keep my list of Muppets:
https://docs.google.com/spreadsheets/d/<KEY>/edit#gid=0

The template's document key is <KEY>, so we can instruct sheetsync to use this as a basis for the new spreadsheet it creates as follows:

    target = sheetsync.Sheet(credentials=creds,
                             document_name="Muppet Show Tonight",
                             worksheet_name="Muppets",
                             template_key="<KEY>",
                             key_column_headers=["Name"])

Note that I've also specified the worksheet name in that example with the 'worksheet_name' parameter.

Folders

If you use folders to organize your Google drive, you can specify the folder a new spreadsheet will be created in. Use either the 'folder_name' or 'folder_key' parameters. Here for example I have a folder with the key <KEY>, and I instruct sheetsync to move the new spreadsheet into that folder with this code:

    target = sheetsync.Sheet(credentials=creds,
                             document_name="Muppet Show Tonight",
                             worksheet_name="Muppets",
                             key_column_headers=["Name"],
                             template_key="<KEY>",
                             folder_key="<KEY>")
Formulas

Often you'll need some columns to contain formulas that depend on data in other columns, and when new rows are inserted by sheetsync, ideally you'd want those formulas to be added too. When initializing the spreadsheet you can specify a row (typically above the header row) that contains reference formulas. Best illustrated by this example: https://docs.google.com/spreadsheets/d/1tn-lGqGHDrVbnW2PRvwie4LMmC9ZgYHWlbyTjCvwru8/edit#gid=0

Here row 2 contains formulas (written out in row 1 for readability) that reference hidden columns. Row 3 contains the headers.

When new rows are added to this spreadsheet the 'Photo' and 'Muppet' columns will be populated with a formula similar to the reference row. Here are the parameters to set this up:

    target = sheetsync.Sheet(credentials=creds,
                             document_key="<KEY>",
                             worksheet_name="Muppets",
                             key_column_headers=["Name"],
                             header_row_ix=3,
                             formula_ref_row_ix=2)

    animal = {'Animal': {'Color': 'Red',
                         'Image URL': 'http://upload.wikimedia.org/wikipedia/en/e/e7/Animal_%28Muppet%29.jpg',
                         'Performer': '<NAME>',
                         'Wikipedia': 'http://en.wikipedia.org/wiki/Animal_(Muppet)'} }

    target.inject(animal)
Synchronizing data

Until now all examples have used the 'inject' method to add data into a spreadsheet or update existing rows. As the name suggests, sheetsync also has a 'sync' method which will make sure the rows in the spreadsheet match the rows passed to the function. This might require that rows are deleted from the spreadsheet.

The default behavior is to not actually delete rows, but instead flag them for deletion with the text "(DELETED)" being appended to the values of the Key columns on rows to delete. This is to help recovery from accidental deletions.

Full row deletion can be enabled by passing the flag_deletes argument as follows:

    target = sheetsync.Sheet(credentials=creds,
                             document_key="<KEY>",
                             worksheet_name="Muppets",
                             key_column_headers=["Name"],
                             flag_deletes=False)

    new_list = { 'Kermit' : { 'Color' : 'Green',
                              'Performer' : '<NAME>' },
                 'Fozzie Bear' : {'Color' : 'Orange' } }

    target.sync(new_list)

With rows for Miss Piggy and Kermit already in the spreadsheet, the sync function (in the example above) would remove Miss Piggy and add Fozzie Bear.
Taking backups

Warning: The sync function could delete a lot of data from your worksheet if the Key values get corrupted somehow. You should use the backup function to protect yourself from errors like this.

Some simple mistakes can cause bad results. For instance, if the key column headers on the spreadsheet don't match those passed to the Sheet constructor the sync method will delete all the existing rows and add new ones! You could protect rows and ranges to guard against this, but perhaps the simplest way to mitigate the risk is by creating a backup of your spreadsheet before syncing data. Here's an example:

    target.backup("Backup of my important sheet. 16th June",
                  folder_name = "sheetsync Backups.")

This code would take a copy of the entire spreadsheet that the Sheet instance 'target' belongs to, name it "Backup of my important sheet. 16th June", and move it to a folder named "sheetsync Backups.".

Debugging

sheetsync uses the standard python logging module; the easiest way to find out what's going on under the covers is to turn on all logging:

    import sheetsync
    import logging
    # Set all loggers to DEBUG level..
    logging.getLogger('').setLevel(logging.DEBUG)
    # Register the default log handler to send logs to console..
    logging.basicConfig()
If you find issues please raise them on github, and if you have fixes please submit pull requests. Thanks!
CHAPTER 3

The sheetsync package API

Sheet

class sheetsync.Sheet(credentials=None, document_key=None, document_name=None, worksheet_name=None, key_column_headers=None, header_row_ix=1, formula_ref_row_ix=None, flag_deletes=True, protected_fields=None, template_key=None, template_name=None, folder_key=None, folder_name=None)

Represents a single worksheet within a google spreadsheet.

This class tracks the google connection, the reference to the worksheet, as well as options controlling the structure of the data in the worksheet, for example:

•Which row is used as the table header
•What header names should be used for the key column(s)
•Whether some columns are protected from overwriting
document_key
str – The spreadsheet's document key assigned by google drive. If you are using sheetsync to create a spreadsheet then use this attribute to save the document_key, and make sure you pass it as a parameter in subsequent calls to __init__
document_name
str – The title of the google spreadsheet document
document_href
str – The HTML href for the google spreadsheet document
__init__(credentials=None, document_key=None, document_name=None, worksheet_name=None,
key_column_headers=None, header_row_ix=1, formula_ref_row_ix=None,
flag_deletes=True, protected_fields=None, template_key=None, template_name=None,
folder_key=None, folder_name=None)
Creates a worksheet object (also creating a new Google sheet doc if required)
Parameters
• credentials (OAuth2Credentials) – Credentials object returned by the google authorization server. Described in detail in this article: https://developers.google.com/api-client-library/python/guide/aaa_oauth For testing and development consider using the ia_credentials_helper helper function
• document_key (Optional) (str) – Document key for the existing spreadsheet
to sync data to. More info here: https://productforums.google.com/forum/#!topic/docs/XPOR9bTTS50 If this is not provided sheetsync will use document_name to try and find the correct spreadsheet.
• document_name (Optional) (str) – The name of the spreadsheet document to
access. If this is not found it will be created. If you know the document_key then using
that is faster and more reliable.
• worksheet_name (str) – The name of the worksheet inside the spreadsheet that data
will be synced to. If omitted then the default name “Sheet1” will be used, and a matching
worksheet created if necessary.
• key_column_headers (Optional) (list of str) – Data in the key column(s) uniquely identifies a row in your data. So, for example, if your data is indexed by a single username string, that you want to store in a column with the header 'Username', you would pass this:

  key_column_headers=['Username']
However, sheetsync also supports component keys. Python dictionaries can use tuples as
keys, for example if you had a tuple key like this:
(‘Tesla’, ‘Model-S’, ‘2013’)
You can make the column meanings clear by passing in a list of three
key_column_headers:
[’Make’, ‘Model’, ‘Year’]
If no value is given, then the default behavior is to name the column “Key”; or “Key-1”,
“Key-2”, ... if your data dictionaries keys are tuples.
• header_row_ix (Optional) (int) – The row number we expect to see column
headers in. Defaults to 1 (the very top row).
• formula_ref_row_ix (Optional) (int) – If you want formulas to be added to
some cells when inserting new rows then use a formula reference row. See Formulas for
an example use.
• flag_deletes (Optional) (bool) – Specify if deleted rows should only be
flagged for deletion. By default sheetsync does not delete rows of data, it just marks
that they are deleted by appending the string ” (DELETED)” to key values. If you pass
in the value “False” then rows of data will be deleted by the sync method if they are not
found in the input data. Note, use the inject method if you only want to add or modify data in a worksheet.
• protected_fields (Optional) (list of str) – A list of fields (column headers) that contain protected data. sheetsync will only write to cells in these columns if they are blank. This can be useful if you are expecting users of the spreadsheet to collaborate on the document and edit values in certain columns (e.g. modifying a "Test result" column from "PENDING" to "PASSED") and don't want to overwrite their edits.
• template_key (Optional) (str) – This optional key references the spreadsheet
that will be copied if a new spreadsheet needs to be created. This is useful for copying over
formatting, a specific header order, or apps-script functions. See Templates for Formatting.
• template_name (Optional) (str) – As with template_key but the name of the
template spreadsheet. If known, using the template_key will be faster.
• folder_key (Optional) (str) – This optional key references the folder that a new
spreadsheet will be moved to if a new spreadsheet needs to be created.
• folder_name (Optional) (str) – Like folder_key this parameter specifies the optional folder that a spreadsheet will be created in (if required). If a folder matching the name cannot be found, sheetsync will attempt to create it.
backup(backup_name, folder_key=None, folder_name=None)
Copies the google spreadsheet to the backup_name and folder specified.
Parameters
• backup_name (str) – The name of the backup document to create.
• folder_key (Optional) (str) – The key of a folder that the new copy will be
moved to.
• folder_name (Optional) (str) – Like folder_key, references the folder to move
a backup to. If the folder can’t be found, sheetsync will create it.
data(as_cells=False)
Reads the worksheet and returns an indexed dictionary of the row objects.
For example:
>>> print sheet.data()
{'<NAME>': {'Color': 'Pink', 'Performer': '<NAME>'}, 'Kermit': {'Color': 'Green', 'Performer': '<NAME>'}}
inject(raw_data, row_change_callback=None)
Use this function to add rows or update existing rows in the spreadsheet.
Parameters
• raw_data (dict) – A dictionary of dictionaries. Where the keys of the outer dictionary
uniquely identify each row of data, and the inner dictionaries represent the field,value pairs
for a row of data.
• row_change_callback (Optional) (func) – A callback function that you can
use to track changes to rows on the spreadsheet. The row_change_callback function must
take four parameters like so:
change_callback(row_key, row_dict_before, row_dict_after, list_of_changed_keys)
Returns
A simple counter object providing statistics about the changes made by sheetsync.
Return type UpdateResults (object)
sync(raw_data, row_change_callback=None)
Equivalent to the inject method but will delete rows from the google spreadsheet if their key is not found
in the input (raw_data) dictionary.
Parameters
• raw_data (dict) – See inject method
• row_change_callback (Optional) (func) – See inject method
Returns See inject method
Return type UpdateResults (object)
UpdateResults

class sheetsync.UpdateResults

A lightweight counter object that holds statistics about the number of updates made after using the 'sync' or 'inject' method.
added
int – Number of rows added
changed
int – Number of rows changed
nochange
int – Number of rows that were not modified.
deleted
int – Number of rows deleted (which will always be 0 when using the ‘inject’ function)
ia_credentials_helper

sheetsync.ia_credentials_helper(client_id, client_secret, credentials_cache_file='credentials.json', cache_key='default')

Helper function to manage a credentials cache during testing.
This function attempts to load and refresh a credentials object from a json cache file, using the cache_key and
client_id as a lookup.
If this isn’t found then it starts an OAuth2 authentication flow, using the client_id and client_secret and if
successful, saves those to the local cache. See Injecting data to a Google sheet.
Parameters
• client_id (str) – Google Drive API client id string for an installed app
• client_secret (str) – The corresponding client secret.
• credentials_cache_file (str) – Filepath to the json credentials cache file
• cache_key (str) – Optional string to allow multiple credentials for a client to be stored
in the cache.
Returns A google api credentials object. As described here: https://developers.google.com/api-client-library/python/guide/aaa_oauth
Return type OAuth2Credentials |
googleErrorReportingR | cran | R | Package ‘googleErrorReportingR’
October 27, 2022
Title Send Error Reports to the Google Error Reporting Service API
Version 0.0.4
Description Send error reports to the Google Error Reporting service <https://cloud.google.com/error-reporting/> and view errors and assign error status in the Google Error Reporting user interface.
License MIT + file LICENSE
URL https://github.com/ixpantia/googleErrorReportingR,
https://ixpantia.github.io/googleErrorReportingR/
Encoding UTF-8
Imports jsonlite, httr, magrittr
Suggests knitr, rmarkdown, pkgdown, testthat (>= 3.0.0)
RoxygenNote 7.2.1
VignetteBuilder knitr
Config/testthat/edition 3
NeedsCompilation no
Author ixpantia, SRL [cph],
<NAME> [cre, aut] (<https://orcid.org/0000-0002-7853-2811>),
<NAME> [ctb]
Maintainer <NAME> <<EMAIL>>
Repository CRAN
Date/Publication 2022-10-27 15:25:13 UTC
R topics documented:
config_message
format_error_message
list_errors
report_error
config_message Format messages before sending to Google Error Reporting
Description
Format messages before sending to Google Error Reporting
Usage
config_message(message, req, status_code)
Arguments
message Error message returned by validation
req Request object used by plumber
status_code Valid HTTP status code
Value
formatted message
Examples
## Not run:
your_function_call <- tryCatch(
your_function(),
error = function(e) {
message$message <- as.character(e)
googleErrorReportingR::report_error(message)
message <- config_message(message, req, "401")
stop("Error", call. = FALSE)
})
## End(Not run)
format_error_message Format Error Message for Google Error Reporting
Description
Format Error Message for Google Error Reporting
Usage
format_error_message(
message = "Error description",
service = "My Service",
version = "0.0.1",
method = "GET",
url = "https://example.com",
user_agent = "",
referrer = "",
response_status_code = "500",
remote_ip = "192.178.0.0.1",
user_id = "UserID",
filepath = "/",
line_number = 0,
function_name = "my_function"
)
Arguments
message the error message you want in the logs
service the name of your service
version the version of the service
method the http method used for the call
url the unique resource identifier that was called
user_agent the user agent identifier
referrer the referrer to the service
response_status_code
http response code
remote_ip remote ip
user_id user id
filepath filepath of the code where the error originates
line_number line number where the error originates
function_name function name where the error originates
Value
message object, a list to be formatted as JSON in the error report body
Examples
## Not run:
message <- format_error_message()
message$serviceContext$service <- "A demo service"
message$serviceContext$version <- "v0.3.4"
## End(Not run)
list_errors Get list of errors from Google Error Reporting
Description
Get list of errors from Google Error Reporting
Usage
list_errors(project_id, api_key)
Arguments
project_id the project ID of your project on GCP
api_key an API key with permissions to write to Google Error Reporting
Value
No return, we focus on side effect
report_error Report error to Google Error Reporting
Description
Report error to Google Error Reporting
Usage
report_error(message, project_id = NULL, api_key = NULL)
Arguments
message the error report to be written out to the
project_id the project id where you want to monitor the error reports
api_key the google API key with authorisation to write to the Google Error Reporting
API
Value
No return, we focus on side effect
Examples
## Not run:
report_error(project_id, api_key, message)
#If you have set the environmental variables "PROJECT_ID" and
#"ERROR_REPORTING_API_KEY" then you can make shorter calls like so
report_error(message)
## End(Not run) |
github.com/cosmos72/gomacro | go | Go | README
---
### gomacro - interactive Go interpreter and debugger with generics and macros
gomacro is an almost complete Go interpreter, implemented in pure Go. It offers both an interactive REPL and a scripting mode, and does not require a Go toolchain at runtime
(except in one very specific case: import of a 3rd party package at runtime).
It has two dependencies beyond the Go standard library:
[github.com/peterh/liner](https://github.com/peterh/liner)
and
[golang.org/x/tools/go/packages](https://golang.org/x/tools/go/packages)
Gomacro can be used as:
* a standalone executable with interactive Go REPL, line editing and code completion:
just run `gomacro` from your command line, then type Go code. Example:
```
$ gomacro
[greeting message...]
gomacro> import "fmt"
gomacro> fmt.Println("hello, world!")
hello, world!
14 // int
<nil> // error
gomacro>
```
press TAB to autocomplete a word, and press it again to cycle on possible completions.
Line editing follows mostly Emacs: Ctrl+A or Home jumps to start of line,
Ctrl+E or End jumps to end of line, Alt+D deletes the word starting at cursor...
For the full list of key bindings, see <https://github.com/peterh/liner>
* a tool to experiment with Go **generics**: see [Generics](#readme-generics)
* a Go source code debugger: see [Debugger](#readme-debugger)
* an interactive tool to make science more productive and more fun.
If you use compiled Go with scientific libraries (physics, bioinformatics, statistics...)
you can import the same libraries from gomacro REPL (immediate on Linux and Mac OS X,
requires restarting on other platforms,
see [Importing packages](#readme-importing-packages) below), call them interactively,
inspect the results, feed them to other functions/libraries, all in a single session.
The imported libraries will be **compiled**, not interpreted,
so they will be as fast as in compiled Go.
For a graphical user interface on top of gomacro, see [Gophernotes](https://github.com/gopherdata/gophernotes).
It is a Go kernel for Jupyter notebooks and nteract, and uses gomacro for Go code evaluation.
* a library that adds Eval() and scripting capabilities to your Go programs in few lines of code:
```
package main

import (
"fmt"
"reflect"
"github.com/cosmos72/gomacro/fast"
)
func RunGomacro(toeval string) reflect.Value {
interp := fast.New()
vals, _ := interp.Eval(toeval)
// for simplicity, only use the first returned value
return vals[0].ReflectValue()
}
func main() {
fmt.Println(RunGomacro("1+1"))
}
```
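As a further illustration, here is a minimal sketch (it reuses only the `fast.New`, `Eval` and `ReflectValue` calls shown above; the variable names are arbitrary) of how declarations persist between successive `Eval` calls on the same interpreter:
```
interp := fast.New()
interp.Eval("x := 40")              // declarations are kept inside this interpreter instance
vals, _ := interp.Eval("x + 2")     // later calls can refer to them
fmt.Println(vals[0].ReflectValue()) // prints 42
```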
Also, [github issue #13](https://github.com/cosmos72/gomacro/issues/13) explains how to have your application's functions, variables, constants and types available in the interpreter.
Note: gomacro license is [MPL 2.0](LICENSE), which imposes some restrictions on programs that use gomacro.
See [MPL 2.0 FAQ](https://www.mozilla.org/en-US/MPL/2.0/FAQ/) for common questions regarding the license terms and conditions.
* a way to execute Go source code on-the-fly without a Go compiler:
you can either run `gomacro FILENAME.go` (works on every supported platform)
or you can insert a line `#!/usr/bin/env gomacro` at the beginning of a Go source file,
then mark the file as executable with `chmod +x FILENAME.go` and finally execute it with `./FILENAME.go` (works only on Unix-like systems: Linux, *BSD, Mac OS X ...)
* a Go code generation tool:
gomacro was started as an experiment to add Lisp-like macros to Go, and they are extremely useful (in the author's opinion) to simplify code generation.
Macros are normal Go functions, they are special only in one aspect:
they are executed **before** compiling code, and their input and output **is** code
(abstract syntax trees, in the form of go/ast.Node)
Don't confuse them with C preprocessor macros: in Lisp, Scheme and now in Go,
macros are regular functions written in the same programming language as the rest of the source code. They can perform arbitrary computations and call any other function or library: they can even read and write files,
open network connections, etc... as a normal Go function can do.
See [doc/code_generation.pdf](https://github.com/cosmos72/gomacro/raw/master/doc/code_generation.pdf)
for an introduction to the topic.
### Installation
#### Prerequisites
* [Go 1.13+](https://golang.org/doc/install)
#### Supported platforms
Gomacro is pure Go, and in theory it should work on any platform supported by the Go compiler.
The following combinations are tested and known to work:
* Linux: amd64, 386, arm64, arm, mips, ppc64le
* Mac OS X: amd64, 386 (386 binaries running on amd64 system)
* Windows: amd64, 386
* FreeBSD: amd64, 386
* Android: arm64, arm (tested with [Termux](https://termux.com/) and the Go compiler distributed with it)
#### How to install
The command
```
go install github.com/cosmos72/gomacro@latest
```
downloads, compiles and installs gomacro and its dependencies
### Current Status
Almost complete.
The main limitations and missing features are:
* importing 3rd party libraries at runtime currently only works on Linux and Mac OS X.
On other systems such as Windows, Android and *BSD it is cumbersome and requires recompiling - see [Importing packages](#readme-importing-packages).
* conversions from/to unsafe.Pointer are not supported
* some corner cases using interpreted interfaces, as interface -> interface type assertions and type switches, are not implemented yet.
* some corner cases using recursive types may not work correctly.
* goto can only jump backward, not forward
* out-of-order code is under testing - some corner cases, as for example out-of-order declarations used in keys of composite literals, are not supported.
Clearly, at REPL code is still executed as soon as possible, so it makes a difference mostly if you separate multiple declarations with ; on a single line. Example: `var a = b; var b = 42`
Support for "batch mode" is in progress - it reads as much source code as possible before executing it,
and it's useful mostly to execute whole files or directories.
The [documentation](https://github.com/cosmos72/gomacro/blob/6835e0d66346/doc) also contains the [full list of features and limitations](https://github.com/cosmos72/gomacro/blob/6835e0d66346/doc/features-and-limitations.md)
### Extensions
Compared to compiled Go, gomacro supports several extensions:
* generics (experimental) - see [Generics](#readme-generics)
* an integrated debugger, see [Debugger](#readme-debugger)
* configurable special commands. Type `:help` at REPL to list them,
and see [cmd.go:37](https://github.com/cosmos72/gomacro/raw/master/fast/cmd.go#L37)
for the documentation and API to define new ones.
* untyped constants can be manipulated directly at REPL. Examples:
```
gomacro> 1<<100
{int 1267650600228229401496703205376} // untyped.Lit
gomacro> const c = 1<<100; c * c / 100000000000
{int 16069380442589902755419620923411626025222029937827} // untyped.Lit
```
This provides a handy arbitrary-precision calculator.
Note: operations on large untyped integer constants are always exact,
while operations on large untyped float constants are implemented with `go/constant.Value`,
and are exact as long as both numerator and denominator are <= 5e1232.
Beyond that, `go/constant.Value` switches from `*big.Rat` to `*big.Float`
with precision = 512, which can accumulate rounding errors.
If you need **exact** results, convert the untyped float constant to `*big.Rat`
(see next item) before exceeding 5e1232.
* untyped constants can be converted implicitly to `*big.Int`, `*big.Rat` and `*big.Float`. Examples:
```
import "math/big"
var i *big.Int = 1<<1000 // exact - would overflow int var r *big.Rat = 1.000000000000000000001 // exact - different from 1.0 var s *big.Rat = 5e1232 // exact - would overflow float64 var t *big.Rat = 1e1234 // approximate, exceeds 5e1232 var f *big.Float = 1e646456992 // largest untyped float constant that is different from +Inf
```
Note: every time such a conversion is evaluated, it creates a new value - no risk to modify the constant.
Be aware that converting a huge value to string, as typing `f` at REPL would do, can be very slow.
* zero value constructors: for any type `T`, the expression `T()`
returns the zero value of the type
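For example, a tiny sketch of this extension (the type `Point` is made up for illustration):
```
type Point struct { X, Y int }

Point()  // returns the zero value Point{X:0, Y:0}
int()    // returns int(0)
string() // returns ""
```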
* macros, quoting and quasiquoting: see
[doc/code_generation.pdf](https://github.com/cosmos72/gomacro/raw/master/doc/code_generation.pdf)
and slightly relaxed checks:
* unused variables and unused return values never cause errors
### Examples
Some short, notable examples - to run them on non-Linux platforms, see [Importing packages](#readme-importing-packages) first.
#### plot mathematical functions
* install libraries: `go get gonum.org/v1/plot gonum.org/v1/plot/plotter gonum.org/v1/plot/vg`
* start the interpreter: `gomacro`
* at interpreter prompt, paste the whole Go code listed at <https://github.com/gonum/plot/wiki/Example-plots#functions>
(the source code starts after the picture under the section "Functions", and ends just before the section "Histograms")
* still at interpreter prompt, enter `main()`
If all goes well, it will create a file named "functions.png" in current directory containing the plotted functions.
#### simple mandelbrot web server
* install libraries: `go get github.com/sverrirab/mandelbrot-go`
* chdir to mandelbrot-go source folder: `cd; cd go/src/github.com/sverrirab/mandelbrot-go`
* start interpreter with arguments: `gomacro -i mbrot.go`
* at interpreter prompt, enter `init(); main()`
* visit http://localhost:8090/
Be patient, rendering and zooming mandelbrot set with an interpreter is a little slow.
Further examples are listed by [Gophernotes](https://github.com/gopherdata/gophernotes/#example-notebooks-dowload-and-run-them-locally-follow-the-links-to-view-in-github-or-use-the-jupyter-notebook-viewer)
### Importing packages
Gomacro supports the standard Go syntax `import`, including package renaming. Examples:
```
import "fmt"
import (
"io"
"net/http"
r "reflect"
)
```
Third party packages - i.e. packages not in Go standard library - can also be imported with the same syntax.
Extension: unpublished packages can also be imported from a local filesystem directory (implemented on 2022-05-28). Supported syntaxes are:
```
import (
"." // imports the package in current directory
".." // imports the package in parent directory
"./some/relative/path" // "./" means relative to current directory
"../some/other/relative/path" // "../" means relative to parent directory
"/some/absolute/path" // "/" means absolute
)
```
For an import to work, you usually need to follow its installation procedure: sometimes there are additional prerequisites to install, and the typical command `go get PACKAGE-PATH` may or may not be needed.
The next steps depend on the system you are running gomacro on:
#### Linux and Mac OS X
If you are running gomacro on Linux or Mac OS X, `import` will then just work:
it will automatically download, compile and import a package. Example:
```
$ gomacro
[greeting message...]
gomacro> import ( "gonum.org/v1/gonum/floats"; "gonum.org/v1/plot" )
// debug: running "go get gonum.org/v1/plot gonum.org/v1/gonum/floats" ...
go: downloading gonum.org/v1/gonum v0.12.0 go: downloading gonum.org/v1/plot v0.12.0
[ more "go: downloading " messages for dependencies...]
go: added gonum.org/v1/gonum v0.12.0 go: added gonum.org/v1/plot v0.12.0
// debug: running "go mod tidy" ...
go: downloading golang.org/x/exp v0.0.0-20220827204233-334a2380cb91 go: downloading github.com/go-fonts/latin-modern v0.2.0 go: downloading rsc.io/pdf v0.1.1 go: downloading github.com/go-fonts/dejavu v0.1.0
// debug: compiling plugin "/home/max/go/src/gomacro.imports/gomacro_pid_44092/import_1" ...
gomacro> floats.Sum([]float64{1,2,3})
6 // float64
```
Note: internally, gomacro will compile and load a **single** Go plugin containing the exported declarations of all the packages listed in `import ( ... )`.
The command `go mod tidy` is automatically executed before compiling the plugin, and it tries - among other things -
to resolve any version conflict due to different versions of the same package being imported directly
(i.e. listed in `import ( ... )`) or indirectly (i.e. as a required dependency).
Go plugins are currently supported only on Linux and Mac OS X.
**WARNING** On Mac OS X, **never** execute `strip gomacro`: it breaks plugin support,
and loading third party packages stops working.
#### Other systems
On all other systems, such as Windows, Android and *BSD, you can still use `import`,
but there are more steps: you need to manually download the package,
and you also need to recompile gomacro after the `import` (it will tell you).
Example:
```
$ go get gonum.org/v1/plot
$ gomacro
[greeting message...]
gomacro> import "gonum.org/v1/plot"
// warning: created file "/home/max/go/src/github.com/cosmos72/gomacro/imports/thirdparty/gonum_org_v1_plot.go", recompile gomacro to use it
```
Now quit gomacro, recompile and reinstall it:
```
gomacro> :quit
$ go install github.com/cosmos72/gomacro
```
Finally restart it. Your import is now linked **inside** gomacro and will work:
```
$ gomacro
[greeting message...]
gomacro> import "gonum.org/v1/plot"
gomacro> plot.New()
&{...} // *plot.Plot
<nil> // error
```
Note: if you need several packages, you can first `import` all of them,
then quit and recompile gomacro only once.
### Generics
gomacro contains two alternative, experimental versions of Go generics:
* the first version is modeled after C++ templates, and is appropriately named "C++ style"
See [doc/generics-c++.md](https://github.com/cosmos72/gomacro/blob/6835e0d66346/doc/generics-c++.md) for how to enable and use them.
* the second version is named "contracts are interfaces" - or more briefly "CTI".
It is modeled after several published proposals for Go generics,
most notably <NAME>'s [Type Parameters in Go](https://github.com/golang/proposal/blob/master/design/15292/2013-12-type-params.md)
It has some additions inspired from [Haskell generics](https://wiki.haskell.org/Generics)
and original contributions from the author - in particular to create a simpler alternative to
[Go 2 contracts](https://go.googlesource.com/proposal/+/master/design/go2draft-contracts.md)
For their design document and reasoning behind some of the design choices, see [doc/generics-cti.md](https://github.com/cosmos72/gomacro/blob/6835e0d66346/doc/generics-cti.md)
The second version of generics "CTI" is enabled by default in gomacro.
They are in beta status, and at the moment only generic types and functions are supported.
Syntax and examples:
```
// declare a generic type with two type arguments T and U type Pair#[T,U] struct {
First T
Second U
}
// instantiate the generic type using explicit types for T and U,
// and create a variable of such type.
var pair Pair#[complex64, struct{}]
// equivalent:
pair := Pair#[complex64, struct{}] {}
// a more complex example, showing higher-order functions func Transform#[T,U](slice []T, trans func(T) U) []U {
ret := make([]U, len(slice))
for i := range slice {
ret[i] = trans(slice[i])
}
return ret
}
Transform#[string,int] // returns func([]string, func(string) int) []int
// returns []int{3, 2, 1} i.e. the len() of each string in input slice:
Transform#[string,int]([]string{"abc","xy","z"}, func(s string) int { return len(s) })
```
Contracts specify the available methods of a generic type.
For simplicity, they do not introduce a new syntax or new language concepts:
contracts are just (generic) interfaces.
With a tiny addition, actually: the ability to optionally indicate the receiver type.
For example, the contract specifying that values of type `T` can be compared with each other to determine if the first is less, equal or greater than the second is:
```
type Comparable#[T] interface {
// returns -1 if a is less than b
// returns 0 if a is equal to b
// returns 1 if a is greater than b
func (a T) Cmp(b T) int
}
```
A type `T` implements `Comparable#[T]` if it has a method `func (T) Cmp(T) int`.
This interface is carefully chosen to match the existing methods of
`*math/big.Float`, `*math/big.Int` and `*math/big.Rat`.
In other words, `*math/big.Float`, `*math/big.Int` and `*math/big.Rat` already implement it.
What about basic types as `int8`, `int16`, `int32`, `uint`... `float*`, `complex*` ... ?
Gomacro extends them, automatically adding many methods equivalent to the ones declared on `*math/big.Int` to perform arithmetic and comparison, including `Cmp` which is internally defined as (no need to define it yourself):
```
func (a int) Cmp(b int) int {
if a < b {
return -1
} else if a > b {
return 1
} else {
return 0
}
}
```
Thus the generic functions `Min` and `Max` can be written as
```
func Min#[T: Comparable] (a, b T) T {
if a.Cmp(b) < 0 { // also <= would work
return a
}
return b
}
func Max#[T: Comparable] (a, b T) T {
if a.Cmp(b) > 0 { // also >= would work
return a
}
return b
}
```
The syntax `#[T: Comparable]`, or equivalently `#[T: Comparable#[T]]`,
indicates that `T` must satisfy the contract (implement the interface) `Comparable#[T]`.
Such functions `Min` and `Max` will then work automatically for every type `T`
that satisfies the contract (implements the interface) `Comparable#[T]`:
all basic integers and floats, plus `*math/big.Float`, `*math/big.Int` and `*math/big.Rat`,
plus every user-defined type `T` that has a method `func (T) Cmp(T) int`.
If you do not specify the contract(s) that a type must satisfy, generic functions cannot access the fields and methods of such a type, which is then treated as a "black box", similarly to `interface{}`.
Two values of type `T` can be added if `T` has an appropriate method.
But which name and signature should we choose for adding values?
Copying again from `math/big`, the method we choose is `func (T) Add(T,T) T`.
If the receiver is a pointer, it will be set to the result - in any case,
the result will also be returned.
Similarly to `Comparable`, the contract `Addable` is then
```
type Addable#[T] interface {
// Add two values a, b and return the result.
// If recv is a pointer, it must be non-nil
// and it will be set to the result
func (recv T) Add(a, b T) T
}
```
With such a contract, a generic function `Sum` is quite straightforward:
```
func Sum#[T: Addable] (args ...T) T {
// to create the zero value of T,
// one can write 'var sum T' or equivalently 'sum := T()'
// Unluckily, that's not enough for math/big numbers, which require
// the receiver of method calls to be created with a function `New()`
// Once math/big numbers have such method, the following
// will be fully general - currently it works only on basic types.
sum := T().New()
for _, elem := range args {
// use the method T.Add(T, T)
//
// as an optimization, relevant at least for math/big numbers,
// also use sum as the receiver where result of Add will be stored
// if the method Add has pointer receiver.
//
// To cover the case where method Add has instead value receiver,
// also assign the returned value to sum
sum = sum.Add(sum, elem)
}
return sum
}
Sum#[int]                            // returns func(...int) int
Sum#[int] (1,2,3)                    // returns int(6)
Sum#[complex64]                      // returns func(...complex64) complex64
Sum#[complex64] (1.1+2.2i, 3.3)      // returns complex64(4.4+2.2i)
Sum#[string]                         // returns func(...string) string
Sum#[string]("abc.","def.","xy","z") // returns "abc.def.xyz"
```
Partial and full specialization of generics is **not** supported in CTI generics,
both for simplicity and to avoid accidentally providing Turing completeness at compile-time.
Instantiation of generic types and functions is on-demand.
Current limitations:
* type inference on generic arguments #[...] is not yet implemented,
thus generic arguments #[...] must be explicit.
* generic methods are not yet implemented.
* types are not checked to actually satisfy contracts.
### Debugger
Since version 2.6, gomacro also has an integrated debugger.
There are three ways to enter it:
* hit CTRL+C while interpreted code is running.
* type `:debug STATEMENT-OR-FUNCTION-CALL` at the prompt.
* add a statement (an expression is not enough) `"break"` or `_ = "break"` to your code, then execute it normally.
In all cases, execution will be suspended and you will get a `debug>` prompt, which accepts the following commands:
`step`, `next`, `finish`, `continue`, `env [NAME]`, `inspect EXPR`, `list`, `print EXPR-OR-STATEMENT`
Also,
* commands can be abbreviated.
* `print` fully supports expressions or statements with side effects, including function calls and modifying local variables.
* `env` without arguments prints all global and local variables.
* an empty command (i.e. just pressing enter) repeats the last command.
Only interpreted statements can be debugged: expressions and compiled code will be executed, but you cannot step into them.
The debugger is quite new, and may have some minor glitches.
### Why it was created
First of all, to experiment with Go :)
Second, to simplify Go code generation tools (keep reading for the gory details)
---
Problem: "go generate" and many other Go tools automatically create Go source code from some kind of description - usually an interface specifications as WSDL, XSD, JSON...
Such specification may be written in Go, for example when creating JSON marshallers/unmarshallers from Go structs, or in some other language,
for example when creating Go structs from JSON sample data.
In both cases, a variety of external programs are needed to generate Go source code: such programs need to be installed separately from the code being generated and compiled.
Also, Go is currently lacking generics (read: C++-like templates)
because of the rationale "we do not yet know how to do them right,
and once you do them wrong everybody is stuck with them"
The purpose of Lisp-like macros is to execute arbitrary code while compiling, **in particular** to generate source code.
This makes them very well suited (although arguably a bit low level)
for both purposes: code generation and C++-like templates, which are a special case of code generation - for a demonstration of how to implement C++-like templates on top of Lisp-like macros,
see for example the project <https://github.com/cosmos72/cl-parametric-types>
from the same author.
Building a Go interpreter that supports Lisp-like macros
makes it possible to embed all these code-generation activities into regular Go source code, without the need for external programs
(except for the interpreter itself).
As a free bonus, we get support for Eval().
### LEGAL
Gomacro is distributed under the terms of [Mozilla Public License 2.0](https://github.com/cosmos72/gomacro/blob/6835e0d66346/LICENSE)
or any later version.
The Travis Client [Build Status](https://travis-ci.org/travis-ci/travis.rb)
===
![The Travis Mascot](http://about.travis-ci.org/images/travis-mascot-200px.png)
The [travis gem](https://rubygems.org/gems/travis) includes both a [command line client](#command-line-client) and a [Ruby library](#ruby-library) to interface with a Travis CI service. Both work with [travis-ci.org](https://travis-ci.org), [travis-ci.com](https://travis-ci.com) or any custom Travis CI setup you might have. Check out the [installation instructions](#installation) to get it running in no time.
Table of Contents
---
* [Command Line Client](#command-line-client)
+ [Non-API Commands](#non-api-commands)
- [`help`](#help) - helps you out when in dire need of information
- [`version`](#version) - outputs the client version
+ [General API Commands](#general-api-commands)
- [`accounts`](#accounts) - displays accounts and their subscription status
- [`console`](#console) - interactive shell
- [`endpoint`](#endpoint) - displays or changes the API endpoint
- [`login`](#login) - authenticates against the API and stores the token
- [`monitor`](#monitor) - live monitor for what's going on
- [`raw`](#raw) - makes an (authenticated) API call and prints out the result
- [`report`](#report) - generates a report useful for filing issues
- [`repos`](#repos) - lists repositories the user has certain permissions on
- [`sync`](#sync) - triggers a new sync with GitHub
- [`lint`](#lint) - display warnings for a .travis.yml
- [`token`](#token) - outputs the secret API token
- [`whatsup`](#whatsup) - lists most recent builds
- [`whoami`](#whoami) - outputs the current user
+ [Repository Commands](#repository-commands)
- [`branches`](#branches) - displays the most recent build for each branch
- [`cache`](#cache) - lists or deletes repository caches
- [`cancel`](#cancel) - cancels a job or build
- [`disable`](#disable) - disables a project
- [`enable`](#enable) - enables a project
- [`encrypt`](#encrypt) - encrypts values for the .travis.yml
- [`encrypt-file`](#encrypt-file) - encrypts a file and adds decryption steps to .travis.yml
- [`env`](#env) - show or modify build environment variables
- [`history`](#history) - displays a projects build history
- [`init`](#init) - generates a .travis.yml and enables the project
- [`logs`](#logs) - streams test logs
- [`open`](#open) - opens a build or job in the browser
- [`pubkey`](#pubkey) - prints out a repository's public key
- [`requests`](#requests) - lists recent requests
- [`restart`](#restart) - restarts a build or job
- [`settings`](#settings) - access repository settings
- [`setup`](#setup) - sets up an addon or deploy target
- [`show`](#show) - displays a build or job
- [`sshkey`](#sshkey) - checks, updates or deletes an SSH key
- [`status`](#status) - checks status of the latest build
+ [Pro and Enterprise](#pro-and-enterprise)
+ [Environment Variables](#environment-variables)
+ [Desktop Notifications](#desktop-notifications)
+ [Plugins](#plugins)
- [Official Plugins](#official-plugins)
* [Ruby Library](#ruby-library)
+ [Authentication](#authentication)
+ [Using Pro](#using-pro)
+ [Entities](#entities)
- [Stateful Entities](#stateful-entities)
- [Repositories](#repositories)
- [Builds](#builds)
- [Jobs](#jobs)
- [Artifacts](#artifacts)
- [Users](#users)
- [Commits](#commits)
- [Caches](#caches)
- [Repository Settings](#repository-settings)
- [Build Environment Variables](#build-environment-variables)
+ [Listening for Events](#listening-for-events)
+ [Dealing with Sessions](#dealing-with-sessions)
+ [Using Namespaces](#using-namespaces)
* [Installation](#installation)
+ [Updating your Ruby](#updating-your-ruby)
- [Mac OS X via Homebrew](#mac-os-x-via-homebrew)
- [Windows](#windows)
- [Other Unix systems](#other-unix-systems)
- [Ruby versioning tools](#ruby-versioning-tools)
+ [Troubleshooting](#troubleshooting)
- [Ubuntu](#ubuntu)
- [Mac OS X](#mac-os-x)
- [Upgrading from travis-cli](#upgrading-from-travis-cli)
* [Version History](#version-history)
Command Line Client
---
![](http://about.travis-ci.org/images/new-tricks.png)
There are three types of commands: [Non-API Commands](#non-api-commands), [General API Commands](#general-api-commands) and [Repository Commands](#repository-commands). All commands take the form of `travis COMMAND [ARGUMENTS] [OPTIONS]`. You can get a list of commands by running [`help`](#help).
### Non-API Commands
Every Travis command takes three global options:
```
-h, --help Display help
-i, --[no-]interactive be interactive and colorful
-E, --[no-]explode don't rescue exceptions
```
The `--help` option is equivalent to running `travis help COMMAND`.
The `--interactive` option determines whether to include additional information and colors in the output or not (except on Windows; we never display colors on Windows, sorry). If you don't set this option explicitly, you will run in interactive mode if you invoke the command directly in a shell and in non-interactive mode if you pipe it somewhere.
You probably want to use `--explode` if you are working on a patch for the Travis client, as it will give you the Ruby exception instead of a nice error message.
#### `help`
The `help` command will inform you about the arguments and options that the commands take, for instance:
```
$ travis help help
Usage: travis help [command] [options]
-h, --help Display help
-i, --[no-]interactive be interactive and colorful
-E, --[no-]explode don't rescue exceptions
```
Running `help` without a command name will give you a list of all available commands.
#### `version`
As you might have guessed, this command prints out the client's version.
### General API Commands
API commands inherit all options from [Non-API Commands](#non-api-commands).
Additionally, every API command understands the following options:
```
-e, --api-endpoint URL Travis API server to talk to
--pro short-cut for --api-endpoint 'https://api.travis-ci.com/'
--org short-cut for --api-endpoint 'https://api.travis-ci.org/'
-t, --token [ACCESS_TOKEN] access token to use
--debug show API requests
--adapter ADAPTER Faraday adapter to use for HTTP requests
```
You can supply an access token via `--token` if you want to make an authenticated call. If you don't have an access token stored for the API endpoint, it will remember it for subsequent requests. Keep in mind, this is not the "Travis token" used when setting up GitHub hooks (due to security). You probably don't have an access token handy right now. Don't worry, usually you won't use this option but instead just do a [`travis login`](#login).
The `--debug` option will print HTTP requests to STDERR. Like `--explode`, this is really helpful when contributing to this project.
There are many libraries out there to do HTTP requests in Ruby. You can switch amongst common ones with `--adapter`:
```
$ travis show --adapter net-http
...
$ gem install excon
...
$ travis show --adapter excon
...
```
#### `accounts`
The accounts command can be used to list all the accounts you can set up repositories for.
```
$ travis accounts
rkh (<NAME>ase): subscribed, 160 repositories
sinatra (Sinatra): subscribed, 9 repositories
rack (Official Rack repositories): subscribed, 3 repositories
travis-ci (Travis CI): subscribed, 57 repositories
...
```
#### `console`
Running `travis console` gives you an interactive Ruby session with all the [entities](#entities) imported into global namespace.
But why use this over just `irb -r travis`? For one, it will take care of authentication, setting the correct endpoint, etc, and it also allows you to pass in `--debug` if you are curious as to what's actually going on.
```
$ travis console
>> User.current
=> #<User: rkh>
>> Repository.find('sinatra/sinatra')
=> #<Repository: sinatra/sinatra>
>> _.last_build
=> #<Travis::Client::Build: sinatra/sinatra#360>
```
#### `endpoint`
Prints out the API endpoint you're talking to.
```
$ travis endpoint
API endpoint: https://api.travis-ci.org/
```
Handy for using it when working with shell scripts:
```
$ curl "$(travis endpoint)/docs" > docs.html
```
It can also be used to set the default API endpoint used for [General API Commands](#general-api-commands):
```
$ travis endpoint --pro --set-default
API endpoint: https://api.travis-ci.com/ (stored as default)
```
You can use `--drop-default` to remove the setting again:
```
$ travis endpoint --drop-default
default API endpoint dropped (was https://api.travis-ci.com/)
```
#### `login`
The `login` command will, well, log you in. That way, all subsequent commands that run against the same endpoint will be authenticated.
```
$ travis login
We need your GitHub login to identify you.
This information will not be sent to Travis CI, only to GitHub.
The password will not be displayed.
Try running with --github-token or --auto if you don't want to enter your password anyway.
Username: rkh
Password: *******************
Successfully logged in!
```
As you can see above, it will ask you for your GitHub user name and password, but not send these to Travis CI. Instead, it will use them to create a GitHub API token, show the token to Travis, which then on its own checks if you really are who you say you are, and gives you an access token for the Travis API in return. The client will then delete the GitHub token again, just to be sure. But don't worry, all that happens under the hood and fully automatic.
If you don't want it to send your credentials to GitHub, you can create a GitHub token on your own and supply it via `--github-token`. In that case, the client will not delete the GitHub token (as it can't, it needs your password to do this). Travis CI will not store the token, though - after all, it already should have a valid token for you in the database.
A third option is for the really lazy: `--auto`. In this mode the client will try to find a GitHub token for you and just use that. This will only work if you have a [global GitHub token](https://help.github.com/articles/git-over-https-using-oauth-token) stored in your [.netrc](http://blogdown.io/c4d42f87-80dd-45d5-8927-4299cbdf261c/posts/574baa68-f663-4dcf-88b9-9d41310baf2f). If you haven't heard of this, it's worth looking into in general. Again: Travis CI will not store that token.
#### `logout`
This command makes Travis CI forget your access token.
```
$ travis logout --pro
Successfully logged out!
```
#### `monitor`
```
Usage: travis monitor [options]
-h, --help Display help
-i, --[no-]interactive be interactive and colorful
-E, --[no-]explode don't rescue exceptions
--skip-version-check don't check if travis client is up to date
-e, --api-endpoint URL Travis API server to talk to
--pro short-cut for --api-endpoint 'https://api.travis-ci.com/'
--org short-cut for --api-endpoint 'https://api.travis-ci.org/'
-t, --token [ACCESS_TOKEN] access token to use
--debug show API requests
-X, --enterprise [NAME] use enterprise setup (optionally takes name for multiple setups)
-m, --my-repos Only monitor my own repositories
-r, --repo SLUG monitor given repository (can be used more than once)
-R, --store-repo SLUG like --repo, but remembers value for current directory
-n, --[no-]notify [TYPE] send out desktop notifications (optional type: osx, growl, libnotify)
-b, --builds only monitor builds, not jobs
-p, --push monitor push events
-P, --pull monitor pull request events
```
With `monitor` you can watch a live stream of what's going on:
```
$ travis monitor
Monitoring travis-ci.org:
2013-08-05 01:22:40 questmaster/FATpRemote#45 started
2013-08-05 01:22:40 questmaster/FATpRemote#45.1 started
2013-08-05 01:22:41 grangier/python-goose#33.1 passed
2013-08-05 01:22:42 plataformatec/simple_form#666 passed
...
```
You can limit it to a single repository via `--repo SLUG`.
By default, you will receive events for both builds and jobs; you can limit it to builds only via `--builds` (short `-b`):
```
$ travis monitor
Monitoring travis-ci.org:
2013-08-05 01:22:40 questmaster/FATpRemote#45 started
2013-08-05 01:22:42 plataformatec/simple_form#666 passed
...
```
Similarly, you can limit it to builds/jobs for pull requests via `--pull` and for normal pushes via `--push`.
The monitor command can also send out [desktop notifications](#desktop-notifications):
```
$ travis monitor --pro -n
Monitoring travis-ci.com:
...
```
When monitoring specific repositories, notifications will be turned on by default. Disable with `--no-notify`.
#### `raw`
This is really helpful both when working on this client and when exploring the [Travis API](https://api.travis-ci.org). It will simply fire a request against the API endpoint, parse the output and pretty print it. Keep in mind that the client takes care of authentication for you:
```
$ travis raw /repos/travis-ci/travis.rb
{"repo"=>
{"id"=>409371,
"slug"=>"travis-ci/travis.rb",
"description"=>"Travis CI Client (CLI and Ruby library)",
"last_build_id"=>4251410,
"last_build_number"=>"77",
"last_build_state"=>"passed",
"last_build_duration"=>351,
"last_build_language"=>nil,
"last_build_started_at"=>"2013-01-19T18:00:49Z",
"last_build_finished_at"=>"2013-01-19T18:02:17Z"}}
```
Use `--json` if you'd prefer the output to be JSON.
#### `report`
When inspecting a bug or reporting an issue, it can be handy to include a report about the system and configuration used for running a command.
```
$ travis report --pro
System
Ruby: Ruby 2.0.0-p195
Operating System: Mac OS X 10.8.5
RubyGems: RubyGems 2.0.7
CLI
Version: 1.5.8
Plugins: "travis-as-user", "travis-build", "travis-cli-pr"
Auto-Completion: yes
Last Version Check: 2013-11-02 16:25:03 +0100
Session
API Endpoint: https://api.travis-ci.com/
Logged In: as "rkh"
Verify SSL: yes
Enterprise: no
Endpoints
pro: https://api.travis-ci.com/ (access token, current)
org: https://api.travis-ci.org/ (access token)
Last Exception
An error occurred running `travis whoami --pro`:
Travis::Client::Error: access denied
from ...
For issues with the command line tool, please visit https://github.com/travis-ci/travis.rb/issues.
For Travis CI in general, go to https://github.com/travis-ci/travis-ci/issues or email [email protected].
```
This command can also list all known repos and the endpoint to use for them via the `--known-repos` option.
#### `repos`
```
Lists repositories the user has certain permissions on.
Usage: travis repos [options]
-h, --help Display help
-i, --[no-]interactive be interactive and colorful
-E, --[no-]explode don't rescue exceptions
--skip-version-check don't check if travis client is up to date
--skip-completion-check don't check if auto-completion is set up
-e, --api-endpoint URL Travis API server to talk to
-I, --[no-]insecure do not verify SSL certificate of API endpoint
--pro short-cut for --api-endpoint 'https://api.travis-ci.com/'
--org short-cut for --api-endpoint 'https://api.travis-ci.org/'
-t, --token [ACCESS_TOKEN] access token to use
--debug show API requests
-X, --enterprise [NAME] use enterprise setup (optionally takes name for multiple setups)
--adapter ADAPTER Faraday adapter to use for HTTP requests
-m, --match PATTERN only list repositories matching the given pattern (shell style)
-o, --owner LOGIN only list repos for a certain owner
-n, --name NAME only list repos with a given name
-a, --active only list active repositories
-A, --inactive only list inactive repositories
-d, --admin only list repos with (or without) admin access
-D, --no-admin only list repos without admin access
```
Lists repositories and displays whether these are active or not. Has a variety of options to filter repositories.
```
$ travis repos -m 'rkh/travis-*'
rkh/travis-chat (active: yes, admin: yes, push: yes, pull: yes)
Description: example app demoing travis-sso usage
rkh/travis-encrypt (active: yes, admin: yes, push: yes, pull: yes)
Description: proof of concept in browser encryption of travis settings
rkh/travis-lite (active: no, admin: yes, push: yes, pull: yes)
Description: Travis CI without the JavaScript
rkh/travis-surveillance (active: no, admin: yes, push: yes, pull: yes)
Description: Veille sur un projet.
```
In non-interactive mode, it will only output the repository slug, which goes well with xargs:
```
$ travis repos --active --owner travis-ci | xargs -I % travis disable -r %
travis-ci/artifacts: disabled :(
travis-ci/canary: disabled :(
travis-ci/docs-travis-ci-com: disabled :(
travis-ci/dpl: disabled :(
travis-ci/gh: disabled :(
...
```
#### `sync`
```
Usage: travis sync [options]
-h, --help Display help
-i, --[no-]interactive be interactive and colorful
-E, --[no-]explode don't rescue exceptions
-e, --api-endpoint URL Travis API server to talk to
--pro short-cut for --api-endpoint 'https://api.travis-ci.com/'
--org short-cut for --api-endpoint 'https://api.travis-ci.org/'
-t, --token [ACCESS_TOKEN] access token to use
--debug show API requests
-c, --check only check the sync status
-b, --background will trigger sync but not block until sync is done
-f, --force will force sync, even if one is already running
```
Sometimes the information Travis CI has about users and repositories becomes out of date. If that should happen, you can manually trigger a sync:
```
$ travis sync
synchronizing: ........... done
```
The command blocks until the synchronization is done. You can avoid that with `--background`:
```
$ travis sync --background
starting synchronization
```
If you just want to know if your account is being synchronized right now, use `--check`:
```
$ travis sync --check
rkh is currently syncing
```
#### `lint`
This checks a `.travis.yml` file for any issues it might detect.
By default, it will read a file named `.travis.yml` in the current directory:
```
$ travis lint
Warnings for .travis.yml:
[x] your repository must be feature flagged for the os setting to be used
```
You can also give it a path to a different file:
```
$ travis lint example.yml
...
```
Or pipe the content into it:
```
$ echo "foo: bar" | travis lint Warnings for STDIN:
[x] unexpected key foo, dropping
[x] missing key language, defaulting to ruby
```
Like the [`status` command](#status), you can use `-q` to suppress any output, and `-x` to have it set the exit code to 1 if there are any warnings.
```
$ travis lint -qx || echo ".travis.yml does not validate"
```
#### `token`
In order to use the Ruby library you will need to obtain an access token first. To do this simply run the `travis login` command. Once logged in you can check your token with `travis token`:
```
$ travis token
Your access token is super-secret
```
You can use that token for instance with curl:
```
$ curl -H "Authorization: token $(travis token)" https://api.travis-ci.org/users/
{"login":"rkh","name":"<NAME>","email":"[[email protected]](/cdn-cgi/l/email-protection)","gravatar_id":"5c2b452f6eea4a6d84c105ebd971d2a4","locale":"en","is_syncing":false,"synced_at":"2013-01-21T20:31:06Z"}
```
Note that if you just need it for looking at API payloads, we also have the [`raw`](#raw) command.
#### `whatsup`
It's just a tiny feature, but it allows you to take a look at repositories that have recently seen some action (ie the left hand sidebar on [travis-ci.org](https://travis-ci.org)):
```
$ travis whatsup
mysociety/fixmystreet started: #154
eloquent/typhoon started: #228
Pajk/apipie-rails started: #84
qcubed/framework failed: #21
...
```
If you only want to see what happened in your repositories, add the `--my-repos` flag (short: `-m`):
```
$ travis whatsup -m
travis-ci/travis.rb passed: #169
rkh/dpl passed: #50
rubinius/rubinius passed: #3235
sinatra/sinatra errored: #619
rtomayko/tilt failed: #162
ruby-no-kai/rubykaigi2013 passed: #50
rack/rack passed: #519
...
```
#### `whoami`
This command is useful to verify that you're in fact logged in:
```
$ travis whoami
You are rkh (<NAME>)
```
Again, like most other commands, it goes well with shell scripting:
```
$ git clone "https://github.com/$(travis whoami)/some_project"
```
### Repository Commands
```
-h, --help Display help
-i, --[no-]interactive be interactive and colorful
-E, --[no-]explode don't rescue exceptions
--skip-version-check don't check if travis client is up to date
--skip-completion-check don't check if auto-completion is set up
-e, --api-endpoint URL Travis API server to talk to
-I, --[no-]insecure do not verify SSL certificate of API endpoint
--pro short-cut for --api-endpoint 'https://api.travis-ci.com/'
--org short-cut for --api-endpoint 'https://api.travis-ci.org/'
-t, --token [ACCESS_TOKEN] access token to use
--debug show API requests
-X, --enterprise [NAME] use enterprise setup (optionally takes name for multiple setups)
-r, --repo SLUG repository to use (will try to detect from current git clone)
-R, --store-repo SLUG like --repo, but remembers value for current directory
```
Repository commands have all the options [General API Commands](#general-api-commands) have.
Additionally, you can specify the Repository to talk to by providing `--repo owner/name`. However, if you invoke the command inside a clone of the project, the client will figure out this option on its own. Note that it uses the tracked [git remote](http://www.kernel.org/pub/software/scm/git/docs/git-remote.html) for the current branch (and defaults to 'origin' if no tracking is set) to do so. You can use `--store-repo SLUG` once to override it permanently.
It will also automatically pick [Travis Pro](https://travis-ci.com) if it is a private project. You can of course override this decision with `--pro`, `--org` or `--api-endpoint URL`.
#### `branches`
Displays the most recent build for each branch:
```
$ travis branches
hh-add-warning-old-style: #35 passed Add a warning if old-style encrypt is being used
hh-multiline-encrypt: #55 passed Merge branch 'master' into hh-multiline-encrypt
rkh-show-logs-history: #72 passed regenerate gemspec
rkh-debug: #75 passed what?
hh-add-clear-cache-to-global-session: #135 passed Add clear_cache(!) to Travis::Namespace
hh-annotations: #146 passed Initial annotation support
hh-remove-newlines-from-encrypted-string: #148 errored Remove all whitespace from an encrypted string
version-check: #157 passed check travis version for updates from time to time
master: #163 passed add Repository#branches and Repository#branch(name)
```
For more fine grained control and older builds on a specific branch, see [`history`](#history).
#### `cache`
```
Lists or deletes repository caches.
Usage: travis cache [options]
-h, --help Display help
-i, --[no-]interactive be interactive and colorful
-E, --[no-]explode don't rescue exceptions
--skip-version-check don't check if travis client is up to date
--skip-completion-check don't check if auto-completion is set up
-e, --api-endpoint URL Travis API server to talk to
-I, --[no-]insecure do not verify SSL certificate of API endpoint
--pro short-cut for --api-endpoint 'https://api.travis-ci.com/'
--org short-cut for --api-endpoint 'https://api.travis-ci.org/'
-t, --token [ACCESS_TOKEN] access token to use
--debug show API requests
-X, --enterprise [NAME] use enterprise setup (optionally takes name for multiple setups)
-r, --repo SLUG repository to use (will try to detect from current git clone)
-R, --store-repo SLUG like --repo, but remembers value for current directory
-d, --delete delete listed caches
-b, --branch BRANCH only list/delete caches on given branch
-m, --match STRING only list/delete caches where slug matches given string
-f, --force do not ask user to confirm deleting the caches
```
Lists or deletes [directory caches](http://about.travis-ci.org/docs/user/caching/) for a repository:
```
$ travis cache
On branch master:
cache--rvm-2.0.0--gemfile-Gemfile last modified: 2013-11-04 13:45:44 size: 62.21 MiB
cache--rvm-ruby-head--gemfile-Gemfile last modified: 2013-11-04 13:46:55 size: 62.65 MiB
On branch example:
cache--rvm-2.0.0--gemfile-Gemfile last modified: 2013-11-04 13:45:44 size: 62.21 MiB
Overall size of above caches: 187.07 MiB
```
You can filter by branch:
```
$ travis cache --branch master
On branch master:
cache--rvm-2.0.0--gemfile-Gemfile last modified: 2013-11-04 13:45:44 size: 62.21 MiB
cache--rvm-ruby-head--gemfile-Gemfile last modified: 2013-11-04 13:46:55 size: 62.65 MiB
Overall size of above caches: 124.86 MiB
```
And by matching against the slug:
```
$ travis cache --match 2.0.0
On branch master:
cache--rvm-2.0.0--gemfile-Gemfile last modified: 2013-11-04 13:45:44 size: 62.21 MiB
Overall size of above caches: 62.21 MiB
```
You can also use this command to delete caches:
```
$ travis cache -b example -m 2.0.0 --delete
DANGER ZONE: Do you really want to delete all caches on branch example that match 2.0.0? |no| yes
Deleted the following caches:
On branch example:
cache--rvm-2.0.0--gemfile-Gemfile last modified: 2013-11-04 13:45:44 size: 62.21 MiB
Overall size of above caches: 62.21 MiB
```
#### `cancel`
This command will cancel the latest build:
```
$ travis cancel
build #85 has been canceled
```
You can also cancel any build by giving a build number:
```
$ travis cancel 57
build #57 has been canceled
```
Or a single job:
```
$ travis cancel 57.1
job #57.1 has been canceled
```
#### `disable`
If you want to turn off a repository temporarily or indefinitely, you can do so with the `disable` command:
```
$ travis disable
travis-ci/travis.rb: disabled :(
```
#### `enable`
With the `enable` command, you can easily activate a project on Travis CI:
```
$ travis enable
travis-ci/travis.rb: enabled :)
```
It even works when enabling a repo Travis didn't know existed by triggering a sync:
```
$ travis enable -r rkh/test
repository not known to Travis CI (or no access?)
triggering sync: ............. done
rkh/test: enabled
```
If you don't want the sync to be triggered, use `--skip-sync`.
#### `encrypt`
```
Usage: travis encrypt [args..] [options]
-h, --help Display help
-i, --[no-]interactive be interactive and colorful
-E, --[no-]explode don't rescue exceptions
--skip-version-check don't check if travis client is up to date
-e, --api-endpoint URL Travis API server to talk to
--pro short-cut for --api-endpoint 'https://api.travis-ci.com/'
--org short-cut for --api-endpoint 'https://api.travis-ci.org/'
-t, --token [ACCESS_TOKEN] access token to use
--debug show API requests
--adapter ADAPTER Faraday adapter to use for HTTP requests
-r, --repo SLUG repository to use (will try to detect from current git clone)
-R, --store-repo SLUG like --repo, but remembers value for current directory
-a, --add [KEY] adds it to .travis.yml under KEY (default: env.global)
-s, --[no-]split treat each line as a separate input
-p, --append don't override existing values, instead treat as list
-x, --override override existing value
```
This command is useful to encrypt [environment variables](http://about.travis-ci.org/docs/user/encryption-keys/) or deploy keys for private dependencies.
```
$ travis encrypt FOO=bar
Please add the following to your .travis.yml file:
secure: "<KEY>
Pro Tip™: You can add it automatically by running with --add.
```
For deploy keys, it is really handy to pipe them into the command:
```
$ cat id_rsa | travis encrypt
```
Another use case for piping files into it: if you have a file with sensitive environment variables, like foreman's [.env](https://ddollar.github.com/foreman/#ENVIRONMENT) file, you can tell the client to encrypt every line separately via `--split`:
```
$ cat .env | travis encrypt --split
Please add the following to your .travis.yml file:
secure: "<KEY>
secure: "<KEY>
Pro Tip: You can add it automatically by running with --add.
```
As suggested, the client can also add them to your `.travis.yml` for you:
```
$ travis encrypt FOO=bar --add
```
This will by default add it as global variables for every job. You can also add it as matrix entries by providing a key:
```
$ travis encrypt FOO=bar --add env.matrix
```
There are two ways the client can treat existing values:
* Turn existing value into a list if it isn't already, append new value to that list. This is the default behavior for keys that start with `env.` and can be enforced with `--append`.
* Replace existing value. This is the default behavior for keys that do not start with `env.` and can be enforced with `--override`.
#### `encrypt-file`
```
Encrypts a file and adds decryption steps to .travis.yml.
Usage: travis encrypt-file INPUT_PATH [OUTPUT_PATH] [OPTIONS]
-h, --help Display help
-i, --[no-]interactive be interactive and colorful
-E, --[no-]explode don't rescue exceptions
--skip-version-check don't check if travis client is up to date
--skip-completion-check don't check if auto-completion is set up
-e, --api-endpoint URL Travis API server to talk to
-I, --[no-]insecure do not verify SSL certificate of API endpoint
--pro short-cut for --api-endpoint 'https://api.travis-ci.com/'
--org short-cut for --api-endpoint 'https://api.travis-ci.org/'
-t, --token [ACCESS_TOKEN] access token to use
--debug show API requests
-X, --enterprise [NAME] use enterprise setup (optionally takes name for multiple setups)
-r, --repo SLUG repository to use (will try to detect from current git clone)
-R, --store-repo SLUG like --repo, but remembers value for current directory
-K, --key KEY encryption key to be used (randomly generated otherwise)
--iv IV encryption IV to be used (randomly generated otherwise)
-d, --decrypt decrypt the file instead of encrypting it, requires key and iv
-f, --force override output file if it exists
-p, --print-key print (possibly generated) key and iv
-w, --decrypt-to PATH where to write the decrypted file to on the Travis CI VM
-a, --add [STAGE] automatically add command to .travis.yml (default stage is before_install)
```
This command will encrypt a file for you using a symmetric encryption (AES-256), and it will store the secret in a [secure variable](#env). It will output the command you can use in your build script to decrypt the file.
```
$ travis encrypt-file bacon.txt
encrypting bacon.txt for rkh/travis-encrypt-file-example
storing result as bacon.txt.enc
storing secure env variables for decryption
Please add the following to your build script (before_install stage in your .travis.yml, for instance):
openssl aes-256-cbc -K $encrypted_0a6446eb3ae3_key -iv $encrypted_0a6446eb3ae3_iv -in bacon.txt.enc -out bacon.txt -d
Pro Tip: You can add it automatically by running with --add.
Make sure to add bacon.txt.enc to the git repository.
Make sure not to add bacon.txt to the git repository.
Commit all changes to your .travis.yml.
```
You can also use `--add` to have it automatically add the decrypt command to your `.travis.yml`
```
$ travis encrypt-file bacon.txt --add
encrypting bacon.txt for rkh/travis-encrypt-file-example
storing result as bacon.txt.enc
storing secure env variables for decryption
Make sure to add bacon.txt.enc to the git repository.
Make sure not to add bacon.txt to the git repository.
Commit all changes to your .travis.yml.
```
#### `env`
```
Show or modify build environment variables.
Usage: travis env list [options]
travis env set name value [options]
travis env unset [names..] [options]
travis env copy [names..] [options]
travis env clear [OPTIONS]
-h, --help Display help
-i, --[no-]interactive be interactive and colorful
-E, --[no-]explode don't rescue exceptions
--skip-version-check don't check if travis client is up to date
--skip-completion-check don't check if auto-completion is set up
-e, --api-endpoint URL Travis API server to talk to
-I, --[no-]insecure do not verify SSL certificate of API endpoint
--pro short-cut for --api-endpoint 'https://api.travis-ci.com/'
--org short-cut for --api-endpoint 'https://api.travis-ci.org/'
--staging talks to staging system
-t, --token [ACCESS_TOKEN] access token to use
--debug show API requests
-X, --enterprise [NAME] use enterprise setup (optionally takes name for multiple setups)
--adapter ADAPTER Faraday adapter to use for HTTP requests
--as USER authenticate as given user
-r, --repo SLUG repository to use (will try to detect from current git clone)
-R, --store-repo SLUG like --repo, but remembers value for current directory
-P, --[no-]public make new values public
-p, --[no-]private make new values private
-u, --[no-]unescape do not escape values
-f, --force do not ask for confirmation when clearing out all variables
```
You can set, list and unset environment variables, or copy them from the current environment:
```
$ travis env set foo bar --public
[+] setting environment variable $foo
$ travis env list
# environment variables for travis-ci/travis.rb
foo=bar
$ export foo=foobar
$ travis env copy foo bar
[+] setting environment variable $foo
[+] setting environment variable $bar
$ travis env list
# environment variables for travis-ci/travis.rb
foo=foobar
bar=[secure]
$ travis env unset foo bar
[x] removing environment variable $foo
[x] removing environment variable $bar
```
#### `history`
```
Displays a projects build history.
Usage: travis history [options]
-h, --help Display help
-i, --[no-]interactive be interactive and colorful
-E, --[no-]explode don't rescue exceptions
--skip-version-check don't check if travis client is up to date
--skip-completion-check don't check if auto-completion is set up
-e, --api-endpoint URL Travis API server to talk to
-I, --[no-]insecure do not verify SSL certificate of API endpoint
--pro short-cut for --api-endpoint 'https://api.travis-ci.com/'
--org short-cut for --api-endpoint 'https://api.travis-ci.org/'
-t, --token [ACCESS_TOKEN] access token to use
--debug show API requests
-X, --enterprise [NAME] use enterprise setup (optionally takes name for multiple setups)
-r, --repo SLUG repository to use (will try to detect from current git clone)
-R, --store-repo SLUG like --repo, but remembers value for current directory
-a, --after BUILD Only show history after a given build number
-p, --pull-request NUMBER Only show history for the given Pull Request
-b, --branch BRANCH Only show history for the given branch
-l, --limit LIMIT Maximum number of history items
-d, --date Include date in output
--[no-]all Display all history items
```
You can check out what the recent builds look like:
```
$ travis history
#77 passed: master fix name clash
#76 failed: master Merge pull request #11 from travis-ci/rkh-show-logs-history
#75 passed: rkh-debug what?
#74 passed: rkh-debug all tests pass locally and on the travis vm I spin up :(
#73 failed: Pull Request #11 regenerate gemspec
#72 passed: rkh-show-logs-history regenerate gemspec
#71 failed: Pull Request #11 spec fix for (older) rubinius
#70 passed: rkh-show-logs-history spec fix for (older) rubinius
#69 failed: Pull Request #11 strange fix for rubinius
#68 failed: rkh-show-logs-history strange fix for rubinius
```
By default, it will display the last 10 builds. You can limit (or extend) the number of builds with `--limit`:
```
$ travis history --limit 2
#77 passed: master fix name clash
#76 failed: master Merge pull request #11 from travis-ci/rkh-show-logs-history
```
You can use `--after` to display builds after a certain build number (or, well, before, but it's called after to use the same phrases as the API):
```
$ travis history --limit 2 --after 76
#75 passed: rkh-debug what?
#74 passed: rkh-debug all tests pass locally and on the travis vm I spin up :(
```
You can also limit the history to builds for a certain branch:
```
$ travis history --limit 3 --branch master
#77 passed: master fix name clash
#76 failed: master Merge pull request #11 from travis-ci/rkh-show-logs-history
#57 passed: master Merge pull request #5 from travis-ci/hh-multiline-encrypt
```
Or a certain Pull Request:
```
$ travis history --limit 3 --pull-request 5
#56 passed: Pull Request #5 Merge branch 'master' into hh-multiline-encrypt
#49 passed: Pull Request #5 improve output
#48 passed: Pull Request #5 let it generate accessor for line splitting automatically
```
#### `init`
```
Usage: travis init [language] [file] [options]
-h, --help Display help
-i, --[no-]interactive be interactive and colorful
-E, --[no-]explode don't rescue exceptions
--skip-version-check don't check if travis client is up to date
-e, --api-endpoint URL Travis API server to talk to
--pro short-cut for --api-endpoint 'https://api.travis-ci.com/'
--org short-cut for --api-endpoint 'https://api.travis-ci.org/'
-t, --token [ACCESS_TOKEN] access token to use
--debug show API requests
--adapter ADAPTER Faraday adapter to use for HTTP requests
-r, --repo SLUG repository to use (will try to detect from current git clone)
-R, --store-repo SLUG like --repo, but remembers value for current directory
-s, --skip-sync don't trigger a sync if the repo is unknown
-f, --force override .travis.yml if it already exists
-k, --skip-enable do not enable project, only add .travis.yml
-p, --print-conf print generated config instead of writing to file
--script VALUE sets script option in .travis.yml (can be used more than once)
--before-script VALUE sets before_script option in .travis.yml (can be used more than once)
--after-script VALUE sets after_script option in .travis.yml (can be used more than once)
--after-success VALUE sets after_success option in .travis.yml (can be used more than once)
--install VALUE sets install option in .travis.yml (can be used more than once)
--before-install VALUE sets before_install option in .travis.yml (can be used more than once)
--compiler VALUE sets compiler option in .travis.yml (can be used more than once)
--otp-release VALUE sets otp_release option in .travis.yml (can be used more than once)
--go VALUE sets go option in .travis.yml (can be used more than once)
--jdk VALUE sets jdk option in .travis.yml (can be used more than once)
--node-js VALUE sets node_js option in .travis.yml (can be used more than once)
--perl VALUE sets perl option in .travis.yml (can be used more than once)
--php VALUE sets php option in .travis.yml (can be used more than once)
--python VALUE sets python option in .travis.yml (can be used more than once)
--rvm VALUE sets rvm option in .travis.yml (can be used more than once)
--scala VALUE sets scala option in .travis.yml (can be used more than once)
--env VALUE sets env option in .travis.yml (can be used more than once)
--gemfile VALUE sets gemfile option in .travis.yml (can be used more than once)
```
When setting up a new project, you can run `travis init` to generate a `.travis.yml` and [enable](#enable) the project:
```
$ travis init java
.travis.yml file created!
travis-ci/java-example: enabled :)
```
You can also set certain values via command line flags (see list above):
```
$ travis init c --compiler clang
.travis.yml file created!
travis-ci/c-example: enabled :)
```
#### `logs`
Given a job number, logs simply prints out that job's logs. By default it will display the first job of the latest build.
```
$ travis logs
displaying logs for travis-ci/travis.rb#317.1
[... more logs ...]
Your bundle is complete! Use `bundle show [gemname]` to see where a bundled gem is installed.
$ bundle exec rake
/home/travis/.rvm/rubies/ruby-1.8.7-p371/bin/ruby -S rspec spec -c
..............................................................................................................................................................................................................................................................................
Finished in 4.46 seconds
270 examples, 0 failures
Done. Build script exited with: 0
```
The info line about the job being displayed is written to stderr; the logs themselves are written to stdout.
It takes an optional argument that can be a job number:
```
$ travis logs 100.3
displaying logs for travis-ci/travis.rb#100.3
```
A build number (in which case it will pick the build's first job):
```
$ travis logs 100
displaying logs for travis-ci/travis.rb#100.1
```
Just the job suffix, which will pick the corresponding job from the latest build:
```
$ travis logs .2
displaying logs for travis-ci/travis.rb#317.2
```
A branch name:
```
$ travis logs ghe
displaying logs for travis-ci/travis.rb#270.1
```
You can delete the logs with the `--delete` flag, which optionally takes a reason as argument:
```
$ travis logs --delete
DANGER ZONE: Do you really want to delete the build log for travis-ci/travis.rb#559.1? |no| yes
deleting log for travis-ci/travis.rb#559.1
$ travis logs 1.7 --delete "contained confidential data" --force
deleting log for travis-ci/travis.rb#1.7
```
#### `open`
Opens the project view in the Travis CI web interface. If you pass it a build or job number, it will open that specific view:
```
$ travis open
```
If you just want the URL printed out instead of opened in a browser, pass `--print`.
If instead you want to open the repository, compare or pull request view on GitHub, use `--github`.
```
$ travis open 56 --print --github
web view: https://github.com/travis-ci/travis.rb/pull/5
```
#### `pubkey`
Outputs the public key for a repository.
```
$ travis pubkey
Public key for travis-ci/travis.rb:
ssh-rsa ...
$ travis pubkey -r rails/rails > rails.key
```
The `--pem` flag will print out the key PEM encoded:
```
$ travis pubkey --pem
Public key for travis-ci/travis.rb:
---BEGIN PUBLIC KEY---
...
---END PUBLIC KEY---
```
Whereas the `--fingerprint` flag will print out the key's fingerprint:
```
$ travis pubkey --fingerprint
Public key for travis-ci/travis.rb:
9f:57:01:4b:af:42:67:1e:b4:3c:0f:b6:cd:cc:c0:04
```
#### `requests`
With the `requests` command, you can list the build requests received by Travis CI from GitHub. This is handy for figuring out why a repository might not be building.
```
$ travis requests -r sinatra/sinatra
push to master accepted (triggered new build)
abc51e2 - Merge pull request #847 from gogotanaka/add_readme_ja
received at: 2014-02-16 09:26:36
PR #843 rejected (skipped through commit message)
752201c - Update Spanish README with tense, verb, and word corrections. [ci skip]
received at: 2014-02-16 05:07:16
```
You can use `-l`/`--limit` to limit the number of requests displayed.
#### `restart`
This command will restart the latest build:
```
$ travis restart
build #85 has been restarted
```
You can also restart any build by giving a build number:
```
$ travis restart 57
build #57 has been restarted
```
Or a single job:
```
$ travis restart 57.1
job #57.1 has been restarted
```
#### `settings`
Certain repository settings can be read via the CLI:
```
$ travis settings
Settings for travis-ci/travis.rb:
[-] builds_only_with_travis_yml Only run builds with a .travis.yml
[+] build_pushes Build pushes
[+] build_pull_requests Build pull requests
[-] maximum_number_of_builds Maximum number of concurrent builds
```
You can also filter the settings by passing them in as arguments:
```
$ travis settings build_pushes build_pull_requests
Settings for travis-ci/travis.rb:
[+] build_pushes Build pushes
[+] build_pull_requests Build pull requests
```
It is also possible to change these settings via `--enable`, `--disable` and `--set`:
```
$ travis settings build_pushes --disable
Settings for travis-ci/travis.rb:
[-] build_pushes Build pushes
$ travis settings maximum_number_of_builds --set 1
Settings for travis-ci/travis.rb:
1 maximum_number_of_builds Maximum number of concurrent builds
```
Or, alternatively, you can use `-c` to configure the settings interactively:
```
$ travis settings -c
Settings for travis-ci/travis.rb:
Only run builds with a .travis.yml? |yes| no
Build pushes? |no| yes
Build pull requests? |yes|
Maximum number of concurrent builds: |1| 5
```
#### `setup`
Helps you configure Travis addons.
```
Usage: travis setup service [options]
-h, --help Display help
-i, --[no-]interactive be interactive and colorful
-E, --[no-]explode don't rescue exceptions
--skip-version-check don't check if travis client is up to date
-e, --api-endpoint URL Travis API server to talk to
--pro short-cut for --api-endpoint 'https://api.travis-ci.com/'
--org short-cut for --api-endpoint 'https://api.travis-ci.org/'
-t, --token [ACCESS_TOKEN] access token to use
--debug show API requests
--adapter ADAPTER Faraday adapter to use for HTTP requests
-r, --repo SLUG repository to use (will try to detect from current git clone)
-R, --store-repo SLUG like --repo, but remembers value for current directory
-f, --force override config section if it already exists
```
Available services: `anynines`, `appfog`, `artifacts`, `biicode`, `cloudcontrol`, `cloudfiles`, `cloudfoundry`, `cloud66`, `codedeploy`, `deis`, `divshot`, `elasticbeanstalk`, `engineyard`, `gcs`, `hackage`, `heroku`, `modulus`, `npm`, `ninefold`, `nodejitsu`, `openshift`, `opsworks`, `pypi`, `releases`, `rubygems`, `s3` and `sauce_connect`.
Example:
```
$ travis setup heroku
Deploy only from travis-ci/travis-chat? |yes|
Encrypt API key? |yes|
```
#### `show`
Displays general information about the latest build:
```
$ travis show
Build #77: fix name clash
State: passed
Type: push
Compare URL: https://github.com/travis-ci/travis.rb/compare/7cc9b739b0b6...39b66ee24abe
Duration: 5 min 51 sec
Started: 2013-01-19 19:00:49
Finished: 2013-01-19 19:02:17
#77.1 passed: 45 sec rvm: 1.8.7
#77.2 passed: 50 sec rvm: 1.9.2
#77.3 passed: 45 sec rvm: 1.9.3
#77.4 passed: 46 sec rvm: 2.0.0
#77.5 failed: 1 min 18 sec rvm: jruby (failure allowed)
#77.6 passed: 1 min 27 sec rvm: rbx
```
Any other build:
```
$ travis show 1
Build #1: add .travis.yml
State: failed
Type: push
Compare URL: https://github.com/travis-ci/travis.rb/compare/ad817bc37c76...b8c5d3b463e2
Duration: 3 min 16 sec
Started: 2013-01-13 23:15:22
Finished: 2013-01-13 23:21:38
#1.1 failed: 21 sec rvm: 1.8.7
#1.2 failed: 34 sec rvm: 1.9.2
#1.3 failed: 24 sec rvm: 1.9.3
#1.4 failed: 52 sec rvm: 2.0.0
#1.5 failed: 38 sec rvm: jruby
#1.6 failed: 27 sec rvm: rbx
```
The last build for a given branch:
```
$ travis show rkh-debug
Build #75: what?
State: passed
Type: push
Branch: rkh-debug
Compare URL: https://github.com/travis-ci/travis.rb/compare/8d4aa5254359...7ef33d5e5993
Duration: 6 min 16 sec
Started: 2013-01-19 18:51:17
Finished: 2013-01-19 18:52:43
#75.1 passed: 1 min 10 sec rvm: 1.8.7
#75.2 passed: 51 sec rvm: 1.9.2
#75.3 passed: 36 sec rvm: 1.9.3
#75.4 passed: 48 sec rvm: 2.0.0
#75.5 failed: 1 min 26 sec rvm: jruby (failure allowed)
#75.6 passed: 1 min 25 sec rvm: rbx
```
Or a job:
```
$ travis show 77.3
Job #77.3: fix name clash
State: passed
Type: push
Compare URL: https://github.com/travis-ci/travis.rb/compare/7cc9b739b0b6...39b66ee24abe
Duration: 45 sec
Started: 2013-01-19 19:00:49
Finished: 2013-01-19 19:01:34
Allow Failure: false
Config: rvm: 1.9.3
```
#### `sshkey`
```
Checks, updates or deletes an SSH key.
Usage: travis sshkey [OPTIONS]
-h, --help Display help
-i, --[no-]interactive be interactive and colorful
-E, --[no-]explode don't rescue exceptions
-e, --api-endpoint URL Travis API server to talk to
-I, --[no-]insecure do not verify SSL certificate of API endpoint
--pro short-cut for --api-endpoint 'https://api.travis-ci.com/'
--org short-cut for --api-endpoint 'https://api.travis-ci.org/'
-t, --token [ACCESS_TOKEN] access token to use
--debug show API requests
-X, --enterprise [NAME] use enterprise setup (optionally takes name for multiple setups)
-r, --repo SLUG repository to use (will try to detect from current git clone)
-R, --store-repo SLUG like --repo, but remembers value for current directory
-D, --delete remove SSH key
-d, --description DESCRIPTION set description
-u, --upload FILE upload key from given file
-s, --stdin upload key read from stdin
-c, --check set exit code depending on key existing
-g, --generate generate SSH key and set up for given GitHub user
-p, --passphrase PASSPHRASE pass phrase to decrypt with when using --upload
```
*This feature is for [Pro and Enterprise](#pro-and-enterprise) only.*
With the `sshkey` command you can check if there is a custom SSH key set up. Custom SSH keys are used for cloning the repository.
```
$ travis sshkey
No custom SSH key installed.
```
You can also use it to upload an SSH key:
```
$ travis sshkey --upload ~/.ssh/id_rsa
Key description: Test Key
updating ssh key for travis-pro/test-project with key from /Users/konstantin/.ssh/id_rsa
Current SSH key: Test Key
```
And to remove it again:
```
$ travis sshkey --delete
DANGER ZONE: Remove SSH key for travis-pro/test-project? |no| yes
removing ssh key for travis-pro/test-project
No custom SSH key installed.
```
You can also have it generate a key for a given GitHub user (for instance, for a dedicated CI user that only has read access). The public key will automatically be added to GitHub and the private key to Travis CI:
```
$ travis sshkey --generate
We need the GitHub login for the account you want to add the key to.
This information will not be sent to Travis CI, only to api.github.com.
The password will not be displayed.
Username: travisbot
Password for travisbot: **************
Generating RSA key.
Uploading public key to GitHub.
Uploading private key to Travis CI.
```
See the [private dependencies example](examples/cli/private_dependencies.md) for an in-detail description.
#### `status`
```
Usage: travis status [options]
-h, --help Display help
-i, --[no-]interactive be interactive and colorful
-E, --[no-]explode don't rescue exceptions
-e, --api-endpoint URL Travis API server to talk to
--pro short-cut for --api-endpoint 'https://api.travis-ci.com/'
--org short-cut for --api-endpoint 'https://api.travis-ci.org/'
-t, --token [ACCESS_TOKEN] access token to use
--debug show API requests
-r, --repo SLUG repository to use (will try to detect from current git clone)
-R, --store-repo SLUG like --repo, but remembers value for current directory
-x, --[no-]exit-code sets the exit code to 1 if the build failed
-q, --[no-]quiet does not print anything
-p, --[no-]fail-pending sets the status code to 1 if the build is pending
```
Outputs a one-line status message about the project's last build. With `-q` that line will not even be printed out. How is that useful? Combine it with `-x` and the exit code will be 1 if the build failed; with `-p` it will be 1 for a pending build.
```
$ travis status -qpx && cap deploy
```
### Pro and Enterprise
By default, [General API Commands](#general-api-commands) will talk to [api.travis-ci.org](https://api.travis-ci.org). You can change this by supplying `--pro` for [api.travis-ci.com](https://api.travis-ci.com) or `--api-endpoint` with your own endpoint. Note that all [Repository Commands](#repository-commands) will try to figure out the API endpoint to talk to automatically depending on the project's visibility on GitHub.
```
$ travis login --pro
...
$ travis monitor --pro -m
...
```
The custom `--api-endpoint` option is handy for local development:
```
$ travis whatsup --api-endpoint http://localhost:3000
...
```
If you have a Travis Enterprise setup in house, you can use the `--enterprise` option (or short `-X`). It will ask you for the enterprise domain the first time it is used.
```
$ travis login -X
Enterprise domain: travisci.example.com
...
$ travis whatsup -X
...
```
Note that currently [Repository Commands](#repository-commands) will not be able to detect Travis Enterprise automatically. You will have to use the `-X` flag at least once per repository. The command line tool will remember the API endpoint for subsequent commands issued against the same repository.
### Environment Variables
You can set the following environment variables to influence the travis behavior:
* `$TRAVIS_TOKEN` - access token to use when the `--token` flag is not used
* `$TRAVIS_ENDPOINT` - API endpoint to use when the `--api-endpoint`, `--org` or `--pro` flag is not used
* `$TRAVIS_CONFIG_PATH` - directory to store configuration in (defaults to ~/.travis)
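For example, instead of passing `--api-endpoint` to every command, you could export the endpoint once (here a hypothetical local instance):
```
$ export TRAVIS_ENDPOINT=http://localhost:3000
$ travis whatsup
...
```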
### Desktop Notifications
Some commands support sending desktop notifications. The following notification systems are currently supported:
* **Notification Center** - requires Mac OSX 10.8 or later and [Notification Center](http://support.apple.com/kb/ht5362) must be running under the system executing the `travis` command.
* **Growl** - [growlnotify](http://growl.info/downloads#generaldownloads) has to be installed and [Growl](https://itunes.apple.com/us/app/growl/id467939042?mt=12&ign-mpt=uo%3D4) needs to be running. The Windows version of Growl is currently not supported.
* **libnotify** - needs [libnotify](http://www.linuxfromscratch.org/blfs/view/svn/x/libnotify.html) installed, including the `notify-send` executable.
### Plugins
The `travis` binary has rudimentary support for plugins: It tries to load all files matching `~/.travis/*/init.rb`. Note that the APIs plugins use are largely semi-private. That is, they should remain stable, but are not part of the public API covered by semantic versioning. You can list the installed plugins via [`travis report`](#report).
It is possible to define new commands directly in the [init.rb](https://github.com/travis-ci/travis-build/blob/master/init.rb) or to set up [lazy-loading](https://github.com/travis-ci/travis-cli-pr/blob/master/init.rb) for these.
#### Official Plugins
* [travis-cli-gh](https://github.com/travis-ci/travis-cli-gh#readme): Plugin for interacting with the GitHub API.
Ruby Library
---
There are two approaches to using the Ruby library: a straightforward one with a single global session:
```
require 'travis'
rails = Travis::Repository.find('rails/rails')
puts "oh no" unless rails.green?
```
And one where you have to instantiate your own session:
```
require 'travis/client'
client = Travis::Client.new
rails = client.repo('rails/rails')
puts "oh no" unless rails.green?
```
For the most part the two behave the same and the entities you get back look identical; the difference is that the global style offers convenient constants as part of the API, while the client style doesn't. In fact, the "global" session style uses `Travis::Client` internally.
So, which one to choose? The global style has one session, whereas with the client style, you have one session per client instance. Each session has its own cache and identity map. This might matter for long running processes. If you use a new session for separate units of work, you can be pretty sure to not leak any objects. On the other hand using the constants or reusing the same session might save you from unnecessary HTTP requests.
Either way, if you use the global approach or long-lived clients, here is how you make sure not to have stale data around:
```
Travis.clear_cache
client.clear_cache
```
Note that this will still keep the identity map around, it will only drop all attributes. To clear the identity map, you can use the `clear_cache!` method. However, if you do that, you should not keep old instances of any entities (like repositories, etc) around.
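For completeness, a minimal sketch of dropping the identity map as well (after which you should discard any entity instances you still hold):
```
Travis.clear_cache!
client.clear_cache!
```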
### Authentication
Authentication is pretty easy, you just need to set an access token:
```
require 'travis'
Travis.access_token = "..."
puts "Hello #{Travis::User.current.name}!"
```
Or with your own client instance:
```
require 'travis/client'
client = Travis::Client.new(access_token: "...")
puts "Hello #{client.user.name}"
```
See [the token command](#token) for obtaining the access token used by the CLI.
If you don't have an access token for Travis CI, you can use a GitHub access token to get one:
```
require 'travis'
Travis.github_auth("...")
puts "Hello #{Travis::User.current.name}!"
```
Travis CI will not store that token.
It also ships with a tool for generating a GitHub token from a user name and password via the GitHub API:
```
require 'travis'
require 'travis/tools/github'
# drop_token will make the token a temporary one
github = Travis::Tools::Github.new(drop_token: true) do |g|
g.ask_login = -> { print("GitHub login: "); gets }
g.ask_password = -> { print("Password: "); gets }
g.ask_otp = -> { print("Two-factor token: "); gets }
end
github.with_token do |token|
  Travis.github_auth(token)
end
puts "Hello #{[Travis](/gems/travis-async-listener/Travis "Travis (module)")::User.current.name}!"
```
There is also `travis/auto_login`, which will try to read the CLI configuration or .netrc for a Travis CI or GitHub token to authenticate with automatically:
```
require 'travis/auto_login'
puts "Hello #{[Travis](/gems/travis-async-listener/Travis "Travis (module)")::User.current.name}!"
```
### Using Pro
Using the library with private projects pretty much works the same, except you use `Travis::Pro`.
Keep in mind that you need to authenticate.
```
require 'travis/pro'
Travis::Pro.access_token = '...'
user = Travis::Pro::User.current
puts "Hello #{user.name}!"
```
There is also `travis/pro/auto_login`, which will try to read the CLI configuration or .netrc for a Travis CI or GitHub token to authenticate with automatically:
```
require 'travis/pro/auto_login'
puts "Hello #{[Travis](/gems/travis-async-listener/Travis "Travis (module)")::[Pro](/gems/travis-async-listener/Travis#Pro-constant "Travis::Pro (constant)")::User.current.name}!"
```
### Entities
Entities are like the models in Travis Client land. They hold the data, and they are usually what you talk to when you want something.
They are pretty much normal Ruby objects.
The Travis session will cache all entities, so don't worry about loading the same one twice.
Once you've got hold of one, you can easily reload it at any time if you want to make sure the data is fresh:
```
rails = Travis::Repository.find('rails/rails')
sleep 1.hour
rails.reload
```
The travis gem supports lazy and partial loading, so if you want to make sure you have all the data, just call load.
```
rails.load
```
This is not something you should usually do, as partial loading is actually your friend (keeps requests to a minimum).
#### Stateful Entities
[Repositories](#repositories), [Builds](#builds) and [Jobs](#jobs) all are basically state machines, which means they implement the following methods:
```
require 'travis'
build = Travis::Repository.find('rails/rails').last_build
p build.canceled?
p build.created?
p build.errored?
p build.failed?
p build.finished?
p build.green?
p build.passed?
p build.pending?
p build.queued?
p build.red?
p build.running?
p build.started?
p build.successful?
p build.unsuccessful?
p build.yellow?
p build.color
```
Builds and jobs also have a `state` method. For repositories, use `last_build.state`.
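For example, a small sketch of reading the state directly (the exact strings come from the API, e.g. "created", "passed" or "failed"):
```
require 'travis'
build = Travis::Repository.find('rails/rails').last_build
puts build.state             # e.g. "passed"
puts build.jobs.first.state  # jobs expose #state the same way
```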
#### Repositories
Repositories are probably one of the first entities you'll load. It's pretty straightforward, too.
```
require 'travis'
Travis::Repository.find('rails/rails')             # find by slug
Travis::Repository.find(891)                       # find by id
Travis::Repository.find_all(owner_name: 'rails')   # all repos in the rails organization
Travis::Repository.current                         # repos that see some action right now
# all repos with the same owner as the repo with id 891
Travis::Repository.find(891).owner.repositories
```
Once you have a repository, you can for instance encrypt some strings with its private key:
```
require 'travis'
repo = Travis::Repository.find('rails/rails')
puts repo.encrypt('FOO=bar')
```
Repositories are [stateful](#stateful-entities).
You can enable or disable a repository with the methods that go by the same name.
```
rails.disable
system "push all the things"
rails.enable
```
If you want to enable a new project, you might have to do a sync first.
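A short sketch of that flow (the slug is just a placeholder):
```
require 'travis'
Travis.access_token = '...'
Travis::User.current.sync                        # let Travis CI pick up the new repository
Travis::Repository.find('me/new-project').enable
```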
#### Builds
You could load a build by its id using `Travis::Build.find`. But most of the time you won't have the id handy, so you'd usually start with a repository.
```
require 'travis'
rails = Travis::Repository.find('rails/rails')
rails.last_build                # the latest build
rails.recent_builds             # the last 20 or so builds (don't rely on that number)
rails.builds(after_number: 42)  # the last 20 or so builds *before* 42
rails.build(42)                 # build with the number 42 (not the id!)
rails.builds # Enumerator for #each_build
# this will loop through all builds
rails.each_build do |build|
puts "#{build.number}: #{build.state}"
end
# this will loop through all builds before build 42
rails.each_build(after_number: 42) do |build|
puts "#{build.number}: #{build.state}"
end
```
Note that `each_build` (and thus `builds` without an argument) is lazy and uses pagination, so you can safely do things like this:
```
build = rails.builds.detect { |b| b.failed? }
puts "Last failing Rails build: #{build.number}"
```
Without having to load more than 6000 builds.
You can restart a build, if the current user has sufficient permissions on the repository:
```
rails.last_build.restart
```
Same goes for canceling it:
```
rails.last_build.cancel
```
You can also retrieve a Hash mapping branch names to the latest build on that given branch via `branches` or use the `branch` method to get the last build for a specific branch:
```
if rails.branch('4-0-stable').green?
puts "Time for another 4.0.x release!"
end
count = rails.branches.size
puts "#{count} rails branches tested on travis"
```
#### Jobs
Jobs behave a lot like [builds](#builds), and similar to them, you probably don't have the id ready. You can get the jobs from a build:
```
rails.last_build.jobs.each do |job|
puts "#{job.number} took #{job.duration} seconds"
end
```
If you have the job number, you can also reach a job directly from the repository:
```
rails.job('5000.1')
```
Like builds, you can also restart single jobs:
```
rails.job('5000.1').restart
```
Same goes for canceling it:
```
rails.job('5000.1').cancel
```
#### Artifacts
The artifacts you usually care for are probably logs. You can reach them directly from a build:
```
require 'travis'
repo = Travis::Repository.find('travis-ci/travis.rb')
job = repo.last_build.jobs.first
puts job.log.body
```
If you plan to print out the body, be aware that it might contain malicious escape codes. For this reason, we added `colorized_body`, which removes all the unprintable characters, except for ANSI color codes, and `clean_body` which also removes the color codes.
```
puts job.log.colorized_body
```
You can stream a body for a job that is currently running by passing a block:
```
job.log.body { |chunk| print chunk }
```
#### Users
The only user you usually get access to is the currently authenticated one.
```
require 'travis'
Travis.access_token = '...'
user = Travis::User.current
puts "Hello, #{user.login}! Or should I call you... #{user.name.upcase}!?"
```
If some data gets out of sync between GitHub and Travis, you can use the user object to trigger a new sync.
```
Travis::User.current.sync
```
#### Commits
Commits cannot be loaded directly. They come as a byproduct of [jobs](#jobs) and [builds](#builds).
```
require 'travis'
repo = Travis::Repository.find('travis-ci/travis.rb')
commit = repo.last_build.commit
puts "Last tested commit: #{commit.short_sha} on #{commit.branch} by #{commit.author_name} - #{commit.subject}"
```
#### Caches
Caches can be fetched for a repository.
```
require 'travis/pro'
Travis::Pro.access_token = "MY SECRET TOKEN"
repo = Travis::Pro::Repository.find("my/rep")
repo.caches.each do |cache|
puts "#{cache.branch}: #{cache.size}"
  cache.delete
end
```
It is also possible to delete multiple caches with a single API call:
```
repo.delete_caches(branch: "master", match: "rbx")
```
#### Repository Settings
You can access a repository's settings via `Repository#settings`:
```
require 'travis'
Travis.access_token = "MY SECRET TOKEN"
settings = Travis::Repository.find('my/repo').settings
if settings.build_pushes?
settings.build_pushes = false
  settings.save
end
```
#### Build Environment Variables
You can access environment variables via `Repository#env_vars`:
```
require 'travis'
Travis.access_token = "MY SECRET TOKEN"
env_vars = Travis::Repository.find('my/repo').env_vars
env_vars['foo'] = 'bar'
env_vars.upsert('foo', 'foobar', public: true)
env_vars.each { |var| var.delete }
```
### Dealing with Sessions
Under the hood the session is where the fun is happening. Most methods on the constants and entities just wrap methods on your session, so you don't have to pass the session around all the time or even see it if you don't want to.
There are two levels of session methods, the higher level methods from the `Travis::Client::Methods` mixin, which are also available from `Travis`, `Travis::Pro` or any custom [Namespace](#using-namespaces).
```
require 'travis/client/session'
session = Travis::Client::Session.new
session.access_token = "secret_token"            # access token to use
session.api_endpoint = "http://localhost:3000/"  # api endpoint to talk to
session.github_auth("github_token")              # log in with a github token
session.repos(owner_name: 'travis-ci')           # all travis-ci/* projects
session.repo('travis-ci/travis.rb')              # this project
session.repo(409371)                             # same as the one above
session.build(4266036)                           # build with id 4266036
session.job(4266037)                             # job with id 4266037
session.artifact(42)                             # artifact with id 42
session.log(42)                                  # same as above
session.user                                     # the current user, if logged in
session.restart(session.build(4266036))          # restart some build
session.cancel(session.build(4266036))           # cancel some build
```
You can add these methods to any object responding to `session` via said mixin.
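For instance, here is a minimal sketch (the wrapper class is purely illustrative) that mixes the methods into an object exposing its own `session`:
```
require 'travis/client/session'

class MyTravisWrapper
  include Travis::Client::Methods
  attr_reader :session

  def initialize(session = Travis::Client::Session.new)
    @session = session
  end
end

wrapper = MyTravisWrapper.new
puts wrapper.repo('rails/rails').last_build.state
```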
Below this, there is a second API, close to the HTTP level:
```
require 'travis/client/session'
session = Travis::Client::Session.new
session.instrument do |description, block|
time = Time.now
block.call
puts "#{description} took #{Time.now - time} seconds"
end
session.connection = Faraday::Connection.new
session.get_raw('/repos/rails/rails') # => {"repo" => {"id" => 891, "slug" => "rails/rails", ...}}
session.get('/repos/rails/rails') # => {"repo" => #<Travis::Client::Repository: rails/rails>}
session.headers['Foo'] = 'Bar' # send a custom HTTP header with every request
rails = session.find_one(Travis::Client::Repository, 'rails/rails')
session.find_many(Travis::Client::Repository)    # repositories with the latest builds
session.find_one_or_many(Travis::Client::User)   # the current user (you could also use find_one here)
session.reload(rails)
session.reset(rails) # lazy reload
session.clear_cache    # empty cached attributes
session.clear_cache!   # empty identity map
```
### Listening for Events
You can use the `listen` method to listen for events on repositories, builds or jobs:
```
require 'travis'
rails = Travis::Repository.find("rails/rails")
sinatra = Travis::Repository.find("sinatra/sinatra")
Travis.listen(rails, sinatra) do |stream|
stream.on('build:started', 'build:finished') do |event|
# ie "rails/rails just passed"
puts "#{event.repository.slug} just #{event.build.state}"
  end
end
```
Current events are `build:created`, `build:started`, `build:finished`, `job:created`, `job:started`, `job:finished` and `job:log` (the last one only when subscribing to jobs explicitly). Not passing any arguments to `listen` will monitor the global stream.
### Using Namespaces
`Travis` and `Travis::Pro` are just two different namespaces for two different Travis sessions. A namespace is a Module, exposing the higher level [session methods](#dealing-with-sessions). It also has a dummy constant for every [entity](#entities), wrapping `find_one` (aliased to `find`) and `find_many` (aliased to `find_all`) for you, so you don't have to keep track of the session or hand in the entity class. You can easily create your own namespace:
```
require 'travis/client'
MyTravis = Travis::Client::Namespaces.new("http://localhost:3000")
MyTravis.access_token = "..."
MyTravis::Repository.find("foo/bar")
```
Since namespaces are Modules, you can also include them.
```
require 'travis/client'
class MyTravis
  include Travis::Client::Namespaces.new
end
MyTravis::Repository.find('rails/rails')
```
Installation
---
Make sure you have at least [Ruby](http://www.ruby-lang.org/en/downloads/) 1.9.3 (2.0.0 recommended) installed.
You can check your Ruby version by running `ruby -v`:
```
$ ruby -v
ruby 2.0.0p195 (2013-05-14 revision 40734) [x86_64-darwin12.3.0]
```
Then run:
```
$ gem install travis -v 1.8.2 --no-rdoc --no-ri
```
Now make sure everything is working:
```
$ travis version
1.8.2
```
See also [Note on Ubuntu](#ubuntu) below.
### Development Version
You can also install the development version via RubyGems:
```
$ gem install travis --pre
```
We automatically publish a new development version after every successful build.
### Updating your Ruby
If you have an outdated Ruby version, you should use your package system or a Ruby Installer to install a recent Ruby.
#### Mac OS X via Homebrew
Mac OSX prior to 10.9 ships with a very dated Ruby version. You can use [Homebrew](http://mxcl.github.io/homebrew/) to install a recent version:
```
$ brew install ruby
$ gem update --system
```
#### Windows
On Windows, we recommend using the [RubyInstaller](http://rubyinstaller.org/), which includes the latest version of Ruby.
#### Other Unix systems
On other Unix systems, like Linux, use your package system to install Ruby. Please check beforehand which package you actually want to install, as on some distributions `ruby` might still be 1.8.7 or older.
Debian:
```
$ sudo apt-get install ruby1.9.3 ruby1.9.3-dev ruby-switch
$ sudo ruby-switch --set ruby1.9.3
```
Ubuntu:
```
$ sudo apt-get install python-software-properties
$ sudo apt-add-repository ppa:brightbox/ruby-ng
$ sudo apt-get update
$ sudo apt-get install ruby2.1 ruby-switch
$ sudo ruby-switch --set ruby2.1
```
Fedora:
```
$ sudo yum install ruby ruby-devel
```
Arch Linux:
```
$ sudo pacman -S ruby
```
#### Ruby versioning tools
Alternatively, you can use a Ruby version management tool such as [rvm](https://rvm.io/rvm/install/), [rbenv](http://rbenv.org/) or [chruby](https://github.com/postmodern/chruby). This is only recommended if you need to run multiple versions of Ruby.
You can of course always compile Ruby from source, though then you are left with the hassle of keeping it up to date and making sure that everything is set up properly.
### Troubleshooting
#### Ubuntu
On certain versions of Ubuntu (e.g., 13.10), you need to install the corresponding `-dev` package in order to build the C extension on which the `travis` gem depends.
For the stock Ubuntu 13.10, run:
```
$ sudo apt-get install ruby1.9.1-dev
```
If you updated to Ruby 2.1 as shown above:
```
$ sudo apt-get install ruby2.1-dev
```
#### Mac OS X
If you start with a clean Mac OS X, you will have to install the XCode Command Line Tools, which are necessary for installing native extensions. You can do so via `xcode-select`:
```
$ xcode-select --install
```
Mac OS X 10.9.2 shipped with a slightly broken Ruby version. If you want to install the gem via the system Ruby and you get an error, you might have to run the following instead:
```
$ ARCHFLAGS=-Wno-error=unused-command-line-argument-hard-error-in-future gem install travis
```
#### Upgrading from travis-cli
If you have the old `travis-cli` gem installed, you should `gem uninstall travis-cli`, just to be sure, as it ships with an executable that is also named `travis`.
Version History
---
**1.8.0** (July 15, 2015)
* Fix listener for pusher changes on [travis-ci.org](https://travis-ci.org).
* Change `monitor` command to only monitor personal repositories if `common` channel is not available.
**1.7.7** (May 26, 2015)
* Fix `travis whatsup` for fresh Travis Enterprise installations.
**1.7.6** (April 08, 2015)
* Add support for "received" build state.
* Fix issue with archived logs.
* On version check, do not kill the process if a newer version has been released.
**1.7.5** (January 15, 2015)
* Add support for url..insteadOf
* Fix packaging error with 1.7.4, in which Code Deploy setup code was not included
**1.7.4** (November 12, 2014)
* Add `travis setup codedeploy`
**1.7.3** (November 10, 2014)
* Add `travis setup biicode`
* Add `travis env clear`
* Print error message if `travis login` is run for a GitHub account unknown to the Travis CI setup.
* Fix bug in S3 ACL settings.
* Make `travis console` work with newer pry versions.
**1.7.2** (September 17, 2014)
* Add `travis setup elasticbeanstalk`.
* Properly display educational accounts in `travis accounts`.
* Upgrade go version default for `travis init`.
* Fix SSL verification issue on OS X Yosemite and certain Linux setups.
* Be more forgiving with outdated API version (Enterprise).
* Better handling of multibyte characters in archived logs.
* Use more restrictive permissions for the config file.
**1.7.1** (August 9, 2014)
* Better error message when trying to encrypt a string that is too long.
* Fix Validation failed error using `travis sshkey --upload`.
**1.7.0** (August 5, 2014)
* Add `travis encrypt-file`.
* Add `--store-repo`/`-R` to repository commands to permanently store the slug for a repository.
* Announce repository slug when first detected, ask for confirmation in interactive mode.
* Have `travis repos` only print repository slugs in non-interactive mode.
* Add `travis/auto_login` and `travis/pro/auto_login` to the Ruby API for easy authentication.
* Add `--fingerprint` to `pubkey` command.
* Add `fingerprint` to `Repository#public_key`.
* Display better error messages for user errors (user data validation failing, etc).
* Have `travis sshkey --upload` check that the content is a private key.
* Make `travis sshkey --upload` prompt for and remove the pass phrase if the key is encrypted.
**1.6.17** (July 25, 2014)
* Add `travis sshkey` and corresponding Ruby API.
* Make desktop notifications work on Mac OS X 10.10.
**1.6.16** (July 19, 2014)
* Fix check for updates.
**1.6.15** (July 18, 2014)
* Add `travis env [list|add|set|copy]`.
* Add `Repository#env_vars`.
* Add `travis setup ghc`.
* Add `Log#delete_body`, `Job#delete_log` and `Build#delete_logs` to Ruby API.
* Add `--delete`, `--force` and `--no-stream` options to `travis logs`.
* Add `acl` option to `travis setup s3`.
* Add `--set` option to `travis settings`, support non-boolean values.
* Expose `maximum_number_of_builds` setting.
* Give GitHub OAuth token generated by `travis setup releases` a proper description.
* Proper handling for empty or broken config files.
* Reset terminal colors after `travis logs`.
**1.6.14** (June 17, 2014)
* Add `travis lint` command and Ruby API.
**1.6.13** (June 15, 2014)
* Added Deis and Hackage setup support.
**1.6.12** (June 12, 2014)
* Added artifacts setup support.
**1.6.11** (May 12, 2014)
* Added Cloud 66 and Ninefold setup support.
* Require typhoeus 0.6.8 and later.
**1.6.10** (April 24, 2014)
* Better CloudFoundry support
* Update Faraday to version 0.9.
**1.6.9** (April 9, 2014)
* Add `--limit` to `travis requests`.
* Add `--committer` option to `travis history`.
* Avoid error when running `travis login` with a revoked token.
* Add `travis setup releases`.
* Desktop notifications via libnotify are now transient (disappear on their own if the user is active).
* Update Rubinius version generated by `travis init ruby`.
* Improve setup when running `travis` executable that has not been installed via RubyGems.
**1.6.8** (March 12, 2014)
* Display annotations in `travis show`.
* Add `travis requests` to see build requests Travis CI has received.
* Improve annotation support in the Ruby library.
* Add `Repository#requests` to Ruby library.
* Fix behavior for missing entities.
**1.6.7** (January 30, 2014)
* Properly display OS for projects tested on multiple operating systems.
* Better error message when using an invalid access token.
* Fix desktop notifications using libnotify (Linux/BSD).
* `travis branches` preserves branch name when displaying Pull Request builds.
* Add `travis setup modulus`.
* Ruby library now supports build annotations.
* Document plugin support.
* Do not have the client raise on unknown API entities.
* Do not try and resolve missing commit data (as it will lead to a 404).
**1.6.6** (December 16, 2013)
* Fix `travis login --pro` for new users.
**1.6.5** (December 16, 2013)
* Add `travis settings` command for accessing repository settings.
* Add `travis setup opsworks`.
* Add `travis console -x` to run a line of Ruby code with a valid session.
* Add authentication and streaming example for Ruby library.
* Add Ruby API for dealing with repository settings.
* Improve `travis login` and `travis login --auto`. Add ability to load GitHub token from Keychain.
* Only ask for GitHub two-factor auth token if two-factor auth is actually required.
* Fix access right check for `travis caches`.
**1.6.4** (December 16, 2013)
Release was yanked. See 1.6.5 for changes.
**1.6.3** (November 27, 2013)
* Fix OS detection on Windows.
* Add `travis repos` command.
* Add `travis setup cloudfiles`.
* Add `travis setup divshot`.
* Add `--date` flag to `travis history`.
* Add upload and target directory options to `travis setup s3`.
* Include commit message in desktop notifications.
* Check if Notification Center or Growl is actually running before sending out notifications.
* Better documentation for desktop notifications.
* Improved handling of pusher errors when streaming.
* Add ability to load archived logs from different host.
* Use proper API endpoint for streaming logs, as old endpoint has been removed.
* Make tests run on Rubinius 2.x.
**1.6.2** (November 8, 2013)
* Remove worker support, as API endpoints have been removed from Travis CI.
* Improve OS detection.
* Fix `travis report`.
* Fix issues with new payload for permissions endpoint (used by `travis monitor`).
* Improve default logic for whether `travis monitor` should display desktop notifications.
* Make desktop notifications work on Mac OSX 10.9.
* Increase and improve debug output.
* Only load pry if console command is actually invoked, not when it is loaded (for instance by `travis help`).
**1.6.1** (November 4, 2013)
* Update autocompletion when updating travis gem.
**1.6.0** (November 4, 2013)
* Add `travis cache` to list and delete directory caches.
* Add `travis report` to give a report of the system, endpoint, configuration and last exception.
* Add `Cache` entity.
* Keep `travis monitor` running on API errors.
**1.5.8** (October 24, 2013)
* Fix bug in completion code that stopped command line client from running.
**1.5.7** (October 24, 2013)
* Improve logic for automatically figuring out a repository slug based on the tracked git remote.
* Display error if argument passed to `-r` is not a full slug.
* Do not automatically install shell completion on gem installation.
* Add Travis CI mascot as logo to desktop notifications.
* Improve OSX and Growl notifications.
* Require user to be logged in for all commands issued against an enterprise installation.
* Improve error message when not logged in for enterprise installations.
* Fix API endpoint detection for enterprise installations.
* Make streaming API, and thus the `monitor` and `logs` command, work with enterprise installations.
* Add `--build`, `--push` and `--pull` flags to monitor command to allow filtering events.
**1.5.6** (October 22, 2013)
* Add `travis setup appfog` and `travis setup s3`.
* Use new API for fetching a single branch for Repository#branch. This also circumvents the 25 branches limit.
* Start publishing gem prereleases after successful builds.
* Have `travis logs` display first job for a build if a build number is given (or for the last build if called without arguments)
* Add support for branch names to `travis logs`.
* Add support for just using the job suffix with `travis logs`.
* Improve error message if job cannot be found/identified by `travis logs`.
* Add `travis logout` for removing access token.
* Improve error message for commands that require user to be logged in.
* Add `account` method for fetching a single account to `Travis::Client::Methods`.
* Allow creating account objects for any account, not just those the user is part of. Add `Account#member?` to check for membership.
* Add `Account#repositories` to load all repos for a given account.
* Add `Repository#owner_name` and `Repository#owner` to load the account owning a repository.
* Add `Repository#member?` to check if the current user is a member of a repository.
* Add `Build#pull_request_number` and `Build#pull_request_title`.
* Remove trailing new lines from string passed to `travis encrypt`.
* Fix double `provider` entry generated by `travis setup engineyard`.
* Only load auto-completions if available.
* Fix and improve growl notifications.
* Fix GitHub host detection `travis login --auto`.
* API endpoint may now include a path all the requests will be prefixed with.
* Allow overriding SSL options in Ruby client.
* Add `--insecure` to turn off SSL verification.
* Add `--enterprise`/`-X` option for Travis Enterprise integration.
**1.5.5** (October 2, 2013)
* Add `travis setup pypi`
* Add `travis setup npm`
* When loading accounts, set all flag to true.
* Fix bug where session.config would be nil instead of a hash.
**1.5.4** (September 7, 2013)
* Make `travis monitor` send out desktop notifications.
* List available templates on `travis init --help`.
* List available services on `travis setup --help`.
* Make `travis setup cloudfoundry` detect the target automatically if possible
* Have `travis setup` ask if you want to deploy/release from current branch if not on master.
* Give autocompletion on zsh [superpowers](http://ascii.io/a/5139).
* Add `Repository#github_language`.
* `travis init` now is smarter when it comes to detecting the template to use (ie, "CoffeeScript" will be mapped to "node_js")
* Running `travis init` without a language will now use `Repository#github_language` as default language rather than ruby.
* Make `travis login` and `travis login --auto` work with GitHub Enterprise.
* Make `travis login` work with two factor authentication.
* Add `travis endpoint --github`.
* Make `travis accounts` handle accounts without name better.
**1.5.3** (August 22, 2013)
* Fix issues on Windows.
* Improve `travis setup rubygems` (automatically figure out API token for newer RubyGems versions, offer to only release tagged commits, allow changing gem name).
* Add command descriptions to help pages.
* Smarter check if travis gem is outdated.
* Better error messages for non-existing build/job numbers.
**1.5.2** (August 18, 2013)
* Add `travis cancel`.
* Add `Build#cancel` and `Job#cancel` to Ruby API.
* Add `travis setup cloudfoundry`.
* Add `--set-default` and `--drop-default` to `travis endpoint`.
* Make it possible to configure cli via env variables (`$TRAVIS_TOKEN`, `$TRAVIS_ENDPOINT` and `$TRAVIS_CONFIG_PATH`).
* Improve `travis setup cloudcontrol`.
**1.5.1** (August 15, 2013)
* Add `travis setup engineyard`.
* Add `travis setup cloudcontrol`.
* Silence warnings when running `travis help` or `travis console`.
**1.5.0** (August 7, 2013)
* Add `travis setup rubygems`.
* Add `travis accounts`.
* Add `travis monitor`.
* Make `travis logs` stream.
* Add Broadcast entity.
* Add streaming body API.
* Add event listener API.
* Add simple plugin system (will load any ~/.travis/*/init.rb when running cli).
* Implement shell completion for bash and zsh.
* Be smarter about warnings when running `travis encrypt`.
* Improve documentation.
**1.4.0** (July 26, 2013)
* Add `travis init`
* Improve install documentation, especially for people from outside the Ruby community
* Improve error message on an expired token
* Add Account entity to library
* Switch to Typhoeus as default HTTP adapter
* Fix tests for forks
**1.3.1** (July 21, 2013)
* Add `travis whatsup --my-repos`, which corresponds to the "My Repositories" tab in the web interface
* It is now recommended to use Ruby 2.0, any Ruby version prior to 1.9.3 will lead to a warning being displayed. Disable with `--skip-version-check`.
* Add `--override` and `--append` to `travis encrypt`, make default behavior depend on key.
* Add shorthand for `travis encrypt --add`.
**1.3.0** (July 20, 2013)
* Add `travis setup [heroku|openshift|nodejitsu|sauce_connect]`
* Add `travis branches`
* Add Repository#branch and Repository#branches
* Improve `--help`
* Improve error message when calling `travis logs` with a matrix build number
* Check if travis gem is up to date from time to time (CLI only, not when used as library)
**1.2.8** (July 19, 2013)
* Make pubkey print out key in ssh encoding, add --pem flag for old format
* Fix more encoding issues
* Fix edge cases that broke history view
**1.2.7** (July 15, 2013)
* Add pubkey command
* Remove all whitespace from an encrypted string
**v1.2.6** (July 7, 2013)
* Improve output of history command
**v1.2.5** (July 7, 2013)
* Fix encoding issue
**v1.2.4** (July 7, 2013)
* Allow empty commit message
**v1.2.3** (June 27, 2013)
* Fix encoding issue
* Will detect github repo from other remotes besides origin
* Add clear_cache(!) to Travis::Namespace
**v1.2.2** (May 24, 2013)
* Fixed `travis disable`.
* Fix edge cases around `travis encrypt`.
**v1.2.1** (May 24, 2013)
* Builds with high build numbers are properly aligned when running `travis history`.
* Don't lock against a specific backports version, makes it easier to use it as a Ruby library.
* Fix encoding issues.
**v1.2.0** (February 22, 2013)
* add `--adapter` to API endpoints
* added branch to `show`
* fix bug where colors were not used if stdin is a pipe
* make `encrypt` options `--split` and `--add` work together properly
* better handling of missing or empty `.travis.yml` when running `encrypt --add`
* fix broken example code
* no longer require network connection to automatically detect repository slug
* add worker support to the ruby library
* adjust artifacts/logs code to upstream api changes
**v1.1.3** (January 26, 2013)
* use persistent HTTP connections (performance for commands with example api requests)
* include round trip time in debug output
**v1.1.2** (January 24, 2013)
* `token` command
* no longer wrap $stdin in delegator (caused bug on some Linux systems)
* correctly detect when running on Windows, even on JRuby
**v1.1.1** (January 22, 2013)
* Make pry a runtime dependency rather than a development dependency.
**v1.1.0** (January 21, 2013)
* New commands: `console`, `status`, `show`, `logs`, `history`, `restart`, `sync`, `enable`, `disable`, `open` and `whatsup`.
* `--debug` option for all API commands.
* `--split` option for `encrypt`.
* Fix `--add` option for `encrypt` (was naming key `secret` instead of `secure`).
* First class representation for builds, commits and jobs in the Ruby library.
* Print warning when running "encrypt owner/project data", as it's not supported by the new client.
* Improved documentation.
**v1.0.3** (January 15, 2013)
* Fix `-r slug` for repository commands. (#3)
**v1.0.2** (January 14, 2013)
* Only bundle CA certs needed to verify Travis CI and GitHub domains.
* Make tests pass on Windows.
**v1.0.1** (January 14, 2013)
* Improve `encrypt --add` behavior.
**v1.0.0** (January 14, 2013)
* First public release.
* Improved documentation.
**v1.0.0pre2** (January 14, 2013)
* Added Windows support.
* Suggestion to run `travis login` will add `--org` if needed.
**v1.0.0pre** (January 13, 2013)
* Initial public prerelease.
dyndnsc 0.6.1 documentation
Welcome to Dyndnsc’s documentation![¶](#welcome-to-dyndnsc-s-documentation)
===
User Guide[¶](#user-guide)
---
This part of the documentation, which is mostly prose, begins with some background information about Dyndnsc, then focuses on step-by-step instructions for getting the most out of Dyndnsc.
### Introduction[¶](#introduction)
#### What is Dyndnsc?[¶](#what-is-dyndnsc)
It’s a [dynamic DNS client](https://en.wikipedia.org/wiki/Dynamic_DNS).
It can detect your IP address in a variety of ways and update DNS records automatically.
#### Goals[¶](#goals)
Provide:
* an easy to use command line tool
* an API for developers
* support for a variety of ways to detect IP addresses
* support for a variety of ways to update DNS records
### Installation[¶](#installation)
This part of the documentation covers the installation of Dyndnsc.
The first step to using any software package is getting it properly installed.
#### Pip / pipsi[¶](#pip-pipsi)
Installing Dyndnsc is simple with [pip](http://www.pip-installer.org/):
```
pip install dyndnsc
```
Or, if you prefer a more encapsulated way, use [pipsi](https://github.com/mitsuhiko/pipsi/):
```
pipsi install dyndnsc
```
#### Docker[¶](#docker)
[Docker](https://www.docker.com) images are provided for the following architectures.
x86:
```
docker pull infothrill/dyndnsc-x86-alpine
```
See also <https://hub.docker.com/r/infothrill/dyndnsc-x86-alpine/>
armhf:
```
docker pull infothrill/dyndnsc-armhf-alpine
```
See also <https://hub.docker.com/r/infothrill/dyndnsc-armhf-alpine/>
#### Get the Code[¶](#get-the-code)
Dyndnsc is developed on GitHub, where the code is
[available](https://github.com/infothrill/python-dyndnsc).
You can clone the public repository:
```
git clone https://github.com/infothrill/python-dyndnsc.git
```
Once you have a copy of the source, you can embed it in your Python package,
or install it into your site-packages easily:
```
python setup.py install
```
### Quickstart[¶](#quickstart)
Eager to get started? This page gives a good introduction to getting started with Dyndnsc. It assumes you already have Dyndnsc installed. If you do not,
head over to the [Installation](index.html#install) section.
First, make sure that:
* Dyndnsc is [installed](index.html#install)
* Dyndnsc is [up-to-date](index.html#updates)
Let’s get started with some simple examples.
#### Command line usage[¶](#command-line-usage)
Dyndnsc exposes all options through the command line interface; however, we recommend using a configuration file.
Here is an example to update an IPv4 record on nsupdate.info with web based IP autodetection:
```
$ dyndnsc --updater-dyndns2 \
--updater-dyndns2-hostname test.nsupdate.info \
--updater-dyndns2-userid test.nsupdate.info \
--updater-dyndns2-password XXXXXXXX \
--updater-dyndns2-url https://nsupdate.info/nic/update \
--detector-webcheck4 \
--detector-webcheck4-url https://ipv4.nsupdate.info/myip \
--detector-webcheck4-parser plain
```
Updating an IPv6 address when using [Miredo](http://www.remlab.net/miredo/):
```
$ dyndnsc --updater-dyndns2 \
--updater-dyndns2-hostname test.nsupdate.info \
--updater-dyndns2-userid test.nsupdate.info \
--updater-dyndns2-password XXXXXXXX \
--detector-teredo
```
Updating an IPv6 record on nsupdate.info with interface based IP detection:
```
$ dyndnsc --updater-dyndns2 \
--updater-dyndns2-hostname test.nsupdate.info \
--updater-dyndns2-userid test.nsupdate.info \
--updater-dyndns2-password XXXXXXXX \
--detector-socket \
--detector-socket-family INET6
```
#### Update protocols[¶](#update-protocols)
Dyndnsc supports several different methods for updating dynamic DNS services:
* [dnsimple](https://developer.dnsimple.com/) (note: requires the python package [dnsimple-dyndns](https://pypi.python.org/pypi/dnsimple-dyndns) to be installed)
* [duckdns](https://www.duckdns.org/)
* [dyndns2](https://help.dyn.com/remote-access-api/)
* [freedns.afraid.org](https://freedns.afraid.org/)
A lot of services on the internet offer some form of compatibility with one of these protocols, so check against this list. Some of these external services are pre-configured for Dyndnsc as a preset, see the section on presets.
Each supported update protocol can be parametrized on the dyndnsc command line using long options starting with `--updater-` followed by the name of the protocol:
```
$ dyndnsc --updater-afraid
$ dyndnsc --updater-dnsimple
$ dyndnsc --updater-duckdns
$ dyndnsc --updater-dyndns2
```
Each of these update protocols supports specific parameters, which might differ from each other. Each of these additional parameters can be specified on the command line by appending them to the long option described above.
Example to specify token for updater duckdns:
```
$ dyndnsc --updater-duckdns-token 847c0ffb-39bd-326f-b971-bfb3d4e36d7b
```
#### Detecting the IP[¶](#detecting-the-ip)
*Dyndnsc* ships a couple of “detectors” which are capable of finding an IP address through different means.
Detectors may need additional parameters to work properly. Additional parameters can be specified on the command line similarly to the update protocols.
```
$ dyndnsc --detector-iface \
--detector-iface-iface en0 \
--detector-iface-family INET
$ dyndnsc --detector-webcheck4 \
--detector-webcheck4-url http://ipv4.nsupdate.info/myip \
--detector-webcheck4-parser plain
```
Some detectors require additional python dependencies:
* *iface*, *teredo* detectors require [netifaces](https://pypi.python.org/pypi/netifaces) to be installed
#### Presets[¶](#presets)
*Dyndnsc* comes with a list of pre-configured presets. To see all configured presets, you can run
```
$ dyndnsc --list-presets
```
Presets are used to shorten the amount of configuration needed by providing preconfigured parameters. For convenience, Dyndnsc ships some built-in presets but this list can be extended by yourself by adding them to the configuration file. Each preset has a section in the ini file called ‘[preset:NAME]’.
See the section on the configuration file to see how to use presets.
Note: presets can currently only be used in a configuration file; there is no support yet for selecting a preset from the command line.
#### Configuration file[¶](#configuration-file)
Create a config file test.cfg with this content (no spaces at the left!):
```
[dyndnsc]
configs = test_ipv4, test_ipv6
[test_ipv4]
use_preset = nsupdate.info:ipv4
updater-hostname = test.nsupdate.info
updater-userid = test.nsupdate.info
updater-password = xxxxxxxx
[test_ipv6]
use_preset = nsupdate.info:ipv6
updater-hostname = test.nsupdate.info
updater-userid = test.nsupdate.info
updater-password = xxxxxxxx
```
Now invoke dyndnsc and give this file as configuration:
```
$ dyndnsc --config test.cfg
```
#### Custom services[¶](#custom-services)
If you are using a dyndns2 compatible service and need to specify the update URL explicitly, you can add the argument `--updater-dyndns2-url`:
```
$ dyndnsc --updater-dyndns2 \
--updater-dyndns2-hostname=test.dyndns.com \
--updater-dyndns2-userid=bob \
--updater-dyndns2-password=fub4r \
--updater-dyndns2-url=https://dyndns.example.com/nic/update
```
#### Plugins[¶](#plugins)
*Dyndnsc* supports plugins which can be notified when a dynamic DNS entry was changed. Currently, only two plugins exist:
* [dyndnsc-growl](https://pypi.python.org/pypi/dyndnsc-growl)
* [dyndnsc-macosnotify](https://pypi.python.org/pypi/dyndnsc-macosnotify)
The plugins that are installed and available in your environment are listed in the command line help. Each plugin command line option starts with `--with-`.
### Frequently Asked Questions[¶](#frequently-asked-questions)
#### Python 3 Support?[¶](#python-3-support)
Yes! In fact, we only support Python3 at this point.
Here’s a list of Python platforms that are officially supported:
* Python 3.6
* Python 3.7
* Python 3.8
* Python 3.9
#### Is service xyz supported?[¶](#is-service-xyz-supported)
To find out whether a certain dynamic DNS service is supported by Dyndnsc, you can either try to identify the protocol involved and see if it is supported by looking at the output of `dyndnsc --help`, or check whether the service in question is already listed in the presets (`dyndnsc --list-presets`).
#### I get a wrong IPv6 address, why?[¶](#i-get-a-wrong-ipv6-address-why)
If you use the “webcheck6” detector and your system has IPv6 privacy extensions, it will report the temporary IPv6 address that you use to connect to the outside world. You most likely want the less private but stable global IPv6 address in DNS instead; you can determine it with the “socket” detector.
#### What about error handling of network issues?[¶](#what-about-error-handling-of-network-issues)
“Hard” errors on the transport level (TCP timeouts, socket errors…) are not handled and will fail the client. In daemon or loop mode, exceptions are caught to keep the client alive (and retries will be issued at a later time).
### Community Updates[¶](#community-updates)
#### Tracking development[¶](#tracking-development)
The best way to track the development of Dyndnsc is through
[the GitHub repo](https://github.com/infothrill/python-dyndnsc).
#### Release history[¶](#release-history)
##### 0.6.1 (April 2nd 2021)[¶](#april-2nd-2021)
* improved: dnswanip error reporting now includes dns information
* improved: fix for bug [#144](https://github.com/infothrill/python-dyndnsc/issues/144)
* improved: added tests for console script
##### 0.6.0 (February 21st 2021)[¶](#february-21st-2021)
* changed (**INCOMPATIBLE**): dropped support for python 2.7 and python 3.4, 3.5
* added: more presets
* improved: add support for python 3.8, 3.9
* added: docker build automation
* added: –log-json command line option, useful when running in docker
##### 0.5.1 (July 7th 2019)[¶](#july-7th-2019)
* improved: pin pytest version to [version smaller than 5.0.0](https://docs.pytest.org/en/latest/py27-py34-deprecation.html)
##### 0.5.0 (June 25th 2019)[¶](#june-25th-2019)
* improved: simplified notification plugin and externalized them using entry_points
* added: WAN IP detection through DNS (detector ‘dnswanip’)
* improved: replaced built-in daemon code with [daemonocle](https://pypi.python.org/pypi/daemonocle)
* switched to [pytest](https://pytest.org) for running tests
* changed (**INCOMPATIBLE**): dropped support for python 2.6 and python 3.3
* added: new command line option -v to control verbosity
* improved: infinite loop and daemon stability, diagnostics #57
* improved: updated list of external urls for IP discovery
* improved: install documentation updated
* improved: add many missing docstrings and fixed many code smells
* improved: run [flake8](http://flake8.pycqa.org/) code quality checks in CI
* improved: run [check-manifest](https://pypi.python.org/pypi/check-manifest) in CI
* improved: run [safety](https://pypi.python.org/pypi/safety) in CI
##### 0.4.4 (December 27th 2017)[¶](#december-27th-2017)
* fixed: fixed wheel dependency on python 2.6 and 3.3
* fixed: pep8 related changes, doc fixes
##### 0.4.3 (June 26th 2017)[¶](#june-26th-2017)
* fixed: nsupdate URLs
* fixed: several minor cosmetic issues, mostly testing related
##### 0.4.2 (March 8th 2015)[¶](#march-8th-2015)
* added: support for <https://www.duckdns.org>
* fixed: user configuration keys now override built-in presets
##### 0.4.1 (February 16th 2015)[¶](#february-16th-2015)
* bugfixes
##### 0.4.0 (February 15th 2015)[¶](#february-15th-2015)
* changed (**INCOMPATIBLE**): command line arguments have been drastically adapted to fit different update protocols and detectors
* added: config file support
* added: running against multiple update services in one go using config file
* improved: for python < 3.2, install more dependencies to get SNI support
* improved: the DNS resolution automatically resolves using the same address family (ipv4/A or ipv6/AAAA or any) as the detector configured
* improved: it is now possible to specify arbitrary service URLs for the different updater protocols.
* fixed: naming conventions
* fixed: http connection robustness (i.e. catch more errors and handle them as being transient)
* changed: dependency on netifaces was removed, but if installed, the functionality remains in place
* a bunch of pep8, docstring and documentation updates
##### 0.3.4 (January 3rd 2014)[¶](#january-3rd-2014)
* added: initial support for dnsimple.com through
[dnsimple-dyndns](https://pypi.python.org/pypi/dnsimple-dyndns)
* added: plugin based desktop notification (growl and OS X notification center)
* changed: for python3.3+, use stdlib ‘ipaddress’ instead of ‘IPy’
* improved: dyndns2 update is now allowed to timeout
* improved: freedns.afraid.org robustness
* improved: webcheck now has an http timeout
* improved: naming conventions in code
* added: initial documentation using sphinx
##### 0.3.3 (December 2nd 2013)[¶](#december-2nd-2013)
* added: experimental support for <http://freedns.afraid.org>
* added: detecting ipv6 addresses using ‘webcheck6’ or ‘webcheck46’
* fixed: long outstanding state bugs in detector base class
* improved: input validation in Iface detection
* improved: support pytest conventions
##### 0.3.2 (November 16th 2013)[¶](#november-16th-2013)
* added: command line option –debug to explicitly increase loglevel
* fixed potential race issues in detector base class
* fixed: several typos, test structure, naming conventions, default loglevel
* changed: dynamic importing of detector code
##### 0.3.1 (November 2013)[¶](#november-2013)
* added: support for <https://nsupdate.info>
* fixed: automatic installation of ‘requests’ with setuptools dependencies
* added: more URL sources for ‘webcheck’ IP detection
* improved: switched optparse to argparse for future-proofing
* fixed: logging initialization warnings
* improved: ship tests with source tarball
* improved: use reStructuredText rather than markdown
##### 0.3 (October 2013)[¶](#october-2013)
* moved project to <https://github.com/infothrill/python-dyndnsc>
* added continuous integration tests using <http://travis-ci.org>
* added unittests
* dyndnsc is now a package rather than a single file module
* added more generic observer/subject pattern that can be used for desktop notifications
* removed growl notification
* switched all http related code to the “requests” library
* added <http://www.noip.com>
* removed dyndns.majimoto.net
* dropped support for python <= 2.5 and added support for python 3.2+
##### 0.2.1 (February 2013)[¶](#february-2013)
* moved code to git
* minimal PEP8 changes and code restructuring
* provide a makefile to get dependencies using buildout
##### 0.2.0 (February 2010)[¶](#february-2010)
* updated IANA reserved IP address space
* Added new IP Detector: running an external command
* Minimal syntax changes based on the 2to3 tool, but remaining compatible with python 2.x
##### 0.1.2 (July 2009)[¶](#july-2009)
* Added a couple of documentation files to the source distribution
##### 0.1.1 (September 2008)[¶](#september-2008)
* Focus: initial public release
### License[¶](#license)
*Dyndnsc* is released under terms of [MIT License](http://www.opensource.org/licenses/MIT). This license was chosen explicitly to allow inclusion of this software in proprietary and closed systems.
Copyright (c) 2008-2015 <NAME>
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the “Software”), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
API Documentation[¶](#api-documentation)
---
If you are looking for information on a specific function, class or method,
this part of the documentation is for you.
### API Documentation[¶](#module-dyndnsc)
This part of the documentation should cover all the relevant interfaces of dyndnsc.
#### Main Interface[¶](#main-interface)
*class* `dyndnsc.``DynDnsClient`(*updater=None*, *detector=None*, *plugins=None*, *detect_interval=300*)[[source]](_modules/dyndnsc/core.html#DynDnsClient)[¶](#dyndnsc.DynDnsClient)
This class represents a client to the dynamic dns service.
Initialize.
Parameters
**detect_interval** – amount of time in seconds that can elapse between checks
`check`()[[source]](_modules/dyndnsc/core.html#DynDnsClient.check)[¶](#dyndnsc.DynDnsClient.check)
Check if the detector changed and call sync() accordingly.
If the sleep time has elapsed, this method will see if the attached detector has had a state change and call sync() accordingly.
`has_state_changed`()[[source]](_modules/dyndnsc/core.html#DynDnsClient.has_state_changed)[¶](#dyndnsc.DynDnsClient.has_state_changed)
Detect changes in offline detector and real DNS value.
Detect a change either in the offline detector or a difference between the real DNS value and what the online detector last got.
This is efficient, since it only generates minimal dns traffic for online detectors and no traffic at all for offline detectors.
Return type boolean
`needs_check`()[[source]](_modules/dyndnsc/core.html#DynDnsClient.needs_check)[¶](#dyndnsc.DynDnsClient.needs_check)
Check if enough time has elapsed to perform a check().
If this time has elapsed, a state change check through has_state_changed() should be performed and eventually a sync().
Return type boolean
`needs_sync`()[[source]](_modules/dyndnsc/core.html#DynDnsClient.needs_sync)[¶](#dyndnsc.DynDnsClient.needs_sync)
Check if enough time has elapsed to perform a sync().
A call to sync() should be performed every now and then, no matter what has_state_changed() says. This is really just a safety thing to enforce consistency in case the state gets messed up.
Return type boolean
`sync`()[[source]](_modules/dyndnsc/core.html#DynDnsClient.sync)[¶](#dyndnsc.DynDnsClient.sync)
Synchronize the registered IP with the detected IP (if needed).
This can be expensive, mostly depending on the detector, but also because updating the dynamic ip in itself is costly. Therefore, this method should usually only be called on startup or when the state changes.
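A minimal usage sketch (not part of the original documentation) showing how these methods fit together; `my_updater` and `my_detector` stand in for whatever updater and detector instances your configuration builds:
```
import time

from dyndnsc import DynDnsClient

# my_updater and my_detector are placeholders -- construct them per your setup.
client = DynDnsClient(updater=my_updater, detector=my_detector, detect_interval=300)

client.sync()  # establish a consistent state on startup
while True:
    if client.needs_check():
        client.check()      # runs sync() internally if the detector state changed
    elif client.needs_sync():
        client.sync()       # periodic safety sync
    time.sleep(10)
```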
#### IP Updaters[¶](#ip-updaters)
##### Afraid[¶](#module-dyndnsc.updater.afraid)
Functionality for interacting with a service compatible with <https://freedns.afraid.org/>.
##### Duckdns[¶](#module-dyndnsc.updater.duckdns)
Module containing the logic for updating DNS records using the duckdns protocol.
From the duckdns.org website:
https://{DOMAIN}/update?domains={DOMAINLIST}&token={TOKEN}&ip={IP}
where:
* DOMAIN is the service domain
* DOMAINLIST is either a single domain or a comma-separated list of domains
* TOKEN is the API token for authentication/authorization
* IP is either the IP or blank for auto-detection
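As a rough illustration only (not part of the dyndnsc codebase), the documented URL template could be exercised by hand with the `requests` library; the service domain, domain list and token below are placeholders you must supply:
```
import requests

def duckdns_update(service_domain, domainlist, token, ip=""):
    # ip="" asks the service to auto-detect the caller's address
    url = "https://{0}/update".format(service_domain)
    params = {"domains": domainlist, "token": token, "ip": ip}
    return requests.get(url, params=params, timeout=10).text
```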
##### Dyndns2[¶](#module-dyndnsc.updater.dyndns2)
Module providing functionality to interact with dyndns2 compatible services.
#### IP Detectors[¶](#ip-detectors)
##### Command[¶](#module-dyndnsc.detector.command)
Module containing logic for command based detectors.
*class* `dyndnsc.detector.command.``IPDetector_Command`(*command=''*, **args*, ***kwargs*)[[source]](_modules/dyndnsc/detector/command.html#IPDetector_Command)[¶](#dyndnsc.detector.command.IPDetector_Command)
IPDetector to detect IP address executing shell command/script.
Initialize.
Parameters
**command** – string shell command that writes IP address to STDOUT
`__init__`(*command=''*, **args*, ***kwargs*)[[source]](_modules/dyndnsc/detector/command.html#IPDetector_Command.__init__)[¶](#dyndnsc.detector.command.IPDetector_Command.__init__)
Initialize.
Parameters
**command** – string shell command that writes IP address to STDOUT
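For example (a sketch only; the shell command shown is just one common way to print a public IP to STDOUT):
```
from dyndnsc.detector.command import IPDetector_Command

detector = IPDetector_Command(
    command="dig +short myip.opendns.com @resolver1.opendns.com"
)
```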
##### DNS WAN IP[¶](#module-dyndnsc.detector.dnswanip)
Module containing logic for DNS WAN IP detection.
See also <https://www.cyberciti.biz/faq/how-to-find-my-public-ip-address-from-command-line-on-a-linux/>
*class* `dyndnsc.detector.dnswanip.``IPDetector_DnsWanIp`(*family=None*, **args*, ***kwargs*)[[source]](_modules/dyndnsc/detector/dnswanip.html#IPDetector_DnsWanIp)[¶](#dyndnsc.detector.dnswanip.IPDetector_DnsWanIp)
Detect the internet visible IP address using publicly available DNS infrastructure.
Initialize.
Parameters
**family** – IP address family (default: ‘’ (ANY), also possible: ‘INET’, ‘INET6’)
`__init__`(*family=None*, **args*, ***kwargs*)[[source]](_modules/dyndnsc/detector/dnswanip.html#IPDetector_DnsWanIp.__init__)[¶](#dyndnsc.detector.dnswanip.IPDetector_DnsWanIp.__init__)
Initialize.
Parameters
**family** – IP address family (default: ‘’ (ANY), also possible: ‘INET’, ‘INET6’)
##### Interface[¶](#module-dyndnsc.detector.iface)
Module providing IP detection functionality based on netifaces.
*class* `dyndnsc.detector.iface.``IPDetector_Iface`(*iface=None*, *netmask=None*, *family=None*, **args*, ***kwargs*)[[source]](_modules/dyndnsc/detector/iface.html#IPDetector_Iface)[¶](#dyndnsc.detector.iface.IPDetector_Iface)
IPDetector to detect an IP address assigned to a local interface.
This is roughly equivalent to using ifconfig or ipconfig.
Initialize.
Parameters
* **iface** – name of interface
* **family** – IP address family (default: INET, possible: INET6)
* **netmask** – netmask to be matched if multiple IPs on interface (default: none (match all); example for teredo: "2001:0000::/32")
`__init__`(*iface=None*, *netmask=None*, *family=None*, **args*, ***kwargs*)[[source]](_modules/dyndnsc/detector/iface.html#IPDetector_Iface.__init__)[¶](#dyndnsc.detector.iface.IPDetector_Iface.__init__)
Initialize.
Parameters
* **iface** – name of interface
* **family** – IP address family (default: INET, possible: INET6)
* **netmask** – netmask to be matched if multiple IPs on interface (default: none (match all); example for teredo: "2001:0000::/32")
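A short sketch using the parameters above, matching only addresses inside the teredo prefix on a given interface (the interface name is an assumption):
```
from dyndnsc.detector.iface import IPDetector_Iface

detector = IPDetector_Iface(iface="tun0", family="INET6", netmask="2001:0000::/32")
```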
##### Socket[¶](#module-dyndnsc.detector.socket_ip)
Module containing logic for socket based detectors.
*class* `dyndnsc.detector.socket_ip.``IPDetector_Socket`(*family=None*, **args*, ***kwargs*)[[source]](_modules/dyndnsc/detector/socket_ip.html#IPDetector_Socket)[¶](#dyndnsc.detector.socket_ip.IPDetector_Socket)
Detect IPs used by the system to communicate with outside world.
Initialize.
Parameters
**family** – IP address family (default: INET, possible: INET6)
`__init__`(*family=None*, **args*, ***kwargs*)[[source]](_modules/dyndnsc/detector/socket_ip.html#IPDetector_Socket.__init__)[¶](#dyndnsc.detector.socket_ip.IPDetector_Socket.__init__)
Initialize.
Parameters
**family** – IP address family (default: INET, possible: INET6)
##### Teredo[¶](#module-dyndnsc.detector.teredo)
Module containing logic for teredo based detectors.
*class* `dyndnsc.detector.teredo.``IPDetector_Teredo`(*iface='tun0'*, *netmask='2001:0000::/32'*, **args*, ***kwargs*)[[source]](_modules/dyndnsc/detector/teredo.html#IPDetector_Teredo)[¶](#dyndnsc.detector.teredo.IPDetector_Teredo)
IPDetector to detect a Teredo ipv6 address of a local interface.
Bits 0 to 31 of the ipv6 address are set to the Teredo prefix (normally 2001:0000::/32).
This detector only checks the first 16 bits!
See <http://en.wikipedia.org/wiki/Teredo_tunneling> for more information on Teredo.
Inherits IPDetector_Iface and sets default options only.
Initialize.
`__init__`(*iface='tun0'*, *netmask='2001:0000::/32'*, **args*, ***kwargs*)[[source]](_modules/dyndnsc/detector/teredo.html#IPDetector_Teredo.__init__)[¶](#dyndnsc.detector.teredo.IPDetector_Teredo.__init__)
Initialize.
##### Web check[¶](#module-dyndnsc.detector.webcheck)
Module containing logic for webcheck based detectors.
*class* `dyndnsc.detector.webcheck.``IPDetectorWebCheck`(**args*, ***kwargs*)[[source]](_modules/dyndnsc/detector/webcheck.html#IPDetectorWebCheck)[¶](#dyndnsc.detector.webcheck.IPDetectorWebCheck)
Class to detect an IPv4 address as seen by an online web site.
Return parsable output containing the IP address.
Note
This detection mechanism requires ipv4 connectivity, otherwise it will simply not detect the IP address.
Initialize.
`__init__`(**args*, ***kwargs*)[[source]](_modules/dyndnsc/detector/webcheck.html#IPDetectorWebCheck.__init__)[¶](#dyndnsc.detector.webcheck.IPDetectorWebCheck.__init__)
Initialize.
Contributor Guide[¶](#contributor-guide)
---
If you want to contribute to the project, this part of the documentation is for you.
### Contributing[¶](#contributing)
#### Basic method to contribute a change[¶](#basic-method-to-contribute-a-change)
Dyndnsc is under active development, and contributions are more than welcome!
1. Check for open issues or open a fresh issue to start a discussion around a bug on the [issue tracker](https://github.com/infothrill/python-dyndnsc/issues).
2. Fork [the repository](https://github.com/infothrill/python-dyndnsc) and start making your changes to a new branch.
3. Write a test which shows that the bug was fixed.
4. Send a pull request and bug the maintainer until it gets merged and published. :)
Make sure to add yourself to [AUTHORS](https://github.com/infothrill/python-dyndnsc/blob/master/AUTHORS).
#### Idioms to keep in mind[¶](#idioms-to-keep-in-mind)
* keep amount of external dependencies low, i.e. if it can be done using the standard library, do it using the standard library
* do not prefer specific operating systems, i.e. even if we love Linux, we shall not make other suffer from our personal choice
* write unittests
Also, keep these [**PEP 20**](https://www.python.org/dev/peps/pep-0020) idioms in mind:
1. Beautiful is better than ugly.
2. Explicit is better than implicit.
3. Simple is better than complex.
4. Complex is better than complicated.
5. Readability counts.
Indices and tables[¶](#indices-and-tables)
===
* [Index](genindex.html)
* [Module Index](py-modindex.html)
* [Search Page](search.html) |
TwoPhaseInd | cran | R | Package ‘TwoPhaseInd’
October 12, 2022
Type Package
Title Estimate Gene-Treatment Interaction Exploiting Randomization
Version 1.1.2
Author <NAME> [aut, cre],
<NAME> [aut]
Maintainer <NAME> <<EMAIL>>
Description Estimation of gene-treatment interactions in randomized clinical trials exploiting gene-treatment independence. Methods used in the package refer to <NAME>, <NAME>, and <NAME> (2009) Biometrics <doi:10.1111/j.1541-0420.2008.01046.x>.
License GPL (>= 2)
LazyLoad no
NeedsCompilation yes
Imports survival
Repository CRAN
Date/Publication 2022-02-17 00:50:11 UTC
R topics documented:
aco1arm
aco2arm
acoarm
acodata
caseonly
char2num
mele
remove_missingdata
remove_rarevariants
spmle
whiBioMarker
aco1arm A function to estimate parameters in augmented case-only designs,
the genotype is ascertained for a random subcohort from the active
treatment arm or the placebo arm.
Description
This function estimates parameters of proportional hazards model with gene-treatment interaction.
It employs case-cohort estimation incorporating the case-only estimators. The method was published in Dai et al. (2016) Biometrics.
Usage
aco1arm(data, svtime, event, treatment, BaselineMarker, subcohort, esttype = 1,
augment = 1, extra)
Arguments
data A data frame used to access the following data.
svtime A character string of column name, corresponds to one column of the data frame,
which is used to store the failure time variable (numeric).
event A character string of column name, corresponds to one column of the data frame,
which is used to store the indicator of failure event (1: failure, 0: not failure).
treatment A character string of column name, corresponds to one column of the data frame,
which is used to store the binary vector of treatment variable (1: treatment, 0:
placebo).
BaselineMarker A character string of column name, corresponds to one column of the data frame,
which is used to store a vector of biomarker.
subcohort A character string of column name, corresponds to one column of the data frame,
which is used to store the indicator of sub-cohort (1: sample belong to the sub-
cohort, 0: not belong to the sub-cohort)
esttype The option of estimation methods (1: Self-Prentice estimator, 0: Lin-Ying estimator).
augment The indicator of whether subcohort was drawn from the active treatment arm
(augment=1) or from the placebo arm (augment=0).
extra A string vector of column name(s), corresponds to one or more column(s) of the data frame, which is/are used to store the extra baseline covariate(s) to be adjusted for in addition to treatment and biomarker.
Details
The function returns estimates of the proportional hazards model, and variance of the estimates.
The method was published in Dai et al. (2016) Biometrics.
Value
A list of estimates and variance of the estimates.
Estimate A data frame of beta(Estimated parameter), stder(Standard error),and pVal(p
value)
Covariance covariance data frame of genotype,treatment,and interaction
Author(s)
<NAME>
References
<NAME>, <NAME>,<NAME>, and <NAME>. Augmented case-only designs for randomized clinical trials with failure time endpoints. Biometrics, DOI: 10.1111/biom.12392, 2016.
See Also
aco2arm
Examples
## Load the example data
data(acodata)
## Augmented data in the active arm
rfit1 <- aco1arm(data=acodata,
svtime="vacc1_evinf",
event="f_evinf",
treatment="f_treat",
BaselineMarker="fcgr2a.3",
subcohort="subcoh",
esttype=1,
augment=1,
extra=c("f_agele30","f_hsv_2","f_ad5gt18","f_crcm","any_drug",
"num_male_part_cat","uias","uras"))
rfit1
## Augmented data in the placebo arm
rfit2 <- aco1arm(data=acodata,
svtime="vacc1_evinf",
event="f_evinf",
treatment="f_treat",
BaselineMarker="fcgr2a.3",
subcohort="subcoh",
esttype=1,
augment=0,
extra=c("f_agele30","f_hsv_2","f_ad5gt18","f_crcm",
"any_drug","num_male_part_cat","uias","uras"))
rfit2
aco2arm A function to estimate parameters in Cox proportional hazards model
using augmented case-only designs, the genotype is ascertained for a
random subcohort from both the active treatment arm and the placebo
arm (case-cohort sampling) or a case-control sample in both arms.
Description
This function estimates parameters of proportional hazards model with gene-treatment interaction.
It employs case-cohort estimation incorporating the case-only estimators. The method was published in Dai et al. (2015) Biometrics.
Usage
aco2arm(data, svtime, event, treatment, BaselineMarker, subcohort=NULL,
esttype = NULL, weight=NULL, extra=NULL)
Arguments
data A data frame used to access the following data.
svtime A character string of column name, corresponds to one column of the data frame,
which is used to store the failure time variable (numeric).
event A character string of column name, corresponds to one column of the data frame,
which is used to store the indicator of failure event (1: failure, 0: not failure).
treatment A character string of column name, corresponds to one column of the data frame,
which is used to store the binary vector of treatment variable (1: treatment, 0:
placebo).
BaselineMarker A character string of column name, corresponds to one column of the data frame,
which is used to store a vector of biomarker.
subcohort A character string of column name, corresponds to one column of the data frame,
which is used to store the indicator of sub-cohort in the case-cohort sampling
(1: sample belong to the sub-cohort, 0: not belong to the sub-cohort). In case-
control sampling, this variable is set to be NULL.
esttype The option of estimation methods (1: Self-Prentice estimator, 0: Lin-Ying estimator).
weight If the genotype data are obtained through case-control sampling, weight is a
vector of sampling weights (inverse of sampling probability) corresponding to
rows of data. If the genotype data are obtained through case-cohort sampling,
weight is NULL. If a vector of weights have been supplied by user, then esttype
is automatically set to 0: Lin-Ying estimator.
extra A string vector of column name(s), corresponds to one or more column(s) of the data frame, which is/are used to store the extra baseline covariate(s) to be adjusted for in addition to treatment and biomarker.
Details
The function returns estimates of the proportional hazards model, and variance of the estimates.
The method was published in Dai et al. (2016) Biometrics.
Value
A list of estimates and variance of the estimates.
Estimate A data frame of beta(Estimated parameter), stder(Standard error),and pVal(p
value)
Covariance covariance data frame of genotype,treatment,and interaction
Author(s)
<NAME>
References
<NAME>, <NAME>,<NAME>, and <NAME>. Augmented case-only designs for randomized clinical trials with failure time endpoints. Biometrics, DOI: 10.1111/biom.12392, 2016.
See Also
aco1arm
Examples
## Load the example data
data(acodata)
## Case-cohort + case-only estimators
rfit1 <- aco2arm(data=acodata,
svtime="vacc1_evinf",
event="f_evinf",
treatment="f_treat",
BaselineMarker="fcgr2a.3",
subcohort="subcoh",
esttype=1,
weight=NULL,
extra=c("f_agele30","f_hsv_2","f_ad5gt18","f_crcm","any_drug",
"num_male_part_cat","uias","uras"))
rfit1
acoarm A function to estimate parameters in Cox proportional hazard models
by augmented case-only designs for randomized clinical trials with
failure time endpoints.
Description
This function estimates parameters of proportional hazards models with gene-treatment interactions. It employs classical case-cohort estimation methods, incorporating the case-only estimators.
The method was published in Dai et al. (2016) Biometrics.
Usage
acoarm(data, svtime, event, treatment, BaselineMarker,subcohort, esttype = 1,
augment = 1, weight=NULL, extra = NULL)
Arguments
data A data frame used to access the following data.
svtime A character string of column name, corresponds to one column of the data frame,
which is used to store the failure time variable (numeric).
event A character string of column name, corresponds to one column of the data frame,
which is used to store the indicator of failure event (1: failure, 0: not failure).
treatment A character string of column name, corresponds to one column of the data frame,
which is used to store the binary vector of treatment variable (1: treatment, 0:
placebo).
BaselineMarker A character string of column name, corresponds to one column of the data frame,
which is used to store a vector of baseline biomarker that is under investigation
for interaction with treatment. The BaselineMarker variable is missing for those
who are not sampled in the case-cohort.
subcohort A character string of column name, corresponds to one column of the data frame,
which is used to store the indicator of sub-cohort (1: sample belong to the sub-
cohort, 0: not belong to the sub-cohort)
esttype The option of estimation methods (1: Self-Prentice estimator, 0: Lin-Ying estimator).
augment The indicator of whether subcohort was drawn from the placebo arm (augment=0), from the active treatment arm (augment=1), or from both arms (augment=2).
weight If the genotype data are obtained through case-control sampling, weight is a
vector of sampling weights (inverse of sampling probability) corresponding to
rows of data. If the genotype data are obtained through case-cohort sampling,
weight is NULL. If a vector of weights have been supplied by user, then esttype
is automatically set to 0: Lin-Ying estimator.
extra A string vector of column name(s), corresponds to one or more column(s) of the data frame, which is/are used to store the extra baseline covariate(s) to be adjusted for in addition to treatment and biomarker.
Details
The function returns point estimates and standard error estimates of parameters in the proportional
hazards model. The method was published in Dai et al. (2015) Biometrics.
Value
beta Estimated parameter
stder Estimated standard error of parameter estimates
pVal p value
Author(s)
<NAME>
References
<NAME>, <NAME>,<NAME>, and <NAME>. Augmented case-only designs for randomized clinical trials with failure time endpoints. Biometrics, DOI: 10.1111/biom.12392, 2016.
Examples
## Load the example data
data(acodata)
## ACO in placebo arm
rfit0 <- acoarm(data=acodata,
svtime="vacc1_evinf",
event="f_evinf",
treatment="f_treat",
BaselineMarker="fcgr2a.3",
subcohort="subcoh",
esttype=1,
augment=0,
weight=NULL,
extra=c("f_agele30","f_hsv_2","f_ad5gt18","f_crcm",
"any_drug","num_male_part_cat","uias","uras"))
rfit0
## ACO in active arm
rfit1 <- acoarm(data=acodata,
svtime="vacc1_evinf",
event="f_evinf",
treatment="f_treat",
BaselineMarker="fcgr2a.3",
subcohort="subcoh",
esttype=1,
augment=1,
weight=NULL,
extra=c("f_agele30","f_hsv_2","f_ad5gt18","f_crcm",
"any_drug","num_male_part_cat","uias","uras"))
rfit1
## ACO in both arms
rfit2 <- acoarm(data=acodata,
svtime="vacc1_evinf",
event="f_evinf",
treatment="f_treat",
BaselineMarker="fcgr2a.3",
subcohort="subcoh",
esttype=1,
augment=2,
weight=NULL,
extra=c("f_agele30","f_hsv_2","f_ad5gt18","f_crcm",
"any_drug","num_male_part_cat","uias","uras"))
rfit2
acodata A dataset from the STEP trial to study the interactions between gene
and vaccine on HIV infection
Description
A dataset from the STEP trial to study the interactions between gene and vaccine on HIV infection
Usage
data("acodata")
Format
A data frame with 907 observations on the following 14 variables.
vacc1_evinf the time to HIV infection, a numeric vector
f_evinf the indicator variable for HIV infection, a numeric vector
subcoh the indicator of whether the participant was selected into the sub-cohort for genotyping, a
logical vector
ptid participant identifier, a numeric vector
f_treat vaccine assignment variable, a numeric vector
fcgr2a.3 the genotype of Fcr receptor FcrRIIIa, the biomarker of interest here, a numeric vector
f_agele30 a numeric vector
f_hsv_2 a numeric vector
f_ad5gt18 a numeric vector
f_crcm a numeric vector
any_drug a numeric vector
num_male_part_cat a numeric vector
uias a numeric vector
uras a numeric vector
Details
A dataset from the STEP trial to study the interactions between gene and vaccine on HIV infection
References
<NAME>, <NAME>, and <NAME> et al. Efficacy assessment of a cell-mediated immunity
HIV-1 vaccine (the Step Study): a double-blind, randomised, placebo-controlled, test-of-concept
trial. Lancet. 372(9653):1881-1893, 2008.
<NAME>, <NAME>, and S. Bu et al. Immunoglobulin genes and the acquisition of HIV infection in a randomized trial of recombinant adenovirus HIV vaccine. Virology, 441:70-74, 2013.
Examples
data(acodata)
## maybe str(acodata)
caseonly A function to deal with case-only designs
Description
This function estimates parameters of case-only designs.
Usage
caseonly(data, treatment, BaselineMarker, extra = NULL, fraction = 0.5)
Arguments
data A data frame used to access the following data.
treatment A character string of column name, corresponds to one column of the data frame,
which is used to store the binary vector of treatment variable (1: treatment, 0:
placebo).
BaselineMarker A character string of column name, corresponds to one column of the data frame,
which is used to store a vector of biomarker.
extra A string vector of column name(s), corresponds to one or more column(s) of the data frame, which is/are used to store the extra baseline covariate(s) to be included in case-only regression. Note that extra covariates are not needed unless the interactions of treatment and extra covariates are of interest.
fraction The randomization fraction of active treatment assignment.
Details
This function estimates parameters of case-only designs. It estimates two parameters: "treatment effect when baselineMarker=0" and "treatment + baselineMarker interaction".
Value
For each parameter, it returns:
beta Estimated parameter
stder Standard error
pVal p value
Author(s)
<NAME>
References
<NAME>, <NAME>, and <NAME>. Case-only methods for competing risks models with application
to assessing differential vaccine efficacy by viral and host genetics. Biometrics, 15(1):196-203,
2014.
Examples
#form the data
data(acodata)
cdata=acodata[acodata[,2]==1,]
cfit=caseonly(data=cdata,
treatment="f_treat",
BaselineMarker="fcgr2a.3",
extra=c("f_agele30","f_hsv_2","f_ad5gt18","f_crcm",
"any_drug","num_male_part_cat","uias","uras"))
cfit
char2num A function used in acoarm to transform a categorical variable to integers
Description
Transform category data to integers 0..levels(data)-1. The numeric variable can then be used in acoarm models.
Usage
char2num(data)
Arguments
data data is a dataframe composed of categorical variables.
Details
The function transforms a categorical variable to integers.
Value
A data frame of transformed values. For each column, each category is transformed to an integer,
from 0 to levels(data[,column])-1.
Author(s)
<NAME>
Examples
## Load the example data
data(acodata)
result <- char2num(acodata[, "fcgr2a.3"])
mele function to compute the maximum estimated likelihood estimator
Description
This function computes the maximum estimated likelihood estimator (MELE) of regression parameters, which assess treatment-biomarker interactions in studies with two-phase sampling in randomized clinical trials. The function has an option to incorporate the independence between a
randomized treatment and the baseline markers.
Usage
mele(data, response, treatment, BaselineMarker, extra = NULL, phase,
ind = TRUE, maxit=2000)
Arguments
data A data frame used to access the following data. Each row contains the response
and predictors of a study participant. All variables are numerical.
response A character string of column name, corresponds to one column of the data frame,
which is used to store a numeric vector of response. The response variable
should be coded as 1 for cases and 0 for controls.
treatment A character string of column name, corresponds to one column of the data frame,
which is used to store a binary vector of the treatment . The treatment variable
should be coded as 1 for treatment and 0 for placebo.
BaselineMarker A character string of column name, corresponds to one column of the data frame,
which is used to store a vector of biomarker that is assessed for interaction with
the treatment. The BaselineMarker variable is missing for those who are not
sampled in the second phase.
extra A string vector of column name(s), corresponds to one or more column(s) of the
data frame, which are used to store the extra covariate(s) to be adjusted for in
addition to treatment and biomarker. All extra variables are missing for those
who are not sampled in the second phase.
phase A character string of column name, correspond to one column of the data frame,
which is used to store the indicator of two-phase sampling (1: not being sampled
for measuring biomarker; 2: being sampled for measuring biomarker).
ind A logical flag. TRUE indicates incorporating the independence between the
randomized treatment and the baseline markers.
maxit An integer giving the maximal number of iterations.
Details
The function returns estimates, standard errors, and p values for MELE of a regression model for
treatment-biomarker interaction studies with two-phase sampling in randomized trials, response ~
treatment + biomarker + treatment*biomarker + other covariates. Treatment and response are available for all the samples, while baseline biomarker data are available for a subset of samples. The
mele can incorporate the independence between the treatment and baseline biomarkers ascertained
in the phase-two sample.
Value
beta Estimated parameter
stder Standard error
pVal p value
Author(s)
<NAME>
References
<NAME>, <NAME>, and <NAME>. Semiparametric estimation exploiting covariate independence in two-phase randomized trials. Biometrics, 65(1):178-187, 2009.
See Also
spmle
Examples
## Load the example data
data(whiBioMarker)
## Here is an example of MELE with exploiting independent and with confounding factors:
melIndExtra <- mele(data=whiBioMarker, ## dataset
response="stroke",## response variable
treatment="hrtdisp",## treatment variable
BaselineMarker="papbl",## environment variable
extra=c(
"age" ## age
, "dias" ## diastolic BP
, "hyp" ## hypertension
, "syst" ## systolic BP
, "diabtrt" ## diabetes
, "lmsepi" ## physical activity
),## extra variable(s)
phase="phase",## phase indicator
ind=TRUE ## independent or non-independent
)
remove_missingdata A function used in acoarm to remove missing data
Description
It is used to remove samples which have NA/missing data in covariates.
Usage
remove_missingdata(data)
Arguments
data data is a dataframe.
Details
The function removes samples (by rows) which have NA/missing data.
Value
A list of the following components.
idx The indices of rows without missing values
data The dataframe without missing values
Author(s)
<NAME>
Examples
## Load the example data
data(acodata)
result <- remove_missingdata(acodata[, c("vacc1_evinf","fcgr2a.3")])
remove_rarevariants A function used in spmle and acoarm to remove rare-variant covariates
Description
It is used to remove rare-variant covariates, which can cause divergence problem.
Usage
remove_rarevariants(data, cutoff = 0.02)
Arguments
data A dataframe composed of covariates.
cutoff Proportion cutoff. If the data are composed of more than (1-cutoff) proportion of a constant value, we call it rare-variant.
Details
The function removes rare-variant covariates.
Value
A logical vector composed of True or False. True means a covariate is rare-variant.
Author(s)
<NAME>
Examples
## Load the example data
data(acodata)
result <- remove_rarevariants(acodata[, c("vacc1_evinf","fcgr2a.3")])
spmle function to compute the semiparametric maximum likelihood estimator
Description
This function computes the semiparametric maximum likelihood estimator (SPMLE) of regression
parameters, which assess treatment-biomarker interactions in studies with two-phase sampling in
randomized clinical trials. The function has an option to incorporate the independence between a
randomized treatment and the baseline markers.
Usage
spmle(data, response, treatment, BaselineMarker, extra = NULL, phase,
ind = TRUE, difffactor = 0.001, maxit = 1000)
Arguments
data A data frame used to access the following data. Each row contains the response
and predictors of a study participant. All variables are numerical.
response A character string of column name, corresponds to one column of the data frame,
which is used to store a numeric vector of response. The response variable
should be coded as 1 for cases and 0 for controls.
treatment A character string of column name, corresponds to one column of the data frame,
which is used to store a binary vector of the treatment . The treatment variable
should be coded as 1 for treatment and 0 for placebo.
BaselineMarker A character string of column name, corresponds to one column of the data frame,
which is used to store a vector of biomarker that is assessed for interaction with
the treatment. The BaselineMarker variable is missing for those who are not
sampled in the second phase.
extra A string vector of column name(s), corresponds to one or more column(s) of the
data frame, which are used to store the extra covariate(s) to be adjusted for in
addition to treatment and biomarker. All extra variables are missing for those
who are not sampled in the second phase.
phase A character string of column name, correspond to one column of the data frame,
which is used to store the indicator of two-phase sampling (1: not being sampled
for measuring biomarker; 2: being sampled for measuring biomarker).
ind A logical flag. TRUE indicates incorporating the independence between the
randomized treatment and the baseline markers.
difffactor A decimal number of the differentiation factor, used to control the step of numerical differentiation.
maxit An integer giving the maximal number of numerical differentiation iterations.
Details
The function returns estimates, standard errors, and p values for SPMLE for parameters of a regression model for treatment-biomarker interaction studies with two-phase sampling in randomized trials, response ~ treatment + biomarker + treatment*biomarker + other covariates. Treatment and response are available for all the samples, while biomarker data are available for a subset of samples. The SPMLE can incorporate the independence between the treatment and baseline biomarkers
ascertained in the phase-two sample. A profile likelihood based Newton-Raphson algorithm is used
to compute SPMLE.
Value
beta Estimated parameter
stder Standard error
pVal p value
Author(s)
<NAME>
References
<NAME>, <NAME>, and <NAME>. Semiparametric estimation exploiting covariate independence in two-phase randomized trials. Biometrics, 65(1):178-187, 2009.
See Also
mele
Examples
## Load the example data
data(whiBioMarker)
## Here is an example of SPMLE with exploiting independent and with confounding factors:
spmleIndExtra <- spmle(data=whiBioMarker, ## dataset
response="stroke", ## response variable
treatment="hrtdisp", ## treatment variable
BaselineMarker="papbl",## environment variable
extra=c(
"age" ## age
, "dias" ## diastolic BP
, "hyp" ## hypertension
, "syst" ## systolic BP
, "diabtrt" ## diabetes
, "lmsepi" ## physical activity
),## extra variable(s)
phase="phase", ## phase indicator
ind=TRUE ## independent or non-independent
)
whiBioMarker An example dataset to demonstrate the usage of MELE and SPMLE
Description
A dataset from a Women’s Health Initiative (WHI) hormone trial to study the interaction between
biomarker and hormone therapy on stroke.
Usage
data("whiBioMarker")
Format
A data frame consisting of 10 observations, with the following columns:
stroke a binary indicator vector of stroke; 1=has stroke
hrtdisp a binary indicator vector of treatment in the Estrogen Plus Progestin Trial; 1="Estrogen
Plus Progestin", 0="placebo"
papbl a numeric vector of Biomarker PAP (plasmin-antiplasmin complex) in logarithmic scale
(base 10)
age an integer vector of age
dias A binary indicator vector of Diastolic BP; 1="Yes"
hyp a vector of hypertension with levels Missing, No, Yes
syst an integer vector of Systolic BP
diabtrt A vector of Diabetes with levels: Missing, No, Yes
lmsepi A vector of episodes per week of moderate and strenuous recreational physical activity
of >= 20 minutes duration with levels 2 - <4 episodes per week, 4+ episodes per week,
Missing, No activity, Some activity
phase a numeric vector of phase; 1: phase 1, 2:phase 2
Details
It is a two-phase sampling example dataset adapted from Kooperberg et al. (2007) to demonstrate
the usage of MELE and SPMLE algorithms in Dai et al. (2009).
Source
<NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, K.
<NAME>, <NAME>, <NAME>, and <NAME>. Can biomarkers identify women at
increased stroke risk? the women’s health initiative hormone trials. PLoS clinical trials, 2(6):e28,
Jun 15 2007.
References
<NAME>, <NAME>, and <NAME>. Semiparametric estimation exploiting covariate independence in two-phase randomized trials. Biometrics, 65(1):178-187, 2009.
Examples
data(whiBioMarker)
str(whiBioMarker)
colnames(whiBioMarker) |
opencensus_absinthe | hex | Erlang | Opencensus.Absinthe
===
Extends [`Absinthe`](https://hexdocs.pm/absinthe/1.4.16/Absinthe.html) to automatically create `opencensus` spans. Designed to work with whatever is producing spans upstream, e.g. `Opencensus.Plug`.
Installation
---
Assuming you're using [`Absinthe.Plug`](https://hexdocs.pm/absinthe_plug/1.4.6/Absinthe.Plug.html):
Add `opencensus_absinthe` to your `deps` in `mix.exs`, using a tighter version constraint than:
```
{:absinthe_plug, ">= 0.0.0"},
{:opencensus_absinthe, ">= 0.0.0"},
```
Add a `:pipeline` to your [`Absinthe.Plug.opts/0`](https://hexdocs.pm/absinthe_plug/1.4.6/Absinthe.Plug.html#t:opts/0) to have it call
[`Opencensus.Absinthe.Plug.traced_pipeline/2`](Opencensus.Absinthe.Plug.html#traced_pipeline/2). If you're using `Phoenix.Router.forward/4`, for example:
```
forward(
path,
Absinthe.Plug,
# ... existing config ...
pipeline: {Opencensus.Absinthe.Plug, :traced_pipeline}
)
```
If you already have a `pipeline`, you can define your own and call both to insert their phases.
To work with `ApolloTracing`, for example:
```
def your_custom_pipeline(config, pipeline_opts \\ []) do
config
|> Absinthe.Plug.default_pipeline(pipeline_opts)
|> ApolloTracing.Pipeline.add_phases()
|> Opencensus.Absinthe.add_phases()
end
```
Worst case, you'll need to copy the code from the current `pipeline` target and add a call to
[`Opencensus.Absinthe.add_phases/1`](Opencensus.Absinthe.html#add_phases/1) as above.
Summary
===
[Functions](#functions)
---
[add_phases(pipeline)](#add_phases/1)
Add tracing phases to an existing pipeline for blueprint resolution.
[middleware(middleware, field, object)](#middleware/3)
Add tracing middleware for field resolution.
[Link to this section](#functions)
Functions
===
```
add_phases([Absinthe.Pipeline.t](https://hexdocs.pm/absinthe/1.4.16/Absinthe.Pipeline.html#t:t/0)()) :: [Absinthe.Pipeline.t](https://hexdocs.pm/absinthe/1.4.16/Absinthe.Pipeline.html#t:t/0)()
```
Add tracing phases to an existing pipeline for blueprint resolution.
```
pipeline =
Absinthe.Pipeline.for_document(schema, pipeline_opts)
|> Opencensus.Absinthe.add_phases()
```
```
middleware(
[[Absinthe.Middleware.spec](https://hexdocs.pm/absinthe/1.4.16/Absinthe.Middleware.html#t:spec/0)(), ...],
[Absinthe.Type.Field.t](https://hexdocs.pm/absinthe/1.4.16/Absinthe.Type.Field.html#t:t/0)(),
[Absinthe.Type.Object.t](https://hexdocs.pm/absinthe/1.4.16/Absinthe.Type.Object.html#t:t/0)()
) :: [[Absinthe.Middleware.spec](https://hexdocs.pm/absinthe/1.4.16/Absinthe.Middleware.html#t:spec/0)(), ...]
```
Add tracing middleware for field resolution.
Specifically, prepends [`Opencensus.Absinthe.Middleware`](Opencensus.Absinthe.Middleware.html) to the `middleware` chain if the field has `trace` or `absinthe_telemetry` set in its metadata, e.g.:
```
field :users, list_of(:user), meta: [trace: true] do
middleware(Middleware.Authorize, "superadmin")
resolve(&Resolvers.Account.all_users/2)
end
```
Opencensus.Absinthe.Middleware
===
[`Absinthe.Middleware`](https://hexdocs.pm/absinthe/1.4.16/Absinthe.Middleware.html) for field resolution tracing.
Opencensus.Absinthe.Plug
===
Modify your [`Absinthe.Plug`](https://hexdocs.pm/absinthe_plug/1.4.6/Absinthe.Plug.html) pipeline for [`Opencensus.Absinthe`](Opencensus.Absinthe.html):
Installation
---
Specify [`traced_pipeline/2`](#traced_pipeline/2) as your `pipeline` in your [`Absinthe.Plug.opts/0`](https://hexdocs.pm/absinthe_plug/1.4.6/Absinthe.Plug.html#t:opts/0), e.g. via
`Phoenix.Router.forward/4`:
```
forward "/graphql", Absinthe.Plug,
schema: MyApp.Schema,
pipeline: {Opencensus.Absinthe.Plug, :traced_pipeline}
```
**WARNING:** [`traced_pipeline/2`](#traced_pipeline/2) will be present *only* if [`Absinthe.Plug`](https://hexdocs.pm/absinthe_plug/1.4.6/Absinthe.Plug.html) is loaded.
Don't forget your `absinthe_plug` dependency in `mix.exs`!
Summary
===
[Functions](#functions)
---
[traced_pipeline(config, pipeline_opts \\ [])](#traced_pipeline/2)
Return the default pipeline with tracing phases.
[Link to this section](#functions)
Functions
===
Return the default pipeline with tracing phases.
See also:
* [`Absinthe.Pipeline.for_document/2`](https://hexdocs.pm/absinthe/1.4.16/Absinthe.Pipeline.html#for_document/2).
* [`Absinthe.Plug.default_pipeline/1`](https://hexdocs.pm/absinthe_plug/1.4.6/Absinthe.Plug.html#default_pipeline/1). |
jsonapi | readthedoc | Python | jsonapi 0.6.0 documentation
[jsonapi](index.html#document-index)
---
Welcome to json:api’s documentation![¶](#welcome-to-json-api-s-documentation)
===
jsonapi modules[¶](#jsonapi-modules)
---
### jsonapi Package[¶](#jsonapi-package)
#### [jsonapi](index.html#module-jsonapi) Package[¶](#id1)
JSON:API realization.
#### jsonapi.api Package[¶](#jsonapi-api-package)
#### jsonapi.auth Package[¶](#jsonapi-auth-package)
#### jsonapi.deserializer Package[¶](#jsonapi-deserializer-package)
#### jsonapi.resource Package[¶](#jsonapi-resource-package)
Installation[¶](#installation)
===
Requires: Django (1.5, 1.6); python (2.7, 3.3).
```
pip install jsonapi
```
Quickstart[¶](#quickstart)
===
Create resource for model, register it with api and use it within urls!
```
# resources.py
from jsonapi.api import API
from jsonapi.resource import Resource

api = API()

@api.register
class AuthorResource(Resource):
    class Meta:
        model = 'testapp.author'

# urls.py
from .resources import api

urlpatterns = patterns(
    '',
    url(r'^api', include(api.urls))
)
```
Notes[¶](#notes)
===
REST anti-patterns: <http://www.infoq.com/articles/rest-anti-patterns>
Features[¶](#features)
===
What makes a decent API Framework? These features:
> * + Pagination
> * Posting of data with validation
> * + Publishing of metadata along with querysets
> * + API discovery
> * Proper HTTP response handling
> * Caching
> * + Serialization
> * Throttling
> * + Authentication
> * Authorization/Permissions
Proper API frameworks also need:
> * Really good test coverage of their code
> * Decent performance
> * Documentation
> * An active community to advance and support the framework
Docs[¶](#docs)
===
> * Resource definition
> * Resource and Models discovery
> * Authentication
> * Authorization
Indices and tables[¶](#indices-and-tables)
===
* [*Index*](genindex.html)
* [*Module Index*](py-modindex.html)
* [*Search Page*](search.html) |
celery-redbeat | readthedoc | Python | Celery Redbeat documentation
Welcome to Celery Redbeat’s documentation![¶](#welcome-to-celery-redbeat-s-documentation)
===
Introduction[¶](#introduction)
---
[RedBeat](https://github.com/sibson/redbeat) is a
[Celery Beat Scheduler](http://celery.readthedocs.org/en/latest/userguide/periodic-tasks.html)
that stores the scheduled tasks and runtime metadata in [Redis](http://redis.io/).
### Why RedBeat?[¶](#why-redbeat)
1. Dynamic live task creation and modification, without lengthy downtime
2. Externally manage tasks from any language with Redis bindings
3. Shared data store; Beat isn't tied to a single drive or machine
4. Fast startup even with a large task count
5. Prevent accidentally running multiple Beat servers
### Getting Started[¶](#getting-started)
Install with pip:
```
pip install celery-redbeat
```
Configure RedBeat settings in your Celery configuration file:
```
redbeat_redis_url = "redis://localhost:6379/1"
```
Then specify the scheduler when running Celery Beat:
```
celery beat -S redbeat.RedBeatScheduler
```
RedBeat uses a distributed lock to prevent multiple instances running.
To disable this feature, set:
```
redbeat_lock_key = None
```
### Development[¶](#development)
RedBeat is available on [GitHub](https://github.com/sibson/redbeat)
Once you have the source you can run the tests with the following commands:
```
pip install -r requirements.dev.txt
py.test tests
```
You can also quickly fire up a sample Beat instance with:
```
celery beat --config exampleconf
```
Configuration[¶](#configuration)
---
You can add any of the following parameters to your Celery configuration
(see Celery 3.x compatible configuration value names below).
### `redbeat_redis_url`[¶](#redbeat-redis-url)
URL to redis server used to store the schedule, defaults to value of
[broker_url](http://docs.celeryproject.org/en/4.0/userguide/configuration.html#std:setting-broker_url).
### `redbeat_redis_use_ssl`[¶](#redbeat-redis-use-ssl)
Additional SSL options used when using the `rediss` scheme in
`redbeat_redis_url`, defaults to the values of [broker_use_ssl](http://docs.celeryproject.org/en/4.0/userguide/configuration.html#std:setting-broker_use_ssl).
### `redbeat_key_prefix`[¶](#redbeat-key-prefix)
A prefix for all keys created by RedBeat, defaults to `'redbeat'`.
### `redbeat_lock_key`[¶](#redbeat-lock-key)
Key used to ensure only a single beat instance runs at a time,
defaults to `'<redbeat_key_prefix>:lock'`.
### `redbeat_lock_timeout`[¶](#redbeat-lock-timeout)
Unless refreshed the lock will expire after this time, in seconds.
Defaults to five times of the default scheduler’s loop interval
(`300` seconds), so `1500` seconds (`25` minutes).
See the [beat_max_loop_interval](http://docs.celeryproject.org/en/4.0/userguide/configuration.html#std:setting-beat_max_loop_interval) Celery docs about for more information.
### Celery 3.x config names[¶](#celery-3-x-config-names)
Here are the old names of the configuration values for use with Celery 3.x.
| **Celery 4.x** | **Celery 3.x** |
| --- | --- |
| `redbeat_redis_url` | `REDBEAT_REDIS_URL` |
| `redbeat_redis_use_ssl` | `REDBEAT_REDIS_USE_SSL` |
| `redbeat_key_prefix` | `REDBEAT_KEY_PREFIX` |
| `redbeat_lock_key` | `REDBEAT_LOCK_KEY` |
| `redbeat_lock_timeout` | `REDBEAT_LOCK_TIMEOUT` |
### Sentinel support[¶](#sentinel-support)
The redis connection can use a Redis/Sentinel cluster. The configuration syntax is inspired from [celery-redis-sentinel](https://github.com/dealertrack/celery-redis-sentinel)
```
# celeryconfig.py
BROKER_URL = 'redis-sentinel://redis-sentinel:26379/0'
BROKER_TRANSPORT_OPTIONS = {
'sentinels': [('192.168.1.1', 26379),
('192.168.1.2', 26379),
('192.168.1.3', 26379)],
'password': '123',
'db': 0,
'service_name': 'master',
'socket_timeout': 0.1,
}
CELERY_RESULT_BACKEND = 'redis-sentinel://redis-sentinel:26379/1'
CELERY_RESULT_BACKEND_TRANSPORT_OPTIONS = BROKER_TRANSPORT_OPTIONS
```
Some notes about the configuration:
* note the use of `redis-sentinel` schema within the URL for broker and results backend.
* hostname and port are ignored within the actual URL. Sentinel uses the `sentinels` transport option to create a `Sentinel()` connection instead of the configuration URL.
* `password` is going to be used for Celery queue backend as well.
* `db` is optional and defaults to `0`.
If another backend is configured for the Celery queue, use
`REDBEAT_REDIS_URL` instead of `BROKER_URL` and
`REDBEAT_REDIS_OPTIONS` instead of `BROKER_TRANSPORT_OPTIONS` to avoid conflicting options. Here follows an example:
```
# celeryconfig.py
REDBEAT_REDIS_URL = 'redis-sentinel://redis-sentinel:26379/0'
REDBEAT_REDIS_OPTIONS = {
'sentinels': [('192.168.1.1', 26379),
('192.168.1.2', 26379),
('192.168.1.3', 26379)],
'password': '123',
'service_name': 'master',
'socket_timeout': 0.1,
'retry_period': 60,
}
```
If `retry_period` is given, retry the connection for `retry_period`
seconds. If not set, the retry mechanism is not triggered. If set to `-1`, retry infinitely.
Creating Tasks[¶](#creating-tasks)
---
You can use Celery's usual way to define static tasks or you can insert tasks directly into Redis. The config option is called [beat_schedule](http://docs.celeryproject.org/en/4.0/userguide/periodic-tasks.html#beat-entries), e.g.:
```
app.conf.beat_schedule = {
'add-every-30-seconds': {
'task': 'tasks.add',
'schedule': 30.0,
'args': (16, 16)
},
}
```
On Celery 3.x the config option was called [CELERYBEAT_SCHEDULE](http://docs.celeryproject.org/en/3.1/userguide/periodic-tasks.html#beat-entries).
The easiest way to insert tasks from Python is to use `RedBeatSchedulerEntry()`:
```
interval = celery.schedules.schedule(run_every=60)  # seconds
entry = RedBeatSchedulerEntry('task-name', 'tasks.some_task', interval, args=['arg1', 2])
entry.save()
```
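A crontab-based entry can be created the same way (a sketch mirroring the example above; the task name and path are hypothetical, and it assumes `RedBeatSchedulerEntry` is importable from the `redbeat` package):
```
from celery.schedules import crontab
from redbeat import RedBeatSchedulerEntry

schedule = crontab(minute='5', hour='0')  # run daily at 00:05
entry = RedBeatSchedulerEntry('daily-report', 'tasks.daily', schedule)
entry.save()
```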
Alternatively, you can insert directly into Redis by creating a new hash with a key of `<redbeat_key_prefix>:task-name`. It should contain a single key
`definition` which is a JSON blob with the task details.
### Interval[¶](#interval)
An interval task is defined with the JSON like:
```
{
"name" : "interval example",
"task" : "tasks.every_5_seconds",
"schedule": {
"__type__": "interval",
"every" : 5, # seconds
"relative": false, # optional
},
"args" : [ # optional
"param1",
"param2"
],
"kwargs" : { # optional
"max_targets" : 100
},
"enabled" : true, # optional
}
```
### Crontab[¶](#crontab)
A crontab task is defined with JSON like:
```
{
"name" : "crontab example",
"task" : "tasks.daily",
"schedule": {
"__type__": "crontab",
"minute" : "5", # optional, defaults to *
"hour" : "*", # optional, defaults to *
"day_of_week" : "monday", # optional, defaults to *
"day_of_month" : "*/7", # optional, defaults to *
"month_of_year" : "[1-12]", # optional, defaults to *
},
"args" : [ # optional
"param1",
"param2"
],
"kwargs" : { # optional
"max_targets" : 100
},
"enabled" : true, # optional
}
```
Design[¶](#design)
---
At its core RedBeat uses a Sorted Set to store the schedule as a priority queue.
It stores task details using a hash key with the task definition and metadata.
The schedule set contains the task keys sorted by the next scheduled run time.
For each tick of Beat
1. get list of due keys and due next tick
2. retrieve definitions and metadata for all keys from previous step
3. update task metadata and reschedule with next run time of task
4. call due tasks using async_apply
5. calculate time to sleep until start of next tick using remaining tasks
### Scheduling[¶](#scheduling)
Assuming your redbeat_key_prefix config value is set to 'redbeat:'
(default), you will also need to insert the new task into the schedule with:
```
zadd redbeat::schedule 0 new-task-name
```
The score is the next time the task should run formatted as a UNIX timestamp.
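For example, to compute such a score and insert the entry from Python (a sketch only; it uses the redis-py 3.x `zadd(name, mapping)` signature and assumes the default key prefix shown above):
```
import time
import redis

r = redis.Redis(host="localhost", port=6379, db=1)
next_run = time.time() + 300  # UNIX timestamp: run five minutes from now
r.zadd("redbeat::schedule", {"new-task-name": next_run})
```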
### Metadata[¶](#metadata)
Applications may also want to manipulate the task metadata to have more control over when a task runs.
The meta key contains a JSON blob as follows:
```
{
'last_run_at': {
'__type__': 'datetime',
'year': 2015,
'month': 12,
'day': 29,
'hour': 16,
'minute': 45,
'microsecond': 231
},
'total_run_count': 23
}
```
For instance, by default `last_run_at` corresponds to when Beat dispatched the task, but depending on queue latency it might not run immediately. The application could update the metadata with the actual run time, allowing intervals to be relative to the last execution rather than the last dispatch.
Indices and tables[¶](#indices-and-tables)
===
* [Index](genindex.html)
* [Module Index](py-modindex.html)
* [Search Page](search.html)
[Celery Redbeat](index.html#document-index)
===
### Navigation
Contents:
* [Introduction](index.html#document-intro)
* [Configuration](index.html#document-config)
* [Creating Tasks](index.html#document-tasks)
* [Design](index.html#document-design)
### Related Topics
* [Documentation overview](index.html#document-index)
### Quick search |
aldryn-newsblog | readthedoc | Unknown | Aldryn News Blog Documentation
Release 1.2.1
Divio AG
March 18, 2016
Contents
1.1 How-to guides
1.2 Reference
1.3 Using Aldryn News & Blog
1.4 Development & community
Aldryn News & Blog is an Aldryn-compatible news and weblog application for django CMS.
Content editors looking for documentation on how to use the editing interface should refer to our Using Aldryn News
& Blog section.
Django developers who want to learn more about django CMS, as well as how to install, configure and customize it for their own projects should refer to the How-to guides and Reference sections.
Aldryn News & Blog is intended to serve as a model of good practice for development of django CMS and Aldryn applications.
CHAPTER 1
Documentation 1.1 How-to guides These guides presuppose some familiarity with django CMS.
1.1.1 Installation You can install Aldryn News & Blog either on Aldryn or by hand into your own project.
Aldryn Platform Users To install the addon on Aldryn, all you need to do is follow this installation link on the Aldryn Marketplace and follow the instructions.
Manually you can:
1. Choose a site you want to install the add-on to from the dashboard.
2. Go to Apps > Install App
3. Click Install next to the News & Blog app.
4. Redeploy the site.
Manual Installation Requirements
• This project requires django CMS 3.0.12 or later.
PIP dependency If you’re installing into an existing django CMS project, you can run either:
pip install aldryn-newsblog or:
pip install aldryn-newsblog
or:
settings.py In your project’s settings.py make sure you have all of:
'aldryn_apphooks_config',
'aldryn_boilerplates',
'aldryn_categories',
'aldryn_common',
'aldryn_newsblog',
'aldryn_people',
'aldryn_reversion',
'aldryn_translation_tools',
'djangocms_text_ckeditor',
'easy_thumbnails',
'filer',
'parler',
'reversion',
'sortedm2m',
'taggit',
listed in INSTALLED_APPS, after ’cms’.
Additional Configuration Important: To get Aldryn News & Blog to work you need to add additional configurations:
1. Aldryn-Boilerplates You need to set additional configurations in settings.py for Aldryn Boilerplates.
To use the old templates, set ALDRYN_BOILERPLATE_NAME='legacy'. To use https://github.com/aldryn/aldryn-boilerplate-bootstrap3 (recommended), set ALDRYN_BOILERPLATE_NAME='bootstrap3'.
2. Django-Filer Aldryn News & Blog requires the use of the optional “subject location” processor from Django Filer for Easy Thumbnails. This requires setting the THUMBNAIL_PROCESSORS tuple in your project’s settings and explicitly omitting the default processor scale_and_crop and including the optional scale_and_crop_with_subject_location processor. For example:
THUMBNAIL_PROCESSORS = (
'easy_thumbnails.processors.colorspace',
'easy_thumbnails.processors.autocrop',
# 'easy_thumbnails.processors.scale_and_crop',
'filer.thumbnail_processors.scale_and_crop_with_subject_location',
'easy_thumbnails.processors.filters',
'easy_thumbnails.processors.background',
)
For more information on this optional processor, see the documentation for Django Filer.
Migrations
Now run python manage.py syncdb if you have not already done so, followed by python manage.py migrate to prepare the database for the new applications.
Note: Aldryn News & Blog supports both South and Django 1.7 migrations.
Server To finish the setup, you need to create a page, change to the Advanced Settings and choose NewsBlog within the Application drop-down.
You also need to set the Application configurations and publish the changes.
Finally you just need to restart your local development server and you are ready to go.
This process is described in more depth within Basic Usage.
1.1.2 Upgrading The CHANGELOG is maintained and updated within the repository.
Upgrade from 0.5.0 Note: If you’re upgrading from a version earlier than 0.5.0.
In this version 0.5.0, we’re deprecating all of the static placeholders and instead making them PlaceholderFields on the app_config object. This means that you’ll be able to have content that is different in each instance of the app, which was originally intended.
Because some may have already used these static placeholders, there will be a (very) short deprecation cycle. 0.5.0 will introduce the new PlaceholderFields whilst leaving the existing static placeholders intact. This will allow developers and content managers to move plugins from the old to the new.
Version 0.6.0 will remove the old static placeholders to avoid any further confusion.
Also note: The article’s PlaceholderField has also had its visible name updated. The old name will continue to be displayed in structure mode until the article is saved. Similarly, the new app_config-based PlaceholderFields will not actually appear in structure mode until the app_config is saved again.
1.1.3 Basic Usage Aldryn News & Blog works the way that many django-CMS-compatible applications do. It expects you to create a new page for it in django CMS, and then attach it to that page with an Apphook.
Getting started
1. if this is a new project, change the default example.com Site in the Admin to whatever is appropriate for your
setup (typically, localhost:8000)
2. in Admin > Aldryn_Newsblog, create a new Apphook config with the value aldryn_newsblog
3. create a new django CMS page; this page will be associated with the Aldryn News & Blog application
4. open the new page’s Advanced settings
5. from the Application choices menu select NewsBlog
6. save the page
7. restart the runserver (necessary because of the new Apphook)
Now you have a new page to which the Aldryn NewsBlog will publish content.
Let’s create a new weblog article, at Admin > Aldryn_NewsBlog. Fill in the fields as appropriate - most are self-
explanatory - and Save.
The page you created a moment ago should now list your new article.
1.2 Reference 1.2.1 Settings The flag ALDRYN_NEWSBLOG_SEARCH can be set to False in settings if indexing should be globally disabled for Aldryn News & Blog. When this is False, it overrides the setting in the application configuration on each apphook.
If Aldryn Search, Haystack, et al, are not installed, this setting does nothing.
The flag ALDRYN_NEWSBLOG_UPDATE_SEARCH_DATA_ON_SAVE, when set to True (default value), updates the article’s search_data field whenever the article is saved or a plugin is saved on the article’s content placeholder. Set to false to disable this feature.
1.2.2 Management Commands The management command: rebuild_article_search_data can be used to update the search_data in all articles for searching. It can accept a switch --language or the short-hand -l to specify the translations to process.
If this switch is not provided, all translations are indexed by default.
1.2.3 Plugins Related Articles Plugin The Related Articles plugin is appropriate for use only on the article detail view. If the plugin is placed on any other page, it will render an empty <div></div>.
1.3 Using Aldryn News & Blog
The documentation in these two sections focuses on the basics of content creation and editing using Aldryn News & Blog. It's suitable for non-technical and technical audiences alike.
1.4 Development & community
Aldryn News & Blog is an open-source project.
You don’t need to be an expert developer to make a valuable contribution - all you need is a little knowledge, and a willingness to follow the contribution guidelines.
1.4.1 Divio AG
Aldryn News & Blog is developed by Divio AG and released under a BSD licence.
Aldryn News & Blog is compatible with Divio’s Aldryn cloud-based django CMS hosting platform, and therefore with any standard django CMS installation. The additional requirements of an Aldryn application do not preclude its use with any other django CMS deployment.
Divio is committed to Aldryn News & Blog as a high-quality application that helps set standards for others in the Aldryn/django CMS ecosystem, and as a healthy open source project.
Divio maintains overall control of the Aldryn News & Blog repository.
1.4.2 Standards & policies
Aldryn News & Blog is a django CMS application, and shares much of django CMS's standards and policies.
These include:
• guidelines and policies for contributing to the project, including standards for code and documentation
• standards for managing the project’s development
• a code of conduct for community activity
Please familiarise yourself with this documentation if you'd like to contribute to Aldryn News & Blog.
1.4.3 Running tests
Aldryn News & Blog uses django CMS Helper to run its test suite.
Backend Tests
To run the tests, in the aldryn-newsblog directory:
virtualenv env                                   # create a virtual environment
source env/bin/activate                          # activate it
python setup.py install                          # install the package requirements
pip install -r test_requirements/django-1.7.txt  # install the test requirements
python test_settings.py                          # run the tests
You can run the tests against a different version of Django by using the appropriate value in django-x.x.txt when installing the test requirements.
Frontend Tests
Follow the instructions in the aldryn-boilerplate-bootstrap3 documentation and set up the environment as described in the Backend Tests section.
Instead of using python test_settings.py as described above, you need to execute python test_settings.py server to get a running local server. You can open the development server locally at http://127.0.0.1:8000/. The database, local.sqlite, is added within the root of this project.
You might want to delete the database from time to time to start with a fresh installation. Don’t forget to restart the server if you do so.
1.4.4 Documentation
You can run the documentation locally for testing:
1. navigate to the documentation: cd /docs
2. run make install to install requirements
3. run make run to run the server
Now you can open http://localhost:8000 in your favourite browser and start changing the rst files within docs/.
# Let's take back ground from the web giants!
Date: 2019-11-13
Free licences, far from replacing the law, are themselves legal objects. What strategies should be adopted within the new equilibria brought about by the free-licence revolution?
In their legal foundations, free licences question in more ways than one the practices of intellectual property that have been in place for centuries.
In less than thirty years, the free-software movement has brought about such a revolution, both technical and cultural, that the number of licences has kept growing, formalising in many different ways the relationships between authors, users and the work.
Title: Option Libre. Du bon usage des licences libres
Licences: LAL 1.3; GNU FDL; Creative Commons By-Sa
Price: 20 EUR
ISBN: 978-2-9539187-4-8
First edition: December 2011, Framasoft
Format: paperback – 155 x 200 mm
Weight: 464 g
Number of pages: 307 (+ xvi)
What strategy should be adopted when choosing a licence, and how can that choice be reconciled with a business model? What compatibilities exist between licences, and how should we think about the balance between existing, well-established rights (copyright, patents, etc.) and the permissiveness characteristic of the free movement? In this well-documented, objective and pedagogical book, <NAME> lays a sound and lasting foundation for discussion and exchange among all the actors of the free movement. Taking a genuine inventory of legal practice in this field, the author allows us to grasp it in detail and encourages us to refine it and carry it into other sectors.
After a presentation of the legal framework surrounding creations of the mind, originally conceived as a system in equilibrium, the book plunges the reader into the new paradigm of free licences. While avoiding a mere recitation of rules and norms, it methodically works through the legal notions underlying the new relationships between the actors. A thorough analysis of how this system has matured, together with a survey of good reflexes and the main pitfalls, finally makes it possible to undertake a practical and informed study of the most widely used free licences.
Active in this field for nearly ten years, he teaches intellectual property in several Master's programmes, works as a consultant with the firm <NAME> (Paris) and is completing a thesis on collaborative systems. Within Syntec Numérique he co-directed the writing of the Open Source guide entitled Réflexions sur la construction et le pilotage d'un projet Open Source, and he created and led the first Centre Juridique Open Source. At the European level, he organises the annual EOLE conferences (European Open Source & Free Software Law Event) and is a member of the European Legal Network (FSF Europe).
Very active in the free-software communities, he is a co-founder of <NAME> and of the SARD (Société d'Acceptation et de Répartition des Dons). In 2011 he created his own company, Inno³, which supports companies and public-sector actors in opening up their innovation policies towards shared and collaborative processes.
Package ‘specs’
October 14, 2022
Title Single-Equation Penalized Error-Correction Selector (SPECS)
Version 0.1.1
Maintainer <NAME> <<EMAIL>>
Description Implementation of SPECS, your favourite Single-Equation Penalized Error-Correction Selector developed in Smeekes and Wijler (2020) <arXiv:1809.08889>. SPECS provides a fully automated estimation procedure for large and potentially (co)integrated datasets. The dataset in levels is converted to a conditional error-correction model, either by the user or by means of the functions included in this package, and various specialised forms of penalized regression can be applied to the model. Automated options for initializing and selecting a sequence of penalties, as well as the construction of penalty weights via an initial estimator, are available. Moreover, the user may choose from a number of pre-specified deterministic configurations to further simplify the model building process.
Depends R (>= 3.5.0)
License GPL (>= 2)
Encoding UTF-8
LazyData true
RoxygenNote 7.1.0
LinkingTo Rcpp, RcppArmadillo
Imports Rcpp
NeedsCompilation yes
Author <NAME> [aut, cre],
<NAME> [aut] (<https://orcid.org/0000-0002-0157-639X>)
Repository CRAN
Date/Publication 2020-07-17 12:20:02 UTC
R topics documented:
specs
specs_opt
specs_tr
specs_tr_opt
Unempl_GT
specs SPECS
Description
This function estimates the Single-equation Penalized Error Correction Selector as described in Smeekes and Wijler (2020). The function takes a dependent variable y and a matrix of independent variables x as input, and transforms it to a conditional error correction model. This model is estimated by means of penalized regression, involving L1-penalty on individual coefficients and a potential L2-penalty on the coefficients of the lagged levels in the model, see Smeekes and Wijler (2020) for details.
Usage
specs(
y,
x,
p = 1,
deterministics = c("constant", "trend", "both", "none"),
ADL = FALSE,
weights = c("ridge", "ols", "none"),
k_delta = 1,
k_pi = 1,
lambda_g = NULL,
lambda_i = NULL,
thresh = 1e-04,
max_iter_delta = 1e+05,
max_iter_pi = 1e+05,
max_iter_gamma = 1e+05
)
Arguments
y A vector containing the dependent variable in levels.
x A matrix containing the independent variables in levels.
p Integer indicating the desired number of lagged differences to include. Default
is 1.
deterministics A character object indicating which deterministic variables should be added
("none","constant","trend","both"). Default is "constant".
ADL Logical object indicating whether an ADL model without error-correction term
should be estimated. Default is FALSE.
weights Choice of penalty weights. The weights can be automatically generated by ridge
regression (default) or ols. Alternatively, a conformable vector of non-negative
weights can be supplied or no weights can be applied.
k_delta The power to which the weights for delta should be raised, if weights are set to
"ridge" or "ols".
k_pi The power to which the weights for pi should be raised, if weights are set to
"ridge" or "ols".
lambda_g An optional user-specified grid for the group penalty may be supplied. If left
empty, a 10-dimensional grid containing 0 as the minimum value is generated.
lambda_i An optional user-specified grid for the individual penalty may be supplied. If left
empty, a 10-dimensional grid containing 0 as the minimum value is generated.
thresh The threshold for convergence.
max_iter_delta Maximum number of updates for delta. Default is 10^5.
max_iter_pi Maximum number of updates for pi. Default is 10^5.
max_iter_gamma Maximum number of updates for gamma. Default is 10^5.
Details
The function can generate an automated sequence of penalty parameters and offers the option to
compute and include adaptive penalty weights. In addition, it is possible to estimate a penalized
ADL model in differences by excluding the lagged levels from the model. For automated selection
of an optimal penalty value, see the function specs_opt(...).
Value
D A matrix containing the deterministic variables included in the model.
gammas A matrix containing the estimated coefficients of the stochastic variables in the
conditional error-correction model.
lambda_g The grid of group penalties.
lambda_i The grid of individual penalties.
Mv A matrix containing the independent variables, after regressing out the deterministic components.
My_d A vector containing the dependent variable, after regressing out the deterministic components.
theta The estimated coefficients for the constant and trend. If a deterministic component is excluded, its coefficient is set to zero.
v A matrix containing the independent variables (excluding deterministic components).
weights The vector of penalty weights.
y_d A vector containing the dependent variable, i.e. the differences of y.
Examples
#Estimate a model for unemployment and ten google trends
#Organize data
y <- Unempl_GT[,1]
index_GT <- sample(c(2:ncol(Unempl_GT)),10)
x <- Unempl_GT[,index_GT]
#Estimate a CECM with 1 lagged differences
my_specs <- specs(y,x,p=1)
#Estimate a CECM with 1 lagged differences and no group penalty
my_specs2 <- specs(y,x,p=1,lambda_g=0)
#Estimate an autoregressive distributed lag model with 2 lagged differences
my_specs3 <- specs(y,x,ADL=TRUE,p=2)
specs_opt SPECS with data transformation and penalty optimization
Description
This function estimates SPECS and selects the optimal penalty parameter based on a selection rule.
All arguments correspond to those of the function specs(...), but it contains the additional arguments
rule and CV_cutoff. Selection of the penalty parameter can be carried out by BIC or AIC or by time
series cross-validation (TSCV). The degrees of freedom for the information criteria (BIC or AIC)
are approximated by the number of non-zero coefficients in the estimated model. TSCV cuts the
sample in two, based on the argument CV_cutoff which determines the proportion of the training
sample. SPECS is estimated on the first part and the estimated model is used to predict the values
in the second part. The selection is then based on the lowest Mean-Squared Forecast Error (MSFE)
obtained over the test sample.
Usage
specs_opt(
y,
x,
p = 1,
rule = c("BIC", "AIC", "TSCV"),
CV_cutoff = 2/3,
deterministics = c("constant", "trend", "both", "none"),
ADL = FALSE,
weights = c("ridge", "ols", "none"),
k_delta = 1,
k_pi = 1,
lambda_g = NULL,
lambda_i = NULL,
thresh = 1e-04,
max_iter_delta = 1e+05,
max_iter_pi = 1e+05,
max_iter_gamma = 1e+05
)
Arguments
y A vector containing the dependent variable in levels.
x A matrix containing the independent variables in levels.
p Integer indicating the desired number of lagged differences to include. Default
is 1.
rule A character object indicating which selection rule the optimal choice of the
penalty parameters is based on. Default is "BIC".
CV_cutoff A numeric value between 0 and 1 that decides the proportion of the training
sample as a fraction of the complete sample. Applies only when rule="TSCV".
Default is 2/3.
deterministics A character object indicating which deterministic variables should be added
("none","constant","trend","both"). Default is "constant".
ADL Logical object indicating whether an ADL model without error-correction term
should be estimated. Default is FALSE.
weights Choice of penalty weights. The weights can be automatically generated by ridge
regression (default) or ols. Alternatively, a conformable vector of non-negative
weights can be supplied.
k_delta The power to which the weights for delta should be raised, if weights are set to
"ridge" or "ols".
k_pi The power to which the weights for pi should be raised, if weights are set to
"ridge" or "ols".
lambda_g An optional user-specified grid for the group penalty may be supplied. If left
empty, a 10-dimensional grid containing 0 as the minimum value is generated.
lambda_i An optional user-specified grid for the individual penalty may be supplied. If left
empty, a 10-dimensional grid containing 0 as the minimum value is generated.
thresh The threshold for convergence.
max_iter_delta Maximum number of updates for delta. Default is 10^5.
max_iter_pi Maximum number of updates for pi. Default is 10^5.
max_iter_gamma Maximum number of updates for gamma. Default is 10^5.
Value
D A matrix containing the deterministic variables included in the model.
gammas A matrix containing the estimated coefficients of the stochastic variables in the
conditional error-correction model.
gamma_opt A vector containing the estimated coefficients corresponding to the optimal
model.
lambda_g The grid of group penalties.
lambda_i The grid of individual penalties.
Mv A matrix containing the independent variables, after regressing out the deterministic components.
My_d A vector containing the dependent variable, after regressing out the deterministic components.
theta The estimated coefficients for the constant and trend. If a deterministic component is excluded, its coefficient is set to zero.
theta_opt The estimated coefficients for the constant and trend in the optimal model.
v A matrix containing the independent variables (excluding deterministic components).
weights The vector of penalty weights.
y_d A vector containing the dependent variable, i.e. the differences of y.
Examples
#Estimate an automatically optimized model for unemployment and ten google trends
#Organize data
y <- Unempl_GT[,1]
index_GT <- sample(c(2:ncol(Unempl_GT)),10)
x <- Unempl_GT[,index_GT]
#Estimate a CECM with 1 lagged difference and penalty chosen by the minimum BIC
my_specs <- specs_opt(y,x,p=1,rule="BIC")
coefs <- my_specs$gamma_opt
specs_tr SPECS on pre-transformed data
Description
This function computes the Single-equation Penalized Error Correction Selector as described in
Smeekes and Wijler (2020) based on data that is already in the form of a conditional error-correction
model.
Usage
specs_tr(
y_d,
z_l = NULL,
w,
deterministics = c("constant", "trend", "both", "none"),
ADL = FALSE,
weights = c("ridge", "ols", "none"),
k_delta = 1,
k_pi = 1,
lambda_g = NULL,
lambda_i = NULL,
thresh = 1e-04,
max_iter_delta = 1e+05,
max_iter_pi = 1e+05,
max_iter_gamma = 1e+05
)
Arguments
y_d A vector containing the differences of the dependent variable.
z_l A matrix containing the lagged levels.
w A matrix containing the required difference
deterministics Indicates which deterministic variables should be added (0 = none, 1=constant,
2=constant and linear trend).
ADL Boolean indicating whether an ADL model without error-correction term should
be estimated. Default is FALSE.
weights Choice of penalty weights. The weights can be automatically generated by ridge
regression (default) or ols. Alternatively, a conformable vector of non-negative
weights can be supplied.
k_delta The power to which the weights for delta should be raised, if weights are set to
"ridge" or "ols".
k_pi The power to which the weights for pi should be raised, if weights are set to
"ridge" or "ols".
lambda_g An optional user-specified grid for the group penalty may be supplied. If left
empty, a 10-dimensional grid containing 0 as the minimum value is generated.
lambda_i An optional user-specified grid for the individual penalty may be supplied. If left
empty, a 10-dimensional grid containing 0 as the minimum value is generated.
thresh The threshold for convergence.
max_iter_delta Maximum number of updates for delta. Default is 1e5.
max_iter_pi Maximum number of updates for pi. Default is 1e5.
max_iter_gamma Maximum number of updates for gamma. Default is 1e5.
Value
D A matrix containing the deterministic variables included in the model.
gammas A matrix containing the estimated coefficients of the stochastic variables in the
conditional error-correction model.
gamma_opt A vector containing the estimated coefficients corresponding to the optimal
model.
lambda_g The grid of group penalties.
lambda_i The grid of individual penalties.
theta The estimated coefficients for the constant and trend. If a deterministic component is excluded, its coefficient is set to zero.
theta_opt The estimated coefficients for the constant and trend in the optimal model.
weights The vector of penalty weights.
Examples
#Estimate a conditional error-correction model on pre-transformed data with a constant
#Organize data
y <- Unempl_GT[,1]
index_GT <- sample(c(2:ncol(Unempl_GT)),10)
x <- Unempl_GT[,index_GT]
y_d <- y[-1]-y[-100]
z_l <- cbind(y[-100],x[-100,])
w <- x[-1,]-x[-100,] #This w corresponds to a cecm with p=0 lagged differences
my_specs <- specs_tr(y_d,z_l,w,deterministics="constant")
#Estimate an ADL model on pre-transformed data with a constant
my_specs <- specs_tr(y_d,NULL,w,ADL=TRUE,deterministics="constant")
specs_tr_opt SPECS with data transformation and penalty optimization
Description
The same function as specs_tr(...), but on data that is pre-transformed to a CECM.
Usage
specs_tr_opt(
y_d,
z_l = NULL,
w,
rule = c("BIC", "AIC", "TSCV"),
CV_cutoff = 2/3,
deterministics = c("constant", "trend", "both", "none"),
ADL = FALSE,
weights = c("ridge", "ols", "none"),
k_delta = 1,
k_pi = 1,
lambda_g = NULL,
lambda_i = NULL,
thresh = 1e-04,
max_iter_delta = 1e+05,
max_iter_pi = 1e+05,
max_iter_gamma = 1e+05
)
Arguments
y_d A vector containing the differences of the dependent variable.
z_l A matrix containing the lagged levels.
w A matrix containing the required difference.
rule A character object indicating which selection rule the optimal choice of the
penalty parameters is based on. Default is "BIC".
CV_cutoff A numeric value between 0 and 1 that decides the proportion of the training
sample as a fraction of the complete sample. Applies only when rule="TSCV".
Default is 2/3.
deterministics A character object indicating which deterministic variables should be added
("none","constant","trend","both"). Default is "constant".
ADL Logical object indicating whether an ADL model without error-correction term
should be estimated. Default is FALSE.
weights Choice of penalty weights. The weights can be automatically generated by ridge
regression (default) or ols. Alternatively, a conformable vector of non-negative
weights can be supplied.
k_delta The power to which the weights for delta should be raised, if weights are set to
"ridge" or "ols".
k_pi The power to which the weights for pi should be raised, if weights are set to
"ridge" or "ols".
lambda_g An optional user-specified grid for the group penalty may be supplied. If left
empty, a 10-dimensional grid containing 0 as the minimum value is generated.
lambda_i An optional user-specified grid for the individual penalty may be supplied. If left
empty, a 10-dimensional grid containing 0 as the minimum value is generated.
thresh The threshold for convergence.
max_iter_delta Maximum number of updates for delta. Default is 10^5.
max_iter_pi Maximum number of updates for pi. Default is 10^5.
max_iter_gamma Maximum number of updates for gamma. Default is 10^5.
Value
D A matrix containing the deterministic variables included in the model.
gammas A matrix containing the estimated coefficients of the stochastic variables in the
conditional error-correction model.
gamma_opt A vector containing the estimated coefficients corresponding to the optimal
model.
lambda_g The grid of group penalties.
lambda_i The grid of individual penalties.
theta The estimated coefficients for the constant and trend. If a deterministic component is excluded, its coefficient is set to zero.
theta_opt The estimated coefficients for the constant and trend in the optimal model.
v A matrix containing the independent variables (excluding deterministic components).
weights The vector of penalty weights.
y_d A vector containing the dependent variable, i.e. the differences of y.
Examples
#Estimate a CECM with a constant, ols initial weights and penalty chosen by the minimum AIC
#Organize data
y <- Unempl_GT[,1]
index_GT <- sample(c(2:ncol(Unempl_GT)),10)
x <- Unempl_GT[,index_GT]
y_d <- y[-1]-y[-100]
z_l <- cbind(y[-100],x[-100,])
w <- x[-1,]-x[-100,] #This w corresponds to a cecm with p=0 lagged differences
my_specs <- specs_tr_opt(y_d,z_l,w,rule="AIC",weights="ols",deterministics="constant")
Unempl_GT Unemployment and Google Trends Data
Description
Time series data on Dutch unemployment from Statistics Netherlands, and Google Trends popular-
ity index for search terms related to unemployment. The Google Trends data can be used to nowcast
unemployment.
Usage
Unempl_GT
Format
A time series object where the first column contains monthly total unemployment in the Netherlands
(x1000, seasonally unadjusted), and the remaining 87 columns are monthly Google Trends series
with popularity of Dutch search terms related to unemployment.
Source
CBS StatLine, https://opendata.cbs.nl/statline, and Google Trends, https://www.google.nl/trends
# Guide to the `Place` API
## `contextily` to get maps & more of named places
Contextily allows you to get location information as well as a map from named places through the `Place` API. These places could be countries, cities, or streets. The geocoding is handled by geopy and the default service provider is OpenStreetMap's Nominatim. We can get a place by instantiating a `Place` object, simply passing a query string that is passed on to the geocoder.
`[1]:`
```
import geopandas as gpd
from shapely.geometry import box, Point
from contextily import Place
import contextily as cx
import numpy as np
from matplotlib import pyplot as plt
import rasterio
from rasterio.plot import show as rioshow
plt.rcParams["figure.dpi"] = 70 # lower image size
```
### Instantiating a `Place` object
`[2]:`
```
madrid = Place("Madrid")
ax = madrid.plot()
```
The zoom level is detected automatically, but can be adjusted through the `zoom` argument. `[3]:`
```
fig, axs = plt.subplots(2,2, figsize=(20,20))
for i, zoom_lvl in enumerate([10, 11, 12,13]):
ax = Place("Madrid", zoom=zoom_lvl).plot(ax=axs.flatten()[i])
ax.axis("off")
plt.tight_layout()
```
Bear in mind that increasing the zoom level by one leads to four times as many tiles being downloaded:
`[4]:`
```
for zoom_lvl in [12, 13, 14, 15, 16]:
cx.howmany(*madrid.bbox, zoom_lvl, ll=True)
```
```
Using zoom level 12, this will download 30 tiles
Using zoom level 13, this will download 99 tiles
Using zoom level 14, this will download 357 tiles
Using zoom level 15, this will download 1394 tiles
Using zoom level 16, this will download 5508 tiles
```
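As a rough check (a sketch that only reuses the counts printed above, assuming each tile is split into four children at the next zoom level), the reported numbers do grow by roughly a factor of four per zoom level:

```
counts = {12: 30, 13: 99, 14: 357, 15: 1394, 16: 5508}  # tile counts reported above
for zoom in range(13, 17):
    print(zoom, round(counts[zoom] / counts[zoom - 1], 1))  # ratio to the previous zoom level
```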
The basemap provider can be set with the `source` argument. `[5]:`
```
fig, axs = plt.subplots(2,2, figsize=(20,20))
for i, source in enumerate([cx.providers.Stamen.TonerLite,
cx.providers.OpenStreetMap.Mapnik,
cx.providers.Stamen.Watercolor,
cx.providers.CartoDB.Positron
]):
ax = Place("Madrid", source=source).plot(ax=axs.flatten()[i])
ax.axis("off")
plt.tight_layout()
```
There are many providers to choose from. They can be explored from within Python
`[6]:` `cx.providers.keys()` `[6]:`
```
dict_keys(['OpenStreetMap', 'OpenSeaMap', 'OpenPtMap', 'OpenTopoMap', 'OpenRailwayMap', 'OpenFireMap', 'SafeCast', 'Thunderforest', 'OpenMapSurfer', 'Hydda', 'MapBox', 'Stamen', 'Esri', 'OpenWeatherMap', 'HERE', 'FreeMapSK', 'MtbMap', 'CartoDB', 'HikeBike', 'BasemapAT', 'nlmaps', 'NASAGIBS', 'NLS', 'JusticeMap', 'Wikimedia', 'GeoportailFrance', 'OneMapSG'])
```
The image returned by the `Place` API can be saved separately by specifying the path argument upon instantiation `[7]:`
```
scq = Place("Santiago de Compostela", path="santiago.tif")
```
```
with rasterio.open("santiago.tif") as r:
rioshow(r)
```
### Exploring the Place object’s attributes¶
The image can be accessed separately:
`[9]:`
```
plt.imshow(madrid.im)
```
`[9]:`
```
<matplotlib.image.AxesImage at 0x11aa87d90>
```
If a path has been set at instantiation, the path can also be easily accessed:
`[10]:`
```
with rasterio.open(scq.path) as r:
rioshow(r)
```
The center coordinates of a place as returned by the geocoder are also available
`[11]:`
```
madrid.longitude, madrid.latitude
```
```
(-3.7035825, 40.4167047)
```
The place object has a bounding box of the geocoded place, as well as a bounding box of the map
`[12]:` `madrid.bbox` `[12]:`
```
[-3.8889539, 40.3119774, -3.5179163, 40.6437293]
```
`[13]:` `madrid.bbox_map` `[13]:`
```
(-440277.2829226152, -391357.5848201024, 4901753.749871785, 4960457.387594798)
```
Or you can access the `bbox` elements individually like this: `[14]:`
```
madrid.w, madrid.s, madrid.e, madrid.n
```
```
(-3.8889539, 40.3119774, -3.5179163, 40.6437293)
```
The bounding box of the map can come in handy when you want to get close-up maps of e.g. certain neighborhoods within a city:
`[15]:`
```
# Create random points within map bbox
Y = np.random.uniform(low=madrid.bbox_map[2], high=madrid.bbox_map[3], size=(8000,))
X = np.random.uniform(low=madrid.bbox_map[1], high=madrid.bbox_map[0], size=(8000,))
r_points = [Point(x,y) for x,y in zip(X,Y)]
df = gpd.GeoDataFrame(r_points, geometry=0, crs="EPSG:3857")
```
`[16]:`
```
ax = madrid.plot()
ax2 = df.plot(ax=ax, markersize=2)
```
`[17]:`
```
madrid_neighborhoods = ["La Latina", "Retiro"]
for barrio in madrid_neighborhoods:
place = Place(f"{barrio}, Madrid")
# get close-up maps for each neighborhood
e, w, s, n = place.bbox_map
ax = place.plot()
df.cx[e:w, s:n].plot(ax=ax)
```
Most basemap tiles provided through the web are expressed in the Web Mercator coordinate reference system ( `EPSG:3857` ). However, often our data is not expressed in the same CRS. In those cases, we have two options if we want to plot them on top of a basemap: a) reproject our data to Web Mercator, or b) reproject (warp) the tiles to conform with our data. Which one is best depends on many things but mostly on the size of your own data. If you have a large and/or detailed dataset (e.g. high
resolution polygons), reprojecting it might be expensive; if you have a small one, it might be the easiest as you can plot on top of the native tile. For the case where you don’t want to transform your dataset, an alternative is to change the tiles to its CRS. In raster parlance, this is called “warping”, and `contextily` can help you do that. `[1]:`
```
import geopandas
import rasterio
from rasterio.plot import show as rioshow
import matplotlib.pyplot as plt
import contextily as cx
from contextily.tile import warp_img_transform, warp_tiles, _warper
```
## Data¶
For this example, we will use the NYC boroughs dataset provided with `geopandas` : `[2]:`
```
db = geopandas.read_file(geopandas.datasets.get_path('nybb'))
```
By default, this dataset is expressed in the “NAD83 / New York Long Island” CRS ( `EPSG:2263` ): `[3]:` `db.crs` `[3]:`
```
<Projected CRS: EPSG:2263>
Name: NAD83 / New York Long Island (ftUS)
Axis Info [cartesian]:
- X[east]: Easting (US survey foot)
- Y[north]: Northing (US survey foot)
Area of Use:
- name: United States (USA) - New York - counties of Bronx; Kings; Nassau; New York; Queens; Richmond; Suffolk.
- bounds: (-74.26, 40.47, -71.8, 41.3)
Coordinate Operation:
- name: SPCS83 New York Long Island zone (US Survey feet)
- method: Lambert Conic Conformal (2SP)
Datum: North American Datum 1983
- Ellipsoid: GRS 1980
- Prime Meridian: Greenwich
```
## Convert your data to Web Mercator¶
The first option, if your data is not too large, is to convert them to Web Mercator, and then directly add a basemap (for example, with `add_basemap` ): `[4]:`
```
db_wm = db.to_crs(epsg=3857)
```
Once projected, the workflow is straightforward:
`[5]:`
```
ax = db_wm.plot()
cx.add_basemap(ax);
```
The result here is a map expressed in Web Mercator.
## Convert the tiles to your data’s CRS¶
The same journey can be travelled in the opposite direction by leaving your data untouched and warping the tiles coming from the web. To do this in `add_basemap` , all you have to do is express the CRS your data are in: `[6]:`
```
ax = db.plot()
cx.add_basemap(ax, crs=db.crs);
```
The result is then expressed in NAD83 in this case (note the coordinates of the axes differ across plots).
## Convert both datasets into a different CRS¶
It is also possible to make resulting plot in an entirely different CRS. For this, you will need to both project your own data and warp the tiles. For this example, let’s use the WGS84 (lon/lat, `EPSG:4326` ) CRS for the destination: `[7]:`
```
db_lonlat = db.to_crs(epsg=4326)
ax = db_lonlat.plot()
cx.add_basemap(ax, crs=db_lonlat.crs)
```
Note that the coordinates on the X and Y axis are now expressed in longitude and latitude.
## Warping from local files¶
The same functionality to warp tiles also works with local rasters stored on disk. This means you can have a raster saved in one CRS and you can plot it with `add_basemap` on a different CRS. This computation is all done on the fly, so no need to write to intermediate files.
For an example, let’s first save a raster with a basemap:
`[9]:`
```
! rm warp_tst.tif
# Extract bounding box in Web Mercator
w, s, e, n = db.to_crs(epsg=3857).total_bounds
# Download file
img, ext = cx.bounds2raster(w, s, e, n,
'warp_tst.tif',
zoom=10
)
```
The raster we have saved is expressed in Web Mercator:
`[10]:`
```
with rasterio.open("warp_tst.tif", "r") as r:
print(r.crs)
```
`EPSG:3857`
Now we can plot our original data in NAD83 and add a basemap from the (Web Mercator) raster by warping it into NAD83:
`[11]:`
```
ax = db.plot()
cx.add_basemap(ax, source="warp_tst.tif", crs=db.crs)
```
## Low-level warping functionality¶
In most cases, you will probably be fine using `add_basemap` . However, sometimes we need a bit more flexibility and power, even if it takes some more code. For those moments, `contextily` includes two additional functions: `warp_img_transform` , and `warp_tiles` . Let’s have a look at each of them!
`warp_tiles`
This method allows you to warp an image, provided you have its extent and CRS. For example, if you have downloaded an image with `bounds2img` , you can reproject it using `warp_tiles` . Let's download an image from the web: `[12]:`
```
# Extract bounding box in Web Mercator
w, s, e, n = db.to_crs(epsg=3857).total_bounds
# Download image
img, ext = cx.bounds2img(w, s, e, n)
# Render image
plt.imshow(img, extent=ext)
```
`[12]:`
```
<matplotlib.image.AxesImage at 0x1633f3520>
```
This is expressed in the source CRS, which in this case is Web Mercator. If we want to plot it in say lon/lat, we can warp it by:
`[13]:`
```
warped_img, warped_ext = cx.warp_tiles(img, ext, "EPSG:4326")
```
And we can render them similarly as before:
`[14]:`
```
plt.imshow(warped_img, extent=warped_ext)
```
```
<matplotlib.image.AxesImage at 0x16287e4c0>
```
Note how the extent is now expressed in longitude and latitude.
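As a small sketch of why this is useful, the warped image can now act directly as a background for data that is already in lon/lat, reusing `db`, `warped_img` and `warped_ext` from the cells above:

```
f, ax = plt.subplots(1)
ax.imshow(warped_img, extent=warped_ext)                 # warped basemap, now in EPSG:4326
db.to_crs(epsg=4326).plot(ax=ax, alpha=0.5, color="k")   # boroughs reprojected to lon/lat
```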
`warp_img_transform`
This method allows you to warp an image you have loaded from disk, provided you have its transform and CRS. For example, if you have downloaded an image with `bounds2raster` and stored it in a file, you can read it with `rasterio` and warp it using `warp_img_transform` . Let's use the example from before: `[15]:`
```
src = rasterio.open('warp_tst.tif')
img = src.read()
rioshow(img, transform=src.transform);
```
Now to take this into a NAD83, for example, we can:
`[16]:`
```
w_img, w_transform = warp_img_transform(img,
src.transform,
src.crs,
db.crs
)
f, ax = plt.subplots(1)
rioshow(w_img, transform=w_transform, ax=ax);
```
At heart, `contextily` is a package to work with data from the web. Its main functionality allows you to access tilesets exposed through the popular XYZ format and include them in your workflow through `matplotlib` . However, a little hidden gem in the package is also how useful it is for working with local files. For all this functionality, `contextily` relies on `rasterio` so, in the name of showing what a streamlined workflow could look like, we will switch back and forth between the two in this notebook. For good measure, we will also use `geopandas` as it'll show how they are all a family that works great together! `[1]:`
```
import contextily as ctx
import geopandas
import rasterio
from rasterio.plot import show as rioshow
from shapely.geometry import box
import matplotlib.pyplot as plt
```
## Saving tiles locally¶
The first use case is when you want to store locally a basemap you have accessed with `contextily` . For example, let's say you are visualising a point dataset. In this case, let's pull the CLIWOC `routes` dataset (https://figshare.com/articles/CLIWOC_Slim_and_Routes/11941224), which records ship routes from the XVIIth to XIXth centuries: `[2]:`
```
cliwoc = geopandas.read_file("https://ndownloader.figshare.com/files/21940242")
cliwoc.plot()
```
`[2]:` `<AxesSubplot:>`
A quick plot reveals some structure, but it is a bit hard to see much. Let’s style the routes a bit and add a basemap:
`[3]:`
```
ax = cliwoc.plot(linewidth=0.01, alpha=0.5, color="k")
ctx.add_basemap(ax,
crs=cliwoc.crs,
source=ctx.providers.Stamen.Watercolor
)
```
Now this is better! But imagine that you wanted to take this map to a Desktop GIS (like QGIS) and maybe do some more work; or that you simply wanted to retain a copy of the basemap in case you need to work offline. In those cases, `contextily` lets you download a basemap off the internet directly into a common GeoTIFF file.
### Raster from bounds¶
The workhorse here is `bounds2raster` , which expects a bounding box and downloads a basemap for that area into a local `.tif` file. Let’s see for the example above. First, we need to extract the bounds of our dataset, which will set the extent of the download: `[4]:`
```
west, south, east, north = bbox = cliwoc.total_bounds
bbox
```
`[4]:`
```
array([-179.98, -71.17, 179.98, 80.8 ])
```
Then we can download the tile:
`[5]:`
```
img, ext = ctx.bounds2raster(west,
south,
east,
north,
"world_watercolor.tif",
source=ctx.providers.Stamen.Watercolor,
ll=True
)
```
Note that, since our bounding box was expressed in lon/lat, we pass `ll=True` so the function knows about it. You should now see a file written locally named `world_watercolor.tif` . This is saved in a standard format that any GIS should be able to read. In Python, the quickest way to visualise a `GeoTIFF` file is with `rasterio` : `[6]:`
```
with rasterio.open("world_watercolor.tif") as r:
rioshow(r)
```
Note how the data is the same as in the figure above, but it’s expressed in a different projection. This is because a basemap is always stored in its original CRS, Web Mercator. See below how you can modify that on-the-fly with `contextily` .
### Raster from name¶
The other option `contextily` includes to save rasters is through its Place API, which allows you to query locations through their names (thanks to `geopy`, https://geopy.readthedocs.io/en/stable/). For example, we can retrieve a basemap of Cape Town: `[8]:`
```
cape_town = ctx.Place("Cape Town", source=ctx.providers.OpenStreetMap.Mapnik)
cape_town.plot()
```
```
<AxesSubplot:title={'center':'Cape Town, City of Cape Town, Western Cape, 8001, South Africa'}, xlabel='X', ylabel='Y'>
```
Now, if we want to store the basemap in a file as we download it, you can pass a path to the `path` argument: `[9]:`
```
cape_town = ctx.Place("Cape Town", source=ctx.providers.OpenStreetMap.Mapnik, path="cape_town.tif")
```
And this should create a new file on your local directory.
## Reading local rasters¶
`rasterio`
`rasterio` allows us to move quickly from file to plot if we want to inspect what's saved: `[10]:`
```
with rasterio.open("cape_town.tif") as r:
rioshow(r)
```
`add_basemap`
If we are combining a locally stored raster with other data (e.g. vector), `add_basemap` provides a few goodies that make it worth considering.
[Data preparation detour]
To demonstrate this functionality, we will first clip out the sections of the CLIWOC routes within the bounding box of Cape Town:
`[11]:`
```
with rasterio.open("cape_town.tif") as r:
west, south, east, north = tuple(r.bounds)
cape_town_crs = r.crs
bb_poly = box(west, south, east, north)
bb_poly = geopandas.GeoDataFrame({"geometry": [bb_poly]},
crs=cape_town_crs
)
```
With a `GeoDataFrame` for the area ( `bb_poly` ), we can clip from the routes: `[12]:`
```
cliwoc_cape_town = geopandas.overlay(cliwoc,
bb_poly.to_crs(cliwoc.crs),
how="intersection"
)
cliwoc_cape_town.plot()
```
`[12]:` `<AxesSubplot:>`
Additionally, for the sake of the illustration, we will also clip routes within 10Km of the center of Cape Town (Pseudo-Mercator is definitely not the best projection for calculating distances, but to keep this illustration concise, it’ll do):
`[13]:`
```
cape_town_buffer = geopandas.GeoDataFrame({"geometry": bb_poly.centroid.buffer(10000)},
crs=bb_poly.crs
)
cliwoc_cape_town_buffer = geopandas.overlay(cliwoc,
cape_town_buffer.to_crs(cliwoc.crs),
how="intersection"
)
cliwoc_cape_town_buffer.plot()
```
`[13]:` `<AxesSubplot:>` Now, we can use `add_basemap` to add a local raster file as the background to our map. Simply replace a web `source` for the path to the local file, and you’ll be set! `[14]:`
```
ax = cliwoc_cape_town.plot(linewidth=0.05, color="k")
ctx.add_basemap(ax,
crs=cliwoc_cape_town.crs,
source="cape_town.tif"
)
```
Note how the `crs` functionality works just as expected in this context as well. `contextily` checks the CRS of the local file and, if it is different from that specified in the `crs` parameter, it warps the image so they align automatically.
Same as with web tiles, we can “dim” the basemap by boosting up transparency:
`[15]:`
```
ax = cliwoc_cape_town.plot(linewidth=0.05, color="k")
ctx.add_basemap(ax,
crs=cliwoc_cape_town.crs,
source="cape_town.tif",
alpha=0.5
)
```
The `add_basemap` method has the `reset_extent` parameter, which is set to `True` by default. When loading local rasters, this option uses the axis bounds and loads only the portion of the raster within the bounds. This results in two key benefits: first, the original axis is not modified in its extent, so you are still displaying exactly what you wanted, regardless of the extent of the raster you use on the basemap; and second, the method is very efficient even if your raster is large
and has wider coverage (only the portion of the raster needed to cover the axis is accessed and loaded).
This can be better demonstrated using the buffer clip:
`[16]:`
```
ax = cliwoc_cape_town_buffer.plot(linewidth=1, color="k")
ctx.add_basemap(ax,
crs=cliwoc_cape_town.crs,
source="cape_town.tif"
)
```
Now, `reset_extent` can be turned off when you want to modify the original axis bounds to include the full raster you use as basemap. The effect is clearly seen in the following figure: `[17]:`
```
ax = cliwoc_cape_town_buffer.plot(linewidth=1, color="k")
ctx.add_basemap(ax,
crs=cliwoc_cape_town.crs,
source="cape_town.tif",
reset_extent=False
)
```
These two options give you flexibility to use the local raster features in different contexts.
# `contextily` to display imagery from Google Earth Engine (GEE)
In this notebook, we show how you can access data from Google Earth Engine through Google's `ee` package and create static (base)maps using `contextily`. We also show how this relates to the standard way suggested by Google to display imagery in an interactive context using `folium`.
### Requirements¶
See official guidance from Google here. Let’s import the API package:
`[1]:`
```
import ee
ee.__version__
```
`[1]:` `'0.1.221'`
And before we can access Earth Engine imagery, we need to authenticate ourselves. If you’re on a browser where you’re logged on your Google account, this should be straightforward (and a one off):
`[ ]:`
```
# Cell cleared to avoid pasting tokens, etc.
ee.Authenticate()
```
And once past Google security check, we can initialize the session:
`[2]:` `ee.Initialize()`
### Interactive way using `folium`
Google has an illustration with the great `folium` library (https://python-visualization.github.io/folium/), which provides interactive access to EE imagery.
In this example, replicated from the original link, we create an interactive map of the SRTM elevation model from NASA:
`[3]:`
```
# Import the Folium library.
import folium
# Define a method for displaying Earth Engine image tiles to folium map.
def add_ee_layer(self, ee_image_object, vis_params, name):
map_id_dict = ee.Image(ee_image_object).getMapId(vis_params)
folium.raster_layers.TileLayer(
tiles = map_id_dict['tile_fetcher'].url_format,
attr = 'Map Data © <a href="https://earthengine.google.com/">Google Earth Engine</a>',
name = name,
overlay = True,
control = True
).add_to(self)
# Add EE drawing method to folium.
folium.Map.add_ee_layer = add_ee_layer
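# The SRTM digital elevation model used below; the asset ID is an assumption here,
# taken from Google's standard Earth Engine example rather than from this document.
dem = ee.Image('USGS/SRTMGL1_003')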
# Set visualization parameters.
vis_params = {
'min': 0,
'max': 4000,
'palette': ['006633', 'E5FFCC', '662A00', 'D8D8D8', 'F5F5F5']}
# Create a folium map object.
my_map = folium.Map(location=[20, 0], zoom_start=3, height=500)
# Add the elevation model to the map object.
my_map.add_ee_layer(dem.updateMask(dem.gt(0)), vis_params, 'DEM')
# Add a layer control panel to the map.
my_map.add_child(folium.LayerControl())
# Display the map.
display(my_map)
```
### Rendering GEE with `contextily`
Now, because GEE provides access to imagery through the popular XYZ standard, which is fully supported by `contextily`, we can also render its imagery statically. `[4]:`
```
import contextily
import matplotlib.pyplot as plt
```
# SRTM elevation model¶
Let’s decouple the data access from the rendering in the example above. First, we specify the dataset ( `dem` ) and pick the visualisation parameters ( `vis_params` ): `[5]:`
```
# Dataset: the SRTM elevation model (asset ID assumed, as in Google's standard example).
dem = ee.Image('USGS/SRTMGL1_003')
# Set visualization parameters.
vis_params = {
'min': 0,
'max': 4000,
'palette': ['006633', 'E5FFCC', '662A00', 'D8D8D8', 'F5F5F5'],
'transparency': 0.5
}
```
With this, we can obtain from Google a bespoke base URL:
`[6]:`
```
src = dem.getMapId(vis_params)["tile_fetcher"].url_format
src
```
`[6]:`
```
'https://earthengine.googleapis.com/v1alpha/projects/earthengine-legacy/maps/e772f0b6ccf89aae965d25d9887de682-80246485c60c87ca3f19bec781b4a24e/tiles/{z}/{x}/{y}'
```
Note that, because this requires login with your account, the link above will not work directly outside the session. So if you want to access it, you'll need your own login and then replicate the code to get your own URL to the dataset.
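As an aside (a sketch, not part of the original walkthrough): because `src` is just an XYZ URL template, it can also be handed to other `contextily` functions such as `bounds2img`, for any extent expressed in Web Mercator. The bounds below simply reuse the approximate Madrid extent shown earlier in this guide:

```
w, e, s, n = (-440277.3, -391357.6, 4901753.7, 4960457.4)  # approximate extent in EPSG:3857
img, ext = contextily.bounds2img(w, s, e, n, zoom=8, source=src)
plt.imshow(img, extent=ext)
```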
Now let’s pick a place to create a map for. For example, Switzerland. We can pull down the part of the GEE-provided DEM for this region using, for example, the place API in `contextily` . All we need to specify is to point the `source` argument to the URL provided above: `[7]:`
```
sw = contextily.Place("Switzerland", source=src)
```
And voila, we have it ready to plot:
`[8]:` `sw.plot()` `[8]:`
```
<matplotlib.axes._subplots.AxesSubplot at 0x7fb82cef01d0>
```
### Landsat - 8¶
Here is another example to access Landsat imagery at two different points in time. To do this, we will need to create the base URL for the two dates. We will use the case of Hurricane Harvey to extract two snapshots of Houston before and after the storm.
First, we specify the dataset ( `l8` ), the location we want to query for clean/cloudfree images ( `houston` ) and the dates for before ( `pre` ) and after ( `aft` ), as well as the visual parameters we want for the resulting imagery ( `vis_params` ): `[9]:`
```
l8 = ee.ImageCollection("LANDSAT/LC08/C01/T1_SR")
houston = ee.Geometry.Point(-95.3908, 29.7850)
pre = l8.filterDate("2017-07-15", "2017-08-16")\
.filterBounds(houston)\
.mosaic()
aft = l8.filterDate("2017-08-17", "2017-09-30")\
.filterBounds(houston)\
.mosaic()
viz_params = {
"bands": ["B4", "B3", "B2"],
"gain": '0.1, 0.1, 0.1',
"scale":20
}
```
With this, we can create the URL for imagery before ( `url_pre` ) and after ( `url_aft` ) the hurricane: `[10]:`
```
url_pre = pre.getMapId(viz_params)["tile_fetcher"].url_format
url_aft = aft.getMapId(viz_params)["tile_fetcher"].url_format
```
To create a map with `contextily` , we can now use the place API again: `[11]:`
```
map_pre = contextily.Place("Houston, TX", source=url_pre)
map_aft = contextily.Place("Houston, TX", source=url_aft)
```
Now we can plot the image of before and after the storm:
`[12]:`
```
f, axs = plt.subplots(1, 2, figsize=(16, 8))
for i, m in enumerate([map_pre, map_aft]):
ax = axs[i]
m.plot(ax=ax)
ax.set_axis_off()
f.suptitle("Houston - Before/After Harvey")
plt.show()
```
# OSMNX & Cenpy
## Using contextily to map OpenStreetMap & US Census bureau data¶
In this notebook, we’ll discuss how to access open data sources, such as data from OpenStreetMap or the United States Census, and make static maps showing urban demographics and streets with basemaps provided by `contextily` . The three main packages covered in this notebook are explained below:
### OSMNX¶
OSMNX (styled `osmnx`, pronounced `oh-ess-em-en-echs`) is a widely used package for examining OpenStreetMap data from Python. A good overview of the core concepts & ideas comes from @gboeing, the lead author and maintainer of the package. Here, we'll use it to extract the street network of Austin, TX.
### Contextily¶
Contextily (pronounced `context-a-lee`) is a Python package that works with online map tile servers to provide basemaps for `matplotlib` plots. A ton of information on the package is available at contextily.readthedocs.io (https://contextily.readthedocs.io/en/latest/).
### Cenpy¶
CenPy (pronounced `sen-pie`) is a Python package for interacting with the US Census Bureau's Data Products, hosted at api.census.gov (https://api.census.gov). The Census exposes a ton of data products for people to use. Cenpy itself provides 2 "levels" of access.
# Census `products`
Most users simply want to get into the census, retrieve data, and then map, plot, analyze, or model that data. For this, `cenpy` wraps the main "products" that users may want to access: the American Community Survey & 2010 Decennial Census. These are designed to interface directly with the US Census Bureau's data APIs, get both the geographies & data from the US Census, and return that to the user, ready to plot. We'll cover this API here.
# Building Blocks of `cenpy.products`
For those interested, `cenpy` also has a lower-level interface designed to directly interact with US Census data products through their two constituent parts: the data product from https://api.census.gov, and the geography product from the US Census's ESRI MapServer. This is intended for developers to build new `products` or to interface directly with the API as they wish. This is pretty straightforward to use, but requires a bit more technical knowledge to make it just work, so if you simply need US Census or ACS data, focus on the `product` API.
# Using the Packages¶
To use packages in Python, you must first `import` the package. Below, we import four packages:
* `cenpy`
* `osmnx`
* `contextily`
* `matplotlib.pyplot`
`[1]:`
```
import cenpy
import osmnx
import contextily
import matplotlib.pyplot as plt
%matplotlib inline
```
```
/opt/anaconda3/envs/analysis/lib/python3.8/site-packages/fuzzywuzzy/fuzz.py:11: UserWarning: Using slow pure-python SequenceMatcher. Install python-Levenshtein to remove this warning
warnings.warn('Using slow pure-python SequenceMatcher. Install python-Levenshtein to remove this warning')
```
`osmnx` , `contextily` , and `cenpy.products` work using a place-oriented API. This means that users specify a place name, like `Columbus, OH` or `Kansas City, MO-KS` , or `California` , and the package parses this name and grabs the relevant data. `osmnx` uses the Open Street Map service, `cenpy` uses the Us Census Bureau’s service, and `contextily` has its own distinctive set of providers so they can
sometimes disagree slightly, especially when considering older census products. Regardless, to grab the US census data using `cenpy` , you pass the place name and the columns of the Census product you wish to extract. Below, we’ll grab two columns from the American Community Survey: Total population ( `B02001_001E` ) and count of African American persons ( `B02001_003E` ). We’ll grab this from Austin, TX: `[2]:`
```
aus_data = cenpy.products.ACS().from_place('Austin, TX',
variables=['B02001_001E', 'B02001_003E'])
```
```
Matched: Austin, TX to Austin city within layer Incorporated Places
```
When this runs, `cenpy` does a few things: 1. it asks the census for all the relevant US Census Tracts that fall within Austin, TX 2. it parses the shapes of Census tracts to make sure they’re valid 3. it parses the data from the Census to ensure it’s valid Above, you may see a warning that the Austin, TX shape is invalid! This is `cenpy` running validation on the data. This problem can be fixed, but does not immediately affect analyses.
Likewise, OSMNX has a place-oriented API. To grab the street network from Austin, we can run a similar query:
`[3]:`
```
aus_graph = osmnx.graph_from_place('Austin, TX')
```
However, the two packages' default representations are quite different. `osmnx` focuses on the `networkx` package for its core representation (hence, `osm` for Open Streetmap and `nx` for NetworkX): `[4]:` `aus_graph` `[4]:`
```
<networkx.classes.multidigraph.MultiDiGraph at 0x162e26b50>
```
In contrast `cenpy` uses `pandas` (and, specifically, `geopandas` ) to express the demographics and geography of US Census data. These packages provide dataframes, like spreadsheets, which can be used to analyze data in Python. Below, each row contains the shape of one US Census tract (the geometry used by default by `cenpy` ), and the columns provide descriptive information about the tract. `[5]:` `aus_data.head()` `[5]:`
GEOID | geometry | B02001_001E | B02001_003E | state | county | tract |
| --- | --- | --- | --- | --- | --- | --- |
0 | 48453001777 | POLYGON ((-10893854.050 3530337.870, -10893733... | 6270.0 | 305.0 | 48 | 453 | 001777 |
1 | 48453001303 | POLYGON ((-10884014.410 3536874.560, -10883958... | 4029.0 | 41.0 | 48 | 453 | 001303 |
2 | 48453000603 | POLYGON ((-10881841.790 3540726.020, -10881828... | 8045.0 | 577.0 | 48 | 453 | 000603 |
3 | 48453001786 | POLYGON ((-10883185.970 3559253.320, -10883154... | 5283.0 | 70.0 | 48 | 453 | 001786 |
4 | 48453001402 | POLYGON ((-10881202.260 3534408.710, -10881206... | 2617.0 | 23.0 | 48 | 453 | 001402 |
Fortunately, you can convert the `networkx` objects that `osmnx` focuses on into `pandas` dataframes, so that both `cenpy` and `osmnx` match in their representation. This makes it very easy to work with OSM data alongside of census data. To convert the OSM data into a `pandas` dataframe, we must do two things. First, we need to use the `osmnx.graph_to_gdfs` to convert the graph to `GeoDataFrames` , which are like a standard `pandas.DataFrame` , but with additional geographic information on the shape of each road. The `graph_to_gdfs` actually produces two dataframes: one full of roads and one full of intersections. We’ll separate the two below: `[6]:`
```
aus_nodes, aus_streets = osmnx.graph_to_gdfs(aus_graph)
```
Now, the `aus_streets` dataframe looks like the `aus_data` dataframe, where each row is a street, and columns contain some information about the street: `[7]:` `aus_streets.head()` `[7]:`
u | v | key | osmid | highway | oneway | length | geometry | name | lanes | bridge | service | junction | maxspeed | access | ref | width | tunnel | area |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
0 | 5532286976 | 2022679877 | 0 | 191679752 | service | False | 69.530 | LINESTRING (-97.79634 30.15963, -97.79675 30.1... | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN |
1 | 5532286976 | 5532286980 | 0 | 191679752 | service | False | 17.599 | LINESTRING (-97.79634 30.15963, -97.79632 30.1... | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN |
2 | 5532286976 | 5532286980 | 1 | 576925636 | service | False | 126.129 | LINESTRING (-97.79634 30.15963, -97.79593 30.1... | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN |
3 | 5532286980 | 5532286976 | 0 | 191679752 | service | False | 17.599 | LINESTRING (-97.79630 30.15948, -97.79632 30.1... | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN |
4 | 5532286980 | 5532286976 | 1 | 576925636 | service | False | 126.129 | LINESTRING (-97.79630 30.15948, -97.79611 30.1... | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN |
The last bit of data processing that is needed to make the two datasets fully comport within one another is to set their coordinate reference systems to ensure that they align and can be plotted with webtile backing. The US Census provides geographical data in Web Mercator projection (likely due to the fact that it serves many webmapping applications in the US Government), whereas the Open Streetmap project serves data in raw latitude/longitude by default. For `contextily` , we’ll need
everything in a Web Mercator projection. So, to convert data between coordinate reference systems, we can use the `to_crs` method of `GeoDataFrames` . This changes the coordinate reference system for the dataframe. To convert one dataframe into the coordiante reference system of another, it’s often enough to provide the coordinate reference of the target dataframe to the `to_crs` function: `[8]:`
```
aus_streets = aus_streets.to_crs(aus_data.crs)
```
Now, the two dataframes have the same coordinate reference system:
`[9]:` `aus_data.crs` `[9]:`
`[10]:` `aus_streets.crs` `[10]:`
Now, we can make maps using the data, or can conduct analyses using the streets & demographics of Austin, TX. Using `contextily`, we can also ensure that a nice basemap is added: `[11]:`
```
f,ax = plt.subplots(1,1, figsize=(15,15))
aus_streets.plot(linewidth=.25, ax=ax, color='k')
aus_data.eval('pct_afam = B02001_003E / B02001_001E')\
.plot('pct_afam', cmap='plasma', alpha=.7, ax=ax, linewidth=.25, edgecolor='k')
contextily.add_basemap(ax=ax, url=contextily.providers.CartoDB.Positron)
#ax.axis(aus_streets.total_bounds[[0,2,1,3]])
ax.set_title('Austin, TX\nAfrican American %')
#ax.set_facecolor('k')
```
```
Text(0.5, 1.0, 'Austin, TX\nAfrican American %')
```
This means that urban data science in Python has never been easier! So much data is at your fingertips, just a `from_place` away. Both packages can be installed from `conda-forge`, the community-driven package repository in Anaconda, the scientific Python distribution. Check out other examples of using `cenpy` (https://cenpy-devs.github.io/cenpy), `contextily` (https://contextily.readthedocs.io/en/stable) and `osmnx` (https://osmnx.readthedocs.io/en/stable/) from their respective websites. And, most importantly, happy hacking!
## Plotting basemaps¶
* contextily.add_basemap(ax, zoom='auto', source=None, interpolation='bilinear', attribution=None, attribution_size=8, reset_extent=True, crs=None, resampling=Resampling.bilinear, **extra_imshow_args)¶
*
Add a (web/local) basemap to ax.
* axAxesSubplot
*
Matplotlib axes object on which to add the basemap. The extent of the axes is assumed to be in Spherical Mercator (EPSG:3857), unless the crs keyword is specified.
* zoomint or ‘auto’
*
[Optional. Default=’auto’] Level of detail for the basemap. If ‘auto’, it is calculated automatically. Ignored if source is a local file.
* sourcexyzservices.TileProvider object or str
*
[Optional. Default: OpenStreetMap Humanitarian web tiles]. The tile source: a web tile provider or a path to a local file.
* attribution_sizeint
*
[Optional. Defaults to ATTRIBUTION_SIZE]. Font size to render attribution text with.
* reset_extentbool
*
[Optional. Default=True] If True, the extent of the basemap added is reset to the original extent (xlim, ylim) of ax
* crsNone or str or CRS
*
[Optional. Default=None] coordinate reference system (CRS), expressed in any format permitted by rasterio, to use for the resulting basemap. If None (default), no warping is performed and the original Spherical Mercator (EPSG:3857) is used.
* resampling<enum ‘Resampling’>
*
[Optional. Default=Resampling.bilinear] Resampling method for executing warping, expressed as a rasterio.enums.Resampling method.
* **extra_imshow_args
*
Other parameters to be passed to imshow.
Examples
> >>> import geopandas >>> import contextily as ctx >>> db = geopandas.read_file(ps.examples.get_path('virginia.shp'))
Ensure the data is in Spherical Mercator:
> >>> db = db.to_crs(epsg=3857)
Add a web basemap:
> >>> ax = db.plot(alpha=0.5, color='k', figsize=(6, 6)) >>> ctx.add_basemap(ax, source=url) >>> plt.show()
Or download a basemap to a local file and then plot it:
> >>> source = 'virginia.tiff' >>> _ = ctx.bounds2raster(*db.total_bounds, zoom=6, source=source) >>> ax = db.plot(alpha=0.5, color='k', figsize=(6, 6)) >>> ctx.add_basemap(ax, source=source) >>> plt.show()
* contextily.add_attribution(ax, text, font_size=8, **kwargs)¶
*
Utility to add attribution text.
Matplotlib axes object on which to add the attribution text.
* textstr
*
Text to be added at the bottom of the axis.
* font_sizeint
*
[Optional. Defaults to 8] Font size in which to render the attribution text.
* **kwargsAdditional keywords to pass to the matplotlib text method.
* Returns
*
* matplotlib.text.Text
*
Matplotlib Text object added to the plot.
## Working with tiles¶
* contextily.bounds2raster(w, s, e, n, path, zoom='auto', source=None, ll=False, …)¶
*
Take bounding box and zoom, and write tiles into a raster file in the Spherical Mercator CRS (EPSG:3857)
* zoomint
*
Level of detail
* pathstr
*
Path to raster file to be written
* sourcexyzservices.TileProvider object or str
*
* contextily.bounds2img(w, s, e, n, zoom='auto', source=None, ll=False, …)¶
*
Take bounding box and zoom and return an image with all the tiles that compose the map and its Spherical Mercator extent.
* zoomint
*
Level of detail
* sourcexyzservices.TileProvider object or str
*
* contextily.warp_tiles(img, extent, t_crs='EPSG:4326', resampling=Resampling.bilinear)¶
*
Reproject (warp) a Web Mercator basemap into any CRS on-the-fly
* NOTE: this method works well with contextily’s bounds2img approach to
*
raster dimensions (h, w, b)
* imgndarray
*
Image as a 3D array (h, w, b) of RGB values
* extenttuple
*
Bounding box [minX, maxX, minY, maxY] of the returned image, expressed in Web Mercator (EPSG:3857)
* t_crsstr/CRS
*
[Optional. Default=’EPSG:4326’] Target CRS, expressed in any format permitted by rasterio. Defaults to WGS84 (lon/lat)
* resampling<enum ‘Resampling’>
*
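The reference above does not include an example, so here is a minimal sketch of how `warp_tiles` is typically paired with `bounds2img`; the bounding-box numbers are arbitrary placeholders in Web Mercator metres:

```
import contextily as ctx

# placeholder bounding box (Web Mercator metres)
w, s, e, n = (-10880000, 3520000, -10860000, 3540000)
img, ext = ctx.bounds2img(w, s, e, n, zoom=10)                 # tiles + Web Mercator extent
img_ll, ext_ll = ctx.warp_tiles(img, ext, t_crs='EPSG:4326')   # warp to lon/lat
```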
* contextily.warp_img_transform(img, transform, s_crs, t_crs, resampling=Resampling.bilinear)¶
*
Reproject (warp) an img with a given transform and s_crs into a different t_crs
NOTE: this method works well with rasterio’s .read() approach to raster’s dimensions (b, h, w)
* imgndarray
*
Image as a 3D array (b, h, w) of RGB values (e.g. as returned from rasterio’s .read() method)
* transformaffine.Affine
*
* s_crsstr/CRS
*
Source CRS in which img is passed, expressed in any format permitted by rasterio.
* t_crsstr/CRS
*
Target CRS, expressed in any format permitted by rasterio.
* resampling<enum ‘Resampling’>
*
* w_imgndarray
*
Warped image as a 3D array (b, h, w) of RGB values (e.g. as returned from rasterio’s .read() method)
* w_transformaffine.Affine
*
* contextily.howmany(w, s, e, n, zoom, verbose=True, ll=False)¶
*
Number of tiles required for a given bounding box and a zoom level
* zoomint
*
Level of detail
* verboseBoolean
*
[Optional. Default=True] If True, print short message with number of tiles and zoom.
* llBoolean
*
[Optional. Default: False] If True, the bounding box coordinates are assumed to be in lon/lat rather than Web Mercator.
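A short usage sketch (not from the library docs; the lon/lat bounding box is an arbitrary illustration):

```
import contextily as ctx

# arbitrary lon/lat box around Austin, TX; ll=True flags lon/lat input
ctx.howmany(-97.85, 30.15, -97.70, 30.35, zoom=12, ll=True)
```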
## Geocoding and plotting places¶
* class contextily.Place(search, zoom=None, path=None, zoom_adjust=None, source=None, geocoder=<geopy.geocoders.nominatim.Nominatim object>)¶
*
Geocode a place by name and get its map.
This allows you to search for a name (e.g., city, street, country) and grab map and location data from the internet.
* searchstring
*
The location to be searched.
* zoomint or None
*
[Optional. Default: None] The level of detail to include in the map. Higher levels mean more tiles and thus longer download time. If None, the zoom level will be automatically determined.
* pathstr or None
*
[Optional. Default: None] Path to a raster file that will be created after getting the place map. If None, no raster file will be downloaded.
* zoom_adjustint or None
*
[Optional. Default: None] The amount to adjust a chosen zoom level if it is chosen automatically.
* sourcexyzservices.providers object or str
*
* geocodergeopy.geocoders
*
[Optional. Default: geopy.geocoders.Nominatim()] Geocoder method to process search
* Attributes
*
* geocodegeopy object
*
The result of calling
```
geopy.geocoders.Nominatim
```
with `search` as input.
* sfloat
*
The southern bbox edge.
* nfloat
*
The northern bbox edge.
* efloat
*
The eastern bbox edge.
* wfloat
*
The western bbox edge.
* imndarray
*
The image corresponding to the map of
`search`.
* bboxlist
*
The bounding box of the returned image, expressed in lon/lat, with the following order: [minX, minY, maxX, maxY]
* bbox_maptuple
*
The bounding box of the returned image, expressed in Web Mercator, with the following order: [minX, minY, maxX, maxY]
Methods
`plot` ([ax, zoom, interpolation, attribution])
Plot a Place object
* Place.plot(ax=None, zoom='auto', interpolation='bilinear', attribution=None)¶
*
Plot a Place object …
Matplotlib axis with x_lim and y_lim set in Web Mercator (EPSG=3857). If not provided, a new 12x12 figure will be set and the name of the place will be added as title
* zoomint/’auto’
*
[Optional. Default=’auto’] Level of detail for the basemap. If ‘auto’, it is calculated automatically. Ignored if source is a local file.
* interpolationstr
*
[Optional. Default=’bilinear’] Interpolation algorithm to be passed to imshow.
* Returns
*
Matplotlib axis with x_lim and y_lim set in Web Mercator (EPSG=3857) containing the basemap
Examples
> >>> lvl = ctx.Place('Liverpool') >>> lvl.plot()
* contextily.plot_map(place, bbox=None, title=None, ax=None, axis_off=True, latlon=True, attribution=None)¶
*
Plot a map of the given place.
* placeinstance of Place or ndarray
*
The map to plot. If an ndarray, this must be an image corresponding to a map. If an instance of
`Place`, the extent of the image and name will be inferred from the bounding box.
* axinstance of matplotlib Axes object or None
*
The axis on which to plot. If None, one will be created.
* axis_offbool
*
Whether to turn off the axis border and ticks before plotting.
* attributionstr
*
[Optional. Default to standard ATTRIBUTION] Text to be added at the bottom of the axis.
* Returns
*
* axinstance of matplotlib Axes object or None
*
The axis on which the map is plotted.
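A minimal sketch combining `Place` and `plot_map` (the place name below is just an example):

```
import contextily as ctx

austin = ctx.Place('Austin, TX')            # geocode and download the map
ax = ctx.plot_map(austin, title='Austin, TX')
```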
reaflow-extended | npm | JavaScript | 🕸 reaflow
===
Node-based Visualizations for React
---
REAFLOW is a modular diagram engine for building static or interactive editors. The library is feature-rich and modular, allowing you to display complex visualizations with total customizability.
If you are looking for network graphs, check out [reagraph](https://reagraph.dev).
🚀 Quick Links
---
* Checkout the [**docs and demos**](https://reaflow.dev)
* Explore the library on [Chroma](https://www.chromatic.com/library?appId=5f99ba42fe88ac0022fd1147)
* Learn about updates from the [Changelog](https://github.com/reaviz/reaflow/blob/HEAD/CHANGELOG.md)
✨ Features
---
* Complex automatic layout leveraging ELKJS
* Easy Node/Edge/Port customizations
* Zooming / Panning / Centering controls
* Drag and drop Node/Port connecting and rearranging
* Nesting of Nodes/Edges
* Proximity based Node linking helper
* Node/Edge selection helper
* Undo/Redo helper
📦 Usage
---
Install the package via **NPM**:
```
npm i reaflow --save
```
Install the package via **Yarn**:
```
yarn add reaflow
```
Import the component into your app and add some nodes and edges:
```
import React from 'react';
import { Canvas } from 'reaflow';
export default () => (
<Canvas
maxWidth={800}
maxHeight={600}
nodes={[
{
id: '1',
text: '1'
},
{
id: '2',
text: '2'
}
]}
edges={[
{
id: '1-2',
from: '1',
to: '2'
}
]}
/>
);
```
🔭 Development
---
If you want to run reaflow locally, it's super easy!
* Clone the repo
* `yarn install`
* `yarn start`
* Browser opens to Storybook page
❤️ Contributors
---
Thanks to all our contributors!
Readme
---
### Keywords
* react
* reactjs
* workflow
* node-editor
* diagrams
* elkjs
github.com/ebitengine/oto | go | Go | README
[¶](#section-readme)
---
### Oto (音)
[![GoDoc](https://godoc.org/github.com/hajimehoshi/oto?status.svg)](http://godoc.org/github.com/hajimehoshi/oto)
A low-level library to play sound. This package offers `io.WriteCloser` to play PCM sound.
#### Platforms
* Windows
* macOS
* Linux
* FreeBSD
* Android
* iOS
* (Modern) web browsers (powered by [GopherJS](https://github.com/gopherjs/gopherjs))
#### Prerequisite
##### Linux
libasound2-dev is required. On Ubuntu or Debian, run this command:
```
apt install libasound2-dev
```
In most cases this command must be run as the root user or through the `sudo` command.
##### FreeBSD
OpenAL is required. Install openal-soft:
```
pkg install openal-soft
```
Documentation
[¶](#section-documentation)
---
[Rendered for](https://go.dev/about#build-context)
linux/amd64 windows/amd64 darwin/amd64 js/wasm
### Overview [¶](#pkg-overview)
Package oto offers io.Writer to play sound on multiple platforms.
### Index [¶](#pkg-index)
* [type Player](#Player)
* + [func NewPlayer(sampleRate, channelNum, bytesPerSample, bufferSizeInBytes int) (*Player, error)](#NewPlayer)
* + [func (p *Player) Close() error](#Player.Close)
+ [func (p *Player) SetUnderrunCallback(f func())](#Player.SetUnderrunCallback)
+ [func (p *Player) Write(data []byte) (int, error)](#Player.Write)
### Constants [¶](#pkg-constants)
This section is empty.
### Variables [¶](#pkg-variables)
This section is empty.
### Functions [¶](#pkg-functions)
This section is empty.
### Types [¶](#pkg-types)
####
type [Player](https://github.com/ebitengine/oto/blob/v0.1.0/player.go#L24) [¶](#Player)
```
type Player struct {
// contains filtered or unexported fields
}
```
Player is a PCM (pulse-code modulation) audio player. It implements io.Writer; use the Write method to play samples.
####
func [NewPlayer](https://github.com/ebitengine/oto/blob/v0.1.0/player.go#L48) [¶](#NewPlayer)
```
func NewPlayer(sampleRate, channelNum, bytesPerSample, bufferSizeInBytes [int](/builtin#int)) (*[Player](#Player), [error](/builtin#error))
```
NewPlayer creates a new, ready-to-use Player.
The sampleRate argument specifies the number of samples that should be played during one second.
Usual numbers are 44100 or 48000.
The channelNum argument specifies the number of channels. One channel is mono playback. Two channels are stereo playback. No other values are supported.
The bytesPerSample argument specifies the number of bytes per sample per channel. The usual value is 2. Only values 1 and 2 are supported.
The bufferSizeInBytes argument specifies the size of the Player's buffer, that is, how many bytes the Player can hold before actually playing them. A bigger buffer can reduce the number of Write calls, thus reducing CPU time. A smaller buffer enables more precise timing. The longest delay between when samples are written and when they start playing is equal to the size of the buffer.
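As a rough sketch of how these arguments fit together, the following writes one second of silence to a freshly created player; it assumes the original `github.com/hajimehoshi/oto` import path shown in the badge above:

```
package main

import (
	"log"

	"github.com/hajimehoshi/oto"
)

func main() {
	// 44.1 kHz, stereo, 2 bytes per sample per channel, 8 KiB internal buffer.
	p, err := oto.NewPlayer(44100, 2, 2, 8192)
	if err != nil {
		log.Fatal(err)
	}
	defer p.Close()

	// One second of silence: sampleRate * channelNum * bytesPerSample bytes.
	if _, err := p.Write(make([]byte, 44100*2*2)); err != nil {
		log.Fatal(err)
	}
}
```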
####
func (*Player) [Close](https://github.com/ebitengine/oto/blob/v0.1.0/player.go#L121) [¶](#Player.Close)
```
func (p *[Player](#Player)) Close() [error](/builtin#error)
```
Close closes the Player and frees any resources associated with it. The Player is no longer usable after calling Close.
####
func (*Player) [SetUnderrunCallback](https://github.com/ebitengine/oto/blob/v0.1.0/player.go#L80) [¶](#Player.SetUnderrunCallback)
```
func (p *[Player](#Player)) SetUnderrunCallback(f func())
```
SetUnderrunCallback sets a function which will be called whenever an underrun occurs. This is mostly for debugging and optimization purposes.
An underrun occurs when not enough samples are written to the player in a certain amount of time and thus there's nothing to play. This usually happens when there's too much audio data processing,
when the audio data processing code gets stuck for a while, or when the player's buffer is too small.
Example:
```
player.SetUnderrunCallback(func() {
log.Println("UNDERRUN, YOUR CODE IS SLOW")
})
```
Supported platforms: Linux.
####
func (*Player) [Write](https://github.com/ebitengine/oto/blob/v0.1.0/player.go#L100) [¶](#Player.Write)
```
func (p *[Player](#Player)) Write(data [][byte](/builtin#byte)) ([int](/builtin#int), [error](/builtin#error))
```
Write writes PCM samples to the Player.
The format is as follows:
```
[data] = [sample 1] [sample 2] [sample 3] ...
[sample *] = [channel 1] ...
[channel *] = [byte 1] [byte 2] ...
```
Byte ordering is little endian.
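For instance, one interleaved 16-bit stereo frame could be packed like this (an illustrative helper, not part of the package):

```
// packFrame lays out one stereo, 16-bit frame in little-endian order,
// matching the [channel 1][channel 2] / [byte 1][byte 2] scheme above.
func packFrame(left, right int16) []byte {
	return []byte{
		byte(left), byte(left >> 8),   // channel 1: low byte, then high byte
		byte(right), byte(right >> 8), // channel 2: low byte, then high byte
	}
}
```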
The data is first put into the Player's buffer. Once the buffer is full, Player starts playing the data and empties the buffer.
If the supplied data doesn't fit into the Player's buffer, Write blocks until a sufficient amount of data has been played (or at least started playing) and the remaining unplayed data fits into the buffer.
Note that the Player won't start playing anything until the buffer is full.
injurytools | cran | R | Package ‘injurytools’
September 27, 2023
Title A Toolkit for Sports Injury Data Analysis
Version 1.0.2
Description Sports Injury Data analysis aims to identify and describe the
magnitude of the injury problem, and to gain more insights (e.g. determine
potential risk factors) by statistical modelling approaches. The 'injurytools'
package provides standardized routines and utilities that simplify such
analyses. It offers functions for data preparation, informative visualizations
and descriptive and model-based analyses.
License MIT + file LICENSE
Encoding UTF-8
LazyData true
RoxygenNote 7.2.3
Suggests covr, gridExtra, kableExtra, knitr, RColorBrewer, rmarkdown,
spelling, survival, survminer, coxme, pscl, lme4, MASS,
testthat (>= 3.0.0)
Config/testthat/edition 3
Language en-US
Imports checkmate, dplyr, forcats, ggplot2, lubridate, metR, purrr,
rlang, stats, stringr, tidyr, tidyselect, withr
Depends R (>= 3.5)
VignetteBuilder knitr
URL https://github.com/lzumeta/injurytools,
https://lzumeta.github.io/injurytools/
BugReports https://github.com/lzumeta/injurytools/issues
NeedsCompilation no
Author <NAME> [aut, cre]
(<https://orcid.org/0000-0001-6141-1469>),
<NAME> [aut] (<https://orcid.org/0000-0002-8995-8535>)
Maintainer <NAME> <<EMAIL>>
Repository CRAN
Date/Publication 2023-09-27 21:50:02 UTC
R topics documented:
cut_injd
date2season
gg_injbarplot
gg_injphoto
gg_injprev_polar
gg_injriskmatrix
injd
injprev
injsummary
is_injd
is_injds
prepare_data
raw_df_exposures
raw_df_injuries
season2year
cut_injd Cut the range of the follow-up
Description
Given an injd object, cut the range of the time period such that the limits of the observed dates,
first and last observed dates, are date0 and datef, respectively. It is possible to specify just one
date, i.e. the two dates of the range do not necessarily have to be entered. See Note section.
Usage
cut_injd(injd, date0, datef)
Arguments
injd Prepared data, an injd object.
date0 Starting date of class Date or numeric. If numeric, it should refer to a year (e.g.
date = 2018). Optional.
datef Ending date. Same class as date0. Optional.
Value
An injd object with a shorter follow-up period.
Note
Be aware that by modifying the follow-up period of the cohort, the study design is being altered.
This function should not be used unless there is a strong argument supporting it, and in that case
it should be used with caution.
Examples
# Prepare data
df_injuries <- prepare_inj(
df_injuries0 = raw_df_injuries,
player = "player_name",
date_injured = "from",
date_recovered = "until"
)
df_exposures <- prepare_exp(
df_exposures0 = raw_df_exposures,
player = "player_name",
date = "year",
time_expo = "minutes_played"
)
injd <- prepare_all(
data_exposures = df_exposures,
data_injuries = df_injuries,
exp_unit = "matches_minutes"
)
cut_injd(injd, date0 = 2018)
date2season Get the season
Description
Get the season given the date.
Usage
date2season(date)
Arguments
date A vector of class Date or integer/numeric. If it is integer/numeric, it should
refer to the year in which the season started (e.g. date = 2015 to refer to the
2015/2016 season)
Value
Character specifying the respective competition season given the date. The season (output) follows
this pattern: "2005/2006".
Examples
date <- Sys.Date()
date2season(date)
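Based on the argument description above, numeric input referring to the year in which the season started should behave as follows (an illustrative sketch, not taken from the package manual):
date2season(2015) # expected: "2015/2016"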
gg_injbarplot Plot player’s injury incidence/burden ranking
Description
A bar chart that shows player-wise injury summary statistics, either injury incidence or injury bur-
den, ranked in descending order.
Usage
gg_injbarplot(injds, type = c("incidence", "burden"), title = NULL)
Arguments
injds injds S3 object (see injsummary()).
type A character value indicating whether to plot injury incidence’s or injury burden’s
ranking. One of "incidence" or "burden", respectively.
title Text for the main title.
Value
A ggplot object (to which optionally more layers can be added).
Examples
df_exposures <- prepare_exp(raw_df_exposures, player = "player_name",
date = "year", time_expo = "minutes_played")
df_injuries <- prepare_inj(raw_df_injuries, player = "player_name",
date_injured = "from", date_recovered = "until")
injd <- prepare_all(data_exposures = df_exposures,
data_injuries = df_injuries,
exp_unit = "matches_minutes")
injds <- injsummary(injd)
p1 <- gg_injbarplot(injds, type = "incidence",
title = "Overall injury incidence per player")
p2 <- gg_injbarplot(injds, type = "burden",
title = "Overall injury burden per player")
# install.packages("gridExtra")
# library(gridExtra)
if (require("gridExtra")) {
gridExtra::grid.arrange(p1, p2, nrow = 1)
}
gg_injphoto Plot injuries over the follow-up period
Description
Given an injd S3 object it plots an overview of the injuries sustained by each player/athlete in
the cohort during the follow-up. Each subject timeline is depicted horizontally where the red cross
indicates the exact injury date, the blue circle the recovery date and the bold black line indicates the
duration of the injury (time-loss).
Usage
gg_injphoto(injd, title = NULL, fix = FALSE, by_date = "1 months")
Arguments
injd Prepared data. An injd object.
title Text for the main title.
fix A logical value indicating whether to limit the date range (x scale) to the
maximum observed exposure date, or not to limit the x scale, even though some
recovery dates might be later than the maximum observed exposure date.
by_date Increment of the date sequence at which x-axis tick-marks are to be drawn. An
argument to be passed to base::seq.Date().
Value
A ggplot object (to which optionally more layers can be added).
Examples
df_exposures <- prepare_exp(raw_df_exposures, player = "player_name",
date = "year", time_expo = "minutes_played")
df_injuries <- prepare_inj(raw_df_injuries, player = "player_name",
date_injured = "from", date_recovered = "until")
injd <- prepare_all(data_exposures = df_exposures,
data_injuries = df_injuries,
exp_unit = "minutes")
gg_injphoto(injd, title = "Injury Overview", by_date = "1 years")
gg_injprev_polar Plot polar area diagrams representing available/injured players pro-
portions
Description
Plot the proportions of available and injured players in the cohort, on a monthly or season basis, by
a polar area diagram. Further information on the type of injury may be specified so that the injured
players proportions are disaggregated and reported according to this variable.
Usage
gg_injprev_polar(
injd,
by = c("monthly", "season"),
var_type_injury = NULL,
title = "Polar area diagram\ninjured and available (healthy) players"
)
Arguments
injd Prepared data, an injd object.
by Character, one of "monthly" or "season", specifying the periodicity according to
which to calculate the proportions of available and injured players/athletes.
var_type_injury
Character specifying the name of the column on the basis of which to classify
the injuries and calculate proportions of the injured players. It should refer to a
(categorical) variable that describes the "type of injury". Defaults to NULL.
title Text for the main title.
Value
A ggplot object (to which optionally more layers can be added).
Examples
df_exposures <- prepare_exp(raw_df_exposures, player = "player_name",
date = "year", time_expo = "minutes_played")
df_injuries <- prepare_inj(raw_df_injuries, player = "player_name",
date_injured = "from", date_recovered = "until")
injd <- prepare_all(data_exposures = df_exposures,
data_injuries = df_injuries,
exp_unit = "matches_minutes")
library(ggplot2)
our_palette <- c("seagreen3", "red3", rev(RColorBrewer::brewer.pal(5, "Reds")))
gg_injprev_polar(injd, by = "monthly", var_type_injury = "injury_type",
title = "Polar area diagram\ninjured and available (healthy) players per month") +
scale_fill_manual(values = our_palette)
gg_injprev_polar(injd, by = "monthly",
title = "Polar area diagram\ninjured and available (healthy) players per month") +
scale_fill_manual(values = our_palette)
gg_injriskmatrix Plot risk matrices
Description
Given an injds S3 object, it depicts a risk matrix plot, a graph in which the injury incidence
(frequency) is plotted against the average days lost per injury (consequence). The point estimate
of injury incidence together with its confidence interval is plotted, according to the method used
when running the injsummary() function. On the y-axis, the mean time-loss per injury together with
± IQR (days) is plotted. The number shown inside the point and the point size itself report the
injury burden (days lost per player-exposure time); the bigger the size, the greater the burden. See
the References section.
Usage
gg_injriskmatrix(
injds,
var_type_injury = NULL,
add_contour = TRUE,
title = NULL,
xlab = "Incidence (injuries per _)",
ylab = "Mean time-loss (days) per injury",
errh_height = 1,
errv_width = 0.05
)
Arguments
injds injds S3 object (see injsummary())
var_type_injury
Character specifying the name of the column. A (categorical) variable referring
to the "type of injury" (e.g. muscular/articular/others or overuse/not-overuse
etc.) according to which visualize injury summary statistics (optional, defaults
to NULL).
add_contour Logical, whether or not to add contour lines of the product between injury inci-
dence and mean severity (i.e. ’incidence x average time-loss’), which leads to
injury burden (defaults to TRUE).
title Text for the main title passed to ggplot2::ggtitle().
xlab x-axis label to be passed to ggplot2::xlab().
ylab y-axis label to be passed to ggplot2::ylab().
errh_height Set the height of the horizontal interval whiskers; the height argument for
ggplot2::geom_errorbarh()
errv_width Set the width of the vertical interval whiskers; the width argument for
ggplot2::geom_errorbar()
Value
A ggplot object (to which optionally more layers can be added).
References
<NAME>, <NAME>, <NAME>, et al. International Olympic Committee consensus statement: meth-
ods for recording and reporting of epidemiological data on injury and illness in sport 2020 (includ-
ing STROBE Extension for Sport Injury and Illness Surveillance (STROBE-SIIS)) British Journal
of Sports Medicine 2020; 54:372-389.
<NAME>. (2018). Injury Risk (Burden), Risk Matrices and Risk Contours in Team Sports: A
Review of Principles, Practices and Problems.Sports Medicine, 48(7), 1597–1606.
https://doi.org/10.1007/s40279-018-0913-5
Examples
df_exposures <- prepare_exp(raw_df_exposures, player = "player_name",
date = "year", time_expo = "minutes_played")
df_injuries <- prepare_inj(raw_df_injuries, player = "player_name",
date_injured = "from", date_recovered = "until")
injd <- prepare_all(data_exposures = df_exposures,
data_injuries = df_injuries,
exp_unit = "matches_minutes")
injds <- injsummary(injd)
injds2 <- injsummary(injd, var_type_injury = "injury_type")
gg_injriskmatrix(injds)
gg_injriskmatrix(injds2, var_type_injury = "injury_type", title = "Risk matrix")
injd Example of an injd object
Description
An injd object (S3), called injd, to showcase what this object is like and also to save computa-
tion time in some help files provided by the package. The result of applying prepare_all() to
raw_df_exposures (prepare_exp(raw_df_exposures, ...)) and
raw_df_injuries (prepare_inj(raw_df_injuries, ...)).
Usage
injd
Format
The main data frame in injd gathers information of 28 players and has 108 rows and 19 columns:
player Player identifier (factor)
t0 Follow-up period of the corresponding player, i.e. player’s first observed date, same value for
each player (Date)
tf Follow-up period of the corresponding player, i.e. player’s last observed date, same value for
each player (Date)
date_injured Date of injury of the corresponding observation (if any). Otherwise NA (Date)
date_recovered Date of recovery of the corresponding observation (if any). Otherwise NA (Date)
tstart Beginning date of the corresponding interval in which the observation has been at risk of
injury (Date)
tstop Ending date of the corresponding interval in which the observation has been at risk of injury
(Date)
tstart_minPlay Beginning time. Minutes played in matches until the start of this interval in which
the observation has been at risk of injury (numeric)
tstop_minPlay Ending time. Minutes played in matches until the finish of this interval in which
the observation has been at risk of injury (numeric)
status injury (event) indicator (numeric)
enum an integer indicating the recurrence number, i.e. the k-th injury (event), at which the obser-
vation is at risk
days_lost Number of days lost due to injury (numeric)
player_id Identification number of the football player (factor)
season Season to which this player’s entry corresponds (factor)
games_lost Number of matches lost due to injury (numeric)
injury Injury specification as it appears in https://www.transfermarkt.com, if any; otherwise
NA (character)
injury_acl Whether it is Anterior Cruciate Ligament (ACL) injury or not (NO_ACL); if the interval
corresponds to an injury, NA otherwise (factor)
injury_type A five level categorical variable indicating the type of injury, whether Bone, Concus-
sion, Ligament, Muscle or Unknown; if any, NA otherwise (factor)
injury_severity A four level categorical variable indicating the severity of the injury (if any),
whether Minor (<7 days lost), Moderate ([7, 28) days lost), Severe ([28, 84) days lost) or
Very_severe (>=84 days lost); NA otherwise (factor)
Details
It consists of a data frame plus 4 other attributes: a character specifying the unit of exposure
(unit_exposure); and 3 (auxiliary) data frames: follow_up, data_exposures and data_injuries.
injprev Calculate injury prevalence
Description
Calculate the prevalence of injured players and the proportion of non-injured (available) players in
the cohort, on a monthly or season basis. Further information on the type of injury may be specified
so that the injury-specific prevalences are reported according to this variable.
Usage
injprev(injd, by = c("monthly", "season"), var_type_injury = NULL)
Arguments
injd Prepared data. An injd object.
by Character. One of "monthly" or "season", specifying the periodicity according
to which to calculate the proportions of available and injured players/athletes.
var_type_injury
Character specifying the name of the column on the basis of which to classify
the injuries and calculate proportions of the injured players. Defaults to NULL.
Value
A data frame containing one row for each combination of season, month (optionally) and injury type
(if var_type_injury not specified, then this variable has two categories: Available and Injured).
Plus, three more columns, specifying the proportion of players (prop) satisfying the corresponding
row’s combination of values, i.e. prevalence, how many players were injured at that moment with
the type of injury of the corresponding row (n), over how many players were at that time in the
cohort (n_player). See Note section.
Note
If var_type_injury is specified (and not NULL), it may happen that a player in one month suffers
two different types of injuries. For example, a muscle and a ligament injury. In this case, this two
injuries contribute to the proportions of muscle and ligament injuries for that month, resulting in an
overall proportion that exceeds 100%. Besides, the players in the Available category are those that did
not suffer any injury in that moment (season-month), that is, they were healthy for the whole time that the
period lasted.
References
<NAME>, <NAME>, <NAME>, et al. International Olympic Committee consensus statement: meth-
ods for recording and reporting of epidemiological data on injury and illness in sport 2020 (includ-
ing STROBE Extension for Sport Injury and Illness Surveillance (STROBE-SIIS)) British Journal
of Sports Medicine 2020; 54:372-389.
Examples
df_exposures <- prepare_exp(raw_df_exposures, player = "player_name",
date = "year", time_expo = "minutes_played")
df_injuries <- prepare_inj(raw_df_injuries, player = "player_name",
date_injured = "from", date_recovered = "until")
injd <- prepare_all(data_exposures = df_exposures,
data_injuries = df_injuries,
exp_unit = "matches_minutes")
injprev(injd, by = "monthly", var_type_injury = "injury_type")
injprev(injd, by = "monthly")
injprev(injd, by = "season", var_type_injury = "injury_type")
injprev(injd, by = "season")
injsummary Estimate injury summary statistics
Description
Calculate injury summary statistics such as injury incidence and injury burden (see Bahr et al. 2018),
including total number of injuries, number of days lost due to injury, total time of exposure etc., by
means of a (widely used) Poisson method, negative binomial, zero-inflated Poisson or zero-inflated
negative binomial, on a player and overall basis.
Usage
injsummary(
injd,
var_type_injury = NULL,
method = c("poisson", "negbin", "zinfpois", "zinfnb"),
conf_level = 0.95,
quiet = FALSE
)
Arguments
injd injd S3 object (see prepare_all()).
var_type_injury
Character specifying the name of the column according to which compute injury
summary statistics. It should refer to a (categorical) variable that describes the
"type of injury". Optional, defaults to NULL.
method Method to estimate injury incidence and injury burden. One of "poisson", "neg-
bin", "zinfpois" or "zinfnb"; characters that stand for Poisson method, negative
binomial method, zero-inflated Poisson and zero-inflated negative binomial.
conf_level Confidence level (defaults to 0.95).
quiet Logical, whether or not to silence the warning messages (defaults to FALSE).
Value
A list of two data frames comprising player-wise and overall injury summary statistics, respectively,
that constitute an injds S3 object. Both of them made up of the following columns:
• ninjuries: number of injuries sustained by the player or overall in the team over the given
period specified by the injd data frame.
• ndayslost: number of days lost by the player or overall in the team due to injury over the
given period specified by the injd data frame.
• mean_dayslost: average of number of days lost (i.e. ndayslost) playerwise or overall in the
team.
• median_dayslost: median of number of days lost (i.e. ndayslost) playerwise or overall in
the team.
• iqr_dayslost: interquartile range of number of days lost (i.e. ndayslost) playerwise or
overall in the team.
• totalexpo: total exposure that the player has been under risk of sustaining an injury.
• injincidence: injury incidence, number of injuries per unit of exposure.
• injburden: injury burden, number of days lost per unit of exposure.
• var_type_injury: only if it is specified as an argument to function.
Apart from this column names, they may further include these other columns depending on the
user’s specifications to the function:
• percent_ninjuries: percentage (%) of number of injuries of that type relative to all types of
injuries (if var_type_injury specified).
• percent_dayslost: percentage (%) of number of days lost because of injuries of that type
relative to the total number of days lost because of all types of injuries (if var_type_injury
specified).
• injincidence_sd and injburden_sd: estimated standard deviation, by the specified method
argument, of injury incidence (injincidence) and injury burden (injburden), for the overall
injury summary statistics (the 2nd element of the function output).
• injincidence_lower and injburden_lower: lower bound of, for example, 95% confidence
interval (if conf_level = 0.95) of injury incidence (injincidence) and injury burden (injburden),
for the overall injury summary statistics (the 2nd element of the function output).
• injincidence_upper and injburden_upper: the same (as above item) applies but for the
upper bound.
References
<NAME>., <NAME>., & <NAME>. (2018). Why we should focus on the burden of injuries
and illnesses, not just their incidence. British Journal of Sports Medicine, 52(16), 1018–1021.
https://doi.org/10.1136/bjsports-2017-098160
<NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., ... & <NAME>. (2023).
Football-specific extension of the IOC consensus statement: methods for recording and reporting
of epidemiological data on injury and illness in sport 2020. British journal of sports medicine.
Examples
df_exposures <- prepare_exp(raw_df_exposures, player = "player_name",
date = "year", time_expo = "minutes_played")
df_injuries <- prepare_inj(raw_df_injuries, player = "player_name",
date_injured = "from", date_recovered = "until")
injd <- prepare_all(data_exposures = df_exposures,
data_injuries = df_injuries,
exp_unit = "matches_minutes")
injsummary(injd)
injsummary(injd, var_type_injury = "injury_type")
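The estimation method can also be switched, for instance to a negative binomial model; a small sketch based on the method argument described above:
injsummary(injd, method = "negbin")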
is_injd Check if an object is of class injd
Description
Check if an object x is of class injd.
Usage
is_injd(x)
Arguments
x any R object.
Value
A logical value: TRUE if x inherits from injd class, FALSE otherwise.
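No example is given above, so here is a minimal sketch using the bundled example objects (injd and raw_df_injuries, both documented in this manual):
is_injd(injd) # TRUE
is_injd(raw_df_injuries) # FALSE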
is_injds Check if an object is of class injds
Description
Check if an object x is of class injds.
Usage
is_injds(x)
Arguments
x any R object.
Value
A logical value: TRUE if x inherits from injds class, FALSE otherwise.
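Similarly, a small sketch, assuming injsummary() is used to build the injds object as documented above:
injds <- injsummary(injd)
is_injds(injds) # TRUE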
prepare_data Prepare data in a standardized format
Description
These are the data preprocessing functions provided by the injurytools package, which involve:
1. setting exposure and injury data in a standardized format and
2. integrating both sources of data into an adequate data structure.
prepare_inj() and prepare_exp() set standardized names and proper classes to the (key) columns
in injury and exposure data, respectively. prepare_all() integrates both, standardized injury and
exposure data sets, and convert them into an injd S3 object that has an adequate structure for further
statistical analyses. See the Prepare Sports Injury Data vignette for details.
Usage
prepare_inj(
df_injuries0,
player = "player",
date_injured = "date_injured",
date_recovered = "date_recovered"
)
prepare_exp(
df_exposures0,
player = "player",
date = "date",
time_expo = "time_expo"
)
prepare_all(
data_exposures,
data_injuries,
exp_unit = c("minutes", "hours", "days", "matches_num", "matches_minutes",
"activity_days", "seasons")
)
Arguments
df_injuries0 A data frame containing injury information, with columns referring to the player
name/id, date of injury and date of recovery (as minimal data).
player Character referring to the column name where player information is stored.
date_injured Character referring to the column name where the information about the date of
injury is stored.
date_recovered Character referring to the column name where the information about the date of
recovery is stored.
df_exposures0 A data frame containing exposure information, with columns referring to the
player name/id, date of exposure and the total time of exposure of the corre-
sponding data entry (as minimal data).
date Character referring to the column name where the exposure date information is
stored. Besides, the column must be of class Date or integer/numeric. If it is
integer/numeric, it should refer to the year in which the season started (e.g.
date = 2015 to refer to the 2015/2016 season).
time_expo Character referring to the column name where the information about the time of
exposure in that corresponding date is stored.
data_exposures Exposure data frame with standardized column names, in the same fashion that
prepare_exp() returns.
data_injuries Injury data frame with standardized column names, in the same fashion that
prepare_inj() returns.
exp_unit Character defining the unit of exposure time ("minutes" the default).
Value
prepare_inj() returns a data frame in which the key columns in injury data are standardized and
have a proper format.
prepare_exp() returns a data frame in which the key columns in exposure data are standardized
and have a proper format.
prepare_all() returns the injd S3 object that contains all the necessary information and a proper
data structure to perform further statistical analyses (e.g. calculate injury summary statistics, visu-
alize injury data).
• If exp_unit is "minutes" (the default), the columns tstart_min and tstop_min are created
which specify the time to event (injury) values, the starting and stopping time of the interval,
respectively. That is the training time in minutes, that the player has been at risk, until an
injury (or censorship) has occurred. For other choices, tstart_x and tstop_x are also created
according to the exp_unit indicated (x, one of: min, h, match, minPlay, d, acd or s). These
columns will be useful for survival analysis routines. See Note section.
• It also creates the days_lost column based on the difference between date_recovered and
date_injured in days. If this column already exists in the raw data, it is overridden.
Note
Depending on the unit of exposure, tstart_x and tstop_x columns might have same values (e.g.
if exp_unit = "matches_num" and the player has not played any match between the corresponding
period of time). Please be aware of this before performing any survival analysis related task.
Examples
df_injuries <- prepare_inj(df_injuries0 = raw_df_injuries,
player = "player_name",
date_injured = "from",
date_recovered = "until")
df_exposures <- prepare_exp(df_exposures0 = raw_df_exposures,
player = "player_name",
date = "year",
time_expo = "minutes_played")
injd <- prepare_all(data_exposures = df_exposures,
data_injuries = df_injuries,
exp_unit = "matches_minutes")
head(injd)
class(injd)
str(injd, 1)
raw_df_exposures Minimal example of exposure data
Description
An example of a player exposure data set that contains minimum required exposure information as
well as other player- and match-related variables. It includes Liverpool Football Club male’s first
team players’ exposure data, exposure measured as (number or minutes of) matches played, over
two consecutive seasons, 2017-2018 and 2018-2019. Each row refers to player-season. These data
have been scraped from the https://www.transfermarkt.com/ website using self-defined R code
with rvest and xml2 packages.
Usage
raw_df_exposures
Format
A data frame with 42 rows corresponding to 28 football players and 16 variables:
player_name Name of the football player (factor)
player_id Identification number of the football player (factor)
season Season to which this player’s entry corresponds (factor)
year Year in which each season started (numeric)
matches_played Matches played by the player in each season (numeric)
minutes_played Minutes played by the player in each season (numeric)
liga Name of the ligue where the player played in each season (factor)
club_name Name of the club to which the player belongs in each season (factor)
club_id Identification number of the club to which the player belongs in each season (factor)
age Age of the player in each season (numeric)
height Height of the player in m (numeric)
place Place of birth of each player (character)
citizenship Citizenship of the player (factor)
position Position of the player on the pitch (factor)
foot Dominant leg of the player. One of both, left or right (factor)
goals Number of goals scored by the player in that season (numeric)
assists Number of assists provided by the player in that season (numerical)
yellows Number of the yellow cards received by the player in that season (numeric)
reds Number of the red cards received by the player in that season (numeric)
Note
This data frame is provided for illustrative purposes. We warn that the data might not be accurate; there
might be mismatches and incompleteness with respect to what actually occurred. As such, its use cannot be
recommended for epidemiological research (see also Hoenig et al., 2022).
Source
https://www.transfermarkt.com/
References
<NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., & <NAME>. (2022).
Analysis of more than 20,000 injuries in European professional football by using a citizen science-
based approach: An opportunity for epidemiological research?. Journal of science and medicine in
sport, 25(4), 300-305.
raw_df_injuries Minimal example of injury data
Description
An example of an injury data set containing minimum required injury information as well as
other further injury-related variables. It includes Liverpool Football Club male’s first team play-
ers’ injury data. Each row refers to player-injury. These data have been scraped from https:
//www.transfermarkt.com/ website using self-defined R code with rvest and xml2 packages.
Usage
raw_df_injuries
Format
A data frame with 82 rows corresponding to 23 players and 11 variables:
player_name Name of the football player (factor)
player_id Identification number of the football player (factor)
season Season to which this player’s entry corresponds (factor)
from Date of the injury of each data entry (Date)
until Date of the recovery of each data entry (Date)
days_lost Number of days lost due to injury (numeric)
games_lost Number of matches lost due to injury (numeric)
injury Injury specification as it appears in https://www.transfermarkt.com (character)
injury_acl Whether it is Anterior Cruciate Ligament (ACL) injury or not (NO_ACL)
injury_type A five level categorical variable indicating the type of injury, whether Bone, Concus-
sion, Ligament, Muscle or Unknown; if any, NA otherwise (factor)
injury_severity A four level categorical variable indicating the severity of the injury (if any),
whether Minor (<7 days lost), Moderate ([7, 28) days lost), Severe ([28, 84) days lost) or
Very_severe (>=84 days lost); NA otherwise (factor)
Note
This data frame is provided for illustrative purposes. We warn that the data might not be accurate; there
might be mismatches and incompleteness with respect to what actually occurred. As such, its use cannot be
recommended for epidemiological research (see also Hoenig et al., 2022).
Source
https://www.transfermarkt.com/
References
<NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., & <NAME>. (2022).
Analysis of more than 20,000 injuries in European professional football by using a citizen science-
based approach: An opportunity for epidemiological research?. Journal of science and medicine in
sport, 25(4), 300-305.
season2year Get the year
Description
Get the year given the season.
Usage
season2year(season)
Arguments
season Character/factor specifying the season. It should follow the pattern "xxxx/yyyy",
e.g. "2005/2006".
Value
Given the season, it returns the year (in numeric) in which the season started.
Examples
season <- "2022/2023"
season2year(season)
karyotapR | cran | R | Package ‘karyotapR’
September 7, 2023
Title DNA Copy Number Analysis for Genome-Wide Tapestri Panels
Version 1.0.1
Description Analysis of DNA copy number in single cells using
custom genome-wide targeted DNA sequencing panels for the Mission Bio
Tapestri platform. Users can easily parse, manipulate, and visualize
datasets produced from the automated 'Tapestri Pipeline', with support for
normalization, clustering, and copy number calling. Functions are also
available to deconvolute multiplexed samples by genotype and parsing
barcoded reads from exogenous lentiviral constructs.
License MIT + file LICENSE
Encoding UTF-8
RoxygenNote 7.2.3
URL https://github.com/joeymays/karyotapR,
http://joeymays.xyz/karyotapR/
BugReports https://github.com/joeymays/karyotapR/issues
biocViews
Imports circlize, cli, ComplexHeatmap, dbscan, dplyr, fitdistrplus,
GenomicRanges, ggplot2, gtools, IRanges, magrittr, methods,
purrr, rhdf5, rlang, S4Vectors, stats, SummarizedExperiment,
tibble, tidyr, umap, viridisLite
Depends R (>= 3.6), SingleCellExperiment
Suggests Biostrings, knitr, rmarkdown, Rsamtools, testthat (>= 3.0.0)
Config/testthat/edition 3
NeedsCompilation no
Author <NAME> [aut, cre, cph] (<https://orcid.org/0000-0003-4903-938X>)
Maintainer <NAME> <<EMAIL>>
Repository CRAN
Date/Publication 2023-09-07 08:50:05 UTC
R topics documented:
assayBoxPlot
assayHeatmap
calcCopyNumber
calcGMMCopyNumber
calcNormCounts
calcSmoothCopyNumber
callSampleLables
corner
countBarcodedReads
createTapestriExperiment
Custom Slot Getters and Setters
getChrOrder
getCytobands
getGMMBoundaries
getTidyData
moveNonGenomeProbes
newTapestriExperimentExample
PCAKneePlot
plotCopyNumberGMM
reducedDimPlot
runClustering
runPCA
runUMAP
TapestriExperiment-class
assayBoxPlot Generate a box plot from assay data
Description
Draws box plot of data from indicated TapestriExperiment assay slot. This is especially useful
for visualizing altExp count data, such as counts from probes on chrY or barcode probe counts.
Usage
assayBoxPlot(
TapestriExperiment,
alt.exp = NULL,
assay = NULL,
log.y = TRUE,
split.features = FALSE,
split.x.by = NULL,
split.y.by = NULL
)
Arguments
TapestriExperiment
TapestriExperiment object
alt.exp Character, altExp to plot. NULL (default) uses the top-level experiment in TapestriExperiment.
assay Character, assay to plot. NULL (default) selects the first assay listed in TapestriExperiment.
log.y Logical, if TRUE, scales data using log1p(). Default TRUE.
split.features Logical, if TRUE, splits plot by rowData features if slot has more than one row
feature/probe. Default FALSE.
split.x.by Character, colData column to use for X-axis categories. Default NULL.
split.y.by Character, colData column to use for Y-axis splitting/faceting. Default NULL.
Value
ggplot object, box plot
See Also
ggplot2::geom_boxplot()
Examples
tap.object <- newTapestriExperimentExample() # example TapestriExperiment object
assayBoxPlot(tap.object, alt.exp = "chrYCounts", split.features = TRUE, split.x.by = "test.cluster")
assayHeatmap Generate heatmap of assay data
Description
Creates a heatmap of data from the indicated TapestriObject assay slot using the ComplexHeatmap
package. Heatmaps are generated as transposed (i.e. x-y flipped) representations of the assay ma-
trix. Additional ComplexHeatmap::Heatmap() parameters can be passed in to overwrite defaults.
Usage
assayHeatmap(
TapestriExperiment,
alt.exp = NULL,
assay = NULL,
split.col.by = NULL,
split.row.by = NULL,
annotate.row.by = NULL,
color.preset = NULL,
color.custom = NULL,
...
)
Arguments
TapestriExperiment
TapestriExperiment object
alt.exp Character, altExp slot to use. NULL (default) uses top-level/main experiment.
assay Character, assay slot to use. NULL (default) uses first-indexed assay (usually
"counts").
split.col.by Character, rowData column to split columns by, i.e. "chr" or "arm". Default
NULL.
split.row.by Character, colData column to split rows by, i.e. "cluster". Default NULL.
annotate.row.by
Character, colData column to use for block annotation. Default NULL.
color.preset Character, color preset to use for heatmap color, either "copy.number" or "copy.number.denoise"
(see Details). Overrides color.custom. NULL (default) uses default ComplexHeatmap
coloring.
color.custom Color mapping function given by circlize::colorRamp2(). color.preset
must be NULL.
... Additional parameters to pass to ComplexHeatmap::Heatmap().
Value
A ComplexHeatmap object
Options for color.preset
"copy.number":
Blue-white-red gradient from 0-2-4. 4 to 8+ is red-black gradient.
circlize::colorRamp2(c(0,1,2,3,4,8),
c('#2c7bb6','#abd9e9','#ffffff','#fdae61','#d7191c', "black"))
"copy.number.denoise":
Similar to ’copy.number’ present, but white range is from 1.5-2.5 to reduce the appearance of
noise around diploid cells.
circlize::colorRamp2(c(0,1,1.5,2,2.5,3,4,8),
c('#2c7bb6','#abd9e9','#ffffff','#ffffff','#ffffff','#fdae61','#d7191c', "black"))
See Also
Heatmap
Examples
tap.object <- newTapestriExperimentExample() # example TapestriExperiment object
assayHeatmap(tap.object,
assay = "counts", split.row.by = "test.cluster",
annotate.row.by = "test.cluster", split.col.by = "chr"
)
calcCopyNumber Calculate relative copy number value for each cell-probe unit using
reference sample
Description
calcCopyNumber() transforms the normalized count matrix normcounts of a TapestriExperiment
object into copy number values based on a set of reference cell barcodes and given copy number
value (e.g. 2 for diploid). This is practically used to set the median copy number of a usually
diploid reference cell population to a known copy number value, e.g. 2, and then calculate the
copy number for all the cells relative to that reference population. This occurs individually for each
probe, such that the result is one copy number value per cell barcode per probe (cell-probe unit).
control.copy.number is a data.frame lookup table used to indicate the copy number value and
cell barcodes to use as the reference. A template for control.copy.number can be generated us-
ing generateControlCopyNumberTemplate(), which will have a row for each chromosome arm
represented in TapestriExperiment.
The control.copy.number data.frame should include 3 columns named arm, copy.number, and
sample.label. arm is chromosome arm names from chr1p through chrXq, copy.number is the
reference copy number value (2 = diploid), and sample.label is the value corresponding to the
colData column given in sample.feature to indicate the set of reference cell barcodes to use to
set the copy number. This is best used in a workflow where the cells are clustered first into their
respective samples, and then one cluster is used as the reference population for the other clusters. This
also allows for the baseline copy number to be set for each chromosome arm individually in the
case where the reference population is not completely diploid.
Usage
calcCopyNumber(
TapestriExperiment,
control.copy.number,
sample.feature = "cluster",
remove.bad.probes = FALSE
)
generateControlCopyNumberTemplate(
TapestriExperiment,
copy.number = 2,
sample.feature.label = NA
)
Arguments
TapestriExperiment
TapestriExperiment object.
control.copy.number
data.frame with columns arm, copy.number, and sample.label. See details.
sample.feature Character, colData column to use for subsetting cell.barcodes. Default "clus-
ter".
remove.bad.probes
Logical, if TRUE, probes with median normalized counts = 0 are removed from
the returned TapestriExperiment. If FALSE (default), probes with median
normalized counts = 0 throw error and stop function.
copy.number Numeric, sets all entries of copy.number column in output. Default 2 (diploid).
sample.feature.label
Character, sets all entries of sample.label column in output.
Value
TapestriExperiment object with cell-probe copy number values in copyNumber assay slot.
data.frame with 3 columns named arm, copy.number, and sample.label
Functions
• generateControlCopyNumberTemplate(): generates a data.frame template for control.copy.number
in calcCopyNumber().
Examples
tap.object <- newTapestriExperimentExample() # example TapestriExperiment object
tap.object <- calcNormCounts(tap.object)
control.copy.number <- generateControlCopyNumberTemplate(tap.object,
copy.number = 2,
sample.feature.label = "cellline1"
)
tap.object <- calcCopyNumber(tap.object,
control.copy.number,
sample.feature = "test.cluster"
)
tap.object <- newTapestriExperimentExample() # example TapestriExperiment object
control.copy.number <- generateControlCopyNumberTemplate(tap.object,
copy.number = 2,
sample.feature.label = "cellline1"
)
calcGMMCopyNumber Call copy number for each cell-chromosome using Gaussian mixture
models
Description
Uses control cells to simulate expected smoothed copy number distributions for all chromosomes
across each of model.components (copy number level). Then uses the distributions to calculate
posterior probabilities for each cell-chromosome belonging to each of copy number level. Each
cell-chromosome is assigned the copy number value for which its posterior probability is highest.
This is done for both whole chromosomes and chromosome arms.
Usage
calcGMMCopyNumber(
TapestriExperiment,
cell.barcodes,
control.copy.number,
model.components = 1:5,
model.priors = NULL,
...
)
Arguments
TapestriExperiment
TapestriExperiment object.
cell.barcodes character, vector of cell barcodes to fit GMM. Usually corresponds to diploid
control.
control.copy.number
data.frame with columns arm and copy.number, indicating of known copy
number of cells in cell.barcodes.
model.components
numeric, vector of copy number GMM components to calculate, default 1:5 (for
copy number = 1, 2, 3, 4, 5).
model.priors numeric, relative prior probabilities for each GMM component. If NULL (de-
fault), assumes equal priors.
... Additional parameters to be passed to internal functions.
Value
TapestriExperiment object with copy number calls based on the calculated GMMs, saved to
gmmCopyNumber slot of smoothedCopyNumberByChr and smoothedCopyNumberByArm altExps. GMM
parameters for each feature.id are saved to the metadata slot.
Examples
tap.object <- newTapestriExperimentExample() # example TapestriExperiment object
tap.object <- calcNormCounts(tap.object)
control.copy.number <- generateControlCopyNumberTemplate(tap.object,
copy.number = 2,
sample.feature.label = "cellline1"
)
tap.object <- calcCopyNumber(tap.object,
control.copy.number,
sample.feature = "test.cluster"
)
tap.object <- calcSmoothCopyNumber(tap.object)
tap.object <- calcGMMCopyNumber(tap.object,
cell.barcodes = colnames(tap.object),
control.copy.number = control.copy.number,
model.components = 1:5
)
calcNormCounts Normalize raw counts
Description
Normalizes raw counts from counts slot in TapestriExperiment and returns the object with nor-
malized counts in the normcounts slot. Also calculates the standard deviation for each probe using
normalized counts and adds it to rowData.
Usage
calcNormCounts(TapestriExperiment, method = "mb", scaling.factor = NULL)
Arguments
TapestriExperiment
TapestriExperiment object.
method Character, normalization method. Default "mb".
scaling.factor Numeric, optional number to scale normalized counts if method == "libNorm".
Default NULL.
Details
"mb" method performs the same normalization scheme as in Mission Bio’s mosaic package for
python: Counts for each barcode are normalized relative to their barcode’s mean and probe counts
are normalized relative to their probe’s median. "libNorm" method preforms library size normal-
ization, returning the proportion of counts of each probe within a cell. The proportion is multiplied
by scaling.factor if provided.
Value
TapestriExperiment object with normalized counts added to normcounts slot.
Examples
tap.object <- newTapestriExperimentExample() # example TapestriExperiment object
tap.object <- calcNormCounts(tap.object)
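Based on the Details above, library-size normalization with a scaling factor would look like this (an illustrative sketch continuing the example):
tap.object <- calcNormCounts(tap.object, method = "libNorm", scaling.factor = 10000)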
calcSmoothCopyNumber Smooth copy number values across chromosomes and chromosome
arms
Description
calcSmoothCopyNumber() takes copyNumber slot values for probes on a chromosome and smooths
them by median (default) for each chromosome and chromosome arm, resulting in one copy number
value per chromosome and chromosome arm for each cell barcode. Cell-chromosome values are
then discretized into integers by conventional rounding (1.5 <= x < 2.5 rounds to 2). Smoothed
copy number and discretized smoothed copy number values are stored as smoothedCopyNumber
and discreteCopyNumber assays, in altExp slots smoothedCopyNumberByChr for chromosome-
level smoothing, and smoothedCopyNumberByArm for chromosome arm-level smoothing.
Usage
calcSmoothCopyNumber(TapestriExperiment, method = "median")
Arguments
TapestriExperiment
TapestriExperiment object.
method Character, smoothing method: median (default) or mean.
Value
TapestriExperiment with smoothedCopyNumber and discreteCopyNumber assays in altExp
slots smoothedCopyNumberByChr and smoothedCopyNumberByArm.
Examples
tap.object <- newTapestriExperimentExample() # example TapestriExperiment object
tap.object <- calcNormCounts(tap.object)
control.copy.number <- generateControlCopyNumberTemplate(tap.object,
copy.number = 2,
sample.feature.label = "cellline1"
)
tap.object <- calcCopyNumber(tap.object,
control.copy.number,
sample.feature = "test.cluster"
)
tap.object <- calcSmoothCopyNumber(tap.object)
callSampleLables Call sample labels based on feature counts
Description
callSampleLables() assigns labels (stored as colData column) to cells using feature count data in
colData. This is most useful for assigning barcode labels based on barcoded reads (see countBarcodedReads). For method = max, labels are dictated by whichever input.features column has the
highest number of counts. By default, ties are broken by choosing whichever label has the lowest
index position (ties.method = "first"). Samples with 0 counts for all input.features columns
are labeled according to neg.label. If only one feature column is used, labels are assigned to cells
with counts > min.count.threshold, and neg.label otherwise.
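A bare-bones version of the method = "max" logic on toy counts (the column names and values below are assumptions; callSampleLables() additionally handles the single-feature and return.table cases):

feature.counts <- data.frame(gRNA1 = c(5, 0, 2, 0),   # toy counts from colData
                             gRNA2 = c(1, 0, 2, 7))

calls <- colnames(feature.counts)[max.col(as.matrix(feature.counts),
                                          ties.method = "first")]
calls[rowSums(feature.counts) == 0] <- NA             # neg.label for all-zero cells
calls                                                 # "gRNA1" NA "gRNA1" "gRNA2"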
Usage
callSampleLables(
TapestriExperiment,
input.features,
output.feature = "sample.call",
return.table = FALSE,
neg.label = NA,
method = "max",
ties.method = "first",
min.count.threshold = 1
)
Arguments
TapestriExperiment
A TapestriExperiment object.
input.features Character vector, column names in colData to evaluate.
output.feature Character, column name to use for the call output. Default "sample.call".
return.table Logical, if TRUE, returns a data.frame of the sample.calls. If FALSE (default),
returns updated TapestriExperiment object.
neg.label Character, label for samples with no counts. Default NA.
method Character, call method. Only "max" currently supported, calls based on whichever
input.features column has the most counts.
ties.method Character, passed to max.col() indicating how to break ties. Default "first".
min.count.threshold
Numeric, minimum number of counts per cell to use for call. Default 1.
Value
A TapestriExperiment object with sample calls added to the colData column specified by output.feature. If return.table == TRUE, a data.frame of sample calls.
Examples
tap.object <- newTapestriExperimentExample() # example TapestriExperiment object
colData(tap.object)$gRNA1 <- 2 # example barcode counts
colData(tap.object)$gRNA2 <- 10 # example barcode counts
tap.object <- callSampleLables(tap.object,
input.features = c("gRNA1", "gRNA2"),
output.feature = "sample.grna"
)
corner Print the top-left corner of a matrix
Description
Outputs up to 5 rows and columns of the input matrix object (with rownames and colnames) to get
a quick look without filling the console.
Usage
corner(input.mat)
Arguments
input.mat A matrix-like object.
Value
A matrix-like object matching input class, subset to a maximum of 5 rows and columns.
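Conceptually this is just a head()-like subset in both dimensions; a minimal equivalent for a plain matrix:

m <- matrix(1:100, nrow = 10,
            dimnames = list(paste0("r", 1:10), paste0("c", 1:10)))
m[seq_len(min(5, nrow(m))), seq_len(min(5, ncol(m)))]   # top-left 5 x 5 block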
Examples
tap.object <- newTapestriExperimentExample() # example TapestriExperiment object
corner(assay(tap.object, "counts"))
countBarcodedReads Get read counts from barcoded reads
Description
countBarcodedReads() and countBarcodedReadsFromContig() match exogenous DNA barcode
sequences to their associated cell barcodes and save them to the colData (cell barcode metadata) of
TapestriExperiment. countBarcodedReads() is a shortcut for countBarcodedReadsFromContig(),
allowing the user to specify ’gRNA’ or ’barcode’ to use the grnaCounts or barcodeCounts altExp
slots. The entries in the barcode.lookup table do not have to be present in the sample, allowing
users to keep one master table/file of available barcode sequences for use in all experiments. The
Rsamtools and Biostrings packages must be installed to use these functions.
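The matching step itself boils down to counting approximate occurrences of each lookup sequence; a toy sketch of that step using Biostrings (the read sequences and barcode table below are invented, and the real functions additionally pull read sequences and the "RG" cell-barcode tag from the BAM via Rsamtools):

library(Biostrings)

reads <- DNAStringSet(c("ACGTTTGACCTGA", "GGGACGTTTGTTT", "AAAAAAAAAAAAA"))
barcode.lookup <- data.frame(id = c("bc1", "bc2"),
                             seq = c("ACGTTTG", "CCCCCCC"))

hits <- sapply(barcode.lookup$seq, function(s) {
  vcountPattern(s, reads, max.mismatch = 2, with.indels = FALSE) > 0
})
colnames(hits) <- barcode.lookup$id
colSums(hits)   # number of reads matching each barcode sequence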
Usage
countBarcodedReads(
TapestriExperiment,
bam.file,
barcode.lookup,
probe,
return.table = FALSE,
max.mismatch = 2,
with.indels = FALSE,
...
)
countBarcodedReadsFromContig(
bam.file,
barcode.lookup,
contig,
cell.barcode.tag = "RG",
max.mismatch = 2,
with.indels = FALSE
)
Arguments
TapestriExperiment
TapestriExperiment object
bam.file File path of BAM file. .bai BAM index file must be in the same location (can
be generated using Rsamtools::indexBam()).
barcode.lookup data.frame where the first column is the barcode identifier/name and the second column is the DNA sequence. Headers are ignored.
probe Character, either "gRNA" or "barcode" to parse counts from grnaCounts or
barcodeCounts altExp slots, respectively.
return.table Logical, if TRUE, returns table of read counts per barcode. If FALSE, returns
TapestriExperiment. Default FALSE.
max.mismatch Numeric, the maximum number of mismatching letters allowed. Default 2.
with.indels If TRUE, then indels are allowed. Default FALSE.
... Arguments to pass on to countBarcodedReadsFromContig().
contig Character, contig or chromosome name to search for barcodes in. Can be a
vector of more than one contig to expand search space.
cell.barcode.tag
Character of length 2, indicates cell barcode field in BAM, specified by Tapestri
pipeline (currently "RG"). Default "RG".
Value
TapestriExperiment with barcoded read counts added to colData.
A data.frame of read counts for each specified barcode.
See Also
Rsamtools::indexBam()
Biostrings::matchPattern()
Examples
## Not run:
counts <- countBarcodedReads(
TapestriExperiment,
bam.file, barcode.lookup, "gRNA"
)
## End(Not run)
## Not run:
counts <- countBarcodedReadsFromContig(bam.file, barcode.lookup, "virus_ref2")
## End(Not run)
createTapestriExperiment
Create TapestriExperiment object from Tapestri Pipeline output
Description
createTapestriExperiment() constructs a TapestriExperiment container object from data stored
in the .h5 file output by the Tapestri Pipeline. Read count matrix (probe x cell barcode) is stored
in the "counts" assay slot of the top-level experiment. Allele frequency matrix (variant x cell barcode) is stored in the "alleleFrequency" assay slot of the "alleleFrequency" altExp (alternative
experiment) slot. panel.id is an optional shortcut to set special probe identities for specific custom
panels.
Usage
createTapestriExperiment(
h5.filename,
panel.id = NULL,
get.cytobands = TRUE,
genome = "hg19",
move.non.genome.probes = TRUE,
filter.variants = TRUE,
verbose = TRUE
)
Arguments
h5.filename File path for .h5 file from Tapestri Pipeline output.
panel.id Character, Tapestri panel ID, either CO261, CO293, CO610, or NULL. Initializes
barcodeProbe and grnaProbe slots. Default NULL.
get.cytobands Logical, if TRUE (default), retrieve and add chromosome cytobands and chromosome arms to rowData (probe metadata).
genome Character, reference genome for pulling cytoband coordinates and chromosome
arm labels (see getCytobands()). Only "hg19" (default) is currently supported.
move.non.genome.probes
Logical, if TRUE (default), move counts and metadata from non-genomic probes
to altExp slots (see moveNonGenomeProbes()).
filter.variants
Logical, if TRUE (default), only stores variants that have passed Tapestri Pipeline
filters.
verbose Logical, if TRUE (default), metadata is output in message text.
Value
TapestriExperiment object containing data from Tapestri Pipeline output.
Panel ID Shortcuts
panel.id is an optional shortcut to set the barcodeProbe and grnaProbe slots in TapestriExperiment
for specific custom Tapestri panels.
CO261:
• barcodeProbe = "not specified"
• grnaProbe = "not specified"
CO293:
• barcodeProbe = "AMPL205334"
• grnaProbe = "AMPL205666"
CO610:
• barcodeProbe = "CO610_AMP351"
• grnaProbe = "CO610_AMP350"
Automatic Operations
Raw Data:
Read count and allele frequency matrices are imported to their appropriate slots as described
above. filter.variants == TRUE (default) only loads allele frequency variants that have passed
internal filters in the Tapestri Pipeline. This greatly reduces the number of variants from tens of
thousands to hundreds of likely more consequential variants, saving RAM and reducing operation
time.
Metadata:
Several metadata sets are copied or generated and then stored in the appropriate TapestriExperiment
slot during construction.
• Probe panel metadata stored in the .h5 file are copied to rowData.
• Basic QC stats (e.g. total number of reads per probe) are added to rowData.
• Basic QC stats (e.g. total number of reads per cell barcode) are added to colData.
• Experiment-level metadata is stored in metadata.
Optional Operations
Two additional major operations are called by default during TapestriExperiment construction
for convenience. get.cytobands == TRUE (default) calls getCytobands(), which retrieves the
chromosome arm and cytoband for each probe based on stored positional data and saves them in
rowData. Some downstream smoothing and plotting functions may fail if chromosome arms are
not present in rowData, so this generally should always be run. move.non.genome.probes calls
moveNonGenomeProbes(), which moves probes corresponding to the specified tags to altExp (alternative experiment) slots in the TapestriExperiment object. The exception is probes on chromosome Y; CNVs of chrY are more rare, so we move it to an altExp for separate analysis. Probes
corresponding to the barcodeProbe and grnaProbe slots, which are specified by the panel.id
shortcut or manually (see Custom Slot Getters and Setters), are automatically moved to altExp by
this operation as well. If such probes are not present, the function will only generate a warning
message, so it is always safe (and recommended) to run by default. Any remaining probes that
are not targeting a human chromosome and are not specified by the shortcut tags are moved to the
otherProbeCounts slot.
See Also
moveNonGenomeProbes(), getCytobands(), which are run as part of this function by default.
Examples
## Not run:
tapExperiment <- createTapestriExperiment("myh5file.h5", "CO293")
## End(Not run)
Custom Slot Getters and Setters
Getter and Setter functions for TapestriExperiment slots
Description
Get and set custom slots in TapestriExperiment. Slots include barcodeProbe for a sample barcode probe ID and grnaProbe for a gRNA-associated probe ID. These are used as shortcuts for moveNonGenomeProbes() and countBarcodedReads(). gmmParams holds parameters and metadata for GMM copy number calling models.
Usage
barcodeProbe(x)
## S4 method for signature 'TapestriExperiment'
barcodeProbe(x)
barcodeProbe(x) <- value
## S4 replacement method for signature 'TapestriExperiment'
barcodeProbe(x) <- value
grnaProbe(x)
## S4 method for signature 'TapestriExperiment'
grnaProbe(x)
grnaProbe(x) <- value
## S4 replacement method for signature 'TapestriExperiment'
grnaProbe(x) <- value
gmmParams(x)
## S4 method for signature 'TapestriExperiment'
gmmParams(x)
Arguments
x A TapestriExperiment object
value Character, probe ID to assign to slot
TapestriExperiment
A TapestriExperiment object
Value
For the getter methods barcodeProbe, grnaProbe, and gmmParams, the value of the given slot is
returned. For the setter methods barcodeProbe and grnaProbe, a TapestriExperiment object is
returned with modifications made to the given slot.
Functions
• barcodeProbe(TapestriExperiment): barcodeProbe getter
• barcodeProbe(TapestriExperiment) <- value: barcodeProbe setter
• grnaProbe(TapestriExperiment): grnaProbe getter
• grnaProbe(TapestriExperiment) <- value: grnaProbe setter
• gmmParams(TapestriExperiment): gmmParams getter
Examples
tap.object <- newTapestriExperimentExample() # example TapestriExperiment object
barcodeProbe(tap.object) <- "Probe01"
barcodeProbe(tap.object)
grnaProbe(tap.object) <- "Probe02"
grnaProbe(tap.object)
gmmParams(tap.object)
getChrOrder Get chromosome order from a string of chromosome/contig names
Description
getChrOrder() takes a string of chromosome or contig names and returns the indices of the string
in typical chromosome order, i.e. 1 through 22, X, Y. Contig names that do not match 1:22, X, or Y
are sorted numerically and alphabetically (with numbers coming first), and added to the end of the
order. The output string can then be used to sort the input string into typical chromosome order.
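The ordering logic can be reproduced in a few lines of base R (a sketch, not the package's implementation):

chr.vector <- c(1, "virus", 5, "X", 22, "plasmid", "Y")
canonical <- c(as.character(1:22), "X", "Y")

known <- match(chr.vector, canonical)                 # NA for non-standard contigs
extra <- which(is.na(known))
chr.order <- c(order(known, na.last = NA),            # 1:22, X, Y first
               extra[order(chr.vector[extra])])       # remaining contigs last
chr.vector[chr.order]                                 # "1" "5" "22" "X" "Y" "plasmid" "virus"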
Usage
getChrOrder(chr.vector)
Arguments
chr.vector Character vector of chromosome or contig names.
Value
A numerical vector of the input vector's indices in chromosome order.
Examples
chr.order <- getChrOrder(c(1, "virus", 5, "X", 22, "plasmid", "Y"))
ordered.vector <- c(1, "virus", 5, "X", 22, "plasmid", "Y")[chr.order]
getCytobands Add chromosome cytobands and chromosome arms to
TapestriExperiment
Description
getCytobands() retrieves the chromosome arm and cytoband for each probe based on stored positional data and saves them in rowData. This is run automatically as part of createTapestriExperiment().
Note: Some downstream smoothing and plotting functions may fail if chromosome arms are not
present in rowData.
Usage
getCytobands(TapestriExperiment, genome = "hg19", verbose = TRUE)
Arguments
TapestriExperiment
TapestriExperiment object.
genome Character, reference genome to use. Only hg19 is currently supported.
verbose Logical, if TRUE (default), progress is output as message text.
Value
TapestriExperiment object with rowData updated to include chromosome arms and cytobands.
Examples
tap.object <- newTapestriExperimentExample() # example TapestriExperiment object
tap.object <- getCytobands(tap.object, genome = "hg19")
getGMMBoundaries Calculate decision boundaries between components of copy number
GMMs
Description
Calculate decision boundaries between components of copy number GMMs
Usage
getGMMBoundaries(TapestriExperiment, chromosome.scope = "chr")
Arguments
TapestriExperiment
TapestriExperiment object.
chromosome.scope
"chr" or "arm", for using models for either whole chromosomes or chromosome
arms. Default "chr".
Value
tibble containing boundary values of GMMs for each feature.id.
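Conceptually, a decision boundary between two adjacent components is the point where their weighted densities are equal; a minimal illustration with assumed means, standard deviations, and weights (getGMMBoundaries() reads the fitted values from gmmParams instead):

# boundary between components centered at copy number 2 and 3 (toy parameters)
f <- function(x) 0.5 * dnorm(x, mean = 2, sd = 0.2) - 0.5 * dnorm(x, mean = 3, sd = 0.2)
uniroot(f, interval = c(2, 3))$root   # 2.5 for equal weights and equal sds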
Examples
tap.object <- newTapestriExperimentExample() # example TapestriExperiment object
tap.object <- calcNormCounts(tap.object)
control.copy.number <- generateControlCopyNumberTemplate(tap.object,
copy.number = 2,
sample.feature.label = "cellline1"
)
tap.object <- calcCopyNumber(tap.object,
control.copy.number,
sample.feature = "test.cluster"
)
tap.object <- calcSmoothCopyNumber(tap.object)
tap.object <- calcGMMCopyNumber(tap.object,
cell.barcodes = colnames(tap.object),
control.copy.number = control.copy.number,
model.components = 1:5
)
boundaries <- getGMMBoundaries(tap.object,
chromosome.scope = "chr"
)
getTidyData Get tidy-style data from TapestriExperiment objects
Description
getTidyData() pulls data from the indicated assay and/or altExp slot(s), and rearranges it into
tidy format. colData (cell metadata) from the top-level/main experiment is included. rowData
(probe metadata) from the indicated assay and/or altExp slot(s) is included. Attempts are made to
sort by "chr" and "start.pos" columns if they are present to simplify plotting and other downstream
operations.
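Roughly speaking, "tidy" here means one row per probe-cell pair with the relevant metadata joined on; a toy base-R sketch of that rearrangement (the object names are invented and this is not the package code):

counts <- matrix(1:6, nrow = 2,
                 dimnames = list(c("probeA", "probeB"), paste0("cell", 1:3)))
row.meta <- data.frame(feature.id = rownames(counts), chr = c("1", "7"))
col.meta <- data.frame(cell.barcode = colnames(counts), cluster = c(1, 1, 2))

tidy <- data.frame(feature.id = rep(rownames(counts), times = ncol(counts)),
                   cell.barcode = rep(colnames(counts), each = nrow(counts)),
                   counts = as.vector(counts))
tidy <- merge(merge(tidy, row.meta, by = "feature.id"), col.meta, by = "cell.barcode")
head(tidy)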
Usage
getTidyData(
TapestriExperiment,
alt.exp = NULL,
assay = NULL,
feature.id.as.factor = TRUE
)
Arguments
TapestriExperiment
TapestriExperiment object.
alt.exp Character, altExp slot to use. NULL (default) uses top-level/main experiment.
assay Character, assay slot to use. NULL (default) uses first-indexed assay (often
"counts").
feature.id.as.factor
Logical, if TRUE (default), the feature.id column is returned as a factor.
Value
A tibble of tidy data with corresponding metadata from colData and rowData.
Examples
tap.object <- newTapestriExperimentExample() # example TapestriExperiment object
tidy.data <- getTidyData(tap.object, alt.exp = "alleleFrequency")
moveNonGenomeProbes Move non-genome probes counts and metadata to altExp slots
Description
moveNonGenomeProbes() takes the probe IDs corresponding to grnaProbe and barcodeProbe
slots of the TapestriExperiment object, as well as probes on chrY, and moves them to their own
altExp slots in the object. This allows those counts and associated metadata to be manipulated separately without interfering with the probes used for CNV measurements which target the endogenous genome. SingleCellExperiment::splitAltExps() can be used for manual specification of
probes to move to altExp slots if the shortcut slots are not used.
Usage
moveNonGenomeProbes(TapestriExperiment)
Arguments
TapestriExperiment
TapestriExperiment object.
Details
moveNonGenomeProbes() moves probes corresponding to the specified tags to altExp (alternative
experiment) slots in the TapestriExperiment object. These probes should be those which do
not correspond to a chromosome and therefore would not be used to call copy number variants.
The exception is probes on chromosome Y; CNVs of chrY are more rare, so we move it to an
altExp for separate analysis. Probes corresponding to the barcodeProbe and grnaProbe slots,
which are specified by the panel.id shortcut or manually (see Custom Slot Getters and Setters),
are automatically moved to altExp by this operation as well. If such probes are not present, the
function will only generate a warning message, so it is always safe (and recommended) to run by
default. Any remaining probes that are not targeting a human chromosome and are not specified by
the shortcut tags are moved to the otherProbeCounts slot. This function is run automatically by
default and with default behavior as part of createTapestriExperiment().
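For manual control, the splitting mentioned above can be done directly on any SingleCellExperiment-derived object; a self-contained sketch with a toy object (the probe IDs and grouping labels are examples only, not values the package requires):

library(SingleCellExperiment)

counts <- matrix(rpois(20, 5), nrow = 4,
                 dimnames = list(c("AMPL1", "AMPL2", "AMPL205334", "AMPL205666"),
                                 paste0("cell", 1:5)))
sce <- SingleCellExperiment(assays = list(counts = counts),
                            rowData = DataFrame(chr = c("1", "Y", "virus", "virus")))

probe.group <- ifelse(rowData(sce)$chr == "Y", "chrYCounts", "main")
probe.group[rownames(sce) == "AMPL205334"] <- "barcodeCounts"
probe.group[rownames(sce) == "AMPL205666"] <- "grnaCounts"

sce <- splitAltExps(sce, probe.group, ref = "main")
altExpNames(sce)   # non-genomic probes split out of the main experiment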
Value
TapestriExperiment with altExp slots filled with counts and metadata for non-genomic probes.
See Also
SingleCellExperiment::splitAltExps() for manual specification of probes to move to altExp
slots.
Examples
tap.object <- newTapestriExperimentExample() # example TapestriExperiment
tap.object <- moveNonGenomeProbes(tap.object)
newTapestriExperimentExample
Create Example TapestriExperiment
Description
Creates a TapestriExperiment object for demonstration purposes, which includes 240 probes
across the genome, and 300 cells of 3 types. Raw counts are generated randomly. Type 1 has 75
cells, all XY, all diploid. Type 2 has 100 cells, all XX, with 3 copies of chr 7, otherwise diploid.
Type 3 has 125 cells, all XY, with 1 copy of chr 1p, otherwise diploid.
Usage
newTapestriExperimentExample()
Value
TapestriExperiment object with demo data.
Examples
tapExperiment <- newTapestriExperimentExample()
PCAKneePlot Plot of PCA proportion of variance explained
Description
Draws "knee plot" of PCA proportion of variance explained to determine which principal components (PCs) to include for downstream applications, e.g. clustering. Variance explained for each PC is indicated by the line. Cumulative variance explained is indicated by the bars.
Usage
PCAKneePlot(TapestriExperiment, alt.exp = "alleleFrequency", n.pcs = 10)
Arguments
TapestriExperiment
TapestriExperiment object
alt.exp Character, altExp to use, NULL uses top-level/main experiment. Default "alleleFrequency".
n.pcs Numeric, number of PCs to plot, starting at 1. Default 10.
Value
ggplot2 object, combined line plot and bar graph
Examples
tap.object <- newTapestriExperimentExample() # example TapestriExperiment object
tap.object <- runPCA(tap.object, alt.exp = "alleleFrequency")
PCAKneePlot(tap.object, n.pcs = 5)
plotCopyNumberGMM Plot copy number GMM components
Description
Plots the probability densities of GMM components for a given chromosome or chromosome arm stored in a TapestriExperiment. calcGMMCopyNumber() must be run first.
Usage
plotCopyNumberGMM(
TapestriExperiment,
feature.id = 1,
chromosome.scope = "chr",
draw.boundaries = FALSE
)
Arguments
TapestriExperiment
TapestriExperiment object.
feature.id chromosome or chromosome arm to plot.
chromosome.scope
"chr" or "arm", for plotting models for either whole chromosomes or chromosome arms.
draw.boundaries
logical, if TRUE, draw decision boundaries between each Gaussian component.
Value
ggplot object, density plot
Examples
tap.object <- newTapestriExperimentExample() # example TapestriExperiment object
tap.object <- calcNormCounts(tap.object)
control.copy.number <- generateControlCopyNumberTemplate(tap.object,
copy.number = 2,
sample.feature.label = "cellline1"
)
tap.object <- calcCopyNumber(tap.object,
control.copy.number,
sample.feature = "test.cluster"
)
tap.object <- calcSmoothCopyNumber(tap.object)
tap.object <- calcGMMCopyNumber(tap.object,
cell.barcodes = colnames(tap.object),
control.copy.number = control.copy.number,
model.components = 1:5
)
tap.object <- plotCopyNumberGMM(tap.object,
feature.id = 7,
chromosome.scope = "chr",
draw.boundaries = TRUE
)
reducedDimPlot Scatter plot for dimensional reduction results
Description
Plots a scatter plot of the indicated dimensional reduction results.
Usage
reducedDimPlot(
TapestriExperiment,
alt.exp = "alleleFrequency",
dim.reduction,
dim.x = 1,
dim.y = 2,
group.label = NULL
)
Arguments
TapestriExperiment
TapestriExperiment object
alt.exp Character, altExp to use, NULL uses top-level/main experiment. Default "alleleFrequency".
dim.reduction Character, dimension reduction to plot, either "PCA" or "UMAP".
dim.x Numeric, index of dimensional reduction data to plot on X axis. Default 1.
dim.y Numeric, index of dimensional reduction data to plot on Y axis. Default 2.
group.label Character, colData column for grouping samples by color. Default NULL.
Value
ggplot2 object, scatter plot
Examples
tap.object <- newTapestriExperimentExample() # example TapestriExperiment object
tap.object <- runPCA(tap.object, alt.exp = "alleleFrequency")
reducedDimPlot(tap.object, dim.reduction = "pca")
runClustering Cluster 2D data
Description
Clusters data using dbscan method and saves cluster assignments for each cell barcode to colData.
Generally used to assign clusters to UMAP projection after PCA and UMAP dimensional reduction.
Usage
runClustering(
TapestriExperiment,
alt.exp = "alleleFrequency",
dim.reduction = "UMAP",
eps = 0.8,
dim.1 = 1,
dim.2 = 2,
...
)
Arguments
TapestriExperiment
TapestriExperiment object
alt.exp Character, altExp slot to use. NULL uses top-level/main experiment. Default
"alleleFrequency".
dim.reduction Character, reduced dimension data to use. Default "UMAP".
eps Numeric, dbscan eps parameter. Lower to increase cluster granularity. See
dbscan::dbscan(). Default 0.8.
dim.1 Numeric, index of data dimension to use. Default 1.
dim.2 Numeric, index of data dimension to use. Default 2.
... Additional parameters to pass to dbscan::dbscan().
Value
TapestriExperiment object with updated colData containing cluster assignments.
See Also
dbscan::dbscan()
Examples
tap.object <- newTapestriExperimentExample() # example TapestriExperiment object
tap.object <- runPCA(tap.object, alt.exp = "alleleFrequency")
tap.object <- runUMAP(tap.object, pca.dims = 1:3)
tap.object <- runClustering(tap.object, dim.reduction = "UMAP", eps = 0.8)
runPCA Cluster assay data by Principal Components Analysis
Description
Analyzes assay data by Principal Components Analysis (PCA) and saves results to the reducedDims slot of the TapestriExperiment object.
Usage
runPCA(
TapestriExperiment,
alt.exp = "alleleFrequency",
assay = NULL,
sd.min.threshold = NULL,
center = TRUE,
scale. = TRUE
)
Arguments
TapestriExperiment
TapestriExperiment object
alt.exp Character, altExp to use, NULL uses top-level/main experiment. Default "alleleFrequency".
assay Character, assay to use. NULL (default) uses first-indexed assay.
sd.min.threshold
Numeric, minimum threshold for allelefreq.sd. Increase to run PCA on fewer,
more variable dimensions. Set to NULL if not using for alleleFrequency slot.
Default NULL.
center Logical, if TRUE (default), variables are shifted to be zero centered. See stats::prcomp().
scale. Logical, if TRUE (default), variables are scaled to have unit variance prior to PCA.
See stats::prcomp().
Value
TapestriExperiment with PCA results saved to reducedDims slot of altExp, and proportion of
variance explained by each PC saved to metadata slot of altExp.
See Also
stats::prcomp() for PCA method details.
Examples
tap.object <- newTapestriExperimentExample() # example TapestriExperiment
tap.object <- runPCA(tap.object, alt.exp = "alleleFrequency")
runUMAP Cluster matrix data by UMAP
Description
Analyzes matrix data by UMAP and saves results to the reducedDims slot of the TapestriExperiment object.
Usage
runUMAP(
TapestriExperiment,
alt.exp = "alleleFrequency",
assay = NULL,
use.pca.dims = TRUE,
pca.dims = NULL,
...
)
Arguments
TapestriExperiment
TapestriExperiment object
alt.exp Character, altExp to use, NULL uses top-level/main experiment. Default "alleleFrequency".
assay Character, assay to use. NULL (default) uses first-indexed assay. Not used when
use.pca.dims = TRUE.
use.pca.dims Logical, if TRUE, uses experiment PCA, otherwise uses assay data. Default
TRUE.
pca.dims Numeric, indices of PCs to use in UMAP. Default NULL.
... Additional parameters to pass to umap::umap(), e.g. for configuration (see
umap::umap.defaults()).
Value
TapestriExperiment with UMAP embeddings saved to reducedDims slot of altExp.
Examples
tap.object <- newTapestriExperimentExample() # example TapestriExperiment object
tap.object <- runPCA(tap.object, alt.exp = "alleleFrequency")
tap.object <- runUMAP(tap.object, pca.dims = 1:3)
TapestriExperiment-class
TapestriExperiment Class Definition
Description
TapestriExperiment Class Definition
Usage
## S4 method for signature 'TapestriExperiment'
show(object)
Arguments
object An R object
TapestriExperiment
A TapestriExperiment object
Value
TapestriExperiment object
Methods (by generic)
• show(TapestriExperiment): Show method for TapestriExperiment
Slots
barcodeProbe character.
grnaProbe character.
gmmParams list.
Examples
tapExpObject <- new("TapestriExperiment") |
catIrt | cran | R | Package ‘catIrt’
October 12, 2022
Type Package
Version 0.5.1
Title Simulate IRT-Based Computerized Adaptive Tests
Maintainer <NAME> <<EMAIL>>
URL https://github.com/swnydick/catIrt
BugReports https://github.com/swnydick/catIrt/issues
Depends R (>= 2.11.0), numDeriv (>= 2012.3-1)
Suggests irtoys, ltm, catR
Description Functions designed to simulate data that conform to basic
unidimensional IRT models (for now 3-parameter binary response models
and graded response models) along with Post-Hoc CAT simulations of
those models given various item selection methods, ability estimation
methods, and termination criteria. See
Wainer (2000) <doi:10.4324/9781410605931>,
van der Linden & Pashley (2010) <doi:10.1007/978-0-387-85461-8_1>,
and Eggen (1999) <doi:10.1177/01466219922031365> for more details.
License GPL (>= 2)
LazyLoad yes
NeedsCompilation yes
Author <NAME> [cre, aut] (<https://orcid.org/0000-0002-2908-1188>)
Repository CRAN
Date/Publication 2022-05-25 22:50:10 UTC
R topics documented:
catIrt-package
catIrt
FI
itChoose
KL
mleEst
simIrt
catIrt-package Simulate IRT-Based Computerized Adaptive Tests (CATs)
Description
catIrt provides methods for simulating Computerized Adaptive Tests, including response simulation, item selection methods, ability estimation methods, and test termination methods. Unique in
catIrt is support for the graded response model (and, soon, other polytomous models) along with
expanding support for classification CAT (including an implementation of the SPRT, GLR, and CI
methods).
Details
Package: catIrt
Type: Package
Version: 0.5-0
Date: 2014-10-04
License: GPL (>= 2)
LazyLoad: yes
Author(s)
Maintainer: <NAME> <<EMAIL>>
catIrt Simulate Computerized Adaptive Tests (CATs)
Description
catIrt simulates Computerized Adaptive Tests (CATs) given a vector/matrix of responses or a vector of ability values, a matrix of item parameters, and several item selection mechanisms, estimation
procedures, and termination criteria.
Usage
catIrt( params, mod = c("brm", "grm"),
resp = NULL,
theta = NULL,
catStart = list( n.start = 5, init.theta = 0,
select = c("UW-FI", "LW-FI", "PW-FI",
"FP-KL", "VP-KL", "FI-KL", "VI-KL",
"random"),
at = c("theta", "bounds"),
it.range = NULL, n.select = 1,
delta = .1,
score = c("fixed", "step", "random", "WLE", "BME", "EAP"),
range = c(-1, 1),
step.size = 3, leave.after.MLE = FALSE ),
catMiddle = list( select = c("UW-FI", "LW-FI", "PW-FI",
"FP-KL", "VP-KL", "FI-KL", "VI-KL",
"random"),
at = c("theta", "bounds"),
it.range = NULL, n.select = 1,
delta = .1,
score = c("MLE", "WLE", "BME", "EAP"),
range = c(-6, 6),
expos = c("none", "SH") ),
catTerm = list( term = c("fixed", "precision", "info", "class"),
score = c("MLE", "WLE", "BME", "EAP"),
n.min = 5, n.max = 50,
p.term = list(method = c("threshold", "change"), ...),
i.term = list(method = c("threshold", "change"), ...),
c.term = list(method = c("SPRT", "GLR", "CI"), ...) ),
ddist = dnorm,
progress = TRUE, ... )
## S3 method for class 'catIrt'
summary( object, group = TRUE, ids = "none", ... )
## S3 method for class 'catIrt'
plot( x, which = "all", ids = "none",
conf.lev = .95, legend = TRUE, ask = TRUE, ... )
Arguments
object, x a catIrt object.
params numeric: a matrix of item parameters. If specified as a matrix, the rows must
index the items, and the columns must designate the item parameters. For the
binary response model, params must either be a 3-column matrix (if not using
item exposure control), a 4-5-column matrix (with Sympson-Hetter parameters
as the last column if using item exposure control), or a 4-5-column matrix (if including the item number as the first column). See Details for more information.
mod character: a character string indicating the IRT model. Current support is for
the 3-parameter binary response model (‘"brm"’), and Samejima’s graded re-
sponse model (‘"grm"’). The contents of params must match the designation of
mod. If mod is left blank, it will be designated the class of resp (if resp inherits
either ‘"brm"’ or ‘"grm"’), and if that fails, it will ask the user (if in interactive
mode) or error.
resp numeric: either a N × J matrix (where N indicates the number of simulees and
J indicates the number of items), a J length vector (if there is only one simulee),
or NULL if specifying thetas. For the binary response model (‘"brm"’), resp
must solely contain 0s and 1s. For the graded response model (‘"grm"’), resp
must solely contain integers 1, . . . , K, where K is the number of categories, as
indicated by the dimension of params.
theta numeric: either a N -dimensional vector (where N indicates the number of
simulees) or NULL if specifying resp.
catStart list: a list of options for starting the CAT including:
1. n.start: a scalar indicating the number of items that are used for each
simulee at the beginning of the CAT. After ‘n.start’ reaches the specified
value, the CAT will shift to the middle set of parameters.
2. init.theta: a scalar or vector of initial starting estimates of θ. If init.theta
is a scalar, every simulee will have the same starting value. Otherwise,
simulees will have different starting values based on the respective element
of init.theta.
3. select: a character string indicating the item selection method for the first
few items. Items can be selected either through maximum Fisher information or Kullback-Leibler divergence methods or randomly. The Fisher
information methods include
• ‘"UW-FI"’: unweighted Fisher information at a point.
• ‘"LW-FI"’: Fisher information weighted across the likelihood function.
• ‘"PW-FI"’: Fisher information weighted across the posterior distribution of θ.
And the Kullback-Leibler divergence methods include
• ‘"FP-KL"’: pointwise KL divergence between [P +/- delta], where P is
either the current θ estimate or a classification bound.
• ‘"VP-KL"’: pointwise KL divergence between [P +/- delta/sqrt(n)], where
n is the number of items given to this point in the CAT.
• ‘"FI-KL"’: KL divergence integrated along [P -/+ delta] with respect
to P
• ‘"VI-KL"’: KL divergence integrated along [P -/+ delta/sqrt(n)] with
respect to P.
See itChoose for more information.
4. at: a character string indicating where to select items. If select is ‘"UW-FI"’
and at is ‘"theta"’, then items will be selected to maximize Fisher infor-
mation at the proximate θ estimates.
5. it.range: Either a 2-element numeric vector indicating the minimum and
maximum allowed difficulty parameters for items selected during the starting portion of the CAT (only if mod is equal to ‘"brm"’) or NULL indicating
no item parameter restrictions. See itChoose for more information.
6. n.select: an integer indicating the number of items to select at one time.
For instance, if select is ‘"UW-FI"’, at is ‘"theta"’, and n.select is 5,
the item choosing function will randomly select between the top 5 items
that maximize expected Fisher information at proximate θ estimates.
7. delta: a scalar indicating the multiplier used in initial item selection if a
Kullback-Leibler method is chosen.
8. score: a character string indicating the θ estimation method. As of now,
the options for scoring the first few items are ‘"fixed"’ (at init.theta), ‘"step"’ (by adding or subtracting step.size to the θ estimate after each item),
Weighted Likelihood Estimation (‘"WLE"’), Bayesian Modal Estimation (‘"BME"’),
and Expected A-Posteriori Estimation (‘"EAP"’). The latter two allow user
specified prior distributions through density (d...) functions. See mleEst
for more information.
9. range: a 2-element numeric vector indicating the minimum and maximum
that θ should be estimated in the starting portion of the CAT.
10. step.size: a scalar indicating how much to increment or decrement the
estimate of θ if score is set to ‘"step"’.
11. leave.after.MLE: a logical indicating whether to skip the remainder of
the starting items if the user has a mixed response pattern and/or a finite
maximum likelihood estimate of θ can be achieved.
catMiddle list: a list of options for selecting/scoring during the middle of the CAT, including:
1. select: a character string indicating the item selection method for the remaining items. See select in catStart for an explanation of the options.
2. at: a character string indicating where to select items. See select in
catStart for an explanation of the options.
3. it.range: Either a 2-element numeric vector indicating the minimum and
maximum allowed difficulty parameters for items selected during the middle portion of the CAT (only if mod is equal to ‘"brm"’) or NULL indicating
no item parameter restrictions. See itChoose for more information.
4. n.select: an integer indicating the number of items to select at one time.
5. delta: a scalar indicating the multiplier used in middle item selection if a
Kullback-Leibler method is chosen.
6. score: a character string indicating the θ estimation method. As of now, the
options for scoring the remaining items are Maximum Likelihood Estimation (‘"MLE"’), Weighted Likelihood Estimation (‘"WLE"’), Bayesian Modal
Estimation (‘"BME"’), and Expected A-Posteriori Estimation (‘"EAP"’). The
latter two allow user specified prior distributions through density (d...)
functions. See mleEst for more information.
7. range: a 2-element numeric vector indicating the minimum and maximum
that θ should be estimated in the middle portion of the CAT.
8. expos: a character string indicating whether no item exposure controls
should be implemented (‘"none"’) or whether the CAT should use Sympson-
Hetter exposure controls (‘"SH"’). If (and only if) expos is equal to ‘"SH"’,
the last column of the parameter matrix should indicate the probability of
an item being administered given that it is selected.
catTerm list: a list of options for stopping/terminating the CAT, including:
1. term: a scalar/vector indicating the termination criterion/criteria. CATs can
be terminated either through a fixed number of items (‘"fixed"’) declared
through the n.max argument; related to SEM of a simulee (‘"precision"’)
declared through the p.term argument; related to the test information of
a simulee at a particular point in the cat (‘"info"’) declared through the
i.term argument; and/or when a simulee falls into a category. If more than
one termination criteria is selected, the CAT will terminate after success-
fully satisfying the first of those for a given simulee.
2. score: a character string indicating the θ estimation method for all of the
responses in the bank. score is used to estimate θ given the entire bank
of item responses and parameter set. If the theta estimated using all of the
responses is far away from θ, the size of the item bank is probably too small.
The options for score in catTerm are identical to the options of score in
catMiddle.
3. n.min: an integer indicating the minimum number of items that a simulee
should "take" before any of the termination criteria are checked.
4. n.max: an integer indicating the maximum number of items to administer
before terminating the CAT.
5. p.term: a list indicating the parameters of a precision-based stopping rule,
only if term is ‘"precision"’, including:
(a) method: a character string indicating whether to terminate the CAT
when the SEM dips below a threshold (‘"threshold"’) or changes less
than a particular amount (‘"change"’).
(b) crit: a scalar indicating either the maximum SEM of a simulee before
terminating the CAT or the maximum change in the simulee’s SEM
before terminating the CAT.
6. i.term: a list indicating the parameters of an information-based stopping rule, only if term is ‘"info"’, including:
(a) method: a character string indicating whether to terminate the CAT
when FI exceeds a threshold (‘"threshold"’) or changes less than a
particular amount (‘"change"’).
(b) crit: a scalar indicating either the minimum FI of a simulee before
terminating the CAT or the maximum change in the simulee’s FI before
terminating the CAT.
7. c.term: a list indicating the parameters of a classification CAT, only if
term is ‘"class"’ or any of the selection methods are at one or more
‘"bounds"’, including:
(a) method: a scalar indicating the method used for a classification CAT.
As of now, the classification CAT options are the Sequential Probability
Ratio Test (‘"SPRT"’), the Generalized Likelihood Ratio (‘"GLR"’), or
the Confidence Interval method (‘"CI"’).
(b) bounds: a scalar, vector, or matrix of classification bounds. If specified
as a scalar, there will be one bound for each simulee at that value. If
specified as a N -dimensional vector, there will be one bound for each
simulee. If specified as a k < N -dimensional vector, there will be k
bounds for each simulee at those values. And if specified as a N × k-
element matrix, there will be k bounds for each simulee.
(c) categ: a vector indicating the names of the categories into which the
simulees should be classified. The length of categ should be one
greater than the length of bounds.
(d) delta: a scalar indicating the half-width of an indifference region when
performing an SPRT-based classification CAT or selecting items by
Kullback-Leibler divergence. See Eggen (1999) and KL for more information.
(e) alpha: a scalar indicating the specified Type I error rate for performing
an SPRT- based classification CAT.
(f) beta: a scalar indicating the specified Type II error rate for performing
an SPRT- based classification CAT.
(g) conf.lev: a scalar between 0 and 1 indicating the confidence level
used when performing a confidence-based (‘"CI"’) classification CAT.
ddist function: a function indicating how to calculate prior densities for Bayesian estimation or particular item selection methods. For instance, if you wish to specify a normal prior, ddist = dnorm, and if you wish to specify a uniform prior, ddist = dunif. Note that it is standard in R to use d... to indicate a density. See
itChoose for more information.
which numeric: a scalar or vector of integers between 1 and 4, indicating which plots
to include. The plots are as follows:
1. Bank Information
2. Bank SEM
3. CAT Information
4. CAT SEM
which can also be "none", in which case plot.catIrt will not plot any information functions, or it can be "all", in which case plot.catIrt will plot all four
information functions.
group logical: TRUE or FALSE indicating whether to display a summary at the group
level.
ids numeric: a scalar or vector of integers between 1 and the number of simulees
indicating which simulees to plot and/or summarize their CAT process and all
of their θ estimates. ids can also be "none" (or, equivalently, NULL) or "all".
conf.lev numeric: a scalar between 0 and 1 indicating the desired confidence level plotted for the individual θ estimates.
legend logical: TRUE or FALSE indicating whether the plot function should display a
legend on the plot.
ask logical: TRUE or FALSE indicating whether the plot function should ask between plots.
progress logical: TRUE or FALSE indicating whether the catIrt function should display a progress bar during the CAT.
... arguments passed to ddist or plot.catIrt, usually distribution parameters
identified by name or graphical parameters.
Details
The function catIrt performs a post-hoc computerized adaptive test (CAT), with a variety of user
specified inputs. For a given person/simulee (e.g. simulee i), a CAT represents a simple set of stages
surrounded by a while loop (e.g. Weiss and Kingsbury, 1984):
• Item Selection: The next item is chosen based on a pre-specified criterion/criteria. For example, the classic item selection mechanism is picking an item such that it maximizes Fisher
Information at the current estimate of θi . Frequently, content balancing, item constraints, or
item exposure will be taken into consideration at this point (aside from solely picking the "best
item" for a given person). See itChoose for current item selection methods.
• Estimation: θi is estimated based on updated information, usually relating to the just-selected
item and the response associated with that item. In a post-hoc CAT, all of the responses already exist, but in a standard CAT, "item administration" would be between "item selection" and "estimation." The classic estimation mechanism is estimating θi based off of maximizing the likelihood given parameters and a set of responses. Other estimation mechanisms correct for bias in the maximum likelihood estimate or add prior information (such as a prior distribution of θ). If an estimate is untenable (i.e. it returns a non-sensical value or ∞), the estimation procedure needs to have an alternative estimation mechanism. See mleEst for current
estimation methods.
• Termination: Either the test is terminated based on a pre-specified criterion/criteria, or no termination criterion is satisfied, in which case the loop repeats. The standard termination criteria involve a fixed criterion (e.g. administering only 50 items), or a variable criterion (e.g. continuing until the observed SEM is below .3). Other termination criteria relate to cut-point tests (e.g. certification tests, classification tests) that depend not solely on ability but on whether that ability is estimated to exceed a threshold. catIrt terminates classification tests based on either the Sequential Probability Ratio Test (SPRT) (see Eggen, 1999), the Generalized Likelihood Ratio (GLR) (see Thompson, 2009), or the Confidence Interval Method (see Kingsbury & Weiss, 1983). Essentially, the SPRT compares the ratio of two likelihoods (e.g. the likelihood of the data given being in one category vs the likelihood of the data given being in the other category, as defined by B + δ and B − δ, where B separates the categories and δ is the halfwidth of the indifference region) and compares that ratio with a ratio of error rates (α and β) (see Wald, 1945). The GLR uses the maximum likelihood estimate in place of either B + δ or B − δ, and the confidence interval method terminates a CAT if the confidence interval surrounding an estimate of θ is fully within one of the categories.
The CAT estimates θi1 (an initial point) based on init.theta, and terminates the entire simulation
after sequentially terminating each simulee’s CAT.
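The loop itself is compact: the toy post-hoc CAT below for a single simulee under a 2-parameter model shows the select/estimate/terminate cycle with unweighted Fisher information selection, MLE scoring, and a fixed-length or SEM-based stop. It is an illustration of the scheme described above, not catIrt's internal code, and every name in it is local to this example.

set.seed(1)
a <- runif(100, .5, 1.5); b <- rnorm(100)            # toy 2PL item bank
theta.true <- 0.7
p.true <- 1 / (1 + exp(-a * (theta.true - b)))
resp <- rbinom(100, 1, p.true)                       # pre-existing responses

info <- function(theta, a, b) {                      # 2PL Fisher information
  p <- 1 / (1 + exp(-a * (theta - b)))
  a^2 * p * (1 - p)
}
loglik <- function(theta, a, b, u) {
  p <- 1 / (1 + exp(-a * (theta - b)))
  sum(u * log(p) + (1 - u) * log(1 - p))
}

administered <- integer(0)
theta.hat <- 0                                       # starting estimate
repeat {
  # item selection: maximize Fisher information at the current theta estimate
  remaining <- setdiff(seq_along(a), administered)
  next.item <- remaining[which.max(info(theta.hat, a[remaining], b[remaining]))]
  administered <- c(administered, next.item)
  # estimation: maximum likelihood over the items administered so far
  theta.hat <- optimize(loglik, c(-6, 6), maximum = TRUE,
                        a = a[administered], b = b[administered],
                        u = resp[administered])$maximum
  # termination: stop at 50 items, or once n >= 10 and the SEM is below .3
  sem <- 1 / sqrt(sum(info(theta.hat, a[administered], b[administered])))
  if (length(administered) >= 50 || (length(administered) >= 10 && sem < .3)) break
}
c(theta.hat = theta.hat, n.items = length(administered), sem = sem)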
Value
The function catIrt returns a list (of class "catIrt") with the following elements:
cat_theta a vector of final CAT θ estimates.
cat_categ a vector indicating the final classification of each simulee in the CAT. If term is
not ‘"class"’, cat_categ will be a vector of NA values.
cat_info a vector of observed Fisher information based on the final CAT θ estimates and
the item responses.
cat_sem a vector of observed SEM estimates (or posterior standard deviations) based on
the final CAT θ estimates and the item responses.
cat_length a vector indicating the number of items administered to each simulee in the CAT
cat_term a vector indicating how each CAT was terminated.
tot_theta a vector of θ estimates given the entire item bank.
tot_categ a vector indicating the classification of each simulee given the entire item bank.
tot_info a vector of observed Fisher information based on the entire item bank worth of
responses.
tot_sem a vector of observed SEM estimates based on the entire item bank worth of
responses.
true_theta a vector of true θ values if specified by the user.
true_categ a vector of true classification given θ.
full_params the full item bank.
full_resp the full set of responses.
cat_indiv a list of θ estimates, observed SEM, observed information, the responses and the
parameters chosen for each simulee over the entire CAT.
mod a list of model specifications, as designated by the user, so that the CAT can be
easily reproduced.
Note
Both summary.catIrt and plot.catIrt return different objects than the original catIrt function.
summary.catIrt returns labeled summary statistics, and plot.catIrt returns evaluation
points (x values, information, and SEM) for each of the plots. Moreover, if in interactive mode
and missing parts of the catStart, catMiddle, or catTerm arguments, the catIrt function will
interactively ask for each of those and return the set of arguments in the "catIrt" object.
Author(s)
<NAME> <<EMAIL>>
References
Eggen (1999). Item selection in adaptive testing with the sequential probability ratio test. Applied Psychological Measurement, 23, 249 – 261.
Kingsbury & Weiss (1983). A comparison of IRT-based adaptive mastery testing and a sequential mastery testing procedure. In D. J. Weiss (Ed.), New horizons in testing: Latent trait test theory and computerized adaptive testing (pp. 257–283). New York, NY: Academic Press.
Thompson (2009). Using the generalized likelihood ratio as a termination criterion. In D. J. Weiss (Ed.), Proceedings of the 2009 GMAC conference on computerized adaptive testing.
Wainer (Ed.). (2000). Computerized Adaptive Testing: A Primer (2nd Edition). Mahwah, NJ: Lawrence Erlbaum Associates.
Wald (1945). Sequential tests of statistical hypotheses. Annals of Mathematical Statistics, 16, 117 – 186.
Weiss & Kingsbury (1984). Application of computerized adaptive testing to educational problems. Journal of Educational Measurement, 21, 361-375.
See Also
FI, itChoose, KL, mleEst, simIrt
Examples
## Not run:
#########################
# Binary Response Model #
#########################
set.seed(888)
# generating random theta:
theta <- rnorm(50)
# generating an item bank under a 2-parameter binary response model:
b.params <- cbind(a = runif(100, .5, 1.5), b = rnorm(100, 0, 2), c = 0)
# simulating responses:
b.resp <- simIrt(theta = theta, params = b.params, mod = "brm")$resp
## CAT 1 ##
# the typical, classic post-hoc CAT:
catStart1 <- list(init.theta = 0, n.start = 5,
select = "UW-FI", at = "theta",
n.select = 4, it.range = c(-1, 1),
score = "step", range = c(-1, 1),
step.size = 3, leave.after.MLE = FALSE)
catMiddle1 <- list(select = "UW-FI", at = "theta",
n.select = 1, it.range = NULL,
score = "MLE", range = c(-6, 6),
expos = "none")
catTerm1 <- list(term = "fixed", n.min = 10, n.max = 50)
cat1 <- catIrt(params = b.params, mod = "brm",
resp = b.resp,
catStart = catStart1,
catMiddle = catMiddle1,
catTerm = catTerm1)
# we can print, summarize, and plot:
cat1 # prints theta because
# we have fewer than
# 200 simulees
summary(cat1, group = TRUE, ids = "none") # nice summary!
summary(cat1, group = FALSE, ids = 1:4) # summarizing people too! :)
par(mfrow = c(2, 2))
plot(cat1, ask = FALSE) # 2-parameter model, so expected FI
# and observed FI are the same
par(mfrow = c(1, 1))
# we can also plot particular simulees:
par(mfrow = c(2, 1))
plot(cat1, which = "none", ids = c(1, 30), ask = FALSE)
par(mfrow = c(1, 1))
## CAT 2 ##
# using Fixed Point KL info rather than Unweighted FI to select items:
catStart2 <- catStart1
catMiddle2 <- catMiddle1
catTerm2 <- catTerm1
catStart2$leave.after.MLE <- TRUE # leave after mixed response pattern
catMiddle2$select <- "FP-KL"
catMiddle2$at <- "bounds"
catMiddle2$delta <- .2
catTerm2$c.term <- list(bounds = 0)
cat2 <- catIrt(params = b.params, mod = "brm",
resp = b.resp,
catStart = catStart2,
catMiddle = catMiddle2,
catTerm = catTerm2)
cor(cat1$cat_theta, cat2$cat_theta) # very close!
summary(cat2, group = FALSE, ids = 1:4) # rarely 5 starting items!
## CAT 3/4 ##
# using "precision" rather than "fixed" to terminate:
catTerm1$term <- catTerm2$term <- "precision"
catTerm1$p.term <- catTerm2$p.term <- list(method = "threshold", crit = .3)
cat3 <- catIrt(params = b.params, mod = "brm",
resp = b.resp,
catStart = catStart1,
catMiddle = catMiddle1,
catTerm = catTerm1)
cat4 <- catIrt(params = b.params, mod = "brm",
resp = b.resp,
catStart = catStart2,
catMiddle = catMiddle2,
catTerm = catTerm2)
mean(cat3$cat_length - cat4$cat_length) # KL info results in slightly more items
## CAT 5/6 ##
# classification CAT with a boundary of 0 (with default classification stuff):
catTerm5 <- list(term = "class", n.min = 10, n.max = 50,
c.term = list(method = "SPRT",
bounds = 0, delta = .2,
alpha = .10, beta = .10))
cat5 <- catIrt(params = b.params, mod = "brm",
resp = b.resp,
catStart = catStart1,
catMiddle = catMiddle1,
catTerm = catTerm5)
cat6 <- catIrt(params = b.params, mod = "brm",
resp = b.resp,
catStart = catStart1,
catMiddle = catMiddle2,
catTerm = catTerm5)
# how many were classified correctly?
mean(cat5$cat_categ == cat5$tot_categ)
# using a different selection mechanism, we get similar results:
mean(cat6$cat_categ == cat6$tot_categ)
## CAT 7 ##
# we could change estimation to EAP with the default (normal) prior:
catMiddle7 <- catMiddle1
catMiddle7$score <- "EAP"
cat7 <- catIrt(params = b.params, mod = "brm", # much slower!
resp = b.resp,
catStart = catStart1,
catMiddle = catMiddle7,
catTerm = catTerm1)
cor(cat1$cat_theta, cat7$cat_theta) # pretty much the same
## CAT 8 ##
# let's specify the prior as something strange:
cat8 <- catIrt(params = b.params, mod = "brm",
resp = b.resp,
catStart = catStart1,
catMiddle = catMiddle7,
catTerm = catTerm1,
ddist = dchisq, df = 4)
cat8 # all positive values of "theta"
## CAT 9 ##
# finally, we can have:
# - more than one termination criteria,
# - individual bounds per person,
# - simulating based on theta without a response matrix.
catTerm9 <- list(term = c("fixed", "class"),
n.min = 10, n.max = 50,
c.term = list(method = "SPRT",
bounds = cbind(runif(length(theta), -1, 0), runif(length(theta), 0, 1)),
delta = .2,
alpha = .1, beta = .1))
cat9 <- catIrt(params = b.params, mod = "brm",
resp = NULL, theta = theta,
catStart = catStart1,
catMiddle = catMiddle1,
catTerm = catTerm9)
summary(cat9) # see "... with Each Termination Criterion"
#########################
# Graded Response Model #
#########################
# generating random theta
theta <- rnorm(201)
# generating an item bank under a graded response model:
g.params <- cbind(a = runif(100, .5, 1.5), b1 = rnorm(100), b2 = rnorm(100), b3 = rnorm(100), b4 = rnorm(100))
# the graded response model is exactly the same, only slower!
cat10 <- catIrt(params = g.params, mod = "grm",
resp = NULL, theta = theta,
catStart = catStart1,
catMiddle = catMiddle1,
catTerm = catTerm1)
# warning because it.range cannot be specified for graded response models!
# if there is more than 200 simulees, it doesn't print individual thetas:
cat10
## End(Not run)
# play around with things - CATs are fun - a little frisky, but fun.
FI Calculate Expected and Observed Fisher Information for IRT Models
Description
FI calculates expected and/or observed Fisher information for various IRT models given a vector
of ability values, a vector/matrix of item parameters, and an IRT model. It also calculates test
information and expected/observed standard error of measurement.
Usage
FI( params, theta, type = c("expected", "observed"), resp = NULL )
## S3 method for class 'brm'
FI( params, theta, type = c("expected", "observed"), resp = NULL )
## S3 method for class 'grm'
FI( params, theta, type = c("expected", "observed"), resp = NULL )
Arguments
params numeric: a vector or matrix of item parameters. If specified as a matrix, the
rows must index the items, and the columns must designate the item parame-
ters. Furthermore, if calculating expected information, the number of rows must
match the number of columns of resp. The class of params must match the
model: either ‘"brm"’ or ‘"grm"’. For the binary response model, params must
either be a 3-dimensional vector or a 3-column matrix. See Details for more
information.
theta numeric: a vector of ability values, one for each simulee. If calculating ex-
pected information, the length of theta must match the number of rows of
resp, unless theta is a scalar, in which case resp could also be a vector of
length nrow(params).
type character: a character string indicating the type of information, either ‘"expected"’
or ‘"observed"’. For the 1-parameter and 2-parameter binary response model
(of class ‘"brm"’ with the third column of params set to 0), both ‘"expected"’
and ‘"observed"’ information are identical. See Details for more information.
resp numeric: either a N × J matrix (where N indicates the number of simulees and
J indicates the number of items), a N length vector (if there is only one item) or
a J length vector (if there is only one simulee). For the binary response model
(‘"brm"’), resp must solely contain 0s and 1s. For the graded response model
(‘"grm"’), resp must solely contain integers 1, . . . , K, where K is the number
of categories, as indicated by the dimension of params.
Details
The function FI returns item information, test information, and standard error of measurement for
the binary response model (‘"brm"’) or the graded response model (‘"grm"’). If the log likelihood
is twice differentiable, expected Fisher information is the negative, expected, second derivative of
the log likelihood with respect to the parameter. For the binary response model, expected item
information simplifies to the following:
I(θi | aj, bj, cj) = [∂pij / ∂θi]² / (pij (1 − pij))
where ∂pij/∂θi is the partial derivative of pij with respect to θ, and pij is the probability of response, as indicated in the help page for simIrt.
For the graded response model, expected item information simplifies to the following:
I(θi | aj, bj1, . . . , bj(k−1)) = Σk [∂Pijk / ∂θi]² / Pijk
where ∂Pijk /∂θi is the partial derivative of Pijk with respect to θ, and Pijk is the probability of
responding in category k as indicated in the help page for simIrt. See van der Linden and Pashley
(2010).
Observed information is the negative second derivative of the log-likelihood. For the binary response model (‘"brm"’) with 2-parameters (such that the third column of the parameter matrix is set to 0), observed and expected information are identical because the second derivative of their log-likelihoods does not contain observed data. See Baker and Kim (2004), pp. 66 – 69.
For all models, test information is defined as the following:
T(θi) = Σj Ij(θi)
where Ij(θi) is shorthand for Fisher information of simulee i on item j. Finally, the standard error
of measurement (SEM) is the inverse, square-root of test information. FI is frequently used to select
items in a CAT and to estimate the precision of θ̂i after test termination.
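Applying these formulas directly for a couple of 3-parameter ("brm") items at a single θ looks like the following (illustrative parameter values; FI() performs the same computation for whole parameter matrices and vectors of simulees):

a <- c(1.2, 0.8); b <- c(-0.5, 1.0); c <- c(0.2, 0.2)     # a, b, c item parameters
theta <- 0.3

p  <- c + (1 - c) / (1 + exp(-a * (theta - b)))           # response probability
dp <- a * (1 - c) * exp(-a * (theta - b)) /
      (1 + exp(-a * (theta - b)))^2                        # dp/dtheta

item.info <- dp^2 / (p * (1 - p))   # expected item information
test.info <- sum(item.info)         # test information
1 / sqrt(test.info)                 # standard error of measurement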
Value
FI, FI.brm, and FI.grm return a list of the following elements:
item either: (1) a N × J matrix of item information for each simulee to each item;
(2) a J-length vector of item information for one simulee to each item; or (3) an
N -length vector of item information for all simulees to one item, depending on
the dimensions of params and theta.
test an N -length vector of test information, one for each simulee. Test information
is the sum of item information across items. See Details for more information.
sem an N -length vector of expected or observed standard error of measurement for
each simulee, which is the inverse-square-root of test information. See Details
for more information.
type either ‘"observed"’ or ‘"expected"’, indicating the type of information calcu-
lated.
Author(s)
<NAME> <<EMAIL>>
References
Baker & Kim (2004). Item Response Theory: Parameter Estimation Techniques, Second Edition. New York, NY: Marcel Dekker, Inc.
<NAME>., <NAME>., & <NAME>. (1995). Computerized adaptive testing with polytomous items. Applied Psychological Measurement, 19, 5 – 22.
<NAME>., & <NAME>. (2000). Item Response Theory for Psychologists. Mahwah, NJ: Lawrence Erlbaum Associates.
Kullback, S., & Leibler, R. A. (1951). On information and sufficiency. The Annals of Mathematical Statistics, 22, 79 – 86.
van der Linden & Pashley (2010). Item selection and ability estimation in adaptive testing. In <NAME> & <NAME> (Eds.), Elements of Adaptive Testing. New York, NY: Springer.
See Also
catIrt, KL, simIrt
Examples
#########################
# Binary Response Model #
#########################
## 1 ##
set.seed(888)
# generating random theta:
theta <- rnorm(20)
# generating an item bank under a 2-parameter binary response model:
b.params <- cbind(a = runif(100, .5, 1.5), b = rnorm(100, 0, 2), c = 0)
# simulating responses using random theta:
b.mod <- simIrt(params = b.params, theta = theta, mod = "brm")
# you can indicate class of params or extract it from simIrt object:
class(b.params) <- "brm"
# calculating expected and observed information:
e.info <- FI(params = b.params, theta = theta, type = "expected")
o.info <- FI(params = b.params, theta = theta, type = "observed", resp = b.mod$resp)
# 2-parameter model, so e.info will be equal to o.info:
all(signif(e.info$item) == signif(o.info$item))
## 2 ##
# generating an item bank under a 3-parameter binary response model:
b.params2 <- cbind(a = runif(100, .5, 1.5), b = rnorm(100, 0, 2), c = .2)
# simulating responses using pre-specified thetas:
b.mod2 <- simIrt(params = b.params2, mod = "brm")
# calculating expected and observed information:
# (if you don't indicate class, you can extract from simIrt object)
e.info2 <- FI(params = b.params2, theta = b.mod2$theta, type = "expected")
o.info2 <- FI(params = b.params2, theta = b.mod2$theta, type = "observed",
resp = b.mod2$resp)
# 3-parameter model, so e.info will not be equal to o.info:
all(signif(e.info2$item) == signif(o.info2$item))
## 3 ##
# if theta is a scalar, item will be a vector and test will be a scalar:
e.info3 <- FI(params = b.params2, theta = 0, type = "expected")
dim(e.info3$item) # no dimension because it's a vector
length(e.info3$item) # of length equal to the number of items
# if params is a vector, item will be a matrix with one row:
e.info4 <- FI(params = c(1, 2, 0), theta = c(1, 2), type = "expected")
dim(e.info4$item)
# if you don't class params, FI will assume a binary response model.
#########################
# Graded Response Model #
#########################
set.seed(999)
# generating random theta
theta <- rnorm(10)
# generating an item bank under a graded response model:
g.params <- cbind(a = runif(30, .5, 1.5), b1 = rnorm(30), b2 = rnorm(30), b3 = rnorm(30), b4 = rnorm(30))
# you can sort the parameters yourself:
g.params <- cbind(g.params[ , 1],
t(apply(g.params[ ,2:dim(g.params)[2]], MARGIN = 1,
FUN = sort)))
# simulating responses using random theta:
g.mod <- simIrt(params = g.params, theta = theta, mod = "grm")
# calculating expected and observed information:
class(g.params) <- "grm" # always indicate model or extract from simulation.
e.info5 <- FI(params = g.params, theta = theta, type = "expected")
o.info5 <- FI(params = g.params, theta = theta, type = "observed", resp = g.mod$resp)
# grm, so e.info will not be equal to o.info:
all(signif(e.info5$item) == signif(o.info5$item))
# if theta is a vector and params is a matrix, item will be an N x J matrix:
dim(e.info5$item)
# if you don't want to sort the parameters, you can extract from simIrt object:
e.info6 <- FI(params = g.mod$params[ , -1], theta = g.mod$theta, type = "expected")
# but you first need to remove column 1 (the item number column).
itChoose Choose the Next Item in a CAT
Description
itChoose chooses the next item in a CAT based on the remaining items and a variety of item
selection algorithms.
Usage
itChoose( left_par, mod = c("brm", "grm"),
numb = 1, n.select = 1,
cat_par = NULL, cat_resp = NULL, cat_theta = NULL,
select = c("UW-FI", "LW-FI", "PW-FI",
"FP-KL", "VP-KL", "FI-KL", "VI-KL",
"random"),
at = c("theta", "bounds"),
range = c(-6, 6), it.range = NULL,
delta = NULL, bounds = NULL,
ddist = dnorm, quad = 33, ... )
Arguments
left_par numeric: a matrix of item parameters from which to choose the next item. The
rows must index the items, and the columns must designate the item parameters
(in the appropriate order, see catIrt). The first column of left_par must in-
dicate the item numbers, as itChoose returns not just the item parameters but
also the bank number corresponding to those parameters. See Details for more
information.
mod character: a character string indicating the IRT model. Current support is for
the 3-parameter binary response model (‘"brm"’), and Samejima’s graded re-
sponse model (‘"grm"’). The contents of params must match the designation of
mod. See catIrt or simIrt for more information.
numb numeric: a scalar indicating the number of items to return to the user. If numb
is less than n.select, then itChoose will randomly select numb items from the
top n.select items according to the item selection algorithm.
n.select numeric: an integer indicating the number of items to randomly select between
at one time. For instance, if select is ‘"UW-FI"’, at is ‘"theta"’, numb is 3,
and n.select is 8, then itChoose will randomly select 3 items out of the top 8
items that maximize Fisher information at cat_theta.
cat_par numeric: either NULL or a matrix of item parameters that have already been
administered in the CAT. cat_par only needs to be specified if letting select
equal either ‘"LW-FI"’, ‘"PW-FI"’, ‘"VP-KL"’ or ‘"VI-KL"’. The format of
cat_par must be the same as the format of left_par. See Details for more
information.
cat_resp numeric: either NULL or a vector of responses corresponding to the items
specified in cat_par. cat_par only needs to be specified if letting select
equal either ‘"LW-FI"’ or ‘"PW-FI"’.
cat_theta numeric: either NULL or a scalar corresponding to the current ability estimate.
cat_theta is not needed if selecting items at ‘"bounds"’ or using ‘"LW-FI"’ or
‘"PW-FI"’ as the item selection algorithm.
select character: a character string indicating the desired item selection method. Items
can be selected either through maximum Fisher information or Kullback-Leibler
divergence methods or randomly. The Fisher information methods include
• ‘"UW-FI"’: unweighted Fisher information at a point.
• ‘"LW-FI"’: Fisher information weighted across the likelihood function.
• ‘"PW-FI"’: Fisher information weighted across the posterior distribution of
θ.
And the Kullback-Leibler divergence methods include
• ‘"FP-KL"’: pointwise KL divergence between [P +/- delta], where P is ei-
ther the current θ estimate or a classification bound.
• ‘"VP-KL"’: pointwise KL divergence between [P +/- delta/sqrt(n)], where n
is the number of items given to this point in the CAT.
• ‘"FI-KL"’: KL divergence integrated along [P -/+ delta] with respect to P
• ‘"VI-KL"’: KL divergence integrated along [P -/+ delta/sqrt(n)] with re-
spect to P.
See Details for more information.
at character: a character string indicating where to select items.
range numeric: a 2-element numeric vector indicating the range of values that itChoose
should average over if select equals ‘"LW-FI"’ or ‘"PW-FI"’.
it.range numeric: Either a 2-element numeric vector indicating the minimum and max-
imum allowed difficulty parameters for selected items (only if mod is equal to
‘"brm"’) or NULL indicating no item parameter restrictions.
delta numeric: a scalar indicating the multiplier used in item selection if a Kullback-
Leibler method is chosen. For fixed-point KL divergence, delta is frequently
.1 or .2, whereas in variable-point KL divergence, delta usually corresponds to
95 or 97.5 percentiles on a normal distribution.
bounds numeric: a vector of fixed-points/bounds from which to select items if at equals
‘"bounds"’.
ddist function: a function indicating how to calculate prior densities if select equals
‘"PW-FI"’ (i.e., weighting Fisher information on the posterior distribution). See
catIrt for more information.
quad numeric: a scalar indicating the number of quadrature points when select
equals ‘"LW-FI"’ or ‘"PW-FI"’. See Details for more information.
... arguments passed to ddist, usually distribution parameters identified by name.
Details
The function itChoose returns the next item(s) to administer in a CAT environment. The item
selection algorithms fall into three major types: Fisher information, Kullback-Leibler divergence,
and random.
• If choosing items based on Fisher information (select equals ‘"UW-FI"’, ‘"LW-FI"’, or
‘"PW-FI"’), then items are selected based on some aggregation of Fisher information (see
FI). The difference between the three Fisher information methods are the weighting functions
used (see <NAME> Linden, 1998; Veerkamp & Berger, 1997). Let
I(wij | aj, bj, cj) = ∫_{−∞}^{+∞} wij Ij(θ) µ(dθ)
be the "average" Fisher information, weighted by real valued function wij . Then all three
Fisher information criteria can be explained solely as using different weights. Unweighted
Fisher information (‘"UW-FI"’) sets wij equal to a Dirac delta function with all of its mass
either on theta (if at equals ‘"theta"’) or the nearest classification bound (if at equals
‘"bounds"’). Likelihood-Weighted Fisher information (‘"LW-FI"’) sets wij equal to the
likelihood function given all of the previously administered items (Veerkamp & Berger, 1997).
And Posterior-Weighted Fisher information (‘"PW-FI"’) sets wij equal to the likelihood func-
tion times the prior distribution specified in ddist (van der Linden, 1998). All three algo-
rithms select items based on maximizing the respective criterion with ‘"UW-FI"’ the most
popular CAT item selection algorithm and equivalent to maximizing Fisher information at a
point (Pashley & van der Linden, 2010).
• If choosing items based on Kullback-Leibler divergence (select equals ‘"FP-KL"’, ‘"VP-KL"’,
‘"FI-KL"’, or ‘"VI-KL"’), then items are selected based on some aggregation of KL diver-
gence (see KL).
– The Pointwise KL divergence criteria (select equals ‘"FP-KL"’ and ‘"VP-KL"’) com-
pares KL divergence at two points:
KL(wij |aj , bj , cj ) = KLj (P + wij ||P − wij )
The difference between ‘"FP-KL"’ and ‘"VP-KL"’ is the weights used. Fixed Pointwise
KL divergence (‘"FP-KL"’) sets wij equal to ‘delta’, and Variable Pointwise KL divergence
(‘"VP-KL"’) sets wij equal to ‘delta’ multiplied by 1/√n, where n is equal to the
number of items given to this point in the CAT (see Chang & Ying, 1996).
– The Integral KL divergence criteria (select equals ‘"FI-KL"’ and ‘"VI-KL"’) integrates
KL divergence across a small area:
KL(wij | aj, bj, cj) = ∫_{P − wij}^{P + wij} KLj(θ || P) dθ
As in Pointwise KL divergence, Fixed Integral KL divergence (‘"FI-KL"’) sets wij equal
to ‘delta’, and Variable Integral KL divergence (‘"VI-KL"’) sets wij equal to ‘delta’
multiplied by 1/√n (see Chang & Ying, 1996).
All KL divergence criteria set P equal to theta (if at equals ‘"theta"’) or the nearest classi-
fication bound (if at equals ‘"bounds"’) and select items based on maximizing the respective
criterion.
• If select is ‘"random"’, then itChoose randomly picks the next item(s) out of the remaining
items in the bank.
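For illustration only (a sketch, not from the package documentation), ‘"UW-FI"’ selection at a point amounts to picking the remaining item with the largest Fisher information at cat_theta, which can be checked against FI directly; the bank below is a hypothetical example in the format expected by left_par:
set.seed(1)
bank <- cbind(item = 1:50, a = runif(50, .5, 1.5), b = rnorm(50), c = .2)
pick <- itChoose(left_par = bank, mod = "brm", numb = 1, n.select = 1,
cat_theta = 0, select = "UW-FI", at = "theta")
info <- FI(params = bank[ , -1], theta = 0, type = "expected")$item
bank[which.max(info), "item"] # should match the item number in pick$params[ , 1]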
Value
itChoose returns a list of the following elements:
params a matrix corresponding to the next item or numb items to administer in a CAT
with the first column indicating the item number
info a vector of corresponding information for the numb items of params.
type the type of information returned in info, which is equal to the item selection
algorithm.
Author(s)
<NAME> <<EMAIL>>
References
<NAME>., & <NAME>. (1996). A global information approach to computerized adaptive testing.
Applied Psychological Measurement, 20, 213 – 229.
<NAME>., & <NAME>. (2010). Item selection and ability estimation in adaptive
testing. In <NAME> & <NAME> (Eds.), Elements of adaptive testing (pp. 3 – 30).
New York, NY: Springer.
<NAME> Linden, <NAME>. (1998). Bayesian item selection criteria for adaptive testing. Psychometrika,
63, 201 – 216.
<NAME>., & <NAME>. (1997). Some new item selection criteria for adaptive testing.
Journal of Educational and Behavioral Statistics, 22, 203 – 226.
See Also
catIrt, FI, KL, mleEst, simIrt
Examples
#########################
# Binary Response Model #
#########################
## Not run:
set.seed(888)
# generating an item bank under a binary response model:
b.params <- cbind(a = runif(100, .5, 1.5), b = rnorm(100, 0, 2), c = .2)
# simulating responses using default theta:
b.mod <- simIrt(theta = 0, params = b.params, mod = "brm")
# separating the items into "administered" and "not administered":
left_par <- b.mod$params[1:95, ]
cat_par <- b.mod$params[96:100, ]
cat_resp <- b.mod$resp[ , 96:100]
# running simIrt automatically adds the item numbers to the front!
# attempting each item selection algorithm (except random):
uwfi.it <- itChoose(left_par = left_par, mod = "brm",
numb = 1, n.select = 1,
cat_theta = 0,
select = "UW-FI",
at = "theta")
lwfi.it <- itChoose(left_par = left_par, mod = "brm",
numb = 1, n.select = 1,
cat_par = cat_par, cat_resp = cat_resp,
select = "LW-FI")
pwfi.it <- itChoose(left_par = left_par, mod = "brm",
numb = 1, n.select = 1,
cat_par = cat_par, cat_resp = cat_resp,
select = "PW-FI", ddist = dnorm, mean = 0, sd = 1)
fpkl.it <- itChoose(left_par = left_par, mod = "brm",
numb = 1, n.select = 1,
cat_theta = 0,
select = "FP-KL",
at = "theta", delta = 1.96)
vpkl.it <- itChoose(left_par = left_par, mod = "brm",
numb = 1, n.select = 1,
cat_par = cat_par, cat_theta = 0,
select = "VP-KL",
at = "theta", delta = 1.96)
fikl.it <- itChoose(left_par = left_par, mod = "brm",
numb = 1, n.select = 1,
cat_theta = 0,
select = "FI-KL",
at = "theta", delta = 1.96)
vikl.it <- itChoose(left_par = left_par, mod = "brm",
numb = 1, n.select = 1,
cat_par = cat_par, cat_theta = 0,
select = "VI-KL",
at = "theta", delta = 1.96)
# which items were the most popular?
uwfi.it$params # 61 (b close to 0)
lwfi.it$params # 55 (b close to -2.5)
pwfi.it$params # 16 (b close to -0.5)
fpkl.it$params # 61 (b close to 0)
vpkl.it$params # 61 (b close to 0)
fikl.it$params # 16 (b close to -0.5)
vikl.it$params # 16 (b close to -0.5)
# if we pick the top 10 items for "FI-KL":
fikl.it2 <- itChoose(left_par = left_par, mod = "brm",
numb = 10, n.select = 10,
cat_theta = 0,
select = "FI-KL",
at = "theta", delta = 1.96)
# we find that item 61 is the third best item
fikl.it2$params
# why did "LW-FI" pick an item with a strange difficulty?
cat_resp
# because cat_resp is mostly 0 ...
# --> so the likelihood is weighted toward negative numbers.
#########################
# Graded Response Model #
#########################
set.seed(999)
# generating an item bank under a graded response model:
g.params <- cbind(runif(100, .5, 1.5), rnorm(100), rnorm(100),
rnorm(100), rnorm(100), rnorm(100))
# simulating responses (so that the parameters are ordered - see simIrt)
left_par <- simIrt(theta = 0, params = g.params, mod = "grm")$params
# now we can choose the best item for theta = 0 according to FI:
uwfi.it2 <- itChoose(left_par = left_par, mod = "grm",
numb = 1, n.select = 1,
cat_theta = 0,
select = "UW-FI",
at = "theta")
uwfi.it2
## End(Not run)
KL Calculate Kullback-Leibler Divergence for IRT Models
Description
KL calculates the IRT implementation of Kullback-Leibler divergence for various IRT models given
a vector of ability values, a vector/matrix of item responses, an IRT model, and a value indicating
the half-width of an indifference region.
Usage
KL( params, theta, delta = .1 )
## S3 method for class 'brm'
KL( params, theta, delta = .1 )
## S3 method for class 'grm'
KL( params, theta, delta = .1 )
Arguments
params numeric: a vector or matrix of item parameters. If specified as a matrix, the
rows must index the items, and the columns must designate the item parame-
ters. Furthermore, if calculating expected information, the number of rows must
match the number of columns of resp. The class of params must match the
model: either ‘"brm"’ or ‘"grm"’. For the binary response model, params must
either be a 3-dimensional vector or a 3-column matrix. See Details for more
information.
theta numeric: a vector of ability values, one for each simulee. When performing
a classification CAT, theta should be the boundary points for which to choose
the next item.
delta numeric: a scalar or vector indicating the half-width of the indifference region. KL will
estimate the divergence between θ − δ and θ + δ using θ + δ as the "true model."
If delta is a vector, then KL will use recycling to make the length of theta and
delta match. See Details for more information.
Details
The function KL returns item divergence and test divergence for the binary response model (‘"brm"’)
and the graded response model (‘"grm"’). KL-divergence is defined as the following:
KL(θ2 || θ1) = Eθ2 log[ L(θ2) / L(θ1) ]
where L(θ) stands for the likelihood of θ. Essentially, KL-divergence is the expected log-likelihood
gain when using the true model in place of an alternative model.
For the binary response model, KL-divergence for an item simplifies to the following:
KLj(θ2 || θ1) = pj(θ2) log[ pj(θ2)/pj(θ1) ] + [1 − pj(θ2)] log[ (1 − pj(θ2))/(1 − pj(θ1)) ]
where pij is the probability of response, as indicated in the help page for simIrt.
For the graded response model, KL-divergence for an item simplifies to the following:
KLj(θ2 || θ1) = Σk Pjk(θ2) log[ Pjk(θ2)/Pjk(θ1) ]
where Pjk (θ2 ) is the probability of θ2 responding in category k as indicated in the help page for
simIrt. See Eggen (1999) as applied to classification CAT and van der Linden and Pashley (2010)
more generally.
Because of the properties of likelihood functions in item response models, test information is simply
the sum of the item informations, or:
KL(θ2 || θ1) = Σj KLj(θ2 || θ1)
KL is frequently used to select items in a classification CAT where the hypotheses (e.g., being in one
category versus another category) are well defined. If "being in the upper category" is θ2 and "being
in the lower category" is θ1 , then θ2 = B + δ and θ1 = B − δ where B is the boundary separating
the lower category from the upper category. Conversely, if using KL to select items in a precision
CAT, then θ2 = θ̂i + δ and θ1 = θ̂i where θ̂i is the current, best estimate of θ. See catIrt for more
information.
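As an illustration (a sketch, not from the package documentation), selecting items at a classification bound B with half-width delta amounts to maximizing the divergence between B + delta and B − delta over the remaining items:
set.seed(1)
B <- 0; delta <- .1
bank <- cbind(a = runif(50, .5, 1.5), b = rnorm(50), c = .2)
class(bank) <- "brm"
k <- KL(params = bank, theta = B, delta = delta)
bank[which.max(k$item), ] # the item a pointwise KL criterion would favor at B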
Value
KL, KL.brm, and KL.grm return a list of the following elements:
item either: (1) a N × J matrix of item information for each simulee to each item;
(2) a J-length vector of item information for one simulee to each item; or (3) an
N -length vector of item information for all simulees to one item, depending on
the dimensions of params, theta, annd delta.
test an N -length vector of test information, one for each simulee. Test information
is the sum of item information across items. See Details for more information.
Note
Kullback-Leibler divergence in IRT is not true KL divergence, as the expectation is with respect to
a model that is not necessarily true. Furthermore, it is not symmetric, as KL(θ1 || θ2) ≠ KL(θ2 || θ1).
There have been other KL-based item selection measures proposed, including global information.
See Eggen (1999) and itChoose.
Author(s)
<NAME> <<EMAIL>>
References
Eggen, <NAME>. (1999). Item selection in adaptive testing with the sequential probability ratio
test. Applied Psychological Measurement, 23, 249 – 261.
<NAME>., & <NAME>. (1951). On information and sufficiency. The Annals of Mathematical
Statistics, 22, 79 – 86.
<NAME>, <NAME>. & <NAME>. (2010). Item selection and ability estimation in adaptive
testing. In <NAME> & <NAME> (Eds.), Elements of Adaptive Testing. New York,
NY: Springer.
See Also
catIrt, FI, itChoose, simIrt
Examples
#########################
# Binary Response Model #
#########################
## 1 ##
set.seed(888)
# generating random theta:
theta <- rnorm(20)
# generating an item bank under a 3-parameter binary response model:
b.params <- cbind(a = runif(100, .5, 1.5), b = rnorm(100, 0, 2), c = .2)
# you can indicate class of params or extract it from simIrt object:
class(b.params) <- "brm"
# calculating KL information with delta = .1:
k.info1 <- KL(params = b.params, theta = theta, delt = .1)
# changing delta to .2
k.info2 <- KL(params = b.params, theta = theta, delt = .2)
# notice how the overall information has increased when increasing delt:
k.info1$test; k.info2$test
# also compare with Fisher information:
f.info <- FI(params = b.params, theta = theta, type = "expected")
k.info2$test; f.info$test
# Fisher information is much higher because of how it weighs things.
## 2 ##
# we can maximize information at a boundary - say "0":
k.info3 <- KL(params = b.params, theta = 0, delta = .1)
b.params[which.max(k.info3$item), ]
# notice how the a parameter is high while the b parameter is close to
# 0, so item selection is working.
# does Fisher information choose a different item?
f.info2 <- FI(params = b.params, theta = 0, type = "expected")
b.params[which.max(f.info2$item), ]
# nope - although with more items, who knows?
#########################
# Graded Response Model #
#########################
## 1 ##
set.seed(999)
# generating random theta
theta <- rnorm(20)
# generating an item bank under a graded response model:
g.params <- cbind(runif(100, .5, 1.5), rnorm(100), rnorm(100),
rnorm(100), rnorm(100), rnorm(100))
# simulating responses (so that the parameters are ordered - see simIrt)
g.params <- simIrt(theta = theta, params = g.params, mod = "grm")$params[ , -1]
# we can calculate KL information as before, noting that class(g.params) is "grm"
class(g.params) # so we don't need to set it ourselves
# and now KL info with delt = .1
k.info4 <- KL(theta = theta, params = g.params)
# KL information is higher when more boundaries
k.info4$test
k.info1$test
# Note: k.info1 would be exactly the same if calculated with the "grm"
# rather than the "brm"
## 2 ##
# we can also maximize information at boundary "0"
k.info5 <- KL(params = g.params, theta = 0, delta = .1)
g.params[which.max(k.info5$item), ]
# notice how the a parameter is high while the b parameters are pretty spread out.
# does Fisher information choose a different item?
f.info3 <- FI(params = g.params, theta = 0, type = "expected")
g.params[which.max(f.info3$item), ]
# nope - although with more items, who knows?
mleEst Estimate Ability in IRT Models
Description
mleEst, wleEst, bmeEst, and eapEst estimate ability in IRT models. mleEst is Maximum Likelihood
Estimation, wleEst is Weighted Likelihood Estimation (see Details), bmeEst is Bayesian-Modal
Estimation, and eapEst is Expected A Posteriori Estimation.
Usage
mleEst( resp, params, range = c(-6, 6), mod = c("brm", "grm"), ... )
wleEst( resp, params, range = c(-6, 6), mod = c("brm", "grm"), ... )
bmeEst( resp, params, range = c(-6, 6), mod = c("brm", "grm"),
ddist = dnorm, ... )
eapEst( resp, params, range = c(-6, 6), mod = c("brm", "grm"),
ddist = dnorm, quad = 33, ... )
Arguments
resp numeric: either a N × J matrix (where N indicates the number of simulees and
J indicates the number of items), a N length vector (if there is only one item) or
a J length vector (if there is only one simulee). For the binary response model
(‘"brm"’), resp must solely contain 0s and 1s. For the graded response model
(‘"grm"’), resp must solely contain integers 1, . . . , K, where K is the number
of categories, as indicated by the dimension of params.
params numeric: a vector or matrix of item parameters. If specified as a matrix, the
rows must index the items, and the columns must designate the item parameters.
range numeric: a two-element numeric vector indicating the minimum and maximum
over which to optimize a likelihood function (mleEst) or posterior distribution
(bmeEst), find roots to a score function (wleEst), or integrate over (eapEst).
mod character: a character string indicating the IRT model. Current support is for
the 3-parameter binary response model (‘"brm"’), and Samejima’s graded re-
sponse model (‘"grm"’). See simIrt for more information.
ddist function: a function that calculates prior densities for Bayesian estimation. For
instance, if you wish to specify a normal prior, ddist = dnorm, and if you wish
to specify a uniform prior, ddist = dunif. Note that it is standard in R to use
d. . . to indicate a density.
quad numeric: a scalar indicating the number of quadrature points when using eapEst.
See Details for more information.
... arguments passed to ddist, usually distribution parameters identified by name.
Details
These functions return estimated "ability" for the binary response model (‘"brm"’) and the graded
response model (‘"grm"’). The only difference between the functions is how they estimate ability.
The function mleEst searches for a maximum of the log-likelihood with respect to each individual
θi and uses [T(θ)]^(−1/2) as the corresponding standard error of measurement (SEM), where T(θ) is
the observed test information function at θ, as described in FI.
The function bmeEst searches for the maximum of the log-likelihood after a log-prior is added,
which effectively maximizes the posterior distribution for each individual θi . The SEM of the
bmeEst estimator uses the well known relationship (Keller, 2000, p. 10)
V[θ | ui]^(−1) = T(θ) − ∂² log[p(θ)]/∂θ²
where V[θ | ui] is the variance of θ after taking into consideration the prior distribution and p(θ)
is the prior distribution of θ. The function bmeEst estimates the second derivative of the prior
distribution using the hessian function in the numDeriv package.
The function wleEst searches for the root of a modified score function (i.e. the first derivative of the
log-likelihood with something added to it). The modification corrects for bias in fixed length tests,
and estimation using this modification results in what is called Weighted Maximum Likelihood (or
alternatively, the Warm estimator) (see Warm, 1989). So rather than maximizing the likelihood,
wleEst finds a root of:
∂l(θ)/∂θ + H(θ)/[2 I(θ)]
where l(θ) is the log-likelihood of θ given a set of responses and item parameters, I(θ) is expected
test information to this point, and H(θ) is a correction constant defined as:
H(θ) = Σj p′ij p″ij / [pij (1 − pij)]
for the binary response model, where p′ij is the first derivative of pij with respect to θ, p″ij is the
second derivative of pij with respect to θ, and pij is the probability of response, as indicated in the
help page for simIrt, and
H(θ) = Σj Σk P′ijk P″ijk / Pijk
for the graded response model, where P′ijk is the first derivative of Pijk with respect to θ, P″ijk is
the second derivative of Pijk, and Pijk is the probability of responding in category k as indicated in
the help page for simIrt. The SEM of the wleEst estimator uses an approximation based on Warm
(1989, p. 449):
V(θ) ≈ [I(θ) + H(θ)/(2 I(θ))] / I(θ)²
The function eapEst finds the mean and standard deviation of the posterior distribution given the
log-likelihood, a prior distribution (with specified parameters), and the number of quadrature points
using the standard Bayesian identity with summations in place of integrations (see Bock and Mis-
levy, 1982). Rather than using the adaptive, quadrature based integrate, eapEst uses the flexible
integrate.xy function in the sfsmisc package. As long as the prior distribution is reasonable
(such that the joint distribution is relatively smooth), this method should work.
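As a rough sketch (not from the package documentation), the maximum likelihood step can be reproduced for a single simulee by directly maximizing a brm log-likelihood, assuming the logistic form used by simIrt:
set.seed(1)
a <- runif(30, .5, 1.5); b <- rnorm(30); cc <- rep(.2, 30)
p <- function(th) cc + (1 - cc) / (1 + exp(-a * (th - b)))
u <- rbinom(30, 1, p(0.7)) # simulated responses at theta = 0.7
loglik <- function(th) sum(u * log(p(th)) + (1 - u) * log(1 - p(th)))
optimize(loglik, interval = c(-6, 6), maximum = TRUE)$maximum
# should be close to mleEst(resp = u, params = cbind(a, b, cc), mod = "brm")$theta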
Value
mleEst, wleEst, bmeEst, and eapEst return a list of the following elements:
theta an N -length vector of ability values, one for each simulee.
info an N -length vector of observed test information, one for each simulee. Test
information is the sum of item information across items. See FI for more infor-
mation.
sem an N -length vector of observed standard error of measurement (or posterior
standard deviation) for each simulee. See FI for more information.
Note
For the binary response model (‘"brm"’), it makes no sense to estimate ability with a non-mixed
response pattern (all 0s or all 1s). The user might want to include enough items in the model to
allow for reasonable estimation.
Weighted likelihood estimation (wleEst) uses uniroot to find the root of the modified score func-
tion, so that the end points of ‘range’ must evaluate to opposite signs (or zero). Rarely, the end
points of ‘range’ will evaluate to the same sign, so that uniroot will error. In these cases, uniroot
will extend the interval until the end points of the (modified) range are opposite signs.
Author(s)
<NAME> <<EMAIL>>
References
<NAME>., & <NAME>. (1982). Adaptive EAP estimation of ability in a microcomputer
environment. Applied Psychological Measurement, 6, 431 – 444.
<NAME>., & <NAME>. (2000). Item Response Theory for Psychologists. Mahway, NJ:
Lawrence Erlbaum Associates.
Keller (2000). Ability estimation procedures in computerized adaptive testing (Technical Report).
New York, NY: American Institute of Certified Public Accountants.
<NAME>. (1989). Weighted likelihood estimation of ability in item response theory. Psychome-
trika, 54, 427 – 450.
<NAME>, <NAME>. & <NAME>. (2010). Item selection and ability estimation in adaptive
testing. In <NAME> & <NAME> (Eds.), Elements of Adaptive Testing. New York,
NY: Springer.
See Also
catIrt, simIrt, hessian, uniroot
Examples
## Not run:
#########################
# Binary Response Model #
#########################
set.seed(888)
# generating random theta:
theta <- rnorm(201)
# generating an item bank under a 2-parameter binary response model:
b.params <- cbind(a = runif(100, .5, 1.5), b = rnorm(100, 0, 2), c = 0)
# simulating responses using specified theta:
b.resp <- simIrt(theta = theta, params = b.params, mod = "brm")$resp
# estimating theta using all four methods:
est.mle1 <- mleEst(resp = b.resp, params = b.params, mod = "brm")$theta
est.wle1 <- wleEst(resp = b.resp, params = b.params, mod = "brm")$theta
est.bme1 <- bmeEst(resp = b.resp, params = b.params, mod = "brm",
ddist = dnorm, mean = 0, sd = 1)$theta
est.eap1 <- eapEst(resp = b.resp, params = b.params, mod = "brm",
ddist = dnorm, mean = 0, sd = 1, quad = 33)$theta
# eap takes a while!
# all of the methods are highly correlated:
cor(cbind(theta = theta, mle = est.mle1, wle = est.wle1,
bme = est.bme1, eap = est.eap1))
# you can force eap to be positive:
est.eap2 <- eapEst(resp = b.resp, params = b.params, range = c(0, 6),
mod = "brm", ddist = dunif, min = 0, max = 6)$theta
est.eap2
# if you only have a single response, MLE will give junk!
mleEst(resp = 0, params = c(1, 0, .2), mod = "brm")$theta
# the others will give you answers that are not really determined by the response:
wleEst(resp = 0, params = c(1, 0, .2), mod = "brm")$theta
bmeEst(resp = 0, params = c(1, 0, .2), mod = "brm")$theta
eapEst(resp = 0, params = c(1, 0, .2), mod = "brm")$theta
#########################
# Graded Response Model #
#########################
set.seed(999)
# generating random theta
theta <- rnorm(400)
# generating an item bank under a graded response model:
g.params <- cbind(a = runif(100, .5, 1.5), b1 = rnorm(100), b2 = rnorm(100), b3 = rnorm(100), b4 = rnorm(100))
# simulating responses using random theta:
g.mod <- simIrt(params = g.params, theta = theta, mod = "grm")
# pulling out the responses and the parameters:
g.params2 <- g.mod$params[ , -1] # now the parameters are sorted
g.resp2 <- g.mod$resp
# estimating theta using all four methods:
est.mle3 <- mleEst(resp = g.resp2, params = g.params2, mod = "grm")$theta
est.wle3 <- wleEst(resp = g.resp2, params = g.params2, mod = "grm")$theta
est.bme3 <- bmeEst(resp = g.resp2, params = g.params2, mod = "grm",
ddist = dnorm, mean = 0, sd = 1)$theta
est.eap3 <- eapEst(resp = g.resp2, params = g.params2, mod = "grm",
ddist = dnorm, mean = 0, sd = 1, quad = 33)$theta
# and the correlations are still pretty high:
cor(cbind(theta = theta, mle = est.mle3, wle = est.wle3,
bme = est.bme3, eap = est.eap3))
# note that the graded response model is just a generalization of the brm:
cor(est.mle1, mleEst(resp = b.resp + 1, params = b.params[ , -3], mod = "grm")$theta)
cor(est.wle1, wleEst(resp = b.resp + 1, params = b.params[ , -3], mod = "grm")$theta)
cor(est.bme1, bmeEst(resp = b.resp + 1, params = b.params[ , -3], mod = "grm")$theta)
cor(est.eap1, eapEst(resp = b.resp + 1, params = b.params[ , -3], mod = "grm")$theta)
## End(Not run)
simIrt Simulate Responses to IRT Models
Description
simIrt simulates responses to various IRT models given a vector of ability values and a vec-
tor/matrix of item parameters.
Usage
simIrt( theta = seq(-3, 3, by = 0.1), params, mod = c("brm", "grm") )
Arguments
theta numeric: a vector of ability values, one for each simulee.
params numeric: a vector or matrix of item parameters. If specified as a matrix, the
rows must index the items, and the columns must designate the item parameters.
For the binary response model, (‘"brm"’), params must either be a 3-element
vector or a 3-column matrix. See Details for more information.
mod character: a character string indicating the IRT model. Current support is for
the 3-parameter binary response model (‘"brm"’), and Samejima’s graded re-
sponse model (‘"grm"’). The contents of params must match the designation of
mod. See Details for more information.
Details
The function simIrt returns a response matrix of class "brm" or "grm" depending on the model.
For the binary response model, the probability of endorsing item j for simulee i is the following
(Embretson & Reise, 2000):
pij = Pr(uij = 1 | θi, aj, bj, cj) = cj + (1 − cj) / (1 + exp[−aj(θi − bj)])
For the graded response model, the probability of endorsing at or above boundary k of item j for
simulee i is the following:
pijk = Pr(uij ≥ k | θi, aj, bk) = 1 / (1 + exp[−aj(θi − bk)])
so that the probability of scoring in category k is Pijk = Pr(uij = k | θi, aj, b) = 1 − pijk if k = 1;
pijk if k = K; and pij(k−1) − pijk otherwise, where K is the number of categories, so that K − 1
is the number of boundaries.
Assuming perfect model fit, simIrt generates the probability of responding in a category, simu-
lates a random, uniform deviate, and compares the probability of response with the location of the
deviate. For instance, for the binary response model, if pij = .7, so that qij = 1 − pij = .3, simIrt
will generate a uniform deviate (uij ) between 0 and 1. If uij < pij , the simulee will score a 1, and
otherwise, the simulee will score a 0.
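The sampling step described above can be sketched for a single binary item as follows (an illustration only, assuming the logistic form given above):
set.seed(42)
theta <- 0.3; a <- 1; b <- -0.2; c <- 0.2
p_ij <- c + (1 - c) / (1 + exp(-a * (theta - b)))
u_ij <- runif(1)
as.numeric(u_ij < p_ij) # 1 if the deviate falls below p_ij, otherwise 0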
Value
The function simIrt returns a list of the following elements:
resp a matrix of class "brm" or "grm" depending on the model used. The dimensions
of the matrix will be N × J (persons by items), and will contain 0s and 1s for
the binary response model or 1 . . . K for the graded response model, where K
indicates the number of categories.
params a matrix of class "brm" or "grm" containing the item parameters used in the
simulation. In the case of "grm", the threshold parameters will be ordered so
that they will work in other functions.
theta a vector of theta used in the simulation. If theta is not specified by the user, it
will default to a 201-length vector of evenly spaced points between -3 and 3.
Author(s)
<NAME> <<EMAIL>>
References
<NAME>., & <NAME>. (2000). Item Response Theory for Psychologists. Mahway, NJ:
Lawrence Erlbaum Associates.
Samejima, F. (1969). Estimation of latent ability using a response pattern of graded scores. Psy-
chometrika Monograph Supplement, 34, 100 – 114.
<NAME>, <NAME>. & <NAME>. (2010). Handbook of Modern Item Response Theory.
New York, NY: Springer.
See Also
catIrt
Examples
#########################
# Binary Response Model #
#########################
set.seed(888)
# generating an item bank under a binary response model:
b.params <- cbind(a = runif(100, .5, 1.5), b = rnorm(100, 0, 2), c = .2)
# simulating responses using default theta:
b.mod <- simIrt(params = b.params, mod = "brm")
# same type of model without a guessing (c) parameter:
b.params2 <- cbind(a = runif(100, .5, 1.5), b = rnorm(100, 0, 2), c = 0)
b.mod2 <- simIrt(params = b.params2, mod = "brm")
# now generating a different theta:
theta <- rnorm(201)
b.mod3 <- simIrt(theta = theta, params = b.params2, mod = "brm")
# notice all of the responses are 0 or 1:
unique(as.vector(b.mod$resp))
# and the percentages (in general) increase as theta increases:
apply(b.mod$resp, 1, mean) # theta = seq(-3, 3, by = 0.1)
#########################
# Graded Response Model #
#########################
set.seed(999)
# generating an item bank under a graded response model:
# (as many categories as your heart desires!)
g.params <- cbind(a = runif(10, .5, 1.5), b1 = rnorm(10), b2 = rnorm(10), b3 = rnorm(10), b4 = rnorm(10))
# simulating responses using default theta (automatically sorts boundaries):
g.mod <- simIrt(params = g.params, mod = "grm")
# notice how the old parameters were not sorted:
g.params
# but the new parameters are sorted from column 2 on:
g.mod$params
# don't use these parameters with the binary response model:
try(simIrt(params = g.params, mod = "brm"), silent = TRUE)[1]
# a better parameter set for the graded response model:
g.params2 <- cbind(runif(100, .5, 1.5), b1 = runif(100, -2, -1), b2 = runif(100, -1, 0),
b3 = runif(100, 0, 1), b4 = runif(100, 1, 2))
g.mod2 <- simIrt(params = g.params2, mod = "grm")
# notice all of the responses are positive integers:
unique(as.vector(g.mod$resp))
unique(as.vector(g.mod2$resp))
# and the responses (in general) increase as theta increases:
apply(g.mod2$resp, 1, mean)
puppeteer_pdf | hex | Erlang
PuppeteerPdf
===
This is a wrapper for the NodeJS module [puppeteer-pdf](https://www.npmjs.com/package/puppeteer-pdf). After some attempts to use wkhtmltopdf through [pdf_generator](https://github.com/gutschilla/elixir-pdf-generator), I've decided to use other software to generate PDFs and create a wrapper for it.
Puppeteer PDF vs wkhtmltopdf
---
I've written a [small blog post](https://coletiv.com/blog/elixir-pdf-generation-puppeteer-wkhtmltopdf/) where I explain my reasons for creating this extension. Here is the list of pros and cons compared with the `pdf_generator` module.
###
Disadvantages
* Bigger PDF file size
* NodeJS 8+ needed
* Chromium Browser needed
###
Advantages
* Display-independent rendering (easier to test how the template will look).
* Fewer rendering issues.
Installation
---
Install `puppeteer-pdf` via npm, with the following command:
```
npm i puppeteer-pdf -g
```
In some cases you will need to install these extra dependencies. Here is an example for Debian-based distributions.
```
sudo apt-get install libxss1 lsof libasound2 libnss3
```
On your elixir project, you just need to add the following dependency:
```
def deps do
[
{:puppeteer_pdf, "~> 1.0.3"}
]
end
```
If you have the older `applications` structure inside `mix.exs`, you need to add `:briefly` to it. If you have `extra_applications`, you don't need to do anything.
###
Troubleshooting
If for some reason it doesn't automatically download Chromium, it will give you the following error:
```
(node:14878) UnhandledPromiseRejectionWarning: Error: Chromium revision is not downloaded. Run "npm install" or "yarn install"
at Launcher.launch (/usr/local/lib/node_modules/puppeteer-pdf/node_modules/puppeteer/lib/Launcher.js:119:15)
at <anonymous>
(node:14878) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). (rejection id: 1)
(node:14878) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.
```
To solve this, execute the following commands to copy Chromium from the `puppeteer` folder to the `puppeteer` folder inside `puppeteer-pdf`.
On OSX and Linux systems should be the following commands:
```
npm i puppeteer -g # This should install chromium
cp -R /usr/local/lib/node_modules/puppeteer/.local-chromium/ /usr/local/lib/node_modules/puppeteer-pdf/node_modules/puppeteer/
```
If you have issues related to this, please comment on [this issue](https://github.com/coletiv/puppeteer-pdf/issues/13).
How to use
---
###
Initial
These are the options available right now:
```
options = [
margin_left: 40,
margin_right: 40,
margin_top: 40,
margin_bottom: 150,
format: "A4",
print_background: true,
header_template: header_html, # Support both file and html
footer_template: footer_html,
display_header_footer: true,
debug: true,
timeout: 10000 # value passed directly to Task.await/2. (Defaults to 5000)
]
```
And to generate the PDF you can use the following code using Phoenix Template:
```
# Get template to be rendered. Note that the full filename is "invoice.html.eex", but the ".eex" is not needed here.
html = Phoenix.View.render_to_string(
MyApp.View,
"pdf/invoice.html",
assigns
)
# Get full path to generated pdf file
pdf_path = Path.absname("invoice.pdf")
case PuppeteerPdf.Generate.from_string(html, pdf_path, options) do
{:ok, _} -> ...
{:error, message} -> ...
end
```
Or just with HTML file:
```
html_path = Path.absname("random.html")
case PuppeteerPdf.Generate.from_file(html_path, pdf_path, options) do
{:ok, _} -> ...
{:error, message} -> ...
end
```
###
Using header and footer templates
You can define an HTML header and footer using the `header_template` and `footer_template` options.
To use a file, use the following format: `file:///home/user/file.html`.
Don't forget to also include `display_header_footer` to `true`.
###
Support special characters
If you see weird characters printed in a language that has special characters (like German, Chinese, Russian, ...), define the charset as follows:
```
<head>
<meta charset="UTF-8">
...
```
###
Use images or fonts
You can use custom images or text fonts using the following Elixir code that defines the full path to the file. This should be passed in the `assigns` variable when rendering the template, as explained above.
```
font1_path = "#{:code.priv_dir(:myapp)}/static/fonts/font1.otf"
```
On template style:
```
@font-face {
font-family: 'GT-Haptik';
src: url(<%= @font1_path %>) format("opentype");
font-weight: 100;
}
```
###
Configure execution path
To configure the path to the `puppeteer-pdf` executable, add the following to your config:
```
config :puppeteer_pdf, exec_path: "/usr/local/bin/puppeteer-pdf"
```
Or you can use system environment variable:
```
export PUPPETEER_PDF_PATH=/usr/local/bin/puppeteer-pdf
```
For development purposes when working on this project, you can set the `PUPPETEER_PDF_PATH`
environment variable to point to the `puppeteer-pdf` executable. **Do not attempt to use this env var to set the path in production. Instead, use the application configuration, above.**
Continuous Integration / Continuous Deployment
---
If you use CI, install `puppeteer-pdf` in a before script, for example:
```
before_script:
- nvm install 8
- npm i puppeteer-pdf -g
```
###
Docker File
If you are deploying a project with Docker and using this module, this is a working Dockerfile configuration.
You can find instructions on how to deploy this with an `alpine` Docker image in [this issue](https://github.com/coletiv/puppeteer-pdf/issues/24).
This Docker file use a two stage building, with a Debian operative system.
```
#
# Stage 1
#
FROM elixir:1.8.2-slim as builder

ENV MIX_ENV=prod
WORKDIR /myapp

# Umbrella
COPY mix.exs mix.lock ./
COPY config config

RUN mix local.hex --force && \
    mix local.rebar --force

# App
COPY lib lib

# Image / Font files if you need for your PDF document
COPY priv priv

RUN mix do deps.get, deps.compile

WORKDIR /myapp
COPY rel rel
RUN mix release --env=prod --verbose
#
# Stage 2
#
FROM node:10-slim
# Install latest chrome dev package and fonts to support major charsets (Chinese, Japanese, Arabic, Hebrew, Thai and a few others)
# Note: this installs the necessary libs to make the bundled version of Chromium that Puppeteer
# installs, work.
RUN wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add - \
&& sh -c 'echo "deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google.list' \
&& apt-get update \
&& apt-get install -y google-chrome-unstable fonts-ipafont-gothic fonts-wqy-zenhei fonts-thai-tlwg fonts-kacst ttf-freefont \
--no-install-recommends \
&& rm -rf /var/lib/apt/lists/*
# If running Docker >= 1.13.0 use docker run's --init arg to reap zombie processes, otherwise
# uncomment the following lines to have `dumb-init` as PID 1
# ADD https://github.com/Yelp/dumb-init/releases/download/v1.2.0/dumb-init_1.2.0_amd64 /usr/local/bin/dumb-init
# RUN chmod +x /usr/local/bin/dumb-init
# ENTRYPOINT ["dumb-init", "--"]
# Uncomment to skip the chromium download when installing puppeteer. If you do,
# you'll need to launch puppeteer with:
# browser.launch({executablePath: 'google-chrome-unstable'})
# ENV PUPPETEER_SKIP_CHROMIUM_DOWNLOAD true
ENV MIX_ENV=prod \
SHELL=/bin/bash
# Install puppeteer so it's available in the container.
RUN npm i puppeteer-pdf \
# Add user so we don't need --no-sandbox.
# same layer as npm install to keep re-chowned files from using up several hundred MBs more space
&& groupadd -r pptruser && useradd -r -g pptruser -G audio,video pptruser \
&& mkdir -p /home/pptruser/Downloads \
&& chown -R pptruser:pptruser /home/pptruser \
&& chown -R pptruser:pptruser /node_modules \
&& mkdir /myapp \
&& chown -R pptruser:pptruser /myapp
# Run everything after as non-privileged user.
USER pptruser
WORKDIR /myapp

COPY --from=builder /myapp/_build/prod/rel/myapp/releases/0.1.0/myapp.tar.gz .
RUN tar zxf myapp.tar.gz && rm myapp.tar.gz

CMD ["/myapp/bin/myapp", "foreground"]
```
Puppeteer PDF v1.0.4 CommandHelper
===
Summary
===
[Functions](#functions)
---
[cmd(exec_path, params)](#cmd/2)
Functions
===
Puppeteer PDF v1.0.4 PuppeteerPdf
===
Wrapper library for NodeJS binary puppeteer-pdf.
Summary
===
[Functions](#functions)
---
[get_exec_version()](#get_exec_version/0)
Get puppeteer-pdf binary version
[is_pdf(file)](#is_pdf/1)
Verify if the file generated is a valid PDF file
Functions
===
```
get_exec_version() :: String.t()
```
Get puppeteer-pdf binary version
```
is_pdf(String.t()) :: boolean()
```
Verify if the file generated is a valid PDF file
Puppeteer PDF v1.0.4 PuppeteerPdf.Generate
===
Generate a PDF file from multiple available sources.
Summary
===
[Functions](#functions)
---
[from_file(html_file_path, pdf_output_path, options \\ [])](#from_file/3)
Generate PDF file with an HTML file path given as input
[from_string(html_code, pdf_output_path, options \\ [])](#from_string/3)
Generate PDF file given an HTML string input
Functions
===
```
from_file(String.t(), String.t(), list()) ::
  {:ok, String.t()} | {:error, atom()}
```
Generate PDF file with an HTML file path given as input.
Options
---
* `header_template` - HTML template for the print header.
* `footer_template` - HTML template for the print footer.
* `display_header_footer` - Display header and footer.
* `format` - Page format. Possible values: Letter, Legal, Tabloid, Ledger, A0, A1, A2, A3, A4, A5, A6
* `margin_left` - Integer value (px)
* `margin_right` - Integer value (px)
* `margin_top` - Integer value (px)
* `margin_bottom` - Integer value (px)
* `scale` - Scale of the webpage rendering. (default: 1). Accept values between 0.1 and 2.
* `width` - Paper width, accepts values labeled with units.
* `height` - Paper height, accepts values labeled with units.
* `debug` - Output Puppeteer PDF options
* `landscape` - Paper orientation.
* `print_background` - Print background graphics.
* `timeout` - Integer value (ms), configures the timeout of the PDF creation (defaults to 5000)
* `wait_until` - :load, :domcontentloaded, :networkidle0, :networkidle2
```
from_string(String.t(), String.t(), list()) ::
  {:ok, String.t()} | {:error, atom()}
```
Generate PDF file given an HTML string input
Options
---
* `header_template` - HTML template for the print header.
* `footer_template` - HTML template for the print footer.
* `display_header_footer` - Display header and footer.
* `format` - Page format. Possible values: Letter, Legal, Tabloid, Ledger, A0, A1, A2, A3, A4, A5, A6
* `margin_left` - Integer value or string with one of the supported units (px, in, mm, cm)
* `margin_right` - Integer value or string with one of the supported units (px, in, mm, cm)
* `margin_top` - Integer value or string with one of the supported units (px, in, mm, cm)
* `margin_bottom` - Integer value or string with one of the supported units (px, in, mm, cm)
* `scale` - Scale of the webpage rendering. (default: 1). Accept values between 0.1 and 2.
* `width` - Paper width, accepts values labeled with units.
* `height` - Paper height, accepts values labeled with units.
* `debug` - Output Puppeteer PDF options
* `landscape` - Paper orientation.
* `print_background` - Print background graphics.
* `timeout` - Integer value (ms), configures the timeout of the PDF creation (defaults to 5000)
github.com/rotisserie/eris | go | Go | README
---
### eris
[![GoDoc](https://pkg.go.dev/badge/github.com/rotisserie/eris)](https://pkg.go.dev/github.com/rotisserie/eris) [![Build](https://github.com/rotisserie/eris/workflows/build/badge.svg)](https://github.com/rotisserie/eris/actions) [![GoReport](https://goreportcard.com/badge/github.com/rotisserie/eris)](https://goreportcard.com/report/github.com/rotisserie/eris) [![Coverage Status](https://codecov.io/gh/rotisserie/eris/branch/master/graph/badge.svg)](https://codecov.io/gh/rotisserie/eris)
Package `eris` is an error handling library with readable stack traces and JSON formatting support.
`go get github.com/rotisserie/eris`
* [Why you should switch to eris](#readme-why-you-should-switch-to-eris)
* [Using eris](#readme-using-eris)
+ [Creating errors](#readme-creating-errors)
+ [Wrapping errors](#readme-wrapping-errors)
+ [Formatting and logging errors](#readme-formatting-and-logging-errors)
+ [Interpreting eris stack traces](#readme-interpreting-eris-stack-traces)
+ [Inverting the stack trace and error output](#readme-inverting-the-stack-trace-and-error-output)
+ [Inspecting errors](#readme-inspecting-errors)
+ [Formatting with custom separators](#readme-formatting-with-custom-separators)
+ [Writing a custom output format](#readme-writing-a-custom-output-format)
+ [Sending error traces to Sentry](#readme-sending-error-traces-to-sentry)
* [Comparison to other packages (e.g. pkg/errors)](#readme-comparison-to-other-packages-eg-pkgerrors)
+ [Error formatting and stack traces](#readme-error-formatting-and-stack-traces)
* [Migrating to eris](#readme-migrating-to-eris)
* [Contributing](#readme-contributing)
#### Why you should switch to eris
This package was inspired by a simple question: what if you could fix a bug without wasting time replicating the issue or digging through the code? With that in mind, this package is designed to give you more control over error handling via error wrapping, stack tracing, and output formatting.
The [example](https://github.com/rotisserie/eris/raw/master/examples/logging/example.go) that generated the output below simulates a realistic error handling scenario and demonstrates how to wrap and log errors with minimal effort. This specific error occurred because a user tried to access a file that can't be located, and the output shows a clear path from the top of the call stack to the source.
```
{
"error":{
"root":{
"message":"error internal server",
"stack":[
"main.main:/Users/roti/go/src/github.com/rotisserie/eris/examples/logging/example.go:143",
"main.ProcessResource:/Users/roti/go/src/github.com/rotisserie/eris/examples/logging/example.go:85",
"main.ProcessResource:/Users/roti/go/src/github.com/rotisserie/eris/examples/logging/example.go:82",
"main.GetRelPath:/Users/roti/go/src/github.com/rotisserie/eris/examples/logging/example.go:61"
]
},
"wrap":[
{
"message":"failed to get relative path for resource 'res2'",
"stack":"main.ProcessResource:/Users/roti/go/src/github.com/rotisserie/eris/examples/logging/example.go:85"
},
{
"message":"Rel: can't make ./some/malformed/absolute/path/data.json relative to /Users/roti/",
"stack":"main.GetRelPath:/Users/roti/go/src/github.com/rotisserie/eris/examples/logging/example.go:61"
}
]
},
"level":"error",
"method":"ProcessResource",
"msg":"method completed with error",
"time":"2020-01-16T11:20:01-05:00"
}
```
Many of the methods in this package will look familiar if you've used [pkg/errors](https://github.com/pkg/errors) or [xerrors](https://github.com/golang/xerrors), but `eris` employs some additional tricks during error wrapping and unwrapping that greatly improve the readability of the stack trace. This package also takes a unique approach to formatting errors that allows you to write custom formats that conform to your error or log aggregator of choice. You can find more information on the differences between `eris` and `pkg/errors` [here](#readme-comparison-to-other-packages-eg-pkgerrors).
#### Using eris
##### Creating errors
Creating errors is simple via [`eris.New`](https://pkg.go.dev/github.com/rotisserie/eris#New).
```
var (
// global error values can be useful when wrapping errors or inspecting error types
ErrInternalServer = eris.New("error internal server")
)
func (req *Request) Validate() error {
if req.ID == "" {
// or return a new error at the source if you prefer
return eris.New("error bad request")
}
return nil
}
```
##### Wrapping errors
[`eris.Wrap`](https://pkg.go.dev/github.com/rotisserie/eris#Wrap) adds context to an error while preserving the original error.
```
relPath, err := GetRelPath("/Users/roti/", resource.AbsPath)
if err != nil {
// wrap the error if you want to add more context
return nil, eris.Wrapf(err, "failed to get relative path for resource '%v'", resource.ID)
}
```
##### Formatting and logging errors
[`eris.ToString`](https://pkg.go.dev/github.com/rotisserie/eris#ToString) and [`eris.ToJSON`](https://pkg.go.dev/github.com/rotisserie/eris#ToJSON) should be used to log errors with the default format (shown above). The JSON method returns a `map[string]interface{}` type for compatibility with Go's `encoding/json` package and many common JSON loggers (e.g. [logrus](https://github.com/sirupsen/logrus)).
```
// format the error to JSON with the default format and stack traces enabled
formattedJSON := eris.ToJSON(err, true)
jsonBytes, _ := json.Marshal(formattedJSON)      // marshal to JSON
fmt.Println(string(jsonBytes))                   // and print
logger.WithField("error", formattedJSON).Error() // or ideally, pass it directly to a logger

// format the error to a string and print it
formattedStr := eris.ToString(err, true)
fmt.Println(formattedStr)
```
`eris` also enables control over the [default format's separators](#readme-formatting-with-custom-separators) and allows advanced users to write their own [custom output format](#readme-writing-a-custom-output-format).
##### Interpreting eris stack traces
Errors created with this package contain stack traces that are managed automatically. They're currently mandatory when creating and wrapping errors but optional when printing or logging. By default, the stack trace and all wrapped layers follow the opposite order of Go's `runtime` package, which means that the original calling method is shown first and the root cause of the error is shown last.
```
{
"root":{
"message":"error bad request", // root cause
"stack":[
"main.main:/Users/roti/go/src/github.com/rotisserie/eris/examples/logging/example.go:143", // original calling method
"main.ProcessResource:/Users/roti/go/src/github.com/rotisserie/eris/examples/logging/example.go:71",
"main.(*Request).Validate:/Users/roti/go/src/github.com/rotisserie/eris/examples/logging/example.go:29", // location of Wrap call
"main.(*Request).Validate:/Users/roti/go/src/github.com/rotisserie/eris/examples/logging/example.go:28" // location of the root
]
},
"wrap":[
{
"message":"received a request with no ID", // additional context
"stack":"main.(*Request).Validate:/Users/roti/go/src/github.com/rotisserie/eris/examples/logging/example.go:29" // location of Wrap call
}
]
}
```
##### Inverting the stack trace and error output
If you prefer some other order than the default, `eris` supports inverting both the stack trace and the entire error output. When both are inverted, the root error is shown first and the original calling method is shown last.
```
// create a default format with error and stack inversion options
format := eris.NewDefaultStringFormat(eris.FormatOptions{
InvertOutput: true, // flag that inverts the error output (wrap errors shown first)
WithTrace: true, // flag that enables stack trace output
InvertTrace: true, // flag that inverts the stack trace output (top of call stack shown first)
})
// format the error to a string and print it formattedStr := eris.ToCustomString(err, format)
fmt.Println(formattedStr)
// example output:
// error not found
// main.GetResource:/Users/roti/go/src/github.com/rotisserie/eris/examples/logging/example.go:52
// main.ProcessResource:/Users/roti/go/src/github.com/rotisserie/eris/examples/logging/example.go:76
// main.main:/Users/roti/go/src/github.com/rotisserie/eris/examples/logging/example.go:143
// failed to get resource 'res1'
// main.GetResource:/Users/roti/go/src/github.com/rotisserie/eris/examples/logging/example.go:52
```
##### Inspecting errors
The `eris` package provides a few ways to inspect and compare error types. [`eris.Is`](https://pkg.go.dev/github.com/rotisserie/eris#Is) returns true if a particular error appears anywhere in the error chain. Currently, it works by comparing error messages: if an error anywhere in the chain contains a particular message (e.g. `"error not found"`), it is considered to be that error type.
```
ErrNotFound := eris.New("error not found")
_, err := db.Get(id)
// check if the resource was not found
if eris.Is(err, ErrNotFound) {
// return the error with some useful context
return eris.Wrapf(err, "error getting resource '%v'", id)
}
```
[`eris.As`](https://pkg.go.dev/github.com/rotisserie/eris#As) finds the first error in a chain that matches a given target. If there's a match, it sets the target to that error value and returns true.
```
var target *NotFoundError
_, err := db.Get(id)
// check if the error is a NotFoundError type
if errors.As(err, &target) {
// err is a *NotFoundError and target is set to the error's value
return target
}
```
[`eris.Cause`](https://pkg.go.dev/github.com/rotisserie/eris#Cause) unwraps an error until it reaches the cause, which is defined as the first (i.e. root) error in the chain.
```
ErrNotFound := eris.New("error not found")
_, err := db.Get(id)
// compare the cause to some sentinel value
if eris.Cause(err) == ErrNotFound {
// return the error with some useful context
return eris.Wrapf(err, "error getting resource '%v'", id)
}
```
##### Formatting with custom separators
For users who need more control over the error output, `eris` allows for some control over the separators between each piece of the output via the [`eris.Format`](https://pkg.go.dev/github.com/rotisserie/eris#Format) type. If this isn't flexible enough for your needs, see the [custom output format](#readme-writing-a-custom-output-format) section below. To format errors with custom separators, you can define and pass a format object to [`eris.ToCustomString`](https://pkg.go.dev/github.com/rotisserie/eris#ToCustomString) or [`eris.ToCustomJSON`](https://pkg.go.dev/github.com/rotisserie/eris#ToCustomJSON).
```
// format the error to a string with custom separators
formattedStr := eris.ToCustomString(err, eris.StringFormat{
	Options: eris.FormatOptions{
		WithTrace: true, // flag that enables stack trace output
	},
	MsgStackSep:  "\n",  // separator between error messages and stack frame data
	PreStackSep:  "\t",  // separator at the beginning of each stack frame
	StackElemSep: " | ", // separator between elements of each stack frame
	ErrorSep:     "\n",  // separator between each error in the chain
})
fmt.Println(formattedStr)
// example output:
// error reading file 'example.json'
// main.readFile | .../example/main.go | 6
// unexpected EOF
// main.main | .../example/main.go | 20
// main.parseFile | .../example/main.go | 12
// main.readFile | .../example/main.go | 6
```
##### Writing a custom output format
`eris` also allows advanced users to construct custom error strings or objects in case the default output doesn't fit their requirements. The [`UnpackedError`](https://pkg.go.dev/github.com/rotisserie/eris#UnpackedError) object provides a convenient, developer-friendly way to store and access existing error traces. The `ErrRoot` and `ErrChain` fields correspond to the root error and the wrap error chain, respectively; any external (non-eris) error in the chain is default formatted and assigned to the `ErrExternal` field. You can access all of the information contained in an error via [`eris.Unpack`](https://pkg.go.dev/github.com/rotisserie/eris#Unpack).
```
// get the unpacked error object
uErr := eris.Unpack(err)

// send only the root error message to a logging server instead of the complete error trace
sentry.CaptureMessage(uErr.ErrRoot.Msg)
```
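As a sketch of what a custom format built on `Unpack` might look like, the unpacked fields can be walked directly. The layout below is illustrative only; the field names (`ErrChain`, `ErrRoot`, `Frame`, `Stack`) are the ones documented in the API reference further down.
```
package main

import (
	"fmt"
	"strings"

	"github.com/rotisserie/eris"
)

// buildCustomTrace sketches a custom error format built from eris.Unpack.
// The exact layout is illustrative, not part of the eris API.
func buildCustomTrace(err error) string {
	uErr := eris.Unpack(err)

	var b strings.Builder
	// wrap layers: each ErrLink carries a message and the frame of the Wrap call
	for _, link := range uErr.ErrChain {
		fmt.Fprintf(&b, "%s (%s:%d)\n", link.Msg, link.Frame.File, link.Frame.Line)
	}
	// root error and its full stack
	fmt.Fprintln(&b, uErr.ErrRoot.Msg)
	for _, frame := range uErr.ErrRoot.Stack {
		fmt.Fprintf(&b, "\t%s:%s:%d\n", frame.Name, frame.File, frame.Line)
	}
	return b.String()
}

func main() {
	err := eris.Wrap(eris.New("error not found"), "failed to get resource 'res1'")
	fmt.Print(buildCustomTrace(err))
}
```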
##### Sending error traces to Sentry
`eris` supports sending your error traces to [Sentry](https://sentry.io/) using the Sentry Go [client SDK](https://github.com/getsentry/sentry-go). You can run the example that generated the following output on Sentry UI using the command `go run examples/sentry/example.go -dsn=<DSN>`.
```
*eris.wrapError: test: wrap 1: wrap 2: wrap 3
File "main.go", line 19, in Example
return eris.New("test")
File "main.go", line 23, in WrapExample
err := Example()
File "main.go", line 25, in WrapExample
return eris.Wrap(err, "wrap 1")
File "main.go", line 31, in WrapSecondExample
err := WrapExample()
File "main.go", line 33, in WrapSecondExample
return eris.Wrap(err, "wrap 2")
File "main.go", line 44, in main
err := WrapSecondExample()
File "main.go", line 45, in main
err = eris.Wrap(err, "wrap 3")
```
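A minimal sketch of the wiring is shown below. The DSN and the helper function are placeholders, and how the eris frames end up rendered depends on how the client is configured; the complete, working version lives in `examples/sentry/example.go`.
```
package main

import (
	"time"

	"github.com/getsentry/sentry-go"
	"github.com/rotisserie/eris"
)

// findOrder is a placeholder that fails with a wrapped eris error.
func findOrder(id string) error {
	return eris.Wrap(eris.New("test"), "wrap 1")
}

func main() {
	// the DSN is a placeholder; pass your project's DSN here
	if err := sentry.Init(sentry.ClientOptions{Dsn: "<DSN>"}); err != nil {
		panic(err)
	}
	defer sentry.Flush(2 * time.Second)

	if err := findOrder("order-1"); err != nil {
		// report the wrapped error to Sentry
		sentry.CaptureException(err)
	}
}
```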
#### Comparison to other packages (e.g. pkg/errors)
##### Error formatting and stack traces
Readability is a major design requirement for `eris`. In addition to the JSON output shown above, `eris` also supports formatting errors to a simple string.
```
failed to get resource 'res1'
main.GetResource:/Users/roti/go/src/github.com/rotisserie/eris/examples/logging/example.go:52
error not found
main.main:/Users/roti/go/src/github.com/rotisserie/eris/examples/logging/example.go:143
main.ProcessResource:/Users/roti/go/src/github.com/rotisserie/eris/examples/logging/example.go:76
main.GetResource:/Users/roti/go/src/github.com/rotisserie/eris/examples/logging/example.go:52
```
The `eris` error stack is designed to be easier to interpret than other error handling packages, and it achieves this by omitting extraneous information and avoiding unnecessary repetition. The stack trace above omits calls from Go's `runtime` package and includes just a single frame for wrapped layers which are inserted into the root error stack trace in the correct order. `eris` also correctly handles and updates stack traces for global error values in a transparent way.
The output of `pkg/errors` for the same error is shown below. In this case, the root error stack trace is incorrect because it was declared as a global value, and it includes several extraneous lines from the `runtime` package. The output is also much more difficult to read and does not allow for custom formatting.
```
error not found
main.init
	/Users/roti/go/src/github.com/rotisserie/eris/examples/logging/example.go:18
runtime.doInit
	/usr/local/Cellar/go/1.13.6/libexec/src/runtime/proc.go:5222
runtime.main
	/usr/local/Cellar/go/1.13.6/libexec/src/runtime/proc.go:190
runtime.goexit
	/usr/local/Cellar/go/1.13.6/libexec/src/runtime/asm_amd64.s:1357
failed to get resource 'res1'
main.GetResource
	/Users/roti/go/src/github.com/rotisserie/eris/examples/logging/example.go:52
main.ProcessResource
	/Users/roti/go/src/github.com/rotisserie/eris/examples/logging/example.go:76
main.main
	/Users/roti/go/src/github.com/rotisserie/eris/examples/logging/example.go:143
runtime.main
	/usr/local/Cellar/go/1.13.6/libexec/src/runtime/proc.go:203
runtime.goexit
	/usr/local/Cellar/go/1.13.6/libexec/src/runtime/asm_amd64.s:1357
```
#### Migrating to eris
Migrating to `eris` should be a very simple process. If it doesn't offer something that you currently use from existing error packages, feel free to submit an issue to us. If you don't want to refactor all of your error handling yet, `eris` should work relatively seamlessly with your existing error types. Please submit an issue if this isn't the case for some reason.
Many of your dependencies will likely still use [pkg/errors](https://github.com/pkg/errors) for error handling. When external error types are wrapped with additional context, `eris` creates a new root error that wraps the original external error. Because of this, error inspection should work seamlessly with other error libraries.
#### Contributing
If you'd like to contribute to `eris`, we'd love your input! Please submit an issue first so we can discuss your proposal.
---
Released under the [MIT License](https://github.com/rotisserie/eris/blob/v0.5.4/LICENSE.txt).
Documentation
[¶](#section-documentation)
---
### Overview [¶](#pkg-overview)
Package eris provides a better way to handle, trace, and log errors in Go.
### Index [¶](#pkg-index)
* [func As(err error, target interface{}) bool](#As)
* [func Cause(err error) error](#Cause)
* [func Errorf(format string, args ...interface{}) error](#Errorf)
* [func Is(err, target error) bool](#Is)
* [func New(msg string) error](#New)
* [func StackFrames(err error) []uintptr](#StackFrames)
* [func ToCustomJSON(err error, format JSONFormat) map[string]interface{}](#ToCustomJSON)
* [func ToCustomString(err error, format StringFormat) string](#ToCustomString)
* [func ToJSON(err error, withTrace bool) map[string]interface{}](#ToJSON)
* [func ToString(err error, withTrace bool) string](#ToString)
* [func Unwrap(err error) error](#Unwrap)
* [func Wrap(err error, msg string) error](#Wrap)
* [func Wrapf(err error, format string, args ...interface{}) error](#Wrapf)
* [type ErrLink](#ErrLink)
* [type ErrRoot](#ErrRoot)
* [type FormatOptions](#FormatOptions)
* [type JSONFormat](#JSONFormat)
* + [func NewDefaultJSONFormat(options FormatOptions) JSONFormat](#NewDefaultJSONFormat)
* [type Stack](#Stack)
* [type StackFrame](#StackFrame)
* [type StringFormat](#StringFormat)
* + [func NewDefaultStringFormat(options FormatOptions) StringFormat](#NewDefaultStringFormat)
* [type UnpackedError](#UnpackedError)
* + [func Unpack(err error) UnpackedError](#Unpack)
#### Examples [¶](#pkg-examples)
* [ToJSON (External)](#example-ToJSON-External)
* [ToJSON (Global)](#example-ToJSON-Global)
* [ToJSON (Local)](#example-ToJSON-Local)
* [ToString (External)](#example-ToString-External)
* [ToString (Global)](#example-ToString-Global)
* [ToString (Local)](#example-ToString-Local)
### Constants [¶](#pkg-constants)
This section is empty.
### Variables [¶](#pkg-variables)
This section is empty.
### Functions [¶](#pkg-functions)
####
func [As](https://github.com/rotisserie/eris/blob/v0.5.4/eris.go#L137) [¶](#As)
added in v0.5.0
```
func As(err [error](/builtin#error), target interface{}) [bool](/builtin#bool)
```
As finds the first error in err's chain that matches target. If there's a match, it sets target to that error value and returns true. Otherwise, it returns false.
The chain consists of err itself followed by the sequence of errors obtained by repeatedly calling Unwrap.
An error matches target if the error's concrete value is assignable to the value pointed to by target,
or if the error has a method As(interface{}) bool such that As(target) returns true.
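A short sketch with a hypothetical `*NotFoundError` type (the type and its field are illustrative):
```
package main

import (
	"fmt"

	"github.com/rotisserie/eris"
)

// NotFoundError is a hypothetical typed error used to illustrate eris.As.
type NotFoundError struct {
	ID string
}

func (e *NotFoundError) Error() string {
	return fmt.Sprintf("resource '%v' not found", e.ID)
}

func main() {
	err := eris.Wrap(&NotFoundError{ID: "res1"}, "failed to get resource")

	var target *NotFoundError
	if eris.As(err, &target) {
		// target now points at the underlying *NotFoundError
		fmt.Println("missing resource:", target.ID)
	}
}
```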
####
func [Cause](https://github.com/rotisserie/eris/blob/v0.5.4/eris.go#L171) [¶](#Cause)
```
func Cause(err [error](/builtin#error)) [error](/builtin#error)
```
Cause returns the root cause of the error, which is defined as the first error in the chain. The original error is returned if it does not implement `Unwrap() error` and nil is returned if the error is nil.
####
func [Errorf](https://github.com/rotisserie/eris/blob/v0.5.4/eris.go#L21) [¶](#Errorf)
```
func Errorf(format [string](/builtin#string), args ...interface{}) [error](/builtin#error)
```
Errorf creates a new root error with a formatted message.
####
func [Is](https://github.com/rotisserie/eris/blob/v0.5.4/eris.go#L111) [¶](#Is)
```
func Is(err, target [error](/builtin#error)) [bool](/builtin#bool)
```
Is reports whether any error in err's chain matches target.
The chain consists of err itself followed by the sequence of errors obtained by repeatedly calling Unwrap.
An error is considered to match a target if it is equal to that target or if it implements a method Is(error) bool such that Is(target) returns true.
####
func [New](https://github.com/rotisserie/eris/blob/v0.5.4/eris.go#L11) [¶](#New)
```
func New(msg [string](/builtin#string)) [error](/builtin#error)
```
New creates a new root error with a static message.
####
func [StackFrames](https://github.com/rotisserie/eris/blob/v0.5.4/eris.go#L183) [¶](#StackFrames)
added in v0.2.0
```
func StackFrames(err [error](/builtin#error)) [][uintptr](/builtin#uintptr)
```
StackFrames returns the trace of an error in the form of a program counter slice.
Use this method if you want to pass the eris stack trace to some other error tracing library.
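For example, the returned program counters can be resolved with the standard library's `runtime` package before being handed to another tool (a minimal sketch):
```
package main

import (
	"fmt"
	"runtime"

	"github.com/rotisserie/eris"
)

func main() {
	err := eris.Wrap(eris.New("error not found"), "failed to get resource 'res1'")

	// resolve the raw program counters into function/file/line information
	pcs := eris.StackFrames(err)
	if len(pcs) == 0 {
		return
	}
	frames := runtime.CallersFrames(pcs)
	for {
		frame, more := frames.Next()
		fmt.Printf("%s\n\t%s:%d\n", frame.Function, frame.File, frame.Line)
		if !more {
			break
		}
	}
}
```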
####
func [ToCustomJSON](https://github.com/rotisserie/eris/blob/v0.5.4/format.go#L193) [¶](#ToCustomJSON)
added in v0.2.0
```
func ToCustomJSON(err [error](/builtin#error), format [JSONFormat](#JSONFormat)) map[[string](/builtin#string)]interface{}
```
ToCustomJSON returns a JSON formatted map for a given error.
To declare custom format, the Format object has to be passed as an argument.
An error without trace will be formatted as follows:
```
{
"root": {
"message": "Root error msg",
},
"wrap": [
{
"message": "Wrap error msg'",
}
]
}
```
An error with trace will be formatted as follows:
```
{
"root": {
"message": "Root error msg",
"stack": [
"<Method2>[Format.StackElemSep]<File2>[Format.StackElemSep]<Line2>",
"<Method1>[Format.StackElemSep]<File1>[Format.StackElemSep]<Line1>"
]
}
"wrap": [
{
"message": "Wrap error msg",
"stack": "<Method2>[Format.StackElemSep]<File2>[Format.StackElemSep]<Line2>"
}
]
}
```
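A brief usage sketch, starting from the default JSON format and overriding the stack element separator (the separator choice is arbitrary):
```
package main

import (
	"encoding/json"
	"fmt"

	"github.com/rotisserie/eris"
)

func main() {
	err := eris.Wrap(eris.New("error not found"), "failed to get resource 'res1'")

	// start from the default JSON format and override the stack element separator
	format := eris.NewDefaultJSONFormat(eris.FormatOptions{
		WithTrace: true, // include stack traces in the output
	})
	format.StackElemSep = " | "

	out, _ := json.MarshalIndent(eris.ToCustomJSON(err, format), "", "  ")
	fmt.Println(string(out))
}
```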
####
func [ToCustomString](https://github.com/rotisserie/eris/blob/v0.5.4/format.go#L75) [¶](#ToCustomString)
added in v0.2.0
```
func ToCustomString(err [error](/builtin#error), format [StringFormat](#StringFormat)) [string](/builtin#string)
```
ToCustomString returns a custom formatted string for a given error.
To declare custom format, the Format object has to be passed as an argument.
An error without trace will be formatted as follows:
```
<Wrap error msg>[Format.ErrorSep]<Root error msg>
```
An error with trace will be formatted as follows:
```
<Wrap error msg>[Format.MsgStackSep]
[Format.PreStackSep]<Method2>[Format.StackElemSep]<File2>[Format.StackElemSep]<Line2>[Format.ErrorSep]
<Root error msg>[Format.MsgStackSep]
[Format.PreStackSep]<Method2>[Format.StackElemSep]<File2>[Format.StackElemSep]<Line2>[Format.ErrorSep]
[Format.PreStackSep]<Method1>[Format.StackElemSep]<File1>[Format.StackElemSep]<Line1>[Format.ErrorSep]
```
####
func [ToJSON](https://github.com/rotisserie/eris/blob/v0.5.4/format.go#L153) [¶](#ToJSON)
added in v0.2.0
```
func ToJSON(err [error](/builtin#error), withTrace [bool](/builtin#bool)) map[[string](/builtin#string)]interface{}
```
ToJSON returns a JSON formatted map for a given error.
An error without trace will be formatted as follows:
```
{
"root": [
{
"message": "Root error msg"
}
],
"wrap": {
"message": "Wrap error msg"
}
}
```
An error with trace will be formatted as follows:
```
{
"root": [
{
"message": "Root error msg",
"stack": [
"<Method2>:<File2>:<Line2>",
"<Method1>:<File1>:<Line1>"
]
}
],
"wrap": {
"message": "Wrap error msg",
"stack": "<Method2>:<File2>:<Line2>"
}
}
```
Example (External) [¶](#example-ToJSON-External)
Demonstrates JSON formatting of wrapped errors that originate from external (non-eris) error types.
```
package main
import (
"encoding/json"
"fmt"
"io"
"github.com/rotisserie/eris"
)
func main() {
// example func that returns an IO error
readFile := func(fname string) error {
return io.ErrUnexpectedEOF
}
// unpack and print the error
err := readFile("example.json")
u, _ := json.Marshal(eris.ToJSON(err, false)) // false: omit stack trace
fmt.Println(string(u))
// example output:
// {
// "external":"unexpected EOF"
// }
}
```
Example (Global) [¶](#example-ToJSON-Global)
Demonstrates JSON formatting of wrapped errors that originate from global root errors.
```
package main
import (
"encoding/json"
"fmt"
"github.com/rotisserie/eris"
)
var ErrUnexpectedEOF = eris.New("unexpected EOF")
func main() {
// example func that wraps a global error value
readFile := func(fname string) error {
return eris.Wrapf(ErrUnexpectedEOF, "error reading file '%v'", fname) // line 6
}
// example func that catches and returns an error without modification
parseFile := func(fname string) error {
// read the file
err := readFile(fname) // line 12
if err != nil {
return err
}
return nil
}
// unpack and print the error via uerr.ToJSON(...)
err := parseFile("example.json") // line 20
u, _ := json.MarshalIndent(eris.ToJSON(err, true), "", "\t") // true: include stack trace
fmt.Printf("%v\n", string(u))
// example output:
// {
// "root": {
// "message": "unexpected EOF",
// "stack": [
// "main.main:.../example/main.go:20",
// "main.parseFile:.../example/main.go:12",
// "main.readFile:.../example/main.go:6"
// ]
// },
// "wrap": [
// {
// "message": "error reading file 'example.json'",
// "stack": "main.readFile:.../example/main.go:6"
// }
// ]
// }
}
```
Example (Local) [¶](#example-ToJSON-Local)
Demonstrates JSON formatting of wrapped errors that originate from local root errors (created at the source of the error via eris.New).
```
package main
import (
"encoding/json"
"fmt"
"github.com/rotisserie/eris"
)
func main() {
// example func that returns an eris error
readFile := func(fname string) error {
return eris.New("unexpected EOF") // line 3
}
// example func that catches an error and wraps it with additional context
parseFile := func(fname string) error {
// read the file
err := readFile(fname) // line 9
if err != nil {
return eris.Wrapf(err, "error reading file '%v'", fname) // line 11
}
return nil
}
// example func that just catches and returns an error
processFile := func(fname string) error {
// parse the file
err := parseFile(fname) // line 19
if err != nil {
return err
}
return nil
}
// another example func that catches and wraps an error
printFile := func(fname string) error {
// process the file
err := processFile(fname) // line 29
if err != nil {
return eris.Wrapf(err, "error printing file '%v'", fname) // line 31
}
return nil
}
// unpack and print the raw error
err := printFile("example.json") // line 37
u, _ := json.MarshalIndent(eris.ToJSON(err, true), "", "\t")
fmt.Printf("%v\n", string(u))
// example output:
// {
// "root": {
// "message": "unexpected EOF",
// "stack": [
// "main.main:.../example/main.go:37",
// "main.printFile:.../example/main.go:31",
// "main.printFile:.../example/main.go:29",
// "main.processFile:.../example/main.go:19",
// "main.parseFile:.../example/main.go:11",
// "main.parseFile:.../example/main.go:9",
// "main.readFile:.../example/main.go:3"
// ]
// },
// "wrap": [
// {
// "message": "error printing file 'example.json'",
// "stack": "main.printFile:.../example/main.go:31"
// },
// {
// "message": "error reading file 'example.json'",
// "stack": "main.parseFile: .../example/main.go: 11"
// }
// ]
// }
}
```
####
func [ToString](https://github.com/rotisserie/eris/blob/v0.5.4/format.go#L54) [¶](#ToString)
added in v0.2.0
```
func ToString(err [error](/builtin#error), withTrace [bool](/builtin#bool)) [string](/builtin#string)
```
ToString returns a default formatted string for a given error.
An error without trace will be formatted as follows:
```
<Wrap error msg>: <Root error msg>
```
An error with trace will be formatted as follows:
```
<Wrap error msg>
<Method2>:<File2>:<Line2>
<Root error msg>
<Method2>:<File2>:<Line2>
<Method1>:<File1>:<Line1>
```
Example (External) [¶](#example-ToString-External)
Demonstrates string formatting of wrapped errors that originate from external (non-eris) error types.
```
package main
import (
"fmt"
"io"
"github.com/rotisserie/eris"
)
func main() {
// example func that returns an IO error
readFile := func(fname string) error {
return io.ErrUnexpectedEOF
}
// unpack and print the error
err := readFile("example.json")
fmt.Println(eris.ToString(err, false)) // false: omit stack trace
// example output:
// unexpected EOF
}
```
Example (Global) [¶](#example-ToString-Global)
Demonstrates string formatting of wrapped errors that originate from global root errors.
```
package main
import (
"fmt"
"github.com/rotisserie/eris"
)
var FormattedErrUnexpectedEOF = eris.Errorf("unexpected %v", "EOF")
func main() {
// example func that wraps a global error value
readFile := func(fname string) error {
return eris.Wrapf(FormattedErrUnexpectedEOF, "error reading file '%v'", fname) // line 6
}
// example func that catches and returns an error without modification
parseFile := func(fname string) error {
// read the file
err := readFile(fname) // line 12
if err != nil {
return err
}
return nil
}
// example func that just catches and returns an error
processFile := func(fname string) error {
// parse the file
err := parseFile(fname) // line 22
if err != nil {
return eris.Wrapf(err, "error processing file '%v'", fname) // line 24
}
return nil
}
// call processFile and catch the error
err := processFile("example.json") // line 30
// print the error via fmt.Printf
fmt.Printf("%v\n", err) // %v: omit stack trace
// example output:
// unexpected EOF: error reading file 'example.json'
// unpack and print the error via uerr.ToString(...)
fmt.Printf("%v\n", eris.ToString(err, true)) // true: include stack trace
// example output:
// error reading file 'example.json'
// main.readFile:.../example/main.go:6
// unexpected EOF
// main.main:.../example/main.go:30
// main.processFile:.../example/main.go:24
// main.processFile:.../example/main.go:22
// main.parseFile:.../example/main.go:12
// main.readFile:.../example/main.go:6
}
```
Example (Local) [¶](#example-ToString-Local)
Demonstrates string formatting of wrapped errors that originate from local root errors (created at the source of the error via eris.New).
```
package main
import (
"fmt"
"github.com/rotisserie/eris"
)
func main() {
// example func that returns an eris error
readFile := func(fname string) error {
return eris.New("unexpected EOF") // line 3
}
// example func that catches an error and wraps it with additional context
parseFile := func(fname string) error {
// read the file
err := readFile(fname) // line 9
if err != nil {
return eris.Wrapf(err, "error reading file '%v'", fname) // line 11
}
return nil
}
// call parseFile and catch the error
err := parseFile("example.json") // line 17
// print the error via fmt.Printf
fmt.Printf("%v\n", err) // %v: omit stack trace
// example output:
// unexpected EOF: error reading file 'example.json'
// unpack and print the error via uerr.ToString(...)
fmt.Println(eris.ToString(err, true)) // true: include stack trace
// example output:
// error reading file 'example.json'
// main.parseFile:.../example/main.go:11
// unexpected EOF
// main.main:.../example/main.go:17
// main.parseFile:.../example/main.go:11
// main.parseFile:.../example/main.go:9
// main.readFile:.../example/main.go:3
}
```
####
func [Unwrap](https://github.com/rotisserie/eris/blob/v0.5.4/eris.go#L95) [¶](#Unwrap)
```
func Unwrap(err [error](/builtin#error)) [error](/builtin#error)
```
Unwrap returns the result of calling the Unwrap method on err, if err's type contains an Unwrap method returning error. Otherwise, Unwrap returns nil.
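For example, the entire chain can be walked with a simple loop (sketch):
```
package main

import (
	"fmt"

	"github.com/rotisserie/eris"
)

func main() {
	err := eris.Wrap(eris.Wrap(eris.New("error not found"), "failed to get resource 'res1'"), "request failed")

	// walk the chain until Unwrap returns nil
	for e := err; e != nil; e = eris.Unwrap(e) {
		fmt.Println(e.Error())
	}
}
```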
####
func [Wrap](https://github.com/rotisserie/eris/blob/v0.5.4/eris.go#L38) [¶](#Wrap)
```
func Wrap(err [error](/builtin#error), msg [string](/builtin#string)) [error](/builtin#error)
```
Wrap adds additional context to all error types while maintaining the type of the original error.
This method behaves differently for each error type. For root errors, the stack trace is reset to the current callers which ensures traces are correct when using global/sentinel error values. Wrapped error types are simply wrapped with the new context. For external types (i.e. something other than root or wrap errors), this method attempts to unwrap them while building a new error chain. If an external type does not implement the unwrap interface, it flattens the error and creates a new root error from it before wrapping with the additional context.
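A small sketch contrasting the two cases described above; the sentinel and the external error below are illustrative:
```
package main

import (
	"fmt"
	"io"

	"github.com/rotisserie/eris"
)

// ErrNotFound is a global/sentinel eris error; its stack trace is reset
// to the current callers when it is wrapped.
var ErrNotFound = eris.New("error not found")

func getResource(id string) error {
	return eris.Wrap(ErrNotFound, "failed to get resource "+id)
}

func readPayload() error {
	// io.ErrUnexpectedEOF is an external (non-eris) error; Wrap flattens it
	// into a new root error before adding the extra context.
	return eris.Wrap(io.ErrUnexpectedEOF, "failed to read payload")
}

func main() {
	fmt.Println(eris.ToString(getResource("res1"), true))
	fmt.Println(eris.ToString(readPayload(), true))
}
```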
####
func [Wrapf](https://github.com/rotisserie/eris/blob/v0.5.4/eris.go#L45) [¶](#Wrapf)
```
func Wrapf(err [error](/builtin#error), format [string](/builtin#string), args ...interface{}) [error](/builtin#error)
```
Wrapf adds additional context to all error types while maintaining the type of the original error.
This is a convenience method for wrapping errors with formatted messages and is otherwise the same as Wrap.
### Types [¶](#pkg-types)
####
type [ErrLink](https://github.com/rotisserie/eris/blob/v0.5.4/format.go#L294) [¶](#ErrLink)
```
type ErrLink struct {
Msg [string](/builtin#string)
Frame [StackFrame](#StackFrame)
}
```
ErrLink represents a single error frame and the accompanying message.
####
type [ErrRoot](https://github.com/rotisserie/eris/blob/v0.5.4/format.go#L263) [¶](#ErrRoot)
```
type ErrRoot struct {
Msg [string](/builtin#string)
Stack [Stack](#Stack)
}
```
ErrRoot represents an error stack and the accompanying message.
####
type [FormatOptions](https://github.com/rotisserie/eris/blob/v0.5.4/format.go#L8) [¶](#FormatOptions)
added in v0.3.0
```
type FormatOptions struct {
InvertOutput [bool](/builtin#bool) // Flag that inverts the error output (wrap errors shown first).
WithTrace [bool](/builtin#bool) // Flag that enables stack trace output.
InvertTrace [bool](/builtin#bool) // Flag that inverts the stack trace output (top of call stack shown first).
WithExternal [bool](/builtin#bool) // Flag that enables external error output.
}
```
FormatOptions defines output options like omitting stack traces and inverting the error or stack order.
####
type [JSONFormat](https://github.com/rotisserie/eris/blob/v0.5.4/format.go#L107) [¶](#JSONFormat)
added in v0.3.0
```
type JSONFormat struct {
Options [FormatOptions](#FormatOptions) // Format options (e.g. omitting stack trace or inverting the output order).
// todo: maybe allow setting of wrap/root keys in the output map as well
StackElemSep [string](/builtin#string) // Separator between elements of each stack frame.
}
```
JSONFormat defines a JSON error format.
####
func [NewDefaultJSONFormat](https://github.com/rotisserie/eris/blob/v0.5.4/format.go#L114) [¶](#NewDefaultJSONFormat)
added in v0.3.0
```
func NewDefaultJSONFormat(options [FormatOptions](#FormatOptions)) [JSONFormat](#JSONFormat)
```
NewDefaultJSONFormat returns a default JSON output format.
####
type [Stack](https://github.com/rotisserie/eris/blob/v0.5.4/stack.go#L10) [¶](#Stack)
added in v0.2.0
```
type Stack [][StackFrame](#StackFrame)
```
Stack is an array of stack frames stored in a human readable format.
####
type [StackFrame](https://github.com/rotisserie/eris/blob/v0.5.4/stack.go#L26) [¶](#StackFrame)
```
type StackFrame struct {
Name [string](/builtin#string)
File [string](/builtin#string)
Line [int](/builtin#int)
}
```
StackFrame stores a frame's runtime information in a human readable format.
####
type [StringFormat](https://github.com/rotisserie/eris/blob/v0.5.4/format.go#L17) [¶](#StringFormat)
added in v0.3.0
```
type StringFormat struct {
Options [FormatOptions](#FormatOptions) // Format options (e.g. omitting stack trace or inverting the output order).
MsgStackSep [string](/builtin#string) // Separator between error messages and stack frame data.
PreStackSep [string](/builtin#string) // Separator at the beginning of each stack frame.
StackElemSep [string](/builtin#string) // Separator between elements of each stack frame.
ErrorSep [string](/builtin#string) // Separator between each error in the chain.
}
```
StringFormat defines a string error format.
####
func [NewDefaultStringFormat](https://github.com/rotisserie/eris/blob/v0.5.4/format.go#L26) [¶](#NewDefaultStringFormat)
added in v0.3.0
```
func NewDefaultStringFormat(options [FormatOptions](#FormatOptions)) [StringFormat](#StringFormat)
```
NewDefaultStringFormat returns a default string output format.
####
type [UnpackedError](https://github.com/rotisserie/eris/blob/v0.5.4/format.go#L248) [¶](#UnpackedError)
```
type UnpackedError struct {
ErrExternal [error](/builtin#error)
ErrRoot [ErrRoot](#ErrRoot)
ErrChain [][ErrLink](#ErrLink)
}
```
UnpackedError represents complete information about an error.
This type can be used for custom error logging and parsing. Use `eris.Unpack` to build an UnpackedError from any error type. The ErrChain and ErrRoot fields correspond to the `wrapError` and `rootError` types,
respectively. If any other error type is unpacked, it will appear in the ErrExternal field.
####
func [Unpack](https://github.com/rotisserie/eris/blob/v0.5.4/format.go#L222) [¶](#Unpack)
```
func Unpack(err [error](/builtin#error)) [UnpackedError](#UnpackedError)
```
Unpack returns a human-readable UnpackedError type for a given error.
Package ‘bmggum’
October 12, 2022
Title Bayesian Multidimensional Generalized Graded Unfolding Model
Version 0.1.0
Date 2021-4-8
Description
Full Bayesian estimation of the Multidimensional Generalized Graded Unfolding Model
(MGGUM) using 'rstan' (see Stan Development Team (2020) <https://mc-stan.org/>).
Functions are provided for estimation, result extraction, model fit statistics, and plotting.
License GPL (>= 3)
Encoding UTF-8
RoxygenNote 7.1.1.9000
Biarch true
Depends R (>= 3.4.0)
Imports methods, Rcpp (>= 0.12.0), RcppParallel (>= 5.0.1), rstan (>=
2.18.1), rstantools (>= 2.1.1), edstan, ggplot2, GGUM, loo,
stats
LinkingTo BH (>= 1.66.0), Rcpp (>= 0.12.0), RcppEigen (>= 0.3.3.3.0),
RcppParallel (>= 5.0.1), rstan (>= 2.18.1), StanHeaders (>=
2.18.0)
SystemRequirements GNU make
Suggests knitr, rmarkdown
VignetteBuilder knitr
URL https://github.com/Naidantu/bmggum
BugReports https://github.com/Naidantu/bmggum/issues
NeedsCompilation yes
Author <NAME> [aut, cre],
<NAME> [aut]
Maintainer <NAME> <<EMAIL>>
Repository CRAN
Date/Publication 2021-04-09 08:50:06 UTC
R topics documented:
bmggum-package
bayesplot
bmggum
extract
itemplot
modfit
bmggum-package The ’bmggum’ package.
Description
Full Bayesian estimation of Multidimensional Generalized Graded Unfolding Model (MGGUM)
References
Stan Development Team (2020). RStan: the R interface to Stan. R package version 2.21.2.
https://mc-stan.org
bayesplot bayesian convergence diagnosis plotting function
Description
This function provides plots including density plots, trace plots, and auto-correlation plots to aid
model convergence diagnosis.
Usage
bayesplot(x, pars, plot, inc_warmup = FALSE)
Arguments
x returned object
pars Names of plotted parameters. They can be "theta", "alpha", "delta", "tau", "cor",
"lambda", or a subset of parameters. See vignette for bmggum for more details.
plot Types of plots. They can be "density", "trace", or "autocorrelation".
inc_warmup Whether to include warmup iterations or not when plotting. The default is
FALSE.
Value
Selected plots for selected parameters
Examples
Data <- c(1,4,2,3)
Data <- matrix(Data,nrow = 2)
deli <- c(1,-1,2,1)
deli <- matrix(deli,nrow = 2)
ind <- c(1,2)
ind <- t(ind)
cova <- c(0.70, -1.25)
mod <- bmggum(GGUM.Data=Data,delindex=deli,trait=2,ind=ind,option=4,covariate=cova,iter=5,chains=1)
bayesplot(mod, 'alpha', 'density', inc_warmup=FALSE)
bmggum Bayesian Multidimensional Generalized Graded Unfolding Model
(bmggum)
Description
This function implements full Bayesian estimation of Multidimensional Generalized Graded Un-
folding Model (MGGUM) using rstan
Usage
bmggum(
GGUM.Data,
delindex,
trait,
ind,
option,
model = "UM8",
covariate = NULL,
iter = 1000,
chains = 3,
warmup = floor(iter/2),
adapt_delta = 0.9,
max_treedepth = 15,
init = "random",
thin = 1,
cores = 2,
ma = 0,
va = 0.5,
mdne = -1,
mdnu = 0,
mdpo = 1,
vd = 1,
mt = seq(-3, 0, 3/(option - 1)),
vt = 2
)
Arguments
GGUM.Data Response data in wide format
delindex A two-row data matrix: the first row is the item number (1, 2, 3, 4...); the second
row indicates the signs of delta for each item (-1,0,1,...). For items that have
negative deltas for sure, "-1" should be assigned; for items that have positive
deltas, "1" should be assigned; for items whose deltas may be either positive or
negative (e.g., intermediate items), "0" should be assigned. We recommend at least
two positive and two negative items per trait for better estimation.
trait The number of latent traits.
ind A row vector mapping each item to each trait. For example, c(1, 1, 1, 2, 2, 2)
means that the first 3 items belong to trait 1 and the last 3 items belong to trait
2.
option The number of response options.
model Models fitted. They can be "UM8", "UM7", and "UM4". The default is UM8,
which is the GGUM model. UM4 is UM8 with alpha = 1, called partial credit
unfolding model. UM7 is UM8 with equal taus across items, called generalized
rating scale unfolding model.
covariate A p*c person covariate matrix where p equals the sample size and c equals the
number of covariates. The default is NULL, meaning no person covariate.
iter The number of iterations. The default value is 1000. See documentation for
rstan for more details.
chains The number of chains. The default value is 3. See documentation for rstan for
more details.
warmup The number of warmups to discard. The default value is 0.5*iterations. See
documentation for rstan for more details.
adapt_delta Target average proposal acceptance probability during Stan’s adaptation period.
The default value is 0.90. See documentation for rstan for more details.
max_treedepth Cap on the depth of the trees evaluated during each iteration. The default value
is 15. See documentation for rstan for more details.
init Initial values for estimated parameters. The default is random initial values. See
documentation for rstan for more details.
thin Thinning. The default value is 1. See documentation for rstan for more details.
cores The number of computer cores used for parallel computing. The default value
is 2.
ma Mean of the prior distribution for alpha, which follows a lognormal distribution.
The default value is 0.
va Standard deviation of the prior distribution for alpha. The default value is 0.5.
mdne Mean of the prior distribution for negative deltas, which follows a normal
distribution. The default value is -1.
mdnu Mean of the prior distribution for neutral deltas, which follows a normal
distribution. The default value is 0.
mdpo Mean of the prior distribution for positive deltas, which follows a normal
distribution. The default value is 1.
vd Standard deviation of the prior distribution for deltas. The default value is 1.
mt Means of the prior distributions for taus, which follows a normal distribution.
The default values are seq(-3, 0, 3/(option - 1)). The last one has to be 0. For
items with only 2 options, we recommend using (-2, 0) as the means of the priors.
vt Standard deviation of the prior distribution for taus. The default value is 2.
Value
Result object that stores information including the (1) stanfit object, (2) estimated item parameters,
(3) estimated person parameters, (4) correlations among traits, (5) regression coefficients linking
person covariates to each trait, (6) response data (excluding respondents who endorse a single option
across all items), and (7) the input row vector mapping each item to each trait. Note that when
covariates are included, output (4) represents residual correlations among the traits after controlling
for the covariates.
Examples
Data <- c(1,4,2,3)
Data <- matrix(Data,nrow = 2)
deli <- c(1,-1,2,1)
deli <- matrix(deli,nrow = 2)
ind <- c(1,2)
ind <- t(ind)
cova <- c(0.70, -1.25)
mod <- bmggum(GGUM.Data=Data,delindex=deli,trait=2,ind=ind,option=4,covariate=cova,iter=5,chains=1)
extract results extraction
Description
This function extracts estimation results.
Usage
extract(x, pars)
Arguments
x returned object
pars Names of extracted parameters. They can be "theta" (person trait estimates), "al-
pha" (item discrimination parameters), "delta" (item location parameters), "tau"
(item threshold parameters), "cor" (correlations among latent traits), "lambda"
(regression coefficients linking person covariates to latent traits), "data" (GGUM.Data
after deleting respondents who endorse the same response options across all
items), "fit" (the stanfit object), and "dimension" (the input row vector mapping
each item to each trait). Note that when the model is UM4 in which alpha is
fixed to 1, the extracted alpha is an n*1 matrix where n equals the number of
items.
Value
Selected results output
Examples
Data <- c(1,4,2,3)
Data <- matrix(Data,nrow = 2)
deli <- c(1,-1,2,1)
deli <- matrix(deli,nrow = 2)
ind <- c(1,2)
ind <- t(ind)
cova <- c(0.70, -1.25)
mod <- bmggum(GGUM.Data=Data,delindex=deli,trait=2,ind=ind,option=4,covariate=cova,iter=5,chains=1)
alpha <- extract(mod, 'alpha')
itemplot item plotting function including observable response categories
(ORCs)
Description
This function provides item plots including observable response categories plots.
Usage
itemplot(x, items = NULL)
Arguments
x returned object
items The items to be plotted. The default is all the items.
Value
Selected ORC plots for selected items
Examples
Data <- c(1,4,2,3)
Data <- matrix(Data,nrow = 2)
deli <- c(1,-1,2,1)
deli <- matrix(deli,nrow = 2)
ind <- c(1,2)
ind <- t(ind)
cova <- c(0.70, -1.25)
mod <- bmggum(GGUM.Data=Data,delindex=deli,trait=2,ind=ind,option=4,covariate=cova,iter=5,chains=1)
itemplot(mod, items=1)
modfit Model fit
Description
This function provides model fit statistics.
Usage
modfit(x, index = "loo")
Arguments
x returned object
index Model fit indices. They can be "waic", which is the widely applicable
information criterion, "loo", which is the leave-one-out cross-validation, or "chisq.df",
which is the adjusted chi-square degrees of freedom ratios for each trait separately
that were introduced by Drasgow et al. (1995). The default is loo. Note that
chisq.df can only be computed when the sample size is large. See the documentation
for loo and GGUM for more details.
Value
Selected model fit statistics
Examples
Data <- c(1,4,2,3)
Data <- matrix(Data,nrow = 2)
deli <- c(1,-1,2,1)
deli <- matrix(deli,nrow = 2)
ind <- c(1,2)
ind <- t(ind)
cova <- c(0.70, -1.25)
mod <- bmggum(GGUM.Data=Data,delindex=deli,trait=2,ind=ind,option=4,covariate=cova,iter=5,chains=1)
waic <- modfit(mod, 'waic')
Package ‘DCEM’
October 12, 2022
Type Package
Title Clustering Big Data using Expectation Maximization Star (EM*)
Algorithm
Version 2.0.5
Maintainer <NAME> <<EMAIL>>
Description Implements the Improved Expectation Maximisation EM* and the traditional EM algorithm
for clustering big data (gaussian mixture models for both multivariate and univariate datasets).
This version implements the faster alternative EM* that expedites convergence via structure based
data segregation. The implementation supports both random and K-means++ based initialization.
Reference: <NAME>, <NAME>, <NAME> (2022) <doi:10.1016/j.softx.2021.100944>. <NAME>,
<NAME>, <NAME> (2016) <doi:10.1007/s41060-017-0062-1>.
License GPL-3
Encoding UTF-8
LazyData true
Imports mvtnorm (>= 1.0.7), matrixcalc (>= 1.0.3), MASS (>= 7.3.49),
Rcpp (>= 1.0.2)
LinkingTo Rcpp
RoxygenNote 7.1.2
Depends R(>= 3.2.0)
URL https://github.com/parichit/DCEM
BugReports https://github.com/parichit/DCEM/issues
Suggests knitr, rmarkdown
VignetteBuilder knitr
NeedsCompilation yes
Author <NAME> [aut, cre, ctb],
Kurban Hasan [aut, ctb],
<NAME> [aut]
Repository CRAN
Date/Publication 2022-01-16 00:02:52 UTC
R topics documented:
build_heap
DCEM
dcem_cluster_mv
dcem_cluster_uv
dcem_predict
dcem_star_cluster_mv
dcem_star_cluster_uv
dcem_star_train
dcem_test
dcem_train
expectation_mv
expectation_uv
get_priors
insert_nodes
ionosphere_data
maximisation_mv
maximisation_uv
max_heapify
meu_mv
meu_mv_impr
meu_uv
meu_uv_impr
separate_data
sigma_mv
sigma_uv
trim_data
update_weights
validate_data
build_heap build_heap: Part of DCEM package.
Description
Implements the creation of heap. Internally called by the dcem_star_train.
Usage
build_heap(data)
Arguments
data (NumericMatrix): The dataset provided by the user.
Value
A NumericMatrix with the max heap property.
Author(s)
<NAME> <<EMAIL>>, <NAME>, <NAME>
DCEM DCEM: Clustering Big Data using Expectation Maximization Star
(EM*) Algorithm.
Description
Implements the EM* and EM algorithm for clustering the (univariate and multivariate) Gaussian
mixture data.
Demonstration and Testing
Cleaning the data: The data should be cleaned (redundant columns should be removed), for
example columns containing the labels or redundant entries (such as a column of all 0's or 1's).
See trim_data for details on cleaning the data and refer to dcem_test for more details.
Understanding the output of dcem_test
The function dcem_test() returns a list of objects. This list contains the parameters associated with
the Gaussian(s), posterior probabilities (prob), mean (meu), co-variance/standard-deviation(sigma)
,priors (prior) and cluster membership for data (membership).
Note: The routine dcem_test() is only for demonstration purpose. The function dcem_test calls
the main routine dcem_train. See dcem_train for further details.
How to run on your dataset
See dcem_train and dcem_star_train for examples.
Package organization
The package is organized as a set of preprocessing functions and the core clustering modules. These
functions are briefly described below.
1. trim_data: This is used to remove the columns from the dataset. The user should clean the
dataset before calling the dcem_train routine. Users can also clean the dataset themselves
(without using trim_data) and then pass it to the dcem_train function.
2. dcem_star_train and dcem_train: These are the primary interface to the EM* and EM algo-
rithms respectively. These function accept the cleaned dataset and other parameters (number
of iterations, convergence threshold etc.) and run the algorithm until:
(a) The number of iterations is reached.
(b) The convergence is achieved.
DCEM supports following initialization schemes
1. Random Initialization: Initializes the mean randomly. Refer meu_uv and meu_mv for initial-
ization on univariate and multivariate data respectively.
2. Improved Initialization: Based on the K-means++ idea published in "K-means++: The
Advantages of Careful Seeding", <NAME> and <NAME>. URL
http://ilpubs.stanford.edu:8090/778/1/2006-13.pdf. See meu_uv_impr and meu_mv_impr for details.
3. Choice of initialization scheme can be specified as the seeding parameter during the training.
See dcem_train for further details; a brief example follows this list.
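A brief sketch of selecting the improved (K-means++ based) initialization via the seeding argument; the simulated data mirrors the examples elsewhere in this manual:
# Simulate a small univariate mixture (as in the other examples in this manual).
sample_uv_data = as.data.frame(c(rnorm(100, 20, 5), rnorm(70, 70, 1), rnorm(50, 100, 2)))
sample_uv_data = as.data.frame(sample_uv_data[sample(nrow(sample_uv_data)), ])
# Train with the improved (K-means++ based) initialization instead of random seeding.
sample_uv_out = dcem_train(sample_uv_data, num_clusters = 3, iteration_count = 100,
threshold = 0.001, seeding = "improved")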
References
<NAME>, <NAME>, <NAME> DCEM: An R package for clustering big data via
data-centric modification of Expectation Maximization, SoftwareX, 17, 100944 URL https://doi.org/10.1016/j.softx.2021.100
External Packages: DCEM requires R packages ’mvtnorm’[1], ’matrixcalc’[2] ’RCPP’[3] and
’MASS’[4] for multivariate density calculation, checking matrix singularity, compiling routines
written in C and simulating mixture of gaussians, respectively.
[1] <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME> (2019). mvtnorm: Multivariate Normal and t Distributions. R package version 1.0-7. URL
http://CRAN.R-project.org/package=mvtnorm
[2] <NAME> (2012). matrixcalc: Collection of functions for matrix calculations. R
package version 1.0-3. https://CRAN.R-project.org/package=matrixcalc
[3] <NAME> and <NAME> (2011). Rcpp: Seamless R and C++ Integration. Journal
of Statistical Software, 40(8), 1-18. URL http://www.jstatsoft.org/v40/i08/.
[4] <NAME>. & <NAME>. (2002) Modern Applied Statistics with S. Fourth Edition.
Springer, New York. ISBN 0-387-95457-0
[5] K-Means++: The Advantages of Careful Seeding, <NAME> and <NAME>. URL
http://ilpubs.stanford.edu:8090/778/1/2006-13.pdf
dcem_cluster_mv dcem_cluster (multivariate data): Part of DCEM package.
Description
Implements the Expectation Maximization algorithm for multivariate data. This function is called
by the dcem_train routine.
Usage
dcem_cluster_mv(data, meu, sigma, prior, num_clusters, iteration_count,
threshold, num_data)
Arguments
data A matrix: The dataset provided by the user.
meu (matrix): The matrix containing the initial meu(s).
sigma (list): A list containing the initial covariance matrices.
prior (vector): A vector containing the initial prior.
num_clusters (numeric): The number of clusters specified by the user. Default value is 2.
iteration_count
(numeric): The number of iterations for which the algorithm should run, if the
convergence is not achieved then the algorithm stops. Default: 200.
threshold (numeric): A small value to check for convergence (if the estimated meu are
within this specified threshold then the algorithm stops and exit).
Note: Choosing a very small value (0.0000001) for threshold can increase
the runtime substantially and the algorithm may not converge. On the other
hand, choosing a larger value (0.1) can lead to sub-optimal clustering. De-
fault: 0.00001.
num_data (numeric): The total number of observations in the data.
Value
A list of objects. This list contains parameters associated with the Gaussian(s) (posterior probabili-
ties, meu, co-variance and prior)
1. (1) Posterior Probabilities: prob :A matrix of posterior-probabilities.
2. (2) Meu: meu: It is a matrix of meu(s). Each row in the matrix corresponds to one meu.
3. (3) Sigma: Co-variance matrices: sigma
4. (4) prior: prior: A vector of prior.
5. (5) Membership: membership: A vector of cluster membership for data.
References
<NAME>, <NAME>, <NAME> DCEM: An R package for clustering big data via
data-centric modification of Expectation Maximization, SoftwareX, 17, 100944 URL https://doi.org/10.1016/j.softx.2021.100
dcem_cluster_uv dcem_cluster_uv (univariate data): Part of DCEM package.
Description
Implements the Expectation Maximization algorithm for the univariate data. This function is inter-
nally called by the dcem_train routine.
Usage
dcem_cluster_uv(data, meu, sigma, prior, num_clusters, iteration_count,
threshold, num_data, numcols)
Arguments
data (matrix): The dataset provided by the user (converted to matrix format).
meu (vector): The vector containing the initial meu.
sigma (vector): The vector containing the initial standard deviation.
prior (vector): The vector containing the initial prior.
num_clusters (numeric): The number of clusters specified by the user. Default is 2.
iteration_count
(numeric): The number of iterations for which the algorithm should run. If the
convergence is not achieved then the algorithm stops. Default: 200.
threshold (numeric): A small value to check for convergence (if the estimated meu(s) are
within the threshold then the algorithm stops).
Note: Choosing a very small value (0.0000001) for threshold can increase
the runtime substantially and the algorithm may not converge. On the other
hand, choosing a larger value (0.1) can lead to sub-optimal clustering. De-
fault: 0.00001.
num_data (numeric): The total number of observations in the data.
numcols (numeric): Number of columns in the dataset (After processing the missing val-
ues).
Value
A list of objects. This list contains parameters associated with the Gaussian(s) (posterior probabili-
ties, meu, standard-deviation and prior)
1. (1) Posterior Probabilities: prob: A matrix of posterior-probabilities.
2. (2) Meu(s): meu: It is a vector of meu. Each element of the vector corresponds to one meu.
3. (3) Sigma: Standard-deviation(s): sigma: A vector of standard deviation.
4. (4) prior: prior: A vector of prior.
5. (5) Membership: membership: A vector of cluster membership for data.
References
<NAME>, <NAME>, <NAME> DCEM: An R package for clustering big data via
data-centric modification of Expectation Maximization, SoftwareX, 17, 100944 URL https://doi.org/10.1016/j.softx.2021.100
dcem_predict dcem_predict: Part of DCEM package.
Description
Predict the cluster membership of test data based on the learned parameters i.e, output from dcem_train
or dcem_star_train.
Usage
dcem_predict(param_list, data)
Arguments
param_list (list): List of distribution parameters. The list contains the learned parameteres
of the distribution.
data (vector or dataframe): A vector of data for univariate data. A dataframe (rows
represent the data and columns represent the features) for multivariate data.
Value
A list containing the cluster membership for the test data.
References
<NAME>, <NAME>, <NAME> DCEM: An R package for clustering big data via
data-centric modification of Expectation Maximization, SoftwareX, 17, 100944 URL https://doi.org/10.1016/j.softx.2021.100
Examples
# Simulating a mixture of univariate samples from three distributions
# with meu as 20, 70 and 100 and standard deviation as 10, 100 and 40 respectively.
sample_uv_data = as.data.frame(c(rnorm(100, 20, 5), rnorm(70, 70, 1), rnorm(50, 100, 2)))
# Select first few points from each distribution as test data
test_data = as.vector(sample_uv_data[c(1:5, 101:105, 171:175),])
# Remove the test data from the training set
sample_uv_data = as.data.frame(sample_uv_data[-c(1:5, 101:105, 171:175), ])
# Randomly shuffle the samples.
sample_uv_data = as.data.frame(sample_uv_data[sample(nrow(sample_uv_data)),])
# Calling the dcem_train() function on the simulated data with threshold of
# 0.000001, iteration count of 1000 and random seeding respectively.
sample_uv_out = dcem_train(sample_uv_data, num_clusters = 3, iteration_count = 100,
threshold = 0.001)
# Predict the membership for test data
test_data_membership <- dcem_predict(sample_uv_out, test_data)
# Access the output
print(test_data_membership)
dcem_star_cluster_mv dcem_star_cluster_mv (multivariate data): Part of DCEM package.
Description
Implements the EM* algorithm for multivariate data. This function is called by the dcem_star_train
routine.
Usage
dcem_star_cluster_mv(data, meu, sigma, prior, num_clusters, iteration_count, num_data)
Arguments
data (matrix): The dataset provided by the user.
meu (matrix): The matrix containing the initial meu(s).
sigma (list): A list containing the initial covariance matrices.
prior (vector): A vector containing the initial priors.
num_clusters (numeric): The number of clusters specified by the user. Default value is 2.
iteration_count
(numeric): The number of iterations for which the algorithm should run, if the
convergence is not achieved then the algorithm stops and exits. Default: 200.
num_data (numeric): Number of rows in the dataset.
Value
A list of objects. This list contains parameters associated with the Gaussian(s) (posterior probabili-
ties, meu, co-variance and priors)
1. (1) Posterior Probabilities: prob A matrix of posterior-probabilities for the points in the
dataset.
2. (2) Meu: meu: A matrix of meu(s). Each row in the matrix corresponds to one meu.
3. (3) Sigma: Co-variance matrices: sigma: List of co-variance matrices.
4. (4) Priors: prior: A vector of prior.
5. (5) Membership: membership: A vector of cluster membership for data.
References
<NAME>, <NAME>, <NAME> DCEM: An R package for clustering big data via
data-centric modification of Expectation Maximization, SoftwareX, 17, 100944 URL https://doi.org/10.1016/j.softx.2021.100
dcem_star_cluster_uv dcem_star_cluster_uv (univariate data): Part of DCEM package.
Description
Implements the EM* algorithm for the univariate data. This function is called by the dcem_star_train
routine.
Usage
dcem_star_cluster_uv(data, meu, sigma, prior, num_clusters, num_data,
iteration_count)
Arguments
data (matrix): The dataset provided by the user.
meu (vector): The vector containing the initial meu.
sigma (vector): The vector containing the initial standard deviation.
prior (vector): The vector containing the initial priors.
num_clusters (numeric): The number of clusters specified by the user. Default is 2.
num_data (numeric): number of rows in the dataset (After processing the missing values).
iteration_count
(numeric): The number of iterations for which the algorithm should run. If the
convergence is not achieved then the algorithm stops. Default is 100.
Value
A list of objects. This list contains parameters associated with the Gaussian(s) (posterior probabili-
ties, meu, standard-deviation and priors)
1. (1) Posterior Probabilities: prob A matrix of posterior-probabilities
2. (2) Meu: meu: It is a vector of meu. Each element of the vector corresponds to one meu.
3. (3) Sigma: Standard-deviation(s): sigma
For univariate data: Vector of standard deviation.
4. (4) Priors: prior: A vector of priors.
5. (5) Membership: membership: A vector of cluster membership for data.
References
<NAME>, <NAME>, <NAME> DCEM: An R package for clustering big data via
data-centric modification of Expectation Maximization, SoftwareX, 17, 100944 URL https://doi.org/10.1016/j.softx.2021.100
dcem_star_train dcem_star_train: Part of DCEM package.
Description
Implements the improved EM* ([1], [2]) algorithm. EM* avoids revisiting all but high expres-
sive data via structure based data segregation thus resulting in significant speed gain. It calls the
dcem_star_cluster_uv routine internally (univariate data) and dcem_star_cluster_mv for (mul-
tivariate data).
Usage
dcem_star_train(data, iteration_count, num_clusters, seed_meu, seeding)
Arguments
data (dataframe): The dataframe containing the data. See trim_data for cleaning
the data.
iteration_count
(numeric): The number of iterations for which the algorithm should run, if the
convergence is not achieved then the algorithm stops and exit. Default: 200.
num_clusters (numeric): The number of clusters. Default: 2
seed_meu (matrix): The user specified set of meu to use as initial centroids. Default: None
seeding (string): The initialization scheme (’rand’, ’improved’). Default: rand
Value
A list of objects. This list contains parameters associated with the Gaussian(s) (posterior probabil-
ities, meu, sigma and priors). The parameters can be accessed as follows where sample_out is the
list containing the output:
1. (1) Posterior Probabilities: sample_out$prob A matrix of posterior-probabilities.
2. (2) Meu(s): sample_out$meu
For multivariate data: It is a matrix of meu(s). Each row in the matrix corresponds to one
mean.
For univariate data: It is a vector of meu(s). Each element of the vector corresponds to one
meu.
3. (3) Co-variance matrices: sample_out$sigma
For multivariate data: List of co-variance matrices.
Standard-deviation: sample_out$sigma
For univariate data: Vector of standard deviation.
4. (4) Priors: sample_out$prior A vector of priors.
5. (5) Membership: sample_out$membership: A dataframe of cluster membership for data.
Columns numbers are data indices and values are the assigned clusters.
References
<NAME>, <NAME>, <NAME> DCEM: An R package for clustering big data via
data-centric modification of Expectation Maximization, SoftwareX, 17, 100944 URL https://doi.org/10.1016/j.softx.2021.100
Examples
# Simulating a mixture of univariate samples from three distributions
# with mean as 20, 70 and 100 and standard deviation as 10, 100 and 40 respectively.
sample_uv_data = as.data.frame(c(rnorm(100, 20, 5), rnorm(70, 70, 1), rnorm(50, 100, 2)))
# Randomly shuffle the samples.
sample_uv_data = as.data.frame(sample_uv_data[sample(nrow(sample_uv_data)),])
# Calling the dcem_star_train() function on the simulated data with iteration count of 1000
# and random seeding respectively.
sample_uv_out = dcem_star_train(sample_uv_data, num_clusters = 3, iteration_count = 100)
# Simulating a mixture of multivariate samples from 2 gaussian distributions.
sample_mv_data = as.data.frame(rbind(MASS::mvrnorm(n=2, rep(2,5), Sigma = diag(5)),
MASS::mvrnorm(n=5, rep(14,5), Sigma = diag(5))))
# Calling the dcem_star_train() function on the simulated data with iteration count of 100 and
# random seeding method respectively.
sample_mv_out = dcem_star_train(sample_mv_data, iteration_count = 100, num_clusters=2)
# Access the output
sample_mv_out$meu
sample_mv_out$sigma
sample_mv_out$prior
sample_mv_out$prob
print(sample_mv_out$membership)
dcem_test dcem_test: Part of DCEM package.
Description
For demonstrating the execution on the bundled dataset.
Usage
dcem_test()
Details
The dcem_test performs the following steps in order (a minimal sketch of the same workflow follows the list):
1. Read the data from the disk (from the file data/ionosphere_data.csv). The data folder is under
the package installation folder.
2. The dataset details can be seen by typing ionosphere_data in the R console or at
http://archive.ics.uci.edu/ml/datasets/Ionosphere.
3. Clean the data (by removing columns). The data should be cleaned before use. Refer to
trim_data to see which columns should be removed and how. The package provides a basic
interface for removing columns.
4. Call the dcem_star_train on the cleaned data.
Accessing the output parameters
The function dcem_test() calls dcem_star_train. It returns a list of objects as output. This list
contains the estimated parameters of the Gaussian(s) (posterior probabilities, meu, sigma and prior). The
parameters can be accessed as follows, where sample_out is the list containing the output (see the sketch below):
1. Posterior probabilities: sample_out$prob, a matrix of posterior probabilities.
2. Meu: sample_out$meu
For multivariate data: a matrix of meu(s). Each row in the matrix corresponds to one meu.
3. Co-variance matrices: sample_out$sigma
For multivariate data: a list of co-variance matrices for the Gaussian(s).
Standard deviation: sample_out$sigma
For univariate data: a vector of standard deviations for the Gaussian(s).
4. Priors: sample_out$prior, a vector of priors.
5. Membership: sample_out$membership, a dataframe of cluster membership for the data.
Column numbers are data indices and values are the assigned clusters.
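A minimal sketch of running the bundled demonstration; it assumes dcem_test() returns the list produced by dcem_star_train, as the description above suggests:
# Run the bundled demonstration on the ionosphere data and inspect the output.
# Assumption: dcem_test() returns the list produced by dcem_star_train.
sample_out = dcem_test()
sample_out$meu      # estimated meu(s)
sample_out$sigma    # co-variance matrices
sample_out$prior    # priors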
References
<NAME>, <NAME>, <NAME> DCEM: An R package for clustering big data via
data-centric modification of Expectation Maximization, SoftwareX, 17, 100944 URL https://doi.org/10.1016/j.softx.2021.100
dcem_train dcem_train: Part of DCEM package.
Description
Implements the EM algorithm. It internally calls the relevant clustering routine: dcem_cluster_uv
for univariate data and dcem_cluster_mv for multivariate data.
Usage
dcem_train(data, threshold, iteration_count, num_clusters, seed_meu, seeding)
Arguments
data (dataframe): The dataframe containing the data. See trim_data for cleaning
the data.
threshold (decimal): A value to check for convergence (if the meu are within this value
then the algorithm stops and exits). Default: 0.00001.
iteration_count
(numeric): The number of iterations for which the algorithm should run; if
convergence is not achieved within the specified count, the algorithm stops
and exits. Default: 200.
num_clusters (numeric): The number of clusters. Default: 2
seed_meu (matrix): The user specified set of meu to use as initial centroids. Default: None
seeding (string): The initialization scheme (’rand’, ’improved’). Default: rand
Value
A list of objects. This list contains parameters associated with the Gaussian(s) (posterior probabil-
ities, meu, sigma and priors). The parameters can be accessed as follows where sample_out is the
list containing the output:
1. Posterior probabilities: sample_out$prob, a matrix of posterior probabilities.
2. Meu: sample_out$meu
For multivariate data: a matrix of meu(s). Each row in the matrix corresponds to one meu.
For univariate data: a vector of meu(s). Each element of the vector corresponds to one
meu.
3. Sigma: sample_out$sigma
For multivariate data: a list of co-variance matrices for the Gaussian(s).
For univariate data: a vector of standard deviations for the Gaussian(s).
4. Priors: sample_out$prior, a vector of priors.
5. Membership: sample_out$membership, a dataframe of cluster membership for the data.
Column numbers are data indices and values are the assigned clusters.
References
<NAME>, <NAME>, <NAME> DCEM: An R package for clustering big data via
data-centric modification of Expectation Maximization, SoftwareX, 17, 100944 URL https://doi.org/10.1016/j.softx.2021.100
Examples
# Simulating a mixture of univariate samples from three distributions
# with means 20, 70 and 100 and standard deviations 5, 1 and 2 respectively.
sample_uv_data = as.data.frame(c(rnorm(100, 20, 5), rnorm(70, 70, 1), rnorm(50, 100, 2)))
# Randomly shuffle the samples.
sample_uv_data = as.data.frame(sample_uv_data[sample(nrow(sample_uv_data)),])
# Calling the dcem_train() function on the simulated data with a threshold of
# 0.001, an iteration count of 100 and the default (random) seeding.
sample_uv_out = dcem_train(sample_uv_data, num_clusters = 3, iteration_count = 100,
threshold = 0.001)
# Simulating a mixture of multivariate samples from 2 Gaussian distributions.
sample_mv_data = as.data.frame(rbind(MASS::mvrnorm(n=100, rep(2,5), Sigma = diag(5)),
MASS::mvrnorm(n=50, rep(14,5), Sigma = diag(5))))
# Calling the dcem_train() function on the simulated data with a threshold of
# 0.001, an iteration count of 100 and the default (random) seeding.
sample_mv_out = dcem_train(sample_mv_data, threshold = 0.001, iteration_count = 100)
# Access the output
print(sample_mv_out$meu)
print(sample_mv_out$sigma)
print(sample_mv_out$prior)
print(sample_mv_out$prob)
print(sample_mv_out$membership)
expectation_mv expectation_mv: Part of DCEM package.
Description
Calculates the probabilistic weights for the multivariate data.
Usage
expectation_mv(data, weights, meu, sigma, prior, num_clusters, tolerance)
Arguments
data (matrix): The input data.
weights (matrix): The probability weight matrix.
meu (matrix): The matrix of meu.
sigma (list): The list of sigma (co-variance matrices).
prior (vector): The vector of priors.
num_clusters (numeric): The number of clusters.
tolerance (numeric): The system epsilon value.
Value
Updated probability weight matrix.
expectation_uv expectation_uv: Part of DCEM package.
Description
Calculates the probabilistic weights for the univariate data.
Usage
expectation_uv(data, weights, meu, sigma, prior, num_clusters, tolerance)
Arguments
data (matrix): The input data.
weights (matrix): The probability weight matrix.
meu (vector): The vector of meu.
sigma (vector): The vector of sigma (standard-deviations).
prior (vector): The vector of priors.
num_clusters (numeric): The number of clusters.
tolerance (numeric): The system epsilon value.
Value
Updated probability weight matrix.
get_priors get_priors: Part of DCEM package.
Description
Initialize the priors.
Usage
get_priors(num_priors)
Arguments
num_priors (numeric): Number of priors one corresponding to each cluster.
Details
For example, if the user specifies 2 priors, the vector will have 2 entries (one for each cluster),
each equal to 1/2 or 0.5.
Value
A vector of uniformly initialized prior values (numeric).
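A minimal sketch of the uniform initialization for 2 clusters; the ::: accessor is an assumption in case the helper is not exported:
# Uniformly initialized priors, one per cluster.
priors = DCEM:::get_priors(2)
print(priors)   # expected: 0.5 0.5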
insert_nodes insert_nodes: Part of DCEM package.
Description
Implements the node insertion into the heaps.
Usage
insert_nodes(heap_list, heap_assn, data_probs, leaves_ind, num_clusters)
Arguments
heap_list (list): The nested list containing the heaps. Each entry in the list is a list
maintained in max-heap structure.
heap_assn (numeric): The vector representing the heap assignments.
data_probs (string): A vector containing the probability for data.
leaves_ind (numeric): A vector containing the indices of leaves in heap.
num_clusters (numeric): The number of clusters. Default: 2
Value
A nested list. Each entry in the list is a list maintained in the max-heap structure.
References
<NAME>, <NAME>, <NAME> DCEM: An R package for clustering big data via
data-centric modification of Expectation Maximization, SoftwareX, 17, 100944 URL https://doi.org/10.1016/j.softx.2021.100
ionosphere_data Ionosphere data: A dataset of 351 radar readings
Description
This dataset contains 351 entries (radar readings from a system in the Goose Bay laboratory) and 35
columns. The 35th column is the label column identifying the entry as either good or bad.
Additionally, the 2nd column only contains 0's.
Usage
ionosphere_data
Format
A file with 351 rows and 35 columns of multivariate data in a csv file. All values are numeric.
Source
Space Physics Group Applied Physics Laboratory Johns Hopkins University Johns Hopkins Road
Laurel, MD 20723 Web URL: http://archive.ics.uci.edu/ml/datasets/Ionosphere
References: <NAME>., <NAME>., Hutton, <NAME>., & <NAME>. (1989). Classification of
radar returns from the ionosphere using neural networks. Johns Hopkins APL Technical Digest, 10,
262-266.
maximisation_mv maximisation_mv: Part of DCEM package.
Description
Calculates meu, sigma and prior based on the updated probability weight matrix.
Usage
maximisation_mv(data, weights, meu, sigma, prior, num_clusters, num_data)
Arguments
data (matrix): The input data.
weights (matrix): The probability weight matrix.
meu (matrix): The matrix of meu.
sigma (list): The list of sigma (co-variance matrices).
prior (vector): The vector of priors.
num_clusters (numeric): The number of clusters.
num_data (numeric): The total number of observations in the data.
Value
Updated values for meu, sigma and prior.
maximisation_uv maximisation_uv: Part of DCEM package.
Description
Calculates meu, sigma and prior based on the updated probability weight matrix.
Usage
maximisation_uv(data, weights, meu, sigma, prior, num_clusters, num_data)
Arguments
data (matrix): The input data.
weights (matrix): The probability weight matrix.
meu (vector): The vector of meu.
sigma (vector): The vector of sigma (standard-deviations).
prior (vector): The vector of priors.
num_clusters (numeric): The number of clusters.
num_data (numeric): The total number of observations in the data.
Value
Updated values for meu, sigma and prior.
max_heapify max_heapify: Part of DCEM package.
Description
Implements the creation of max heap. Internally called by the dcem_star_train.
Usage
max_heapify(data, index, num_data)
Arguments
data (NumericMatrix): The dataset provided by the user.
index (int): The index of the data point.
num_data (numeric): The total number of observations in the data.
Value
A NumericMatrix with the max heap property.
Author(s)
<NAME> <<EMAIL>>, <NAME>, <NAME>
meu_mv meu_mv: Part of DCEM package.
Description
Initialize the meu(s) by randomly selecting samples from the dataset. This is the default
method for initializing the meu(s).
Usage
# Randomly seeding the mean(s).
meu_mv(data, num_meu)
Arguments
data (matrix): The dataset provided by the user.
num_meu (numeric): The number of meu.
Value
A matrix containing the selected samples from the dataset.
meu_mv_impr meu_mv_impr: Part of DCEM package.
Description
Initialize the meu(s) by randomly selecting the samples from the dataset. It uses the proposed
implementation from K-means++: The Advantages of Careful Seeding, <NAME> and Sergei
Vassilvitskii. URL http://ilpubs.stanford.edu:8090/778/1/2006-13.pdf.
Usage
# Randomly seeding the meu.
meu_mv_impr(data, num_meu)
Arguments
data (matrix): The dataset provided by the user.
num_meu (numeric): The number of meu.
Value
A matrix containing the selected samples from the dataset.
meu_uv meu_uv: Part of DCEM package.
Description
This function is internally called by the dcem_train to initialize the meu(s). It randomly selects the
meu(s) from the range min(data):max(data).
Usage
# Randomly seeding the meu.
meu_uv(data, num_meu)
Arguments
data (matrix): The dataset provided by the user.
num_meu (number): The number of meu.
Value
A vector containing the selected samples from the dataset.
meu_uv_impr meu_uv_impr: Part of DCEM package.
Description
This function is internally called by the dcem_train to initialize the meu(s). It uses the proposed
implementation from K-means++: The Advantages of Careful Seeding, <NAME> and <NAME>. URL http://ilpubs.stanford.edu:8090/778/1/2006-13.pdf.
Usage
# Seeding the meu using the K-means++ implementation.
meu_uv_impr(data, num_meu)
Arguments
data (matrix): The dataset provided by the user.
num_meu (number): The number of meu.
Value
A vector containing the selected samples from the dataset.
separate_data separate_data: Part of DCEM package.
Description
Separate leaf nodes from the heaps.
Usage
separate_data(heap_list, num_clusters)
Arguments
heap_list (list): The nested list containing the heaps. Each entry in the list is a list
maintained in max-heap structure.
num_clusters (numeric): The number of clusters. Default: 2
Value
A nested list where:
the first entry is the list of heaps with leaves removed;
the second entry is the list of leaves.
References
<NAME>, <NAME>, <NAME> DCEM: An R package for clustering big data via
data-centric modification of Expectation Maximization, SoftwareX, 17, 100944 URL https://doi.org/10.1016/j.softx.2021.100
sigma_mv sigma_mv: Part of DCEM package.
Description
Initializes the co-variance matrices as the identity matrices.
Usage
sigma_mv(num_sigma, numcol)
Arguments
num_sigma (numeric): Number of covariance matrices.
numcol (numeric): The number of columns in the dataset.
Value
A list of identity matrices. The number of entries in the list is equal to the input parameter
(num_sigma).
sigma_uv sigma_uv: Part of DCEM package.
Description
Initializes the standard deviation for the Gaussian(s).
Usage
sigma_uv(data, num_sigma)
Arguments
data (matrix): The dataset provided by the user.
num_sigma (number): Number of sigma (standard_deviations).
Value
A vector of standard deviation value(s).
trim_data trim_data: Part of DCEM package. Used internally in the package.
Description
Removes the specified column(s) from the dataset.
Usage
trim_data(columns, data)
Arguments
columns (string): A comma-separated list of column(s) that need to be removed from
the dataset. Default: "" (empty string).
data (dataframe): Dataframe containing the input data.
Value
A dataframe with the specified column(s) removed from it.
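A minimal sketch of cleaning the bundled data before training; passing "2" is an assumption, meaning a comma-separated string of column indices (here dropping the constant 2nd column mentioned under ionosphere_data):
# Remove the constant 2nd column from the bundled ionosphere data.
cleaned = trim_data("2", ionosphere_data)
ncol(ionosphere_data)  # 35
ncol(cleaned)          # 34 after removing one column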
update_weights update_weights: Part of DCEM package.
Description
Update the probability values for specific data points that change between the heaps.
Usage
update_weights(temp_weights, weights, index_list, num_clusters)
Arguments
temp_weights (matrix): A matrix of probabilistic weights for leaf data.
weights (matrix): A matrix of probabilistic weights for all data.
index_list (vector): A vector of indices.
num_clusters (numeric): The number of clusters.
Value
Updated probabilistic weights matrix.
validate_data validate_data: Part of DCEM package. Used internally in the pack-
age.
Description
Implements sanity check for the input data. This function is for internal use and is called by the
dcem_train.
Usage
validate_data(columns, numcols)
Arguments
columns (string): A comma-separated list of columns that need to be removed from the
dataset. Default: "" (empty string).
numcols (numeric): Number of columns in the dataset.
Details
An example would be to check whether the column to be removed exists or not. trim_data internally calls
this function before removing the column(s).
Value
boolean: TRUE if the columns exist, otherwise FALSE.
TreeTagger Python Wrapper
Documentation
Release 2.3
<NAME>
Jan 29, 2019
Contents

2.1 Requirements
2.2 Automatic
2.3 Manual
6.1 This module does two main things
6.2 Other things done by this module
6.3 Alternative tool
7.1 Window buffer overflow
7.2 TreeTagger automatic location
7.3 TreeTagger probabilities
11.1 Short example
11.2 Main process poll classes
CHAPTER 1
About treetaggerwrapper
author <NAME> <<EMAIL>> <<EMAIL>>
organization CNRS - LIMSI
copyright CNRS - 2004-2019
license GNU-GPL Version 3 or greater
version 2.3

For the language-independent part-of-speech tagger TreeTagger, see the Helmut Schmid TreeTagger site.
For this module, see the Developer Project page and Project Source repository on the French academic repository SourceSup.
And the Module Documentation on Read The Docs.
You can also retrieve the latest version of this module with the svn command:

svn export https://subversion.renater.fr/ttpw/trunk/treetaggerwrapper.py

Or install it (and the module treetaggerpoll.py) using pip (add the pip install option --user for a user-private installation):

pip install treetaggerwrapper

This wrapper tool is intended to be used in projects where multiple chunks of text must be processed via TreeTagger in an automatic way (else you may simply use the base TreeTagger installation once as an external command).
Warning: Parameter files renaming.
The latest files distributed on the TreeTagger site removed the -utf8 part from parameter file names. This version 2.3 of
the wrapper tries to adapt to your installed version of TreeTagger: it first tests for the existence of the .par file without the -utf8 part,
and if that fails, tests for the file with the -utf8 part added.
If you use this wrapper, a small email would be welcome to support module maintenance (where, purpose, funding...).
Send it to <EMAIL>
CHAPTER 2
Installation

2.1 Requirements

treetaggerwrapper relies on the six module for Python2 and Python3 compatibility. It also uses the standard io module for file reading with decoding / encoding.
Tests have been limited to Python 2.7 and Python 3.4 under Linux and Windows. It doesn't work with earlier versions of Python as some names are not defined in their standard libraries.

2.2 Automatic

As the module is now registered on PyPI, you can simply install it:

pip install treetaggerwrapper

Or, if you can't (or don't want to) install the module system-wide (and don't use a virtual env):

pip install --user treetaggerwrapper

You may use pip3 to go with your Python3 installation.
If it is already installed as a package, use pip's install -U option to install the latest version (update).

2.3 Manual

For a complete manual installation, install the six module and other dependencies, and simply put the treetaggerwrapper.py and treetaggerpoll.py files in a directory listed in the Python path (or in your scripts directory).
CHAPTER 3
Configuration

The wrapper searches for the treetagger directory (the one with bin, lib and cmd subdirectories) in several places,
allowing variations in the TreeTagger directory name (see TreeTagger automatic location for details).
If the treetagger directory is found, its location is stored in a file $HOME/.config/treetagger_wrapper.cfg
(or any place following XDG_CONFIG_DIR if it is specified), and at the next start the directory indicated in this file is used if it still exists.
If you installed TreeTagger in a non-guessable location, you can still set up an environment variable TAGDIR to reference the TreeTagger software installation directory, or give a TAGDIR named argument when building a TreeTagger object to provide this information, or simply put that information into the configuration file in section [CACHE] under the key tagdir = ....
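A minimal sketch of pointing the wrapper at a non-standard installation; the directory path below is a placeholder, not a real location:

import os
import treetaggerwrapper

# Option 1: pass the installation directory explicitly (placeholder path).
tagger = treetaggerwrapper.TreeTagger(TAGLANG='en', TAGDIR='/opt/treetagger')

# Option 2: set the environment variable before building the wrapper.
os.environ['TAGDIR'] = '/opt/treetagger'
tagger = treetaggerwrapper.TreeTagger(TAGLANG='en')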
CHAPTER 4
Usage

Primary usage is to wrap the TreeTagger binary and use it as a functional tool. You have to build a TreeTagger object,
specifying the target language [by its country code!], and possibly some other TreeTagger parameters (else we use standard files specified in the module for each supported language). Once this wrapper object is created, you can simply call its tag_text() method with the string to tag, and it will return a list of lines corresponding to the text tagged by TreeTagger.
Example (with Python3, Unicode strings by default; with Python2 you need to use the explicit notation u"string",
or, if within a script, start with a from __future__ import unicode_literals directive):
>>> import pprint # For proper print of sequences.
>>> import treetaggerwrapper
>>> #1) build a TreeTagger wrapper:
>>> tagger = treetaggerwrapper.TreeTagger(TAGLANG='en')
>>> #2) tag your text.
>>> tags = tagger.tag_text("This is a very short text to tag.")
>>> #3) use the tags list... (list of string output from TreeTagger).
>>> pprint.pprint(tags)
['This\tDT\tthis',
'is\tVBZ\tbe',
'a\tDT\ta',
'very\tRB\tvery',
'short\tJJ\tshort',
'text\tNN\ttext',
'to\tTO\tto',
'tag\tVV\ttag',
'.\tSENT\t.']
>>> # Note: in output strings, fields are separated with tab chars (\t).
You can transform it into a list of named tuples Tag, NotTag (unknown tokens) TagExtra (token having extra informations requested via tagger options - like probabilistic indications) using the helper make_tags() function:
>>> tags2 = treetaggerwrapper.make_tags(tags)
>>> pprint.pprint(tags2)
[Tag(word='This', pos='DT', lemma='this'),
Tag(word='is', pos='VBZ', lemma='be'),
Tag(word='a', pos='DT', lemma='a'),
Tag(word='very', pos='RB', lemma='very'),
Tag(word='short', pos='JJ', lemma='short'),
Tag(word='text', pos='NN', lemma='text'),
Tag(word='to', pos='TO', lemma='to'),
Tag(word='tag', pos='VV', lemma='tag'),
Tag(word='.', pos='SENT', lemma='.')]
You can also directly process files using TreeTagger.tag_file() and TreeTagger.tag_file_to()
methods.
The module itself can be used as a command line tool too; for more information ask for the module help:

python treetaggerwrapper.py --help

If available within PYTHONPATH, the module can also be called from anywhere with the -m Python option:

python -m treetaggerwrapper --help
CHAPTER 5
Important modification notes

In August 2015, the module was reworked deeply; some modifications imply changes in user code.
• Methods (and functions) renamed to follow Python rules: they are now lowercase with underscore separators between words. Typically for users, tt.TagText() becomes tt.tag_text() (for this method a compatibility method has been written, but it no longer supports lists of non-Unicode strings).
• Works with Python2 and Python3, with the same code.
• Uses Unicode strings internally (it is no longer possible to provide binary strings and their encoding as separate parameters - you have to decode the strings yourself before calling module functions).
• Assumes utf-8 when dealing with the TreeTagger binary, and defaults to its utf-8 versions of the parameter and abbrev files. If you use another encoding, you must specify these files: in your sources, or via environment vars, or in the treetagger_wrapper.cfg configuration file under the encoding name section (respecting Python encoding names as given by codecs.lookup(enc).name, ie. utf-8).
• Defaults to utf-8 when reading user files (you need to specify latin1 if you use such an encoding - previously it was the default).
• Guesses the TreeTagger location: you can still provide TAGDIR as an environment variable or as a TreeTagger parameter, but it is no longer necessary. The found directory is cached in the treetagger_wrapper.cfg configuration file so it is only guessed once.
• Documentation has been revised to only export the main things for module usage; internals stay documented via comments in the source.
• Text chunking (tokenizing to provide TreeTagger input) has been revisited and should be more efficient. You can now also provide your own external chunking function when creating the wrapper, which will replace the internal chunking in the whole process.
• The generated XML tags have been modified (made shorter and with a ttpw: namespace).
• Can be used in a multithreading context (pipe communications with TreeTagger are protected by a Lock, preventing concurrent access). If you need multiple parallel processing, you can create multiple TreeTagger objects, put them in a poll, and work with them from different threads.
• Supports polls of taggers for optimal usage on multi-core computers. See the treetaggerwrapper.TaggerPoll class for thread polls and the treetaggerpoll.TaggerProcessPoll class for process polls.
CHAPTER 6
Processing

6.1 This module does two main things
• Manage preprocessing of text (chunking to extract tokens for TreeTagger input) in place of the external Perl scripts
used in the base TreeTagger installation, thus avoiding starting Perl each time a piece of text must be tagged.
• Keep alive a pipe connected to the TreeTagger process, and use that pipe to send data and retrieve tags, thus avoiding
starting TreeTagger each time and avoiding writing / reading temporary files on disk (direct communication via the
pipe). Ensure flushing of the tagger output.
6.1.1 Supported languages

Note: When specifying the language with treetaggerwrapper, we use the two-character language codes, not the complete language name.
This module supports chunking (tokenizing) + tagging for the languages:
• spanish (es)
• french (fr)
• english (en)
• german (de)
It can be used for tagging only for languages:
• bulgarian (bg)
• dutch (nl)
• estonian (et)
• finnish (fi)
• galician (gl)
• italian (it)
• korean (kr)
• latin (la)
• mongolian (mn)
• polish (pl)
• russian (ru)
• slovak (sk)
• swahili (sw)
Note: chunking parameters have not been adapted to these languages and their specific features; you may try to chunk with the default processing... with no guarantee. If you have an external chunker, you can call the tagger with the option tagonly set to True; you should then provide a simple string with one token per line (or a list of strings with one token per item). If your chunker is a callable, you can provide your own chunking function with the CHUNKERPROC named parameter when constructing the TreeTagger object, and then use it normally (your function is called in place of the standard chunking; see the sketch below).
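A minimal sketch of plugging in an external chunker via CHUNKERPROC; the whitespace-splitting function below is only an illustrative assumption, not the module's internal chunking:

import treetaggerwrapper

def my_chunker(tagger, texts):
    # Illustrative only: naive whitespace tokenization of each input string.
    # A real chunker should also split punctuation off the tokens.
    tokens = []
    for text in texts:
        tokens.extend(text.split())
    return tokens

tagger = treetaggerwrapper.TreeTagger(TAGLANG='en', CHUNKERPROC=my_chunker)
tags = tagger.tag_text("This is a very short text to tag.")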
For all these languages, the wrapper uses standard filenames for TreeTagger's parameter and abbreviation files. You can override these names using the TAGPARFILE and TAGABBREV parameters, and then use alternate files.
6.2 Other things done by this module
• Can number lines into XML tags (to identify lines after TreeTagger processing).
• Can mark whitespaces with XML tags.
• By default replace non-talk parts like URLs, emails, IP addresses, DNS names (can be turned off). Replaced by
a ‘replaced-xxx’ string followed by an XML tag containing the replaced text as attribute (the tool was originally
used to tag parts of exchanges from technical mailing lists, containing many of these items).
• Acronyms like U.S.A. are systematically written with a final dot, even if it is missing in original file.
• Automatic encode/decode files using user specified encoding (default to utf-8).
In normal mode, all journal outputs are done via the Python standard logging system; standard output is only used if a)
you run the module in pipe mode (ie. results go to stdout), or b) you set the DEBUG or DEBUG_PREPROCESS global variables and you use the module directly on the command line (which makes journal and other traces go to stdout).
For an example of logging use, see enable_debugging_log() function.
6.3 Alternative tool

You may also take a look at the project treetagger python, which wraps the TreeTagger command-line tools (simpler than this module, but it may be slower if you have many texts to tag in your process, as it calls and restarts the TreeTagger chunking then tagging tool chain for each text).
CHAPTER 7
Hints

7.1 Window buffer overflow

On Windows, if you get the following error about some file manipulation (ex. in an abspath() call):

TypeError: must be (buffer overflow), not str

Check that the total length of directories and filenames doesn't exceed 260 chars. If this is the case, you may try to use UNC names starting with \\?\ (read the Microsoft Naming Files, Paths, and Namespaces documentation; note: you cannot use / to separate directories with this notation).

7.2 TreeTagger automatic location

For your TreeTagger to be automatically found by the script, its directory must follow the installation rules below:

7.2.1 Directory naming and content

The location search function tries to find a directory beginning with tree, possibly followed by any char (ex. a space, a dash...), followed by tagger, possibly followed by any sequence of chars (ex. a version number), and without case distinction.
This matches directory names like treetagger, TreeTagger, Tree-tagger, Tree Tagger,
treetagger-2.0...
The directory must contain bin and lib subdirectories (they are normally created by TreeTagger installation script,
or directly included in TreeTagger Windows zipped archive).
First directory corresponding to these criteria is considered to be the TreeTagger installation directory.
TreeTagger Python Wrapper Documentation, Release 2.3 7.2.2 Searched locations TreeTagger directory location is searched from local (user private installation) to global (system wide installation).
1. Near the treetaggerwrapper.py file (TreeTagger being in same directory).
2. Containing the treetaggerwraper.py file (module inside TreeTagger directory).
3. User home directory (ex. /home/login, C:\Users\login).
4. First level directories in user home directory (ex. /home/login/tools, C:\Users\login\Desktop).
5. For MacOSX, in ~/Library/Frameworks.
6. For Windows, in program files directories (ex. C:\Program Files).
7. For Windows, in each existing fixed disk root and its first level directories (ex. C:\, C:\Tools, E:\,
E:\Apps).
8. For Posix (Linux, BSD... MacOSX), in a list of standard directories:
• /usr/bin,
• /usr/lib,
• /usr/local/bin,
• /usr/local/lib,
• /opt,
• /opt/bin,
• /opt/lib,
• /opt/local/bin,
• /opt/local/lib.
9. For MacOSX, in applications standard directories:
• /Applications,
• /Applications/bin,
• /Library/Frameworks.
7.3 TreeTagger probabilities

Using the TAGOPT parameter when constructing a TreeTagger object, you can provide the -threshold and -prob parameters to the treetagger process, and then retrieve probability information in the tagger output (see the TreeTagger README file for all options).
>>> import treetaggerwrapper as ttpw
>>> tagger = ttpw.TreeTagger(TAGLANG='fr', TAGOPT="-prob -threshold 0.7 -token -lemma -sgml -quiet")
>>> tags = tagger.tag_text('Voici un petit test de TreeTagger pour voir.')
>>> import pprint
>>> pprint.pprint(tags)
['Voici\tADV voici 1.000000',
'un\tDET:ART un 0.995819',
'petit\tADJ petit 0.996668',
'test\tNOM test 1.000000',
'de\tPRP de 1.000000',
'TreeTagger\tNAM <unknown> 0.966699',
'pour\tPRP pour 0.663202',
'voir\tVER:infi voir 1.000000',
'.\tSENT . 1.000000']
>>> tags2 = ttpw.make_tags(tags, allow_extra=True)
>>> pprint.pprint(tags2)
[TagExtra(word='Voici', pos='ADV', lemma='voici', extra=(1.0,)),
TagExtra(word='un', pos='DET:ART', lemma='un', extra=(0.995819,)),
TagExtra(word='petit', pos='ADJ', lemma='petit', extra=(0.996668,)),
TagExtra(word='test', pos='NOM', lemma='test', extra=(1.0,)),
TagExtra(word='de', pos='PRP', lemma='de', extra=(1.0,)),
TagExtra(word='TreeTagger', pos='NAM', lemma='<unknown>', extra=(0.966699,)),
TagExtra(word='pour', pos='PRP', lemma='pour', extra=(0.663202,)),
TagExtra(word='voir', pos='VER:infi', lemma='voir', extra=(1.0,)),
TagExtra(word='.', pos='SENT', lemma='.', extra=(1.0,))]
Note: This provides extra data for each token, so your script must be adapted for this (you can note in the pprint formatted display that we have tab and space separators: a tab after the word, then spaces between items).
CHAPTER 8
Module exceptions, class and functions exception treetaggerwrapper.TreeTaggerError
For exceptions generated directly by TreeTagger wrapper.
class treetaggerwrapper.TreeTagger(**kargs)
Wrap TreeTagger binary to optimize its usage on multiple texts.
The two main methods you may use are the __init__() initializer, and the tag_text() method to process
your data and get TreeTagger output results.
Construction of a wrapper for a TreeTagger process.
You can specify several parameters at construction time. These parameters can be set via environment variables
too (except for CHUNKERPROC). All of them have standard default values; even TAGLANG defaults to tagging
English.
Parameters
• TAGLANG (string) – language code for texts (‘en’,’fr’,. . . ) (default to ‘en’).
• TAGDIR (string) – path to TreeTagger installation directory.
• TAGOPT (string) – options for TreeTagger (default to '-token -lemma -sgml -quiet'; it is
recommended to keep these default options for correct use of this tool, and add other options
as you need).
• TAGPARFILE (string) – parameter file for TreeTagger (default available for supported
languages). Use the value None to force use of the default if an environment variable defines a value
you don't want to use.
• TAGABBREV (string) – abbreviation file for preprocessing (default available for supported languages).
• TAGINENC (str) – encoding to use for TreeTagger input, default to utf8.
• TAGOUTENC (str) – encoding to use for TreeTagger output, default to utf8
• TAGINENCERR (str) – management of encoding errors for TreeTagger input, strict or
ignore or replace - default to replace.
• TAGOUTENCERR (str) – management of encoding errors for TreeTagger output, strict or
ignore or replace - default to replace.
• CHUNKERPROC (fct(tagger, ['text']) => list ['chunk']) – function to
call for chunking in place of wrapper’s chunking — default to None (use standard chunking).
Take the TreeTagger object as first parameter and a list of str to chunk as second parameter.
Must return a list of chunk str (tokens). Note that normal initialization of chunking param-
eters is done even with an external chunking function, so these parameters are available for
this function.
Returns None
tag_text(text, numlines=False, tagonly=False, prepronly=False, tagblanks=False, notagurl=False,
notagemail=False, notagip=False, notagdns=False, nosgmlsplit=False)
Tag a text and returns corresponding lines.
This is normally the method you use on this class. Other methods are only helpers of this one.
The return value of this method can be processed by make_tags() to retrieve a list of Tag named tuples
with meaning fields.
Parameters
• text (unicode string / [ unicode string ]) – the text to tag.
• numlines (boolean) – indicator to keep line numbering information in data flow (done
via SGML tags) (default to False).
• tagonly (boolean) – indicator to only do TreeTagger tagging processing on input
(default to False). If tagonly is set, the provided text must be composed of one token per line
(either as a collection of line-feed separated lines in one string, or as a list of lines).
• prepronly (boolean) – indicator to only do preprocessing of text without tagging
(default to False).
• tagblanks (boolean) – indicator to keep blanks characters information in data flow
(done via SGML tags) (default to False).
• notagurl (boolean) – indicator to not do URL replacement (default to False).
• notagemail (boolean) – indicator to not do email address replacement (default to
False).
• notagip (boolean) – indicator to not do IP address replacement (default to False).
• notagdns (boolean) – indicator to not do DNS names replacement (default to False).
• nosgmlsplit (boolean) – indicator to not split on sgml already within the text (default to False).
Returns List of output strings from the tagger. You may use the make_tags() function to build
a corresponding list of named tuples, for further processing readability.
Return type [ str ]
tag_file(infilepath, encoding='utf-8', numlines=False, tagonly=False, prepronly=False, tagblanks=False, notagurl=False, notagemail=False, notagip=False, notagdns=False, nosgmlsplit=False)
Call tag_text() on the content of a specified file.
Call tag_text() on the content of a specified file.
Parameters
• infilepath (str) – pathname to access the file to read.
• encoding (str) – specify encoding of the file to read, default to utf-8.
Returns List of output strings from the tagger.
Return type [ str ]
Other parameters are simply passed to tag_text().
tag_file_to(infilepath, outfilepath, encoding='utf-8', numlines=False, tagonly=False, prepronly=False, tagblanks=False, notagurl=False, notagemail=False, notagip=False, notagdns=False, nosgmlsplit=False)
Call tag_text() on the content of a specified file and write result to a file.
Parameters
• infilepath (str) – pathname to access the file to read.
• outfilepath (str) – pathname to access the file to write.
• encoding (str) – specify encoding of the files to read/write, default to utf-8.
Other parameters are simply passed to tag_text().
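A minimal sketch of file-to-file tagging; the file names are placeholders:

import treetaggerwrapper

tagger = treetaggerwrapper.TreeTagger(TAGLANG='en')
# Read input.txt (utf-8), tag its content, and write the tagger output to output.ttg.
tagger.tag_file_to('input.txt', 'output.ttg', encoding='utf-8')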
treetaggerwrapper.make_tags(result, exclude_nottags=False, allow_extra=False)
Tool function to transform a list of TreeTagger tabbed text output strings into a list of Tag/TagExtra/NotTag
named tuples.
You call this function using the result of a TreeTagger.tag_text() call (see the sketch below). Tag and TagExtra have
attributes word, pos and lemma. TagExtra has an extra attribute containing a tuple of the tagger's output
complement values (where numeric values are converted to float). NotTag has a simple attribute what.
Parameters
• result – result of a TreeTagger.tag_text() call.
• exclude_nottags (bool) – don't generate NotTag for wrong-size outputs. Default to
False.
• allow_extra (bool) – build a TagExtra for outputs longer than expected. Default to
False.
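A minimal sketch of post-processing tagger output while dropping unknown tokens; it only uses the documented exclude_nottags flag:

import treetaggerwrapper

tagger = treetaggerwrapper.TreeTagger(TAGLANG='en')
raw = tagger.tag_text("This is a very short text to tag.")
# Drop any output line that does not have the expected word/pos/lemma shape.
tags = treetaggerwrapper.make_tags(raw, exclude_nottags=True)
lemmas = [tag.lemma for tag in tags]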
CHAPTER 9
Polls of taggers threads class treetaggerwrapper.TaggerPoll(workerscount=None, taggerscount=None, **kwargs)
Keep a poll of TreeTaggers for processing with different threads.
This class is here for people preferring natural language processing over multithread programming... :-)
Each poll manages a set of threads, able to do parallel chunking, and a set of taggers, able to do (more real)
parallel tagging. All taggers in the same poll are created for same processing (with same options).
TaggerPoll objects have the same high-level interface as TreeTagger ones, with _async at the end of method
names. Each ..._async method returns a Job object allowing you to know if processing is finished, to wait for
it, and to get the result.
If you want to properly terminate a TaggerPoll, you must call its TaggerPoll.stop_poll() method.
Note: Parallel processing via threads in Python within the same process is limited due to the global interpreter
lock (Python’s GIL). See Polls of taggers process for real parallel process.
Example of use
In this example no parameter is given to the poll; it auto-adapts to the number of CPU cores.
import treetaggerwrapper as ttpw
p = ttpw.TaggerPoll()
res = []
text = "This is <NAME>'s own house, it's very nice."
print("Creating jobs")
for i in range(10):
print(" Job", i)
res.append(p.tag_text_async(text))
print("Waiting for jobs to be completed")
for i, r in enumerate(res):
print(" Job", i)
r.wait_finished()
print(r.result)
p.stop_poll()
print("Finished")
Creation of a new TaggerPoll.
By default a TaggerPoll creates the same number of threads and of TreeTagger objects as there are CPU cores
on your computer.
Parameters
• workerscount (int) – number of worker threads to create.
• taggerscount (int) – number of TreeTaggers objects to create.
• kwargs – same parameters as TreeTagger.__init__().
tag_text_async(text, numlines=False, tagonly=False, prepronly=False, tagblanks=False, no-
tagurl=False, notagemail=False, notagip=False, notagdns=False, nosgml-
split=False)
See TreeTagger.tag_text() method and TaggerPoll doc.
Returns a Job object about the async process.
Return type Job
tag_file_async(infilepath, encoding=’utf-8’, numlines=False, tagonly=False, prepronly=False,
tagblanks=False, notagurl=False, notagemail=False, notagip=False, no-
tagdns=False, nosgmlsplit=False)
See TreeTagger.tag_file() method and TaggerPoll doc.
Returns a Job object about the async process.
Return type Job
tag_file_to_async(infilepath, outfilepath, encoding=’utf-8’, numlines=False, tagonly=False,
prepronly=False, tagblanks=False, notagurl=False, notagemail=False, no-
tagip=False, notagdns=False, nosgmlsplit=False)
See TreeTagger.tag_file_to() method and TaggerPoll doc.
Returns a Job object about the async process.
Return type Job
stop_poll()
Properly stop a TaggerPoll.
Takes care of finishing waiting threads, and deleting TreeTagger objects (removing pipe connections to the
treetagger process).
Once called, the TaggerPoll is no longer usable.
class treetaggerwrapper.Job(poll, methname, kwargs)
Asynchronous job to process a text with a Tagger.
These objects are automatically created for you and returned by TaggerPoll methods
TaggerPoll.tag_text_async(), TaggerPoll.tag_file_async() and TaggerPoll.
tag_file_to_async().
You use them to know status of the asynchronous request, eventually wait for it to be finished, and get the final
result.
Variables
• finished – Boolean indicator of job termination.
• result – Final job processing result — or exception.
wait_finished()
Lock on the Job event signaling its termination.
CHAPTER 10
Extra functions

Some functions can be of interest, possibly for another project.
treetaggerwrapper.blank_to_space(text)
Replace blanks characters by real spaces.
May be good to prepare for regular expressions & Co based on whitespaces.
Parameters text (string) – the text to clean from blanks.
Returns List of parts in their apparition order.
Return type [ string ]
treetaggerwrapper.blank_to_tag(text)
Replace blanks characters by corresponding SGML tags in a text.
Parameters text (string) – the text to transform from blanks.
Returns List of texts and sgml tags where there was a blank.
Return type list.
treetaggerwrapper.enable_debugging_log()
Setup logging module output.
This sets up a log file which registers logs, and also dumps logs to stdout. You can just copy/paste and adapt it to
make logging write to your own log files.
treetaggerwrapper.get_param(paramname, paramsdict, defaultvalue)
Search for a working parameter value.
It is searched respectively in:
1. parameters given at TreeTagger construction.
2. environment variables.
3. configuration file, in [CONFIG] section.
4. default value.
treetaggerwrapper.is_sgml_tag(text)
Test if a text is - completely - an SGML tag.
Parameters text (string) – the text to test.
Returns True if it’s an SGML tag.
Return type boolean

treetaggerwrapper.load_configuration()
Load configuration file for the TreeTagger wrapper.
This file is used mainly to store the last automatically found directory of the TreeTagger installation. It can also be used
to override some default working parameters of this script.
treetaggerwrapper.locate_treetagger()
Try to find treetagger directory in some standard places.
If a location is already available in the treetaggerwrapper config file, then the function first checks if it is still valid,
and if yes simply returns this location.
A treetagger directory (any variation of directory name with tree and tagger, containing lib and bin
subdirectories) is searched for:
• In user home directories and its subdirectories.
• In MacOSX user own library frameworks.
• In system wide standard installation directories (depend on used platform).
The found location, if any, is stored into treetagger_wrapper.cfg file for later direct use (located in
standard XDG config path).
If not found, the function returns None.
Returns directory containing the TreeTagger installation, or None.
Return type str

treetaggerwrapper.main(*args)
Test/command line usage code.
See command line usage help with:
python treetaggerwrapper.py --help
or:
python -m treetaggerwrapper --help

treetaggerwrapper.maketrans_unicode(s1, s2, todel='')
Build translation table for use with unicode.translate().
Parameters
• s1 (unicode) – string of characters to replace.
• s2 (unicode) – string of replacement characters (same order as in s1).
• todel (unicode) – string of characters to remove.
Returns translation table with character code -> character code.
Return type dict
treetaggerwrapper.pipe_writer(pipe, text, flushsequence, encoding, errors)
Write a text to a pipe and manage pre-post data to ensure flushing.
For internal use.
If text is composed of str strings, they are written as-is (ie. assume ad-hoc encoding is provided by the caller). If it
is composed of unicode strings, then they are converted to the specified encoding.
Parameters
• pipe (Popen object (file-like with write and flush methods)) –
the Popen pipe on what to write the text.
• text (string or list of strings) – the text to write.
• flushsequence (string (with \n between tokens)) – lines of tokens to ensure flush by TreeTagger.
• encoding (str) – encoding of texts written on the pipe.
• errors (str) – how to manage encoding errors: strict/ignore/replace.
treetaggerwrapper.save_configuration()
Save configuration file for the TreeTagger wrapper.
treetaggerwrapper.split_sgml(text)
Split a text between SGML-tags and non-SGML-tags parts.
Parameters text (string) – the text to split.
Returns List of text/SgmlTag in their apparition order.
Return type list
CHAPTER 11
Polls of taggers process

Tests with treetaggerwrapper.TaggerPoll show limited benefit of multithreaded processing, probably related to the large part of time spent in the preprocess chunking executed by Python code and dependent on the Python Global Interpreter Lock (GIL).
Another solution with Python standard packages is the multiprocessing module, which provides tools to dispatch computing between different processes in place of threads, each process being independent with its own interpreter (so its own GIL).
The treetaggerpoll module and its class TaggerProcessPoll are for people preferring natural language processing over multiprocessing programming... :-)
A comparison using the following example, running on a Linux OS with a 4-core Intel Xeon X5450 CPU, tested with 1, 2, 3, 4, 5 and 10 worker processes, gives the results in the table below (printed time is for the main process, which waits for its subprocesses to terminate). This shows great usage of the available CPUs when using this module for chunking/tagging
(we can see that having more worker processes than CPUs is not interesting; by default the class builds as many worker processes as you have CPUs):
Table 1: Workers count comparison
workers printed time real CPU time
1 228.49 sec 3m48.527s
2 87.88 sec 1m27.918s
3 61.12 sec 1m1.154s
4 53.86 sec 0m53.907s
5 50.68 sec 0m50.726s
10 56.45 sec 0m56.487s

11.1 Short example

This example is available in the source code repository, in the test/ subdirectory. Here you can see that the main module must have its main code wrapped in an if __name__ == '__main__': condition (for correct Windows support).
It may take an optional parameter to select how many workers you want (by default as many workers as you have CPUs):

import sys
import time

JOBSCOUNT = 10000

def start_test(n=None):
    start = time.time()

    import treetaggerpoll

    # Note: print() have been commented, you may uncomment them to see progress.
    p = treetaggerpoll.TaggerProcessPoll(workerscount=n, TAGLANG="en")
    res = []
    text = "This is <NAME>'s own house, it's very nice. " * 40

    print("Creating jobs")
    for i in range(JOBSCOUNT):
        # print(" Job", i)
        res.append(p.tag_text_async(text))

    print("Waiting for jobs to complete")
    for i, r in enumerate(res):
        # print(" Job", i)
        r.wait_finished()
        # print(str(r.result)[:50])
        res[i] = None  # Loose Job reference - free it.

    p.stop_poll()
    print("Finished after {:0.2f} seconds elapsed".format(time.time() - start))


if __name__ == '__main__':
    if len(sys.argv) >= 2:
        nproc = int(sys.argv[1])
    else:
        nproc = None
    start_test(nproc)
If you have a graphical CPU usage monitor, you should see a high average load on each CPU.
Warning: Windows support
For Windows users, using TaggerProcessPoll has implications for your code; see the multiprocessing docs,
especially the Safe importing of main module part.
11.2 Main process poll classes
class treetaggerpoll.TaggerProcessPoll(workerscount=None, keepjobs=True,
wantresult=True, keeptagargs=True,
**kwargs)
Keep a poll of TreeTagger processes for processing with different threads.
Each poll manages a set of processes, able to do parallel chunking and tagging. All taggers in the
same poll are created for same processing (with same options).
TaggerProcessPoll objects have the same high-level interface as TreeTagger ones, with
_async at the end of method names.
Each ..._async method returns a ProcJob object allowing you to know if processing is finished,
to wait for it, and to get the result.
If you want to properly terminate a TaggerProcessPoll, you must call its
TaggerProcessPoll.stop_poll() method.
Creation of a new TaggerProcessPoll.
By default a TaggerProcessPoll creates the same number of processes as there are CPU cores on
your computer.
Parameters
• workerscount (int) – number of worker process (and taggers) to create.
• keepjobs (bool) – the poll keeps references to Jobs to manage signaling of their processing
and to store back processing results — default to True.
• wantresult (bool) – worker processes must return the processing result to be stored
in the job — default to True.
• keeptagargs (bool) – must keep tagging arguments in the ProcJob synchronization
object — default to True.
• kwargs – same parameters as treetaggerwrapper.TreeTagger.
__init__() for TreeTagger creation.
tag_text_async(text, numlines=False, tagonly=False, prepronly=False, tag-
blanks=False, notagurl=False, notagemail=False, notagip=False,
notagdns=False, nosgmlsplit=False)
See TreeTagger.tag_text() method and TaggerProcessPoll doc.
Returns a ProcJob object about the async process.
Return type ProcJob
tag_file_async(infilepath, encoding=’utf-8’, numlines=False, tagonly=False, pre-
pronly=False, tagblanks=False, notagurl=False, notagemail=False, no-
tagip=False, notagdns=False, nosgmlsplit=False)
See TreeTagger.tag_file() method and TaggerProcessPoll doc.
Returns a ProcJob object about the async process.
Return type ProcJob
tag_file_to_async(infilepath, outfilepath, encoding=’utf-8’, numlines=False,
tagonly=False, prepronly=False, tagblanks=False, notagurl=False,
notagemail=False, notagip=False, notagdns=False, nosgml-
split=False)
See TreeTagger.tag_file_to() method and TaggerProcessPoll doc.
Returns a ProcJob object about the async process.
Return type ProcJob
stop_poll()
Properly stop a TaggerProcessPoll.
Takes care of finishing waiting threads, and deleting TreeTagger objects (removing pipe
connections to the treetagger process).
Once called, the TaggerProcessPoll is no longer usable.
class treetaggerpoll.ProcJob(poll, methname, keepjobs, kwargs)
Asynchronous job to process a text with a Tagger.
These objects are automatically created for you and returned by TaggerProcessPoll
methods TaggerProcessPoll.tag_text_async(), TaggerProcessPoll.
tag_file_async() and TaggerProcessPoll.tag_file_to_async().
You use them to know status of the asynchronous request, eventually wait for it to be finished, and
get the final result.
Note: If your TaggerProcessPoll has been created with the keepjobs param set to False, you
can't rely on the ProcJob object (neither finished state nor result). And if you used the wantresult param
set to False, the final result can only be "finished" or an exception information string.
Variables
• finished – Boolean indicator of job termination.
• result – Final job processing result — or exception.
wait_finished()
Lock on the ProcJob event signaling its termination.
Mind map Vue2 component
===
> A mind map Vue component inspired by [MindNode](https://mindnode.com), based on d3.js
> The currently implemented functions include basic editing, dragging, zooming, undoing, context menu, folding...
Recent update
---
> The project will basically no longer be maintained
> A Vue3, d3v6 version, [mind map component](https://github.com/hellowuxin/vue3-mindmap), is currently being developed; your support is welcome
Install
---
```
npm install @hellowuxin/mindmap
```
PROPS
---
| Name | Type | Default | Description |
| --- | --- | --- | --- |
| v-model | Array | undefined | Set mind map data |
| width | Number | 100% | Set component width |
| height | Number | undefined | Set component height |
| xSpacing | Number | 80 | Set the horizontal interval of nodes |
| ySpacing | Number | 20 | Set the vertical interval of nodes |
| strokeWidth | Number | 4 | Set the width of the line |
| draggable | Boolean | true | Set whether the node can be dragged |
| gps | Boolean | true | Whether to display the center button |
| fitView | Boolean | true | Whether to show the zoom button |
| showNodeAdd | Boolean | true | Whether to display the add node button |
| keyboard | Boolean | true | Whether to respond to keyboard events |
| contextMenu | Boolean | true | Whether to respond to the right-click menu |
| zoomable | Boolean | true | Can zoom and drag |
| showUndo | Boolean | true | Whether to display the undo redo button |
| download | Boolean | true | Whether to display the download button |
EVENTS
---
| Name | arguments | Description |
| --- | --- | --- |
| updateNodeName | data, id | When updating the node name, pass in the node data and node id |
| click | data, id | When a node is clicked, the node data and node id are passed in |
Example
---
```
<template>
<mindmap v-model="data"></mindmap>
</template>

<script>
import mindmap from '@hellowuxin/mindmap'
export default {
components: { mindmap },
data: () => ({
data: [{
"name":"How to learn D3",
"children": [
{
"name":"Preliminary knowledge",
"children": [
{ "name":"HTML & CSS" },
{ "name":"JavaScript" },
...
]
},
{
"name":"Install",
"_children": [
{ "name": "collapse node" }
]
},
{
"name":"Advanced",
"left": true
},
...
]
}]
})
}
</script>
```
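A minimal sketch of listening to the component's events; the handler names below are placeholders:

```
<template>
  <mindmap v-model="data" @updateNodeName="onRename" @click="onClick"></mindmap>
</template>

<script>
import mindmap from '@hellowuxin/mindmap'

export default {
  components: { mindmap },
  data: () => ({ data: [{ name: 'root' }] }),
  methods: {
    // Placeholder handlers: each receives the node data and node id listed in EVENTS.
    onRename (data, id) { console.log('renamed', id, data) },
    onClick (data, id) { console.log('clicked', id, data) }
  }
}
</script>
```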
Keyboard Events
---
`⇥ tab`、`⏎ enter`、`⌫ backspace`、`⌘ cmd`+`z`、`⌘ cmd`+`y`
Interactive Logic
---
**Mouse**: space + left click to move, right click for menu, ctrl + scroll wheel to zoom, left click to select
**Touchpad**: two-finger scrolling, two-finger menu, two-finger pinch-to-zoom, one-finger selection
To be solved
---
* [ ] Export to multiple formats
* [ ] Set the width and height of the node
* [ ] multiple root nodes
* [ ] ...
ICalendar
===
[![Test](https://github.com/lpil/icalendar/actions/workflows/test.yml/badge.svg)](https://github.com/lpil/icalendar/actions/workflows/test.yml)
[![Module Version](https://img.shields.io/hexpm/v/icalendar.svg)](https://hex.pm/packages/icalendar)
[![Hex Docs](https://img.shields.io/badge/hex-docs-lightgreen.svg)](https://hexdocs.pm/icalendar/)
[![Total Download](https://img.shields.io/hexpm/dt/icalendar.svg)](https://hex.pm/packages/icalendar)
[![License](https://img.shields.io/hexpm/l/icalendar.svg)](https://github.com/lpil/icalendar/blob/master/LICENSE.md)
A small library for reading and writing ICalendar files.
This library is in maintenance mode
===
Bug fixes may be accepted but no new features will be added. If you wish to add new features I recommend creating and publishing a fork. If an active fork is created I will direct users from this project to the new one.
Installation
---
The package can be installed by adding `:icalendar` to your list of dependencies in `mix.exs`:
```
def deps do
[
{:icalendar, "~> 1.1.0"}
]
end
```
Usage
---
```
events = [
%ICalendar.Event{
summary: "Film with Amy and Adam",
dtstart: {{2015, 12, 24}, {8, 30, 00}},
dtend: {{2015, 12, 24}, {8, 45, 00}},
description: "Let's go see Star Wars.",
location: "123 Fun Street, Toronto ON, Canada"
},
%ICalendar.Event{
summary: "Morning meeting",
dtstart: Timex.now,
dtend: Timex.shift(Timex.now, hours: 3),
description: "A big long meeting with lots of details.",
location: "456 Boring Street, Toronto ON, Canada"
},
]
ics = %ICalendar{ events: events } |> ICalendar.to_ics
File.write!("calendar.ics", ics)
# BEGIN:VCALENDAR
# CALSCALE:GREGORIAN
# VERSION:2.0
# BEGIN:VEVENT
# DESCRIPTION:Let's go see Star Wars.
# DTEND:20151224T084500Z
# DTSTART:20151224T083000Z
# LOCATION: 123 Fun Street\, Toronto ON\, Canada
# SUMMARY:Film with Amy and Adam
# END:VEVENT
# BEGIN:VEVENT
# DESCRIPTION:A big long meeting with lots of details.
# DTEND:20151224T223000Z
# DTSTART:20151224T190000Z
# LOCATION:456 Boring Street\, Toronto ON\, Canada
# SUMMARY:Morning meeting
# END:VEVENT
# END:VCALENDAR
```
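Reading events back is the reverse operation; a minimal sketch, assuming `ICalendar.from_ics/1` returns the parsed `%ICalendar.Event{}` structs (the file name is a placeholder):

```
ics = File.read!("calendar.ics")
# Assumption: from_ics/1 returns a list of %ICalendar.Event{} structs.
events = ICalendar.from_ics(ics)
Enum.each(events, fn event -> IO.puts(event.summary) end)
```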
See Also
---
* <https://en.wikipedia.org/wiki/ICalendar>
* <http://www.kanzaki.com/docs/ical/dateTime.html>

Copyright and License
---
Copyright (c) 2015 <NAME>
This library is released under the MIT License. See the [LICENSE.md](license.html) file for further details.
[License](license.html)
ICalendar
===
Generating ICalendars.
Summary
===
[Functions](#functions)
---
[encode_to_iodata(calendar, options \\ [])](#encode_to_iodata/2)
To create a Phoenix/Plug controller and view that outputs ics format
[encode_to_iodata!(calendar, options \\ [])](#encode_to_iodata!/2)
[from_ics(events)](#from_ics/1)
See [`ICalendar.Deserialize.from_ics/1`](ICalendar.Deserialize.html#from_ics/1).
[to_ics(events, options \\ [])](#to_ics/2)
See [`ICalendar.Serialize.to_ics/2`](ICalendar.Serialize.html#to_ics/2).
ICalendar.Deserialize protocol
===
Summary
===
[Types](#types)
---
[t()](#t:t/0)
[Functions](#functions)
---
[from_ics(ics)](#from_ics/1)
Types
===
Functions
===
ICalendar.Event
===
Calendars have events.
ICalendar.Property
===
Provide structure to define properties of an Event.
ICalendar.Recurrence
===
Adds support for recurring events.
Events can recur by frequency, count, interval, and/or start/end date. To see the specific rules and examples, see `add_recurring_events/2` below.
Credit to @fazibear for this module.
Summary
===
[Functions](#functions)
---
[get_recurrences(event, end_date \\ DateTime.utc_now())](#get_recurrences/2)
Given an event, return a stream of recurrences for that event.
Functions
===
ICalendar.Serialize protocol
===
Summary
===
[Types](#types)
---
[t()](#t:t/0)
[Functions](#functions)
---
[to_ics(data, options \\ [])](#to_ics/2)
Serialize data to iCalendar format.
Types
===
Functions
===
ICalendar.Util.DateParser
===
Responsible for parsing datestrings in predefined formats with [`parse/1`](#parse/1) and
[`parse/2`](#parse/2).
Credit to @fazibear for this module.
Summary
===
[Types](#types)
---
[valid_timezone()](#t:valid_timezone/0)
[Functions](#functions)
---
[parse(data, tzid \\ nil)](#parse/2)
Responsible for parsing datestrings in predefined formats into %DateTime{}
structs. Valid formats are defined by the "Internet Calendaring and Scheduling Core Object Specification" (RFC 2445).
Types
===
Functions
===
ICalendar.Util.Deserialize
===
Deserialize ICalendar Strings into Event structs.
Summary
===
[Functions](#functions)
---
[build_event(lines)](#build_event/1)
[desanitized(string)](#desanitized/1)
This function should strip any sanitization that has been applied to content within an iCal string.
[parse_attr(arg1, acc)](#parse_attr/2)
[parse_rrule(rrule)](#parse_rrule/1)
This function builds an rrule struct.
[retrieve_kvs(line)](#retrieve_kvs/1)
This function extracts the key and value parts from each line of a iCalendar string.
[retrieve_params(key)](#retrieve_params/1)
This function extracts parameter data from a key in an iCalendar string.
[to_date(date_string)](#to_date/1)
[to_date(date_string, map)](#to_date/2)
This function is designed to parse iCal datetime strings into erlang dates.
Functions
===
ICalendar.Util.KV
===
Build ICalendar key-value strings.
Summary
===
[Functions](#functions)
---
[build(key, value)](#build/2)
Convert a key and value to an iCal line
Functions
===
ICalendar.Value protocol
===
Summary
===
[Types](#types)
---
[t()](#t:t/0)
[Functions](#functions)
---
[to_ics(data)](#to_ics/1)
Types
===
Functions
===
API Reference
===
Modules
---
[ICalendar](ICalendar.html)
Generating ICalendars.
[ICalendar.Deserialize](ICalendar.Deserialize.html)
[ICalendar.Event](ICalendar.Event.html)
Calendars have events.
[ICalendar.Property](ICalendar.Property.html)
Provide structure to define properties of an Event.
[ICalendar.Recurrence](ICalendar.Recurrence.html)
Adds support for recurring events.
[ICalendar.Serialize](ICalendar.Serialize.html)
[ICalendar.Util.DateParser](ICalendar.Util.DateParser.html)
Responsible for parsing datestrings in predefined formats with [`parse/1`](#parse/1) and
[`parse/2`](#parse/2).
[ICalendar.Util.Deserialize](ICalendar.Util.Deserialize.html)
Deserialize ICalendar Strings into Event structs.
[ICalendar.Util.KV](ICalendar.Util.KV.html)
Build ICalendar key-value strings.
[ICalendar.Value](ICalendar.Value.html)
[Changelog](changelog.html) |
iccTraj | cran | R | Package ‘iccTraj’
September 18, 2023
Type Package
Title Estimates the Intraclass Correlation Coefficient for Trajectory
Data
Version 1.0.3
Depends R (>= 4.0)
Imports doParallel, dplyr, magic, trajectories, sp, spacetime, purrr,
utils, foreach
Description Estimates the intraclass correlation coefficient for trajectory data using a matrix of
distances between trajectories. The distances implemented are the extended Hausdorff
distances (Min et al. 2007) <doi:10.1080/13658810601073315> and the discrete Fréchet
distance (Magdy et al. 2015) <doi:10.1109/IntelCIS.2015.7397286>.
License GPL (>= 2)
Encoding UTF-8
LazyData true
RoxygenNote 7.2.3
NeedsCompilation no
Author <NAME> [aut, cre]
Maintainer <NAME> <<EMAIL>>
Repository CRAN
Date/Publication 2023-09-18 13:30:02 UTC
R topics documented:
gull_data
HD
ICC
iccTraj
interval
gull_data Gull data
Description
A data frame with a sample of 90 gull trajectories.
Usage
gull_data
Format
A data frame containing 90 trajectories
ID Subject identifier
trip Trip identifier
LONG Longitude
LAT Latitude
triptime Time in seconds when the locations were obtained
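Examples
A quick way to inspect the data (this snippet is not part of the original manual):
head(gull_data)
str(gull_data)
length(unique(gull_data$ID)) # number of gulls in the sample
nrow(unique(gull_data[c("ID", "trip")])) # 90 trajectories (ID x trip combinations)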
HD Computes extended Hausdorff distance between two trajectories.
Description
Computes extended Hausdorff distance between two trajectories.
Usage
HD(pp1, pp2, q = 1)
Arguments
pp1 Set of spatial points for the first trajectory. It can be a matrix of 2D points
(first column x/longitude, second column y/latitude), or a SpatialPoints or
SpatialPointsDataFrame object.
pp2 Set of spatial points for the second trajectory. It can be a matrix of 2D points
(first column x/longitude, second column y/latitude), or a SpatialPoints or
SpatialPointsDataFrame object.
q Quantile for the extended Hausdorff distance. The default value q=1 uses the
maximum, which leads to the classical Hausdorff distance.
Value
A numerical value with the distance.
References
<NAME>., <NAME>., <NAME>., <NAME>. (2015). Review on trajectory similarity mea-
sures. 10.1109/IntelCIS.2015.7397286.
<NAME>., <NAME>., <NAME>. (2007) Extended Hausdorff distance for spatial objects in GIS.
International Journal of Geographical Information Science, 21:4, 459–475
Examples
# Take two trajectories
library(dplyr)
library(sp)
sample_data<-gull_data %>% filter(ID %in% c(5107912,5107913), trip %in% c("V02","V01"))
tr1<-gull_data %>% filter((ID == 5107912) & (trip=="V02"))
tr2<-gull_data %>% filter((ID == 5107913) & (trip=="V01"))
pts1 = SpatialPoints(tr1[c("LONG","LAT")], proj4string=CRS("+proj=longlat"))
pts2 = SpatialPoints(tr2[c("LONG","LAT")], proj4string=CRS("+proj=longlat"))
# Hausdorff distance
HD(pts1,pts2,q=1)
# Median Hausdorff distance
HD(pts1,pts2,q=0.5)
ICC Computes the intraclass correlation coefficient (ICC) using a matrix
of distances.
Description
Computes the intraclass correlation coefficient (ICC) using a matrix of distances.
Usage
ICC(X, nt)
Arguments
X Matrix with the pairwise distances.
nt Data frame with the number of trips by subject
Details
The intraclass correlation coefficient is estimated using the distance matrix among trajectories.
Value
Data frame with the estimates of the ICC (r), the subjects’ mean sum-of-squares (MSA), the
between-subjects variance (sb), the total variance (st), and the within-subjects variance (se).
iccTraj Estimates the intraclass correlation coefficient (ICC) for trajectory
data
Description
Estimates the intraclass correlation coefficient (ICC) for trajectory data
Usage
iccTraj(
data,
ID,
trip,
LON,
LAT,
time,
projection = CRS("+proj=longlat"),
origin = "1970-01-01 UTC",
parallel = TRUE,
individual = TRUE,
distance = c("H", "F"),
bootCI = TRUE,
nBoot = 100,
q = 0.5
)
Arguments
data A data frame with the locations and times of trajectories. It is assumed the time
between locations is uniform. It must contain at least five columns: subject
identifier, trip identifier, latitude, longitude, and time of the reading.
ID Character string indicating the name of the subjects column in the dataset.
trip Character string indicating the trip column in the dataset.
LON Numeric. Longitude readings.
LAT Numeric. Latitude readings.
time Numeric. Time of the readings.
projection Projection string of class CRS-class.
origin Optional. Origin of the date-time. Only needed in the internal process to create
an object of type POSIXct.
parallel TRUE/FALSE value. Use parallel computation? Default value is TRUE.
individual TRUE/FALSE value. Compute individual within-subjects variances? Default
value is TRUE.
distance Metric used to compute the distances between trajectories. Options are **H**
for median Hausdorff distance, and **F** for discrete Fréchet distance.
bootCI TRUE/FALSE value. If TRUE it will generate bootstrap resamples. Default
value is TRUE.
nBoot Numeric. Number of bootstrap resamples. Ignored if "bootCI" is FALSE. De-
fault value is 100.
q Quantile for the extended Hausdorff distance. Default value q=0.5 leads to me-
dian Hausdorff distance.
Details
The intraclass correlation coefficient is estimated using the distance matrix among trajectories.
Bootstrap resamples are obtained using a balanced randomized cluster bootstrap approach (Davison
and Hinkley, 1997; Field and Welsh, 2007).
Value
An object of class *iccTraj*. The output is a list with the following components:
• *est*. Data frame with the following estimates: the ICC (r), the subjects’ mean sum-of-
squares (MSA), the between-subjects variance (sb), the total variance (st), and the within-
subjects variance (se).
• *boot*. If bootCI argument is set to TRUE, data frame with the bootstrap estimates.
• *D*. Data frame with the pairwise distances among trajectories.
• *indW* Data frame with the following columns: the subject’s identifier (ID), the individual
within-subjects variances (w), and the number of trips (n).
References
<NAME>., <NAME>. (1997). Bootstrap Methods and Their Application. Cambridge: Cam-
bridge University Press.
Field, C.A., <NAME>. (2007). Bootstrapping Clustered Data. Journal of the Royal Statistical
Society. Series B (Statistical Methodology). 69(3), 369-390.
Examples
# Using median Hausdorff distance.
Hd<-iccTraj(gull_data,"ID","trip","LONG","LAT","triptime")
Hd$est
# Using discrete Fréchet distance.
Fd<-iccTraj(gull_data,"ID","trip","LONG","LAT","triptime", distance="F")
Fd$est
interval Computes the confidence interval for the ICC
Description
Computes the confidence interval for the ICC
Usage
interval(x, conf = 0.95, method = c("EB", "AN", "ZT"))
Arguments
x An object of class "iccTraj"
conf Numeric. Level of confidence. Default is set to 0.95.
method String. Method used to estimate the confidence interval. Accepted values are
**EB** for Empirical Bootstrap, **AN** for asymptotic Normal, and **ZT**
for asymptotic Normal using the Z-transformation.
Details
Let $\hat{\theta}$ denote the ICC sample estimate and $\theta^B_i$ denote the ICC bootstrap estimates with $i = 1, \ldots, B$.
Let $\delta^B_{\alpha/2}$ and $\delta^B_{1-\alpha/2}$ be the $\alpha/2$ and $1-\alpha/2$ percentiles of $\delta^B_i = \theta^B_i - \hat{\theta}$. The empirical bootstrap
confidence interval is then estimated as $\left( \hat{\theta} + \delta^B_{\alpha/2},\ \hat{\theta} + \delta^B_{1-\alpha/2} \right)$.
The asymptotic Normal (AN) interval is obtained as $\hat{\theta} \pm Z_{1-\alpha/2} \cdot SE_B$, where $SE_B$ denotes the standard
deviation of $\theta^B_i$, and $Z_{1-\alpha/2}$ stands for the $1-\alpha/2$ quantile of the standard Normal distribution.
In the ZT approach, the ICC is transformed using Fisher’s Z-transformation. Then, the AN approach
is applied to the transformed ICC.
Value
A vector with the two boundaries of the confidence interval.
Examples
# Using median Hausdorff distance
Hd<-iccTraj(gull_data,"ID","trip","LONG","LAT","triptime", parallel=FALSE, distance="H")
Hd$est
interval(Hd) |
spmoran | cran | R | Package ‘spmoran’
April 28, 2023
Type Package
Title Fast Spatial Regression using Moran Eigenvectors
Version 0.2.2.9
Date 2023-04-29
Author <NAME> <<EMAIL>>
Maintainer <NAME> <<EMAIL>>
Description
Functions for estimating spatially varying coefficient models, mixed models, and other spatial
regression models for Gaussian and non-Gaussian data. Moran eigenvectors are used to approximate
Gaussian process modeling, which is interpretable in terms of the Moran coefficient. The GP is
used for modeling the spatial processes in residuals and regression coefficients. For details see
Murakami (2021) <arXiv:1703.04467>.
License GPL (>= 2)
Encoding UTF-8
Imports sf, fields, vegan, Matrix, doParallel, foreach, ggplot2,
spdep, rARPACK, RColorBrewer, splines, FNN, methods
Suggests R.rsp,
VignetteBuilder R.rsp
NeedsCompilation no
Repository CRAN
Date/Publication 2023-04-28 21:10:09 UTC
R topics documented:
besf
besf_vc
coef_marginal
coef_marginal_vc
esf
lsem
lslm
meigen
meigen0
meigen_f
nongauss_y
plot_n
plot_qr
plot_s
predict0
predict0_vc
resf
resf_qr
resf_vc
weigen
besf Spatial regression with RE-ESF for very large samples
Description
Memory-free implementation of RE-ESF-based spatial regression for very large samples. This
model estimates residual spatial dependence, constant coefficients, and non-spatially varying
coefficients (NVC; coefficients varying with respect to explanatory variable value).
Usage
besf( y, x = NULL, nvc = FALSE, nvc_sel = TRUE, coords, s_id = NULL,
covmodel="exp", enum = 200, method = "reml", penalty = "bic", nvc_num = 5,
maxiter = 30, bsize = 4000, ncores = NULL )
Arguments
y Vector of explained variables (N x 1)
x Matrix of explanatory variables (N x K)
nvc If TRUE, NVCs are assumed on x. Otherwise, constant coefficients are as-
sumed. Default is FALSE
nvc_sel If TRUE, type of coefficients (NVC or constant) is selected through a BIC (de-
fault) or AIC minimization. If FALSE, NVCs are assumed across x. Alterna-
tively, nvc_sel can be given by column number(s) of x. For example, if nvc_sel
= 2, the coefficient on the second explanatory variable in x is NVC and the other
coefficients are constants. The Default is TRUE
coords Matrix of spatial point coordinates (N x 2)
s_id Optional. ID specifying groups modeling spatially dependent process (N x 1).
If it is specified, group-level spatial process is estimated. It is useful. e.g., for
multilevel modeling (s_id is given by the group ID) and panel data modeling
(s_id is given by individual location id). Default is NULL
covmodel Type of kernel to model spatial dependence. The currently available options are
"exp" for the exponential kernel, "gau" for the Gaussian kernel, and "sph" for
the spherical kernel
enum Number of Moran eigenvectors to be used for spatial process modeling (scalar).
Default is 200
method Estimation method. Restricted maximum likelihood method ("reml") and max-
imum likelihood method ("ml") are available. Default is "reml"
penalty Penalty to select type of coefficients (NVC or constant) to stabilize the estimates.
The current options are "bic" for the Bayesian information criterion-type penalty
(N x log(K)) and "aic" for the Akaike information criterion (2K) (see Muller et
al., 2013). Default is "bic"
nvc_num Number of basis functions used to model NVC. An intercept and nvc_num nat-
ural spline basis functions are used to model each NVC. Default is 5
maxiter Maximum number of iterations. Default is 30
bsize Block/batch size. bsize x bsize elements are iteratively processed during the
parallelized computation. Default is 4000
ncores Number of cores used for the parallel computation. If ncores = NULL, the
number of available cores is detected. Default is NULL
Value
b Matrix with columns for the estimated coefficients on x, their standard errors,
z-values, and p-values (K x 4). Effective if nvc =FALSE
c_vc Matrix of estimated NVCs on x (N x K). Effective if nvc =TRUE
cse_vc Matrix of standard errors for the NVCs on x (N x K). Effective if nvc =TRUE
ct_vc Matrix of t-values for the NVCs on x (N x K). Effective if nvc =TRUE
cp_vc Matrix of p-values for the NVCs on x (N x K). Effective if nvc =TRUE
s Vector of estimated variance parameters (2 x 1). The first and the second
elements denote the standard error and the Moran’s I value of the estimated
spatially dependent component, respectively. The Moran’s I value is scaled to take
a value between 0 (no spatial dependence) and 1 (the maximum possible spatial
dependence). Based on Griffith (2003), the scaled Moran’s I value is
interpretable as follows: 0.25-0.50:weak; 0.50-0.70:moderate; 0.70-0.90:strong;
0.90-1.00:marked
e Vector whose elements are residual standard error (resid_SE), adjusted condi-
tional R2 (adjR2(cond)), restricted log-likelihood (rlogLik), Akaike informa-
tion criterion (AIC), and Bayesian information criterion (BIC). When method =
"ml", restricted log-likelihood (rlogLik) is replaced with log-likelihood (logLik)
vc List indicating whether NVC are removed or not during the BIC/AIC minimization.
1 indicates not removed whereas 0 indicates removed
r Vector of estimated random coefficients on Moran’s eigenvectors (L x 1)
sf Vector of estimated spatial dependent component (N x 1)
pred Vector of predicted values (N x 1)
resid Vector of residuals (N x 1)
other List of other outputs, which are internally used
Author(s)
<NAME>
References
<NAME>. (2003). Spatial autocorrelation and spatial filtering: gaining understanding through
theory and scientific visualization. Springer Science & Business Media.
<NAME>. and <NAME>. (2015) Random effects specifications in eigenvector spatial filter-
ing: a simulation study. Journal of Geographical Systems, 17 (4), 311-331.
<NAME>. and <NAME>. (2019) A memory-free spatial additive mixed modeling for big
spatial data. Japan Journal of Statistics and Data Science. DOI:10.1007/s42081-019-00063-x.
See Also
resf
Examples
require(spdep)
data(boston)
y <- boston.c[, "CMEDV" ]
x <- boston.c[,c("CRIM","ZN","INDUS", "CHAS", "NOX","RM", "AGE",
"DIS" ,"RAD", "TAX", "PTRATIO", "B", "LSTAT")]
xgroup <- boston.c[,"TOWN"]
coords <- boston.c[,c("LON", "LAT")]
######## Regression considering spatially dependent residuals
#res <- besf(y = y, x = x, coords=coords)
#res
######## Regression considering spatially dependent residuals and NVC
######## (coefficients or NVC is selected)
#res2 <- besf(y = y, x = x, coords=coords, nvc = TRUE)
######## Regression considering spatially dependent residuals and NVC
######## (all the coefficients are NVCs)
#res3 <- besf(y = y, x = x, coords=coords, nvc = TRUE, nvc_sel=FALSE)
besf_vc Spatially and non-spatially varying coefficient (SNVC) modeling for
very large samples
Description
Memory-free implementation of SNVC modeling for very large samples. The model estimates
residual spatial dependence, constant coefficients, spatially varying coefficients (SVCs), non-spatially
varying coefficients (NVC; coefficients varying with respect to explanatory variable value), and
SNVC (= SVC + NVC). Type of coefficients can be selected through BIC/AIC minimization. By
default, it estimates a SVC model.
Note: SNVCs can be mapped just like SVCs. Unlike SVC models, the SNVC model is robust against
spurious correlation (multicollinearity) and is therefore stable (see Murakami and Griffith, 2020).
Usage
besf_vc( y, x, xconst = NULL, coords, s_id = NULL, x_nvc = FALSE, xconst_nvc = FALSE,
x_sel = TRUE, x_nvc_sel = TRUE, xconst_nvc_sel = TRUE, nvc_num=5,
method = "reml", penalty = "bic", maxiter = 30,
covmodel="exp",enum = 200, bsize = 4000, ncores=NULL )
Arguments
y Vector of explained variables (N x 1)
x Matrix of explanatory variables with spatially varying coefficients (SVC) (N x
K)
xconst Matrix of explanatory variables with constant coefficients (N x K_c). Default is
NULL
coords Matrix of spatial point coordinates (N x 2)
s_id Optional. ID specifying groups modeling spatially dependent process (N x 1). If
it is specified, group-level spatial process is estimated. It is useful for multilevel
modeling (s_id is given by the group ID) and panel data modeling (s_id is given
by individual location id). Default is NULL
x_nvc If TRUE, SNVCs are assumed on x. Otherwise, SVCs are assumed. Default is
FALSE
xconst_nvc If TRUE, NVCs are assumed on xconst. Otherwise, constant coefficients are
assumed. Default is FALSE
x_sel If TRUE, type of coefficient (SVC or constant) on x is selected through a BIC
(default) or AIC minimization. If FALSE, SVCs are assumed across x. Alter-
natively, x_sel can be given by column number(s) of x. For example, if x_sel =
2, the coefficient on the second explanatory variable in x is SVC and the other
coefficients are constants. The Default is TRUE
x_nvc_sel If TRUE, type of coefficient (NVC or constant) on x is selected through the
BIC (default) or AIC minimization. If FALSE, NVCs are assumed across x.
Alternatively, x_nvc_sel can be given by column number(s) of x. For example,
if x_nvc_sel = 2, the coefficient on the second explanatory variable in x is NVC
and the other coefficients are constants. The Default is TRUE
xconst_nvc_sel If TRUE, type of coefficient (NVC or constant) on xconst is selected through
the BIC (default) or AIC minimization. If FALSE, NVCs are assumed across
xconst. Alternatively, xconst_nvc_sel can be given by column number(s) of
xconst. For example, if xconst_nvc_sel = 2, the coefficient on the second ex-
planatory variable in xconst is NVC and the other coefficients are constants.The
Default is TRUE
nvc_num Number of basis functions used to model NVC. An intercept and nvc_num nat-
ural spline basis functions are used to model each NVC. Default is 5
method Estimation method. Restricted maximum likelihood method ("reml") and max-
imum likelihood method ("ml") are available. Default is "reml"
penalty Penalty to select type of coefficients (SNVC, SVC, NVC, or constant) to stabilize
the estimates. The current options are "bic" for the Bayesian information
criterion-type penalty (N x log(K)) and "aic" for the Akaike information
criterion (2K) (see Muller et al., 2013). Default is "bic"
maxiter Maximum number of iterations. Default is 30
covmodel Type of kernel to model spatial dependence. The currently available options are
"exp" for the exponential kernel, "gau" for the Gaussian kernel, and "sph" for
the spherical kernel
enum Number of Moran eigenvectors to be used for spatial process modeling (scalar).
Default is 200
bsize Block/batch size. bsize x bsize elements are iteratively processed during the
parallelized computation. Default is 4000
ncores Number of cores used for the parallel computation. If ncores = NULL, the
number of available cores is detected. Default is NULL
Value
b_vc Matrix of estimated SNVC (= SVC + NVC) on x (N x K)
bse_vc Matrix of standard errors for the SNVCs on x (N x K)
z_vc Matrix of z-values for the SNVCs on x (N x K)
p_vc Matrix of p-values for the SNVCs on x (N x K)
B_vc_s List summarizing estimated SVCs (in SNVC) on x. The four elements are the
SVCs (N x K), the standard errors (N x K), z-values (N x K), and p-values (N x
K), respectively
B_vc_n List summarizing estimated NVCs (in SNVC) on x. The four elements are the
NVCs (N x K), the standard errors (N x K), z-values (N x K), and p-values (N x
K), respectively
c Matrix with columns for the estimated coefficients on xconst, their standard
errors, z-values, and p-values (K_c x 4). Effective if xconst_nvc = FALSE
c_vc Matrix of estimated NVCs on xconst (N x K_c). Effective if xconst_nvc = TRUE
cse_vc Matrix of standard errors for the NVCs on xconst (N x K_c). Effective if xconst_nvc
= TRUE
cz_vc Matrix of z-values for the NVCs on xconst (N x K_c). Effective if xconst_nvc =
TRUE
cp_vc Matrix of p-values for the NVCs on xconst (N x K_c). Effective if xconst_nvc
= TRUE
s List of variance parameters in the SNVC (SVC + NVC) on x. The first element is
a 2 x K matrix summarizing variance parameters for SVC. The (1, k)-th element
is the standard error of the k-th SVC, while the (2, k)-th element is the Moran’s I
value, which is scaled to take a value between 0 (no spatial dependence) and 1 (strongest
spatial dependence). Based on Griffith (2003), the scaled Moran’s I value is
interpretable as follows: 0.25-0.50:weak; 0.50-0.70:moderate; 0.70-0.90:strong;
0.90-1.00:marked. The second element of s is the vector of standard errors of
the NVCs
s_c Vector of standard errors of the NVCs on xconst
vc List indicating whether SVC/NVC are removed or not during the BIC/AIC minimization.
1 indicates not removed (replaced with constant) whereas 0 indicates removed
e Vector whose elements are residual standard error (resid_SE), adjusted condi-
tional R2 (adjR2(cond)), restricted log-likelihood (rlogLik), Akaike informa-
tion criterion (AIC), and Bayesian information criterion (BIC). When method =
"ml", restricted log-likelihood (rlogLik) is replaced with log-likelihood (logLik)
pred Vector of predicted values (N x 1)
resid Vector of residuals (N x 1)
other List of other outputs, which are internally used
Author(s)
<NAME>
References
<NAME>., <NAME>., and <NAME>. (2013) Model selection in linear mixed models. Statistical
Science, 28 (2), 136-167.
<NAME>., <NAME>., <NAME>., <NAME>., and <NAME>. (2017) A Moran coefficient-
based mixed effects approach to investigate spatially varying relationships. Spatial Statistics, 19,
68-89.
<NAME>., and <NAME>. (2019). Spatially varying coefficient modeling for large datasets:
Eliminating N from spatial regressions. Spatial Statistics, 30, 39-64.
<NAME>. and <NAME>. (2019) A memory-free spatial additive mixed modeling for big
spatial data. Japan Journal of Statistics and Data Science. DOI:10.1007/s42081-019-00063-x.
<NAME>., and <NAME>. (2020) Balancing spatial and non-spatial variations in varying
coefficient modeling: a remedy for spurious correlation. ArXiv.
See Also
resf_vc
Examples
require(spdep)
data(boston)
y <- boston.c[, "CMEDV"]
x <- boston.c[,c("CRIM", "AGE")]
xconst <- boston.c[,c("ZN","DIS","RAD","NOX", "TAX","RM", "PTRATIO", "B")]
xgroup <- boston.c[,"TOWN"]
coords <- boston.c[,c("LON", "LAT")]
############## SVC modeling1 #################
######## (SVC on x; Constant coefficients on xconst)
#res <- besf_vc(y=y,x=x,xconst=xconst,coords=coords, x_sel = FALSE )
#res
#plot_s(res,0) # Spatially varying intercept
#plot_s(res,1) # 1st SVC
#plot_s(res,2) # 2nd SVC
############## SVC modeling2 #################
######## (SVC or constant coefficients on x; Constant coefficients on xconst)
#res2 <- besf_vc(y=y,x=x,xconst=xconst,coords=coords )
############## SVC modeling3 #################
######## - Group-level SVC or constant coefficients on x
######## - Constant coefficients on xconst
#res3 <- besf_vc(y=y,x=x,xconst=xconst,coords=coords, s_id=xgroup)
############## SNVC modeling1 #################
######## - SNVC, SVC, NVC, or constant coefficients on x
######## - Constant coefficients on xconst
#res4 <- besf_vc(y=y,x=x,xconst=xconst,coords=coords, x_nvc =TRUE)
############## SNVC modeling2 #################
######## - SNVC, SVC, NVC, or constant coefficients on x
######## - NVC or Constant coefficients on xconst
#res5 <- besf_vc(y=y,x=x,xconst=xconst,coords=coords, x_nvc =TRUE, xconst_nvc=TRUE)
#plot_s(res5,0) # Spatially varying intercept
#plot_s(res5,1) # 1st SNVC (SVC + NVC)
#plot_s(res5,1,btype="svc")# SVC in the 1st SNVC
#plot_n(res5,1,xtype="x") # NVC in the 1st NVC on x
#plot_n(res5,6,xtype="xconst")# NVC in the 6th NVC on xconst
coef_marginal Marginal effects evaluation
Description
This function evaluates the marginal effects from x (dy/dx) based on the estimation result of resf.
This function is for non-Gaussian models transforming y using nongauss_y.
Usage
coef_marginal( mod )
Arguments
mod Output from resf
Value
b Marginal effects from x (dy/dx)
See Also
resf
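Examples
The manual gives no example for this entry; the following sketch is not part of the original
documentation and adapts the boston example used for nongauss_y elsewhere in this manual.
require(spdep)
data(boston)
y <- boston.c[, "CMEDV" ]
x <- boston.c[,c("CRIM","ZN","INDUS", "NOX","RM", "AGE")]
coords <- boston.c[,c("LON","LAT")]
meig <- meigen(coords=coords)
ng <- nongauss_y( y_nonneg = TRUE ) # Box-Cox transformation for the non-negative response
res <- resf(y = y, x = x, meig = meig, nongauss = ng)
coef_marginal(res) # marginal effects dy/dx on the original scale of y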
coef_marginal_vc Marginal effects evaluation from models with varying coefficients
Description
This function evaluates the marginal effects from x (dy/dx) based on the estimation result of resf_vc.
This function is for non-Gaussian models transforming y using nongauss_y.
Usage
coef_marginal_vc( mod )
Arguments
mod Output from resf_vc
Value
b_vc Matrix of the marginal effects of x (dy/dx) (N x K)
B_vc_s Matrix of the sub-marginal effects of x explained by the spatially varying
coefficients (N x K)
B_vc_n Matrix of the sub-marginal effects of x explained by the non-spatially varying
coefficients (N x K)
c Matrix of the marginal effects of xconst (N x K_const)
other List of other outputs, which are internally used
See Also
resf_vc
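Examples
The following sketch is not part of the original documentation; it adapts the resf_vc call used
in the predict0_vc example of this manual and adds a nongauss_y transformation so that
coef_marginal_vc applies.
require(spdep)
data(boston)
y <- boston.c[, "CMEDV" ]
x <- boston.c[,c("CRIM", "AGE")]
xconst <- boston.c[,c("ZN","DIS","RAD","NOX", "TAX","RM", "PTRATIO", "B")]
coords <- boston.c[,c("LON","LAT")]
meig <- meigen(coords=coords)
ng <- nongauss_y( y_nonneg = TRUE )
res <- resf_vc(y=y, x=x, xconst=xconst, meig=meig, nongauss=ng)
coef_marginal_vc(res) # marginal effects dy/dx on the original scale of y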
esf Spatial regression with eigenvector spatial filtering
Description
This function estimates the linear eigenvector spatial filtering (ESF) model. The eigenvectors are
selected by a forward stepwise method.
Usage
esf( y, x = NULL, vif = NULL, meig, fn = "r2" )
Arguments
y Vector of explained variables (N x 1)
x Matrix of explanatory variables (N x K). Default is NULL
vif Maximum acceptable value of the variance inflation factor (VIF) (scalar). For
example, if vif = 10, eigenvectors are selected so that the maximum VIF value
among explanatory variables and eigenvectors is equal to or less than 10. Default
is NULL
meig Moran eigenvectors and eigenvalues. Output from meigen or meigen_f
fn Objective function for the stepwise eigenvector selection. The adjusted R2
("r2"), AIC ("aic"), or BIC ("bic") are available. Alternatively, all the eigen-
vectors in meig are use if fn = "all". This is acceptable for large samples (see
Murakami and Griffith, 2019). Default is "r2"
Value
b Matrix with columns for the estimated coefficients on x, their standard errors,
t-values, and p-values (K x 4)
s Vector of statistics for the estimated spatial component (2 x 1). The first el-
ement is the standard error and the second element is the Moran’s I value of
the estimated spatially dependent component. The Moran’s I value is scaled to
take a value between 0 (no spatial dependence) and 1 (the maximum possible
spatial dependence). Based on Griffith (2003), the scaled Moran’s I value is
interpretable as follows: 0.25-0.50:weak; 0.50-0.70:moderate; 0.70-0.90:strong;
0.90-1.00:marked
r Matrix with columns for the estimated coefficients on Moran’s eigenvectors,
their standard errors, t-values, and p-values (L x 4)
vif Vector of variance inflation factors of the explanatory variables (N x 1)
e Vector whose elements are residual standard error (resid_SE), adjusted R2 (adjR2),
log-likelihood (logLik), AIC, and BIC
sf Vector of estimated spatial dependent component (Eγ) (N x 1)
pred Vector of predicted values (N x 1)
resid Vector of residuals (N x 1)
other List of other outputs, which are internally used
Author(s)
<NAME>
References
<NAME>. (2003). Spatial autocorrelation and spatial filtering: gaining understanding through
theory and scientific visualization. Springer Science & Business Media.
Tiefelsdorf, M., and <NAME>. (2007). Semiparametric filtering of spatial autocorrelation: the
eigenvector approach. Environment and Planning A, 39 (5), 1193-1221.
<NAME>. and <NAME>. (2019) Eigenvector spatial filtering for large data sets: fixed and
random effects approaches. Geographical Analysis, 51 (1), 23-49.
See Also
resf
Examples
require(spdep)
data(boston)
y <- boston.c[, "CMEDV" ]
x <- boston.c[,c("CRIM","ZN","INDUS", "CHAS", "NOX","RM", "AGE")]
coords <- boston.c[,c("LON", "LAT")]
#########Distance-based ESF
meig <- meigen(coords=coords)
esfD <- esf(y=y,x=x,meig=meig, vif=5)
esfD
#########Fast approximation
meig_f<- meigen_f(coords=coords)
esfD <- esf(y=y,x=x,meig=meig_f, vif=10, fn="all")
esfD
############################Not run
#########Topology-based ESF (it is commonly used in regional science)
#
#cknn <- knearneigh(coordinates(coords), k=4) #4-nearest neighbors
#cmat <- nb2mat(knn2nb(cknn), style="B")
#meig <- meigen(cmat=cmat, threshold=0.25)
#esfT <- esf(y=y,x=x,meig=meig)
#esfT
lsem Low rank spatial error model (LSEM) estimation
Description
This function estimates the low rank spatial error model.
Usage
lsem( y, x, weig, method = "reml" )
Arguments
y Vector of explained variables (N x 1)
x Matrix of explanatory variables (N x K)
weig eigenvectors and eigenvalues of a spatial weight matrix. Output from weigen
method Estimation method. Restricted maximum likelihood method ("reml") and max-
imum likelihood method ("ml") are available. Default is "reml"
Value
b Matrix with columns for the estimated coefficients on x, their standard errors,
t-values, and p-values (K x 4)
s Vector of estimated variance parameters (2 x 1). The first and the second elements
denote the estimated rho parameter (sp_lambda) quantifying the scale of the
spatial dependent process, and the standard error of the process (sp_SE), respectively.
e Vector whose elements are residual standard error (resid_SE), adjusted condi-
tional R2 (adjR2(cond)), restricted log-likelihood (rlogLik), Akaike informa-
tion criterion (AIC), and Bayesian information criterion (BIC). When method =
"ml", restricted log-likelihood (rlogLik) is replaced with log-likelihood (logLik)
r Vector of estimated random coefficients on the spatial eigenvectors (L x 1)
pred Vector of predicted values (N x 1)
resid Vector of residuals (N x 1)
other List of other outputs, which are internally used
Author(s)
<NAME>
References
<NAME>., <NAME>. and <NAME>. (2018) Low rank spatial econometric models. Arxiv.
See Also
meigen, meigen_f
Examples
require(spdep)
data(boston)
y <- boston.c[, "CMEDV" ]
x <- boston.c[,c("CRIM","ZN","INDUS", "CHAS", "NOX","RM", "AGE",
"DIS" ,"RAD", "TAX", "PTRATIO", "B", "LSTAT")]
coords<- boston.c[,c("LON", "LAT")]
weig <- weigen( coords )
res <- lsem(y=y,x=x,weig=weig)
res
lslm Low rank spatial lag model (LSLM) estimation
Description
This function estimates the low rank spatial lag model.
Usage
lslm( y, x, weig, method = "reml", boot = FALSE, iter = 200 )
Arguments
y Vector of explained variables (N x 1)
x Matrix of explanatory variables (N x K)
weig eigenvectors and eigenvalues of a spatial weight matrix. Output from weigen
method Estimation method. Restricted maximum likelihood method ("reml") and max-
imum likelihood method ("ml") are available. Default is "reml"
boot If it is TRUE, confidence intervals for the spatial dependence parameters (s), the
mean direct effects (de), and the mean indirect effects (ie), are estimated through
a parametric bootstrapping. Default is FALSE
iter The number of bootstrap replicates. Default is 200
Value
b Matrix with columns for the estimated coefficients on x, their standard errors,
t-values, and p-values (K x 4)
s Vector of estimated shrinkage parameters (2 x 1). The first and the second
elements denote the estimated rho parameter (sp_rho) quantifying the scale of
spatial dependence, and the standard error of the spatial dependent component
(sp_SE), respectively. If boot = TRUE, their 95 percent confidence intervals and
the resulting p-values are also provided
e Vector whose elements are residual standard error (resid_SE), adjusted condi-
tional R2 (adjR2(cond)), restricted log-likelihood (rlogLik), Akaike informa-
tion criterion (AIC), and Bayesian information criterion (BIC). When method =
"ml", restricted log-likelihood (rlogLik) is replaced with log-likelihood (logLik)
de Matrix with columns for the estimated mean direct effects on x. If boot = TRUE,
their 95 percent confidence intervals and the resulting p-values are also provided
ie Matrix with columns for the estimated mean indirect effects on x. If boot =
TRUE, their 95 percent confidence intervals and the resulting p-values are also
provided
r Vector of estimated random coefficients on the spatial eigenvectors (L x 1)
pred Vector of predicted values (N x 1)
resid Vector of residuals (N x 1)
other List of other outputs, which are internally used
Author(s)
<NAME>
References
<NAME>., <NAME>. and <NAME>. (2018) Low rank spatial econometric models. Arxiv.
See Also
weigen, lsem
Examples
require(spdep)
data(boston)
y <- boston.c[, "CMEDV" ]
x <- boston.c[,c("CRIM","ZN","INDUS", "CHAS", "NOX","RM", "AGE",
"DIS" ,"RAD", "TAX", "PTRATIO", "B", "LSTAT")]
coords <- boston.c[,c("LON", "LAT")]
weig <- weigen(coords)
res <- lslm(y=y,x=x,weig=weig)
## res <- lslm(y=y,x=x,weig=weig, boot=TRUE)
res
meigen Extraction of Moran’s eigenvectors
Description
This function calculates Moran eigenvectors and eigenvalues.
Usage
meigen( coords = NULL, model = "exp", threshold = 0,
enum = NULL, cmat = NULL, s_id = NULL )
Arguments
coords Matrix of spatial point coordinates (N x 2). If cmat is specified, it is ignored
model Type of kernel to model spatial dependence. The currently available options are
"exp" for the exponential kernel, "gau" for the Gaussian kernel, and "sph" for
the spherical kernel. Default is "exp"
threshold Threshold for the eigenvalues (scalar). Suppose that lambda_1 is the maximum
eigenvalue; this function then extracts eigenvectors whose corresponding eigenvalue
is equal to or greater than (threshold x lambda_1). threshold must be a value
between 0 and 1. Default is zero (see Details)
enum Optional. The maximum acceptable number of eigenvectors to be extracted
(scalar)
cmat Optional. A user-specified spatial connectivity matrix (N x N). It must be pro-
vided when the user wants to use a spatial connectivity matrix other than the
default matrices
s_id Optional. Location/zone ID for modeling spatial effects across groups. If speci-
fied, Moran eigenvectors are extracted by groups. It is useful e.g. for multilevel
modeling (s_id is the groups) and panel data modeling (s_id is given by individ-
ual location id). Default is NULL
Details
If cmat is not provided and model = "exp" (default), this function extracts Moran eigenvectors from
MCM, where M = I - 11’/N is a centering operator. C is a N x N connectivity matrix whose (i, j)-th
element equals exp(-d(i,j)/h), where d(i,j) is the Euclidean distance between the sample sites i and
j, and h is given by the maximum length of the minimum spanning tree connecting sample sites
(see Dray et al., 2006). If cmat is provided, this function performs the same calculation after C is
replaced with cmat.
If threshold is not provided (default), all the eigenvectors corresponding to positive eigenvalue, ex-
plaining positive spatial dependence, are extracted to model positive spatial dependence. threshold
= 0.00 or 0.25 are standard assumptions (see Griffith, 2003; Murakami and Griffith, 2015).
Value
sf Matrix of the first L eigenvectors (N x L)
ev Vector of the first L eigenvalues (L x 1)
ev_full Vector of all eigenvalues (N x 1)
other List of other outcomes, which are internally used
Author(s)
<NAME>
References
<NAME>., <NAME>., and <NAME>. (2006) Spatial modelling: a comprehensive framework
for principal coordinate analysis of neighbour matrices (PCNM). Ecological Modelling, 196 (3),
483-493.
<NAME>. (2003) Spatial autocorrelation and spatial filtering: gaining understanding through
theory and scientific visualization. Springer Science & Business Media.
<NAME>. and <NAME>. (2015) Random effects specifications in eigenvector spatial filter-
ing: a simulation study. Journal of Geographical Systems, 17 (4), 311-331.
See Also
meigen_f for fast eigen-decomposition
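Examples
The following usage sketch is not part of the original manual; it adapts the boston example
given for esf above.
require(spdep)
data(boston)
coords <- boston.c[,c("LON","LAT")]
######## Distance-based Moran eigenvectors (exponential kernel)
meig <- meigen(coords=coords)
######## Eigenvectors from a user-specified connectivity matrix (4-nearest neighbours)
cknn <- knearneigh(as.matrix(coords), k=4)
cmat <- nb2mat(knn2nb(cknn), style="B")
meigW <- meigen(cmat=cmat, threshold=0.25)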
meigen0 Nystrom extension of Moran eigenvectors
Description
This function estimates Moran eigenvectors at unobserved sites using the Nystrom extension.
Usage
meigen0( meig, coords0, s_id0 = NULL )
Arguments
coords0 Matrix of spatial point coordinates of unobserved sites (N_0 x 2)
meig Moran eigenvectors and eigenvalues. Output from meigen or meigen_f
s_id0 Optional. ID specifying groups modeling spatial effects (N_0 x 1). If specified,
Moran eigenvectors are extracted by groups. It is useful e.g. for multilevel mod-
eling (s_id is the groups) and panel data modeling (s_id is given by individual
location id). Default is NULL
Value
sf Matrix of the first L eigenvectors at unobserved sites (N_0 x L)
ev Vector of the first L eigenvalues (L x 1)
ev_full Vector of all eigenvalues (N x 1)
Author(s)
<NAME>
References
<NAME>. and <NAME>. (2005) On the Nystrom method for approximating a gram matrix
for improved kernel-based learning. Journal of Machine Learning Research, 6 (2005), 2153-2175.
See Also
meigen, meigen_f
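Examples
The following sketch is not part of the original manual; it follows the prediction example given
for predict0 below.
require(spdep)
data(boston)
samp <- sample( dim( boston.c )[ 1 ], 400)
coords <- boston.c[ samp, c("LON","LAT")] ## coordinates of observed sites
coords0 <- boston.c[-samp, c("LON","LAT")] ## coordinates of unobserved sites
meig <- meigen( coords = coords )
meig0 <- meigen0( meig = meig, coords0 = coords0 ) # eigenvectors at the unobserved sites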
meigen_f Fast approximation of Moran eigenvectors
Description
This function performs a fast approximation of Moran eigenvectors and eigenvalues.
Usage
meigen_f( coords, model = "exp", enum = 200, s_id = NULL )
Arguments
coords Matrix of spatial point coordinates (N x 2)
model Type of kernel to model spatial dependence. The currently available options are
"exp" for the exponential kernel, "gau" for the Gaussian kernel, and "sph" for
the spherical kernel. Default is "exp"
enum Number of eigenvectors and eigenvalues to be extracted (scalar). Default is 200
s_id Optional. Location/zone ID for modeling spatial effects across groups. If speci-
fied, Moran eigenvectors are extracted by groups. It is useful e.g. for multilevel
modeling (s_id is the groups) and panel data modeling (s_id is given by individ-
ual location id). Default is NULL
Details
This function extracts approximated Moran eigenvectors from MCM. M = I - 11’/N is a centering
operator, and C is a spatial connectivity matrix whose (i, j)-th element is given by exp( -d(i,j)/h),
where d(i,j) is the Euclidean distance between the sample sites i and j, and h is a range parameter
given by the maximum length of the minimum spanning tree connecting sample sites (see Dray et
al., 2006).
Following a simulation result that 200 eigenvectors are sufficient for accurate approximation of ESF
models (Murakami and Griffith, 2019), this function approximates the 200 eigenvectors correspond-
ing to the 200 largest eigenvalues by default (i.e., enum = 200). If enum is given by a smaller value
like 100, the computation time will be shorter, but with greater approximation error. Eigenvectors
corresponding to negative eigenvalues are omitted from the enum eigenvectors.
Value
sf Matrix of the first L approximated eigenvectors (N x L)
ev Vector of the first L approximated eigenvalues (L x 1)
ev_full Vector of all approximated eigenvalues (enum x 1)
other List of other outcomes, which are internally used
Author(s)
<NAME>
References
<NAME>., <NAME>., and <NAME>. (2006) Spatial modelling: a comprehensive framework
for principal coordinate analysis of neighbour matrices (PCNM). Ecological Modelling, 196 (3),
483-493.
<NAME>. and <NAME>. (2019) Eigenvector spatial filtering for large data sets: fixed and
random effects approaches. Geographical Analysis, 51 (1), 23-49.
See Also
meigen
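Examples
The following sketch is not part of the original manual.
require(spdep)
data(boston)
coords <- boston.c[,c("LON","LAT")]
meig_f <- meigen_f( coords = coords ) # approximates the first 200 eigenvectors (default)
meig_s <- meigen_f( coords = coords, enum = 100 ) # faster but coarser approximation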
nongauss_y Parameter setup for modeling non-Gaussian continuous data and
count data
Description
Parameter setup for modeling non-Gaussian continuous data and count data. The SAL transforma-
tion (see details) is used to model a wide variety of non-Gaussian data without explicitly assuming
data distribution (see Murakami et al., 2021 for further detail). In addition, Box-Cox transformation
is used for non-negative continuous variables while another transformation approximating overdis-
persed Poisson distribution is used for count variables. The output from this function is used as an
input of the resf and resf_vc functions. For further details about its implementation and case study
examples, see Murakami (2021).
Usage
nongauss_y( y_type = "continuous", y_nonneg = FALSE, tr_num = 0 )
Arguments
y_type Type of explained variables y. "continuous" for continuous variables and "count"
for count variables
y_nonneg Effective if y_type = "continuous". TRUE if y cannot take negative value. If
y_nonneg = TRUE and tr_num = 0, the Box-Cox transformation is applied to
y. If y_nonneg = TRUE and tr_num > 0, the Box-Cox transformation is applied
first to roughly Gaussianize y. Then, the SAL transformation is iterated tr_num
times to improve the modeling accuracy. Default is FALSE
tr_num Number of the SAL transformations (SinhArcsinh and Affine, where the use of
"L" stems from the "Linear") applied to Gaussianize y. Default is 0
Details
If tr_num >0, the SAL transformation is iterated tr_num times to Gaussianize y. The SAL trans-
formation is defined as SAL(y)=a+b*sinh(c*arcsinh(y)-d) where a,b,c,d are parameters. Based on
Rios and Tobar (2019), the iteration of the SAL transformation approximates a wide variety of non-
Gaussian distributions without explicitly assuming data distribution. The resf and resf_vc functions
return tr_par, which is a list whose k-th element includes the a,b,c,d parameters used for the k-th
SAL transformation.
In addition, for non-negative y (y_nonneg = TRUE), the Box-Cox transformation is applied prior to
the iterative SAL transformation. tr_num and y_nonneg can be selected by comparing the BIC (or
AIC) values across models. This compositionally-warped spatial regression approach is detailed in
Murakami et al. (2021).
For count data (y_type = "count"), an overdispersed Poisson distribution (Gaussian approximation)
is assumed. If tr_num > 0, the distribution is adjusted to fit the data (y) through the iterative SAL
transformations. y_nonneg is ignored if y_type = "count".
Value
nongauss List of parameters for modeling non-Gaussian data
References
<NAME> <NAME>. (2019) Compositionally-warped Gaussian processes. Neural Networks, 118,
235-246.
<NAME>. (2021) Transformation-based generalized spatial regression using the spmoran pack-
age: Case study examples, ArXiv.
<NAME>., <NAME>., <NAME>. and <NAME>. (2021) Compositionally-warped additive mixed
modeling for a wide variety of non-Gaussian data. Spatial Statistics, 43, 100520.
<NAME>., & <NAME>. (2021). Improved log-Gaussian approximation for over-dispersed
Poisson regression: application to spatial analysis of COVID-19. ArXiv, 2104.13588.
See Also
resf, resf_vc
Examples
###### Regression for non-negative data (BC trans.)
ng1 <-nongauss_y( y_nonneg = TRUE )
ng1
###### General non-Gaussian regression for continuous data (two SAL trans.)
ng2 <-nongauss_y( tr_num = 2 )
ng2
###### General non-Gaussian regression for non-negative continuous data
ng3 <-nongauss_y( y_nonneg = TRUE, tr_num = 5 )
ng3
###### Over-dispersed Poisson regression for count data
ng4 <-nongauss_y( y_type = "count" )
ng4
###### A general non-Gaussian regression for count data
ng5 <-nongauss_y( y_type = "count", tr_num = 5 )
ng5
############################## Fitting example
require(spdep);require(Matrix)
data(boston)
y <- boston.c[, "CMEDV" ]
x <- boston.c[,c("CRIM","ZN","INDUS", "CHAS", "NOX","RM", "AGE",
"DIS" ,"RAD", "TAX", "PTRATIO", "B", "LSTAT")]
xgroup<- boston.c[,"TOWN"]
coords<- boston.c[,c("LON","LAT")]
meig <- meigen(coords=coords)
res <- resf(y = y, x = x, meig = meig,nongauss=ng2)
res # Estimation results
plot(res$pdf,type="l") # Estimated probability density function
res$skew_kurt # Skew and kurtosis of the estimated PDF
res$pred_quantile[1:2,]# predicted value by quantile
coef_marginal(res) # Estimated marginal effects (dy/dx)
plot_n Plot non-spatially varying coefficients (NVCs)
Description
This function plots non-spatially varying coefficients (NVCs; coefficients varying with respect to
explanatory variable value) and their 95 percent confidence intervals
Usage
plot_n( mod, xnum = 1, xtype = "x", cex.lab = 20,
cex.axis = 15, lwd = 1.5, ylim = NULL, nmax = 20000 )
Arguments
mod Output from resf, besf, resf_vc, or besf_vc function
xnum The NVC on the xnum-th explanatory variable is plotted. Default is 1
xtype Effective for resf_vc and besf_vc. If "x", the num-th NVC in the spatially and
non-spatially varying coefficients on x is plotted. If "xconst", the num-th NVC
on xconst is plotted. Default is "x"
cex.lab The size of the x and y axis labels
cex.axis The size of the tick label numbers
lwd The width of the line drawing the coefficient estimates
ylim The limits of the y-axis
nmax If sample size exceeds nmax, nmax samples are randomly selected and plotted.
Default is 20,000
See Also
resf, besf, resf_vc, besf_vc
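Examples
The following sketch is not part of the original manual; it fits a resf model with NVCs on x and
plots the NVC on the first explanatory variable.
require(spdep)
data(boston)
y <- boston.c[, "CMEDV" ]
x <- boston.c[,c("CRIM","ZN","INDUS", "NOX","RM", "AGE")]
coords <- boston.c[,c("LON","LAT")]
meig <- meigen(coords=coords)
mod <- resf(y = y, x = x, meig = meig, nvc = TRUE)
plot_n(mod, xnum = 1) # NVC on the 1st explanatory variable (CRIM)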
plot_qr Plot quantile regression coefficients estimated from SF-UQR
Description
This function plots regression coefficients estimated from the spatial filter unconditional quantile
regression (SF-UQR) model.
Usage
plot_qr( mod, pnum = 1, par = "b", cex.main = 20, cex.lab = 18, cex.axis = 15, lwd = 1.5 )
Arguments
mod Output from the resf_qr function
pnum A number specifying the parameter being plotted. If par = "b", the coefficients
on the pnum-th explanatory variable are plotted (intercepts are plotted if pnum
= 1). If par = "s" and pnum = 1, the estimated standard errors for the residual
spatial process are plotted. If par = "s" and pnum = 2, the Moran’s I values
of the residual spatial process are plotted. The Moran’s I value is scaled to
take a value between 0 (no spatial dependence) and 1 (the maximum possible
spatial dependence). Based on Griffith (2003), the scaled Moran’s I value is
interpretable as follows: 0.25-0.50:weak; 0.50-0.70:moderate; 0.70-0.90:strong;
0.90-1.00:marked
par If it is "b", regression coefficients are plotted. If it is "s", shrinkage (variance)
parameters for the residual spatial process are plotted. Default is "b"
cex.main Graphical parameter specifying the size of the main title
cex.lab Graphical parameter specifying the size of the x and y axis labels
cex.axis Graphical parameter specifying the size of the tick label numbers
lwd Graphical parameters specifying the width of the line drawing the coefficient
estimates
Note
See par for the graphical parameters
See Also
resf_qr
plot_s Mapping spatially (and non-spatially) varying coefficients (SVCs or
SNVC)
Description
This function plots spatially and non-spatially varying coefficients (SNVC) or spatially varying
coefficients (SVC). Note that SNVC = SVC + NVC (NVC is a coefficient varying with respect to
explanatory variable value)
Usage
plot_s( mod, xnum = 0, btype = "snvc", xtype = "x", pmax = NULL, ncol = 8,
col = NULL, inv =FALSE, brks = "regular", cex = 1, pch = 20, nmax = 20000)
Arguments
mod Output from resf, besf, resf_vc, or besf_vc function
xnum For resf_vc and besf_vc, the xnum-th S(N)VC on x is plotted. If xnum = 0, the
spatially varying intercept is plotted. For resf and besf, the estimated spatially
dependent component in the residuals is plotted irrespective of the xnum value.
Default is 0
btype Effective for resf_vc and besf_vc. If "snvc" (default), SNVC (= SVC + NVC)
is plotted. If "svc" , SVC is plotted. If "nvc", NVC is plotted
xtype If "x" (default), coefficients on x is plotted. If "xconst", those on xconst is plotted
pmax The maximum p-value for the S(N)VC to be displayed. For example, if pmax =
0.05, only coefficients that are statistically significant at the 5 percent level are
plotted. If NULL, all the coefficients are plotted. Default is NULL
ncol Number of colors in the color palette. Default is 8
col Color palette used for the mapping. If NULL, the blue-pink-yellow color scheme
is used. Palettes in the RColorBrewer package are available. Default is NULL
inv If TRUE, the color palette is inverted. Default is FALSE
brks If "regular", color is changed at regular intervals. If "quantile", color is changed
for each quantile
cex Size of the dots representing sample sites
pch A number indicating the symbol to use
nmax If sample size exceeds nmax, nmax samples are randomly selected and plotted.
Default is 20,000
See Also
resf, besf, resf_vc, besf_vc
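Examples
The following sketch is not part of the original manual; it adapts the resf_vc call used in the
predict0_vc example above and maps the estimated coefficients.
require(spdep)
data(boston)
y <- boston.c[, "CMEDV" ]
x <- boston.c[,c("CRIM", "AGE")]
xconst <- boston.c[,c("ZN","DIS","RAD","NOX", "TAX","RM", "PTRATIO", "B")]
coords <- boston.c[,c("LON","LAT")]
meig <- meigen(coords=coords)
mod <- resf_vc(y=y, x=x, xconst=xconst, meig=meig)
plot_s(mod, 0) # spatially varying intercept
plot_s(mod, 1) # SVC on CRIM
plot_s(mod, 1, pmax = 0.05) # only locations significant at the 5 percent level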
predict0 Spatial predictions
Description
This function predicts explained variables using eigenvector spatial filtering (ESF) or random effects
ESF. The Nystrom extension is used to perform a prediction minimizing the expected prediction
error
Usage
predict0( mod, meig0, x0 = NULL, xgroup0 = NULL, offset0 = NULL,
weight0 = NULL, compute_se=FALSE, compute_quantile = FALSE )
Arguments
mod Output from esf or resf
meig0 Moran eigenvectors at predicted sites. Output from meigen0
x0 Matrix of explanatory variables at predicted sites (N_0 x K). Default is NULL
xgroup0 Matrix of group IDs that may be group IDs (integers) or group names (N_0 x
K_group). Default is NULL
offset0 Vector of offset variables at predicted sites (N_0 x 1). Effective if y is count (see
nongauss_y). Default is NULL
weight0 Vector of weights for predicted sites (N_0 x 1). Required if compute_se = TRUE
or compute_quantile = TRUE
compute_se If TRUE, predictive standard error is evaluated. It is currently supported only
for continuous variables. If nongauss is specified in mod, standard error for the
transformed y is evaluated. Default is FALSE
compute_quantile
If TRUE, the matrix of quantiles for the predicted values (N x 15) is evaluated.
It is currently supported only for continuous variables. Default is FALSE
Value
pred Matrix with the first column for the predicted values (pred). The second and
the third columns are the predicted trend component (xb) and the residual spa-
tial process (sf_residual). If xgroup0 is specified, the fourth column is the
predicted group effects (group). If tr_num > 0 or tr_nonneg ==TRUE (i.e., y
is transformed) in resf, another column including the predicted values in the
transformed/normalized scale (pred_trans) is inserted as the second column. In
addition, if compute_quantile =TRUE, predictive standard errors (pred_se) is
evaluated and inserted as another column
pred_quantile Effective if compute_quantile = TRUE. Matrix of the quantiles for the predicted
values (N x 15). It is useful to evaluate uncertainty in the predictive value
c_vc Matrix of estimated non-spatially varying coefficients (NVCs) on x0 (N x K).
Effective if nvc =TRUE in resf
cse_vc Matrix of standard errors for the NVCs on x0 (N x K).Effective if nvc =TRUE
in resf
ct_vc Matrix of t-values for the NVCs on x0 (N x K). Effective if nvc =TRUE in resf
cp_vc Matrix of p-values for the NVCs on x0 (N x K). Effective if nvc =TRUE in resf
References
<NAME>. and <NAME>. (2005) On the Nystrom method for approximating a gram matrix
for improved kernel-based learning. Journal of Machine Learning Research, 6 (2005), 2153-2175.
See Also
meigen0, predict0_vc
Examples
require(spdep)
data(boston)
samp <- sample( dim( boston.c )[ 1 ], 400)
d <- boston.c[ samp, ] ## Data at observed sites
y <- d[, "CMEDV"]
x <- d[,c("ZN","INDUS", "NOX","RM", "AGE", "DIS")]
coords <- d[,c("LON", "LAT")]
d0 <- boston.c[-samp, ] ## Data at unobserved sites
y0 <- d0[, "CMEDV"]
x0 <- d0[,c("ZN","INDUS", "NOX","RM", "AGE", "DIS")]
coords0 <- d0[,c("LON", "LAT")]
############ Model estimation
meig <- meigen( coords = coords )
mod <- resf(y=y, x=x, meig=meig)
## or
# mod <- esf(y=y,x=x,meig=meig)
############ Spatial prediction
meig0 <- meigen0( meig = meig, coords0 = coords0 )
pred0 <- predict0( mod = mod, x0 = x0, meig0 = meig0 )
pred0$pred[1:10,]
######################## If NVCs are assumed
#mod2 <- resf(y=y, x=x, meig=meig, nvc=TRUE)
#pred02 <- predict0( mod = mod2, x0 = x0, meig0 = meig0 )
#pred02$pred[1:10,] # Predicted explained variables
#pred02$c_vc[1:10,] # Predicted NVCs
predict0_vc Spatial predictions for explained variables and spatially varying coef-
ficients
Description
This function predicts explained variables and spatially and non-spatially varying coefficients. The
Nystrom extension is used to perform a prediction minimizing the expected prediction error
Usage
predict0_vc( mod, meig0, x0 = NULL, xgroup0 = NULL, xconst0 = NULL,
offset0 = NULL, weight0 = NULL, compute_se=FALSE, compute_quantile = FALSE )
Arguments
mod Output from resf_vc or besf_vc
meig0 Moran eigenvectors at predicted sites. Output from meigen0
x0 Matrix of explanatory variables at predicted sites whose coefficients are allowed
to vary across geographical space (N_0 x K). Default is NULL
xgroup0 Matrix of group indices that may be group IDs (integers) or group names (N_0
x K_group). Default is NULL
xconst0 Matrix of explanatory variables at predicted sites whose coefficients are assumed
constant (or NVC) across space (N_0 x K_const). Default is NULL
offset0 Vector of offset variables at predicted sites (N x 1). Available if y is count (see
nongauss_y). Default is NULL
weight0 Vector of weights for predicted sites (N_0 x 1). Required if compute_se = TRUE
or compute_quantile = TRUE
compute_se If TRUE, predictive standard error is evaluated. It is currently supported only
for continuous variables. If nongauss is specified in mod, standard error for the
transformed y is evaluated. Default is FALSE
compute_quantile
If TRUE, the matrix of quantiles for the predicted values (N x 15) is evaluated.
Default is FALSE
Value
pred Matrix with the first column for the predicted values (pred). The second and the
third columns are the predicted trend component (i.e., component explained by
x0 and xconst0) (xb) and the residual spatial process (sf_residual). If xgroup0
is specified, the fourth column is the predicted group effects (group) If tr_num
> 0 or tr_nonneg ==TRUE (i.e., y is transformed) in resf_vc, another column
including the predicted values in the transformed/normalized scale (pred_trans)
is inserted into the second column
b_vc Matrix of estimated spatially (and non-spatially) varying coefficients (S(N)VCs)
on x0 (N_0 x K)
bse_vc Matrix of estimated standard errors for the S(N)VCs (N_0 x K)
t_vc Matrix of estimated t-values for the S(N)VCs (N_0 x K)
p_vc Matrix of estimated p-values for the S(N)VCs (N_0 x K)
c_vc Matrix of estimated non-spatially varying coefficients (NVCs) on xconst0 (N_0
x K)
cse_vc Matrix of estimated standard errors for the NVCs (N_0 x K)
ct_vc Matrix of estimated t-values for the NVCs (N_0 x K)
cp_vc Matrix of estimated p-values for the NVCs (N_0 x K)
References
<NAME>. and <NAME>. (2005) On the Nystrom method for approximating a gram matrix
for improved kernel-based learning. Journal of Machine Learning Research, 6 (2005), 2153-2175.
<NAME>., <NAME>., <NAME>., <NAME>., and <NAME>. (2017) A Moran coefficient-
based mixed effects approach to investigate spatially varying relationships. Spatial Statistics, 19,
68-89.
See Also
meigen0, predict0
Examples
require(spdep)
data(boston)
samp <- sample( dim( boston.c )[ 1 ], 300)
d <- boston.c[ samp, ] ## Data at observed sites
y <- d[, "CMEDV"]
x <- d[,c("ZN", "LSTAT")]
xconst <- d[,c("CRIM", "NOX", "AGE", "DIS", "RAD", "TAX", "PTRATIO", "B", "RM")]
coords <- d[,c("LON", "LAT")]
d0 <- boston.c[-samp, ] ## Data at unobserved sites
y0 <- d0[, "CMEDV"]
x0 <- d0[,c("ZN", "LSTAT")]
xconst0 <- d0[,c("CRIM", "NOX", "AGE", "DIS", "RAD", "TAX", "PTRATIO", "B", "RM")]
coords0 <- d0[,c("LON", "LAT")]
############ Model estimation
meig <- meigen( coords = coords )
mod <- resf_vc(y=y, x=x, xconst=xconst, meig=meig )
############ Spatial prediction of y and spatially varying coefficients
meig0 <- meigen0( meig = meig, coords0 = coords0 )
pred0 <- predict0_vc( mod = mod, x0 = x0, xconst0=xconst0, meig0 = meig0 )
pred0$pred[1:10,] # Predicted explained variables
pred0$b_vc[1:10,] # Predicted SVCs
pred0$bse_vc[1:10,]# Predicted standard errors of the SVCs
pred0$t_vc[1:10,] # Predicted t-values of the SNVCs
pred0$p_vc[1:10,] # Predicted p-values of the SNVCs
plot(y0,pred0$pred[,1]);abline(0,1)
############ or spatial prediction of spatially varying coefficients only
# pred00 <- predict0_vc( mod = mod, meig0 = meig0 )
# pred00$b_vc[1:10,]
# pred00$bse_vc[1:10,]
# pred00$t_vc[1:10,]
# pred00$p_vc[1:10,]
######################## If SNVCs are assumed on x
# mod2 <- resf_vc(y=y, x=x, xconst=xconst, meig=meig, x_nvc=TRUE,xconst_nvc=TRUE )
# pred02 <- predict0_vc( mod = mod2, x0 = x0, xconst0=xconst0 ,meig0 = meig0 )
# pred02$pred[1:10,] # Predicted explained variables
# pred02$b_vc[1:10,] # Predicted SNVCs
# pred02$bse_vc[1:10,]# Predicted standard errors of the SNVCs
# pred02$t_vc[1:10,] # Predicted t-values of the SNVCs
# pred02$p_vc[1:10,] # Predicted p-values of the SNVCs
# plot(y0,pred02$pred[,1]);abline(0,1)
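The commented-out lines below are an additional sketch (not from the original manual) showing how the compute_se and compute_quantile arguments described above might be used; weight0 is set to a vector of ones here, which is an illustrative assumption for unweighted data, and where the standard errors and quantiles appear in the output should be checked against the returned object.
######################## Predictive uncertainty (illustrative sketch)
# pred0se <- predict0_vc( mod = mod, x0 = x0, xconst0 = xconst0, meig0 = meig0,
#                         weight0 = rep( 1, nrow( x0 ) ), # assumed: unit weights
#                         compute_se = TRUE, compute_quantile = TRUE )
# pred0se$pred[1:10,] # predictions (plus standard errors/quantiles where evaluated)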
resf Gaussian and non-Gaussian spatial regression models
Description
This model estimates regression coefficients, coefficients varying depending on x (non-spatially
varying coefficients; NVC), group effects, and residual spatial dependence. The random-effects
eigenvector spatial filtering, which is an approximate Gaussian process approach, is used for mod-
eling the spatial dependence. The explained variables are transformed to fit the data distribution
if nongauss is specified. Thus, this function is available for modeling Gaussian and non-Gaussian
continuous data and count data (see nongauss_y).
Usage
resf( y, x = NULL, xgroup = NULL, weight = NULL, offset = NULL,
nvc = FALSE, nvc_sel = TRUE, nvc_num = 5, meig,
method = "reml", penalty = "bic", nongauss = NULL )
Arguments
y Vector of explained variables (N x 1)
x Matrix of explanatory variables (N x K). Default is NULL
xgroup Matrix of group IDs. The IDs may be group numbers or group names (N x
K_group). Default is NULL
weight Vector of weights for samples (N x 1). If non-NULL, the adjusted R-squared
value is evaluated for weighted explained variables. Default is NULL
offset Vector of offset variables (N x 1). Available if y is count (y_type = "count" is
specified in the nongauss_y function). Default is NULL
nvc If TRUE, non-spatially varying coefficients (NVCs; coefficients varying with
respect to explanatory variable value) are assumed. If FALSE, constant coeffi-
cients are assumed. Default is FALSE
nvc_sel If TRUE, type of each coefficient (NVC or constant) is selected through a BIC
(default) or AIC minimization. If FALSE, NVCs are assumed across x. Alterna-
tively, nvc_sel can be given by column number(s) of x. For example, if nvc_sel
= 2, the coefficient on the second explanatory variable is NVC and the other
coefficients are constants. Default is TRUE
nvc_num Number of basis functions used to model NVC. An intercept and nvc_num nat-
ural spline basis functions are used to model each NVC. Default is 5
meig Moran eigenvectors and eigenvalues. Output from meigen or meigen_f
method Estimation method. Restricted maximum likelihood method ("reml") and max-
imum likelihood method ("ml") are available. Default is "reml"
penalty Penalty to select the type of coefficients (NVC or constant) to stabilize the estimates.
The current options are "bic" for the Bayesian information criterion-type penalty
(N x log(K)) and "aic" for the Akaike information criterion (2K). Default is "bic"
nongauss Parameter setup for modeling non-Gaussian continuous data or count data. Out-
put from nongauss_y
Details
This function estimates Gaussian and non-Gaussian spatial model for continuous and count data.
For non-Gaussian modeling, see nongauss_y.
Value
b Matrix with columns for the estimated constant coefficients on x, their standard
errors, t-values, and p-values (K x 4)
b_g List of K_group matrices with columns for the estimated group effects, their
standard errors, and t-values
c_vc Matrix of estimated NVCs on x (N x K). Effective if nvc = TRUE
cse_vc Matrix of standard errors for the NVCs on x (N x K). Effective if nvc = TRUE
ct_vc Matrix of t-values for the NVCs on x (N x K). Effective if nvc = TRUE
cp_vc Matrix of p-values for the NVCs on x (N x K). Effective if nvc = TRUE
s Vector of estimated variance parameters (2 x 1). The first and the second el-
ements are the standard error and the Moran’s I value of the estimated spa-
tially dependent process, respectively. The Moran’s I value is scaled to take a
value between 0 (no spatial dependence) and 1 (the maximum possible spa-
tial dependence). Based on Griffith (2003), the scaled Moran’s I value is in-
terpretable as follows: 0.25-0.50:weak; 0.50-0.70:moderate; 0.70-0.90:strong;
0.90-1.00:marked
s_c Vector of standard errors of the NVCs on xconst
s_g Vector of estimated standard errors of the group effects
e Error statistics. When y_type="continuous", it includes residual standard er-
ror (resid_SE), adjusted conditional R2 (adjR2(cond)), restricted log-likelihood
(rlogLik), Akaike information criterion (AIC), and Bayesian information crite-
rion (BIC). rlogLik is replaced with log-likelihood (logLik) if method = "ml".
resid_SE is replaced with the residual standard error for the transformed y (resid_SE_trans)
if nongauss is specified. When y_type="count", the error statistics includes
root mean squared error (RMSE), Gaussian likelihood approximating the model,
AIC and BIC based on the likelihood, and the proportion of the null deviance
explained by the model (deviance explained (%)). deviance explained, which is
also used in the mgcv package, corresponds to the adjusted R2 in case of the
linear regression
vc List indicating whether NVC are removed or not during the BIC/AIC minimiza-
tion. 1 indicates not removed whereas 0 indicates removed
r Vector of estimated random coefficients on Moran’s eigenvectors (L x 1)
sf Vector of the estimated spatially dependent component (N x 1)
pred Matrix of predicted values for y (pred) and their standard errors (pred_se) (N x
2). If y is transformed by specifying nongauss_y, the predicted values in the
transformed/normalized scale are added as another column named pred_trans
pred_quantile Matrix of the quantiles for the predicted values (N x 15). It is useful to evaluate
uncertainty in the predictive value
tr_par List of the parameter estimates for the tr_num SAL transformations. The k-th
element of the list includes the four parameters for the k-th SAL transformation
(see nongauss_y)
tr_bpar The estimated parameter in the Box-Cox transformation
tr_y Vector of the transformed explained variables
resid Vector of residuals (N x 1)
pdf Matrix whose first column consists of evenly spaced values within the value
range of y and the second column consists of the estimated value of the proba-
bility density function for y if y_type in nongauss_y is "continuous" and proba-
bility mass function (PMF) if y_type = "count". If offset is specified (and y_type
= "count"), the PMF given median offset value is evaluated
skew_kurt Skewness and kurtosis of the estimated probability density/mass function of y
other List of other outputs, which are internally used
Author(s)
<NAME>
References
<NAME>. and <NAME>. (2015) Random effects specifications in eigenvector spatial filter-
ing: a simulation study. Journal of Geographical Systems, 17 (4), 311-331.
<NAME>., and <NAME>. (2020) Balancing spatial and non-spatial variations in varying co-
efficient modeling: a remedy for spurious correlation. Geographical Analysis, DOI: 10.1111/gean.12310.
<NAME>., <NAME>., <NAME>. and <NAME>. (2021) Compositionally-warped additive mixed
modeling for a wide variety of non-Gaussian data. Spatial Statistics, 43, 100520.
See Also
meigen, meigen_f, coef_marginal, besf
Examples
require(spdep);require(Matrix)
data(boston)
y <- boston.c[, "CMEDV" ]
x <- boston.c[,c("CRIM","ZN","INDUS", "CHAS", "NOX","RM", "AGE",
"DIS" ,"RAD", "TAX", "PTRATIO", "B", "LSTAT")]
xgroup<- boston.c[,"TOWN"]
coords<- boston.c[,c("LON","LAT")]
meig <- meigen(coords=coords)
# meig<- meigen_f(coords=coords) ## for large samples
#####################################################
######## Gaussian spatial regression models #########
#####################################################
res <- resf(y = y, x = x, meig = meig)
res
plot_s(res) ## spatially dependent component (intercept)
######## Group-wise random intercepts ###############
#res2 <- resf(y = y, x = x, meig = meig, xgroup = xgroup)
######## Group-wise random intercepts and ###########
######## Group-level spatial dependence ###########
#meig_g<- meigen(coords=coords, s_id = xgroup)
#res3 <- resf(y = y, x = x, meig = meig_g, xgroup = xgroup)
######## Coefficients varying depending on x ########
#res4 <- resf(y = y, x = x, meig = meig, nvc = TRUE)
#res4
#plot_s(res4) # spatially dependent component (intercept)
#plot_s(res4,5) # spatial plot of the 5-th NVC
#plot_s(res4,6) # spatial plot of the 6-th NVC
#plot_s(res4,13)# spatial plot of the 13-th NVC
#plot_n(res4,5) # 1D plot of the 5-th NVC
#plot_n(res4,6) # 1D plot of the 6-th NVC
#plot_n(res4,13)# 1D plot of the 13-th NVC
#####################################################
###### Non-Gaussian spatial regression models #######
#####################################################
#### Generalized model for continuous data ##############
# - Data distribution is estimated
#ng5 <- nongauss_y( tr_num = 2 )# 2 SAL transformations to Gaussianize y
#res5 <- resf(y = y, x = x, meig = meig, nongauss = ng5)
#res5 ## tr_num may be selected by comparing BIC (or AIC)
#plot(res5$pdf,type="l") # Estimated probability density function
#res5$skew_kurt # Skew and kurtosis of the estimated PDF
#res5$pred_quantile[1:2,]# predicted value by quantile
#coef_marginal(res5) # Estimated marginal effects (dy/dx)
#### Generalized model for non-negative continuous data #
# - Data distribution is estimated
#ng6 <- nongauss_y( tr_num = 2, y_nonneg = TRUE )
#res6 <- resf(y = y, x = x, meig = meig, nongauss = ng6 )
#coef_marginal(res6)
#### Overdispersed Poisson model for count data #####
# - y is assumed as a count data
#ng7 <- nongauss_y( y_type = "count" )
#res7 <- resf(y = y, x = x, meig = meig, nongauss = ng7 )
#### Generalized model for count data ###############
# - y is assumed as a count data
# - Data distribution is estimated
#ng8 <- nongauss_y( y_type = "count", tr_num = 2 )
#res8 <- resf(y = y, x = x, meig = meig, nongauss = ng8 )
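As an additional, illustrative sketch (not in the original example), a model fitted with resf can be used for spatial prediction at unobserved sites in the same way as in the predict0 example above; coords0 and x0 below are hypothetical objects holding the coordinates and covariates at those sites.
#####################################################
###### Spatial prediction with the fitted model #####
###### (illustrative sketch; coords0/x0 are hypothetical unobserved-site data)
#####################################################
#meig0 <- meigen0( meig = meig, coords0 = coords0 )
#pred0 <- predict0( mod = res, x0 = x0, meig0 = meig0 )
#pred0$pred[1:10,]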
resf_qr Spatial filter unconditional quantile regression
Description
This function estimates the spatial filter unconditional quantile regression (SF-UQR) model.
Usage
resf_qr( y, x = NULL, meig, tau = NULL, boot = TRUE, iter = 200, ncores=NULL )
Arguments
y Vector of explained variables (N x 1)
x Matrix of explanatory variables (N x K). Default is NULL
meig Moran eigenvectors and eigenvalues. Output from meigen or meigen_f
tau The quantile(s) to be modeled. It must be a number (or a vector of numbers)
strictly between 0 and 1. By default, tau = c(0.1, 0.2, ..., 0.9)
boot If it is TRUE, confidence intervals of regression coefficients are estimated by a
semiparametric bootstrapping. Default is TRUE
iter The number of bootstrap replications. Default is 200
ncores Number of cores used for the parallel computation. If ncores=NULL, which is
the default, the number of available cores is detected and used
Value
b Matrix of estimated regression coefficients (K x Q), where Q is the number of
quantiles (i.e., the length of tau)
r Matrix of estimated random coefficients on Moran eigenvectors (L x Q)
s Vector of estimated variance parameters (2 x 1). The first and the second ele-
ments denote the standard error and the Moran’s I value of the estimated spa-
tially dependent component, respectively. The Moran’s I value is scaled to take
a value between 0 (no spatial dependence) and 1 (the maximum possible spa-
tial dependence). Based on Griffith (2003), the scaled Moran’s I value is in-
terpretable as follows: 0.25-0.50:weak; 0.50-0.70:moderate; 0.70-0.90:strong;
0.90-1.00:marked
e Vector whose elements are residual standard error (resid_SE) and adjusted quasi
conditional R2 (quasi_adjR2(cond))
B Q matrices (K x 4) summarizing bootstrapped estimates for the regression co-
efficients. Columns of these matrices consist of the estimated coefficients, the
lower and upper bounds for the 95 percent confidence intervals, and p-values.
It is returned if boot = TRUE
S Q matrices (2 x 3) summarizing bootstrapped estimates for the variance param-
eters. Columns of these matrices consist of the estimated parameters, the lower
and upper bounds for the 95 percent confidence intervals. It is returned if boot
= TRUE
B0 List of Q matrices (K x iter) summarizing bootstrapped coefficients. The q-th
matrix consists of the coefficients on the q-th quantile. Effective if boot = TRUE
S0 List of Q matrices (2 x iter) summarizing bootstrapped variance parameters. The
q-th matrix consists of the parameters on the q-th quantile. Effective if boot =
TRUE
Author(s)
<NAME>
References
<NAME>. and <NAME>. (2017) Spatially filtered unconditional quantile regression. ArXiv.
See Also
plot_qr
Examples
require(spdep)
data(boston)
y <- boston.c[, "CMEDV" ]
x <- boston.c[,c("CRIM","ZN","INDUS", "CHAS", "NOX","RM", "AGE",
"DIS" ,"RAD", "TAX", "PTRATIO", "B", "LSTAT")]
coords <- boston.c[,c("LON", "LAT")]
meig <- meigen(coords=coords)
res <- resf_qr(y=y,x=x,meig=meig, boot=FALSE)
res
plot_qr(res,1) # Intercept
plot_qr(res,2) # Coefficient on CRIM
plot_qr(res,1,"s") # spcomp_SE
plot_qr(res,2,"s") # spcomp_Moran.I/max(Moran.I)
###Not run
#res <- resf_qr(y=y,x=x,meig=meig, boot=TRUE)
#res
#plot_qr(res,1) # Intercept + 95 percent confidence interval (CI)
#plot_qr(res,2) # Coefficient on CRIM + 95 percent CI
#plot_qr(res,1,"s") # spcomp_SE + 95 percent CI
#plot_qr(res,2,"s") # spcomp_Moran.I/max(Moran.I) + 95 percent CI
resf_vc Gaussian and non-Gaussian spatial regression models with varying
coefficients
Description
This model estimates regression coefficients, spatially varying coefficients (SVCs), non-spatially
varying coefficients (NVC; coefficients varying with respect to explanatory variable value), SNVC
(= SVC + NVC), group effects, and residual spatial dependence. The random-effects eigenvector
spatial filtering, which is an approximate Gaussian process approach, is used for modeling the
spatial process in coefficients and residuals. While the resf_vc function estimates a SVC model by
default, the type of coefficients (constant, SVC, NVC, or SNVC) can be selected through a BIC/AIC
minimization. The explained variables are transformed to fit the data distribution if nongauss is
specified. Thus, this function is available for modeling Gaussian and non-Gaussian continuous data
and count data (see nongauss_y).
Note that SNVCs can be mapped just like SVCs. The SNVC model is more robust against spurious
correlation (multicollinearity) and more stable than SVC models (see Murakami and Griffith, 2020).
Usage
resf_vc(y, x, xconst = NULL, xgroup = NULL, weight = NULL, offset = NULL,
x_nvc = FALSE, xconst_nvc = FALSE, x_sel = TRUE, x_nvc_sel = TRUE,
xconst_nvc_sel = TRUE, nvc_num = 5, meig, method = "reml",
penalty = "bic", miniter = NULL, maxiter = 30, nongauss = NULL )
Arguments
y Vector of explained variables (N x 1)
x Matrix of explanatory variables with spatially varying coefficients (SVC) (N x
K)
xconst Matrix of explanatory variables with constant coefficients (N x K_c). Default is
NULL
xgroup Matrix of group IDs. The IDs may be group numbers or group names (N x K_g).
Default is NULL
weight Vector of weights for samples (N x 1). When non-NULL, the adjusted R-squared
value is evaluated for weighted explained variables. Default is NULL
offset Vector of offset variables (N x 1). Available if y is count (y_type = "count" is
specified in the nongauss_y function). Default is NULL
x_nvc If TRUE, SNVCs are assumed on x. Otherwise, SVCs are assumed. Default is
FALSE
xconst_nvc If TRUE, NVCs are assumed on xconst. Otherwise, constant coefficients are
assumed. Default is FALSE
x_sel If TRUE, type of coefficient (SVC or constant) on x is selected through a BIC
(default) or AIC minimization. If FALSE, SVCs are assumed across x. Alter-
natively, x_sel can be given by column number(s) of x. For example, if x_sel =
2, the coefficient on the second explanatory variable in x is SVC and the other
coefficients are constants. The Default is TRUE
x_nvc_sel If TRUE, type of coefficient (NVC or constant) on x is selected through the
BIC (default) or AIC minimization. If FALSE, NVCs are assumed across x.
Alternatively, x_nvc_sel can be given by column number(s) of x. For example,
if x_nvc_sel = 2, the coefficient on the second explanatory variable in x is NVC
and the other coefficients are constants. The Default is TRUE
xconst_nvc_sel If TRUE, type of coefficient (NVC or constant) on xconst is selected through
the BIC (default) or AIC minimization. If FALSE, NVCs are assumed across
xconst. Alternatively, xconst_nvc_sel can be given by column number(s) of
xconst. For example, if xconst_nvc_sel = 2, the coefficient on the second ex-
planatory variable in xconst is NVC and the other coefficients are constants.
Default is TRUE
nvc_num Number of basis functions used to model NVC. An intercept and nvc_num nat-
ural spline basis functions are used to model each NVC. Default is 5
meig Moran eigenvectors and eigenvalues. Output from meigen or meigen_f
method Estimation method. Restricted maximum likelihood method ("reml") and max-
imum likelihood method ("ml") are available. Default is "reml"
penalty Penalty to select varying coefficients and stabilize the estimates. The current
options are "bic" for the Bayesian information criterion-type penalty (N x log(K))
and "aic" for the Akaike information criterion (2K). Default is "bic"
miniter Minimum number of iterations. Default is NULL
maxiter Maximum number of iterations. Default is 30
nongauss Parameter setup for modeling non-Gaussian continuous and count data. Output
from nongauss_y
Details
This function estimates Gaussian and non-Gaussian spatial model for continuous and count data.
For non-Gaussian modeling, see nongauss_y.
Value
b_vc Matrix of estimated spatially and non-spatially varying coefficients (SNVC =
SVC + NVC) on x (N x K)
bse_vc Matrix of standard errors for the SNVCs on x (N x K)
t_vc Matrix of t-values for the SNVCs on x (N x K)
p_vc Matrix of p-values for the SNVCs on x (N x K)
B_vc_s List summarizing estimated SVCs (in SNVC) on x. The four elements are the
SVCs (N x K), the standard errors (N x K), t-values (N x K), and p-values (N x
K), respectively
B_vc_n List summarizing estimated NVCs (in SNVC) on x. The four elements are the
NVCs (N x K), the standard errors (N x K), t-values (N x K), and p-values (N x
K), respectively
c Matrix with columns for the estimated coefficients on xconst, their standard
errors, t-values, and p-values (K_c x 4). Effective if xconst_nvc = FALSE
c_vc Matrix of estimated NVCs on xconst (N x K_c). Effective if xconst_nvc = TRUE
cse_vc Matrix of standard errors for the NVCs on xconst (N x K_c). Effective if xconst_nvc
= TRUE
ct_vc Matrix of t-values for the NVCs on xconst (N x K_c). Effective if xconst_nvc =
TRUE
cp_vc Matrix of p-values for the NVCs on xconst (N x K_c). Effective if xconst_nvc
= TRUE
b_g List of K_g matrices with columns for the estimated group effects, their standard
errors, and t-values
s List of variance parameters in the SNVC (SVC + NVC) on x. The first element is
a 2 x K matrix summarizing variance parameters for SVC. The (1, k)-th element
is the standard error of the k-th SVC, while the (2, k)-th element is the Moran’s I
value, which is scaled to take a value between 0 (no spatial dependence) and 1 (strongest
spatial dependence). Based on Griffith (2003), the scaled Moran’s I value is in-
terpretable as follows: 0.25-0.50:weak; 0.50-0.70:moderate; 0.70-0.90:strong;
0.90-1.00:marked. The second element of s is the vector of standard errors of
the NVCs
s_c Vector of standard errors of the NVCs on xconst
s_g Vector of standard errors of the group effects
vc List indicating whether SVC/NVC are removed or not during the BIC/AIC min-
imization. 1 indicates not removed (replaced with constant) whereas 0 indicates
removed
e Error statistics. When y_type="continuous", it includes residual standard er-
ror (resid_SE), adjusted conditional R2 (adjR2(cond)), restricted log-likelihood
(rlogLik), Akaike information criterion (AIC), and Bayesian information crite-
rion (BIC). rlogLik is replaced with log-likelihood (logLik) if method = "ml".
resid_SE is replaced with the residual standard error for the transformed y (resid_SE_trans)
if nongauss is specified. When y_type="count", the error statistics includes
root mean squared error (RMSE), Gaussian likelihood approximating the model,
AIC and BIC based on the likelihood, and the proportion of the null deviance
explained by the model (deviance explained (%)). deviance explained, which is
also used in the mgcv package, corresponds to the adjusted R2 in case of the
linear regression
pred Matrix of predicted values for y (pred) and their standard errors (pred_se) (N x
2). If y is transformed by specifying nongauss_y, the predicted values in the
transformed/normalized scale are added as another column named pred_trans
pred_quantile Matrix of the quantiles for the predicted values (N x 15). It is useful to evaluate
uncertainty in the predictive value
tr_par List of the parameter estimates for the tr_num SAL transformations. The k-th
element of the list includes the four parameters for the k-th SAL transformation
(see nongauss_y)
tr_bpar The estimated parameter in the Box-Cox transformation
tr_y Vector of the transformed explained variables
resid Vector of residuals (N x 1)
pdf Matrix whose first column consists of evenly spaced values within the value
range of y and the second column consists of the estimated value of the proba-
bility density function for y if y_type in nongauss_y is "continuous" and prob-
ability mass function if y_type = "count". If offset is specified (and y_type =
"count"), the PMF given median offset value is evaluated
skew_kurt Skewness and kurtosis of the estimated probability density/mass function of y
other List of other outputs, which are internally used
Author(s)
<NAME>
References
<NAME>., <NAME>., <NAME>., <NAME>.A., and <NAME>. (2017) A Moran coefficient-
based mixed effects approach to investigate spatially varying relationships. Spatial Statistics, 19,
68-89.
<NAME>., <NAME>., <NAME>. and <NAME>. (2021) Compositionally-warped additive mixed
modeling for a wide variety of non-Gaussian data. Spatial Statistics, 43, 100520.
<NAME>., and <NAME>. (2021) Balancing spatial and non-spatial variations in varying co-
efficient modeling: a remedy for spurious correlation. Geographical Analysis, DOI: 10.1111/gean.12310.
<NAME>. (2003) Spatial autocorrelation and spatial filtering: gaining understanding through
theory and scientific visualization. Springer Science & Business Media.
See Also
meigen, meigen_f, coef_marginal, besf_vc
Examples
require(spdep)
data(boston)
y <- boston.c[, "CMEDV"]
x <- boston.c[,c("CRIM", "AGE")]
xconst <- boston.c[,c("ZN","DIS","RAD","NOX", "TAX","RM", "PTRATIO", "B")]
xgroup <- boston.c[,"TOWN"]
coords <- boston.c[,c("LON", "LAT")]
meig <- meigen(coords=coords)
# meig <- meigen_f(coords=coords) ## for large samples
#####################################################
############## Gaussian SVC models ##################
#####################################################
#### SVC or constant coefficients on x ##############
res <- resf_vc(y=y,x=x,xconst=xconst,meig=meig )
res
plot_s(res,0) # Spatially varying intercept
plot_s(res,1) # 1st SVC (Not shown because the SVC is estimated constant)
plot_s(res,2) # 2nd SVC
#### SVC on x #######################################
#res2 <- resf_vc(y=y,x=x,xconst=xconst,meig=meig, x_sel = FALSE )
#### Group-level SVC or constant coefficients on x ##
#### Group-wise random intercepts ###################
#meig_g <- meigen(coords, s_id=xgroup)
#res3 <- resf_vc(y=y,x=x,xconst=xconst,meig=meig_g,xgroup=xgroup)
#####################################################
############## Gaussian SNVC models #################
#####################################################
#### SNVC, SVC, NVC, or constant coefficients on x ###
#res4 <- resf_vc(y=y,x=x,xconst=xconst,meig=meig, x_nvc =TRUE)
#### SNVC, SVC, NVC, or constant coefficients on x ###
#### NVC or Constant coefficients on xconst ##########
#res5 <- resf_vc(y=y,x=x,xconst=xconst,meig=meig, x_nvc =TRUE, xconst_nvc=TRUE)
#plot_s(res5,0) # Spatially varying intercept
#plot_s(res5,1) # Spatial plot of the SNVC (SVC + NVC) on x[,1]
#plot_s(res5,1,btype="svc")# Spatial plot of SVC in the SNVC
#plot_s(res5,1,btype="nvc")# Spatial plot of NVC in the SNVC
#plot_n(res5,1) # 1D plot of the NVC
#plot_s(res5,6,xtype="xconst")# Spatial plot of the NVC on xconst[,6]
#plot_n(res5,6,xtype="xconst")# 1D plot of the NVC on xconst[,6]
#####################################################
############## Non-Gaussian SVC models ##############
#####################################################
#### Generalized model for continuous data ##########
# - Probability distribution is estimated from data
#ng6 <- nongauss_y( tr_num = 2 )# 2 SAL transformations to Gaussianize y
#res6 <- resf_vc(y=y,x=x,xconst=xconst,meig=meig, nongauss = ng6 )
#res6 # tr_num may be selected by comparing BIC (or AIC)
#coef_marginal_vc(res6) # marginal effects from x (dy/dx)
#plot(res6$pdf,type="l") # Estimated probability density function
#res6$skew_kurt # Skew and kurtosis of the estimated PDF
#res6$pred_quantile[1:2,]# predicted value by quantile
#### Generalized model for non-negative continuous data
# - Probability distribution is estimated from data
#ng7 <- nongauss_y( tr_num = 2, y_nonneg = TRUE )
#res7 <- resf_vc(y=y,x=x,xconst=xconst,meig=meig, nongauss = ng7 )
#coef_marginal_vc(res7)
#### Overdispersed Poisson model for count data #####
# - y is assumed as a count data
#ng8 <- nongauss_y( y_type = "count" )
#res8 <- resf_vc(y=y,x=x,xconst=xconst,meig=meig, nongauss = ng8 )
#### Generalized model for count data ###############
# - y is assumed as a count data
# - Probability distribution is estimated from data
#ng9 <- nongauss_y( y_type = "count", tr_num = 2 )
#res9 <- resf_vc(y=y,x=x,xconst=xconst,meig=meig, nongauss = ng9 )
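The commented-out lines below are an additional, illustrative sketch (not in the original example) for inspecting the strength of spatial dependence in the estimated SVCs; they assume that the first element of the s output is the 2 x K matrix described in the Value section, with the scaled Moran's I values in its second row.
#####################################################
###### Spatial dependence in the SVCs (sketch) ######
#####################################################
#res$s[[1]] # row 1: standard errors of the SVCs; row 2: scaled Moran's I
## interpretation following Griffith (2003): 0.25-0.50 weak; 0.50-0.70 moderate;
## 0.70-0.90 strong; 0.90-1.00 marked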
weigen Extract eigenvectors from a spatial weight matrix
Description
This function extracts eigenvectors and eigenvalues from a spatial weight matrix.
Usage
weigen( x = NULL, type = "knn", k = 4, threshold = 0.25, enum = NULL )
Arguments
x Matrix of spatial point coordinates (N x 2), sf polygon object (N spatial units),
or a user-specified spatial weight matrix (N x N) (see Details)
type Type of spatial weights. The currently available options are "knn" for the k-
nearest neighbor-based weights, and "tri" for the Delaunay triangulation-based
weights. If sf polygons are provided for x, type is ignored, and the rook-type
neighborhood matrix is created
k Number of nearest neighbors. It is used if type ="knn"
threshold Threshold for the eigenvalues (scalar). Suppose that lambda_1 is the maxi-
mum eigenvalue. Then, this function extracts eigenvectors whose corresponding
eigenvalues are equal to or greater than [threshold x lambda_1]. It must be a value
between 0 and 1. Default is 0.25 (see Details)
enum Optional. The maximum acceptable number of eigenvectors to be used for
spatial modeling (scalar)
Details
If a user-specified spatial weight matrix is provided for x, this function returns the eigen-pairs of
the matrix. Otherwise, if an sf polygon object is provided to x, the rook-type neighborhood matrix is
created using this polygon and eigen-decomposed. Otherwise, if point coordinates are provided to
x, a spatial weight matrix is created according to type and eigen-decomposed.
By default, the ARPACK routine is implemented for fast eigen-decomposition.
threshold = 0.25 (default) is a standard setting for topology-based ESF (see Tiefelsdorf and Griffith,
2007) while threshold = 0.00 is a usual setting for distance-based ESF.
Value
sf Matrix of the first L eigenvectors (N x L)
ev Vector of the first L eigenvalues (L x 1)
other List of other outcomes, which are internally used
Author(s)
<NAME>
References
Tiefelsdorf, M. and Griffith, D.A. (2007) Semiparametric filtering of spatial autocorrelation: the
eigenvector approach. Environment and Planning A, 39 (5), 1193-1221.
<NAME> <NAME>. (2018) Low rank spatial econometric models. Arxiv, 1810.02956.
See Also
meigen, meigen_f
Examples
require(spdep)
data(boston)
########## Rook adjacency-based W
poly <- st_read(system.file("shapes/boston_tracts.shp",package="spData")[1])
weig1 <- weigen( poly )
########## knn-based W
coords <- boston.c[,c("LON", "LAT")]
weig2 <- weigen( coords, type = "knn" )
########## Delaunay triangulation-based W
coords <- boston.c[,c("LON", "LAT")]
weig3 <- weigen( coords, type = "tri")
########## User-specified W
dmat <- as.matrix(dist(coords))
cmat <- exp(-dmat)
diag(cmat)<- 0
weig4 <- weigen( cmat, threshold = 0 )
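The lines below are an illustrative addition (not part of the original example) showing the enum argument described above, which caps the number of eigenvectors that are extracted.
########## Limiting the number of eigenvectors (illustrative sketch)
weig5 <- weigen( coords, type = "knn", k = 6, enum = 50 )
length( weig5$ev ) # at most 50 eigenvalues are returned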
Package ‘spbabel’
March 12, 2023
Type Package
Version 0.6.0
Title Convert Spatial Data Using Tidy Tables
Description Tools to convert from specific formats to more general forms of
spatial data. Using tables to store the actual entities present in spatial
data provides flexibility, and the functions here deliberately
minimize the level of interpretation applied, leaving that for specific
applications. Includes support for simple features, round-trip for 'Spatial' classes and long-form
tables, analogous to 'ggplot2::fortify'. There is also a more 'normal form' representation
that decomposes simple features and their kin to tables of objects, parts, and unique coordinates.
URL https://mdsumner.github.io/spbabel/
BugReports https://github.com/mdsumner/spbabel/issues
Depends R (>= 3.2.3)
Imports dplyr, methods, sp, tibble, rlang, pkgconfig
Suggests testthat, ggplot2, raster, sf, rmarkdown, covr, trip, viridis
LazyData yes
License GPL-3
RoxygenNote 7.2.3
Encoding UTF-8
ByteCompile TRUE
NeedsCompilation no
Author <NAME> [aut, cre]
Maintainer <NAME> <<EMAIL>>
Repository CRAN
Date/Publication 2023-03-12 10:20:11 UTC
R topics documented:
spbabel-package
as_tibble.sfg
feature_table
holey
map_table
mpoint1
semap
sf
show,SpatialPolygonsDataFrame-method
sp
sptable.SpatialPolygons
track
spbabel-package Convert between different types of spatial objects.
Description
Facilities for converting between different types of spatial objects, including an in-place method to
modify the underlying geometry of ’Spatial’ classes using data frame idioms. The spbabel package
provides functions to round-trip a Spatial object to a single table and back.
Details
sptable<- modify a Spatial object in-place
sptable create a tibble from Spatial DataFrame object
sp create Spatial DataFrame object from table
as_tibble.sfg Individual geometries as tibbles.
Description
Individual geometries as tibbles.
Usage
## S3 method for class 'sfg'
as_tibble(
x,
...,
.rows = NULL,
.name_repair = c("check_unique", "unique", "universal", "minimal"),
rownames = pkgconfig::get_config("tibble::rownames", NULL)
)
Arguments
x sf geometry of type sfg
... Unused, for extensibility.
.rows The number of rows, useful to create a 0-column tibble or just as an additional
check.
.name_repair Treatment of problematic column names:
• "minimal": No name repair or checks, beyond basic existence,
• "unique": Make sure names are unique and not empty,
• "check_unique": (default value), no name repair, but check they are unique,
• "universal": Make the names unique and syntactic
• a function: apply custom name repair (e.g., .name_repair = make.names
for names in the style of base R).
• A purrr-style anonymous function, see rlang::as_function()
This argument is passed on as repair to vctrs::vec_as_names(). See there
for more details on these terms and the strategies used to enforce them.
rownames How to treat existing row names of a data frame or matrix:
• NULL: remove row names. This is the default.
• NA: keep row names.
• A string: the name of a new column. Existing rownames are transferred
into this column and the row.names attribute is deleted. No name repair is
applied to the new column name, even if x already contains a column of that
name. Use as_tibble(rownames_to_column(...)) to safeguard against
this case.
Read more in rownames.
Value
tibble
feature_table Normal form for sf
Description
A ‘feature_table‘ is a normal form for simple features, where all branches are recorded in one table
with attributes object_, branch_, type_, parent_. All instances of parent_ are NA except for the
holes in multipolygon.
Usage
feature_table(x, ...)
Arguments
x sf object
... ignored
Details
There is wasted information stored this way, but that’s because this is intended as a lowest common
denominator format.
There are three tables, objects (the feature attributes and ID), branches (the parts), coordinates (the
X, Y, Z, M values).
holey Multi-part, multi-holed, neighbouring, not completely topological
polygons.
Description
Created in /data-raw/ from a manual drawing built in Manifold GIS.
map_table A decomposition of ’vector’ map data structures to tables.
Description
Creates a set of related tables to store the appropriate entities in spatial map data.
Usage
map_table(x, ...)
Arguments
x object to tidy
... arguments passed to methods
Details
The basic entities behind spatial data, and hence the "map tables" are:
vertices the positions in geometric space, e.g. x, y, z, time, long, lat, salinity etc.
branches a single connected chain of vertices, or "parts"
objects a collection of branches aligned to a row of metadata
This is the basic "topology" of traditional GIS vector data, for points, lines, polygons and their
multi-counterparts. By default map_table will produce these tables and also de-duplicate the
input vertices, adding a fourth table to link vertices to branches.
Other topology types such as triangle or quad meshes can extend this four-entity model, or exist
without the branches at all. See "mesh_table" ??
These are currently classed as object_table, branch_table, branch_link_vertex_table, and vertex_table.
But there are no methods.
Value
list of tibbles
mpoint1 MultiPointsDataFrame data set
Description
MultiPointsDataFrame data set
semap "South-east" map data.
Description
semap is the sptable version of some of maptools ’wrld_simpl’ and seatt is
the matching attribute data, linked by ’object_’. Created in /data-raw/.
Examples
# recreate as sp object
mp <- sp(semap, attr_tab = seatt, crs = "+proj=longlat +ellps=WGS84")
sf TBD Convert from dplyr tbl form to simple features.
Description
Not yet implemented.
Usage
sf(x, ...)
## S3 method for class 'data.frame'
sf(x, attr_tab = NULL, crs, ...)
Arguments
x tibble as created by sptable
... unused
attr_tab remaining data from the attributes
crs projection, defaults to NA_character_
Value
sf
show,SpatialPolygonsDataFrame-method
sp methods
Description
Sp methods
Usage
## S4 method for signature 'SpatialPolygonsDataFrame'
show(object)
## S4 method for signature 'SpatialLinesDataFrame'
show(object)
## S4 method for signature 'SpatialPointsDataFrame'
show(object)
## S4 method for signature 'Spatial'
print(x, ...)
Arguments
object Spatial object
x Spatial object
... ignored
sp Convert from dplyr tbl form to Spatial*DataFrame.
Description
Convert from dplyr tbl form to Spatial*DataFrame.
Usage
sp(x, ...)
## S3 method for class 'data.frame'
sp(x, attr_tab = NULL, crs, ...)
Arguments
x tibble as created by sptable
... unused
attr_tab remaining data from the attributes
crs projection, defaults to NA_character_
Value
Spatial*
Examples
library(dplyr)
semap1 <- semap %>% dplyr::filter(y_ > -89.9999)
sp_obj <- sp(semap1, attr_tab = seatt, crs = "+proj=longlat +ellps=WGS84")
## look, seamless Antarctica!
## library(rgdal); plot(spTransform(sp_obj, "+proj=laea +lat_0=-70"))
sptable.SpatialPolygons
Convert from various forms to a table.
Description
Decompose a Spatial or sf object to a single table structured as a row for every coordinate in all the
sub-geometries, including duplicated coordinates that close polygonal rings, close lines and shared
vertices between objects.
Usage
## S3 method for class 'SpatialPolygons'
sptable(x, ...)
## S3 method for class 'SpatialLines'
sptable(x, ...)
## S3 method for class 'SpatialPointsDataFrame'
sptable(x, ...)
## S3 method for class 'SpatialMultiPointsDataFrame'
sptable(x, ...)
sptable(object) <- value
sptable(x, ...)
## S3 method for class 'trip'
map_table(x, ...)
Arguments
x Spatial object
... ignored
object Spatial object
value modified sptable version of object
Details
Input can be a of type sf or SpatialPolygonsDataFrame, SpatialLinesDataFrame, SpatialMultiPointsDataFrame
or a SpatialPointsDataFrame. For simplicity sptable and its inverses sp and sf assume that all
geometry can be encoded with object, branch, island, order, x and y, and that the type of topology
is identified by which of these are present.
For simple features objects with mixed types of topology the result is consistent, but probably not
useful. Columns that aren’t present in one type will be present, padded with NA. (This is work in
progress).
Value
Spatial object
tibble with columns
• SpatialPolygonsDataFrame "object_" "branch_" "island_" "order_" "x_" "y_"
• SpatialLinesDataFrame "object_" "branch_" "order_" "x_" "y_"
• SpatialPointsDataFrame "object_" "x_" "y_"
• SpatialMultiPointsDataFrame "object_" "branch_" "x_" "y_"
• sf some combination of the above
Examples
## holey is a decomposed SpatialPolygonsDataFrame
spdata <- sp(holey)
library(sp)
plot(spdata, col = rainbow(nrow(spdata), alpha = 0.4))
points(holey$x_, holey$y_, cex = 4)
holes <- subset(holey, !island_)
## add the points that only belong to holes
points(holes$x_, holes$y_, pch = "+", cex = 2)
## manipulate based on topology
## convert to not-holes
notahole <- holes
notahole$island_ <- TRUE
#also convert to singular objects - note that this now means we have an overlapping pair of polys
#because the door had a hole filled by another object
notahole$object_ <- notahole$branch_
plot(sp(notahole), add = TRUE, col = "red")
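The lines below are an additional, illustrative sketch (not in the original example) of the in-place modification described for sptable<- in the package overview: the decomposed table is edited with ordinary data frame idioms and written back into the Spatial object.
## modify geometry in-place via sptable<- (illustrative sketch)
tab <- sptable(spdata)
tab$x_ <- tab$x_ + 1 # shift all x-coordinates by one unit
sptable(spdata) <- tab
plot(spdata, add = TRUE, border = "grey")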
track Multi-object track with x, y, z, and time.
Description
Created in /data-raw/track.r
incanter.mongodb
===
A simple library that provides functions for persisting Incanter data structures using MongoDB.
Use incanter.mongodb in combination with the somnium.congomongo library.
For usage examples, see the Congomongo README at <http://github.com/somnium/congomongo>,
and the examples/blog/mongodb_datasets.clj file in the Incanter distribution.
Here are Somnium's descriptions of Congomongo's functions:
(mongo! & args) : Creates a Mongo object and sets the default database.
Keyword arguments include:
:host -> defaults to localhost
:port -> defaults to 27017
:db -> defaults to nil (you'll have to set it anyway, might as well do it now.)
(get-coll coll) : Returns a DBCollection object
(fetch coll & options) : Fetches objects from a collection. Optional arguments include
:where -> takes a query map
:only -> takes an array of keys to retrieve
:as -> what to return, defaults to :clojure, can also be :json or :mongo
:from -> argument type, same options as above
:one? -> defaults to false, use fetch-one as a shortcut
:count? -> defaults to false, use fetch-count as a shortcut
(fetch-one coll & options) : same as (fetch collection :one? true)
(fetch-count coll & options) : same as (fetch collection :count? true)
(insert! coll obj & options) : Inserts a map into collection. Will not overwrite existing maps.
Takes optional from and to keyword arguments. To insert as a side-effect only specify :to as nil.
(mass-insert! coll objs & options) : Inserts a sequence of maps.
(update! coll old new & options) : Alters/inserts a map in a collection. Overwrites existing objects.
The shortcut forms need a map with valid :_id and :_ns fields or a collection and a map with a valid :_id field.
(destroy! coll query-map) : Removes map from collection. Takes a collection name and a query map
(add-index! coll fields & options) : Adds an index on the collection for the specified fields if it does not exist.
Options include:
:unique -> defaults to false
:force -> defaults to true
(drop-index! coll fields) : Drops an index on the collection for the specified fields
(drop-all-indexes! coll) : Drops all indexes from a collection
(get-indexes coll & options) : Get index information on collection
(drop-database title) : drops a database from the mongo server
(set-database title) : atomically alters the current database
(databases) : List databases on the mongo server
(collections) : Returns the set of collections stored in the current database
(drop-collection coll) : Permanently deletes a collection. Use with care.
---
#### fetch-dataset
```
(fetch-dataset & args)
```
Queries a MongoDB database, accepting the same arguments as somnium.congomongo/fetch, but returning an Incanter dataset instead of a sequence of maps.
Examples:
(use '(incanter core datasets mongodb))
(require '[clojure.core.matrix.dataset :as ds])
(use 'somnium.congomongo)
;; first load some sample data
(def data (get-dataset :airline-passengers))
(view data)
;; a MongoDB server must be running on the localhost on the default port
;; for the following steps.
(mongo! :db "mydb")
(mass-insert! :airline-data (ds/row-maps data))
;; and then retrieve it
;; notice that the retrieved data set has two additional columns, :_id :_ns
(view (fetch-dataset :airline-data))
[source](https://github.com/incanter/incanter/blob/1.9.3/modules/incanter-mongodb/src/incanter/mongodb.clj#L73)
---
#### insert-dataset
```
(insert-dataset mongodb-coll dataset)
```
Inserts the rows of the Incanter dataset into the given MongoDB collection.
Examples:
(use '(incanter core datasets mongodb))
(require '[somnium.congomongo :refer [mongo! mass-insert!]])
(def data (get-dataset :airline-passengers))
(view data)
;; a MongoDB server must be running on the localhost on the default port
;; for the following steps.
(mongo! :db "mydb")
(insert-dataset :airline-data data)
;; notice that the retrieved data set has two additional columns, :_id :_ns
(view (fetch-dataset :airline-data))
[source](https://github.com/incanter/incanter/blob/1.9.3/modules/incanter-mongodb/src/incanter/mongodb.clj#L107)
Package ‘recommenderlab’
September 20, 2023
Version 1.0.6
Date 2023-09-19
Title Lab for Developing and Testing Recommender Algorithms
Description Provides a research infrastructure to develop and evaluate
collaborative filtering recommender algorithms. This includes a sparse
representation for user-item matrices, many popular algorithms, top-N recommendations,
and cross-validation. Hahsler (2022) <doi:10.48550/arXiv.2205.12371>.
Classification/ACM G.4, H.2.8
Depends R (>= 3.5.0), Matrix, arules, proxy (>= 0.4-26)
Imports registry, methods, utils, stats, irlba, recosystem,
matrixStats
Suggests testthat
BugReports https://github.com/mhahsler/recommenderlab/issues
URL https://github.com/mhahsler/recommenderlab
License GPL-2
Copyright (C) <NAME>
NeedsCompilation no
Author <NAME> [aut, cre, cph]
(<https://orcid.org/0000-0003-2716-1405>),
<NAME> [ctb]
Maintainer <NAME> <<EMAIL>>
Repository CRAN
Date/Publication 2023-09-20 07:30:02 UTC
R topics documented:
binaryRatingMatrix
calcPredictionAccuracy
dissimilarity
Error
evaluate
evaluationResultList-class
evaluationResults-class
evaluationScheme
evaluationScheme-class
funkSVD
getList
HybridRecommender
internalFunctions
Jester5k
MovieLense
MSWeb
normalize
plot
predict
ratingMatrix
realRatingMatrix
Recommender
Recommender-class
sparseNAMatrix-class
topNList
binaryRatingMatrix Class "binaryRatingMatrix": A Binary Rating Matrix
Description
A matrix to represent binary rating data. 1 codes for a positive rating and 0 codes for either no or a
negative rating. This coding is common for market basket data where products are either bought or
not.
Objects from the Class
Objects can be created by calls of the form new("binaryRatingMatrix", data = im), where im is
an itemMatrix as defined in package arules, by coercion from a matrix (all non-zero values will
be a 1), or by using binarize for an object of class "realRatingMatrix".
Slots
data: Object of class "itemMatrix" (see package arules)
Extends
Class "ratingMatrix", directly.
Methods
coerce signature(from = "matrix", to = "binaryRatingMatrix"): The matrix needs to be a
logical matrix, or a 0-1 matrix (0 means FALSE and 1 means TRUE). NAs are interpreted as
FALSE.
coerce signature(from = "itemMatrix", to = "binaryRatingMatrix")
coerce signature(from = "data.frame", to = "binaryRatingMatrix")
coerce signature(from = "binaryRatingMatrix", to = "matrix")
coerce signature(from = "binaryRatingMatrix", to = "dgTMatrix")
coerce signature(from = "binaryRatingMatrix", to = "ngCMatrix")
coerce signature(from = "binaryRatingMatrix", to = "dgCMatrix")
coerce signature(from = "binaryRatingMatrix", to = "itemMatrix")
coerce signature(from = "binaryRatingMatrix", to = "list")
See Also
itemMatrix in arules, getList.
Examples
## create a 0-1 matrix
m <- matrix(sample(c(0,1), 50, replace=TRUE), nrow=5, ncol=10,
dimnames=list(users=paste("u", 1:5, sep=''),
items=paste("i", 1:10, sep='')))
m
## coerce it into a binaryRatingMatrix
b <- as(m, "binaryRatingMatrix")
b
## coerce it back to see if it worked
as(b, "matrix")
## use some methods defined in ratingMatrix
dim(b)
dimnames(b)
## counts
rowCounts(b) ## number of ratings per user
colCounts(b) ## number of ratings per item
## plot
image(b)
## sample and subset
sample(b,2)
b[1:2,1:5]
## coercion
as(b, "list")
head(as(b, "data.frame"))
head(getData.frame(b, ratings=FALSE))
## creation from user/item tuples
df <- data.frame(user=c(1,1,2,2,2,3), items=c(1,4,1,2,3,5))
df
b2 <- as(df, "binaryRatingMatrix")
b2
as(b2, "matrix")
calcPredictionAccuracy
Calculate the Prediction Error for a Recommendation
Description
Calculate prediction accuracy. For predicted ratings MAE (mean absolute error), MSE (mean
squared error) and RMSE (root mean squared error) are calculated. For topNLists various binary
classification metrics are returned (e.g., precision, recall, TPR, FPR).
Usage
calcPredictionAccuracy(x, data, ...)
## S4 method for signature 'realRatingMatrix,realRatingMatrix'
calcPredictionAccuracy(x, data, byUser = FALSE, ...)
## S4 method for signature 'topNList,realRatingMatrix'
calcPredictionAccuracy(x, data, byUser = FALSE,
given = NULL, goodRating = NA, ...)
## S4 method for signature 'topNList,binaryRatingMatrix'
calcPredictionAccuracy(x, data, byUser = FALSE,
given = NULL, ...)
Arguments
x Predicted items in a "topNList" or predicted ratings as a "realRatingMatrix"
data Observed true ratings for the users as a "RatingMatrix". The users have to be in
the same order as in x.
byUser logical; Should the accuracy measures be reported for each user individually
instead of being averaged over all users?
given how many items were given to create the predictions. If the data comes from
an evaluation scheme that uses all-but-x (i.e., a negative value for given), then
a vector with the number of items actually given for each prediction needs to be
supplied. This can be obtained from the evaluation scheme es via getData(es,
"given").
goodRating If x is a "topNList" and data is a "realRatingMatrix" then goodRating is used
as the threshold for determining what rating in data is considered a good rating.
... further arguments.
Details
The function calculates the accuracy of predictions compared to the observed true ratings (data)
averaged over the users. Use byUser = TRUE to get the results for each user.
If the predictions are numeric ratings (i.e., a "realRatingMatrix"), then the error measures
RMSE, MSE and MAE are calculated.
If the predictions are a "topNList", then the entries of the confusion matrix (true positives TP, false
positives FP, false negatives FN and true negatives TN) and binary classification measures like
precision, recall, TPR and FPR are calculated. If data is a "realRatingMatrix", then goodRating
has to be specified to identify items that should be recommended (i.e., have a rating of goodRating
or more). Note that you need to specify the number of items given to the recommender to create
predictions. The number of predictions by user (N) is the total number of items in the data minus
the number of given items. The number of TP is limited by the size of the top-N list. Also, since
the counts for TP, FP, FN and TN are averaged over the users (unless byUser = TRUE is used), they
will not be whole numbers.
If the ratings are a "topNList" and the observed data is a "realRatingMatrix" then goodRating is
used to determine what rating in data is considered a good rating for calculating binary classifica-
tion measures. This means that an item in the topNList is considered a true positive if it has a rating
of goodRating or better in the observed data.
Value
Returns a vector with the appropriate measures averaged over all users. For byUser=TRUE, a matrix
with a row for each user is returned.
References
A<NAME> and <NAME> (2009). A Survey of Accuracy Evaluation Metrics of Recom-
mendation Tasks, Journal of Machine Learning Research 10, 2935-2962.
See Also
topNList, binaryRatingMatrix, realRatingMatrix.
Examples
### recommender for real-valued ratings
data(Jester5k)
## create 90/10 split (known/unknown) for the first 500 users in Jester5k
e <- evaluationScheme(Jester5k[1:500, ], method = "split", train = 0.9,
k = 1, given = 15)
e
## create a user-based CF recommender using training data
r <- Recommender(getData(e, "train"), "UBCF")
## create predictions for the test data using known ratings (see given above)
p <- predict(r, getData(e, "known"), type = "ratings")
p
## compute error metrics averaged per user and then averaged over all
## recommendations
calcPredictionAccuracy(p, getData(e, "unknown"))
head(calcPredictionAccuracy(p, getData(e, "unknown"), byUser = TRUE))
## evaluate topNLists instead (you need to specify given and goodRating!)
p <- predict(r, getData(e, "known"), type = "topNList")
p
calcPredictionAccuracy(p, getData(e, "unknown"), given = 15, goodRating = 5)
## evaluate a binary recommender
data(MSWeb)
MSWeb10 <- sample(MSWeb[rowCounts(MSWeb) >10,], 50)
e <- evaluationScheme(MSWeb10, method="split", train = 0.9,
k = 1, given = 3)
e
## create a user-based CF recommender using training data
r <- Recommender(getData(e, "train"), "UBCF")
## create predictions for the test data using known ratings (see given above)
p <- predict(r, getData(e, "known"), type="topNList", n = 10)
p
calcPredictionAccuracy(p, getData(e, "unknown"), given = 3)
calcPredictionAccuracy(p, getData(e, "unknown"), given = 3, byUser = TRUE)
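The commented-out lines below are an additional sketch (not from the original example) for an all-but-x evaluation scheme, where given is negative and the per-user counts are passed via getData(e, "given") as described in the Arguments section.
## evaluation with an all-but-5 scheme (illustrative sketch)
#e2 <- evaluationScheme(Jester5k[1:500, ], method = "split", train = 0.9,
#   k = 1, given = -5)
#r2 <- Recommender(getData(e2, "train"), "UBCF")
#p2 <- predict(r2, getData(e2, "known"), type = "topNList")
#calcPredictionAccuracy(p2, getData(e2, "unknown"),
#   given = getData(e2, "given"), goodRating = 5)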
dissimilarity Dissimilarity and Similarity Calculation Between Rating Data
Description
Calculate dissimilarities/similarities between ratings by users and for items.
Usage
## S4 method for signature 'binaryRatingMatrix'
dissimilarity(x, y = NULL, method = NULL, args = NULL, which = "users")
## S4 method for signature 'realRatingMatrix'
dissimilarity(x, y = NULL, method = NULL, args = NULL, which = "users")
similarity(x, y = NULL, method = NULL, args = NULL, ...)
## S4 method for signature 'ratingMatrix'
similarity(x, y = NULL, method = NULL, args = NULL, which = "users",
min_matching = 0, min_predictive = 0)
Arguments
x a ratingMatrix.
y NULL or a second ratingMatrix to calculate cross-(dis)similarities.
method (dis)similarity measure to use. Available measures are typically "cosine", "pearson",
"jaccard", etc. See dissimilarity for class itemMatrix in arules for details
about measures for binaryRatingMatrix and dist in proxy for realRatingMatrix.
The default for realRatingMatrix is "cosine" and for binaryRatingMatrix is "jaccard".
args a list of additional arguments for the methods.
which a character string indicating if the (dis)similarity should be calculated between
"users" (rows) or "items" (columns).
min_matching, min_predictive
Thresholds on the minimum number of ratings used to calculate the similarity
and the minimum number of ratings that can be used for prediction.
... further arguments.
Details
Most dissimilarities and similarities are calculated using the proxy package. Dissimilarities are typically
converted into similarities using s = 1/(1+d) or s = 1 - d (used for Jaccard, Cosine and Pearson
correlation) depending on the measure.
Similarities are usually defined in the range [0, 1]; however, Cosine similarity and Pearson correlation
are defined on the interval [-1, 1]. We rescale these measures to the interval [0, 1] with
s' = 1/2 (s + 1).
Similarities are calculated using only the ratings that are available for both users/items. This can
lead to calculating the measure using only a very small number (maybe only one) of ratings.
min_matching is the required number of shared ratings to calculate similarities. To predict rat-
ings, there need to be additional ratings in argument y. min_predictive is the required number of
additional ratings to calculate similarities. If min_matching or min_predictive fails, then NA is
reported instead of the calculated similarity.
Value
Returns an object of class "dist", "simil" or an appropriate object (e.g., a matrix with class
"crossdist" or "crosssimil") to represent a cross-(dis)similarity.
See Also
ratingMatrix, dissimilarity in arules, and dist in proxy.
Examples
data(MSWeb)
## between 5 users
dissimilarity(MSWeb[1:5,], method = "jaccard")
similarity(MSWeb[1:5,], method = "jaccard")
## between first 3 items
dissimilarity(MSWeb[,1:3], method = "jaccard", which = "items")
similarity(MSWeb[,1:3], method = "jaccard", which = "items")
## cross-similarity between first 2 users and users 10-20
similarity(MSWeb[1:2,], MSWeb[10:20,], method="jaccard")
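The min_matching and min_predictive thresholds described in Details are not exercised above; the following is a small illustrative sketch (assuming the Jester5k data set shipped with this package) showing that user pairs with too few shared ratings are reported as NA:
## real-valued ratings: require at least 30 commonly rated jokes per user pair
## (pairs below the threshold are reported as NA; threshold chosen for illustration)
data(Jester5k)
similarity(Jester5k[1:5,], method = "pearson", min_matching = 30)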
Error Error Calculation
Description
Calculate the mean absolute error (MAE), mean square error (MSE), root mean square error (RMSE)
and for matrices also the Frobenius norm (identical to RMSE).
Usage
MSE(true, predicted, na.rm = TRUE)
RMSE(true, predicted, na.rm = TRUE)
MAE(true, predicted, na.rm = TRUE)
frobenius(true, predicted, na.rm = TRUE)
Arguments
true true values.
predicted predicted values
na.rm ignore missing values.
Details
Frobenius norm requires matrices.
Value
The error value.
Examples
true <- rnorm(10)
predicted <- rnorm(10)
MAE(true, predicted)
MSE(true, predicted)
RMSE(true, predicted)
true <- matrix(rnorm(9), nrow = 3)
predicted <- matrix(rnorm(9), nrow = 3)
frobenius(true, predicted)
evaluate Evaluate Recommender Models
Description
Evaluates a single recommender model or a list of models given an evaluation scheme and returns
evaluation metrics.
Usage
evaluate(x, method, ...)
## S4 method for signature 'evaluationScheme,character'
evaluate(x, method, type="topNList",
n=1:10, parameter=NULL, progress = TRUE, keepModel=FALSE)
## S4 method for signature 'evaluationScheme,list'
evaluate(x, method, type="topNList",
n=1:10, parameter=NULL, progress = TRUE, keepModel=FALSE)
Arguments
x an evaluation scheme (class "evaluationScheme").
method a character string or a list. If a single character string is given it defines the
recommender method used for evaluation. If several recommender methods need
to be compared, method contains a nested list. Each element describes a recommender
method and consists of a list with two elements: a character string
named "name" containing the method and a list named "parameters" containing
the parameters used for this recommender method. See Recommender for
available methods.
type evaluate "topNList" or "ratings"?
n a vector of the different values for N used to generate top-N lists (only if type="topNList").
parameter a list with parameters for the recommender algorithm (only used when method
is a single method).
progress logical; report progress?
keepModel logical; store used recommender models?
... further arguments.
Details
The evaluation uses the specification in the evaluation scheme to train a recommender models on
training data and then evaluates the models on test data. The result is a set of accuracy measures
averaged over the test users. See calcPredictionAccuracy for details on the accuracy measures
and the averaging. Note: Also the confusion matrix counts are averaged over users and therefore
not whole numbers.
See vignette("recommenderlab") for more details on the evaluaiton process and the used met-
rics.
Value
If a single recommender method is specified in method, then an object of class "evaluationResults"
is returned. If method is a list of recommendation models, then an object of class "evaluationResultList"
is returned.
See Also
calcPredictionAccuracy, evaluationScheme, evaluationResults, evaluationResultList.
Examples
### evaluate top-N list recommendations on a 0-1 data set
## Note: we sample only 100 users to make the example run faster
data("MSWeb")
MSWeb10 <- sample(MSWeb[rowCounts(MSWeb) >10,], 100)
## create an evaluation scheme (10-fold cross validation, given-3 scheme)
es <- evaluationScheme(MSWeb10, method="cross-validation",
k=10, given=3)
## run evaluation
ev <- evaluate(es, "POPULAR", n=c(1,3,5,10))
ev
## look at the results (the length of the topNList is shown as column n)
getResults(ev)
## get a confusion matrices averaged over the 10 folds
avg(ev)
plot(ev, annotate = TRUE)
## evaluate several algorithms (including a hybrid recommender) with a list
algorithms <- list(
RANDOM = list(name = "RANDOM", param = NULL),
POPULAR = list(name = "POPULAR", param = NULL),
HYBRID = list(name = "HYBRID", param =
list(recommenders = list(
RANDOM = list(name = "RANDOM", param = NULL),
POPULAR = list(name = "POPULAR", param = NULL)
)
)
)
)
evlist <- evaluate(es, algorithms, n=c(1,3,5,10))
evlist
names(evlist)
## select the first results by index
evlist[[1]]
avg(evlist[[1]])
plot(evlist, legend="topright")
### Evaluate using a data set with real-valued ratings
## Note: we sample only 100 users to make the example run faster
data("Jester5k")
es <- evaluationScheme(Jester5k[1:100], method="split",
train=.9, given=10, goodRating=5)
## Note: goodRating is used to determine positive ratings
## predict top-N recommendation lists
## (results in TPR/FPR and precision/recall)
ev <- evaluate(es, "RANDOM", type="topNList", n=10)
getResults(ev)
## predict missing ratings
## (results in RMSE, MSE and MAE)
ev <- evaluate(es, "RANDOM", type="ratings")
getResults(ev)
evaluationResultList-class
Class "evaluationResultList": Results of the Evaluation of a Multiple
Recommender Methods
Description
Contains the evaluation results for several runs using multiple recommender methods in form of
confusion matrices. For each run the used models might be avialable.
Objects from the Class
Objects are created by evaluate.
Slots
.Data: Object of class "list": a list of "evaluationResults".
Extends
Class "list", from data part.
Methods
avg signature(x = "evaluationResultList"): returns a list of average confusion matrices.
[ signature(x = "evaluationResultList", i = "ANY", j = "missing", drop = "missing")
coerce signature(from = "list", to = "evaluationResultList")
show signature(object = "evaluationResultList")
See Also
evaluate, evaluationResults.
evaluationResults-class
Class "evaluationResults": Results of the Evaluation of a Single Rec-
ommender Method
Description
Contains the evaluation results for several runs using the same recommender method in form of
confusion matrices. For each run the used model might be avialable.
Objects from the Class
Objects are created by evaluate.
Slots
results: Object of class "list": contains objects of class "ConfusionMatrix", one for each run
specified in the used evaluation scheme.
Methods
avg signature(x = "evaluationResults"): returns the evaluation metrics averaged over cross-
validation folds.
getConfusionMatrix signature(x = "evaluationResults"): Deprecated. Use getResults().
getResults signature(x = "evaluationResults"): returns a list of evaluation metrics with one
element for each cross-validation fold.
getModel signature(x = "evaluationResults"): returns a list of used recommender models (if
available).
getRuns signature(x = "evaluationResults"): returns the number of runs/number of confusion
matrices.
show signature(object = "evaluationResults")
See Also
evaluate
evaluationScheme Creator Function for evaluationScheme
Description
Creates an evaluationScheme object from a data set. The scheme can be a simple split into training
and test data, k-fold cross-validation or using k independent bootstrap samples.
Usage
evaluationScheme(data, ...)
## S4 method for signature 'ratingMatrix'
evaluationScheme(data, method="split",
train=0.9, k=NULL, given, goodRating = NA)
Arguments
data data set as a ratingMatrix.
method a character string defining the evaluation method to use (see details).
train fraction of the data set used for training.
k number of folds/times to run the evaluation (defaults to 10 for cross-validation
and bootstrap and 1 for split).
given single number of items given for evaluation or a vector of length of data giving
the number of items given for each observation. Negative values implement
all-but schemes. For example, given = -1 means all-but-1 evaluation.
goodRating numeric; threshold at which ratings are considered good for evaluation. E.g.,
with goodRating=3 all items with an actual user rating greater than or equal to 3 are
considered positives in the evaluation process. Note that this argument is only
used if the ratingMatrix is of subclass realRatingMatrix!
... further arguments.
Details
evaluationScheme creates an evaluation scheme (training and test data) with k runs and one of the
following methods:
"split" randomly assigns the proportion of objects specified by train to the training set and the
rest is used for the test set.
"cross-validation" creates a k-fold cross-validation scheme. The data is randomly split into k
parts and in each run k-1 parts are used for training and the remaining part is used for testing. After
all k runs each part was used as the test set exactly once.
"bootstrap" creates the training set by taking a bootstrap sample (sampling with replacement) of
size train times number of users in the data set. All objects not in the training set are used for
testing.
For evaluation, Breese et al. (1998) introduced the four experimental protocols called Given 2,
Given 5, Given 10 and All-but-1. During testing, the Given x protocol presents the algorithm with
only x randomly chosen items for the test user, and the algorithm is evaluated by how well it is able
to predict the withheld items. For All-but-x, the algorithm sees all but x withheld ratings for the
test user. given controls x in the evaluation scheme. Positive integers result in a Given x protocol,
while negative values produce an All-but-x protocol.
If a user does not have enough ratings to satisfy given, then the user is dropped from the evaluation
with a warning.
Value
Returns an object of class "evaluationScheme".
References
<NAME> (1995). "A study of cross-validation and bootstrap for accuracy estimation and model
selection". Proceedings of the Fourteenth International Joint Conference on Artificial Intelligence,
pp. 1137-1143.
<NAME>, <NAME>, <NAME> (1998). "Empirical Analysis of Predictive Algorithms for Collaborative
Filtering." In Uncertainty in Artificial Intelligence. Proceedings of the Fourteenth Conference,
pp. 43-52.
See Also
getData, evaluationScheme, ratingMatrix.
Examples
data("MSWeb")
MSWeb10 <- sample(MSWeb[rowCounts(MSWeb) >10,], 50)
MSWeb10
## simple split with 3 items given
esSplit <- evaluationScheme(MSWeb10, method="split",
train = 0.9, k=1, given=3)
esSplit
## 4-fold cross-validation with all-but-1 items for learning.
esCross <- evaluationScheme(MSWeb10, method="cross-validation",
k=4, given=-1)
esCross
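As a brief sketch, the individual parts of a scheme created above can be inspected with getData (the available types are described under the evaluationScheme class below):
## training data, known/unknown parts of the test data, and the
## number of items given per test user
getData(esSplit, "train")
getData(esSplit, "known")
getData(esSplit, "unknown")
getData(esSplit, "given")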
evaluationScheme-class
Class "evaluationScheme": Evaluation Scheme
Description
An evaluation scheme created from a data set. The scheme can be a simple split into training and
test data, k-fold cross-validation or using k bootstrap samples.
Objects from the Class
Objects can be created by evaluationScheme(data, method="split", train=0.9, k=NULL, given=3).
Slots
data: Object of class "ratingMatrix"; the data set.
given: Object of class "integer"; given ratings are randomly selected for each evaluation user
and presented to the recommender algorithm to calculate recommended items/ratings. The
recommended items are compared to the remaining items for the evaluation user.
goodRating: Object of class "numeric"; Rating at which an item is considered a positive for
evaluation.
k: Object of class "integer"; number of runs for evaluation. Default is 1 for method "split" and
10 for "cross-validation" and "bootstrap".
knownData: Object of class "ratingMatrix"; data set with only known (given) items.
method: Object of class "character"; evaluation method. Available methods are: "split", "cross-
validation" and "bootstrap".
runsTrain: Object of class "list"; internal representation of the split into training and test data for
the evaluation runs.
train: Object of class "numeric"; portion of data used for training for "split" and "bootstrap".
unknownData: Object of class "ratingMatrix"; data set with only unknown items.
Methods
getData signature(x = "evaluationScheme"): access data. Parameters are type ("train", "known",
"unknown" or "given") and run (1...k). "train" returns the training data for the run, "known"
returns the known ratings used for prediction for the test data, "unknown" returns the ratings
used for evaluation for the test data, and "given" returns the number of items that were given
in "known". If given was a positive number, then this will be a vector with this
number, but if given was negative (all-but-x), then the number of given items for each test
user will be different.
show signature(object = "evaluationScheme")
See Also
ratingMatrix and the creator function evaluationScheme.
funkSVD Funk SVD for Matrices with Missing Data
Description
Implements matrix decomposition by the stochastic gradient descent optimization popularized by
<NAME> to minimize the error on the known values. This function is used by the recommender
method "SVDF" (see Recommender).
Usage
funkSVD(x, k = 10, gamma = 0.015, lambda = 0.001,
min_improvement = 1e-06, min_epochs = 50, max_epochs = 200,
verbose = FALSE)
Arguments
x a matrix, potentially containing NAs.
k number of features (i.e., rank of the approximation).
gamma regularization term.
lambda learning rate.
min_improvement
required minimum improvement per iteration.
min_epochs minimum number of iterations per feature.
max_epochs maximum number of iterations per feature.
verbose show progress.
Details
Funk SVD decomposes a matrix (with missing values) into two components U and V. The singular
values are folded into these matrices. The approximation for the original matrix can be obtained by
R = UV'.
The predict function in this implementation folds in new data rows by estimating the u vectors
using gradient descent and then calculating the reconstructed complete matrix r for these users via
r = uV'.
Value
An object of class "funkSVD" with components
U the U matrix.
V the V matrix.
parameters a list with parameter values.
Note
The code is based on the implementation in package rrecsys by <NAME> and <NAME>.
References
<NAME>, <NAME>, and <NAME>. Matrix Factorization Techniques for Recommender Systems,
IEEE Computer, pp. 42-49, August 2009.
Examples
# this takes a while to run!
## Not run:
data("Jester5k")
# helper to calculate root mean squared error
rmse <- function(pred, truth) sqrt(mean((truth - pred)^2, na.rm = TRUE))
train <- as(Jester5k[1:100], "matrix")
fsvd <- funkSVD(train, verbose = TRUE)
# reconstruct the original rating matrix as R = UV'
r <- tcrossprod(fsvd$U, fsvd$V)
rmse(train, r)
# fold in new users for matrix completion
test <- as(Jester5k[101:105], "matrix")
p <- predict(fsvd, test, verbose = TRUE)
rmse(test, p)
## End(Not run)
getList List and Data.frame Representation for Recommender Matrix Objects
Description
Create a list or data.frame representation for various objects used in recommenderlab. These
functions are used in addition to available coercion to allow for parameters like decode.
Usage
getList(from, ...)
## S4 method for signature 'realRatingMatrix'
getList(from, decode = TRUE, ratings = TRUE, ...)
## S4 method for signature 'binaryRatingMatrix'
getList(from, decode = TRUE, ...)
## S4 method for signature 'topNList'
getList(from, decode = TRUE, ...)
getData.frame(from, ...)
## S4 method for signature 'ratingMatrix'
getData.frame(from, decode = TRUE, ratings = TRUE, ...)
Arguments
from object to be represented as a list.
decode use item names or item IDs (column numbers) for items?
ratings include ratings in the list or data.frame?
... further arguments (currently unused).
Details
Lists have one vector with items (and ratings) per user. The data.frame has one row per rating with
the user in the first column, the item as the second and the rating as the third.
Value
Returns a list or a data.frame.
See Also
binaryRatingMatrix, realRatingMatrix, topNList.
Examples
data(Jester5k)
getList(Jester5k[1,])
getData.frame(Jester5k[1,])
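A short sketch of the decode and ratings arguments described above, continuing with the Jester5k data:
## use column indices instead of item labels and drop the ratings
getList(Jester5k[1,], decode = FALSE, ratings = FALSE)
getData.frame(Jester5k[1,], ratings = FALSE)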
HybridRecommender Create a Hybrid Recommender
Description
Creates and combines recommendations using several recommender algorithms.
Usage
HybridRecommender(..., weights = NULL, aggregation_type = "sum")
Arguments
... objects of class ’Recommender’.
weights weights for the recommenders. The recommenders are equally weighted by
default.
aggregation_type
How the recommendations are aggregated. Options are "sum", "min", and "max".
Details
The hybrid recommender is initialized with a set of pretrained Recommender objects. Typically, the
algorithms are trained using the same training set. If different training sets are used, then, at least
the training sets need to have the same items in the same order.
Alternatively, hybrid recommenders can be created using the regular Recommender() interface.
Here method is set to HYBRID and parameter contains a list with recommenders and weights.
recommenders is a list of recommender algorithms, where each algorithm is represented as a list with
elements name (the method of the recommender) and parameters (the algorithm's parameters). This
method can be used in evaluate().
For creating recommendations (predict), each recommender algorithm is used to create ratings.
The individual ratings are combined using a weighted sum where missing ratings are ignored.
Weights can be specified in weights.
Value
An object of class ’Recommender’.
See Also
Recommender
Examples
data("MovieLense")
MovieLense100 <- MovieLense[rowCounts(MovieLense) >100,]
train <- MovieLense100[1:100]
test <- MovieLense100[101:103]
## mix popular movies with random recommendations for diversity and
## rerecommend some movies the user liked.
recom <- HybridRecommender(
Recommender(train, method = "POPULAR"),
Recommender(train, method = "RANDOM"),
Recommender(train, method = "RERECOMMEND"),
weights = c(.6, .1, .3)
)
recom
getModel(recom)
as(predict(recom, test), "list")
## create a hybrid recommender using the regular Recommender interface.
## This is needed to use hybrid recommenders with evaluate().
recommenders <- list(
RANDOM = list(name = "POPULAR", param = NULL),
POPULAR = list(name = "RANDOM", param = NULL),
RERECOMMEND = list(name = "RERECOMMEND", param = NULL)
)
weights <- c(.6, .1, .3)
recom <- Recommender(train, method = "HYBRID",
parameter = list(recommenders = recommenders, weights = weights))
recom
as(predict(recom, test), "list")
internalFunctions Internal Utility Functions
Description
Utility functions used internally by recommender algorithms. See files starting with RECOM in the
package’s R directory for examples of usage.
Usage
returnRatings(ratings, newdata,
type = c("topNList", "ratings", "ratingMatrix"),
n, randomize = NULL, minRating = NA)
getParameters(defaults, parameter)
Arguments
ratings a realRatingMatrix.
newdata a realRatingMatrix.
type type of recommendation to return.
n max. number of entries in the top-N list.
randomize randomization factor for producing the top-N list.
minRating do not include ratings less than this.
defaults list with parameters and default values.
parameter list with actual parameters.
Details
returnRatings is used in the predict function of recommender algorithms to return different types
of recommendations.
getParameters is a helper function which checks parameters for consistency and provides default
values. Used in the Recommender constructor.
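A minimal sketch of getParameters; the parameter names below are made up purely for illustration:
## merge user-supplied parameters into a list of defaults
## (hypothetical parameter names)
getParameters(defaults = list(k = 30, method = "cosine"),
parameter = list(k = 50))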
Jester5k Jester dataset (5k sample)
Description
The data set contains a sample of 5000 users from the anonymous ratings data from the Jester Online
Joke Recommender System collected between April 1999 and May 2003.
Usage
data(Jester5k)
Format
The format of Jester5k is: Formal class ’realRatingMatrix’ [package "recommenderlab"]
The format of JesterJokes is: vector of character strings.
Details
Jester5k contains a 5000 x 100 rating matrix (5000 users and 100 jokes) with ratings between
-10.00 and +10.00. All selected users have rated 36 or more jokes.
The data also contains the actual jokes in JesterJokes.
References
<NAME>, <NAME>, <NAME>, and <NAME>. "Eigentaste: A Constant Time
Collaborative Filtering Algorithm." Information Retrieval, 4(2), 133-151. July 2001.
Examples
data(Jester5k)
Jester5k
## number of ratings
nratings(Jester5k)
## number of ratings per user
summary(rowCounts(Jester5k))
## rating distribution
hist(getRatings(Jester5k), main="Distribution of ratings")
## 'best' joke with highest average rating
best <- which.max(colMeans(Jester5k))
cat(JesterJokes[best])
MovieLense MovieLense Dataset (100k)
Description
The 100k MovieLense ratings data set. The data was collected through the MovieLens web site
(movielens.umn.edu) during the seven-month period from September 19th, 1997 through April
22nd, 1998. The data set contains about 100,000 ratings (1-5) from 943 users on 1664 movies.
Movie and user metadata is also provided in MovieLenseMeta and MovieLenseUser.
Usage
data(MovieLense)
Format
The format of MovieLense is an object of class "realRatingMatrix"
The format of MovieLenseMeta is a data.frame with movie title, year, IMDb URL and indicator
variables for 19 genres.
The format of MovieLenseUser is a data.frame with user age, sex, occupation and zip code.
Source
GroupLens Research, https://grouplens.org/datasets/movielens/
References
<NAME>., <NAME>., <NAME>., <NAME>.. An Algorithmic Framework for Performing
Collaborative Filtering. Proceedings of the 1999 Conference on Research and Development in
Information Retrieval. Aug. 1999.
Examples
data(MovieLense)
MovieLense
## look at the first few ratings of the first user
head(as(MovieLense[1,], "list")[[1]])
## visualize part of the matrix
image(MovieLense[1:100,1:100])
## number of ratings per user
hist(rowCounts(MovieLense))
## number of ratings per movie
hist(colCounts(MovieLense))
## mean rating (averaged over users)
mean(rowMeans(MovieLense))
## available movie meta information
head(MovieLenseMeta)
## available user meta information
head(MovieLenseUser)
MSWeb Anonymous web data from www.microsoft.com
Description
Vroots visited by users in a one week timeframe.
Usage
data(MSWeb)
Format
The format is: Formal class "binaryRatingMatrix".
Details
The data was created by sampling and processing the www.microsoft.com logs. The data records
the use of www.microsoft.com by 38000 anonymous, randomly-selected users. For each user, the
data lists all the areas of the web site (Vroots) that user visited in a one week timeframe in February
1998.
This dataset contains 32710 valid users and 285 Vroots.
Source
<NAME>., <NAME>. (2007). UCI Machine Learning Repository, Irvine, CA: University of
California, School of Information and Computer Science. https://archive.ics.uci.edu/
References
<NAME>, <NAME>., <NAME> (1998). Empirical Analysis of Predictive Algorithms for Collaborative
Filtering, Proceedings of the Fourteenth Conference on Uncertainty in Artificial Intelligence,
Madison, WI.
Examples
data(MSWeb)
MSWeb
nratings(MSWeb)
## look at first two users
as(MSWeb[1:2,], "list")
## items per user
hist(rowCounts(MSWeb), main="Distribution of Vroots visited per user")
normalize Normalize the ratings
Description
Provides the generic for normalize/denormalize and a method to normalize/denormalize the ratings
in a realRatingMatrix.
Usage
normalize(x, ...)
## S4 method for signature 'realRatingMatrix'
normalize(x, method="center", row=TRUE)
denormalize(x, ...)
## S4 method for signature 'realRatingMatrix'
denormalize(x, method=NULL, row=NULL,
factors=NULL)
Arguments
x a realRatingMatrix.
method normalization method. Currently "center" or "Z-score".
row logical; normalize rows (or the columns)?
factors a list with the factors to be used for denormalizing (elements are "mean" and
"sds"). Usually these are not specified and the values stored in x are used.
... further arguments (currently unused).
Details
Normalization tries to reduce the individual rating bias by row centering the data, i.e., by subtracting
from each available rating the mean of the ratings of that user (row). Z-score in addition divides by
the standard deviation of the row/column. Normalization can also be done on columns.
Denormalization reverses normalization. It uses the normalization information stored in x unless
the user specifies method, row and factors.
Value
A normalized realRatingMatrix.
Examples
## create a matrix with ratings
m <- matrix(sample(c(NA,0:5),50, replace=TRUE, prob=c(.5,rep(.5/6,6))),
nrow=5, ncol=10, dimnames = list(users=paste('u', 1:5, sep=''),
items=paste('i', 1:10, sep='')))
## do normalization
r <- as(m, "realRatingMatrix")
r_n1 <- normalize(r)
r_n2 <- normalize(r, method="Z-score")
r
r_n1
r_n2
## show normalized data
image(r, main="Raw Data")
image(r_n1, main="Centered")
image(r_n2, main="Z-Score Normalization")
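A short sketch showing that denormalize reverses the normalization using the factors stored in the object (continuing the example above):
## denormalizing recovers the raw ratings
r_dn <- denormalize(r_n2)
all.equal(as(r, "matrix"), as(r_dn, "matrix"))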
plot Plot Evaluation Results
Description
Creates precision-recall or ROC plots for recommender evaluation results.
Usage
## S4 method for signature 'evaluationResults'
plot(x, y,
avg = TRUE, add=FALSE, type= "b", annotate = FALSE, ...)
## S4 method for signature 'evaluationResultList'
plot(x, y,
xlim=NULL, ylim=NULL, col = NULL, pch = NULL, lty = 1,
avg = TRUE, type = "b", annotate= 0, legend="bottomright", ...)
Arguments
x the object to be plotted.
y a character string indicating the type of plot (e.g., "ROC" or "prec/rec").
avg plot average of runs?
add add to a plot?
type line type (see plot).
annotate annotate N (recommendation list size) to plot.
xlim,ylim plot limits (see plot).
col colors (see plot).
pch point symbol to use (see plot).
lty line type (see plot)
legend where to place legend (see legend).
... further arguments passed on to plot.
See Also
evaluationResults, evaluationResultList. See evaluate for examples.
predict Predict Recommendations
Description
Creates recommendations using a recommender model and data about new users.
Usage
## S4 method for signature 'Recommender'
predict(object, newdata, n = 10, data=NULL,
type="topNList", ...)
Arguments
object a recommender model (class "Recommender").
newdata data for active users (class "ratingMatrix") or the index of users in the training
data to create recommendations for. If an index is used then some recommender
algorithms need to be passed the training data as argument data. Some
algorithms may only support user indices.
n number of recommendations in the top-N list.
data training data needed by some recommender algorithms if newdata is a user index
and not user data.
type type of recommendation. The default type is "topNList" which creates a top-
N recommendation list with recommendations. Some recommenders can also
predict ratings with type "ratings" which returns only predicted ratings with
known ratings represented by NA, or type "ratingMatrix" which returns a completed
rating matrix (note that the predicted ratings may differ from the known
ratings).
... further arguments.
Value
Returns an object of class "topNList" or of other appropriate classes.
See Also
Recommender, ratingMatrix.
Examples
data("MovieLense")
MovieLense100 <- MovieLense[rowCounts(MovieLense) >100,]
train <- MovieLense100[1:50]
rec <- Recommender(train, method = "POPULAR")
rec
## create top-N recommendations for new users
pre <- predict(rec, MovieLense100[101:102], n = 10)
pre
as(pre, "list")
## predict ratings for new users
pre <- predict(rec, MovieLense100[101:102], type="ratings")
pre
as(pre, "matrix")[,1:10]
## create recommendations using user ids with ids 1..10 in the
## training data
pre <- predict(rec, 1:10 , data = train, n = 10)
pre
as(pre, "list")
ratingMatrix Class "ratingMatrix": Virtual Class for Rating Data
Description
Defines a common class for rating data.
Objects from the Class
A virtual Class: No objects may be created from it.
Methods
[ signature(x = "ratingMatrix", i = "ANY", j = "ANY", drop = "ANY"): subset the rating matrix
(drop is ignored).
coerce signature(from = "ratingMatrix", to = "list")
coerce signature(from = "ratingMatrix", to = "data.frame"): a data.frame with three columns.
Col 1 contains user ids, col 2 contains item ids and col 3 contains ratings.
colCounts signature(x = "ratingMatrix"): number of ratings per column.
rowCounts signature(x = "ratingMatrix"): number of ratings per row.
colMeans signature(x = "ratingMatrix"): column-wise rating means.
rowMeans signature(x = "ratingMatrix"): row-wise rating means.
dim signature(x = "ratingMatrix"): dimensions of the rating matrix.
dimnames<- signature(x = "ratingMatrix", value = "list"): replace dimnames.
dimnames signature(x = "ratingMatrix"): retrieve dimnames.
getNormalize signature(x = "ratingMatrix"): returns a list with normalization information
for the matrix (NULL if data is not normalized).
getRatings signature(x = "ratingMatrix"): returns all ratings in x as a numeric vector.
getRatingMatrix signature(x = "ratingMatrix"): returns the ratings as a sparse matrix. The
format is different for binary and real rating matrices.
hasRating signature(x = "ratingMatrix"): returns a sparse logical matrix with TRUE for user-
item combinations which have a rating.
image signature(x = "ratingMatrix"): plot the matrix.
nratings signature(x = "ratingMatrix"): number of ratings in the matrix.
sample signature(x = "ratingMatrix"): sample from users (rows).
show signature(object = "ratingMatrix")
See Also
See implementing classes realRatingMatrix and binaryRatingMatrix. See getList, getData.frame,
similarity and dissimilarity.
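No objects of this virtual class can be created directly, but the accessor methods listed above can be illustrated on any concrete rating matrix; a minimal sketch using the Jester5k data:
data(Jester5k)
dim(Jester5k)
nratings(Jester5k)
getNormalize(Jester5k) ## NULL since the data is not normalized
summary(getRatings(Jester5k))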
realRatingMatrix Class "realRatingMatrix": Real-valued Rating Matrix
Description
A matrix containing ratings (typically 1-5 stars, etc.).
Objects from the Class
Objects can be created by calls of the form new("realRatingMatrix", data = m), where m is
a sparse matrix of class dgCMatrix in package Matrix, or by coercion from a regular matrix, a
data.frame containing user/item/rating triplets as rows, or a sparse matrix in triplet form (dgTMatrix
in package Matrix).
Slots
data: Object of class "dgCMatrix", a sparse matrix defined in package Matrix. Note that this
matrix drops NAs instead of zeros. Operations on "dgCMatrix" can potentially delete zeros.
normalize: NULL or a list with normalization factors.
Extends
Class "ratingMatrix", directly.
Methods
coerce signature(from = "matrix", to = "realRatingMatrix"): Note that unknown ratings
have to be encoded in the matrix as NA and not as 0 (which would mean an actual rating
of 0).
coerce signature(from = "realRatingMatrix", to = "matrix")
coerce signature(from = "data.frame", to = "realRatingMatrix"): coercion from a data.frame
with three columns. Col 1 contains user ids, col 2 contains item ids and col 3 contains ratings.
coerce signature(from = "realRatingMatrix", to = "data.frame"): produces user/item/rating
triplets.
coerce signature(from = "realRatingMatrix", to = "dgTMatrix")
coerce signature(from = "dgTMatrix", to = "realRatingMatrix")
coerce signature(from = "realRatingMatrix", to = "dgCMatrix")
coerce signature(from = "dgCMatrix", to = "realRatingMatrix")
coerce signature(from = "realRatingMatrix", to = "ngCMatrix")
binarize signature(x = "realRatingMatrix"): create a "binaryRatingMatrix" by setting all
ratings greater than or equal to the argument minRating to 1 and all others to 0.
getTopNLists signature(x = "realRatingMatrix"): create top-N lists from the ratings in x.
Arguments are n (defaults to 10), randomize (default is NULL) and minRating (default is NA).
Items with a rating below minRating will not be part of the top-N list. randomize can be
used to get diversity in the predictions by randomly selecting items with a bias towards higher rated
items. The bias is introduced by choosing items with a probability proportional to the
rating, (r - min(r) + 1)^randomize. The larger the value, the more likely it is to get very highly
rated items; a negative value for randomize will select low-rated items.
removeKnownRatings signature(x = "realRatingMatrix"): removes all ratings in x for which
ratings are available in the realRatingMatrix (of same dimensions as x) passed as the argument
known.
rowSds signature(x = "realRatingMatrix"): calculate the standard deviation of ratings for
rows (users).
colSds signature(x = "realRatingMatrix"): calculate the standard deviation of ratings for columns
(items).
See Also
See ratingMatrix inherited methods,
binaryRatingMatrix, topNList, getList and getData.frame. Also see dgCMatrix, dgTMatrix
and ngCMatrix in Matrix.
Examples
## create a matrix with ratings
m <- matrix(sample(c(NA,0:5),100, replace=TRUE, prob=c(.7,rep(.3/6,6))),
nrow=10, ncol=10, dimnames = list(
user=paste('u', 1:10, sep=''),
item=paste('i', 1:10, sep='')
))
m
## coerce into a realRatingMatrix
r <- as(m, "realRatingMatrix")
r
## get some information
dimnames(r)
rowCounts(r) ## number of ratings per user
colCounts(r) ## number of ratings per item
colMeans(r) ## average item rating
nratings(r) ## total number of ratings
hasRating(r) ## user-item combinations with ratings
## histogram of ratings
hist(getRatings(r), breaks="FD")
## inspect a subset
image(r[1:5,1:5])
## coerce it back to see if it worked
as(r, "matrix")
## coerce to data.frame (user/item/rating triplets)
as(r, "data.frame")
## binarize into a binaryRatingMatrix with all ratings of 4 or higher as 1
b <- binarize(r, minRating=4)
b
as(b, "matrix")
Recommender Create a Recommender Model
Description
Learns a recommender model from given data.
Usage
Recommender(data, ...)
## S4 method for signature 'ratingMatrix'
Recommender(data, method, parameter=NULL)
Arguments
data training data.
method a character string defining the recommender method to use (see details).
parameter parameters for the recommender algorithm.
... further arguments.
Details
Recommender uses the registry mechanism from package registry to manage methods. This lets
the user easily specify and add new methods. The registry is called recommenderRegistry. See the
examples section.
Value
An object of class ’Recommender’.
See Also
Recommender, ratingMatrix, predict.
Examples
data("MSWeb")
MSWeb10 <- sample(MSWeb[rowCounts(MSWeb) >10,], 100)
rec <- Recommender(MSWeb10, method = "POPULAR")
rec
getModel(rec)
## save and read a recommender model
saveRDS(rec, file = "rec.rds")
rec2 <- readRDS("rec.rds")
rec2
unlink("rec.rds")
## look at registry and a few methods
recommenderRegistry$get_entry_names()
recommenderRegistry$get_entry("POPULAR", dataType = "binaryRatingMatrix")
recommenderRegistry$get_entry("SVD", dataType = "realRatingMatrix")
Recommender-class Class "Recommender": A Recommender Model
Description
Represents a recommender model learned for a given data set (a rating matrix).
Objects from the Class
Objects are created by the creator function Recommender(data, method, parameter = NULL)
Slots
method: Object of class "character"; used recommendation method.
dataType: Object of class "character"; concrete class of the input data.
ntrain: Object of class "integer"; size of training set.
model: Object of class "list"; the model.
predict: Object of class "function"; code to compute a recommendation using the model.
Methods
getModel signature(x = "Recommender"): retrieve the model.
predict signature(object = "Recommender"): create recommendations for new data (argument
newdata).
show signature(object = "Recommender")
See Also
See Recommender for the constructor function and a description of available methods.
sparseNAMatrix-class Sparse Matrix Representation With NAs Not Explicitly Stored
Description
Coerce from and to a sparse matrix representation where NAs are not explicitly stored.
Usage
dropNA(x)
dropNA2matrix(x)
dropNAis.na(x)
Arguments
x a matrix for dropNA(), or a sparse matrix with dropped NA values for dropNA2matrix()
or dropNAis.na().
Details
The representation is based on the sparse dgCMatrix in Matrix but instead of zeros, NAs are
dropped. This is achieved by the following:
• Zeros are represented with a very small value (.Machine$double.xmin) so they do not get
dropped in the sparse representation.
• NAs are converted to 0 before coercion to dgCMatrix so they are not explicitly stored.
Caution: Be careful when working with the sparse matrix and sparse matrix operations (multiplication,
addition, etc.) directly.
• Sparse matrix operations will see 0 where NAs should be.
• Actual zero ratings have a small, but non-zero value (.Machine$double.xmin).
• Sparse matrix operations that can result in a true 0 need to be followed by replacing the 0 with
.Machine$double.xmin or other operations (like subsetting) may drop the 0.
dropNAis.na() correctly finds NA values in a sparse matrix with dropped NA values, while is.na()
does not work.
dropNA2matrix() converts the sparse representation into a dense matrix. NAs represented by
dropped values are converted to true NAs. Zeros are recovered by using zapsmall() which replaces
small values by 0.
Value
Returns a dgCMatrix or a matrix, respectively.
See Also
dgCMatrix in Matrix.
Examples
m <- matrix(sample(c(NA,0:5),50, replace=TRUE, prob=c(.5,rep(.5/6,6))),
nrow=5, ncol=10, dimnames = list(users=paste('u', 1:5, sep=''),
items=paste('i', 1:10, sep='')))
m
## drop all NAs in the representation. Zeros are represented by very small values.
sparse <- dropNA(m)
sparse
## convert back to matrix
dropNA2matrix(sparse)
## Note: be careful with the sparse representation!
## Do not use is.na, but use
dropNAis.na(sparse)
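A brief sketch of the round trip discussed above: actual zeros survive because they are stored as a tiny non-zero value and recovered via zapsmall():
## converting back should recover the original matrix, including true zeros
m2 <- dropNA2matrix(dropNA(m))
all.equal(m, m2)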
topNList Class "topNList": Top-N List
Description
Recommendations as a Top-N list.
Objects from the Class
Objects can be created by predict with a recommender model and new data. Alternatively, objects
can be created from a realRatingMatrix using getTopNLists.
Slots
items: Object of class "list". Each element in the list represents a top-N recommendation (an
integer vector) with item IDs (column numbers in the rating matrix). The items are ordered in
each vector.
ratings: Object of class "list" or NULL. If available, a list of the same structure as items with the
ratings.
itemLabels: Object of class "character"
n: Object of class "integer" specifying the number of items in each recommendation. Note that
the actual number of recommended items can be less depending on the data and the
algorithm used.
Methods
coerce signature(from = "topNList", to = "list"): returns a list with the items (labels) in the
topNList.
coerce signature(from = "topNList", to = "realRatingMatrix"): creates a rating Matrix with
entries for the items in the topN list.
coerce signature(from = "topNList", to = "dgTMatrix")
coerce signature(from = "topNList", to = "dgCMatrix")
coerce signature(from = "topNList", to = "ngCMatrix")
coerce signature(from = "topNList", to = "matrix"): returns a dense matrix with the ratings
for the top-N items. All other items have a rating of NA.
c signature(x = "topNList"): combine several topN lists into a single list. The lists need to be
for the same data (i.e., items).
bestN signature(x = "topNList"): returns only the best n recommendations (second argument
is n which defaults to 10). The additional argument minRating can be used to remove all
entries with a rating below this value.
length signature(x = "topNList"): for how many users does this object contain a top-N list?
removeKnownItems signature(x = "topNList"): remove items from the top-N list which are
known (have a rating) for the user given as a ratingMatrix passed on as argument known.
colCounts signature(x = "topNList"): in how many top-N lists does each item occur?
rowCounts signature(x = "topNList"): number of recommendations per user.
show signature(object = "topNList")
See Also
evaluate, getList, realRatingMatrix.
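Since this class has no examples section, here is a minimal sketch of creating and trimming a top-N list (reusing the MovieLense-based recommender pattern shown under predict):
data("MovieLense")
MovieLense100 <- MovieLense[rowCounts(MovieLense) > 100,]
rec <- Recommender(MovieLense100[1:50], method = "POPULAR")
topN <- predict(rec, MovieLense100[101:102], n = 10)
## keep only the 3 best recommendations per user
as(bestN(topN, n = 3), "list")
## number of recommendations per user
rowCounts(topN)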
Harness
===
![Actions CI](https://github.com/NFIBrokerage/harness/workflows/Actions%20CI/badge.svg)
A command line tool for harnessing Elixir boilerplate.
See the [hex guides](https://hexdocs.pm/harness/welcome.html#content)
for detailed documentation.
Looking for an example package? [`harness_dotfiles`](https://github.com/NFIBrokerage/harness_dotfiles)
should serve as a minimal example to get you going.
Development
---
Interested in developing harness? Currently it's in a tight spot because it doesn't have any test cases. Your best bet for blessing harness is to build and install harness archives locally and use the local installation to harness packages like `harness_micro_controller`.
```
mix archive.uninstall harness --force
MIX_ENV=prod mix archive.build
mix archive.install harness-0.0.0.ez --force
```
Installation
---
Harness is installed as an archive:
```
mix archive.install hex harness --force
```
Harness depends on elixir 1.9+. If you use `asdf`:
```
asdf install erlang 22.3
asdf install elixir 1.10.4-otp-22
asdf global erlang 22.3
asdf global elixir 1.10.4-otp-22
mix archive.install hex harness --force
```
Harness
===
harness the boilerplate!
Summary
===
[Functions](#functions)
---
[ignore_patterns()](#ignore_patterns/0)
[version()](#version/0)
Functions
===
Harness.Manifest
===
A local project's harness configuration
Summary
===
[Functions](#functions)
---
[archive_path()](#archive_path/0)
gets the path of the archive itself
[load(path)](#load/1)
Loads the manifest onto the project stack and loads dependencies
[read(path)](#read/1)
reads the manifest file from the path
[verify(path)](#verify/1)
Verifies that the generated files in .harness match the configuration of the harness.exs
Functions
===
Harness.Pkg behaviour
===
A behaviour for defining harness package modules.
Harness packages should add a `pkg.exs` to their root directory which describes a single module which implements this behaviour.
Summary
===
[Types](#types)
---
[t()](#t:t/0)
A package module's struct
[Functions](#functions)
---
[otp_app(generator)](#otp_app/1)
[path(generator)](#path/1)
[Callbacks](#callbacks)
---
[cast(opts)](#c:cast/1)
A function to transform incoming opts (in keyword format) into a package's struct ([`t/0`](#t:t/0)).
[links(t)](#c:links/1)
A list of links to create from the .harness directory to project root.
Types
===
Functions
===
Callbacks
===
Harness.Renderer.Helpers
===
Helper functions for renders
These functions are accessible inside templates.
Summary
===
[Functions](#functions)
---
[inspect_or_interpolate(item)](#inspect_or_interpolate/1)
Either inspects an item or interpolates it.
[pascal(item)](#pascal/1)
Converts a string or atom into PascalCase
Functions
===
Harness.Tree
===
Renders things as a tree
See the original implementation in Mix
[here](https://github.com/elixir-lang/elixir/blob/v1.10/lib/mix/lib/mix/utils.ex).
The original implementation has an optimization for dependency trees which prevents showing the same dependency tree twice. That's great for printing small dependency trees, but for file trees, we want to see the entire tree every time, even if a file or directory name is present many times.
The changes to the original implementation are shown as comments below:
Summary
===
[Functions](#functions)
---
[print_tree(nodes, callback, opts \\ [])](#print_tree/3)
Prints the given tree according to the callback.
The callback will be invoked for each node and it must return a `{printed, children}` tuple.
Functions
===
mix harness
===
Renders harness packages into the current directory
Which files are generated and linked is configured by the current directory's harness manifest (`harness.exs`).
Command line options
---
* `--no-compile` - skips the compilation of harness packages
* `--no-deps-check` - skips a check for out of date dependencies
mix harness.check
===
Checks that the `.harness` directory is up-to-date with the manifest
mix harness.clean
===
Removes links, files, and directories generated by harness
mix harness.compile
===
Compiles harness package dependencies
Loads the current harness manifest, checks to ensure dependencies have been fetched, and then compiles harness package files.
mix harness.get
===
Fetches harness dependencies according to a harness.exs
Harness dependencies follow the same format and rules as mix dependencies:
you may use (public/private) hex, git, or local paths, and dependencies may be semantically versioned when fetched via hex.
mix harness.loadpaths
===
Checks, compiles, and loads all harness packages
Command line options
---
* `--no-compile` - skips the compilation of harness packages
* `--no-deps-check` - skips a check for out of date dependencies
mix harness.update
===
Updates harness dependencies according to a harness.exs
This task mimics [`mix deps.update`](https://hexdocs.pm/mix/Mix.Tasks.Deps.Update.html) (and uses it for the implementation).
Any options are passed directly to the invocation of [`mix deps.update`](https://hexdocs.pm/mix/Mix.Tasks.Deps.Update.html)
API Reference
===
Modules
---
[Harness](Harness.html)
harness the boilerplate!
[Harness.Manifest](Harness.Manifest.html)
A local project's harness configuration
[Harness.Pkg](Harness.Pkg.html)
A behaviour for defining harness package modules.
[Harness.Renderer.Helpers](Harness.Renderer.Helpers.html)
Helper functions for renders
[Harness.Tree](Harness.Tree.html)
Renders things as a tree
Mix Tasks
---
[mix harness](Mix.Tasks.Harness.html)
Renders harness packages into the current directory
[mix harness.check](Mix.Tasks.Harness.Check.html)
Checks that the `.harness` directory is up-to-date with the manifest
[mix harness.clean](Mix.Tasks.Harness.Clean.html)
Removes links, files, and directories generated by harness
[mix harness.compile](Mix.Tasks.Harness.Compile.html)
Compiles harness package dependencies
[mix harness.get](Mix.Tasks.Harness.Get.html)
Fetches harness dependencies according to a harness.exs
[mix harness.loadpaths](Mix.Tasks.Harness.Loadpaths.html)
Checks, compiles, and loads all harness packages
[mix harness.update](Mix.Tasks.Harness.Update.html)
Updates harness dependencies according to a harness.exs |