Commit 0066dcad authored by Ansel Neunzert's avatar Ansel Neunzert

Major docs update

parent 8f1826d7
========
Overview
========
Goal
----
* Read in data for many channels and sources
* Plot data for each channel in an interactive format
* Easily see and keep track of combs
* Organize plots by channel and time/date
Status
------
Testing; likely to be buggy and constantly updated.
=======
Contact
=======
neunzert (at) umich (dot) edu
============
Dependencies
============
These scripts depend on a number of non-standard python libraries, namely:
* gwpy (for loading data from detector channels)
* bokeh (for interactive plotting)
* pandas (for handling tables of data)
* jinja2 (for creating web pages from a template)
To avoid installing all of these things independently, work on the LHO cluster and source the following virtualenv::
$ source /home/aneunzert/gwypybokeh/bin/activate
If you would like to work on a different cluster, email me and I will happily clone the virtualenv where necessary.
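Before running anything, it may help to confirm that the active environment actually provides these libraries. The following sketch (``missing_deps`` is a hypothetical helper, not part of FineTooth) reports which of them cannot be imported:

```python
import importlib.util

def missing_deps(names):
    """Return the module names that cannot be imported in this environment."""
    return [n for n in names if importlib.util.find_spec(n) is None]

# The four non-standard libraries these scripts rely on:
print(missing_deps(["gwpy", "bokeh", "pandas", "jinja2"]))
```

If this prints an empty list, the virtualenv is set up correctly.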
======
Basics
======
.. image:: finetooth_workflow.png
* Modular workflow
* All options are documented in their respective scripts; run ``python scriptname.py --help`` (where ``scriptname`` is the script you want to learn about) to see what is available.
* See `the tutorial <tutorial.rst>`_ (in progress) to get started.
* Comb finding is documented in `a separate file <combfinding.rst>`_ as it is currently a standalone task. This will also be addressed in the expanded version of the tutorial.
Documentation for this project has recently been overhauled. Rendered docs can be found on my public_html page `here <https://ldas-jobs.ligo-wa.caltech.edu/~aneunzert/FineToothDocs/build/html/>`_. All source files are also available in the repository under docs/source.
from __future__ import division
import argparse
def makeparser():
parser = argparse.ArgumentParser(description="Calculate and save spectra using GWpy.")
parser.register('type', 'bool', (lambda x: x.lower() in ("yes", "true", "t", "1")))
parser.add_argument("--chlist",help="Channel list source (path to a newline-delimited text file)")
......@@ -13,8 +15,11 @@ parser.add_argument("--ffttime",help="FFT time (s) (default 20)",default=20,type
parser.add_argument("--fftoverlap",help="FFT overlap (default 0.5)",default=0.5,type=float)
parser.add_argument("--verbose",help="Verbose output (default false)",type='bool',default=False)
parser.add_argument("--overwrite",help="Overwrite existing files (default false)",type='bool',default=False)
return parser
def main():
''' '''
import os
import sys
......@@ -25,6 +30,8 @@ from gwpy import time
import numpy as np
import hdf5io as h5
args=makeparser().parse_args()
# Get list of channels from the supplied txt file
f=open(args.chlist,'r')
chlist = [line.strip() for line in f]
......@@ -70,3 +77,6 @@ for ch in chlist:
continue
print("Program complete.")
if __name__ == '__main__':
main()
......@@ -2,6 +2,7 @@ from __future__ import division
import argparse
import datetime
def makeparser():
parser=argparse.ArgumentParser()
parser.register('type', 'bool', (lambda x: x.lower() in ("yes", "true", "t", "1")))
parser.add_argument("-d", "--date",help="Date in format \"YYYY-MM-DD\" (Default is today).",default=datetime.date.today().strftime('%Y-%m-%d'))
......@@ -15,7 +16,12 @@ parser.add_argument("-s","--start",help="Start date for plots",default="")
parser.add_argument("-c","--cumulative",help="Compute cumulative plots (default=False)",type='bool',default=False)
parser.add_argument("-w","--overwrite",help="Overwrite existing files.",type='bool',default=False)
parser.add_argument("-x","--checkOnly",help="Don't compute anything, just print which days have data.",type='bool',default=False)
return parser
def main():
''' '''
args=makeparser().parse_args()
import sys
import os
......@@ -211,3 +217,6 @@ for ch in chlist:
else:
print("Could not find time-averaged spectrum files either.")
continue
if __name__ == '__main__':
main()
......@@ -2,6 +2,7 @@ from __future__ import division
import argparse
import glob
def makeparser():
parser = argparse.ArgumentParser(description="Plot comb spectra using Bokeh. To run this script, first: $source /home/aneunzert/gwpybokeh/bin/activate")
parser.register('type', 'bool', (lambda x: x.lower() in ("yes", "true", "t", "1")))
parser.add_argument("--foldername",nargs="+",type=str,help="Name of folder to process (should point to a folder containing gz files generated by chdata.py)")
......@@ -16,7 +17,13 @@ parser.add_argument("--verbose",help="Verbose output for errors (default=False)"
parser.add_argument("--truefcutoff",help="Truncate the frequency range (not just limiting the x-axis of initial plotting). Good for high resolution. (default=False)",type='bool',default=False)
parser.add_argument("--knownLinesFile",help="A file containing known lines and their descriptions, which should be added to the plot",default=None)
parser.add_argument("--zeroKnownLines",help="Zero out the known lines (requires knownLinesFile)",type='bool',default=False)
return parser
def main():
''' '''
args=makeparser().parse_args()
import os
import sys
......@@ -108,3 +115,7 @@ for folderpattern in args.foldername:
if args.verbose==True:
print(traceback.format_exc())
continue
if __name__ == '__main__':
main()
.literal {
font-weight: bold!important;
}
p {
padding-bottom:10px!important;
}
File moved
+++++++++++++++++++++++++
Comb finding introduction
+++++++++++++++++++++++++
Notes
-----
This document is an overview of the function *auto_find_comb()* in ``combfinder.py``.
NOTICE: this is in an early stage of development, is likely to be buggy, and will be constantly updated. Trust nothing!
Quick start
-----------
......@@ -18,30 +22,27 @@ This will return a structured array containing frequency spacings ('sp'), offset
comb[2]['str']
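To illustrate the shape of the return value, here is a minimal stand-in built with NumPy. The field names ``sp`` and ``str`` appear in the docs above; ``off`` is an assumed name for the offset field, and the values are made up:

```python
import numpy as np

# Hypothetical stand-in for the output of auto_find_comb(): a structured
# array with one row per candidate comb.
combs = np.array(
    [(2.0, 0.5, 11.3), (1.0, 0.0, 9.8), (0.5, 0.25, 7.1)],
    dtype=[("sp", "f8"), ("off", "f8"), ("str", "f8")],
)

print(combs[2]["str"])  # strength statistic of the third comb
print(combs["sp"])      # all spacings at once
```

Indexing by row number gives one comb; indexing by field name gives that quantity for every comb at once.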
Detailed explanation of optional parameters
-------------------------------------------
Complete syntax::
auto_find_comb(vFreq, vData, scaleRange=5, nCheckAdj=2, nConsecReq=5, nTopPeaks=600, nNearestAmpNeighbors=30, nMostCommonCombs=300, spacingmin=None, spacingmax=None, verbose=False, showBeforeZeroing=False)
Algorithm and parameter details
-------------------------------
This requires a fairly complete explanation of the code. Optional parameter names are highlighted so that it's clear where they appear.
.. autofunction:: combfinder.auto_find_comb
:noindex:
This algorithm proceeds as follows:
* Flatten the data, to avoid getting thrown off by changes in the overall noise level across the spectrum. For each bin, the sum of the nearest ``scaleRange`` bins is calculated, and the bin value is divided by that total.
* Determine which points on the spectrum are peaks. Peaks are defined as bins with values higher than the ``nCheckAdj`` adjacent bins.
* Get a sorted list of the ``nTopPeaks`` highest-amplitude peaks.
* For each peak in the list, consider its ``nNearestAmpNeighbors`` nearest neighbors in amplitude. For each (peak, neighbor) pair, calculate the spacing and offset of the simplest comb on which both members of the pair would fall. (Spacing = frequency difference, offset = frequency of either, modulo spacing.) Append this spacing and offset to a list of comb candidates.
* If the user has supplied ``spacingmin`` or ``spacingmax``, reject any combs with spacings that fall outside this range. Otherwise, skip this step.
* Count how many times each comb candidate appears in the list. Retain the ``nMostCommonCombs`` most common candidates.
* Calculate an improved strength statistic for each comb candidate retained. This uses ``nCheckAdj`` (again) and ``nConsecReq``. More detail is given in the following section.
* Organize the combs by strength statistic.
......@@ -51,10 +52,10 @@ This algorithm proceeds as follows:
* Return all the combs found.
* If ``verbose`` is specified, print detailed information. If ``showBeforeZeroing`` is specified, show all the combs prior to the iterative zeroing process.
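The first few steps above can be sketched in plain NumPy. This is an illustrative reimplementation under stated assumptions, not the code in ``combfinder.py``; in particular the flattening window and the peak test are simplified:

```python
import numpy as np

def flatten(data, scale_range):
    """Divide each bin by the sum over its local neighborhood, so slow
    changes in the overall noise level are normalized away."""
    kernel = np.ones(2 * scale_range + 1)
    local_sum = np.convolve(data, kernel, mode="same")
    return data / local_sum  # assumes no all-zero neighborhoods

def peak_indices(data, n_check_adj):
    """Indices of bins strictly higher than their n_check_adj neighbors
    on each side (plateaus are rejected)."""
    peaks = []
    for i in range(n_check_adj, len(data) - n_check_adj):
        window = data[i - n_check_adj : i + n_check_adj + 1]
        if data[i] == window.max() and np.sum(window == data[i]) == 1:
            peaks.append(i)
    return peaks

def pair_to_comb(f1, f2):
    """Spacing and offset of the simplest comb containing both frequencies:
    spacing = frequency difference, offset = either frequency mod spacing."""
    sp = abs(f2 - f1)
    off = f1 % sp
    return sp, off

# Example: two teeth of a 2 Hz comb offset by 0.5 Hz
print(pair_to_comb(4.5, 6.5))  # (2.0, 0.5)
```

Every (peak, neighbor) pair voted on in the real algorithm reduces to one ``pair_to_comb`` call; the candidate list is then the multiset of these (spacing, offset) pairs.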
Comb "likelihood" statistic
---------------------------
At present, the strength of a comb is given by the following formula, found through a mix of heuristics and trial and error, and described here in pseudocode.
......@@ -62,9 +63,9 @@ Note that this is a very ad-hoc method and I'm working on improving it.
* Predict bins where the comb should lie.
* Determine if each predicted bin contains a peak, using the requirement that it be higher than ``nCheckAdj`` of its neighbors.
* If the bin contains a peak, and is also part of a row of more than ``nConsecReq`` consecutive peaks, consider it to be a 'real' part of the comb.
* Multiply the following quantities together:
......
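The final multiplication step is elided in the hunk above, but the consecutive-peak requirement can be sketched on its own. ``consecutive_run_mask`` is a hypothetical helper, not FineTooth code: it takes one boolean per predicted comb bin (peak or not) and keeps only the bins that sit inside a run of more than ``nConsecReq`` consecutive peaks:

```python
def consecutive_run_mask(is_peak, n_consec_req):
    """Mark predicted comb bins as 'real' members: a bin counts only if it
    is a peak AND part of a run of more than n_consec_req consecutive peaks."""
    mask = [False] * len(is_peak)
    i = 0
    while i < len(is_peak):
        if is_peak[i]:
            j = i
            while j < len(is_peak) and is_peak[j]:
                j += 1  # extend the run of consecutive peaks
            if j - i > n_consec_req:
                for k in range(i, j):
                    mask[k] = True  # whole run qualifies
            i = j
        else:
            i += 1
    return mask
```

Isolated peaks and short runs are discarded, which is what makes the statistic robust against random coincidences between unrelated lines.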
# -*- coding: utf-8 -*-
#
# FineTooth documentation build configuration file, created by
# sphinx-quickstart on Thu Sep 8 13:52:19 2016.
#
# This file is execfile()d with the current directory set to its
# containing dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#
import os
import sys
sys.path.insert(0, os.path.abspath('../..'))
def setup(app):
app.add_stylesheet('custom.css')
# -- General configuration ------------------------------------------------
# If your documentation needs a minimal Sphinx version, state it here.
#
# needs_sphinx = '1.0'
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = [
'sphinx.ext.autodoc',
'sphinx.ext.mathjax',
'sphinx.ext.viewcode',
'sphinx.ext.napoleon',
'sphinxarg.ext'
]
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# The suffix(es) of source filenames.
# You can specify multiple suffix as a list of string:
#
# source_suffix = ['.rst', '.md']
source_suffix = '.rst'
# The encoding of source files.
#
# source_encoding = 'utf-8-sig'
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = u'FineTooth'
copyright = u'2016, Ansel Neunzert'
author = u'Ansel Neunzert'
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version.
version = u'0.1'
# The full version, including alpha/beta/rc tags.
release = u'0.1'
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#
# This is also used if you do content translation via gettext catalogs.
# Usually you set "language" from the command line for these cases.
language = None
# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
#
# today = ''
#
# Else, today_fmt is used as the format for a strftime call.
#
# today_fmt = '%B %d, %Y'
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
# This patterns also effect to html_static_path and html_extra_path
exclude_patterns = []
# The reST default role (used for this markup: `text`) to use for all
# documents.
#
# default_role = None
# If true, '()' will be appended to :func: etc. cross-reference text.
#
# add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
#
# add_module_names = True
# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
#
# show_authors = False
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# A list of ignored prefixes for module index sorting.
# modindex_common_prefix = []
# If true, keep warnings as "system message" paragraphs in the built documents.
# keep_warnings = False
# If true, `todo` and `todoList` produce output, else they produce nothing.
todo_include_todos = False
# -- Options for HTML output ----------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
#
html_theme = 'nature'
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
#
# html_theme_options = {}
# Add any paths that contain custom themes here, relative to this directory.
# html_theme_path = []
# The name for this set of Sphinx documents.
# "<project> v<release> documentation" by default.
#
html_title = u'FineTooth v0.1'
# A shorter title for the navigation bar. Default is the same as html_title.
#
# html_short_title = None
# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
#
# html_logo = None
# The name of an image file (relative to this directory) to use as a favicon of
# the docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
#
# html_favicon = None
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']
# Add any extra paths that contain custom files (such as robots.txt or
# .htaccess) here, relative to this directory. These files are copied
# directly to the root of the documentation.
#
# html_extra_path = []
# If not None, a 'Last updated on:' timestamp is inserted at every page
# bottom, using the given strftime format.
# The empty string is equivalent to '%b %d, %Y'.
#
# html_last_updated_fmt = None
# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
#
# html_use_smartypants = True
# Custom sidebar templates, maps document names to template names.
#
# html_sidebars = {}
# Additional templates that should be rendered to pages, maps page names to
# template names.
#
# html_additional_pages = {}
# If false, no module index is generated.
#
# html_domain_indices = True
# If false, no index is generated.
#
# html_use_index = True
# If true, the index is split into individual pages for each letter.
#
# html_split_index = False
# If true, links to the reST sources are added to the pages.
#
# html_show_sourcelink = True
# If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
#
# html_show_sphinx = True
# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
#
# html_show_copyright = True
# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
#
# html_use_opensearch = ''
# This is the file name suffix for HTML files (e.g. ".xhtml").
# html_file_suffix = None
# Language to be used for generating the HTML full-text search index.
# Sphinx supports the following languages:
# 'da', 'de', 'en', 'es', 'fi', 'fr', 'hu', 'it', 'ja'
# 'nl', 'no', 'pt', 'ro', 'ru', 'sv', 'tr', 'zh'
#
# html_search_language = 'en'
# A dictionary with options for the search language support, empty by default.
# 'ja' uses this config value.
# 'zh' user can custom change `jieba` dictionary path.
#
# html_search_options = {'type': 'default'}
# The name of a javascript file (relative to the configuration directory) that
# implements a search results scorer. If empty, the default will be used.
#
# html_search_scorer = 'scorer.js'
# Output file base name for HTML help builder.
htmlhelp_basename = 'FineTooth'
# -- Options for LaTeX output ---------------------------------------------
latex_elements = {
# The paper size ('letterpaper' or 'a4paper').
#
# 'papersize': 'letterpaper',
# The font size ('10pt', '11pt' or '12pt').
#
# 'pointsize': '10pt',
# Additional stuff for the LaTeX preamble.
#
# 'preamble': '',
# Latex figure (float) alignment
#
# 'figure_align': 'htbp',
}
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title,
# author, documentclass [howto, manual, or own class]).
latex_documents = [
(master_doc, 'FineTooth.tex', u'FineTooth Documentation',
u'Ansel Neunzert', 'manual'),
]
# The name of an image file (relative to this directory) to place at the top of
# the title page.
#
# latex_logo = None
# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
#
# latex_use_parts = False
# If true, show page references after internal links.
#
# latex_show_pagerefs = False
# If true, show URL addresses after external links.
#
# latex_show_urls = False
# Documents to append as an appendix to all manuals.
#
# latex_appendices = []
# If false, no module index is generated.
#
# latex_domain_indices = True
# -- Options for manual page output ---------------------------------------
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
(master_doc, 'finetoothdocs', u'FineTooth Documentation',
[author], 1)
]
# If true, show URL addresses after external links.
#
# man_show_urls = False
# -- Options for Texinfo output -------------------------------------------
# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
# dir menu entry, description, category)
texinfo_documents = [
(master_doc, 'FineTooth', u'FineTooth Documentation',
author, 'FineTooth', 'One line description of project.',
'Miscellaneous'),
]
# Documents to append as an appendix to all manuals.
#
# texinfo_appendices = []
# If false, no module index is generated.
#
# texinfo_domain_indices = True
# How to display URL addresses: 'footnote', 'no', or 'inline'.
#
# texinfo_show_urls = 'footnote'
# If true, do not generate a @detailmenu in the "Top" node's menu.
#
# texinfo_no_detailmenu = False
========================
Module and function docs
========================
chdata.py (script)
------------------
Source
~~~~~~
.. automodule:: chdata
:members:
Options
~~~~~~~
.. argparse::
:ref: chdata.makeparser
:prog: chdata
chdataFscan.py (script)
-----------------------
Source
~~~~~~
.. automodule:: chdataFscan
:members:
Options
~~~~~~~
.. argparse::
:ref: chdataFscan.makeparser
:prog: chdataFscan
chplot.py (script)
------------------
Source
~~~~~~
.. automodule:: chplot
:members:
Options
~~~~~~~
.. argparse::
:ref: chplot.makeparser
:prog: chplot
combfinder.py (module only)
---------------------------
.. automodule:: combfinder
:members:
hdf5io.py (module only)
-----------------------
(This module has yet to be documented/expanded)
.. automodule:: hdf5io
:members:
:undoc-members:
.. FineTooth documentation master file, created by
sphinx-quickstart on Thu Sep 8 13:52:19 2016.
FineTooth documentation
=======================
[`Gitlab repository <https://gitlab.aei.uni-hannover.de/aneunzert/FineTooth/>`_]
.. toctree::
:maxdepth: 2
overview
tutorial
funcdocs
combfinding
* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`
++++++++
Overview
++++++++
Goals
=====
* Read in data for many channels and sources
* Plot data for each channel in an interactive format
* Easily see and keep track of combs
* Organize plots by channel and time/date
Status
======
Testing; likely to be buggy and constantly updated.
Contact
=======
neunzert (at) umich (dot) edu
Dependencies
============
These scripts depend on a number of non-standard python libraries, namely:
* gwpy (for loading data from detector channels)
* bokeh (for interactive plotting)
* pandas (for handling tables of data)
* jinja2 (for creating web pages from a template)
To avoid installing all of these things independently, work on the LHO cluster and source the following virtualenv::
$ source /home/aneunzert/gwypybokeh/bin/activate
If you would like to work on a different cluster, email me and I will happily clone the virtualenv where necessary.
Structure
=========
.. image:: _static/finetooth_workflow.png
[WORK IN PROGRESS]
++++++++
Tutorial
++++++++
This is a tutorial for getting started with FineTooth. If you can run through this tutorial, everything should be in working order.
Loading and plotting data
-------------------------
......