{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "

How to extract a light curve and spectrum for an EPIC point-like source

\n", "
\n", "
\n", "\n", "\t\n", "\t\t\n", "\t\t\t\n", "\t\t\n", "\t\n", "
\n", "\t\t\t

Introduction

\n", "\t\t\t

This thread contains a step-by-step recipe, valid for all the EPIC X-ray cameras, to extract a light curve and a spectrum of a point-like source. For the light curve, this includes subtracting the background and correcting for exposure losses.

\n", "

The purpose of this Jupyter notebook thread is to illustrate how to use SAS within Python notebooks. For an in-depth explanation of how to extract light curves and spectra for an EPIC point-like source using SAS, please refer to the standard SAS threads.

\n", " \n", "\t\t\t

Expected Outcome

\n", "\t\t\t

Spectrum and corrected light curve of a point-like source extracted from the XMM-Newton EPIC instruments.

\n", "\t\t\t

SAS Tasks to be Used

\n", "\t\t\t\n", "\t\t\t

Prerequisites

\n", "\t\t\t

It is assumed that the processed event lists (like the ones produced by the SAS tasks epproc and emproc), the corresponding attitude and summary files, and the ccf.cif file are available. Calibrated event lists may also be obtained from the archive as a starting point for this thread.

\n", "\t\t\t\n", "\t\t\t

Useful Links

\n", "\t\t\t\n", "\t\t\t

Caveats

\n", "\t\t\t

Last Reviewed: 25 May 2023, for SAS v21.0

\n", "\t\t\t

Last Updated: 25 May 2023

\n", "\t\t\t
\n", "
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "
\n", "

Procedure

\n", "

As an example case, we will consider the extraction of a spectrum and a light curve from an EPIC-pn event list (0466_0104860501_EPN_S003_ImagingEvts.ds in the cells below). The same recipe applies for MOS. The general steps included in the notebook cells below are: create a sky image of the field and select the source and background regions with ds9; extract and inspect the source and background light curves; produce a background-subtracted, exposure-corrected light curve with epiclccorr; extract the source and background spectra and compute their areas with backscale; generate the response and ancillary files with rmfgen and arfgen; group the spectrum with specgroup; and fit it with XSPEC.
\n", " 

\n", "\n", "
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "

In order to run the following blocks, SAS needs to be initialized in the terminal from which the Jupyter notebook has been opened. This has to be done prior to launching the notebook. Follow the steps provided in the SAS Startup Thread in Python.
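
As an optional check (this block is an addition to the thread), you can verify from within the notebook that the SAS environment set up in the terminal is visible to the Python kernel; a minimal sketch using os.environ,
" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Optional sanity check (addition): confirm that the SAS environment initialized\n", "# in the launching terminal is visible to this Python kernel.\n", "import os\n", "for var in ['SAS_DIR', 'SAS_CCFPATH']:\n", "    print(var, '=', os.environ.get(var, 'NOT DEFINED'))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "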

" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "

Import the following Python libraries,

" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from pysas.wrapper import Wrapper as w\n", "import os\n", "import os.path\n", "from os import path\n", "import matplotlib.pyplot as plt\n", "import numpy as np\n", "from astropy.io import fits\n", "import pyds9\n", "from xspec import *\n", "import re" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "

Before we start, let's see what we have already defined in terms of SAS variables,

" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "inargs = []\n", "t = w('sasver', inargs)\n", "t.run()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "

We still need to define the SAS_ODF and SAS_CCF variables (make sure SAS_CCFPATH is also defined). Use the SAS task startsas as described in the SAS Startup Thread in Python to download an ODF. For this thread, we will assume that an ODF has already been downloaded. We will use the ODF with id 0104860501 as an example and assume that there is already a directory (sas_file in the block below) with an existing CIF and SAS Summary File. In the block below, introduce the absolute path to the working directory (ending with '/') and the location of the CIF and SAS Summary File,
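
If the ODF has not been downloaded yet, startsas can also retrieve and process it by its observation identifier. The block below is a sketch of that alternative (an addition to the thread, commented out; it assumes network access, that SAS_CCFPATH is defined, and that work_dir has been set as in the cells that follow),
" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Alternative sketch (addition, not part of the original thread): let startsas\n", "# download and process the ODF by its observation id instead of pointing to an\n", "# existing CIF and SAS Summary File. Uncomment once work_dir is defined below.\n", "# inargs = ['odfid=0104860501', f'workdir={work_dir}']\n", "# w('startsas', inargs).run()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "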

\n", "\n", "Note: the path to the CIF and SAS Summary File must be an absolute path begining with '/'." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "work_dir = 'absolute_path_to_wrk_directory'\n", "sas_file = 'absolute_path_to_cifSUMSAS_directory'" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "inargs = [f'sas_ccf={sas_file}ccf.cif', f'sas_odf={sas_file}0466_0104860501_SCX00000SUM.SAS', f'workdir={work_dir}']" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "w('startsas', inargs).run()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "

This process should have defined the SAS_CCF and SAS_ODF variables.
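
A quick way to confirm this (an optional addition to the thread) is to inspect the environment variables and check that the referenced ccf.cif file exists on disk,
" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Optional check (addition): confirm that startsas exported SAS_CCF and SAS_ODF\n", "# and that the calibration index file they point to is present on disk.\n", "print('SAS_CCF =', os.environ.get('SAS_CCF'))\n", "print('SAS_ODF =', os.environ.get('SAS_ODF'))\n", "print('ccf.cif found:', os.path.isfile(os.environ.get('SAS_CCF', '')))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "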

" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "

In the following blocks we will extract an image, a light curve and a spectrum from an EPIC-pn event file as an example. The same procedure is also valid for an EPIC-MOS event file.
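
For reference (an addition to the thread), the MOS case differs mainly in a few selection parameters; a minimal sketch of the values one would typically swap in, following the standard SAS threads,
" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# MOS-specific selection parameters (sketch, addition to the thread); these mirror\n", "# the values used in the standard SAS threads and would replace the pn ones below.\n", "mos_q_flag = '#XMMEA_EM'       # quality flag for EPIC-MOS (pn uses #XMMEA_EP)\n", "mos_n_pattern = 12             # MOS accepts PATTERN<=12 (pn uses PATTERN<=4)\n", "mos_specchannelmax = 11999     # upper spectral channel for MOS (pn uses 20479)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "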

" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "eventfile = work_dir+'0466_0104860501_EPN_S003_ImagingEvts.ds'" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print(\"Checking for EPIC-pn Event Files ..... \\n\")\n", "# Check if epproc has already run.\n", "if os.path.isfile(eventfile):\n", " print (\"File \"+eventfile+\" exists. \\n\")\n", "else:\n", " print (\"File \"+eventfile+\" does not exist, please check. \\n\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Define some parameters to produce the image and the name of the output file\n", "\n", "xbin=80 # xbin size\n", "ybin=80 # ybin size\n", "xcoord='X' # coordinate system\n", "ycoord='Y' # coordinate system\n", "\n", "out_IMFile = work_dir+'EPIC_PN_Image.fit' # Name of the output Image file " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# SAS Command\n", "cmd = \"evselect\" # SAS task to be executed \n", "\n", "# Arguments of SAS Command\n", "inargs = [f'table={eventfile}','imagebinning=binSize',f'imageset={out_IMFile}','withimageset=yes',f'xcolumn={xcoord}',f'ycolumn={ycoord}',f'ximagebinsize={xbin}',f'yimagebinsize={ybin}']\n", "\n", "print(\" SAS command to be executed: \"+cmd+\", with arguments; \\n\")\n", "inargs" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "# Execute the SAS task with the parameters to produce an image\n", "\n", "w(cmd, inargs).run()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Visualize the image with ds9\n", "\n", "d = pyds9.DS9()\n", "d.set(\"file \"+out_IMFile)\n", "d.set('cmap bb')\n", "d.set('scale log')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Check the information that ds9 has about the regions defined in the ds9 window.\n", "\n", "print(d.get(\"regions\"))" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Extract the relevant information from the ds9 regions.\n", "\n", "region1=(re.split(\"circle|annulus\",d.get(\"regions\").partition(\"physical\")[2]))[1].replace('(','').replace(')','').replace('#',',')\n", "region2=(re.split(\"circle|annulus\",d.get(\"regions\").partition(\"physical\")[2]))[2].replace('(','').replace(')','').replace('#',',')\n", "\n", "print(\"Identified source region: \", region1)\n", "print(\"Identified source region: \", region2)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "# Identify source and background regions using the 'white' color.\n", "\n", "c1=region1.partition(\"color\")[2].replace('=','').replace('\\n','')\n", "if(c1=='white'):\n", " regionsrc = region1\n", " regionbkg = region2\n", "else:\n", " regionsrc = region2\n", " regionbkg = region1\n", "\n", "# Save and print selected region coordinates.\n", "\n", "x_source = regionsrc.split(\",\")[0].replace('\\n','')\n", "y_source = regionsrc.split(\",\")[1].replace('\\n','')\n", "r_source = regionsrc.split(\",\")[2].replace('\\n','')\n", "print(\"The coordinates of the 
selected source region are: \\n\")\n", "print(\" x_source = \", x_source, \"(physical)\")\n", "print(\" y_source = \", y_source, \"(physical)\")\n", "print(\" r_source = \", r_source, \"(physical) \\n\") \n", " \n", "x_bkg = regionbkg.split(\",\")[0].replace('\\n','')\n", "y_bkg = regionbkg.split(\",\")[1].replace('\\n','')\n", "r_bkg = regionbkg.split(\",\")[2].replace('\\n','')\n", "print(\"The coordinates of the selected background region are: \\n\")\n", "print(\" x_bkg = \", x_bkg, \"(physical)\")\n", "print(\" y_bkg = \", y_bkg, \"(physical)\")\n", "print(\" r_bkg = \", r_bkg, \"(physical) \\n\") \n", "\n", "# If the background is an annulus, save and print R2.\n", "\n", "if \"annulus\" in str(d.get(\"regions\")):\n", " r2_bkg = regionbkg.split(\",\")[3].replace('\\n','')\n", " print(\" r2_bkg = \", r2_bkg, \"(physical)\") " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "

Lightcurve Extraction

" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Function to plot Lightcurve\n", "def plotLC(plt,threshold,fileName): \n", " if fileName != \"NOT FOUND\":\n", " fitsFile = fits.open(fileName) \n", " prihdu = fitsFile[1].header\n", " if ('CUTVAL' in prihdu):\n", " threshold = prihdu['CUTVAL']\n", "\n", " cols = fitsFile[1].columns\n", " colName = None\n", " for i,x in enumerate(cols.names):\n", " if \"RATE\" in x:\n", " colName = cols.names[i]\n", " if \"COUNTS\" in x:\n", " colName = cols.names[i] \n", "\n", " data = fitsFile[1].data \n", " \n", " xdata = data.field('TIME') - min(data.field('TIME')) # extract the x data column\n", " ydata = data.field(colName)\n", "\n", " xmax=np.amax(xdata) \n", " xmin=np.amin(xdata) \n", "\n", " plt.plot(xdata,ydata,label=fileName) # plot the data\n", "\n", " if colName == 'RATE':\n", " plt.title(\"Lightcurve\")\n", " plt.xlabel(\"Time (s)\")\n", " plt.ylabel(\"Cts/s\")\n", " else:\n", " plt.title(\"Lightcurve\")\n", " plt.xlabel(\"Time (s)\")\n", " plt.ylabel(\"Counts\")\n", "\n", " if (threshold != 'None'):\n", " if (colName == 'COUNTS'):\n", " threshold=float(threshold)*100.\n", "\n", " y2data = [threshold]*len(xdata) \n", " plt.plot(xdata,y2data)\n", " plt.text(xmin+0.1*(xmax-xmin), threshold+0.01*threshold, str(threshold)+\" cts/sec\", ha='center') \n", " \n", " fitsFile.close() \n", " \n", " else:\n", " print(\"File not found \"+fileName+\"\\n\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Extract the source region lightcurve\n", "\n", "# Define some parameters for filtering the event file and define the lightcurve binning\n", "\n", "q_flag = \"#XMMEA_EP\" # Quality flag for EPIC pn\n", "n_pattern = 4 # Pattern selection\n", "pn_pi_min = 200. # Low energy range eV\n", "pn_pi_max = 10000. # High energy range eV\n", "lc_bin = 100 # Lightcurve bin in secs\n", "\n", "# Define the output ligthcurve file name\n", "\n", "in_LCSRCFile = work_dir+'EPIC_PN_source_lightcurve.lc' # Name of the output source lightcurve" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# SAS Command\n", "cmd = \"evselect\" # SAS task to be executed \n", "\n", "# Arguments of SAS Command\n", "expression = f'{q_flag}&&(PATTERN<={n_pattern})&&((X,Y) IN circle({x_source},{y_source},{r_source}))&&(PI in [{pn_pi_min}:{pn_pi_max}])' # event filter expression\n", "inargs = [f'table={eventfile}','energycolumn=PI','withrateset=yes',f'rateset={in_LCSRCFile}',\n", " f'timebinsize={lc_bin}','maketimecolumn=yes','makeratecolumn=yes',f'expression={expression}']\n", "\n", "print(\" Filter expression to use: \"+expression+\" \\n\")\n", "print(\" SAS command to be executed: \"+cmd+\", with arguments; \\n\")\n", "inargs" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "# Execute the SAS task with the parameters to produce the source region light curve\n", "\n", "w(cmd, inargs).run()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Inspect light curve\n", "\n", "pn_threshold = 1.0 # cts/sec (only used here for display purposes. 
Draws a horizontal line at this value)\n", "plt.figure(figsize=(20,8)) # Size of figure\n", "\n", "plotLC(plt,pn_threshold,in_LCSRCFile) # Plot source region light curve\n", "\n", "plt.legend()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Extract the background region light curve\n", "\n", "# Define some parameters for filtering the event file and define the lightcurve binning\n", "\n", "q_flag = \"#XMMEA_EP\" # Quality flag for EPIC pn\n", "n_pattern = 4 # Pattern selection\n", "pn_pi_min = 200. # Low energy range eV\n", "pn_pi_max = 10000. # High energy range eV\n", "lc_bin = 100 # Lightcurve bin in secs\n", "\n", "# Define the output ligthcurve file name\n", "\n", "in_LCBKGFile = work_dir+'EPIC_PN_background_lightcurve.lc' # Name of the output background lightcurve" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# SAS Command\n", "cmd = \"evselect\" # SAS task to be executed \n", "\n", "# Arguments of SAS Command\n", "expression = f'{q_flag}&&(PATTERN<={n_pattern})&&((X,Y) IN circle({x_bkg},{y_bkg},{r_bkg}))&&(PI in [{pn_pi_min}:{pn_pi_max}])' # event filter expression\n", "inargs = [f'table={eventfile}','energycolumn=PI','withrateset=yes',f'rateset={in_LCBKGFile}',\n", " f'timebinsize={lc_bin}','maketimecolumn=yes','makeratecolumn=yes',f'expression={expression}']\n", "\n", "print(\" Filter expression to use: \"+expression+\" \\n\")\n", "print(\" SAS command to be executed: \"+cmd+\", with arguments; \\n\")\n", "inargs" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "# Execute the SAS task with the parameters to produce the background region light curve\n", "\n", "w(cmd, inargs).run()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Inspect light curve\n", "\n", "pn_threshold = 0.01 # cts/sec (only used here for display purposes. 
Draws a horizontal line at this value)\n", "plt.figure(figsize=(20,8)) # Size of figure\n", "\n", "plotLC(plt,pn_threshold,in_LCBKGFile) # Plot background region light curve\n", "\n", "plt.legend()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Define the output corrected ligthcurve file name\n", "\n", "in_LCFile = work_dir+'EPIC_PN_corrected_lightcurve.lc' # Name of the output corrected lightcurve" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Correct the light curve with the SAS task epiclccorr\n", "\n", "# SAS Command\n", "cmd = \"epiclccorr\" # SAS task to be executed \n", "\n", "# Arguments of SAS Command\n", "inargs = [f'eventlist={eventfile}',f'srctslist={in_LCSRCFile}',f'outset={in_LCFile}',\n", " f'bkgtslist={in_LCBKGFile}','withbkgset=yes','applyabsolutecorrections=yes']\n", "\n", "print(\" SAS command to be executed: \"+cmd+\", with arguments; \\n\")\n", "inargs" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "# Execute the SAS task with the parameters to produce the corrected light curve\n", "\n", "w(cmd, inargs).run()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Inspect light curve\n", "\n", "pn_threshold = 1.0 # cts/sec (only used here for display purposes. Draws a horizontal line at this value)\n", "plt.figure(figsize=(20,8)) # Size of figure\n", "\n", "plotLC(plt,pn_threshold,in_LCFile) # Plot corrected light curve\n", "plotLC(plt,pn_threshold,in_LCSRCFile) # Plot source region light curve\n", "plotLC(plt,pn_threshold,in_LCBKGFile) # Plot background region light curve\n", "\n", "plt.legend()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "

Spectrum Extraction

" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Extract the source region spectrum\n", "\n", "# Define some parameters for filtering the event file\n", "\n", "q_flag = \"#XMMEA_EP\" # Quality flag for EPIC pn\n", "n_pattern = 4 # Pattern selection\n", "\n", "# Define the output ligthcurve file name\n", "\n", "in_SPSRCFile = work_dir+'EPIC_PN_source_spectrum.fits' # Name of the output source spectrum" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# SAS Command\n", "cmd = \"evselect\" # SAS task to be executed \n", "\n", "# Arguments of SAS Command\n", "expression = f'{q_flag}&&(PATTERN<={n_pattern})&&((X,Y) IN circle({x_source},{y_source},{r_source}))' # event filter expression\n", "inargs = [f'table={eventfile}','withspectrumset=yes',f'spectrumset={in_SPSRCFile}', \n", " 'energycolumn=PI','spectralbinsize=5','withspecranges=yes','specchannelmin=0',\n", " 'specchannelmax=20479',f'expression={expression}']\n", "print(\" Filter expression to use: \"+expression+\" \\n\")\n", "print(\" SAS command to be executed: \"+cmd+\", with arguments; \\n\")\n", "inargs" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Execute the SAS task with the parameters to produce the source spectrum\n", "\n", "w(cmd, inargs).run()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Extract the source region spectrum\n", "\n", "# Define some parameters for filtering the event file\n", "\n", "q_flag = \"#XMMEA_EP\" # Quality flag for EPIC pn\n", "n_pattern = 4 # Pattern selection\n", "\n", "# Define the output ligthcurve file name\n", "\n", "in_SPBKGFile = work_dir+'EPIC_PN_background_spectrum.fits' # Name of the output background spectrum" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# SAS Command\n", "cmd = \"evselect\" # SAS task to be executed \n", "\n", "# Arguments of SAS Command\n", "expression = f'{q_flag}&&(PATTERN<={n_pattern})&&((X,Y) IN circle({x_bkg},{y_bkg},{r_bkg}))' # event filter expression\n", "inargs = [f'table={eventfile}','withspectrumset=yes',f'spectrumset={in_SPBKGFile}', \n", " 'energycolumn=PI','spectralbinsize=5','withspecranges=yes','specchannelmin=0',\n", " 'specchannelmax=20479',f'expression={expression}']\n", "print(\" Filter expression to use: \"+expression+\" \\n\")\n", "print(\" SAS command to be executed: \"+cmd+\", with arguments; \\n\")\n", "inargs" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Execute the SAS task with the parameters to produce the background spectrum\n", "\n", "w(cmd, inargs).run()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# SAS Command\n", "cmd = \"backscale\" # SAS task to be executed \n", "\n", "# Arguments of SAS Command\n", "inargs = [f'spectrumset={in_SPSRCFile}',f'badpixlocation={eventfile}']\n", "\n", "print(\" SAS command to be executed: \"+cmd+\", with arguments; \\n\")\n", "inargs" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Execute the SAS task for the source spectrum\n", "\n", "w(cmd, inargs).run()" ] }, { "cell_type": "code", "execution_count": null, 
"metadata": {}, "outputs": [], "source": [ "# SAS Command\n", "cmd = \"backscale\" # SAS task to be executed \n", "\n", "# Arguments of SAS Command\n", "inargs = [f'spectrumset={in_SPBKGFile}',f'badpixlocation={eventfile}']\n", "\n", "print(\" SAS command to be executed: \"+cmd+\", with arguments; \\n\")\n", "inargs" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Execute the SAS task for the background spectrum\n", "\n", "w(cmd, inargs).run()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Define some parameters for rmfgen\n", "\n", "# Define the output redistribution matrix file name\n", "\n", "in_RESPFile = work_dir+'EPIC_PN.rmf' # Name of the output redistribution" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# SAS Command\n", "cmd = \"rmfgen\" # SAS task to be executed \n", "\n", "# Arguments of SAS Command\n", "inargs = [f'spectrumset={in_SPSRCFile}',f'rmfset={in_RESPFile}']\n", "\n", "print(\" SAS command to be executed: \"+cmd+\", with arguments; \\n\")\n", "inargs" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Execute the SAS task for generation of redistribution matrix\n", "\n", "w(cmd, inargs).run()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Define some parameters for rmfgen\n", "\n", "# Define the output ancillary file name\n", "\n", "in_ARFFile = work_dir+'EPIC_PN.arf' # Name of the output ancillary" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# SAS Command\n", "cmd = \"arfgen\" # SAS task to be executed \n", "\n", "print(\" Checking for Response File ..... \\n\")\n", "# Check if RESP file is available.\n", "if os.path.isfile(in_RESPFile):\n", " print (\"File \"+in_RESPFile+\" exists. \\n\")\n", "else:\n", " print (\"File \"+in_RESPFile+\" does not exist, please check. 
\\n\")\n", " \n", "# Arguments of SAS Command\n", "inargs = [f'spectrumset={in_SPSRCFile}',f'arfset={in_ARFFile}',\n", " 'withrmfset=yes',f'rmfset={in_RESPFile}',f'badpixlocation={eventfile}','detmaptype=psf']\n", "\n", "print(\" SAS command to be executed: \"+cmd+\", with arguments; \\n\")\n", "inargs" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Execute the SAS task for generation of the ancillary file \n", "\n", "w(cmd, inargs).run()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Define some parameters for specgruop\n", "\n", "# Define the output ancillary file name\n", "\n", "in_GRPFile = work_dir+'EPIC_PN_spectrum_grp.fits' # Name of the output specgruop" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# SAS Command\n", "cmd = \"specgroup\" # SAS task to be executed \n", "\n", "# Arguments of SAS Command\n", "inargs = [f'spectrumset={in_SPSRCFile}','mincounts=25','oversample=3',\n", " f'rmfset={in_RESPFile}',f'arfset={in_ARFFile}',\n", " f'backgndset={in_SPBKGFile}',f'groupedset={in_GRPFile}']\n", "\n", "print(\" SAS command to be executed: \"+cmd+\", with arguments; \\n\")\n", "inargs" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Execute the SAS task for generation of the ancillary file \n", "\n", "w(cmd, inargs).run()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Before you start, clear all data and models\n", "\n", "AllModels.clear() # Clear all models\n", "AllData.clear() # Clear all data" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Load the group spectral file\n", "\n", "# The group file already contains in the header the names of all the necessary files needed by xspec\n", "# (source and background spectra, response and ancillery files)\n", "\n", "data1 = Spectrum(in_GRPFile) # load spectra groupped file\n", "AllData.show() # inspect loaded data" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Plot results by opening a window directly on the notebook\n", "\n", "Plot.device = \"/svg\"" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# First inspection of the spectrum\n", "\n", "Plot.xAxis = \"keV\" # set X axis to energy units\n", "Plot.xLog = True # log scale\n", "Plot.yLog = True # log scale\n", "Plot(\"data background\") # plot source and background spectra" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Define a model and its parameters\n", "\n", "model1 = Model(\"phabs*pow\") # powerlaw with absorption" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "Xset.chatter \n", "# Fit model\n", "\n", "data1.ignore(\"**-0.2 10.-**\") # For the fit, ignore the range ourside (0.2-10.)\n", "data1.ignore(\"bad\") # For the fit, ignore bad channels\n", "\n", "Fit.nIterations = 100 # maximum number of iterations during fitting\n", "Fit.statMethod = \"cstat\" # fit statistics to be used\n", "Fit.perform() # perform fit\n", "cl = 2.706 # confidence level (90%)\n", "Fit.error(str(cl)+\" 1 2 3\") # 
derive errors at given confidence range for all parameters\n", "\n", "fene_ini = 0.2 # Initial energy to derive flux (keV)\n", "fene_end = 10. # End energy to derive flux (keV)\n", "AllModels.calcFlux(str(fene_ini)+\" \"+str(fene_end)+\" err\") # derive the flux and error in the 0.2-10 keV energy range according to fitted model\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Extract information from fitted model\n", "\n", "nh = model1(1).values[1] # Model parameter 1\n", "nherr = AllModels(1)(1).error # Error model parameter 1\n", "alpha = model1(2).values[0] # Model parameter 2\n", "alphaerr = AllModels(1)(2).error # Error model parameter 2\n", "norm = model1(3).values[0] # Model parameter 3\n", "normerr = AllModels(1)(3).error # Error model parameter 3\n", "\n", "dof = Fit.dof # get fit dof\n", "statv = Fit.statMethod # fit statistics method\n", "stat = Fit.statistic # get value of fit statistics\n", "tstatv = Fit.statTest # test statistics method\n", "tstat = Fit.testStatistic # get value of test statistics\n", "flux = data1.flux # get flux\n", "\n", "print(\" Fit Statistics: \\n\")\n", "print(\" Value of fit stat: \"+str(stat)+\" (\"+statv+\")\\n\")\n", "print(\" Value of test stat: \"+str(tstat)+\" (\"+tstatv+\"\\n\")\n", "print(\" Fit dof: \"+str(dof)+\"\\n\")\n", "print(\" Model Parameters and their error: \\n\")\n", "print(\" Nh Value (10^22) : \"+str(100.*nh)+\" , \"+str(nherr)+\"\\n\")\n", "print(\" Alpha Value: \"+str(alpha)+\" , \"+str(alphaerr)+\"\\n\")\n", "print(\" Norm Value: \"+str(norm)+\" , \"+str(normerr)+\"\\n\")\n", "print(\" Flux Value (\"+ str(fene_ini)+\"-\"+str(fene_end)+\" keV) (ergs/cm^2/s): \"+str(flux[0])+\" , \"+\"(\"+str(flux[1])+\", \"+str(flux[2])+\")\\n\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Plot results by opening a window directly on the notebook\n", "\n", "Plot.device = \"/svg\"" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Plot data and folded model with residuals\n", "\n", "Plot.xAxis = \"keV\" # set X axis to energy units\n", "Plot.xLog = True # log scale\n", "Plot.yLog = True # log scale\n", "Plot(\"ldata\",\"residuals\") # plot data and folded model, with residuals" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "
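As an optional last step (an addition, not part of the original thread), the arrays behind the XSPEC plot can be pulled into Python and redrawn with matplotlib; a minimal sketch, assuming the fit above has been performed,
" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Optional sketch (addition): redraw the fitted spectrum with matplotlib using the\n", "# arrays that PyXspec fills after a Plot(...) call.\n", "Plot.xAxis = 'keV'\n", "Plot('ldata')                  # fill the plot arrays in data units\n", "en = Plot.x()                  # bin centre energies (keV)\n", "rate = Plot.y()                # binned data values\n", "folded = Plot.model()          # folded model values\n", "plt.figure(figsize=(10,6))\n", "plt.loglog(en, rate, drawstyle='steps-mid', label='data')\n", "plt.loglog(en, folded, label='folded model')\n", "plt.xlabel('Energy (keV)')\n", "plt.ylabel('normalized counts s$^{-1}$ keV$^{-1}$')\n", "plt.legend()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "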
 \n", "
\n", "\n", "\t\n", "\t\t\n", "\t\t\t\n", "\t\t\n", "\t\n", "
\n", "\t\t\t

Caveats
\n", "\t\t\t
\n", "\t\t\tNone

\n", "\t\t\t
\n", "
\n", " \n", "\n", "
\n", "

 

" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.2" } }, "nbformat": 4, "nbformat_minor": 5 }