Program Options

Important

This documentation is not meant to cover all of the physics/astronomy background knowledge required to fully understand what the various programs are doing.

IRAF Reduction

Note

The pipeline for this program is set up only for use at BSUO. This will be updated in the future.

Making heavy use of Astropy’s ccdproc and Photutils, I was able to create an automatic data reduction process that uses bias, dark, and flat frames to reduce science images.

The first step is to set some global variables:

sigma_clip_low_thresh = None
sigma_clip_high_thresh = 3
sigclip = 5  # sigclip for cosmic ray removal
# rdnoise = 10.83  # * u.electron  # gathered from fits headers manually
# gain = 1.43  # * u.electron / u.adu  # gathered from fits headers manually
overwrite = True  # if the user wants to overwrite already existing files or not, by default it is set to True
# mem_limit = 1600e6  # maximum memory limit 4.5 Gb is the recommended which is 450e6 (i.e. 8.0 Gb would be 800e6)
dark_bool = "True"
# location = "bsuo"

If the user is using the pipeline as described in this section, then the following lines of code are used:

        # checks whether the file paths from above are real
        while True:
            images_path = Path(path)
            calibrated_data = Path(calibrated)
            # constructing a Path never raises, so explicitly check that the raw image folder exists
            if images_path.exists():
                break
            print("Files were not found. Please try again.\n")
            path = input("Please enter a file path or folder name (if this code is in the same main folder): ")
            calibrated = input(
                "Please enter a name for a new calibrated folder to not overwrite the original images: ")

        if location.lower() == "kpno":
            kpno()
        elif location.lower() == "ctio":
            ctio()
        elif location.lower() == "lapalma":
            lapalma()
        elif location.lower() == "bsuo":
            pass

        calibrated_data.mkdir(exist_ok=True)
        files = ccdp.ImageFileCollection(images_path)

        zero, overscan_region, trim_region = bias(files, calibrated_data, path, pipeline, mem_limit, gain)
        if not dark_bool:
            master_dark = None
        else:
            master_dark = dark(files, zero, calibrated_data, overscan_region, trim_region, gain, rdnoise, mem_limit)

        flat(files, zero, master_dark, calibrated_data, overscan_region, trim_region, gain, rdnoise, mem_limit)
        science_images(files, calibrated_data, zero, master_dark, trim_region, overscan_region)

Otherwise, these lines are used from the main function:

    if not pipeline:
        # allows the user to input where the raw images are and where the calibrated images go to
        path = input("Please enter a file pathway (i.e. C:\\folder1\\folder2\\[raw]) to where the raw images are or type "
                     "the word 'Close' to leave: ")
        # path = "C:\\Users\\Kyle\\OneDrive\\PhysicsAstro\\Astronomy\\Code\\IRAF\\Calibration"
        if path.lower() == "close":
            exit()
        # path = "Calibration2"
        calibrated = input("Please enter a file pathway for a new calibrated folder to not overwrite the original images "
                           "(C:\\folder1\\folder2\\[calibrated]): ")
        # calibrated = "C:\\test"
        # checks whether the file paths from above are real
        while True:
            images_path = Path(path)
            calibrated_data = Path(calibrated)
            # constructing a Path never raises, so explicitly check that the raw image folder exists
            if images_path.exists():
                break
            print("Files were not found. Please try again.\n")
            path = input("Please enter a file path or folder name (if this code is in the same main folder): ")
            calibrated = input("Please enter a name for a new calibrated folder to not overwrite the original images: ")

        if location.lower() == "kpno":
            gain, rdnoise = kpno()
        elif location.lower() == "ctio":
            gain, rdnoise = ctio()
        elif location.lower() == "lapalma":
            gain, rdnoise = lapalma()
        elif location.lower() == "bsuo":
            pass
        else:
            print("\nDo you want to load default options like gain and read noise? The defaults are for BSUO")
            while True:
                default_ans = input("To load defaults type 'Default' otherwise type 'New' to enter values: ")
                # default_ans = "default"
                if default_ans.lower() == "default":
                    break
                elif default_ans.lower() == "new":
                    new_default()
                    break
                else:
                    print("Please either enter 'Default' or 'New'.\n")

        calibrated_data.mkdir(exist_ok=True)
        files = ccdp.ImageFileCollection(images_path)

        zero, overscan_region, trim_region = bias(files, calibrated_data, path, pipeline, mem_limit, gain)
        if not dark_bool:
            master_dark = None
        else:
            master_dark = dark(files, zero, calibrated_data, overscan_region, trim_region, gain, rdnoise, mem_limit)

        flat(files, zero, master_dark, calibrated_data, overscan_region, trim_region, gain, rdnoise, mem_limit)
        science_images(files, calibrated_data, zero, master_dark, trim_region, overscan_region)

The only functional difference between these two code paths is that with the pipeline the user does not have to enter folder and file locations manually, whereas without the pipeline the user does.

Default Values

If the user is not using the pipeline and is not at the BSUO location, then the user has the option to change those initial global settings with this function:

def new_default():
    """
    Generates new values that the user can enter

    :return: newly entered default values
    """
    global sigma_clip_high_thresh
    global sigma_clip_low_thresh
    global sigclip
    global dark_bool
    global location

    sigma_clip_low_thresh = input("\nEnter a sigma clip low threshold, default is 'None': ")
    if sigma_clip_low_thresh.lower() == "none":
        sigma_clip_low_thresh = None
    else:
        sigma_clip_low_thresh = int(sigma_clip_low_thresh)
    sigma_clip_high_thresh = int(input("Enter a sigma clip high threshold, default is '3': "))
    gain = float(input("Enter a gain value, default is '1.43' electron/adu: "))
    rdnoise = float(input("Enter a readnoise value, default is '10.83' electrons: "))
    sigclip = int(input("Enter a sigma clip value for cosmic ray removal, default is '5': "))
    dark_bool = input("Are you using Dark Frames, default is True (enter 'True' or 'False'): ")
    if dark_bool.lower() == "false":
        dark_bool = False
    elif dark_bool.lower() == "true":
        dark_bool = True
    location = input("What is your observation location, default is bsuo (ctio, kpno, lapalma): ")

    return gain, rdnoise, dark_bool

The default values are displayed, but the user can change them to whatever values they like. There are no try/except blocks to catch incorrect types, so the user has to be extra careful when entering values.
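Because a stray character at any of these prompts will crash the program with a ValueError, a small wrapper like the sketch below could guard the numeric prompts (this ask_float helper is hypothetical, not part of the package):

def ask_float(prompt, default):
    """Ask for a float value, falling back to the default on bad input."""
    try:
        return float(input(prompt))
    except ValueError:
        print("Invalid value entered, using the default of {}.".format(default))
        return default

# example usage:
# gain = ask_float("Enter a gain value, default is '1.43' electron/adu: ", 1.43)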

Reduction Functions

The main functions of this program are bias, dark, flat, and science. However, they all call the following function for the actual data reduction of each image:

def reduce(ccd, overscan_region, trim_region, num, zero, combined_dark, good_flat, gain, rdnoise):
    """
    This function consolidates each stage of the reduction process into a single function to
    limit duplication of the code.

    :param ccd: individual image
    :param overscan_region: the overscan region of the image
    :param trim_region: region of the image to trim
    :param num: tells the program what stage of the process it is in
    :param zero: master bias
    :param combined_dark: master dark
    :param good_flat: master flat
    :param gain: gain of the camera
    :param rdnoise: readout noise of the camera

    :return: depends on the stage of process, but will return a final image for each process
    """

    # subtract the overscan, trim the image, and gain correct
    if overscan_region.lower() != "none":
        ccd = ccdp.subtract_overscan(ccd, fits_section=overscan_region, median=True, overscan_axis=None)

    trim_ccd = ccdp.trim_image(ccd, fits_section=trim_region)
    new_ccd = ccdp.gain_correct(trim_ccd, gain=gain * u.electron / u.adu)  # gain correct the image

    # this if statement checks whether the input is bias, dark, flat, or science reduction
    # bias combining
    if num == 0:
        return new_ccd
    # dark combining
    elif num == 1:
        # print(new_ccd.unit, zero.unit)
        # Subtract bias
        sub_ccd = ccdp.subtract_bias(new_ccd, zero)

        return sub_ccd
    # flat combining
    elif num == 2:
        # print(new_ccd.unit, zero.unit)
        # Subtract bias
        sub_ccd = ccdp.subtract_bias(new_ccd, zero)

        # Subtract the dark current
        if not dark_bool:
            final_ccd = sub_ccd
        else:
            final_ccd = ccdp.subtract_dark(sub_ccd, combined_dark, exposure_time='exptime', exposure_unit=u.second,
                                           scale=True)
        return final_ccd
    # science calibration
    elif num == 3:
        # Subtract bias
        sub_ccd = ccdp.subtract_bias(new_ccd, zero)
        # Subtract the dark current
        if not dark_bool:
            reduced = sub_ccd
        else:
            reduced = ccdp.subtract_dark(sub_ccd, combined_dark, exposure_time='exptime', exposure_unit=u.second,
                                         scale=True)
        # flat field correct the science image based on filter
        reduced = ccdp.flat_correct(ccd=reduced, flat=good_flat, min_value=1.0)

        # cosmic ray reject above 5 sigmas; gain_apply is set to False because it changes the units of the image
        reduced = ccdp.cosmicray_lacosmic(reduced, gain_apply=False, readnoise=rdnoise, gain=gain, sigclip=sigclip)

        return reduced

Each process of reducing bias, darks, flats, and science image has its own section within the if statement.

Bias

The bias function, shown below, first takes a flat image from the raw folder (when not using the pipeline) and displays it so the user can determine:

  • If there is an overscan region

  • Where the data section (trim region) is

The format for both of these variables is [columns, rows], following the FITS convention. If the user does not want to use all of the columns, as for an overscan region, an example would be [2073:2115, :], where only the columns between 2073 and 2115 are used but all of the rows are. Likewise, an example trim region would be [20:2060, 12:2057], which uses columns 20-2060 and rows 12-2057.

For ccdproc, there is a strange bug where ccdproc errors out if the user enters the same rows for both the overscan region and the trim region. The recommendation is therefore to use all of the rows for the overscan region and then specify where to trim the image afterward.
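For reference, here is a small sketch of how such a FITS section maps onto numpy indexing (assuming ccd is a CCDData object): FITS sections are 1-indexed, inclusive, and ordered [columns, rows], while numpy slices are 0-indexed, end-exclusive, and ordered [rows, columns].

import ccdproc as ccdp

trimmed = ccdp.trim_image(ccd, fits_section="[20:2060, 12:2057]")
# the equivalent slice of the underlying numpy array would be:
# trimmed_data = ccd.data[11:2057, 19:2060]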

Once each image has been reduced, they must be combined to create what is called a master bias or zero image. This is done specifically by these lines:


    print("\nFinished overscan correcting bias frames.")
    # combine all the output bias images into a master bias
    reduced_images = ccdp.ImageFileCollection(calibrated_data)
    calibrated_biases = reduced_images.files_filtered(imagetyp='BIAS', include_path=True)

    combined_bias = ccdp.combine(calibrated_biases,
                                 method='average',
                                 sigma_clip=True, sigma_clip_low_thresh=sigma_clip_low_thresh,
                                 sigma_clip_high_thresh=sigma_clip_high_thresh,
                                 sigma_clip_func=np.ma.median, sigma_clip_dev_func=mad_std,
                                 mem_limit=mem_limit
                                 )

The sigma_clip_dev_func computes the standard deviation about the center value (see the ccdproc documentation for more details). The mem_limit can also be changed, but is set to a default value of 1600e6. This may need to be reduced depending on how much RAM the machine has available; the purpose of this value is to split the combining task into chunks to reduce system RAM usage.


def bias(files, calibrated_data, path, pipeline, mem_limit, gain):
    """
    Calibrates the bias images

    :param path: the raw images folder path
    :param files: file location where all raw images are
    :param calibrated_data: file location where the new images go
    :param pipeline: true or false for pipeline usage
    :param mem_limit: maximum memory chunk usage
    :param gain: gain of the camera

    :return: the combined bias image and the trim and overscan regions
    """

    # plots one of the flat image mean count values across all columns to find the trim and overscan regions
    if not pipeline:
        print("\n\nThe flat image that you enter next should be inside the " + "\033[1m" + "\033[93m" + "FIRST" +
              "\033[00m" + " folder that you entered above or this will crash.")
        while True:
            try:
                image = input(
                    "Please enter the name of one of the flat images to be looked at for overscan and data regions: ")
                # image = "Flat-Empty-B-Bin2-001-NoGEM.fts"  # testing
                cryo_path = Path(path)
                bias_1 = CCDData.read(cryo_path / image, unit='adu')
                break
            except FileNotFoundError:
                print("\nThe file you entered could not be found, please try entering " 
                      "\033[1m" + "\033[93m" + "JUST" + "\033[00m" + " the file name only.\n")
        # bias_1 = CCDData.read(cryo_path / 'bias-0001.fits', unit='adu')  # testing

        print("\n\nFor the overscan region, [columns, rows], and if you want all the columns then you want would enter, \n"
              "[1234:5678, 1234:5678] and this would say rows between those values and all the columns. \n"
              "This would also work if you wanted all the columns ([: , 1234:5678]).\n")
        bias_plot(bias_1)

        overscan_region = input("Please enter the overscan region you determined from the figure.\n"
                                "Example '[2073:2115, :]' or if you do not have an overscan region enter 'None': ")
        trim_region = input("Please enter the data section. Example '[20:2060, 12:2057]': ")
        print()
    else:
        overscan_region = "[2073:2115, :]"
        trim_region = "[20:2060, 12:2057]"

    print("\nStarting overscan on bias.\n")
    for ccd, file_name in files.ccds(imagetyp='BIAS', return_fname=True, ccd_kwargs={'unit': 'adu'}):
        new_ccd = reduce(ccd, overscan_region, trim_region, 0, zero=None, combined_dark=None, good_flat=None, gain=gain, rdnoise=rdnoise)

        list_of_words = file_name.split(".")
        new_fname = "{}.fits".format(list_of_words[0])

        # Save the result
        new_ccd.write(calibrated_data / new_fname, overwrite=overwrite)

        add_header(calibrated_data, new_fname, "BIAS", "None", None, None, None,
                   trim_region, overscan_region, gain, rdnoise)

        # output that an image is finished for updates to the user
        print("Finished overscan correction for " + str(new_fname))

    print("\nFinished overscan correcting bias frames.")
    # combine all the output bias images into a master bias
    reduced_images = ccdp.ImageFileCollection(calibrated_data)
    calibrated_biases = reduced_images.files_filtered(imagetyp='BIAS', include_path=True)

    combined_bias = ccdp.combine(calibrated_biases,
                                 method='average',
                                 sigma_clip=True, sigma_clip_low_thresh=sigma_clip_low_thresh,
                                 sigma_clip_high_thresh=sigma_clip_high_thresh,
                                 sigma_clip_func=np.ma.median, sigma_clip_dev_func=mad_std,
                                 mem_limit=mem_limit
                                 )

    fname = "zero.fits"
    combined_bias.meta['combined'] = True
    combined_bias.write(calibrated_data / fname, overwrite=overwrite)

Dark

Once the zero image has been created, the next step is to create a master dark. However, some researchers forgo taking dark images, as CCDs are cooled down so much that the thermal noise they create in darks is virtually negligible; this is why there is a variable called dark_bool, which has a default value of True.

The process is exactly the same as for the bias frames above, except that the zero image is subtracted from each and every dark image. The combining of these newly reduced darks is also slightly different:



def dark(files, zero, calibrated_path, overscan_region, trim_region, gain, rdnoise, mem_limit):
    """
    Calibrates the dark frames.

    :param files: file location of raw images
    :param zero: master bias image
    :param calibrated_path: file location for the new images
    :param trim_region: trim region for images
    :param overscan_region: overscan region for images
    :param mem_limit: maximum memory chunk usage
    :param gain: gain of the camera
    :param rdnoise: readout noise of the camera

    :return: combined master dark
    """
    print("Starting dark calibration.\n")
    # calibrating a combining the dark frames
    for ccd, file_name in files.ccds(imagetyp='DARK', ccd_kwargs={'unit': 'adu'}, return_fname=True):
        sub_ccd = reduce(ccd, overscan_region, trim_region, 1, zero, combined_dark=None, good_flat=None, gain=gain, rdnoise=rdnoise)

        # new file name that uses the number from the original image
        list_of_words = file_name.split(".")
        new_fname = "{}.fits".format(list_of_words[0])

        # Save the result
        sub_ccd.write(calibrated_path / new_fname, overwrite=overwrite)

        add_header(calibrated_path, new_fname, "DARK", "None", None, None, None,
                   trim_region, overscan_region, gain, rdnoise)

        print("Finished overscan correction and bias subtraction for " + str(new_fname))

    print("\nFinished overscan correcting and bias subtracting all dark frames.")
    print("\nStarting combining dark frames.\n")
    time.sleep(10)
    reduced_images = ccdp.ImageFileCollection(calibrated_path)
    calibrated_darks = reduced_images.files_filtered(imagetyp='dark', include_path=True)

    combined_dark = ccdp.combine(calibrated_darks,
                                 method='average',
                                 sigma_clip=True, sigma_clip_low_thresh=sigma_clip_low_thresh,
                                 sigma_clip_high_thresh=sigma_clip_high_thresh,

This is because we now have to take into consideration the read noise and gain of the CCD/camera.

Flat

As stated above, the process is virtually the same, but now the zero image and master dark are both subtracted from each and every flat image. The combining of the images is slightly different, as we now have filters (see the sketch after the following snippet).

        final_ccd = reduce(ccd, overscan_region, trim_region, 2, zero, combined_dark, good_flat=None, gain=gain, rdnoise=rdnoise)

        # new file name with the filter and number from the original file
        list_of_words = file_name.split(".")
        new_fname = "{}.fits".format(list_of_words[0])

        # Save the result
        final_ccd.write(calibrated_path / new_fname, overwrite=overwrite)

        add_header(calibrated_path, new_fname, "FLAT", "None", None, None, None,
                   trim_region, overscan_region, gain, rdnoise)
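The per-filter combining step is not shown in the excerpt above, but a minimal sketch of how it can be done with ccdproc follows (it assumes the standard 'filter' keyword and reuses the same combine settings as the bias; the package's actual loop and output names may differ):

    # combine the calibrated flats filter by filter
    reduced_images = ccdp.ImageFileCollection(calibrated_path)
    flat_filters = set(h['filter'] for h in reduced_images.headers(imagetyp='FLAT'))
    for filt in flat_filters:
        to_combine = reduced_images.files_filtered(imagetyp='FLAT', filter=filt, include_path=True)
        combined_flat = ccdp.combine(to_combine,
                                     method='average',
                                     sigma_clip=True, sigma_clip_low_thresh=sigma_clip_low_thresh,
                                     sigma_clip_high_thresh=sigma_clip_high_thresh,
                                     sigma_clip_func=np.ma.median, sigma_clip_dev_func=mad_std,
                                     mem_limit=mem_limit)
        combined_flat.meta['combined'] = True
        # hypothetical output name, one master flat per filter
        combined_flat.write(calibrated_path / "master_flat_{}.fits".format(filt), overwrite=overwrite)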

This creates master flats in each filter that the user is using.

Science

Again, as stated above, the zero image and master dark are subtracted from each science image, and the master flat applied is chosen based on the filter used for each science image.

Adding to the Header

Each of the four functions discussed above calls the add_header function within its reduction loop. This function adds various values to the headers of each individual image:

        good_flat = combined_flats[light.header['filter']]
        reduced = reduce(light, overscan_region, trim_region, 3, zero, combined_dark, good_flat, gain=gain, rdnoise=rdnoise)

        list_of_words = file_name.split(".")
        new_fname = "{}.fits".format(list_of_words[0])

        all_reds.append(reduced)
        reduced.write(calibrated_data / new_fname, overwrite=overwrite)

        hjd = light.header["JD-HELIO"]
        ra = light.header["RA"]
        dec = light.header["DEC"]

        add_header(calibrated_data, new_fname, science_imagetyp, "None", hjd, ra, dec, trim_region, overscan_region, gain, rdnoise)

        print("Finished calibration of " + str(new_fname))
    print("\nFinished calibrating all science images.")


def add_header(pathway, fname, imagetyp, filter_name, hjd, ra, dec, trim_region, overscan_region, gain, rdnoise):
    """
    Adds values to the header of each image reduced

    :param overscan_region: BIASSEC for the header
    :param trim_region: DATASEC for the header
    :param dec: declination of target object
    :param ra: right ascension of target object
    :param hjd: time of mid-exposure for each image
    :param pathway: pathway to the new file
    :param fname: file name
    :param imagetyp: image type (bias, flat, dark, light)
    :param filter_name: filter type (B, V, R)
    :param gain: gain of the camera
    :param rdnoise: readout noise of the camera

    :return: None
    """
    # bias_image = get_pkg_data_filename(pathway + "\\" + fname)

The goal of this is to make it easier in the future to tell what values were used in the reduction process.

BJD_TDB

The conversion between HJD and BJD_TDB is not a trivial one. The purpose of including it in this package is to have a single time standard across multiple telescopes and satellites: TESS uses BJD_TDB, while BSUO and the various SARA telescopes use HJD.

    fits.setval(image_name, "RDNOISE", value=rdnoise, comment="UNits of e-")
    fits.setval(image_name, "OBSERVAT", value=location, comment="Obs. Loc.")
    fits.setval(image_name, "IMAGETYP", value=imagetyp, comment="Image type")
    fits.setval(image_name, "DATASEC", value=trim_region, comment="Trim data section")
    fits.setval(image_name, "BIASSEC", value=overscan_region, comment="Overscan section")
    fits.setval(image_name, "EPOCH", value="J2000.0")
    # fits.setval(bias_image, "FILTER", value=filter_name)
    if imagetyp == "LIGHT":
        bjd = BJD_TDB(hjd, location, ra, dec)
        fits.setval(image_name, "BJD_TDB", value=bjd.value, comment="Bary. Julian Date, Bary. Dynamical Time")


def BJD_TDB(hjd, site_name, ra, dec):
    """
    Converts HJD to BJD_TDB

    :param ra: right ascension of target object
    :param dec: declination of target object
    :param hjd: HJD time for the mid-exposure image
    :param site_name: location of the observations
    :return: Newly calculated BJD_TDB that is light time corrected
    """

Find Minimum

Note

This program was originally created by Alec J. Neal but updated for interactive usage by Kyle J. Koeller.

This program aims to find the times of minimum (ToM) from given light curve data. One thing this helps with is determining how accurate the period is.

The main function first asks the user how many filters they are using. This is assumed to be between 1 and 3 filters, as that is what is used at BSUO and the SARA telescopes that Ball State has access to.

    """
    num_filters = input("How many filters do you have (i.e. 1-3) or type 'Close' to close the program: ")
    print()
    if num_filters == '1':
        while True:
            filter1 = input("Please enter a complete file pathway to your data file (i.e. C:\\folder1\\folder2\\data.txt: ")
            if path.exists(filter1):
                break
            else:
                print("\nOne of the files you have entered does not exist, please try all three again.\n")
                continue
        print("\nPress 'h' for help.\n")
        plot_obs([filter1], day=day, lb=lb, rb=rb, order=order, resolution=resolution, npairs=npairs,
                 para_range=None, norm_method='norm')
    elif num_filters == '2':
        while True:
            filter1 = input("Please enter a complete file pathway to data file 1 (i.e. C:\\folder1\\folder2\\data.txt: ")
            filter2 = input("Please enter a complete file pathway to data file 2 (i.e. C:\\folder1\\folder2\\data.txt: ")
            if path.exists(filter1) and path.exists(filter2):
                break
            else:
                print("\nOne of the files you have entered does not exist, please try all three again.\n")
                continue
        print("\nPress 'h' for help.\n")
        plot_obs([filter1, filter2], day=day, lb=lb, rb=rb, order=order, resolution=resolution, npairs=npairs,
                 para_range=None, norm_method='norm')
    elif num_filters == '3':
        while True:
            filter1 = input("Please enter a complete file pathway to data file 1 (i.e. C:\\folder1\\folder2\\data.txt): ")
            filter2 = input("Please enter a complete file pathway to data file 2 (i.e. C:\\folder1\\folder2\\data.txt): ")
            filter3 = input("Please enter a complete file pathway to data file 3 (i.e. C:\\folder1\\folder2\\data.txt): ")
            if path.exists(filter1) and path.exists(filter2) and path.exists(filter3):
                break
            else:
                print("\nOne of the files you have entered does not exist, please try all three again.\n")
                continue
        print("\nPress 'h' for help.\n")
        plot_obs([filter1, filter2, filter3], day=day, lb=lb, rb=rb, order=order, resolution=resolution, npairs=npairs,
                 para_range=None, norm_method='norm')
    elif num_filters.lower() == "close":
        exit()
    else:
        print("Please enter a number between 1-3 or the word 'Close'.\n")

Kwee-van Woerden

Whichever option the user enters, the function plot_obs is called. The point of this function is to plot the main figure that the user interacts with via various keyboard press events (discussed later). Within this function, the function KvW (Kwee-van Woerden) is called. This method utilizes the symmetry of a light curve to estimate the location of a minimum.

The ToM and its error are determined by the following:

    a, b, c = coef[::-1]
    if c - b ** 2 / (4 * a) < 0 or calc_S(fracHJD, flux, -b / (2 * a), npairs=npairs)[0] > paraS[0]:
        with np.errstate(divide='ignore'):
            p2 = calc.poly.regr_polyfit(paraHJD[:3], paraS[:3], 2)
        # coef, err, R2, results = parabola[:4]
        a2, b2, c2 = p2[0][::-1]
        if c2 - b2 ** 2 / (4 * a2) < 0 or calc_S(fracHJD, flux, -b2 / (2 * a2), npairs=npairs)[0] > paraS[0]:
            T_kw = T_BF
        else:
            T_kw = -b2 / (2 * a2)
    else:
        T_kw = -b / (2 * a)
    S_at_tkw, Z = calc_S(fracHJD, flux, T_kw, npairs=npairs)
    Z /= 2
    if need_error:
        if Z <= 1:
            # print(Z)
            print('\033[1m' + '\033[95m' + 'Too few observations!' + '\033[0m')
        sigt_kw = np.sqrt(S_at_tkw / (a * (Z - 1)))

        print('S(T_BF) =', paraS[0])
        print('S(T_KW) =', calc_S(fracHJD, flux, T_kw, npairs=npairs))
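In other words, the program fits a parabola S(T) = aT^2 + bT + c to the KvW statistic and takes the ToM at the vertex, with its variance following directly from the snippet above:

\[ T_{\rm KW} = -\frac{b}{2a}, \qquad \sigma_{T_{\rm KW}}^2 = \frac{S(T_{\rm KW})}{a\,(Z - 1)} \]

where Z is half the number of flux pairs used to compute S (hence the Z /= 2 line).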

Key Events

When the plot gets displayed to the user, the user can interactively work with what is displayed using various key presses:


def press(event, Z, filter_files):
    """
    Pressing a key will do something

    Parameters
    ----------
    event : matplotlib event
        The event that is triggered by pressing a key
    Z : list
        List of the data to be plotted
    filter_files : list
        List of the data file pathways for each filter

    Returns
    -------
    None
    """
    global rb
    global lb
    global day
    if event.key == "d":
        # right boundary line
        lb = lb
        rb = event.xdata
        plt.close()
        plot_obs(filter_files, day=day, lb=lb, rb=rb, order=order, resolution=resolution, npairs=npairs,
                 para_range=None, norm_method="norm")
    elif event.key == "a":
        # left boundary line
        lb = event.xdata
        rb = rb
        plt.close()
        plot_obs(filter_files, day=day, lb=lb, rb=rb, order=order, resolution=resolution, npairs=npairs,
                 para_range=None, norm_method="norm")
    elif event.key == "w":
        # writes to a file
        pass
    elif event.key == "escape" or event.key == "q":
        # exits the program
        exit()
    elif event.key == "n":
        # goes to the next "day"
        day += 1
        plot_obs(filter_files, day=day, lb=lb, rb=rb, order=order, resolution=resolution, npairs=npairs,
                 para_range=None, norm_method="norm")
    elif event.key == "h":
        print("\nPress 'a' for right boundary, 'd' for left boundary, 'n' for the next day, 'w' to write to a file, "
              "or the 'ESC' or 'q' keys to close the figure.\n")
    else:
        print("\nPress 'a' for right boundary, 'd' for left boundary, 'n' for the next day, 'w' to write to a file, "

This functionality is still a work in progress, but the user can do the following:

  • Set the left and right boundaries

  • Move to the next “day” (next set of observations)

  • Close the plot

  • Display the options available

_images/min_program_demo.png

The ability to write the KvW value and error to a file will be added at a later date.

TESS Database Search/Download

Searching the TESS database allows for even more data collection than what is available through telescope time. TESS is a great resource because if a user’s target object is in the database, there are most likely a couple of months’ worth of data, as TESS looks at the same spot in the sky for 27 days straight, collecting data without stopping.

Searching TESS

Given an object’s name, the program will find the available sector numbers (each sector corresponds roughly to the month in which the data were taken). If there are no sectors available for a particular star, then the program will ask the user for another star name.

    while True:
        try:
            system_name = input("Enter in the TIC-ID given in SIMBAD (TIC 468293391) or the word 'Close' "
                                "to close the program: ")
            sector_table = Tesscut.get_sectors(objectname=system_name)
            break
        except astroquery.exceptions.ResolverError:
            print("\nThe TIC number you entered is invalid or there is no data for this given system.\n")

TESS ccd Information

TESS has released information regarding its CCDs (there are four on board the satellite), and this is compiled into a text file called tess_ccd_info.txt for reference. The program determines the gain given the sector’s camera and CCD values and, since the values repeat, only takes every 4th value.

    # slice = dc[2]  # could be used in the future

    sector_camera = []
    sector_ccd = []
    for count, val in enumerate(sector_table["camera"]):
        sector_camera.append(val)
        sector_ccd.append(sector_table["ccd"][count])

    # create a tuple form of the sector and TESS camera/ccd data
    list_gain = []
    a = list(zip(tess_camera, tess_ccd))  # tess cam and ccd's
    b = list(zip(sector_camera, sector_ccd))  # sect cam and ccd's

    # compare the TESS and sector information
    for _, sect in enumerate(b):
        for y, tess in enumerate(a):
            if sect == tess:
                list_gain.append(gain[y])

    # splits up list_gain into lists by every 1, 2, 3, or 4th value since there are 4 slices for each camera and ccd
    A = list_gain[::4]
    B = list_gain[1::4]
    C = list_gain[2::4]
    D = list_gain[3::4]

Downloading

Starting the download after finding sector numbers is as simple as telling the program to download all the sectors for a first-time run, or to download a specific sector for a new data release.

    # sector_table.add_columns(["Gain"])
    # prints off the sector table to let the user know what sectors TESS has observed the object
    while True:
        print("\n The read noise for the cameras is between 7-11 electrons/pixel.")
        print(sector_table)
        print("\nDo you want to download " + "\033[1m" + "\033[93m" + "ALL" + "\033[00m" +
              " the TESS data?")
        download_ans = input("Type 'Yes' to download all Sector data or type 'No' to specify a sector that you want. "
                             "Or type 'Close' to leave: ")
        # prints the word 'ALL' in bold '\033[1m' and in yellow '\033[93m' must return to normal with '\033[0m'
        if download_ans.lower() == "yes":
            print("\nWhen TESS data starts the initial download, it downloads, essentially, a big ZIP file with "
                  "all the individual images inside. Below, please enter the entire file path.")
            print("Example output file path: 'C:\\folder1\\TESS_data\n'")

            for i in sector_table["sector"]:
                download(system_name, i)
                print("\nFinished downloading Sector " + str(i))
        elif download_ans.lower() == "no":
            sector_num = int(input("Which Sector would you like to download: "))
            download(system_name, sector_num)
        elif download_ans.lower() == 'close':
            print("The program will now exit.\n")
            break

For either choice, the program calls the download function. This function actively retrieves the data based on system name and sector number.



def download(system_name, i):
    """
    Download the sector data

    Parameters
    ----------
    system_name - name of the system to download sector data for
    i - sector number

    Returns
    -------
    None
    """
    while True:
        download_path = input("Please enter a file pathway where you want the sector ZIP file to go: ")
        if exists(download_path):
            break
        else:
            print("\nThe file pathway you entered does not exist. Please try again.\n")
    # downloads the pixel file data that can then be analyzed with AIJ
    print("\n\nStarting download of Sector " + str(i))
    manifest = Tesscut.download_cutouts(objectname=system_name, size=[30, 30] * u.arcmin, sector=i,
                                        path=download_path)
    tCut(manifest, download_path)

The default size is the maximum allowed by TESS, which is a 30x30 arcmin box. At this time there is no way to change this via an input, but if users want this choice, it can be added at a later date.

TESSCut

At the line tCut(manifest, download_path), the program calls a new file named tesscut.py. The reason for this file is to “unzip” the downloaded file from TESS, which outputs a FITS file that has all the images for a given sector inside of it.

The extraction process is handled by this file, which looks at the metadata of each extracted image to gather the mid-exposure time, in BJD_TDB, at which that image was taken:

        bjd0 = 2457000.
        bjd1 = imagedata[i][0]
        bjd = bjd0 + bjd1

BJD to HJD

Converting from BJD_TDB to HJD is handled by the following function:

def bary_to_helio(ra, dec, bjd, obs_name):
    # the barycentric time, which is on the TDB scale
    bary = Time(bjd, scale='tdb', format='jd')
    obs = EarthLocation.of_site(obs_name)
    star = SkyCoord(ra, dec, unit=(u.hour, u.deg))
    # remove the barycentric light travel time correction, iterating once to refine the guess
    ltt = bary.light_travel_time(star, 'barycentric', location=obs)
    guess = bary - ltt
    delta = (guess + guess.light_travel_time(star, 'barycentric', obs)).jd - bary.jd
    guess -= delta * u.d

    # apply the heliocentric correction and return the time on the UTC scale
    ltt = guess.light_travel_time(star, 'heliocentric', obs)
    return guess.utc + ltt

This takes an RA, DEC, BJD_TDB, and location as inputs and finds the light-travel-time-corrected HJD. Currently the light travel time is not fully correct, as it uses Greenwich, England as the location when it should be using the TESS satellite’s location. This will hopefully be added at a later date, but it is not a priority as the effect will be minimal.

Header Information

Lastly, there are a few header values that are added to each image, as shown by the following lines:

            outhdr = outlist.header
            outhdr['LST_UPDT'] = file_time
            outhdr['BJD_TDB'] = bjd
            outhdr['HJD'] = hjd.value
            outhdr['COMMENT'] = tess_ffi

The LST_UPDT value records when the file was edited by this program, and the COMMENT holds some information about the image, such as the date the image was taken, the sector number, etc.

AIJ Comparison Star Selector

Catalog Finder

The first thing this program does is search the Vizier APASS catalog. Given the RA and DEC of a star, the program does the following:

    """
    Processes the data from the Vizier query

    :param vizier_result: Table from Vizier
    :return: Panda dataframe
    """
    # converts the table result to a list format for putting values into lists
    table_list = []
    for i in vizier_result:

This looks at a box of size 30 arcmin, gathers all stars with known APASS magnitudes, and compiles them into a table. The first three inputs into the result variable are (a sketch of the query follows this list):

  • columns — defines what to gather from the catalog database

  • row_limit — set to -1 to have no row limit (i.e. gather as many objects as possible)

  • column_filters — only gather data on the stars with Johnson B and V magnitudes less than 14
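A minimal sketch of such a query with astroquery follows (the exact column names, filters, and catalog ID used by the package may differ, and the coordinates are hypothetical):

from astroquery.vizier import Vizier
from astropy.coordinates import SkyCoord
import astropy.units as u

# APASS DR9 is catalog II/336 on Vizier
vizier = Vizier(columns=["RAJ2000", "DEJ2000", "Bmag", "e_Bmag", "Vmag", "e_Vmag"],
                column_filters={"Bmag": "<14", "Vmag": "<14"})
vizier.ROW_LIMIT = -1  # no row limit, gather as many objects as possible

coord = SkyCoord(ra="00:28:00", dec="+78:45:00", unit=(u.hour, u.deg))
result = vizier.query_region(coord, width=30 * u.arcmin, catalog="II/336/apass9")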

Cousins R

Note

Utilizes just-in-time compilation through numba for the calculations function.

After gathering all the comparison stars, the program goes on to calculate the Cousins R band magnitude for each and every comparison star by calling the calculations function:

    fits_file = input("Enter file pathway to one of your science image files for creating an overlay or "
                      "comparison stars: ")

    # get the image data for plotting purposes
    header_data_unit_list = fits.open(fits_file)
    image = header_data_unit_list[0].data
    header = header_data_unit_list[0].header

    # set variables to lists
    index_num = list(df[0])
    ra_catalog = list(df[1])
    dec_catalog = list(df[2])

    # convert the lists to degrees for plotting purposes
    ra_cat_new = (np.array(splitter(ra_catalog)) * 15) * u.deg
    dec_cat_new = np.array(splitter(dec_catalog)) * u.deg

    # text for the caption below the graph
    txt = "Number represents index value given in the final output catalog file."

    # Calculate the zscale interval for the image
    zscale = ZScaleInterval()

    # Calculate vmin and vmax for the image display
    vmin, vmax = zscale.get_limits(image)

    # plot the image and the overlays
    wcs = WCS(header)
    fig = plt.figure(figsize=(12, 8))
    fig.text(.5, 0.02, txt, ha='center')
    ax = plt.subplot(projection=wcs)
    plt.imshow(image, origin='lower', cmap='cividis', aspect='equal', vmin=vmin, vmax=vmax)
    plt.xlabel('RA')
    plt.ylabel('Dec')

    overlay = ax.get_coords_overlay('icrs')

The final equation used comes from this paper, rearranging equation 2 to solve for the Cousins R variable. The error for val is given by the variable root, which simply adds the individual errors in quadrature.

	RA	DEC	BMag	e_BMag	VMag	e_VMag	Rc	e_Rc
0	0:30:25.656	78:45:18.119	13.63	0.17	11.71	0.06	10.74	0.15
1	0:25:42.589	78:44:33.436	13.73	0.15	12.3	0.05	11.55	0.12
2	0:23:39.389	78:46:55.326	12.84	0.42	12.14	0.36	11.72	0.55
3	0:27:1.938	78:47:5.212	13.31	0.14	11.99	0.05	11.31	0.12
4	0:28:58.742	78:47:55.597	13.61	0.16	12.08	0.05	11.28	0.13
5	0:28:22.105	78:48:11.434	10.92	0.2	10.43	0.06	10.10	0.14
6	0:26:11.727	78:50:23.521	12.2	0.14	11.29	0.05	10.77	0.12
7	0:24:45.491	78:53:35.117	11.49	0.12	10.25	0.05	9.67	0.10
8	0:23:20.030	78:49:20.363	12.6	0.17	12.22	0.07	11.99	0.14

Gaia

The cousins_r function also searches the Gaia catalog by calling the gaia.py file, specifically the tess_mag function. The function takes numerous Gaia magnitude values and calculates the TESS magnitude from published Gaia-to-TESS relations:

        G = g.phot_g_mean_mag[:4].value
        G_flux = g.phot_g_mean_flux_over_error[:4]
        BP = g.phot_bp_mean_mag[:4].value
        BP_flux = g.phot_bp_mean_flux_over_error[:4]
        RP = g.phot_rp_mean_mag[:4].value
        RP_flux = g.phot_rp_mean_flux_over_error[:4]

        if len(BP) == 0 or len(RP) == 0:
            # this equation is used if there are no BP or RP magnitudes
            if len(G) == 0 or len(G_flux) == 0:
                T_list.append(99.999)
                T_err_list.append(99.999)
            else:
                T = G - 0.403
                G_err = (2.5 / mt.log(10)) * (G_flux[0] ** -1)
                T_list.append(T[0])
                T_err_list.append(G_err)
        else:
            # calculate the TESS magnitude (T) and corresponding error (T_err)
            T = G - 0.00522555 * (BP - RP) ** 3 + 0.0891337 * (BP - RP) ** 2 - 0.633923 * (BP - RP) + 0.0324473
            T = T[0]

            # error propagation formulae
            G_err = (2.5 / mt.log(10)) * (G_flux[0] ** -1)
            BP_err = (2.5 / mt.log(10)) * (BP_flux[0] ** -1)
            RP_err = (2.5 / mt.log(10)) * (RP_flux[0] ** -1)

            Bp_Rp_err = mt.sqrt(BP_err ** 2 + RP_err ** 2)
            T_err = mt.sqrt(G_err ** 2 + (Bp_Rp_err * 3) ** 2)

            # append to respective lists with only 2 decimal places, instead of ~15
            T_list.append(float(format(T, ".2f")))
            T_err_list.append(float(format(T_err, ".3f")))
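For reference, the per-band magnitude errors in the snippet come from the standard conversion of Gaia’s flux-over-error ratio into a magnitude error,

\[ \sigma_m = \frac{2.5}{\ln 10} \, \frac{\sigma_F}{F} \]

which is exactly what the (2.5 / mt.log(10)) * (flux_over_error ** -1) terms compute.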

If the value of the TESS magnitude for a given comparison star is NaN, then the value and its error are set to 99.999 to effectively guarantee that the star will not be used later.

Creating RADEC Files

The creation of RADEC files is carried out by the create_radec function, which uses the Astro ImageJ (AIJ) convention for formatting these files:

    :param dec: Declination of the object
    :param pipeline: Boolean for if pipeline is being used
    :param folder_path: Folder pathway to where files get saved
    :param obj_name: Object name for pipeline
    :return: Text file pathway, RA, and DEC
    """
    if not pipeline:
        ra_input, dec_input = get_user_inputs()
    else:
        ra_input, dec_input = ra, dec

    ra_input2 = splitter([ra_input])
    dec_input2 = splitter([dec_input])

    result = query_vizier(ra_input2[0], dec_input2[0])
    df = process_data(result)

    if not pipeline:
        text_file = input("Enter a text file pathway and name for the output comparisons "
                          "(ex: C:\\folder1\\APASS_254037_catalog.txt): ")
    else:
        text_file = folder_path + "\\APASS_" + obj_name + "_catalog.txt"

    save_to_file(df, text_file)

    return text_file, ra_input2[0], dec_input2[0]


def create_header(ra, dec):
    """
    Creates the header string for the RADEC file.

    :param ra: Right ascension of the object of interest
    :param dec: Declination of the object of interest

    :return: The header string for the RADEC file
    """
    header = "#RA in decimal or sexagesimal HOURS\n" \
             "#Dec in decimal or sexagesimal DEGREES\n" \
             "#Ref Star=0,1,missing (0=target star, 1=ref star, missing->first ap=target, others=ref)\n" \
             "#Centroid=0,1,missing (0=do not centroid, 1=centroid, missing=centroid)\n" \
             "#Apparent Magnitude or missing (value = apparent magnitude, or value > 99 or missing = no mag info)\n" \
             "#Add one comma separated line per aperture in the following format:\n"
    header += "#RA, Dec, Ref Star, Centroid, Magnitude\n"
    header += str(conversion([ra])[0]) + ", " + str(conversion([dec])[0]) + ", 0, 1, 99.999\n"

    return header


def create_lines(ra_list, dec_list, mag_list, ra, dec, filt):
    """
    Creates the data lines string for the RADEC file.

    :param ra_list: The list of right ascensions
    :param dec_list: The list of declinations
    :param mag_list: The list of magnitudes
    :param ra: Right ascension of the object of interest
    :param dec: Declination of the object of interest
    :param filt: The filter for which the RADEC file is being created

    :return: The data lines string for the RADEC file
    """
    lines = ""
    ra_decimal = np.array(splitter(ra_list))
    dec_decimal = np.array(splitter(dec_list))

    for count, val in enumerate(ra_list):
        next_ra = float(ra_decimal[count])
        next_dec = float(dec_decimal[count])

        # Check where the RA and DEC given by the user at the beginning is in the file to make sure there is no
        # duplication
        angle = angle_dist(float(ra), float(dec), next_ra, next_dec)
        if angle:
            lines += str(val) + ", " + str(dec_list[count]) + ", " + "1, 1, " + str(mag_list[count]) + "\n"

The function creates four RADEC files, one for each main filter used by BSUO (Johnson B, Johnson V, and Cousins R) plus TESS T, and writes them to individual files.

Overlay

Displaying where all the comparison stars are located is optional. The overlay function takes the lists of RA and DEC of the comparison stars and overlays their locations onto an image that the user enters. The program plots circles around the stars and numbers them underneath those circles.

        with open(outputfile + ".radec", "w") as file:
            file.write(output)

        file_list.append(outputfile + ".radec")

    print("\nFinished writing RADEC files for Johnson B, Johnson V, and Cousins R, and T.\n")

    return file_list


def overlay(df, tar_ra, tar_dec):
    """
    Creates an overlay of a science image with APASS objects numbered as seen in the catalog file that
    was saved previously

    :param tar_dec: target declination
    :param tar_ra: target right ascension
    :param df: input catalog DataFrame
_images/overlay_example.png

BSUO or SARA/TESS Night Filters

Gather Data

Astro ImageJ (AIJ) produces .dat files that contain magnitude and flux data for every data set analyzed. The purpose of this code, the Night_Filters.py program, is to take all of the data files and combine them into a single file per filter.

The program first checks how many nights the user will be using, and then gathers each file pathway from a for loop inside the get_nights_AIJ function.

    total_amag_err = []
    total_flux = []
    total_flux_err = []

    for i in range(night):
        df = file_path(i)
        if len(df.columns) == 7:
            # set parameters to lists from the file by the column header
            hjd = []
            bjd = []
            amag = []
            amag_err = []
            flux = []
            flux_err = []
            try:
                if num == 0:
                    hjd = list(df["HJD"])
                elif num == 1:
                    bjd = list(df["BJD_TDB"])
                    hjd = list(df["HJD"])
                amag = list(df["Source_AMag_T1"])
                amag_err = list(df["Source_AMag_Err_T1"])
                flux = list(df["rel_flux_T1"])
                flux_err = list(df["rel_flux_err_T1"])
            except KeyError:
                print("\nThe file you entered does not have the columns of HJD, Source_AMag_T1, or Source_AMag_Err_T1. "
                      "Please re-enter the file path and make sure its the correct file.\n")
                main(0)

            if num == 0:
                total_hjd.append(hjd)
            elif num == 1:
                total_bjd.append(bjd)
                total_hjd.append(hjd)
                new_bjd = [item for elem in total_bjd for item in elem]

            total_amag.append(amag)
            total_amag_err.append(amag_err)
            total_flux.append(flux)
            total_flux_err.append(flux_err)

            # converts the Dataframe embedded lists into a normal flat list
            new_hjd = [item for elem in total_hjd for item in elem]
            new_amag = [item for elem in total_amag for item in elem]
            new_amag_err = [item for elem in total_amag_err for item in elem]
            new_flux = [item for elem in total_flux for item in elem]
            new_flux_err = [item for elem in total_flux_err for item in elem]

            data_amount = 2
        elif len(df.columns) == 5:
            # set parameters to lists from the file by the column header
            hjd = []
            amag = []
            amag_err = []
            try:
                if num == 0:
                    hjd = list(df["HJD"])
                elif num == 1:
                    bjd = list(df["BJD_TDB"])
                    hjd = list(df["HJD"])
                amag = list(df["Source_AMag_T1"])
                amag_err = list(df["Source_AMag_Err_T1"])
            except KeyError:
                print("\nThe file you entered does not have the columns of HJD, Source_AMag_T1, or Source_AMag_Err_T1. "
                      "Please re-enter the file path and make sure its the correct file.\n")
                c = 1
                main(c)

            if num == 0:
                total_hjd.append(hjd)
            elif num == 1:
                total_bjd.append(bjd)
                total_hjd.append(hjd)

            total_amag.append(amag)
            total_amag_err.append(amag_err)

The program checks whether the user has both magnitude and flux data or just magnitude data by checking if there are five or seven columns in the .dat files. Once that has been determined, the program writes those values into a text file:

            # converts the Dataframe embedded lists into a normal flat list
            new_bjd = [item for elem in total_bjd for item in elem]
            new_hjd = [item for elem in total_hjd for item in elem]
            new_amag = [item for elem in total_amag for item in elem]
            new_amag_err = [item for elem in total_amag_err for item in elem]
            data_amount = 1
        else:
            print("\nThe file you entered does not have the correct amount of columns.\n")
            main(0)

    if data_amount == 1:
        if num == 0:
            data2 = pd.DataFrame({
                "HJD": new_hjd,
                "AMag": new_amag,
                "AMag Error": new_amag_err
            })
        elif num == 1:
            data2 = pd.DataFrame({
                "BJD_TDB": new_bjd,
                "HJD": new_hjd,
                "AMag": new_amag,
                "AMag Error": new_amag_err
            })

        print("")
        output = input("What is the file output name (WITHOUT any file extension): ")

        # output the text files with a designation of magnitude or flux
        data2.to_csv(output + "_magnitudes.txt", index=False, header=False, sep="\t")
        print("")
        print("Fished saving the file to the same location as this program.\n\n")
    elif data_amount == 2:

An example output magnitude file looks like this:

2458379.568521	11.151862	0.00268
2458379.57057	11.156097	0.002681
2458379.574054	11.154805999999999	0.003379
2458379.575709	11.171213999999999	0.003407
2458379.577341	11.175478	0.0034130000000000002
2458379.578996	11.18955	0.0034289999999999998
2458379.5806279997	11.193944	0.0034170000000000003
2458379.58226	11.204361	0.00344
2458379.583904	11.207133	0.0034270000000000004
2458379.585524	11.217825	0.003436

Note

The same thing happens for TESS data (the get_nights_TESS.py function), except that at the time of writing this program (1-30-2023), BSUO only had the ability to gather relative flux data; now, with the gaia.py program, this needs to be updated to gather magnitude data as well.

O-C Plotting

Calculating Observed minus Calculated (O-C) values is done by the OC_plot.py program. Given a period and a first primary ToM, the program calculates the O-C values and respective errors. The program first asks the user whether they are calculating BSUO/SARA data, TESS data, or plotting all of their data.
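At its core, for a measured ToM T_obs the program computes (mirroring the calculate_oc function shown below):

\[ (O - C) = T_{\rm obs} - (T_0 + E \cdot P) \]

where T_0 is the first primary ToM (the epoch), P is the period, and E is the eclipse number, with whole numbers marking primary eclipses and half-integers marking secondary eclipses.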

BSUO/SARA

        if num == 1:
            first_time = input("Do you already have an Epoch value 'Yes', 'No', or 'Close' to close the program: ")
            if first_time.lower() == "no":
                T0 = 0
                To_err = 0
                period = float(input("Please enter the period for your system: "))
            elif first_time.lower() == "yes":
                T0, To_err, period = arguments()
                print("Example file pathway: C:\\folder1\\folder2\\[file name]")
            else:
                exit()
            while True:
                inB = input("Please enter your times of minimum file pathway for the Johnson B filter: ")
                inV = input("Please enter your times of minimum file pathway for the Johnson V filter: ")
                inR = input("Please enter your times of minimum file pathway for the Cousins R filter: ")
                try:
                    db = pd.read_csv(inB, header=None, delim_whitespace=True)
                    dv = pd.read_csv(inV, header=None, delim_whitespace=True)
                    dr = pd.read_csv(inR, header=None, delim_whitespace=True)
                    break
                except FileNotFoundError:
                    print("You have entered in an incorrect file or file pathway. Please try again.\n")
            _ = BSUO(T0, To_err, period, db, dv, dr)

As seen above, the program asks the user whether they already know their T0 and To_err; if they do, the program calls the arguments function:

def arguments():
    """
    This function asks the user for the T0

    :return: T0, To_err, and period float values
    """
    while True:
        try:
            T0 = float(input("Please enter your Epoch number (ex. '2457143.761819') : "))  # First primary ToM
            To_err = float(input("Please enter the Epoch error (ex. 0.0002803). : "))  # error associated with the T0
            period = float(input("Please enter the period of your system (ex. 0.31297): "))  # period of a system
            break
        except ValueError:
            print("You have entered an invalid value. Please only enter float values and please try again.\n")
    return T0, To_err, period

For each index value in each filter, the ToMs are averaged together. This gives the user the most likely time for that ToM:

    for count, val in enumerate(strict_B):
        # calculate ToM and its error
        minimum = (val + strict_V[count] + strict_R[count]) / 3
        err = sqrt(strict_B_err[count] ** 2 + strict_V_err[count] ** 2 + strict_R_err[count] ** 2) / 3

Calculations

After that, the program calls the BSUO function, which then calls the calculate_oc function that does the real calculations:

@jit(forceobj=True)
def calculate_oc(m, err, T0, T0_err, p):
    """
    Calculates O-C values and errors and finds the eclipse number for primary and secondary eclipses
    :param m: ToM
    :param err: ToM error
    :param T0: first ToM
    :param T0_err: error for the T0
    :param p: period of the system

    :return: e (eclipse number), OC (O-C value), OC_err (corresponding O-C error)
    """
    if T0 == 0:
        T0 = m
        T0_err = err
    # get the exact E value
    E_act = (m - T0) / p
    # estimate the primary or secondary eclipse by rounding to the nearest 0.5
    if E_act <= 0:
        # using floor here would round down to a lower value instead of up, leaving values off by 0.5
        e = ceil((E_act * 2) + 0.5) / 2
    else:
        e = floor((E_act * 2) + 0.5) / 2

    # calculate the calculated ToM and find the O-C value
    T_calc = T0 + (e * p)
    OC = "%.5f" % (m - T_calc)

    # determine the error of the O-C
    OC_err = "%.5f" % sqrt(T0_err ** 2 + err ** 2)

The @jit decorator on the very first line of the above code utilizes the Numba package. The lines that follow calculate the eclipse number: if E_act is positive, the floor function is used, and if E_act is negative, the ceiling function is used. The OC and OC_err values only keep the first five decimal places; otherwise there would be roughly 10 decimal places (unrealistic accuracy).
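To make the rounding concrete, here is a minimal worked example using the illustrative epoch and period from the prompts above (hypothetical numbers, not real results):

from math import floor

# hypothetical inputs taken from the example prompts in arguments()
T0, p = 2457143.761819, 0.31297
m = 2457161.75726  # a measured ToM

E_act = (m - T0) / p              # ~57.499, the exact eclipse number
e = floor((E_act * 2) + 0.5) / 2  # rounds to the nearest 0.5 -> 57.5, a secondary eclipse
OC = m - (T0 + e * p)             # -> about -0.00033 for these inputs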

Minimums	Eclipse_#	O-C	O-C_Error
2457143.76182	0.0	0.00000	0.00014
2457143.91854	0.5	-0.00008	0.00028
2457161.75726	57.5	-0.00088	0.00022
2457161.91375	58.0	-0.00088	0.00019
2458842.68058	5428.5	-0.03889	0.00051
2458842.83709	5429.0	-0.03885	0.00083
2458842.99348	5429.5	-0.03895	0.00050
2458843.14992	5430.0	-0.03900	0.00096
2458843.30633	5430.5	-0.03907	0.00046

TESS

The only difference between BSUO/SARA data and TESS data is that TESS assumes only a single filter, so there is no ToM averaging to be done.

Plotting All Data

Note

The format for all data files must follow the format as seen in this example_OC_table.txt.

Once all calculations have been done, through OC_plot.py or elsewhere, the program allows the user to plot all of their data together. The program will first create a table that turns the input file into a .tex file formatted automatically as a paper-ready table.

    table_header = "\\renewcommand{\\baselinestretch}{1.00} \small\\normalsize"
    table_header += '\\begin{center}\n' + '\\begin{longtable}{ccc}\n'
    table_header += '$BJD_{\\rm TDB}$ & ' + 'E & ' + 'O-C \\\ \n'
    table_header += '\\hline\n' + '\\endfirsthead\n'
    table_header += '\\multicolumn{3}{c}\n'
    table_header += '{\\tablename\ \\thetable\ -- \\textit{Continued from previous page}} \\\ \n'
    table_header += '$BJD_{\\rm TDB}$ & E & O-C \\\ \n'
    table_header += '\\hline\n' + '\\endhead\n' + '\\hline\n'
    table_header += '\\multicolumn{3}{c}{\\textit{Continued on next page}} \\\ \n'
    table_header += '\\endfoot\n' + '\\endlastfoot\n'

    minimum_lines = []
    for i in range(len(minimum)):
        line = str("%.5f" % minimum[i]) + ' & ' + str(e[i]) + ' & $' + str("%.5f" % o_c[i]) + ' \pm ' + str(
            "%.5f" % o_c_err[i]) + '$ ' + "\\\ \n"
        minimum_lines.append(line)

    output = table_header
    for count, line in enumerate(minimum_lines):
        output += line

    output += '\\hline\n' + '\\caption{NSVS 896797 O-C. The first column is the \n' \
                            '$BJD_{TDB}$ and column 2 is the epoch number with a whole number \n' \
                            'being a primary eclipse and a half integer value being a secondary \n' \
                            'eclipse. Column 3 is the $(O-C)$ value with the corresponding \n' \
                            '1$\\sigma$ error.}\n' \
              + '\\label{tbl:896797_OC}\n' + '\\end{longtable}\n' + '\\end{center}\n'
    output += '\\renewcommand{\\baselinestretch}{1.66} \\small\\normalsize'
The resulting .tex file looks like the following:

\renewcommand{\baselinestretch}{1.00} \small\normalsize\begin{center}
\begin{longtable}{ccc}
$BJD_{\rm TDB}$ & E & O-C \\ 
\hline
\endfirsthead
\multicolumn{3}{c}
{\tablename\ \thetable\ -- \textit{Continued from previous page}} \\ 
$BJD_{\rm TDB}$ & E & O-C \\ 
\hline
\endhead
\hline
\multicolumn{3}{c}{\textit{Continued on next page}} \\ 
\endfoot
\endlastfoot
2457143.76135 & 0.0 & $-0.00001 \pm 0.00020$ \\ 
2457143.91771 & 0.5 & $-0.00014 \pm 0.00026$ \\ 
2457161.75645 & 57.5 & $-0.00051 \pm 0.00020$ \\ 
2457161.91298 & 58.0 & $-0.00046 \pm 0.00019$ \\ 
\hline
\caption{NSVS 896797 O-C. The first column is the 
$BJD_{TDB}$ and column 2 is the epoch number with a whole number 
being a primary eclipse and a half integer value being a secondary 
eclipse. Column 3 is the $(O-C)$ value with the corresponding 
1$\sigma$ error.}
\label{tbl:896797_OC}
\end{longtable}
\end{center}
\renewcommand{\baselinestretch}{1.66} \small\normalsize

Weights for the fits are calculated in a single line from the ToM errors.
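
A minimal sketch of standard inverse-variance weighting (an assumption; the program's actual line may differ):

    import numpy as np

    # hypothetical stand-in: O-C errors as shown in the table above
    o_c_err = np.array([0.00014, 0.00028, 0.00022, 0.00019])
    weights = 1.0 / o_c_err ** 2  # inverse-variance weights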


The program separates the primary and secondary ToM to help show potential trends, and the plotting occurs with these lines:


    fontsize = 14
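
A minimal sketch of such a split plot, assuming matplotlib and using sample values from the O-C table above (hypothetical stand-ins for the program's arrays):

    import numpy as np
    import matplotlib.pyplot as plt

    fontsize = 14
    # sample values from the O-C table above
    e = np.array([0.0, 0.5, 57.5, 58.0])
    o_c = np.array([0.00000, -0.00008, -0.00088, -0.00088])
    o_c_err = np.array([0.00014, 0.00028, 0.00022, 0.00019])

    primary = e % 1 == 0  # whole-number epochs are primary eclipses
    secondary = ~primary  # half-integer epochs are secondary eclipses

    fig, ax = plt.subplots()
    ax.errorbar(e[primary], o_c[primary], yerr=o_c_err[primary], fmt='o', label='Primary')
    ax.errorbar(e[secondary], o_c[secondary], yerr=o_c_err[secondary], fmt='s', label='Secondary')
    ax.set_xlabel('Epoch (E)', fontsize=fontsize)
    ax.set_ylabel('O-C (days)', fontsize=fontsize)
    ax.legend()
    plt.show()
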
_images/O_C_ex.png

When plotting the quadratic and linear fits, the program also produces a text file that shows the least-squares fit for both. This file is also made to be used in a paper and is formatted in LaTeX automatically.

\documentclass{report}
\usepackage{booktabs}
\begin{document}
\begin{center}
\begin{tabular}{lclc}
\toprule
\textbf{Dep. Variable:}    &        y         & \textbf{  R-squared:         } &     0.982   \\
\textbf{Model:}            &       OLS        & \textbf{  Adj. R-squared:    } &     0.982   \\
\textbf{Method:}           &  Least Squares   & \textbf{  F-statistic:       } & 4.945e+04   \\
\textbf{Date:}             & Tue, 28 Mar 2023 & \textbf{  Prob (F-statistic):} &     0.00    \\
\textbf{Time:}             &     19:10:00     & \textbf{  Log-Likelihood:    } &    5410.8   \\
\textbf{No. Observations:} &         903      & \textbf{  AIC:               } & -1.082e+04  \\
\textbf{Df Residuals:}     &         901      & \textbf{  BIC:               } & -1.081e+04  \\
\textbf{Df Model:}         &           1      & \textbf{                     } &             \\
\textbf{Covariance Type:}  &    nonrobust     & \textbf{                     } &             \\
\bottomrule
\end{tabular}
\begin{tabular}{lcccccc}
                   & \textbf{coef} & \textbf{std err} & \textbf{t} & \textbf{P$> |$t$|$} & \textbf{[0.025} & \textbf{0.975]}  \\
\midrule
\textbf{Intercept} &       0.0001  &        0.000     &     0.895  &         0.371        &       -0.000    &        0.000     \\
\textbf{x}         &   -3.991e-06  &     1.79e-08     &  -222.382  &         0.000        &    -4.03e-06    &    -3.96e-06     \\
\bottomrule
\end{tabular}
\begin{tabular}{lclc}
\textbf{Omnibus:}       & 96.524 & \textbf{  Durbin-Watson:     } &     1.917  \\
\textbf{Prob(Omnibus):} &  0.000 & \textbf{  Jarque-Bera (JB):  } &   482.165  \\
\textbf{Skew:}          & -0.342 & \textbf{  Prob(JB):          } & 1.99e-105  \\
\textbf{Kurtosis:}      &  6.514 & \textbf{  Cond. No.          } &  4.27e+04  \\
\bottomrule
\end{tabular}
%\caption{OLS Regression Results}
\end{center}
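
The layout of this file matches the LaTeX summary that statsmodels produces for an OLS fit; a minimal sketch of generating such a file (an assumption about the backend, with hypothetical stand-in data):

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # hypothetical stand-in data: epoch numbers (x) and O-C values (y)
    rng = np.random.default_rng(0)
    df = pd.DataFrame({"x": np.arange(900.0)})
    df["y"] = -4e-6 * df["x"] + rng.normal(0.0, 1e-4, len(df))

    linear = smf.ols("y ~ x", data=df).fit()               # linear fit
    quadratic = smf.ols("y ~ x + I(x**2)", data=df).fit()  # quadratic fit

    # write the LaTeX summary tables, similar to the file shown above
    with open("fit_summary.tex", "w") as f:
        f.write(linear.summary().as_latex())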

O’Connell Effect

Note

Based on this paper and originally created by Alec J. Neal, the program OConnell.py performs various statistical analyses of the O’Connell effect.

Data

The program first prompts the user to enter light curve data for 1-3 filters (Johnson B, Johnson V, and Cousins R), then calls the function multi_OConnell_total. There are options for one or two filters as well.

    if prompt == "3":
        print("\nPlease enter full file pathways for the following prompt.\n")
        while True:
            infile1 = input("File 1 name: ")
            infile2 = input("File 2 name: ")
            infile3 = input("File 3 name: ")
            if path.exists(infile1) and path.exists(infile2) and path.exists(infile3):
                break
            else:
                print("\nOne of the files you have entered does not exist, please try all three again.\n")
                continue
        hjd = float(input("What is the HJD: "))
        period = float(input("What is the period: "))
        outfile = input("What is the output file name and pathway with .pdf extension (i.e. C:\\folder1\\test.pdf): ")
        multi_OConnell_total([infile1, infile2, infile3], hjd, period, order=10, sims=1000,
                             sections=4, section_order=7, plot_only=False, save=True, outName=outfile)

Calculations

After the files have been entered, the program converts the magnitude data into flux data and finds the phase at which each data point occurs from the period:

    for band in range(bands):
        a, b = binning.polybinner(filter_files[band], Epoch, period, sections=sections,
                                       norm_factor='alt', section_order=section_order)[0][:2:]
        half = int(0.5 * resolution) + 1
        FT1 = FT.FT_plotlist(a, b, FT_order, resolution)
        FT2 = FT.FT_plotlist(a, -1 * b, FT_order, resolution)
        FTphase1, FTflux1 = FT1[0][:half:], FT1[1][:half:]
        FTphase2, FTflux2 = FT2[0][:half:], FT2[1][:half:]
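
The magnitude-to-flux conversion used here is the standard relation, as in this minimal standalone sketch (with hypothetical values):

    import numpy as np

    B_mag = np.array([12.10, 12.45, 12.30])  # hypothetical magnitudes
    B_flux = 10 ** (-0.4 * B_mag)            # standard magnitude-to-flux relation
    # the program additionally divides by a normalization factor from polybinner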

The program then plots the first half of the light curve flux on top of the second half and finds the difference between them.

        dIlist = []
        for phase in FTphase1:
            dIlist.append(dI_phi(b, phase, FT_order))
        FTflux1 = np.array(FTflux1) + (1 - band) * offset
        FTflux2 = np.array(FTflux2) + (1 - band) * offset
        flux.plot(FTphase1, FTflux1, linestyle=styles[band], color=colors[band])
        flux.plot(FTphase2, FTflux2, '-', color=colors[band])
        # flux.annotate('B',xy=(-0.1,min(FTflux1)))
        if filter_names is not None:
            if len(filter_names) == bands:
                flux.text(-0.12, FTflux1[0], filter_names[band], fontsize=18, rotation=0)
        dI.plot(FTphase1, dIlist, linestyle=styles[band], color=colors[band])
_images/OConnell_plot.png

Once this is plotted, the program finds the various statistical values and appends them to a list for each filter.

        for band in range(len(filter_files)):
            oc = OConnell_total(filter_files[band], Epoch, period, order, sims=sims,
                                sections=sections, section_order=section_order,
                                norm_factor=norm_factor, FT_order=order, FTres=FTres,
                                filterName='Filter ' + str(band + 1))
            a_all.append(oc[0][0])
            a_err_all.append(oc[0][1])
            b_all.append(oc[1][0])
            b_err_all.append(oc[1][1])
            dI_FT.append(oc[2][0])
            dI_FT_err.append(oc[2][1])
            dI_ave.append(oc[3][0])
            dI_ave_err.append(oc[3][1])
            OERs.append(oc[4][0])
            OERs_err.append(oc[4][1])
            LCAs.append(oc[5][0])
            LCAs_err.append(oc[5][1])
            a22s.append(oc[6][0])
            a22s_err.append(oc[6][1])
    # == FT coefficients ==
    a_MC_err = np.array(list(map(st.stdev, zip(*master_a))))
    b_MC_err = np.array(list(map(st.stdev, zip(*master_b))))
    a_total_err = (np.array(a_model_err) ** 2 + a_MC_err[:order + 1:] ** 2) ** 0.5
    b_total_err = (np.array(b_model_err) ** 2 + b_MC_err[:order + 1:] ** 2) ** 0.5

    a2_0125_a2 = a[2] * (0.125 - a[2])
    a2_0125_a2_err = sig_f(lambda x: x * (0.125 - x), a[2], a_total_err[2])

    # == OER ==
    OER = OConnell.OER_FT(a, b, order)
    OER_model_err = OConnell.OER_FT_error_fixed(a, b, a_model_err, b_model_err, order)
    OER_MC_err = st.stdev(OERlist)
    OER_total_err = (OER_model_err ** 2 + OER_MC_err ** 2) ** 0.5

    # == LCA ==
    LCA = OConnell.LCA_FT(a, b, order, 1024)
    LCA_model_err = OConnell.LCA_FT_error(a, b, a_model_err, b_model_err, order, 1024)[1]
    LCA_MC_err = st.stdev(LCAlist)
    LCA_total_err = (LCA_model_err ** 2 + LCA_MC_err ** 2) ** 0.5

    # == Delta_I ==
    Delta_I_025 = OConnell.Delta_I_fixed(b, order)
    Delta_I_025_model_err = OConnell.Delta_I_error_fixed(b_model_err, order)
    Delta_I_025_MC_err = st.stdev(dIlist)
    Delta_I_025_total_err = (Delta_I_025_model_err ** 2 + Delta_I_025_MC_err ** 2) ** 0.5

    # == Delta_I_ave ==
    DIave = OConnell.Delta_I_mean_obs(ob_phaselist, ob_fluxlist, ob_fluxerr, phase_range=0.05, weighted=False)
    Delta_I_ave = DIave[0]
    Delta_I_ave_err = DIave[1]
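
In each case above, the model error and the Monte Carlo error are combined in quadrature to give the total uncertainty:

    \sigma_{\rm total} = \sqrt{\sigma_{\rm model}^2 + \sigma_{\rm MC}^2}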

See vseq_updated.py for the actual calculations of all the values.

Output

Once the program has gone through all the filters, it creates a LaTeX-ready file for use in a paper, containing a table of all filters along with their respective statistical values.

\begin{table}[H]
\begin{center}
\begin{tabular}{c|ccc}
\hline\hline
Parameter & Filter 1& Filter 2& Filter 3\\ 
\hline
$a_1$ & $-0.01769\pm 0.00029$ & $-0.01497\pm 0.00024$ & $-0.01428\pm 0.0016$ \\ 
$a_2$ & $-0.14912\pm 0.00064$ & $-0.14238\pm 0.00068$ & $-0.1405\pm 0.00125$ \\ 
$a_4$ & $-0.02487\pm 0.0003$ & $-0.02372\pm 0.00033$ & $-0.02319\pm 0.00143$ \\ 
$a_2(0.125-a_2)$ & $-0.04088\pm 0.00027$ & $-0.03807\pm 0.00028$ & $-0.0373\pm 0.00051$ \\ 
$2b_1$ & $0.00598\pm 0.00046$ & $0.00481\pm 0.00055$ & $0.00356\pm 0.00314$ \\ 
$\Delta I_{\rm FT}$ & $0.00646\pm 0.00141$ & $0.00462\pm 0.00177$ & $0.00415\pm 0.00784$ \\ 
$\Delta I_{\rm ave}$ & $0.00911\pm 0.00049$ & $0.00969\pm 0.00047$ & $0.0058\pm 0.00148$ \\ 
OER & $1.0138\pm 0.00151$ & $1.01268\pm 0.00205$ & $1.00894\pm 0.01198$ \\ 
LCA & $0.00456\pm 0.00141$ & $0.00351\pm 0.00151$ & $0.00531\pm 0.00458$ \\ 
\hline
\end{tabular}
\caption{Fourier and O'Connell stuff (1000 sims)}
\label{tbl:OConnell}
\end{center}
\end{table}

Color Light Curve

Note

Originally created by Alec J. Neal and later updated for this package by Kyle Koeller.

This program is a little different: instead of a command line interface, color_light_curve.py uses the tkinter package to provide a GUI.

_images/gui_color_light.png
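
A minimal sketch of how such an input window can be built with tkinter (illustrative only; the field names follow the screenshot, not the program's actual source):

    import tkinter as tk

    root = tk.Tk()
    root.title("Color Light Curve")

    # entry fields mirroring the required inputs described below
    fields = ["B file", "V file", "Epoch", "Period"]
    entries = {}
    for row, name in enumerate(fields):
        tk.Label(root, text=name).grid(row=row, column=0, sticky="w")
        entry = tk.Entry(root, width=40)
        entry.grid(row=row, column=1)
        entries[name] = entry

    # the real program validates these values and then calls subtract_LC
    tk.Button(root, text="Run", command=root.quit).grid(row=len(fields), column=1)
    root.mainloop()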

Required Files

The B file (Johnson B filter), V file (Johnson V), Epoch (first primary ToM), and Period are required for the program to run. The Max. tolerance and Lower limit should not be changed unless the user knows what they do. Once these required items have been entered, the program calls the subtract_LC function.

Subtract Light Curve

The subtract_LC function finds the B-V and V-R values and their errors:

    :return: returns the B-V value and other assorted values
    """
    B_HJD, B_mag, B_magerr = io.importFile_pd(Bfile)[:3:]
    V_HJD, V_mag, V_magerr = io.importFile_pd(Vfile)[:3:]

    Bpoly = binning.polybinner(Bfile, Epoch, period, sections=2, section_order=8)
    Bphase = Bpoly[1][0][0][1]
    aB = Bpoly[0][0]
    bB = Bpoly[0][1]
    Bnorm = Bpoly[1][0][4]
    Vphase = list(calc.astro.convert.HJD_phase(V_HJD, period, Epoch))
    B_flux = np.array(10 ** (-0.4 * np.array(B_mag))) / Bnorm
    obs = len(V_HJD)
    tolerance = best_tol(B_HJD, V_HJD, period, lower_lim=lower_lim, max_tol=max_tol)
    before_after = occ2(B_HJD, V_HJD, period, tolerance=tolerance)
    befores = before_after[1]
    afters = before_after[2]
    i_before = before_after[3]
    i_after = before_after[4]
    mean_diff = np.mean(before_after[5])

    B_interp_flux = []
    for n in range(obs):
        if len(befores[n]) == 0 or len(afters[n]) == 0:
            B_interp_flux.append(FT.sumatphase(Vphase[n], 10, aB, bB))
        else:
            B_interp_flux.append(lin_interp(V_HJD[n], befores[n][-1], afters[n][0],
                                            B_flux[i_before[n][-1]], B_flux[i_after[n][0]]))

    B_interp_mag = -2.5 * np.log10(np.array(B_interp_flux) * Bnorm)
    # quad_range=0.075
    BVquadphase = []
    BVquadmag = []
    # Vquad=[]
    for n in range(len(Vphase)):
        if 0.25 - quad_range < Vphase[n] < 0.25 + quad_range or 0.75 - quad_range < Vphase[n] < 0.75 + quad_range:
            BVquadphase.append(Vphase[n])
            BVquadmag.append(B_interp_mag[n] - V_mag[n])
    quadcolor = mean_mag(BVquadmag)
    colorerr = st.stdev(BVquadmag, xbar=quadcolor)
    print(index, " ", quadcolor, '+/-', colorerr)

    B_minus_V = B_interp_mag - np.array(V_mag)
    BV_mean = mean_mag(B_minus_V)
    # print(B_minus_V)
    BV_err = st.stdev(B_minus_V, xbar=BV_mean)

    print('ave diff =', round(mean_diff * 100, 3), '% of period')
    aVphase, aV_mag, aB_interp_mag = plot.aliasing2(Vphase, V_mag, B_interp_mag)[:3:]
    aBphase, aB_mag = plot.aliasing2(Bphase, B_mag, B_mag)[:2:]
    aB_minus_V = plot.aliasing2(Vphase, B_minus_V, B_minus_V)[1]
    B_V = [B_minus_V, BV_mean, BV_err, aB_minus_V]
    B = [aBphase, aB_mag, aB_interp_mag]
    V = [aVphase, aV_mag]

    # print('T =', vseq.Flower.T.Teff(quadcolor - (0.641 / 3.1)))
    if index == "BV":
        temp = []
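
The lin_interp call above matches the standard two-point linear interpolation; a minimal sketch of such a helper (an assumption based only on the call signature):

    def lin_interp(x, x1, x2, y1, y2):
        # two-point linear interpolation evaluated at x
        return y1 + (y2 - y1) * (x - x1) / (x2 - x1)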

At line 141, the program calculates the effective temperature from each of these color index values using the Flower P. J. (1996) polynomial with the Torres (2010) update.
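
The fit is a polynomial in the color index; with the Torres (2010) coefficients a_i (not reproduced here), it takes the form:

    \log T_{\rm eff} = \sum_{i} a_i\,(B-V)^i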

Inside the GUI, the program tells the user the effective temperatures and also outputs those values to the command line. Flower created the polynomial fit specifically for B-V, but this program applies the same fit to V-R, which cannot be assumed to be correct. This may be updated with the correct polynomial at a later date.

Plotting

The program then plots the phased light curves of each entered filter in an upper panel and the color index values in a lower panel, as shown below:

_images/color_light_curve_ex.png

The program also produces a simple light curve plot for each filter, separate from the color index plot.

_images/light_curve_ex.png