Create metamers using models of the ventral stream and run experiments to validate them


foveated-metamers

This repo contains the code for a vision science experiment investigating how human perception changes across the visual field, using behavioral experiments and computational models inspired by the early stages of visual processing. We use these models to investigate what people cannot see, an approach with a long history in vision science. If we know what information people are insensitive to, we can discard it or randomize it, and the resulting image should appear unchanged from the original.

See the preprint or the VSS 2023 poster for scientific details. You may also be interested in the website we put together for browsing through the synthesized images, and the OSF for bulk downloading the images or the behavioral data. Finally, you may be interested in plenoptic , a software package for generating metamers (and more!) for your own models.

If you re-use some component of this project in an academic publication, see the citation section for how to credit us.


Usage

The data and code for this project are shared with the primary goal of enabling reproduction of the results presented in the associated paper. Novel analyses should be possible, but no guarantees.

To that end, we provide several entrypoints into the data for re-running the analysis, with a script to automate their download and proper arrangement.

As a note: snakemake commands create files. I recommend adding -n to any snakemake command when you run it for the first time. This will do a "dry run", so you can see what steps snakemake will take, without running anything.
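For example, snakemake -n -j 1 main_paper_figures will print the jobs required to create the main paper figures without creating anything.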

The following steps will walk you through downloading the fully-processed data and recreating the figures; read further on in this README for details:

  1. Clone this repo.

    • Because we use git submodules, you'll also need to run the following two lines:

      git submodule sync
      git submodule update --init --recursive
      
  2. Open config.yml and modify the DATA_DIR path to wherever you wish to download the data (see config.yml section for details on this file).

  3. Install the required software:

    • Install miniconda on your system for python 3.7.

    • Install mamba : conda install mamba -n base -c conda-forge

    • Navigate to this directory and run mamba env create -f environment.yml to install the environment.

    • Run conda activate metamers to activate the python environment.

    • Additionally, install inkscape , version 1.0.2 or greater. It seems that, with inkscape version 1.2.2, all images are embedded at full resolution, leading to a massive increase in file size; I'm not sure what change causes this. It works as intended with inkscape 1.1.

    • Check if you have rsync available (you probably do) and install it if you don't (probably best to do so via a package manager).

  4. Run python download_data.py synthesis_input mcmc_fits figure_input to download the data required to create the figures in the main paper (this is about 20GB).

  5. Run snakemake -k -j N reports/paper_figures/fig-XX.svg (where N is the number of cores to use in parallel) to recreate a given figure from the paper (note that the number must be 0-padded, i.e., fig-01.svg , not fig-1.svg ). These will end up in the reports/paper_figures/ directory. Note that they are svgs, a vector file format. If your default image viewer cannot open them, your browser can. They can be converted to pdfs using inkscape (e.g., inkscape fig-01.svg --export-filename=fig-01.pdf with inkscape 1.x) or Adobe Illustrator.

  6. If you wish to create all the figures from the main body of the text, run snakemake -k -j N main_paper_figures . If one job fails, this will continue to run the others (that's what the -k flag means).

If you wish to create the figures in the appendix as well:

  1. Download the additional data required: python download_data.py stimuli freeman2011_check_output (this is about 14GB).

  2. Run snakemake -k -j N reports/paper_figures/fig-AY-XX.svg (where XX must again be 0-padded, but Y is not) to create figure XX from appendix Y, or snakemake -k -j N appendix_figures to create all the figures from appendices 1 through 5.

  3. The figures in appendix 6 have been split off because they require an additional 24GB data set, so to create these:

    • Download the additional data: python download_data.py mcmc_compare .

    • Create a single figure with snakemake -k -j N reports/paper_figures/fig-A6-XX.svg or all of them with snakemake -k -j N appendix_figures_mcmc_compare .

Some notes about the above:

  1. The workflow for figure creation looks intimidating: because parallelization is easy, I split the process up into many small jobs. Therefore, there are ~100 jobs for each of main_paper_figures and appendix_figures . Don't worry! They generally don't take that much time.

  2. The complete workflow is very long (going back to preparing the images for metamer synthesis), and so sometimes snakemake can take a long time to determine what to run. This problem can get exacerbated if the file modification timestamps get thrown off, so that snakemake thinks it needs to re-create some of the existing files. To limit the search space and force snakemake to only consider figure-related rules, use the included reports/figure_rules.txt and the --allowed-rules flag: cat reports/figure_rules.txt | xargs snakemake -prk main_paper_figures --allowed-rules . You can pass any argument to snakemake , as long as the command ends with --allowed-rules .

  3. Several of the figures in the paper (e.g., figure 4) include example metamers or other large images. We link these images into the figure, rather than embed them, until the very end, in order to reduce file size. Embedding them requires inkscape and an attached display (so it cannot be run on e.g., a compute cluster). You can do all the steps except embedding the large images by appending _no_embed to the file or target name. So, you would create reports/paper_figures/fig-04_no_embed.svg rather than reports/paper_figures/fig-04.svg to create that single figure, or call snakemake -k -j N main_paper_figures_no_embed / snakemake -k -j N appendix_figures_no_embed to create all of the main paper / appendix figures without embedding.

    • This allows you to run everything except the embedding on one machine that may be more powerful but lack a display (such as a compute cluster), and then finish up on e.g., your laptop. However, the paths used to link images will almost certainly not work when moving to a different machine, so if you view fig-04_no_embed.svg , you will see empty red squares where the images should go. When embedding the images, we correct the paths, so this is not a problem.

    • It is possible that snakemake will get confused when you switch machines and decide that it wants to re-run steps because the file modification timestamps appear out of order (this might happen, in particular, because of TEXTURE_DIR , which is used at the very beginning of the workflow; point it to something old or non-existent to avoid this!). To prevent this, use the same trick as above: append --allowed-rules embed_bitmaps_into_figures main_figures appendix_figures to any snakemake command to ensure that it will only run the embedding rule (and the main_figures / appendix_figures rules).

Reproducing someone else's research code is hard and, in all likelihood, you'll run into some problem. If that happens, please open an issue on this repo, with as much info about your machine and the steps you've taken as possible, and I'll try to help you fix the problem.

To understand what the snakemake command is doing, see the What's going on? section I wrote in the readme for another project (here's the zenodo doi in case that disappears).

What if I want to do more than recreate the figures?

I have focused on enabling others to recreate the figures, but you should be able to use this repo to do everything in the paper. In particular, you might want to:

... examine the metamers synthesized for this project

We've put together a website where you can browse all the metamers synthesized for this project, filtering and sorting by their metadata.

If you'd like to bulk download all of them, you can do so from the OSF page , see its README for how they're organized.

... synthesize some metamers

I don't recommend using this repo to do this unless you're trying to do exactly what I did (and even so, see here ). If you want to synthesize your own metamers, see plenoptic for a better tested, better documented, and more general implementation of metamer synthesis (plus more!).

But if you still want to try to synthesize some metamers using the code in this repo, download the figure_input data set and look at the paths of the downloaded metamers. You can use snakemake to create metamers like that; most parts of the path are options related to metamer synthesis. See METAMER_TEMPLATE_PATH in config.yml , as well as the create_metamers rule in Snakefile , to get a sense for what these are.

You can also use the foveated_metamers.utils.generate_metamer_paths function to generate the list of paths that I used for metamers in this experiment (it can also be called from the command line: python -m foveated_metamers.utils ). See the command-line help string or the function's docstring for details, but here's an example: in order to get the path for one of the energy model metamers created with the llama target image, for each scaling value used in the original vs. synth white noise comparison, run python -m foveated_metamers.utils V1 --ref_image llama --seed_n 0 . Note that this prints the files on a single line, so you may want to redirect the output to a file for later viewing (append > mets.txt ) or split it up for easier viewing (append | tr ' ' '\n' ).
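The same function can be called from python. A minimal sketch, assuming the keyword arguments mirror the CLI flags above (check the function's docstring to be sure):

    from foveated_metamers import utils

    # keyword names are assumptions, mirroring the CLI flags above
    paths = utils.generate_metamer_paths('V1', ref_image='llama', seed_n=0)
    for p in paths:
        print(p)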

To recreate any of my metamers, you'll also need to download the synthesis_input data set, which includes the target images we used, as well as statistics used for normalizing the models' representation.

You should also be aware that the pooling windows take up a lot of memory once you get below scaling=0.1 , so I would start with a larger scaling value. It is also strongly recommended to use a GPU, which will greatly speed up synthesis.

... see what the experiment was like

The OSF project contains a video of a single training run shown to participants before performing the energy model original vs. synthesized comparison task. In it, participants view the metamers for two target images ( tiles and azulejos ) at the smallest and largest scaling values for this comparison ( .063 and .27 ), comparing them against the original image. Participants receive feedback in the training (the central dot turns green when they answer correctly) and are told their performance at the end of the run; no feedback was given in the actual experiment. It was expected that participants would get close to 100% on the easy trials (large scaling) and close to 50% on the hard trials (small scaling).

If you wish to try the experiment yourself, set up your environment for the experiment and download the experiment training tarball: python download_data.py experiment_training . You can then follow the instructions in the Training section of this readme (note that you won't be able to use the example_images.py script; if you're interested in this, open an issue and I'll rework it).

Note that the stimuli won't be rescaled by default: unless you have the exact same experimental set up as me (screen size 3840 by 2160 pixels, viewing distance of 40cm, 48.5 pixels per degree), you'll need to use the -p and -d flags when calling experiment.py to specify the size of your screen in pixels and degrees.

Also note that your monitor should be gamma-corrected to have a linear relationship between luminance and pixel value.

... run the full experiment

First, set up your environment for the experiment and download the stimuli: python download_data.py stimuli .

You may also want to download the files used in the training and try that out .

For a given model and comparison, the full experiment consists of 3 sessions, with 5 runs each. A single session lasts about an hour, with small breaks built in between runs, each of which lasts about 10 minutes. Each session contains 5 target images, so that each subject sees 15 of the total 20. All subjects see the first 10; the final 10 are split into two groups, with even-numbered subjects seeing the first group and odd-numbered subjects the second.

You'll need to generate presentation indices (which define what order the images are presented in; the ones for the training task are included in their tarball). To do so, use snakemake: snakemake -prk {DATA_DIR}/stimuli/{model_name}/task-split_comp-{comp}/{subj_name}/{subj_name}_task-split_comp-{comp}_idx_sess-{sess_num}_run-{run_num}.npy , where:

  • {DATA_DIR} : the DATA_DIR field from the config.yml file

  • {model_name} : either RGC_norm_gaussian (for the luminance model) or V1_norm_s6_gaussian (energy)

  • {comp} : one of met , ref , met-natural , ref-natural or ref-downsample-2 . This should match the {comp} wildcard from the stimulus file you downloaded.

  • {subj_name} : has the form sub-## , where ## is a 0-padded integer. If this integer lies between 0 and 7 (inclusive), this will be the same presentation order as used in our experiment.

  • {sess_num} : 0-padded integer between 0 and 2 (inclusive). The session determines which set of 5 target images are included.

  • {run_num} : 0-padded integer between 0 and 4 (inclusive). Each run contains 3 target images, so that run-01 contains target images {A,B,C} , run-02 contains {B,C,D} , run-03 contains {C,D,E} , run-04 contains {D,E,A} , and run-05 contains {E,A,B} .

You'll probably want to generate all the indices for a subject for a given model and comparison at once. You can do that by generating the dummy file: snakemake -prk {DATA_DIR}/stimuli/{model_name}/task-split_comp-{comp}/{subj_name}/{subj_name}_task-split_comp-{comp}_idx_tmp.txt .
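Each generated index file is just a numpy array giving the presentation order, so you can sanity-check one from python (the path below is hypothetical; substitute your DATA_DIR and wildcard values):

    import numpy as np

    idx = np.load('DATA_DIR/stimuli/V1_norm_s6_gaussian/task-split_comp-ref/sub-00/'
                  'sub-00_task-split_comp-ref_idx_sess-00_run-00.npy')
    # the array contains the order in which stimuli are presented
    print(idx.shape, idx.dtype)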

Then read the Run experiment section of this readme.

Note that the stimuli won't be rescaled by default: unless you have the exact same experimental set up as me (screen size 3840 by 2160 pixels, viewing distance of 40cm, 48.5 pixels per degree), you'll need to use the -p and -d flags when calling experiment.py to specify the size of your screen in pixels and degrees.

And note that while the above options allow you to run the experiment on a setup that has a different screen size (both in pixels and in degrees) than the intended one, the metamers were created with this specific set up in mind. Things should be approximately correct on a different setup (in particular, double-check that images are cropped, not stretched, when presented on a smaller monitor), but there's no guarantee. If you run this experiment, with these stimuli, on a different setup, my guess is that the psychophysical curves will look different, but that their critical scaling values should approximately match; that is, there's no guarantee that all scaling values will give images that will be equally confusable on different setups, but the maximum scaling value that leads to 50% accuracy should be about the same. The more different the viewing conditions, the less likely that this will hold.

Also note that your monitor should be gamma-corrected to have a linear relationship between luminance and pixel value.

... refit the psychophysical curves

Follow the first three steps in the usage section, so that you have set up your environment and specified DATA_DIR . Then, download the behavioral data: python download_data.py behavioral_data .

From here, it's up to you. If you'd like to use your own procedure, the OSF readme describes the most relevant columns of the behavioral .csv files.

If you'd like to use my procedure (using MCMC to fit a hierarchical Bayesian model, fitting all images and subjects simultaneously but each metamer model and comparison separately), you can do so using snakemake : snakemake -prk {DATA_DIR}/mcmc/{model_name}/task-split_comp-{comp}/task-split_comp-{comp}_mcmc_{mcmc_model}_step-{step_size}_prob-{accept_prob}_depth-{tree_depth}_c-{num_chains}_d-{num_draws}_w-{num_warmup}_s-{seed}.nc , where:

  • {DATA_DIR} : the DATA_DIR field from the config.yml file

  • {model_name} : either RGC_norm_gaussian (for the luminance model) or V1_norm_s6_gaussian (energy)

  • {comp} : one of met , ref , met-natural , ref-natural or ref-downsample-2 , depending on which comparison you wish to fit.

  • {mcmc_model} : which hierarchical model to fit to the data. partially-pooled is used in the main body of the paper; see the Methods section for details on it and appendix 6 for details on the other two. Only the met and ref comparisons have more than one subject, so the models are interchangeable for the rest.

    • partially-pooled : fit image-level and subject-level effects, no interaction between them.

    • unpooled : fit each psychophysical curve separately, no group effects.

    • partially-pooled-interactions : fit image-level and subject-level effects, plus an interaction term.

  • {seed} : computational seed for reproducibility, must be an integer.

  • the rest are all hyperparameters for MCMC; see the numpyro documentation and examples for more details.

Note that I've had issues guaranteeing exact reproducibility, even with the same seed. I have not been able to track down why this is.

If you refit the curves using MCMC, you'll want to check the diagnostics: effective sample size (ESS) and $\hat{R}$. You can create a plot summarizing these for each variable with snakemake (just replace .nc with _diagnostics.png ). Vehtari et al., 2021 recommend that you look for $\hat{R} < 1.01$ for all variables.
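The .nc files are netCDF, so you can also inspect the diagnostics directly with arviz. A sketch (not part of this repo's workflow; the filename is hypothetical and the variable names depend on the fitting code):

    import arviz as az

    inference_data = az.from_netcdf('task-split_comp-ref_mcmc_partially-pooled_s-0.nc')
    summary = az.summary(inference_data)  # includes ess_bulk, ess_tail, and r_hat columns
    print(summary[summary.r_hat > 1.01])  # variables failing the Vehtari et al. criterion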

You can download my fits to the data ( python download_data.py mcmc_fits for the partially pooled model, python download_data.py mcmc_compare for the other two) or use foveated_metamers.utils.get_mcmc_hyperparams to see what hyper-parameters I used to fit each comparison.

Setup

The analyses were all run on Linux (Ubuntu, Fedora, and CentOS, several different releases). Everything should work on Macs. For Windows, I would suggest looking into the Windows Subsystem for Linux , as Windows is very different from the others.

Software requirements

The python environment discussed below is required regardless of what you wish to do. There are several additional requirements that are optional:

  • If you are using download_data.py , you'll need rsync , which you probably already have. If not, it's probably best to grab it via your OS's package manager.

  • If you are synthesizing metamers, you'll need ffmpeg . This is probably also already installed on your machine, but if not, a static build is probably the easiest way to go.

  • If you're using GPUs for image synthesis, you'll need the python package dotlockfile as well: pip install dotlockfile .

  • For embedding images into our figures (the last step of figure creation for about a third of the figures), you need inkscape , version equal to or greater than 1.0.2.

    • We also need to know the location of your inkscape preference file. The default is probably correct, but see section config.yml for more details.
  • Appendix 3 contains a small comparison between the pooling windows used in this project and those found in Freeman and Simoncelli, 2011. If you wish to generate these windows yourself, you will need MATLAB along with Jeremy Freeman's metamer code and the matlabPyrTools toolbox. You may also download the windows from the OSF ( python download_data.py freeman2011_check_output ). See below for more details.

There's a separate python environment for running the experiment, so install that if you plan to run it.

Python

This has been run and tested with python version 3.7; I'm unsure about more recent versions.

  1. Install miniconda with the appropriate python version.

  2. Install mamba : conda install mamba -n base -c conda-forge (I recommend using mamba instead of conda to install the environment because conda tends to hang while attempting to solve the environment).

  3. After cloning this repo, run the following from this directory to grab the git submodules:

    git submodule sync
    git submodule update --init --recursive
    
  4. In this directory, run mamba env create -f environment.yml

  5. If everything works, type conda activate metamers to activate this environment.

As of fall 2022, the packages that I know have version requirements are specified in environment.yml . However, in case updates to the dependencies break something, I've included the output of the pip freeze command from two different machines (my laptop and Flatiron's compute cluster) at reports/pip_freeze_laptop.txt and reports/pip_freeze_hpc.txt , so a working environment is cached (note that this includes more packages than specified in environment.yml , because it includes all their dependencies as well).

I am also running tests on github actions to check that the following parts of the workflow run: the data download script, the included notebook, metamer synthesis, the pooling models, and fitting the MCMC curves. For all of these, I'm just testing that they can run, not that the full workflow completes or that I get the proper outcome, but that should be sufficient to see what package versions are compatible. (I'm not testing figure creation, because I couldn't come up with a way to do that which didn't require more storage than is available for free from Github runners --- if you have a solution for this, I'd love to hear it.)

Jupyter

This repo includes one notebook , for examining the differences between this project and Freeman and Simoncelli, 2011. You can view the cached output of the notebook online, or you can install jupyter locally and run the notebook.

There are two main ways of installing jupyter locally:

  1. Install jupyter in this metamers environment:
conda activate metamers
mamba install -c conda-forge jupyterlab

This is easy but, if you have multiple conda environments and want to use Jupyter notebooks in each of them, it will take up a lot of space.

  2. Use nb_conda_kernels :
# activate your 'base' environment, the default one created by miniconda
conda activate 
# install jupyter lab and nb_conda_kernels in your base environment
mamba install -c conda-forge jupyterlab
mamba install nb_conda_kernels
# install ipykernel in the metamers environment
mamba install -n metamers ipykernel

This is a bit more complicated, but means you only have one installation of jupyter lab on your machine.

Experiment environment

Install miniconda and mamba as described above (you probably don't need the submodules, but they won't hurt), then run:

mamba env create -f environment-psychopy.yml

Then, to activate, run conda activate psypy .

PsychoPy provides multiple backends. I'm now using the pyglet backend, but I've occasionally had issues with a weird XF86VidModeGetGammaRamp failed error. That's just something to be aware of.

This environment-psychopy.yml file pins all the package versions, but I didn't spend a lot of time figuring out exactly which pins were necessary or worrying about the versions overly much. That is to say, I think as long as you can get a working psychopy install with the pyglet backend, the code should work. So you could try just installing psychopy , pyglet , h5py (used for storing behavioral data), and numpy (stimuli are saved as numpy arrays) and go from there.
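If you want to try that minimal route, something like mamba create -n psypy-minimal -c conda-forge psychopy pyglet h5py numpy might work, though I haven't tested it and you may still need to pin versions.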

Data

Source images

We use images from the authors' personal collection and the UPenn Natural Image Database as the targets for our metamer generation. This is because we need images that are large, linear (i.e., their pixel intensities are proportional to photon count, as you get from an image that has not been processed in any way), and openly-licensed.

  • Authors' personal collection:

    • WFB: azulejos, tiles, bike, graffiti, llama, terraces

    • EPS: ivy, nyc, rocks, boats, gnarled, lettuce

  • UPenn Natural Image Database: treetop (cd01A/DSC_0033), grooming (cd02A/DSC_0011), palm (cd02A/DSC_0043), leaves (cd12A/DSC_0030), portrait (cd58A/DSC_0001), troop (cd58A/DSC_0008).

  • Unpublished photos from David Brainard: quad (EXPOSURE_ASC/DSC_0014), highway (SNAPSHOTS/DSC_0200).

.tiff files of all these images can be downloaded from the OSF page . They have been demosaiced (using DCRAW) but otherwise untouched from the raw files. For image synthesis they were converted to 16-bit grayscale png files, cropped / expanded to 2048 by 2600 pixels (Prof. Brainard's photos and those from the UPenn Natural Image Database were 2014 pixels tall and so a small amount of reflection padding was used on these photos), and rescaled so all pixel values lay between .05 and .95. These png files are found in the synthesis_input tarball on the OSF page .
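A minimal sketch of that preprocessing in python (the filename is hypothetical and this is not the actual pipeline -- see the Snakefile excerpts at the bottom of this README for the real code, which also handles gamma, cropping, and bit depth via snakemake wildcards):

    import imageio
    import numpy as np
    from skimage import color

    im = imageio.imread('azulejos.tiff').astype(np.float64)
    im = color.rgb2gray(im)                        # convert to grayscale
    im = (im - im.min()) / (im.max() - im.min())   # normalize to [0, 1]
    im = im * (0.95 - 0.05) + 0.05                 # rescale so values lie in [.05, .95]
    # save as 16-bit png
    imageio.imwrite('azulejos.png', (im * np.iinfo(np.uint16).max).astype(np.uint16))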

Download data

The data for this project is available on its OSF page (almost everything) and NYU's Faculty Digital Archive (the files required for appendix 6, which are the fits using the alternative MCMC models). The OSF readme describes its contents in detail.

We also provide the download_data.py script, which downloads and arranges the data into the structure that snakemake expects. The following data sets can be downloaded with this script:

  1. synthesis_input : the inputs required for synthesizing model metamers, this includes the original images whose model representation our metamers match and the set of statistics used to normalize model responses.

  2. stimuli : numpy arrays containing the model metamers used as stimuli in our experiment for each model and comparison separately, along with csv files with their metadata.

  3. behavioral_data : csv files containing participant responses, along with metadata (each model and comparison separate).

  4. mcmc_fits : .nc files containing the fits of the partially pooled hierarchical model to behavior, for each model and comparison separately.

  5. figure_input : miscellaneous files required to create the figures in the paper that are not contained in the other data sets (some of the model metamers, etc.).

  6. freeman2011_check_input , freeman2011_check_output : the input and output files for a brief comparison between our pooling windows and energy model metamers and those presented in Freeman and Simoncelli, 2011. See below for more details.

  7. experiment_training : some files used to train participants, can also be viewed in order to get a sense for how the experiment was structured. See above for details.

  8. mcmc_compare : .nc files containing the fits of the alternative MCMC models to behavior, as well as the csv files evaluating each MCMC model's performance, used for appendix 6.

To download one of the above, call python download_data.py {TARGET_DATASET} , replacing {TARGET_DATASET} with one of the above. Make sure you have specified DATA_DIR in config.yml first. The script will give you an estimate of how large each data set is and ask if you would like to continue.

There are several components found on the OSF page that cannot be downloaded using download_data.py . These are made available because they might be useful for others, but I do not expect them to be used by anyone with this repo, because they are not necessary for reproducing the figures.

config.yml

This is a configuration file containing options used by snakemake . You need to modify the first one, DATA_DIR , and might need to modify the other paths in that first section. All the other options should be left unmodified. The file is commented, explaining what each of the options does.

Note that, unless stated otherwise, you cannot use ~ in any of the paths in this file (you must write out the full path to your home directory, e.g., /home/billbrod or /Users/billbrod ). Also, the paths should probably not have capital letters -- there's a discrepancy between how Mac and Linux handle capital letters in paths, which might create problems.

Directory structure

  • Snakefile : used by snakemake to determine how to create the files for this project. Handles everything except the experiment.

  • foveated_metamers/ : library of functions used in this project

    • create_metamers.py : creates metamers.

    • stimuli.py : assembles the various metamer images into the format required for running the experiment.

    • distances.py : finds distance in model space between images in an efficient way.

    • experiment.py : runs experiment.

    • analysis.py : basic analyses of behavioral data (gets the raw behavioral data into a format that psychophysical curves can be fit to).

    • curve_fit.py : fits psychophysical curves to real or simulated data using pytorch . We didn't end up using this method of fitting the curves.

    • simulate.py : simulate behavioral data, for checking curve_fit.py performance, as well as how many trials are required.

    • mcmc.py : use Markov Chain Monte Carlo (MCMC) to fit a probabilistic model of psychophysical curves with numpyro . This is how the curves presented in the paper were fit.

    • statistics.py : compute some other image statistics (heterogeneity, Fourier amplitude spectra, etc).

    • plotting.py : plotting functions.

    • figures.py : creates various figures.

    • compose_figures.py : combines plots (as created by functions in figures.py ) into multi-panel figures.

    • other_data.py : functions to fit a line (hinged or not) to the Dacey 1992 data, which gives the receptive field size of retinal ganglion cells. This also uses numpyro and so looks fairly similar to mcmc.py .

    • create_mad_images.py : synthesize Maximally-Differentiating images (as in Wang and Simoncelli, 2008), to highlight mean-squared error remaining in human metamers.

    • create_other_synth.py : other ways to synthesize images to highlight mean-squared error remaining in human metamers.

    • observer_model.py : first steps towards an observer model to predict human performance when images are not metamers. Did not end up making much progress, so this is not present in the paper.

    • utils.py : various utility functions.

    • style.py : code for styling the figures.

  • extra_packages/ : additional python code used by this repo. The bits that live here were originally part of plenoptic , but were pulled out because it's a bad idea for a research project to be so heavily reliant on a project currently under development.

    • pooling-windows : git submodule that points to this repo , containing the pooling windows we use.

    • plenoptic_part : contains the models and metamer synthesis code (as well as some utilities) that were pulled out of plenoptic, branching at this commit (I used git filter-repo and so the history should be preserved). While the model code (and some of the utilities) have been deleted from plenoptic and are unique to this repo, the synthesis code here is a modified version of the one in plenoptic. If you wish to use synthesis for your own work use the plenoptic version , which is regularly tested and supported.

  • notebooks/ : jupyter notebooks for investigating this project in more detail.

    • Freeman_Check.ipynb : notebook checking that our windows are the same size as those from Freeman and Simoncelli, 2011 (and thus that the models' scaling parameter has the same meaning); see below for more details.
  • example_images.py : script to open up some example images to show participants before the experiment (see Training section for how to use).

  • download_data.py : script to download and arrange data for reproducing results and figures. See Download data for how to use.

  • matlab/ : two matlab scripts using external matlab libraries. Neither are necessary: one is used to generate the windows from the Freeman and Simoncelli, 2011 paper (the output of which can be downloaded using download_data.py ) and the other generates some LGN-like image statistics that we didn't end up using.

  • data/ : contains some data files.

    • Dacey1992_RGC.csv : csv containing data from figure 2B of Dacey and Petersen, 1992, extracted using WebPlotDigitizer on July 15, 2021. To recreate that figure, use the snakemake rule dacey_figure . Note that we did not separate the data into nasal field and temporal, upper, and lower fields, as the paper does.

    • checksums.json : json file containing BLAKE2b hashes for the files downloadable via download_data.py , so we can check they downloaded correctly.

  • reports/ : contains a variety of figure-related files.

    • figures/ : these are figure components that I use when putting the figures together. They fall into two categories: schematics that are copied as is, with no changes (e.g., image space schematics, experiment schematic), and templates that we embed images into (e.g., the example metamer figures).

    • paper_figures/ : these are the actual figures used in the paper, as created by the snakemake file. There are none in the github repo, see Usage section for details on how to create them.

    • figure_rules.txt : this is a list of snakemake rules that create figures (rather than analyze the data). It can be used to limit snakemake's search of possible analysis paths. See Usage for details on how to use.

    • pip_freeze_laptop.txt , pip_freeze_hpc.txt : the outputs of pip freeze on two different machines, showing working environments as of fall 2022.

  • tests/test_models.py : contains a small number of tests of the pooling models, run weekly and on every push (alongside other tests).

  • environment-psychopy.yml , environment.yml : yml files defining conda environment for the experiment (using psychopy ) and for everything. See Setup section for how to use.

  • greene.json , rusty.json : json files defining how snakemake should communicate with NYU's and Flatiron's SLURM clusters, respectively (works with the snakemake-slurm profile). I have written a section on how to use snakemake with a SLURM cluster in a readme for a different project , and may write something in more detail at some point. Reach out if you have questions.

  • config.yml : yml configuration file, defining paths, metamer path template, and some configuration for experiment structure, see here for details.

Usage details

The following sections contain some extra details that may be useful.

Check against Freeman and Simoncelli, 2011 windows

This project uses a modification of the pooling windows first described in Freeman and Simoncelli, 2011. We include some code to check our reimplementation of the windows and the extension to use Gaussians instead of raised-cosine falloffs. Basically, we want to make sure that our windows are the same size -- an identical reimplementation is not important, but we want to make sure that the models' scaling parameter has the same interpretation: it should be the ratio between the eccentricity and the radial diameter of the windows at half-max amplitude (so, for example, at scaling 0.1 a window centered at 10 degrees eccentricity is about 1 degree across at half-max). To do so, we include a notebook notebooks/Freeman_Check.ipynb , as well as some snakemake rules.

We check two things: that our windows' scaling parameter has the same meaning as that in the original paper, and that our V1 metamers look approximately the same. You can view this by looking at the Freeman_Check notebook and its cached outputs directly. If you wish to run the notebook locally, you'll need to download the input files ( python download_data.py freeman2011_check_input ) and then either download the files required ( python download_data.py freeman2011_check_output ) or generate them yourself ( snakemake -prk freeman_check ). Note that generating them yourself will require MATLAB with the Freeman metamer and matlabPyrTools toolboxes (set their paths correctly in config.yml ).

Make sure you have Jupyter installed, then navigate to the notebooks/ directory in your terminal, activate the environment you installed jupyter into ( metamers or base , depending on how you installed it), then run jupyter and open up the notebook. If you used the nb_conda_kernels method, you should be prompted to select your kernel the first time you open a notebook: select the one named "metamers".

A portion of the results presented in this notebook are also found in one of the paper's appendices.

Experiment notes

These are the notes I wrote for myself so I remembered how to run the experiment and could teach others.

To run the experiment, make sure that the stimuli array and presentation indices have been generated and are at the appropriate path. It's recommended that you use a chin-rest or bite bar to guarantee that your subject remains fixated on the center of the image; the results of the experiment rely very heavily on the subject's and model's foveations being identical.

Training

To teach the subject about the experiment, we want to introduce them to the structure of the task and the images used. The first step probably only needs to be done the first time a given subject collects data for each model / comparison; the second should be done at the beginning of each session.

(The following paths all assume that DATA_DIR is ~/Desktop/metamers , replace that with your actual path.)

  1. First, run a simple training run (if you haven't already, first run python download_data.py experiment_training to get the required files):

    • conda activate psypy

    • python foveated_metamers/experiment.py ~/Desktop/metamers/stimuli/training_noise/stimuli_comp-{comp}.npy sub-training 0 ; python foveated_metamers/experiment.py ~/Desktop/metamers/stimuli/training_{model}/stimuli_comp-{comp}.npy sub-training 0 where {comp} depends on which comparison you're running, and {model} is RGC_norm_gaussian or V1_norm_s6_gaussian , depending on which you're running.

    • Explanatory text will appear on screen, answer any questions.

    • This will run two separate training runs, both about one or two minutes, with feedback between each trial (the fixation dot will turn green if they answered correctly) and at the end of the run (overall performance).

    • The first one will just be comparing natural to noise images and so the subject should get 100%. The goal of this one is to explain the basic structure of the experiment.

    • The second will have two metamers, one easy and one hard, for each of two reference images. They should get 100% on the easy one, and about chance on the hard. The goal of this one is to show what the task is like with metamers and give them a feeling for what they may look like.

  2. Run:

    • conda activate metamers

    • python example_images.py {model} {subj_name} {sess_num} where {model} is V1 or RGC depending on which model you're running, and {subj_name} and {sess_num} give the name of the subject and number of this session, respectively.

    • This will open up three image viewers. Each has all 5 reference images the subject will see this session. One shows the reference images themselves, one the metamers with the lowest scaling value, and one the metamers with the highest scaling value (all linear, not gamma-corrected).

    • Allow the participant to flip between these images at their leisure, so they understand what the images will look like.

    • Note: this will not work unless you have the metamers for this comparison at the locations where they end up after synthesis (you probably don't). If you're interested in using this script, open an issue and I'll try to rework it.

Task structure

Unlike previous experiments, we use a split-screen task. Each trial lasts 1.4 seconds and is structured like so:

| Image 1  | Blank    | Image 2  | Response | Blank    |
|----------|----------|----------|----------|----------|
| 200 msec | 500 msec | 200 msec | untimed  | 500 msec |

Image 1 will consist of a single image divided vertically at the center by a gray bar. One half of image 2 will be the same as image 1, and the other half will have changed. The two images involved are either two metamers with the same scaling value (if comp=met or met-natural ) or a metamer and the reference image it is based on (if comp=ref or ref-natural ). The subject's task is to say whether the left or the right half changed. They have as long as they need to respond and receive no feedback.

To run the experiment:

  • Activate the psypy environment: conda activate psypy

  • Start the experiment script from the command line:

    • python foveated_metamers/experiment.py ~/Desktop/metamers/stimuli/{model}/stimuli_comp-{comp}.npy {subj_name} {sess_num} where {model}, {subj_name}, {sess_num}, {comp} are as described in the training section.

    • There are several other arguments the experiment script can take, run python foveated_metamers/experiment.py -h to see them, and see the other arguments section for more information.

  • Explain the task to the subject, as given in the example below (similar text will also appear on screen before each run for the participant to read)

  • When the subject is ready, press the space bar to begin the task.

  • You can press the space bar at any point to pause it, but the pause won't happen until the end of the current trial, so don't press it a bunch of times because it doesn't seem to be working. However, try not to pause the experiment at all.

  • You can press q/esc to quit, but don't do this unless truly necessary.

  • There will be a break half-way through the block. The subject can get up, walk, and stretch during this period, but remind them to take no more than a minute. When they're ready to begin again, press the space bar to resume.

  • The data will be saved on every trial, so if you do need to quit out, all is not lost. If you restart from the same run, we'll pick up where we left off.

  • The above command will loop through all five runs for a given session. To do a particular set of runs pass -r {run_1} {run_2} ... {run_n} to the call to experiment.py (where {run_1} through {run_n} are 0-indexed integers specifying the runs to include). For example, if you quit out on the third run and wanted to finish that one and then do runs 4 and 5, pass: -r 2 3 4 . If you just want to finish that run, you'd only pass -r 2 .

Recommended explanation to subjects:

In this experiment, you'll be asked to complete what we call a "2-Alternative Forced Choice task": you'll view an image, split in half, and then, after a brief delay, a second image, also split in half. One half of the second image will be the same as the first, but the other half will have changed. Your task is to press the left or right button to say which half you think changed. All the images will be presented for a very brief period of time, so pay attention. Sometimes, the two images will be very similar; sometimes, they'll be very different. For the very similar images, we expect the task to be hard. Just do your best!

You'll be comparing natural and synthesized images. The first image can be either natural or synthesized, so pay attention! You will receive no feedback, either during or after the run.

For this experiment, fixate on the center of the image the whole time and try not to move your eyes.

The run will last for about twelve minutes, but there will be a break halfway through. During the break, you can move away from the device, walk around, and stretch, but please don't take more than a minute.

This part will not be shown on screen, and so is important:

You'll complete 5 runs total. After each run, there will be a brief pause, and then the instruction text will appear again, to start the next run. You can take a break at this point, and press the spacebar when you're ready to begin the next run.

Other arguments

The experiment.py script takes several optional arguments, some of which are probably relevant if you want to re-run this on a different experimental setup:

  • --screen / -s : one integer indicating which screen to use.

  • --screen_size_pix / -p : two integers indicating the size of the screen in pixels.

  • --screen_size_deg / -d : a single float giving the length of the longest screen side in degrees.
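For example, on a hypothetical 1920 by 1080 monitor whose longest side subtends 35 degrees, you might add -p 1920 1080 -d 35 to the experiment.py call above.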

For more details on the other arguments, run python foveated_metamers/experiment.py -h to see the full docstring.

NOTE: While the above options allow you to run the experiment on a setup that has a different screen size (both in pixels and in degrees) than the intended one, the metamers were created with this specific set up in mind. Things should be approximately correct on a different setup (in particular, double-check that images are cropped, not stretched, when presented on a smaller monitor), but there's no guarantee. If you run this experiment, with these stimuli, on a different setup, my guess is that the psychophysical curves will look different, but that their critical scaling values should approximately match; that is, there's no guarantee that all scaling values will give images that will be equally confusable on different setups, but the maximum scaling value that leads to 50% accuracy should be about the same. The more different the viewing conditions, the less likely that this will hold.

Experiment Checklist

The following is a checklist for how to run the experiment. Print it out and keep it by the computer.

Every time:

  1. Make sure monitor is using the correct icc profile ( linear-profile ; everything should look weirdly washed out). If not, hit the super key (the Windows key on a Windows keyboard) and type icc , open up the color manager and enable the linear profile.

First session only (on later sessions, ask if they need a refresher):

  1. Show the participant the setup, point out the wipes, and say they can use them to wipe down the chinrest and button box.

  2. Tell the participant:

In this task, a natural image will briefly flash on screen, followed by a gray screen, followed by another image. Half of that second image will be the same as the first, half will have changed. Your task is to say which half has changed, using these buttons to say "left" or "right". You have as long as you'd like to respond, and you will not receive feedback. There will be a pause halfway through, as well as between runs; take a break and press the center button (labeled "space") to continue when you're ready. You won't press the buttons in the bottom row.

  3. Train the participant. Say:

Now, we'll do two brief training runs, each of which will last about a minute. In the first, you'll be comparing natural images and noise; the goal is so you understand the basic structure of the experiment. In the second, you'll be comparing those same natural images to some of the stimuli from the experiment; some will be easy, some hard. You'll receive feedback at the end of the run, to make sure you understand the task. I'll remain in the room to answer any questions.

There will be a fixation dot in the center of some explanatory text at the beginning; use that to center yourself.

  4. Run (replace {model} with V1_norm_s6_gaussian or RGC_norm_gaussian ):
conda activate psypy
python foveated_metamers/experiment.py ~/Desktop/metamers/stimuli/training_noise/stimuli_comp-ref.npy sub-training 0 ; python foveated_metamers/experiment.py ~/Desktop/metamers/stimuli/training_{model}/stimuli_comp-ref.npy sub-training 0
  5. Answer any questions.

Every time:

  1. Show the participant the images they'll see this session, replacing {model} with V1 or RGC (no need to use the full name), and {subj_name} and {sess_num} as appropriate:
conda activate metamers
python example_images.py {model} {subj_name} {sess_num}
  2. Say the following and answer any questions:

These are the natural images you'll be seeing this session, as well as some easy and hard stimuli. You can look through them for as long as you'd like.

  3. Ask if they have any questions before the experiment.

  4. Say:

This will run through all 5 runs for this session. Each should take you 10 to 12 minutes. Come get me when you're done. As a reminder, you have as long as you'd like to respond, and you won't receive any feedback.

  5. Run, replacing {model} , {subj_name} , {sess_num} as above:
conda activate psypy
python foveated_metamers/experiment.py ~/Desktop/metamers/stimuli/{model}/stimuli_comp-ref.npy {subj_name} {sess_num}

Known issues

  1. When using multiprocessing (as done when fitting the psychophysical curves) from the command line, I get OMP: Error #13: Assertion failure at z_Linux_util.cpp(2361) on my Ubuntu 18.04 laptop. As reported here , this is a known issue, and the solution appears to be to set an environment variable: running export KMP_INIT_AT_FORK=FALSE in the open terminal will fix the problem. Strangely, this doesn't appear to be a problem in a Jupyter notebook, but it does from IPython or the snakemake calls. I tried to set the environment variable from within Snakefile, but I can't make that work. Running the calls with use_multiproc=False will also work, though it will obviously be much slower.

  2. When trying to use the embed_bitmaps_into_figure rule on a drive mounted using rclone (I had my data stored on a Google Drive that I was using rclone to mount on my laptop), I got a 'Bad file descriptor' error from python when it tried to write the snakemake log at the end of the step. It appears to be this issue , adding the --vfs-cache-mode writes flag to the rclone mount command worked (though I also had to give myself full permissions on the rclone cache folder: sudo chmod -R 777 ~/.cache/rclone ).

  3. As of June 2023, there appears to be some issue with svgutils and inkscape, where any svg file that is the output of my compose_figures rule cannot be opened by inkscape (attempting to do so leads inkscape to immediately crash). This means that none of the files in the compose_figures directory can be used as inputs to the embed_bitmaps_into_figures rule (but those in figures can be). I'm unclear why this is happening now, but have been unable to track it down.

Notes on reproducibility

The intention of sharing this code is to allow for the reproduction of the figures in the resulting paper. This is the code I used to synthesize the metamers used in the experiment, and you can use it to do so, but there are a couple of things you should be aware of:

  • Results will not be identical on CPUs and GPUs. See PyTorch's notes on this.

  • I used stochastic weight averaging (SWA) for the energy model metamers. SWA seems to reduce the final loss by averaging metamer pixel values as we get near convergence (see here for more details, and the sketch after this list). However, the version of SWA I used to generate the metamers for the experiment was from torchcontrib , which was archived in 2020 and is no longer maintained ( github repo ). In May 2022, I noticed that the torchcontrib SWA implementation no longer worked in my tests with the most recent versions of python (3.10) and pytorch (1.12), so I updated my code to work with the pytorch SWA implementation. The resulting metamers are not identical to the ones produced before, but they are similar in both visual quality and loss, and I believe they would be indistinguishable in a 2AFC task.

  • The metamer synthesis code found here (in extra_packages/plenoptic_part ) was very much a work in progress throughout this whole project and ended up becoming a tangled rat's nest, as is the case for most research code.
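To make the SWA idea concrete, here is a schematic sketch (not the repo's actual implementation, which lives in extra_packages/plenoptic_part ): once synthesis is near convergence, keep a running average of the pixel values and return that average as the final metamer.

    import torch

    met_image = torch.rand(256, 256, requires_grad=True)  # pixel values being optimized
    swa_avg, n_avg = torch.zeros_like(met_image), 0
    for step in range(1000):
        # ... gradient step on met_image against the model's loss ...
        if step >= 750:  # once near convergence, start averaging
            with torch.no_grad():
                swa_avg = (swa_avg * n_avg + met_image) / (n_avg + 1)
                n_avg += 1
    # use swa_avg, not the final met_image, as the synthesized metamer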

For all of the above reasons, I am sharing the synthesized metamers used in this experiment and recommend you use them directly if you need the exact images I used (to replicate my behavioral results, for example). If you wish to synthesize new metamers, whether using your own model or the ones from this paper, I strongly recommend you use the metamer synthesis code found in plenoptic , which is actively maintained and tested, though it is not identical to the procedure used here. Most importantly, it does not include an SWA implementation and probably never will, but I would be happy to help come up with how to add it in an extension or a fork.

Related repos

If you would like to generate your own metamers, see plenoptic , a python library for image synthesis, including metamers, MAD Competition, eigendistortions, and geodesics.

If you would like to use the pooling windows, see pooling-windows . This includes pytorch implementations of the Gaussian windows from this project, as well as the raised-cosine windows from Freeman and Simoncelli, 2011. The README describes how to use them for creating a version of the pooled luminance and energy models used in this project. Feel free to use the models in this repo, but the simpler version from that README may better suit your needs. The version in this repo includes a bunch of helper code, including for creating plots and the starts of paths not taken. The only important thing missing from the pooling-windows repo is normalization -- look for the normalize_dict attribute in extra_packages/plenoptic_part/simulate/ventral_stream.py to see how I implemented that, and feel free to reach out if you have trouble.

You may also be interested in the code used by two other papers that this project references a lot, Freeman and Simoncelli, 2011 and Wallis et al., 2019 . If you wish to use the Freeman code, note the bug in window creation pointed out by Wallis et al. (discussed in their appendix 1).

Citation

If you use the data, code, or stimuli from this project in an academic publication, please cite the preprint . If you use the code, please additionally cite the zenodo doi for the corresponding release (e.g., v1.0-biorxiv corresponds to the DOI https://doi.org/10.5281/zenodo.7948552 ).

References

  • Dacey, D. M., & Petersen, M. R. (1992). Dendritic field size and morphology of midget and parasol ganglion cells of the human retina. Proceedings of the National Academy of Sciences, 89(20), 9666–9670. http://dx.doi.org/10.1073/pnas.89.20.9666

  • Freeman, J., & Simoncelli, E. P. (2011). Metamers of the ventral stream. Nature Neuroscience, 14(9), 1195–1201. http://dx.doi.org/10.1038/nn.2889

  • Vehtari, A., Gelman, A., Simpson, D., Carpenter, B., & Bürkner, P.-C. (2021). Rank-normalization, folding, and localization: an improved $\hat{R}$ for assessing convergence of MCMC (with discussion). Bayesian Analysis, 16(2), 667–718. http://dx.doi.org/10.1214/20-BA1221

  • Wallis, T. S., Funke, C. M., Ecker, A. S., Gatys, L. A., Wichmann, F. A., & Bethge, M. (2019). Image content is more important than Bouma's law for scene metamers. eLife, 8, e42512. http://dx.doi.org/10.7554/elife.42512

  • Wang, Z., & Simoncelli, E. P. (2008). Maximum differentiation (MAD) competition: A methodology for comparing computational models of perceptual discriminability. Journal of Vision, 8(12), 1–13. http://dx.doi.org/10.1167/8.12.8

Code Snippets

run:
    import contextlib
    import shutil
    import os.path as op
    with open(log[0], 'w', buffering=1) as log_file:
        with contextlib.redirect_stdout(log_file), contextlib.redirect_stderr(log_file):
            print("Copying outputs from %s to %s" % (op.dirname(input[0]), output[0]))
            # Copy over only those files that were generated by the call
            # with the correct gpu_num wildcard, ignore the others
            # (wildcards.gpu_num can only take values 0 or 1)
            ignore_gpu = {0: '*_gpu-1*', 1: '*_gpu-0*'}[int(wildcards.gpu_num)]
            shutil.copytree(op.dirname(input[0]), output[0], ignore=shutil.ignore_patterns(ignore_gpu))
From line 194 of main/Snakefile
run:
    import imageio
    import contextlib
    import numpy as np
    import foveated_metamers as fov
    from skimage import color
    with open(log[0], 'w', buffering=1) as log_file:
        with contextlib.redirect_stdout(log_file), contextlib.redirect_stderr(log_file):
            im = imageio.imread(input[0])
            # when loaded in, the range of this will be 0 to 255, we
            # want to convert it to 0 to 1
            im = fov.utils.convert_im_to_float(im)
            # convert to grayscale
            im = color.rgb2gray(im)
            # 1/2.2 is the standard encoding gamma for jpegs, so we
            # raise this to its reciprocal, 2.2, in order to reverse
            # it
            im = im**2.2
            dtype = eval('np.uint%s' % wildcards.bits)
            imageio.imwrite(output[0], fov.utils.convert_im_to_int(im, dtype))
shell:
    "dcraw -v -4 -q 3 -T {input}; "
    "mv {params.tiff_file} {output}"
From line 263 of main/Snakefile
run:
    import imageio
    import contextlib
    import numpy as np
    from skimage import color
    import foveated_metamers as fov
    with open(log[0], 'w', buffering=1) as log_file:
        with contextlib.redirect_stdout(log_file), contextlib.redirect_stderr(log_file):
            im = imageio.imread(input[0])
            curr_shape = np.array(im.shape)[:2]
            target_shape = [int(i) for i in wildcards.size.split(',')]
            print(curr_shape, target_shape)
            if len(target_shape) == 1:
                # duplicate the single value so height and width match
                target_shape = 2 * target_shape
            target_shape = np.array(target_shape)
            crop_amt = curr_shape - target_shape
            # this is ugly, but I can't come up with an easier way to make
            # sure that we skip a dimension if crop_amt is 0 for it
            cropped_im = im
            for i, c in enumerate(crop_amt):
                if c == 0:
                    continue
                else:
                    if i == 0:
                        cropped_im = cropped_im[c//2:-c//2]
                    elif i == 1:
                        cropped_im = cropped_im[:, c//2:-c//2]
                    else:
                        raise Exception("Can only crop up to two dimensions!")
            cropped_im = color.rgb2gray(cropped_im)
            imageio.imwrite(output[0], fov.utils.convert_im_to_int(cropped_im, np.uint16))
            # tiffs can't be read in using the as_gray arg, so we
            # save it as a png, and then read it back in as_gray and
            # save it back out
            cropped_im = imageio.imread(output[0], as_gray=True)
            imageio.imwrite(output[0], cropped_im.astype(np.uint16))
SnakeMake From line 277 of main/Snakefile
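One subtlety in the cropping above: for an odd crop amount, Python's floor division puts the extra pixel at the end of the axis, since -c//2 is floor(-c/2). A quick sketch:

    import numpy as np

    im = np.arange(10)
    c = 5  # crop 5 pixels total from this axis
    print(im[c//2:-c//2])  # [2 3 4 5 6]: 2 removed at the start, 3 at the end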
run:
    import imageio
    import contextlib
    import re
    import numpy as np
    from skimage import transform
    import foveated_metamers as fov
    with open(log[0], 'w', buffering=1) as log_file:
        with contextlib.redirect_stdout(log_file), contextlib.redirect_stderr(log_file):
            im = imageio.imread(input[0])
            dtype = im.dtype
            im = np.array(im, dtype=np.float32)
            print("Original image has dtype %s" % dtype)
            if 'full' in wildcards.img_preproc:
                print("Setting image to use full dynamic range")
                # set the minimum value to 0
                im = im - im.min()
                # set the maximum value to 1
                im = im / im.max()
            elif 'range' in wildcards.img_preproc:
                a, b = re.findall('range-([.0-9]+),([.0-9]+)', wildcards.img_preproc)[0]
                a, b = float(a), float(b)
                print(f"Setting range to {a:.2f}, {b:.2f}")
                if a > b:
                    raise Exception(f"For consistency, with range-a,b preprocessing, b must be"
                                    f" greater than a, but got {a} > {b}!")
                # set the minimum value to 0
                im = im - im.min()
                # set the maximum value to 1
                im = im / im.max()
                # and then rescale
                im = im * (b - a) + a
            else:
                print("Image will *not* use full dynamic range")
                im = im / np.iinfo(dtype).max
            if 'downsample' in wildcards.img_preproc:
                downscale = float(re.findall('downsample-([.0-9]+)_', wildcards.img_preproc)[0])
                im = transform.pyramid_reduce(im, downscale)
            # always save it as 16 bit
            print("Saving as 16 bit")
            im = fov.utils.convert_im_to_int(im, np.uint16)
            imageio.imwrite(output[0], im)
SnakeMake From line 325 of main/Snakefile
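To make the wildcard grammar concrete, here is a small sketch of how a range-a,b preprocessing string is parsed and applied (the wildcard value is just an example):

    import re
    import numpy as np

    img_preproc = 'range-.05,.95'
    a, b = map(float, re.findall('range-([.0-9]+),([.0-9]+)', img_preproc)[0])
    im = np.linspace(0, 1000, 11)
    im = (im - im.min()) / (im.max() - im.min())  # full dynamic range, [0, 1]
    im = im * (b - a) + a                         # then rescale to [a, b]
    print(im.min(), im.max())                     # 0.05 0.95 (up to float rounding)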
run:
    import imageio
    import contextlib
    import numpy as np
    import foveated_metamers as fov
    with open(log[0], 'w', buffering=1) as log_file:
        with contextlib.redirect_stdout(log_file), contextlib.redirect_stderr(log_file):
            im = imageio.imread(input[0])
            dtype = im.dtype
            im = np.array(im, dtype=np.float32)
            print("Original image has dtype %s" % dtype)
            im = im / np.iinfo(dtype).max
            print("Raising image to 1/2.2, to gamma correct it")
            im = im ** (1/2.2)
            # always save it as 16 bit
            print("Saving as 16 bit")
            im = fov.utils.convert_im_to_int(im, np.uint16)
            imageio.imwrite(output[0], im)
SnakeMake From line 377 of main/Snakefile
run:
    import foveated_metamers as fov
    import contextlib
    with open(log[0], 'w', buffering=1) as log_file:
        with contextlib.redirect_stdout(log_file), contextlib.redirect_stderr(log_file):
            fov.stimuli.pad_image(input[0], wildcards.pad_mode, output[0])
SnakeMake From line 406 of main/Snakefile
run:
    import foveated_metamers as fov
    import contextlib
    with open(log[0], 'w', buffering=1) as log_file:
        with contextlib.redirect_stdout(log_file), contextlib.redirect_stderr(log_file):
            fov.stimuli.create_image(wildcards.image_type, int(wildcards.size), output[0],
                                     int(wildcards.period))
SnakeMake From line 423 of main/Snakefile
run:
    import imageio
    import contextlib
    import numpy as np
    from glob import glob
    import os.path as op
    import os
    from skimage import color
    import foveated_metamers as fov
    with open(log[0], 'w', buffering=1) as log_file:
        with contextlib.redirect_stdout(log_file), contextlib.redirect_stderr(log_file):
            os.makedirs(output[0])
            for i in glob(op.join(input[0], '*.jpg')):
                im = imageio.imread(i)
                im = fov.utils.convert_im_to_float(im)
                if im.ndim == 3:
                    # then it's a color image, and we need to make it grayscale
                    im = color.rgb2gray(im)
                if 'degamma' in wildcards.preproc:
                    # 1/2.2 is the standard encoding gamma for jpegs, so we
                    # raise this to its reciprocal, 2.2, in order to reverse
                    # it
                    im = im ** 2.2
                # save as a 16 bit png
                im = fov.utils.convert_im_to_int(im, np.uint16)
                imageio.imwrite(op.join(output[0], op.split(i)[-1].replace('jpg', 'png')), im)
SnakeMake From line 440 of main/Snakefile
run:
    import contextlib
    import sys
    import os.path as op
    sys.path.append(op.join(op.dirname(op.realpath(__file__)), 'extra_packages'))
    import plenoptic_part as pop
    with open(log[0], 'w', buffering=1) as log_file:
        with contextlib.redirect_stdout(log_file), contextlib.redirect_stderr(log_file):
            # scaling doesn't matter here
            v1 = pop.PooledV1(1, (512, 512), num_scales=6)
            pop.optim.generate_norm_stats(v1, input[0], output[0], (512, 512),
                                          index=params.index)
SnakeMake From line 483 of main/Snakefile
run:
    import torch
    import contextlib
    with open(log[0], 'w', buffering=1) as log_file:
        with contextlib.redirect_stdout(log_file), contextlib.redirect_stderr(log_file):
            combined_stats = {}
            to_combine = [torch.load(i) for i in input]
            for k, v in to_combine[0].items():
                if isinstance(v, dict):
                    d = {}
                    for l in v:
                        s = []
                        for i in to_combine:
                            s.append(i[k][l])
                        d[l] = torch.cat(s, 0)
                    combined_stats[k] = d
                else:
                    s = []
                    for i in to_combine:
                        s.append(i[k])
                    combined_stats[k] = torch.cat(s, 0)
            torch.save(combined_stats, output[0])
SnakeMake From line 510 of main/Snakefile
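As a toy illustration of what this combination step does: per-job statistics are concatenated along their first dimension, recursing one level into nested dicts (the tensor names and shapes here are made up):

    import torch

    stats_a = {'mean': torch.zeros(2, 4), 'cov': {'scale_0': torch.zeros(2, 3)}}
    stats_b = {'mean': torch.ones(3, 4), 'cov': {'scale_0': torch.ones(3, 3)}}
    combined = {}
    for k, v in stats_a.items():
        if isinstance(v, dict):
            # nested dict: concatenate each entry separately
            combined[k] = {l: torch.cat([stats_a[k][l], stats_b[k][l]], 0) for l in v}
        else:
            combined[k] = torch.cat([stats_a[k], stats_b[k]], 0)
    print(combined['mean'].shape)            # torch.Size([5, 4])
    print(combined['cov']['scale_0'].shape)  # torch.Size([5, 3])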
run:
    import contextlib
    import plenoptic as po
    import sys
    import os.path as op
    sys.path.append(op.join(op.dirname(op.realpath(__file__)), '..', 'extra_packages', 'pooling-windows'))
    import pooling
    with open(log[0], 'w', buffering=1) as log_file:
        with contextlib.redirect_stdout(log_file), contextlib.redirect_stderr(log_file):
            img_size = [int(i) for i in wildcards.size.split(',')]
            kwargs = {}
            if wildcards.window_type == 'cosine':
                t_width = float(wildcards.t_width)
                std_dev = None
                min_ecc = float(wildcards.min_ecc)
            elif wildcards.window_type == 'gaussian':
                std_dev = float(wildcards.t_width)
                t_width = None
                min_ecc = float(wildcards.min_ecc)
            pooling.PoolingWindows(float(wildcards.scaling), img_size, min_ecc,
                                   float(wildcards.max_ecc), cache_dir=op.dirname(output[0]),
                                   transition_region_width=t_width, std_dev=std_dev,
                                   window_type=wildcards.window_type, **kwargs)
SnakeMake From line 630 of main/Snakefile
run:
    import foveated_metamers as fov
    import contextlib
    with open(log[0], 'w', buffering=1) as log_file:
        with contextlib.redirect_stdout(log_file), contextlib.redirect_stderr(log_file):
            # bool('False') == True, so we do this to avoid that
            # situation
            if wildcards.clamp_each_iter == 'True':
                clamp_each_iter = True
            elif wildcards.clamp_each_iter == 'False':
                clamp_each_iter = False
            if wildcards.coarse_to_fine == 'False':
                coarse_to_fine = False
            else:
                coarse_to_fine = wildcards.coarse_to_fine
            if wildcards.init_type not in ['white', 'blue', 'pink', 'gray']:
                init_type = fov.utils.get_ref_image_full_path(wildcards.init_type)
            else:
                init_type = wildcards.init_type
            if resources.gpu == 1:
                get_gid = True
            elif resources.gpu == 0:
                get_gid = False
            else:
                raise Exception("Multiple gpus are not supported!")
            if wildcards.save_all:
                save_all = True
            else:
                save_all = False
            with fov.utils.get_gpu_id(get_gid, on_cluster=ON_CLUSTER) as gpu_id:
                fov.create_metamers.main(wildcards.model_name, float(wildcards.scaling),
                                         input.ref_image, int(wildcards.seed), float(wildcards.min_ecc),
                                         float(wildcards.max_ecc), float(wildcards.learning_rate),
                                         int(wildcards.max_iter), float(wildcards.loss_thresh),
                                         int(wildcards.loss_change_iter), output[0],
                                         init_type, gpu_id, params.cache_dir, input.norm_dict,
                                         wildcards.optimizer, float(wildcards.fract_removed),
                                         float(wildcards.loss_fract),
                                         float(wildcards.loss_change_thresh), coarse_to_fine,
                                         wildcards.clamp, clamp_each_iter, wildcards.loss,
                                         save_all=save_all, num_threads=resources.num_threads)
SnakeMake From line 824 of main/Snakefile
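The string comparison for clamp_each_iter above is deliberate; a quick demonstration of the pitfall the comment refers to:

    # any non-empty string is truthy, so casting the wildcard directly would
    # always yield True
    print(bool('False'))      # True
    print('False' == 'True')  # False, which is the behavior we actually want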
run:
    import foveated_metamers as fov
    import contextlib
    with open(log[0], 'w', buffering=1) as log_file:
        with contextlib.redirect_stdout(log_file), contextlib.redirect_stderr(log_file):
            # bool('False') == True, so we do this to avoid that
            # situation
            if wildcards.clamp_each_iter == 'True':
                clamp_each_iter = True
            elif wildcards.clamp_each_iter == 'False':
                clamp_each_iter = False
            if wildcards.coarse_to_fine == 'False':
                coarse_to_fine = False
            else:
                coarse_to_fine = wildcards.coarse_to_fine
            if wildcards.init_type not in ['white', 'blue', 'pink', 'gray']:
                init_type = fov.utils.get_ref_image_full_path(wildcards.init_type)
            else:
                init_type = wildcards.init_type
            if resources.gpu == 1:
                get_gid = True
            elif resources.gpu == 0:
                get_gid = False
            else:
                raise Exception("Multiple gpus are not supported!")
            with fov.utils.get_gpu_id(get_gid, on_cluster=ON_CLUSTER) as gpu_id:
                # this is the same as the original call in the
                # create_metamers rule, except we replace max_iter with
                # extra_iter, set learning_rate to None, and add the
                # input continue_path at the end
                fov.create_metamers.main(wildcards.model_name, float(wildcards.scaling),
                                         input.ref_image, int(wildcards.seed), float(wildcards.min_ecc),
                                         float(wildcards.max_ecc), None,
                                         int(wildcards.extra_iter), float(wildcards.loss_thresh),
                                         int(wildcards.loss_change_iter), output[0],
                                         init_type, gpu_id, params.cache_dir, input.norm_dict,
                                         wildcards.optimizer, float(wildcards.fract_removed),
                                         float(wildcards.loss_fract),
                                         float(wildcards.loss_change_thresh), coarse_to_fine,
                                         wildcards.clamp, clamp_each_iter, wildcards.loss,
                                         input.continue_path, num_threads=resources.num_threads)
SnakeMake From line 901 of main/Snakefile
run:
    import foveated_metamers as fov
    import contextlib
    import imageio
    import numpy as np
    import shutil
    with open(log[0], 'w', buffering=1) as log_file:
        with contextlib.redirect_stdout(log_file), contextlib.redirect_stderr(log_file):
            if output[0].endswith('metamer_gamma-corrected.png'):
                if ('degamma' in wildcards.image_name or
                    any([i in wildcards.image_name for i in LINEAR_IMAGES])):
                    print(f"Saving gamma-corrected image {output[0]} as np.uint8")
                    im = np.load(input[0])
                    im = im ** (1/2.2)
                    im = fov.utils.convert_im_to_int(im, np.uint8)
                    imageio.imwrite(output[0], im)
                else:
                    print(f"Image already gamma-corrected, copying to {output[0]}")
                    shutil.copy(output[0].replace('_gamma-corrected', ''), output[0])
SnakeMake From line 954 of main/Snakefile
run:
    import foveated_metamers as fov
    import contextlib
    import os.path as op
    import pandas as pd
    with open(log[0], 'w', buffering=1) as log_file:
        with contextlib.redirect_stdout(log_file), contextlib.redirect_stderr(log_file):
            fov.stimuli.collect_images(input[:6], output[0])
            df = []
            for i, p in enumerate(input):
                if i < 4:
                    image_name = op.basename(input[-2:][i//2]).replace('.pgm', '').replace('.png', '')
                    # dummy scaling value
                    scaling = 1
                    model = 'training'
                    seed = i % 2
                else:
                    image_name = op.basename(p).replace('.pgm', '').replace('.png', '')
                    scaling = None
                    model = None
                    seed = None
                df.append(pd.DataFrame({'base_signal': p, 'image_name': image_name, 'model': model,
                                        'scaling': scaling, 'seed': seed}, index=[0]))
            pd.concat(df).to_csv(output[1], index=False)
SnakeMake From line 992 of main/Snakefile
run:
    import foveated_metamers as fov
    import contextlib
    with open(log[0], 'w', buffering=1) as log_file:
        with contextlib.redirect_stdout(log_file), contextlib.redirect_stderr(log_file):
            fov.stimuli.collect_images(input, output[0])
            fov.stimuli.create_metamer_df(input, output[1])
SnakeMake From line 1051 of main/Snakefile
run:
    import foveated_metamers as fov
    import contextlib
    with open(log[0], 'w', buffering=1) as log_file:
        with contextlib.redirect_stdout(log_file), contextlib.redirect_stderr(log_file):
            fov.stimuli.collect_images(input, output[0])
            fov.stimuli.create_metamer_df(input, output[1])
SnakeMake From line 1073 of main/Snakefile
run:
    import foveated_metamers as fov
    import numpy as np
    import pandas as pd
    import contextlib
    with open(log[0], 'w', buffering=1) as log_file:
        with contextlib.redirect_stdout(log_file), contextlib.redirect_stderr(log_file):
            stim = np.load(input[0])
            stim_df = pd.read_csv(input[1])
            max_ecc = stim_df.max_ecc.dropna().unique()
            assert len(max_ecc) == 1, "Found multiple max_ecc!"
            masks, mask_df = fov.stimuli.create_eccentricity_masks(stim.shape[-2:],
                                                                   max_ecc[0])
            np.save(output[0], masks)
            mask_df.to_csv(output[1], index=False)
SnakeMake From line 1093 of main/Snakefile
run:
    import foveated_metamers as fov
    import numpy as np
    import pandas as pd
    import contextlib
    with open(log[0], 'w', buffering=1) as log_file:
        with contextlib.redirect_stdout(log_file), contextlib.redirect_stderr(log_file):
            stim_df = pd.read_csv(input[0])
            try:
                ref_image_to_include = fov.stimuli.get_images_for_session(wildcards.subject,
                                                                          int(wildcards.sess_num),
                                                                          'downsample' in wildcards.comp)
                if 'training' in wildcards.model_name:
                    raise Exception("training models only allowed for sub-training!")
                # in a session, we show 5 images. each run has 3 of those
                # images, and we rotate so that each image shows up in 3
                # runs.
                idx = list(range(len(config['PSYCHOPHYSICS']['RUNS'])))
                r = int(wildcards.run_num)
                # we're 0-indexed, so r==4 is the 5th run
                if r > 4 or len(idx) != 5:
                    raise Exception("This only works for 5 runs per session!")
                idx = idx[r:r+3] + idx[:max(0, r-2)]
                ref_image_to_include = ref_image_to_include[idx]
            except ValueError:
                # then this is the training subject
                if int(wildcards.sess_num) > 0 or int(wildcards.run_num):
                    raise Exception("only session 0 and run 0 allowed for sub-training!")
                if 'training' not in wildcards.model_name:
                    raise Exception("only training models allowed for sub-training!")
                else:
                    # if it is the training model, then the stimuli description
                    # has already been restricted to only the values we want
                    ref_image_idx = [0, 1]
                ref_image_to_include = stim_df.image_name.unique()[ref_image_idx]
            stim_df = stim_df.query("image_name in @ref_image_to_include")
            # we might have something after the - (like downsample-2), which
            # we don't want to include
            comp = 'met_v_' + wildcards.comp.split('-')[0]
            idx = fov.stimuli.generate_indices_split(stim_df, params.seed, comp, n_repeats=12)
            np.save(output[0], idx)
SnakeMake From line 1138 of main/Snakefile
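The index rotation above is easiest to see by running it for each of the five runs; every image appears in exactly three runs:

    idx = list(range(5))
    for r in range(5):
        print(r, idx[r:r+3] + idx[:max(0, r-2)])
    # 0 [0, 1, 2]
    # 1 [1, 2, 3]
    # 2 [2, 3, 4]
    # 3 [3, 4, 0]
    # 4 [4, 0, 1]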
run:
    import foveated_metamers as fov
    import numpy as np
    import pandas as pd
    import contextlib
    with open(log[0], 'w', buffering=1) as log_file:
        with contextlib.redirect_stdout(log_file), contextlib.redirect_stderr(log_file):
            df = pd.read_csv(input[0])
            idx = np.load(input[1])
            expt_df = fov.analysis.create_experiment_df_split(df, idx)
            np.save(output[0], expt_df.correct_response.values)
SnakeMake From line 1195 of main/Snakefile
shell:
    'echo "This is a temporary file used by Snakemake to create all run index .npy files. It is otherwise unused." > {output}'
SnakeMake From line 1224 of main/Snakefile
run:
    import foveated_metamers as fov
    import numpy as np
    import pandas as pd
    import re
    import contextlib
    with open(log[0], 'w', buffering=1) as log_file:
        with contextlib.redirect_stdout(log_file), contextlib.redirect_stderr(log_file):
            stim_df = pd.read_csv(input[0])
            idx = np.load(input[1])
            trials = fov.analysis.summarize_trials(input[2])
            fig = fov.analysis.plot_timing_info(trials, wildcards.subject,
                                                wildcards.sess_num,
                                                wildcards.run_num)
            fig.savefig(output[1], bbox_inches='tight')
            df = fov.analysis.create_experiment_df_split(stim_df, idx)
            if len(wildcards.comp.split('-')) > 1:
                df['trial_type'] = df.trial_type.apply(lambda x: x+'-'+wildcards.comp.split('-')[1])
            df = fov.analysis.add_response_info(df, trials, wildcards.subject,
                                                wildcards.sess_num, wildcards.run_num)
            if wildcards.ecc_mask:
                mask_idx = int(wildcards.ecc_mask.split('-')[1])
                ecc_mask_df = pd.read_csv(input[3]).set_index('window_n')
                ecc_mask_df = ecc_mask_df.loc[mask_idx]
                df['min_ecc'] = ecc_mask_df.min_eccentricity
                # the outer-most mask in ecc_mask_df will have
                # max_eccentricity larger than the actual image (which is
                # equivalent to saying that the mask hasn't "turned off" by
                # the edge of the image). for ease, we don't change max_ecc
                # in that case.
                if ecc_mask_df.max_eccentricity < df.max_ecc.unique()[0]:
                    df['max_ecc'] = ecc_mask_df.max_eccentricity
            df.to_csv(output[0], index=False)
SnakeMake From line 1247 of main/Snakefile
run:
    import foveated_metamers as fov
    import pandas as pd
    import contextlib
    with open(log[0], 'w', buffering=1) as log_file:
        with contextlib.redirect_stdout(log_file), contextlib.redirect_stderr(log_file):
            expt_df = pd.concat([pd.read_csv(i) for i in input])
            expt_df.to_csv(output[0], index=False)
            g = fov.figures.performance_plot(expt_df, hue='subject_name',
                                             height=2.5, curve_fit=True,
                                             style='trial_type')
            g.fig.savefig(output[1], bbox_inches='tight')
            g = fov.figures.run_length_plot(expt_df, hue='subject_name')
            g.fig.savefig(output[2], bbox_inches='tight')
SnakeMake From line 1303 of main/Snakefile
run:
    import foveated_metamers as fov
    import pandas as pd
    import contextlib
    with open(log[0], 'w', buffering=1) as log_file:
        with contextlib.redirect_stdout(log_file), contextlib.redirect_stderr(log_file):
            stim_df = pd.read_csv(input[-1])
            expt_df = pd.read_csv(input[0])
            g = fov.figures.compare_loss_and_performance_plot(expt_df, stim_df, x=wildcards.x)
            g.fig.savefig(output[0], bbox_inches='tight')
            g = fov.figures.compare_loss_and_performance_plot(expt_df, stim_df, x=wildcards.x, col_wrap=None, row='subject_name')
            g.fig.savefig(output[1], bbox_inches='tight')
            g = fov.figures.compare_loss_and_performance_plot(expt_df, stim_df, x=wildcards.x, col=None, plot_kind='line',
                                                              height=5, logscale_xaxis=(wildcards.x == 'loss'))
            g.fig.savefig(output[2], bbox_inches='tight')
SnakeMake From line 1338 of main/Snakefile
run:
    import foveated_metamers as fov
    import numpy as np
    import contextlib
    with open(log[0], 'w', buffering=1) as log_file:
        with contextlib.redirect_stdout(log_file), contextlib.redirect_stderr(log_file):
            if 'V1' in wildcards.model_name:
                scaling = config['V1']['scaling']
                s0 = .08
            elif 'RGC' in wildcards.model_name:
                scaling = config['RGC']['scaling']
                s0 = .01
            simul = fov.mcmc.simulate_dataset(s0, 5, num_subjects=4,
                                              trial_types=1, num_images=20,
                                              num_trials=36)
            obs = simul.observed_responses.copy().astype(np.float32)
            # block out simulated data like our actual data is blocked out
            obs[:, :, :len(simul.subject_name):2, 10:15] = np.nan
            obs[:, :, 1:len(simul.subject_name):2, 15:] = np.nan
            simul['observed_responses'] = obs
            simul = simul.to_dataframe().reset_index()
            simul = simul.rename(columns={'observed_responses': 'hit_or_miss_numeric'})
            simul['model'] = f'simulated_{wildcards.model_name}'
            simul.to_csv(output[0], index=False)
SnakeMake From line 1365 of main/Snakefile
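The NaN blocking above mirrors the experimental design, in which alternating subjects are shown complementary subsets of images. In miniature (the array shape here is illustrative, with subjects on the third axis and images on the last):

    import numpy as np

    obs = np.zeros((1, 36, 4, 20), dtype=np.float32)
    obs[:, :, 0::2, 10:15] = np.nan  # even-numbered subjects never see images 10-14
    obs[:, :, 1::2, 15:] = np.nan    # odd-numbered subjects never see images 15-19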
run:
    import contextlib
    import re
    import foveated_metamers as fov
    import pandas as pd
    import jax
    with open(log[0], 'w', buffering=1) as log_file:
        with contextlib.redirect_stdout(log_file), contextlib.redirect_stderr(log_file):
            print(f"Running on {jax.device_count()} cpus!")
            dataset = fov.mcmc.assemble_dataset_from_expt_df(pd.read_csv(input[0]))
            mcmc_model = wildcards.mcmc_model
            interact_sd = None
            if 'interactions' in mcmc_model:
                try:
                    interact_sd = float(re.findall('partially-pooled-interactions-([.0-9]+)', mcmc_model)[0])
                except IndexError:
                    interact_sd = .1
                mcmc_model = 'partially-pooled-interactions'
            mcmc = fov.mcmc.run_inference(dataset, mcmc_model,
                                          float(wildcards.step_size),
                                          int(wildcards.num_draws),
                                          int(wildcards.num_chains),
                                          int(wildcards.num_warmup),
                                          int(wildcards.seed),
                                          float(wildcards.accept_prob),
                                          int(wildcards.tree_depth),
                                          interact_sd=interact_sd)
            # want to have a different seed for constructing the inference
            # data object than we did for inference itself
            inf_data = fov.mcmc.assemble_inf_data(mcmc, dataset,
                                                  mcmc_model,
                                                  int(wildcards.seed)+1,
                                                  interact_sd=interact_sd)
            inf_data.to_netcdf(output[0])
            # and yet another seed for the extended inference data object
            inf_data_extended = fov.mcmc.assemble_inf_data(mcmc, dataset,
                                                           mcmc_model,
                                                           int(wildcards.seed)+10,
                                                           extend_scaling=True,
                                                           interact_sd=interact_sd)
            inf_data_extended.to_netcdf(output[1])
SnakeMake From line 1409 of main/Snakefile
run:
    import foveated_metamers as fov
    import arviz as az
    import pandas as pd
    import contextlib
    with open(log[0], 'w', buffering=1) as log_file:
        with contextlib.redirect_stdout(log_file), contextlib.redirect_stderr(log_file):
            inf_data = az.from_netcdf(input[0])
            if wildcards.plot_type == 'post-pred-check':
                print("Creating posterior predictive check.")
                fig = fov.figures.posterior_predictive_check(inf_data, col='subject_name', row='image_name', height=1.5,
                                                             style='trial_type', hue='distribution')
            elif wildcards.plot_type == 'performance':
                print("Creating performance plot.")
                fig = fov.figures.posterior_predictive_check(inf_data,
                                                             hue='subject_name',
                                                             col='image_name',
                                                             height=2.5,
                                                             col_wrap=5,
                                                             style='trial_type')
            elif wildcards.plot_type == 'diagnostics':
                print("Creating MCMC diagnostics plot.")
                fig = fov.figures.mcmc_diagnostics_plot(inf_data)
            elif 'param-corr' in wildcards.plot_type:
                print("Creating MCMC parameter correlations plot.")
                fig = fov.figures.mcmc_parameter_correlation(inf_data,
                                                             wildcards.plot_type.split('_')[1])
            elif wildcards.plot_type == "param-avg-method":
                print("Creating MCMC parameter average comparison plot.")
                df = fov.mcmc.inf_data_to_df(inf_data, 'parameter grouplevel means',
                                             query_str="distribution=='posterior'", hdi=.95)
                df['avg_method'] = 'individual curves + avg'
                tmp = fov.mcmc.inf_data_to_df(inf_data, 'parameter grouplevel means Heiko method',
                                              query_str="distribution=='posterior'", hdi=.95)
                tmp['avg_method'] = 'overall effects + combine'
                df = pd.concat([df, tmp]).reset_index(drop=True)
                fig = fov.figures.psychophysical_grouplevel_means(df, style=['avg_method', 'trial_type'])
            elif wildcards.plot_type == 'psychophysical-params':
                print("Creating psychophysical parameters plot.")
                fig = fov.figures.psychophysical_curve_parameters(inf_data,
                                                                  rotate_xticklabels=True,
                                                                  aspect=3,
                                                                  height=5,
                                                                  style='trial_type')
            elif wildcards.plot_type == 'pairplot':
                print("Creating parameter pairplot.")
                fig = fov.figures.parameter_pairplot(inf_data, hue='subject_name')
            elif wildcards.plot_type == 'params':
                print("Creating parameter distribution plot.")
                fig = fov.figures.partially_pooled_parameters(inf_data, height=4, aspect=2.5,
                                                              rotate_xticklabels=True)
            elif wildcards.plot_type == 'interaction-params':
                print("Creating interaction parameter distribution plot.")
                fig = fov.figures.interaction_parameters(inf_data)
            elif wildcards.plot_type == 'metaparams':
                print("Creating metaparameter distribution plot.")
                fig = fov.figures.partially_pooled_metaparameters(inf_data, height=5)
            elif wildcards.plot_type == 'grouplevel':
                print("Creating parameter grouplevel means distribution plot.")
                fig = fov.figures.psychophysical_grouplevel_means(inf_data)
            else:
                raise Exception(f"Don't know how to handle plot_type {wildcards.plot_type}!")
            fig.savefig(output[0], bbox_inches='tight')
SnakeMake From line 1471 of main/Snakefile
run:
    import foveated_metamers as fov
    import arviz as az
    import re
    import contextlib
    with open(log[0], 'w', buffering=1) as log_file:
        with contextlib.redirect_stdout(log_file), contextlib.redirect_stderr(log_file):
            models = {}
            for i in input:
                inf = az.from_netcdf(i)
                name = re.findall('mcmc_([a-z-]+)_step', i)[0]
                models[name] = inf
            comp_df = az.compare(models, ic=wildcards.ic)
            ax = az.plot_compare(comp_df)
            ax.figure.savefig(output[1], bbox_inches='tight')
            # model label is found in the index of the df, so need to grab it back
            comp_df.reset_index().to_csv(output[0], index=False)
SnakeMake From line 1555 of main/Snakefile
run:
    import foveated_metamers as fov
    import arviz as az
    import pandas as pd
    import contextlib
    with open(log[0], 'w', buffering=1) as log_file:
        with contextlib.redirect_stdout(log_file), contextlib.redirect_stderr(log_file):
            if wildcards.plot_type == 'psychophysical-params':
                df_type = 'psychophysical curve parameters'
            elif wildcards.plot_type == 'psychophysical-grouplevel':
                df_type = 'parameter grouplevel means'
            df = []
            for i in input:
                inf = az.from_netcdf(i)
                df.append(fov.mcmc.inf_data_to_df(inf, df_type,
                                                  query_str="distribution=='posterior'", hdi=.95))
            df = pd.concat(df)
            if wildcards.plot_type == 'psychophysical-params':
                fig = fov.figures.psychophysical_curve_parameters(df, style=['mcmc_model_type',
                                                                             'trial_type'],
                                                                  row='trial_type',
                                                                  height=5, aspect=3,
                                                                  rotate_xticklabels=True)
            elif wildcards.plot_type == 'psychophysical-grouplevel':
                fig = fov.figures.psychophysical_grouplevel_means(df, style=['mcmc_model_type', 'trial_type'])
            fig.savefig(output[0], bbox_inches='tight')
SnakeMake From line 1590 of main/Snakefile
run:
    import plenoptic as po
    import foveated_metamers as fov
    import contextlib
    import numpy as np
    import pyrtools as pt
    import os.path as op
    import pandas as pd
    with open(log[0], 'w', buffering=1) as log_file:
        with contextlib.redirect_stdout(log_file), contextlib.redirect_stderr(log_file):
            images = po.to_numpy(po.load_images(input)).squeeze()
            df = []
            for n, im, out in zip(input, images, output[1:]):
                # 7th pyramid scale is dominated by the edge of the picture
                hg, tmp = fov.statistics.heterogeneity(im, pyramid_height=6)
                n = op.split(n)[-1].split('_')
                if 'symmetric' in n:
                    n = '_'.join(n[:2])
                else:
                    n = n[0]
                tmp['image'] = n
                df.append(tmp)
                print(n, out)
                # this pads out the arrays so we can plot them all on
                # imshow
                pads = []
                for i, h in enumerate(hg[::-1]):
                    ideal_shape = 2**i * np.array(hg[-1].shape)
                    pad = []
                    for x in ideal_shape - h.shape:
                        pad.append((np.ceil(x/2).astype(int),
                                    np.floor(x/2).astype(int)))
                    pads.append(pad)
                # need to reverse the order, since we construct it backwards
                pads = pads[::-1]
                hg = [np.pad(h, pads[i]) for i, h in enumerate(hg)]
                fig = pt.imshow(hg, zoom=.125,
                                title=[f'heterogeneity {n}\nscale {i}' for i in range(len(hg))])
                fig.savefig(out, bbox_inches='tight')
            df = pd.concat(df).reset_index(drop=True)
            df.to_csv(output[0], index=False)
SnakeMake From line 1630 of main/Snakefile
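The padding logic above can be checked on its own: each finer scale is padded out to 2**i times the coarsest scale's shape so pt.imshow can display all scales at matched zooms. A minimal sketch with made-up shapes:

    import numpy as np

    hg = [np.ones((33, 49)), np.ones((17, 25)), np.ones((9, 13))]  # fine to coarse
    pads = []
    for i, h in enumerate(hg[::-1]):
        ideal_shape = 2**i * np.array(hg[-1].shape)
        pad = [(int(np.ceil(x / 2)), int(np.floor(x / 2))) for x in ideal_shape - h.shape]
        pads.append(pad)
    pads = pads[::-1]  # built coarse to fine, so reverse to match hg's order
    hg = [np.pad(h, pads[i]) for i, h in enumerate(hg)]
    print([h.shape for h in hg])  # [(36, 52), (18, 26), (9, 13)]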
run:
    import foveated_metamers as fov
    from collections import OrderedDict
    import re
    import xarray
    import contextlib
    import os
    with open(log[0], 'w', buffering=1) as log_file:
        with contextlib.redirect_stdout(log_file), contextlib.redirect_stderr(log_file):
            if wildcards.model_name.startswith('RGC') and wildcards.comp.startswith('met'):
                met_ref_imgs = ['llama', 'highway_symmetric', 'rocks', 'boats', 'gnarled']
            else:
                met_ref_imgs = LINEAR_IMAGES
            # make sure this is in the same order as LINEAR_IMAGES, whatever it is
            met_ref_imgs = sorted(met_ref_imgs, key=lambda x: LINEAR_IMAGES.index(x))
            seeds = set([int(re.findall(r'seed-(\d+)_', i)[0][-1]) for i in input if 'seed' in i])
            scalings = set([float(re.findall(r'scaling-([\d.]+)', i)[0])
                            for i in input if 'scaling' in i])
            # grab spectra for reference images
            ims = [i for i in input if 'scaling' not in i and 'seed' not in i]
            metadata = OrderedDict(model=wildcards.model_name, trial_type=f'met_v_{wildcards.comp}')
            ims = sorted(ims, key=lambda x: LINEAR_IMAGES.index([i for i in LINEAR_IMAGES if i in x][0]))
            assert len(ims) == len(LINEAR_IMAGES), f"Have too many images! Expected {len(LINEAR_IMAGES)}, but got {ims}"
            ref_image_spectra = fov.statistics.image_set_amplitude_spectra(ims, LINEAR_IMAGES, metadata)
            ref_image_spectra = ref_image_spectra.rename({'sf_amplitude': 'ref_image_sf_amplitude',
                                                          'orientation_amplitude': 'ref_image_orientation_amplitude'})
            spectra = []
            for scaling in scalings:
                tmp_ims = [i for i in input if len(re.findall(f'scaling-{scaling}{os.sep}', i)) == 1]
                tmp_spectra = []
                for seed in seeds:
                    # grab spectra for all images with matching seed_n and scaling.
                    metadata = OrderedDict(model=wildcards.model_name, trial_type=f'met_v_{wildcards.comp}',
                                           scaling=scaling, seed_n=seed)
                    ims = [i for i in tmp_ims if len(re.findall(rf'seed-\d*{seed}_', i)) == 1]
                    ims = sorted(ims, key=lambda x: LINEAR_IMAGES.index([i for i in LINEAR_IMAGES if i in x][0]))
                    assert len(ims) == len(met_ref_imgs), f"Have too many images! Expected {len(met_ref_imgs)}, but got {ims}"
                    tmp_spectra.append(fov.statistics.image_set_amplitude_spectra(ims, met_ref_imgs, metadata))
                spectra.append(xarray.concat(tmp_spectra, 'seed_n'))
            spectra = xarray.concat(spectra, 'scaling')
            spectra = xarray.merge([spectra.rename({'sf_amplitude': 'metamer_sf_amplitude',
                                                    'orientation_amplitude': 'metamer_orientation_amplitude'}),
                                    ref_image_spectra])
            spectra.to_netcdf(output[0])
SnakeMake From line 1686 of main/Snakefile
run:
    import foveated_metamers as fov
    import xarray
    import contextlib
    with open(log[0], 'w', buffering=1) as log_file:
        with contextlib.redirect_stdout(log_file), contextlib.redirect_stderr(log_file):
            ds = xarray.load_dataset(input[0])
            if wildcards.amplitude_type == 'sf':
                g = fov.figures.amplitude_spectra(ds)
            elif wildcards.amplitude_type == 'orientation':
                g = fov.figures.amplitude_orientation(ds)
            elif wildcards.amplitude_type == 'orientation-demeaned':
                g = fov.figures.amplitude_orientation(ds, demean=True)
            g.savefig(output[0], bbox_inches='tight')
SnakeMake From line 1744 of main/Snakefile
run:
    import foveated_metamers as fov
    import contextlib
    with open(log[0], 'w', buffering=1) as log_file:
        with contextlib.redirect_stdout(log_file), contextlib.redirect_stderr(log_file):
            fig, param, data = fov.simulate.test_optimization(float(wildcards.a0),
                                                              float(wildcards.s0),
                                                              n_opt=int(wildcards.n_seeds),
                                                              max_iter=int(wildcards.max_iter))
            fig.savefig(output[0], bbox_inches='tight')
            param.to_csv(output[1], index=False)
            data.to_csv(output[2], index=False)
SnakeMake From line 1769 of main/Snakefile
run:
    import foveated_metamers as fov
    import contextlib
    with open(log[0], 'w', buffering=1) as log_file:
        with contextlib.redirect_stdout(log_file), contextlib.redirect_stderr(log_file):
            fig, param, data = fov.simulate.test_num_trials(int(wildcards.n_trials), int(wildcards.n_boots),
                                                            float(wildcards.a0), float(wildcards.s0),
                                                            max_iter=int(wildcards.max_iter))
            fig.savefig(output[0], bbox_inches='tight')
            param.to_csv(output[1], index=False)
            data.to_csv(output[2], index=False)
SnakeMake From line 1792 of main/Snakefile
run:
    import foveated_metamers as fov
    import seaborn as sns
    import pandas as pd
    import contextlib
    with open(log[0], 'w', buffering=1) as log_file:
        with contextlib.redirect_stdout(log_file), contextlib.redirect_stderr(log_file):
            df = pd.concat([pd.read_csv(f) for f in input])
            font_scale = {'poster': 1.7}.get(wildcards.context, 1)
            with sns.plotting_context(wildcards.context, font_scale=font_scale):
                g = fov.figures.simulate_num_trials(df)
                g.fig.savefig(output[0], bbox_inches='tight')
SnakeMake From line 1815 of main/Snakefile
run:
    import foveated_metamers as fov
    import seaborn as sns
    import contextlib
    import re
    with open(log[0], 'w', buffering=1) as log_file:
        with contextlib.redirect_stdout(log_file), contextlib.redirect_stderr(log_file):
            font_scale = {'poster': 1.7}.get(wildcards.context, 1)
            max_ecc = float(re.findall('em-([0-9.]+)_', input[0])[0])
            with sns.plotting_context(wildcards.context, font_scale=font_scale):
                scaling = {MODELS[0]: config['RGC']['scaling'],
                           MODELS[1]: config['V1']['scaling']}[wildcards.model_name]
                fig = fov.figures.scaling_comparison_figure(wildcards.model_name,
                    wildcards.image_name, scaling, wildcards.seed, max_ecc=max_ecc)
                fig.savefig(output[0], bbox_inches='tight')
SnakeMake From line 1843 of main/Snakefile
run:
    import foveated_metamers as fov
    import seaborn as sns
    import contextlib
    import torch
    with open(log[0], 'w', buffering=1) as log_file:
        with contextlib.redirect_stdout(log_file), contextlib.redirect_stderr(log_file):
            font_scale = {'poster': 1.7}.get(wildcards.context, 1)
            min_ecc = config['DEFAULT_METAMERS']['min_ecc']
            max_ecc = config['DEFAULT_METAMERS']['max_ecc']
            size = [int(i) for i in config['IMAGE_NAME']['size'].split(',')]
            image = torch.rand((1, 1, *size))
            with sns.plotting_context(wildcards.context, font_scale=font_scale):
                # remove the normalizing aspect, since we don't need it here
                model, _, _, _ = fov.create_metamers.setup_model(wildcards.model_name.replace('_norm', ''),
                                                                 float(wildcards.scaling),
                                                                 image, min_ecc, max_ecc, params.cache_dir)
                fig = fov.figures.pooling_window_area(model.PoolingWindows)
                fig.savefig(output[0], bbox_inches='tight')
SnakeMake From line 1874 of main/Snakefile
run:
    import foveated_metamers as fov
    import torch
    import contextlib
    with open(log[0], 'w', buffering=1) as log_file:
        with contextlib.redirect_stdout(log_file), contextlib.redirect_stderr(log_file):
            size = [int(i) for i in wildcards.size.split(',')]
            image = torch.rand((1, 1, *size))
            model, _, _, _ = fov.create_metamers.setup_model(f'RGC_{wildcards.window_type}',
                                                             float(wildcards.scaling),
                                                             image,
                                                             float(wildcards.min_ecc),
                                                             float(wildcards.max_ecc),
                                                             params.cache_dir)
            window = fov.utils.grab_single_window(model.PoolingWindows)
            torch.save(window, output[0])
SnakeMake From line 1912 of main/Snakefile
run:
    import foveated_metamers as fov
    import contextlib
    import sys
    import torch
    import imageio
    import matplotlib.pyplot as plt
    sys.path.append('extra_packages/pooling-windows/')
    import pooling
    with open(log[0], 'w', buffering=1) as log_file:
        with contextlib.redirect_stdout(log_file), contextlib.redirect_stderr(log_file):
            style, _ = fov.style.plotting_style(wildcards.context)
            plt.style.use(style)
            image = fov.utils.convert_im_to_float(imageio.imread(input[0]))
            window = torch.load(input[1])
            # this is the same computation to get the window_max_amplitude
            # from the creation of the PoolingWindows attribute (but
            # asserting that the gaussian width is 1, which we guarantee
            # elsewhere). this way we don't need to load in the
            # corresponding PoolingWindows object (which may be huge) to
            # retrieve this value
            if 'gaussian' in wildcards.model_name:
                window_max_amplitude = (1 / (1 * pooling.pooling.GAUSSIAN_SUM)) ** 2
            elif 'cosine' in wildcards.model_name:
                window_max_amplitude = 1
            fig = fov.figures.pooling_window_example(window, image, vrange=(0, 1),
                                                     linewidths=float(wildcards.lw)*style['lines.linewidth'],
                                                     target_amp=window_max_amplitude / 2)
            fig.savefig(output[0])
SnakeMake From line 1975 of main/Snakefile
run:
    import foveated_metamers as fov
    import seaborn as sns
    import contextlib
    import numpy as np
    import pandas as pd
    with open(log[0], 'w', buffering=1) as log_file:
        with contextlib.redirect_stdout(log_file), contextlib.redirect_stderr(log_file):
            font_scale = 1
            stim = np.load(input[0])
            stim_df = pd.read_csv(input[1])
            with sns.plotting_context('paper', font_scale=font_scale):
                fig, errors = fov.figures.synthesis_pixel_diff(stim, stim_df, float(wildcards.scaling))
                fig.savefig(output[0], bbox_inches='tight')
                np.save(output[1], errors)
SnakeMake From line 2020 of main/Snakefile
run:
    import foveated_metamers as fov
    import seaborn as sns
    import contextlib
    import numpy as np
    import pyrtools as pt
    import pandas as pd
    with open(log[0], 'w', buffering=1) as log_file:
        with contextlib.redirect_stdout(log_file), contextlib.redirect_stderr(log_file):
            font_scale = 1
            scaling = params.model_dict['scaling']
            errors = np.zeros((len(scaling),
                               *[int(i) for i in config['IMAGE_NAME']['size'].split(',')]),
                              dtype=np.float32)
            for i, f in enumerate(input):
                errors[i] = np.load(f).mean(0)
            with sns.plotting_context('paper', font_scale=font_scale):
                fig = pt.imshow([e for e in errors], zoom=.5, col_wrap=4,
                                title=[f'scaling {sc}' for sc in scaling])
                fig.suptitle('Pixelwise squared errors, averaged across images\n',
                             va='bottom', fontsize=fig.axes[0].title.get_fontsize()*1.25)
                fig.savefig(output[0], bbox_inches='tight')
SnakeMake From line 2053 of main/Snakefile
run:
    import foveated_metamers as fov
    import pandas as pd
    import contextlib
    import matplotlib.pyplot as plt
    with open(log[0], 'w', buffering=1) as log_file:
        with contextlib.redirect_stdout(log_file), contextlib.redirect_stderr(log_file):
            expt_df = pd.read_csv(input[0])
            style, fig_width = fov.style.plotting_style(wildcards.context)
            plt.style.use(style)
            col = 'image_name'
            hue = 'subject_name'
            height = fig_width / 6
            if wildcards.plot_focus == '_focus-image':
                hue = 'model'
            elif wildcards.plot_focus == '_focus-subject':
                col = None
                height = fig_width / 3
            expt_df.model = expt_df.model.map(lambda x: {'RGC': 'Retina'}.get(x.split('_')[0],
                                                                              x.split('_')[0]))
            expt_df['model'] = expt_df['model'].map(fov.plotting.MODEL_PLOT)
            expt_df['trial_type'] = expt_df['trial_type'].map(fov.plotting.TRIAL_TYPE_PLOT)
            g = fov.figures.performance_plot(expt_df, hue=hue,
                                             height=height, col=col,
                                             curve_fit=True,
                                             style='trial_type')
            if wildcards.context == 'paper':
                g.fig.suptitle('')
            g.fig.savefig(output[0], bbox_inches='tight')
SnakeMake From line 2090 of main/Snakefile
run:
    import foveated_metamers as fov
    import contextlib
    import matplotlib.pyplot as plt
    import arviz as az
    import warnings
    with open(log[0], 'w', buffering=1) as log_file:
        with contextlib.redirect_stdout(log_file), contextlib.redirect_stderr(log_file):
            inf_data = az.from_netcdf(input[0])
            style, fig_width = fov.style.plotting_style(wildcards.context)
            plt.style.use(style)
            if wildcards.plot_type == 'params-grouplevel':
                inf_data = fov.mcmc.inf_data_to_df(inf_data, 'parameter grouplevel means',
                                                   query_str="distribution=='posterior'", hdi=.95)
                inf_data['trial_type'] = inf_data['trial_type'].map(fov.plotting.TRIAL_TYPE_PLOT)
                col = 'level'
                fig = fov.figures.psychophysical_grouplevel_means(inf_data,
                                                                  height=fig_width/4,
                                                                  col=col)
                for ax in fig.axes:
                    ax.set_title(ax.get_title().replace('a0', 'gain').replace('s0', 'critical scaling'))
                fig.suptitle(fig._suptitle.get_text(), y=1.05)
            elif 'performance' in wildcards.plot_type:
                col = 'image_name'
                hue = 'subject_name'
                style = 'trial_type'
                height = fig_width / 6
                kwargs = {}
                if 'focus' in wildcards.plot_type:
                    inf_data = fov.mcmc.inf_data_to_df(inf_data, 'predictive grouplevel means', hdi=.95)
                    if 'focus-image' in wildcards.plot_type:
                        if 'one-ax' in wildcards.plot_type:
                            assert inf_data.model.nunique() == 1, "focus-image_one-ax currently only works with one model!"
                            pal = {k: fov.plotting.get_palette('model', inf_data.model.unique())[inf_data.model.unique()[0]]
                                   for k in fov.plotting.get_palette('image_name')}
                            hue = 'image_name'
                            col = None
                            height = fig_width / 2
                            kwargs['palette'] = pal
                        else:
                            hue = 'model'
                        inf_data = inf_data.query("level=='image_name'").rename(
                            columns={'dependent_var': 'image_name'})
                        inf_data['subject_name'] = 'all subjects'
                    elif 'focus-outlier' in wildcards.plot_type:
                        # want to highlight nyc and llama by changing their color ...
                        pal = fov.plotting.get_palette('image_name_focus-outlier', inf_data.model.unique())
                        # and plotting them on top of other lines
                        kwargs['hue_order'] = sorted(fov.plotting.get_palette('image_name').keys())
                        zorder = [2 if k in ['nyc', 'llama'] else 1 for k in kwargs['hue_order']]
                        hue = 'image_name'
                        col = None
                        height = fig_width / 2
                        inf_data = inf_data.query("level=='image_name'").rename(
                            columns={'dependent_var': 'image_name'})
                        inf_data['subject_name'] = 'all subjects'
                        kwargs['palette'] = pal
                        kwargs['hue_kws'] = {'zorder': zorder}
                    elif 'focus-subject' in wildcards.plot_type:
                        col = None
                        if 'one-ax' in wildcards.plot_type:
                            height = fig_width / 2
                            assert inf_data.model.nunique() == 1, "focus-subject_one-ax currently only works with one model!"
                            pal = {k: fov.plotting.get_palette('model', inf_data.model.unique())[inf_data.model.unique()[0]]
                                   for k in fov.plotting.get_palette('subject_name')}
                            kwargs['palette'] = pal
                        else:
                            height = fig_width / 3
                        inf_data = inf_data.query("level=='subject_name'").rename(
                            columns={'dependent_var': 'subject_name'})
                        inf_data['image_name'] = 'all images'
                elif 'all' in wildcards.plot_type:
                    kwargs['col_wrap'] = 5
                    kwargs['hdi'] = 0
                    kwargs['markersize'] = .5 * plt.rcParams['lines.markersize']
                    # get rid of those lines that we don't have observations for
                    inf_data = fov.mcmc._drop_no_observations(inf_data, fov.mcmc.inf_data_to_df(inf_data, 'predictive', hdi=0))

                inf_data['model'] = inf_data['model'].map(fov.plotting.MODEL_PLOT)
                inf_data['trial_type'] = inf_data['trial_type'].map(fov.plotting.TRIAL_TYPE_PLOT)
                g = fov.figures.posterior_predictive_check(inf_data,
                                                           col=col,
                                                           hue=hue,
                                                           style=style,
                                                           height=height,
                                                           increase_size=False,
                                                           **kwargs)
                fig = g.fig
            else:
                raise Exception(f"Don't know how to handle plot type {wildcards.plot_type}!")
            if 'focus-outlier' in wildcards.plot_type or 'one-ax' in wildcards.plot_type:
                # don't need the legend here, it's not doing much
                warnings.warn("Removing legend, because it's not doing much.")
                fig.legends[0].remove()
                if 'V1' in wildcards.model_name:
                    warnings.warn("Removing ylabel so we don't have redundant labels when composing figure")
                    fig.axes[0].set_ylabel('')
                if wildcards.comp == 'ref':
                    warnings.warn("Removing xlabel so we don't have redundant labels when composing figure")
                    fig.axes[0].set_xlabel('')
            elif wildcards.plot_type == 'performance-all':
                fig.subplots_adjust(right=.95)
                # this only needs to be done on the first axis, because
                # their ticks are yoked together
                fig.axes[0].set_xticks(fig.axes[0].get_xticks()[::2])
            if wildcards.context == 'paper':
                fig.suptitle('')
                for i, ax in enumerate(fig.axes):
                    # also need to move the titles down a bit
                    if wildcards.plot_type == 'performance-all':
                        # titles on this one need to be a bit higher than the rest
                        ax.set_title(ax.get_title(), y=.91)
                    else:
                        ax.set_title(ax.get_title(), y=.85)
                    # still running into this issue
                    # https://github.com/mwaskom/seaborn/issues/2293 with
                    # things about this size, so we manually set the
                    # xticklabels invisible
                    if col == 'image_name' and i <= 14:
                        for xticklab in ax.get_xticklabels():
                            xticklab.set_visible(False)
            fig.savefig(output[0], bbox_inches='tight', transparent=True)
run:
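    # plot group-level parameter means (critical scaling and max d') from
    # several MCMC fits on the same axes, with a linear or log y-axis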
    import foveated_metamers as fov
    import contextlib
    import pandas as pd
    import matplotlib.pyplot as plt
    import arviz as az
    import warnings
    with open(log[0], 'w', buffering=1) as log_file:
        with contextlib.redirect_stdout(log_file), contextlib.redirect_stderr(log_file):
            style, fig_width = fov.style.plotting_style(wildcards.context)
            plt.style.use(style)
            df = []
            for i in input:
                inf = az.from_netcdf(i)
                df.append(fov.mcmc.inf_data_to_df(inf, 'parameter grouplevel means',
                                                  query_str="distribution=='posterior'", hdi=.95))
            df = pd.concat(df)
            df.model = df.model.map(fov.plotting.MODEL_PLOT)
            df.trial_type = df.trial_type.map(fov.plotting.TRIAL_TYPE_PLOT)
            fig = fov.figures.psychophysical_grouplevel_means(df, style=['mcmc_model_type', 'trial_type'],
                                                              height=fig_width/4, increase_size=False,
                                                              row_order=['s0', 'a0'])
            for i, ax in enumerate(fig.axes):
                if 'linear' == wildcards.axis_scale:
                    ax.set(yscale='linear')
                elif 'log' == wildcards.axis_scale:
                    ax.set(yscale='log')
                title = ax.get_title().replace('a0', "Max $d'$").replace('s0', 'Critical Scaling')
                title = title.split('|')[0]
                # remove title
                ax.set_title('')
                ax.set_xlabel(ax.get_xlabel().split('_')[0].capitalize())
                if i % 2 == 0:
                    ax.set_ylabel(title)
            if wildcards.yaxis_dbl == 'double':
                # need to draw so that the tick locations are set.
                fig.canvas.draw()
                # because the ylabels are longer for this one
                if wildcards.model_name == 'RGC_norm_gaussian' and wildcards.comp == 'ref':
                    pos = -.20
                else:
                    pos = -.15
                fov.plotting.add_asymptotic_performance_yaxis(fig.axes[2], position=pos)
            fig.suptitle('')
            # change the legend labels to be clearer
            leg = fig.legends[0]
            for t in leg.texts:
                t.set_text(t.get_text().replace('Trial type', 'Comparison').replace('Mcmc', 'MCMC'))
            fig.savefig(output[0], bbox_inches='tight', transparent=True)
run:
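    # compare psychophysical curve parameters pairwise across the unpooled,
    # partially-pooled, and partially-pooled-interactions MCMC models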
    import foveated_metamers as fov
    import pandas as pd
    import contextlib
    import arviz as az
    import matplotlib.pyplot as plt
    with open(log[0], 'w', buffering=1) as log_file:
        with contextlib.redirect_stdout(log_file), contextlib.redirect_stderr(log_file):
            style, fig_width = fov.style.plotting_style(wildcards.context)
            plt.style.use(style)
            df = []
            for i in input:
                inf = az.from_netcdf(i)
                tmp = fov.mcmc.inf_data_to_df(inf, 'psychophysical curve parameters',
                                              query_str="distribution=='posterior'",
                                              hdi=0)
                df.append(fov.mcmc._drop_no_observations(inf, tmp))
            df = pd.concat(df)
            df.model = df.model.map(fov.plotting.MODEL_PLOT)
            df.trial_type = df.trial_type.map(fov.plotting.TRIAL_TYPE_PLOT)
            comps = [('unpooled', 'partially-pooled'), ('unpooled', 'partially-pooled-interactions'),
                     ('partially-pooled-interactions', 'partially-pooled')]
            for i, c in enumerate(comps):
                g = fov.figures.compare_mcmc_psychophysical_params(df, c, height=fig_width/6,
                                                                   compare_method=wildcards.comp_method)
                g.fig.subplots_adjust(right=.95, hspace=.27)
                # change the legend labels to be clearer
                leg = g.fig.legends[0]
                for t in leg.texts:
                    t.set_text(t.get_text().replace('Trial type', 'Comparison'))
                if wildcards.context == 'paper':
                    for j, ax in enumerate(g.fig.axes):
                        # still running into this issue
                        # https://github.com/mwaskom/seaborn/issues/2293 with
                        # things about this size, so we manually set the
                        # xticklabels invisible
                        if j <= 14:
                            for xticklab in ax.get_xticklabels():
                                xticklab.set_visible(False)
                        ax.set_title(ax.get_title(), y=.91)
                g.savefig(output[i], bbox_inches='tight', transparent=True)
run:
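    # gather the arviz model-comparison (information criterion) csvs across
    # models and comparisons into a single figure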
    import foveated_metamers as fov
    import pandas as pd
    import re
    import contextlib
    with open(log[0], 'w', buffering=1) as log_file:
        with contextlib.redirect_stdout(log_file), contextlib.redirect_stderr(log_file):
            _, fig_width = fov.style.plotting_style(wildcards.context)
            comp_df = []
            for p in input:
                model, comp, ic = re.findall('mcmc/([A-Za-z0-9_]+)/task-split_comp-([a-z-0-9]+)/task.*_ic-([a-z]+).csv', p)[0]
                tmp = pd.read_csv(p)
                tmp['model'] = model
                tmp['trial_type'] = f"metamer_vs_{comp.replace('ref', 'reference').replace('met', 'metamer').replace('-2', '')}"
                comp_df.append(tmp)
            comp_df = pd.concat(comp_df)
            comp_df.model = comp_df.model.map(fov.plotting.MODEL_PLOT)
            comp_df.trial_type = comp_df.trial_type.map(fov.plotting.TRIAL_TYPE_PLOT)
            # the downsample title has an extra newline, which we'll remove here
            comp_df.trial_type = comp_df.trial_type.map(lambda x: x.replace('\n(', ' ('))
            aspect = 2
            height = (fig_width/comp_df.model.nunique()) / aspect
            row_order = comp_df.trial_type.unique()
            row_order = row_order[[0, 1, 4, 2, 3]]
            g = fov.figures.mcmc_arviz_compare(comp_df, row='trial_type',
                                               col='model', aspect=aspect,
                                               row_order=row_order,
                                               height=height, sharex=False)
            g.savefig(output[0], bbox_inches='tight', transparent=True)
run:
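    # plot MCMC performance or parameters for a specific focus: a single
    # subject, or one of several subsets of the comparisons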
    import foveated_metamers as fov
    import contextlib
    import re
    import pandas as pd
    import numpy as np
    import matplotlib.pyplot as plt
    import arviz as az
    import warnings
    with open(log[0], 'w', buffering=1) as log_file:
        with contextlib.redirect_stdout(log_file), contextlib.redirect_stderr(log_file):
            style, fig_width = fov.style.plotting_style(wildcards.context)
            plt.style.use(style)
            if wildcards.mcmc_plot_type == 'performance':
                df_kind = 'predictive grouplevel means'
                query_str = None
            elif 'params' in wildcards.mcmc_plot_type:
                df_kind = 'parameter grouplevel means'
                query_str = "distribution=='posterior'"
            df = []
            # use this because we may drop some of the x-values, but we
            # still want the same axes
            x_order = set()
            for f in input[:-1]:
                tmp = az.from_netcdf(f)
                if wildcards.focus.startswith('sub'):
                    subject_name = wildcards.focus.split('_')[0]
                    # each subject only sees 15 images, but we'll have
                    # parameters for all 20 for the partially pooled model
                    # because we modeled the image-level and subject-level
                    # effects separately and thus have predictions for the
                    # unseen (subject, image) pairs
                    if 'RGC_norm_gaussian' in f and 'comp-met' in f:
                        # this comparison only had one session
                        images = np.concatenate([fov.stimuli.get_images_for_session(subject_name, i,
                                                                                    'downsample' in f)
                                                 for i in range(1)])
                    else:
                        images = np.concatenate([fov.stimuli.get_images_for_session(subject_name, i,
                                                                                    'downsample' in f)
                                                 for i in range(3)])
                    x_order = x_order.union(set(tmp.posterior.subject_name.values))
                    tmp = tmp.sel(subject_name=subject_name, image_name=images)
                    # need to have subject_name as a dimension still
                    tmp.posterior = tmp.posterior.expand_dims('subject_name')
                    tmp.prior = tmp.prior.expand_dims('subject_name')
                    tmp.posterior_predictive = tmp.posterior_predictive.expand_dims('subject_name')
                    tmp.prior_predictive = tmp.prior_predictive.expand_dims('subject_name')
                    tmp.observed_data = tmp.observed_data.expand_dims('subject_name')
                df.append(fov.mcmc.inf_data_to_df(tmp,
                                                  df_kind, query_str,
                                                  hdi=.95))
            df = pd.concat(df)
            # if x_order is empty, we want it to be None
            if not x_order:
                x_order = None
            # else, make it a list and sort it
            else:
                x_order = sorted(list(x_order))
            query_str = None
            perf_query_str = None
            if wildcards.focus.startswith('sub'):
                focus = re.findall('(sub-[0-9]+)', wildcards.focus)[0]
                if 'comp-natural' in wildcards.focus:
                    query_str = "trial_type in ['metamer_vs_metamer', 'metamer_vs_reference', 'metamer_vs_metamer-natural', 'metamer_vs_reference-natural']"
                elif 'comp-downsample' in wildcards.focus:
                    query_str = "trial_type in ['metamer_vs_metamer', 'metamer_vs_reference', 'metamer_vs_metamer-downsample']"
                perf_query_str = f"level=='subject_name' & dependent_var=='{focus}'"
            elif wildcards.focus == 'comp-all':
                perf_query_str = "level=='all'"
            elif wildcards.focus == 'comp-base':
                query_str = 'trial_type in ["metamer_vs_metamer", "metamer_vs_reference"]'
                perf_query_str = 'level == "all"'
            elif wildcards.focus == 'comp-ref':
                query_str = 'trial_type in ["metamer_vs_reference"]'
                perf_query_str = 'level == "all"'
            else:
                raise Exception(f"Don't know how to handle focus {wildcards.focus}!")
            if query_str is not None:
                df = df.query(query_str)
            df['model'] = df['model'].map(fov.plotting.MODEL_PLOT)
            if wildcards.mcmc_plot_type == 'performance':
                df['trial_type'] = df['trial_type'].map(fov.plotting.TRIAL_TYPE_PLOT)
                if perf_query_str is not None:
                    df = df.query(perf_query_str)
                if not wildcards.focus.startswith('sub'):
                    df['subject_name'] = 'all subjects'
                else:
                    df = df.rename(columns={'dependent_var': 'subject_name'})
                df['image_name'] = 'all images'
                g = fov.figures.posterior_predictive_check(df, col=None,
                                                           hue='model',
                                                           style='trial_type',
                                                           height=fig_width/2.5,
                                                           aspect=2,
                                                           logscale_xaxis=True,
                                                           increase_size=False)
                g.fig.canvas.draw()
                fov.plotting.add_physiological_scaling_bars(g.ax, az.from_netcdf(input[-1]))
                fig = g.fig
                if 'line' in wildcards.focus:
                    sc = re.findall('line-scaling-([.0-9]+)', wildcards.focus)[0]
                    g.ax.axvline(float(sc), color='k')
            elif 'params' in wildcards.mcmc_plot_type:
                kwargs = {}
                mean_line = {'none': False, 'lines': 'lines-only', 'ci': True}[wildcards.mcmc_plot_type.split('-')[-1]]
                warnings.warn("Removing luminance model, metamer_vs_metamer for params plot (it's not informative)")
                df = df.query("trial_type != 'metamer_vs_metamer' or model != 'Luminance model'")
                if wildcards.focus.startswith('sub'):
                    df = df.query("level != 'subject_name'")
                    kwargs['fill_whole_xaxis'] = True
                    kwargs['aspect'] = 3.2
                    x_order = None
                df['trial_type'] = df['trial_type'].map(fov.plotting.TRIAL_TYPE_PLOT)
                fig = fov.figures.psychophysical_grouplevel_means(df,
                                                                  height=fig_width/4,
                                                                  mean_line=mean_line,
                                                                  x_order=x_order,
                                                                  increase_size=False,
                                                                  row_order=['s0', 'a0'],
                                                                  **kwargs)
                for i, ax in enumerate(fig.axes):
                    if 'linear' in wildcards.mcmc_plot_type:
                        if 'a0' in ax.get_title():
                            ylim = (0, 10)
                        elif 's0' in ax.get_title():
                            ylim = (0, .5)
                        ax.set(yscale='linear', ylim=ylim)
                    elif 'log' in wildcards.mcmc_plot_type:
                        if 'sub-00' in wildcards.focus:
                            if 'a0' in ax.get_title():
                                ylim = (9e-1, 10)
                            elif 's0' in ax.get_title():
                                ylim = (1e-2, 1e0)
                        elif 'comp-ref' in wildcards.focus:
                            if 'a0' in ax.get_title():
                                ylim = (8e-1, 10)
                            elif 's0' in ax.get_title():
                                ylim = (1e-2, 2e-1)
                        else:
                            if 'a0' in ax.get_title():
                                ylim = (2e-1, 10)
                            elif 's0' in ax.get_title():
                                ylim = (1e-2, 1e0)
                        ax.set(yscale='log', ylim=ylim)
                    title = ax.get_title().replace('a0', "Max $d'$").replace('s0', 'Critical Scaling')
                    title = title.split('|')[0]
                    # remove title
                    ax.set_title('')
                    ax.set_xlabel(ax.get_xlabel().split('_')[0].capitalize())
                    # if we only have 2 axes, both need a ylabel. If there
                    # are more than that, only the even ones do
                    if len(fig.axes) == 2 or i % 2 == 0:
                        ax.set_ylabel(title)
                if wildcards.context == 'paper':
                    warnings.warn("Removing legend, because other panel will have it.")
                    fig.legends[0].remove()
                fig.suptitle(fig._suptitle.get_text(), y=1.05)
            if wildcards.context == 'paper':
                fig.suptitle('')
                # change the legend labels to be clearer
                if len(fig.legends) > 0:
                    leg = fig.legends[0]
                    for t in leg.texts:
                        t.set_text(t.get_text().replace('Trial type', 'Comparison'))
            fig.savefig(output[0], bbox_inches='tight', transparent=True)
run:
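    # scatter the gain parameter against critical scaling for each
    # psychophysical curve's posterior median, faceted by model and comparison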
    import foveated_metamers as fov
    import contextlib
    import pandas as pd
    import matplotlib.pyplot as plt
    import seaborn as sns
    import arviz as az
    import warnings
    with open(log[0], 'w', buffering=1) as log_file:
        with contextlib.redirect_stdout(log_file), contextlib.redirect_stderr(log_file):
            style, fig_width = fov.style.plotting_style(wildcards.context)
            plt.style.use(style)
            df_kind = 'psychophysical curve parameters'
            query_str = "distribution=='posterior' & hdi==50"
            df = []
            for f in input:
                df.append(fov.mcmc.inf_data_to_df(az.from_netcdf(f), df_kind, query_str, hdi=.95))
            df = pd.concat(df)
            index_cols = [col for col in df.columns if col not in ['parameter', 'value']]
            df = pd.pivot_table(df, 'value', index_cols, 'parameter').reset_index()
            df = df.rename(columns={'a0': 'Gain', 's0': 'Critical scaling'})
            g = sns.relplot(data=df, x='Critical scaling', y='Gain', col='model', row='trial_type',
                            facet_kws={'sharex': 'col', 'sharey': 'col'})
            g.savefig(output[0], bbox_inches='tight')
run:
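    # plot behavioral performance (proportion correct vs. scaling) from the
    # summarized experiment csvs, subset according to wildcards.focus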
    import foveated_metamers as fov
    import pandas as pd
    import contextlib
    import matplotlib.pyplot as plt
    import arviz as az
    with open(log[0], 'w', buffering=1) as log_file:
        with contextlib.redirect_stdout(log_file), contextlib.redirect_stderr(log_file):
            col = None
            logscale_xaxis = True
            curve_fit = 'to_chance'
            if wildcards.focus.startswith('sub'):
                query_str = f"subject_name=='{wildcards.focus}'"
            elif wildcards.focus.startswith('comp'):
                if wildcards.focus == 'comp-base':
                    query_str = 'trial_type in ["metamer_vs_metamer", "metamer_vs_reference"]'
                elif wildcards.focus == 'comp-ref':
                    query_str = 'trial_type == "metamer_vs_reference"'
                elif wildcards.focus == 'comp-ref-natural':
                    query_str = ('trial_type in ["metamer_vs_reference-natural", "metamer_vs_reference"] & '
                                 'subject_name=="sub-00" & model == "V1_norm_s6_gaussian"')
                    col = 'image_name'
                    logscale_xaxis = False
                elif wildcards.focus == 'comp-met-natural':
                    query_str = ('trial_type in ["metamer_vs_metamer-natural", "metamer_vs_metamer"] & '
                                 'subject_name=="sub-00" & model == "V1_norm_s6_gaussian"')
                    col = 'image_name'
                    curve_fit = True
                elif wildcards.focus == 'comp-all':
                    query_str = ''
                else:
                    raise Exception(f"Don't know how to handle focus {wildcards.focus}")
            else:
                # then assume this is an image
                query_str = f"image_name.str.startswith('{wildcards.focus}')"
            if query_str:
                df = pd.concat([pd.read_csv(f).query(query_str) for f in input[:-1]])
            else:
                df = pd.concat([pd.read_csv(f) for f in input[:-1]])
            df.model = df.model.map(lambda x: {'RGC': 'Retina'}.get(x.split('_')[0],
                                                                    x.split('_')[0]))
            style, fig_width = fov.style.plotting_style(wildcards.context)
            plt.style.use(style)
            height = fig_width / 3 if col is None else fig_width / 6
            df['model'] = df['model'].map(fov.plotting.MODEL_PLOT)
            df['trial_type'] = df['trial_type'].map(fov.plotting.TRIAL_TYPE_PLOT)
            g = fov.figures.performance_plot(df, col=col,
                                             curve_fit=curve_fit,
                                             hue='model',
                                             height=height,
                                             style='trial_type',
                                             aspect=2 if col is None else 1,
                                             logscale_xaxis=logscale_xaxis)
            if col is None:
                # need to draw so that the following code can check text size
                g.fig.canvas.draw()
                fov.plotting.add_physiological_scaling_bars(g.ax, az.from_netcdf(input[-1]))
            if wildcards.context == 'paper':
                g.fig.suptitle('')
            g.fig.savefig(output[0], bbox_inches='tight')
run:
    import foveated_metamers as fov
    import pandas as pd
    import numpy as np
    import matplotlib.pyplot as plt
    import contextlib
    with open(log[0], 'w', buffering=1) as log_file:
        with contextlib.redirect_stdout(log_file), contextlib.redirect_stderr(log_file):
            stim = np.load(input[0])
            stim_df = pd.read_csv(input[1])
            style, fig_width = fov.style.plotting_style('poster')
            plt.style.use(style)
            fig = fov.figures.ref_image_summary(stim, stim_df)
            fig.savefig(output[0], bbox_inches='tight', pad_inches=0)
run:
    import subprocess
    import shutil
    import contextlib
    with open(log[0], 'w', buffering=1) as log_file:
        with contextlib.redirect_stdout(log_file), contextlib.redirect_stderr(log_file):
            shutil.copy(input[0], output[0])
            for i, im in enumerate(input[1:]):
                # we add the trailing " to make sure we only replace IMAGE1, not IMAGE10
                subprocess.call(['sed', '-i', f's|IMAGE{i+1}"|{im}"|', output[0]])
SnakeMake From line 2774 of main/Snakefile
run:
    import foveated_metamers as fov
    import matplotlib.pyplot as plt
    import contextlib
    with open(log[0], 'w', buffering=1) as log_file:
        with contextlib.redirect_stdout(log_file), contextlib.redirect_stderr(log_file):
            style, fig_width = fov.style.plotting_style(wildcards.context, figsize='half')
            plt.style.use(style)
            fig = fov.figures.max_dprime_vs_asymptotic_performance(figsize=(fig_width, fig_width))
            fig.savefig(output[0], bbox_inches='tight', transparent=True)
run:
    import foveated_metamers as fov
    import contextlib
    with open(log[0], 'w', buffering=1) as log_file:
        with contextlib.redirect_stdout(log_file), contextlib.redirect_stderr(log_file):
            fov.figures.synthesis_video(input[0], wildcards.model_name)
SnakeMake From line 2817 of main/Snakefile
run:
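    # compute distances for the synthesized images under the given distance
    # model (including the spatial-frequency-weighted observer models), at
    # every synthesis scaling value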
    import foveated_metamers as fov
    import plenoptic as po
    import torch
    import pandas as pd
    import contextlib
    with open(log[0], 'w', buffering=1) as log_file:
        with contextlib.redirect_stdout(log_file), contextlib.redirect_stderr(log_file):
            ref_image = po.load_images(input.ref_image)
            if input.norm_dict:
                norm_dict = torch.load(input.norm_dict)
            else:
                norm_dict = None
            if wildcards.model_name.startswith('Obs'):
                if wildcards.model_name == 'Obs_sfp':
                    # these values come from my spatial frequency
                    # preferences experiment, using fMRI to measure
                    # spatial frequency tuning in human V1
                    sf_params = {'sf_weighting_sigma': 2.2,
                                 'sf_weighting_slope': .12,
                                 'sf_weighting_intercept': 3.5,
                                 'sf_weighting_amplitude': 1,
                                 # this value was unmeasured in our
                                 # experiment, so I don't know what to do
                                 # with it
                                 'sf_weighting_mean_lum': 1}
                elif wildcards.model_name == 'Obs_null':
                    # these values should be exactly equivalent to the V1
                    # model, not reweighting the values at all
                    sf_params = {'sf_weighting_sigma': 1e10,
                                 'sf_weighting_slope': 0,
                                 'sf_weighting_intercept': 1,
                                 'sf_weighting_amplitude': 1,
                                 'sf_weighting_mean_lum': 1}
                else:
                    raise Exception("Don't know how to handle observer models without using sfp parameters!")
                model = fov.ObserverModel(float(wildcards.scaling), ref_image.shape[-2:],
                                          6, 3, normalize_dict=norm_dict,
                                          cache_dir=params.cache_dir,
                                          min_eccentricity=float(wildcards.min_ecc),
                                          max_eccentricity=float(wildcards.max_ecc),
                                          **sf_params)
            else:
                model = fov.create_metamers.setup_model(wildcards.model_name, float(wildcards.scaling),
                                                        ref_image, float(wildcards.min_ecc),
                                                        float(wildcards.max_ecc), params.cache_dir,
                                                        norm_dict)[0]
            synth_scaling = config[wildcards.synth_model_name.split('_')[0]]['scaling']
            met_imgs = ['llama', 'highway_symmetric', 'rocks', 'boats', 'gnarled']
            if not wildcards.synth_model_name.startswith('RGC') or any([wildcards.image_name.startswith(im) for im in met_imgs]):
                synth_scaling += config[wildcards.synth_model_name.split('_')[0]]['met_v_met_scaling']
            df = []
            for sc in synth_scaling:
                df.append(fov.distances.model_distance(model, wildcards.synth_model_name,
                                                       wildcards.image_name, sc))
            df = pd.concat(df).reset_index(drop=True)
            df['distance_model'] = wildcards.model_name
            df['distance_scaling'] = float(wildcards.scaling)
            df.to_csv(output[0], index=False)
run:
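    # combine the per-image distance csvs and plot distance as a function of
    # synthesis scaling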
    import foveated_metamers as fov
    import pandas as pd
    import contextlib
    import seaborn as sns
    with open(log[0], 'w', buffering=1) as log_file:
        with contextlib.redirect_stdout(log_file), contextlib.redirect_stderr(log_file):
            df = pd.concat([pd.read_csv(f) for f in input]).reset_index(drop=True)
            df.to_csv(output[0], index=False)
            df['synthesis_model'] = df['synthesis_model'].apply(lambda x: x.split('_')[0])
            df['distance_model'] = df['distance_model'].apply(lambda x: x.split('_')[0])
            hue_order = fov.plotting.get_order('image_name')
            g = sns.catplot(data=df, x='synthesis_scaling', y='distance', hue='ref_image',
                            col='trial_type', sharey=True, row='synthesis_model', kind='point',
                            sharex=False, col_order=['metamer_vs_reference', 'metamer_vs_metamer'],
                            hue_order=hue_order)
            for ijk, d in g.facet_data():
                ax = g.facet_axis(*ijk[:2])
                ax.set_xticklabels([f'{s:.03f}' for s in d.synthesis_scaling.unique()])
            g.set(yscale='log')
            g.savefig(output[1], bbox_inches='tight')
run:
    import foveated_metamers as fov
    import pandas as pd
    import contextlib
    with open(log[0], 'w', buffering=1) as log_file:
        with contextlib.redirect_stdout(log_file), contextlib.redirect_stderr(log_file):
            df = pd.concat([pd.read_csv(f) for f in input]).reset_index(drop=True)
            g = fov.figures.synthesis_distance_plot(df, x='metamer_vs_reference')
            g.savefig(output[0], bbox_inches='tight')
run:
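    # compute the radially-averaged MSE between the two images compared on
    # each trial, as a function of distance from the image center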
    import foveated_metamers as fov
    import pandas as pd
    import numpy as np
    import scipy.ndimage
    import pyrtools as pt
    import contextlib
    with open(log[0], 'w', buffering=1) as log_file:
        with contextlib.redirect_stdout(log_file), contextlib.redirect_stderr(log_file):
            stim = np.load(input[0])
            stim_df = pd.read_csv(input[1])
            if wildcards.model_name == 'RGC_norm_gaussian' and wildcards.comp == 'met':
                # for this model and comparison, we only had 5 images
                names = stim_df.image_name.unique()[:5].tolist()
                stim_df = stim_df.query("image_name in @names")
            # create a dummy idx, which is not randomized (that's what setting seed=None does)
            idx = fov.stimuli.generate_indices_split(stim_df, None,
                                                     f'met_v_{wildcards.comp.split("-")[0]}',
                                                     12)
            # this contains all the relevant metadata we want for this comparison
            dist_df = fov.analysis.create_experiment_df_split(stim_df, idx)
            # remove duplicates, so we only have a single row for each
            # (target image, scaling, seed). this means we're throwing out
            # 3/4 of our rows
            if wildcards.comp.startswith('ref'):
                dist_df = dist_df.drop_duplicates(['image_name', 'scaling', 'unique_seed'])
            elif wildcards.comp.startswith('met'):
                # slightly more complicated to do this for the metamer comparisons
                tmp = dist_df.apply(lambda x: sorted(set([x.image_left_1, x.image_left_2, x.image_right_1, x.image_right_2])), 1)
                dist_df['seeds'] = tmp.apply(lambda x: f'{int(x[0])},{int(x[1])}')
                dist_df = dist_df.drop_duplicates(['image_name', 'scaling', 'seeds'])
                dist_df = dist_df.drop(columns=['seeds'])
            # initialize the images we'll use to compute radial means,
            # modified from fov.statistics.amplitude_spectra
            rbin = pt.synthetic_images.polar_radius(stim.shape[-2:]).astype(int)
            # only look at the circle that can fit in the image (i.e.,
            # throw away the corners, because there are fewer pixels out
            # there and so the averages won't be comparable)
            disk = pt.synthetic_images.polar_radius(stim.shape[-2:])
            thresh = min(stim.shape[-2:])//2
            disk = disk < thresh
            rbin[~disk] = rbin.max()+1
            # now iterate through all trials and compute the mse on each of
            # them
            df = []
            for _, row in dist_df.iterrows():
                # unpack this to get the index of the stimulus on left and right, for first
                # and second image
                i = row.trial_number
                [[l1, l2], [r1, r2]] = idx[:, i]
                # need to cast these as floats else the MSE will be *way*
                # off (like 100 instead of 8600). don't do the whole stim
                # array at once because that would *drastically* increase
                # memory use.
                img1 = stim[l1].astype(float)
                if l1 == l2:
                    img2 = stim[r2].astype(float)
                else:
                    img2 = stim[l2].astype(float)
                diff = np.square(img1-img2)
                diff = scipy.ndimage.mean(diff, labels=rbin, index=np.arange(thresh-1))
                row = row.to_dict()
                row.update({'mse': diff, 'distance_pixels': np.arange(len(diff))})
                tmp = pd.DataFrame(row)
                tmp['distance_degrees'] = tmp.distance_pixels * ((2*tmp.max_ecc) / max(img1.shape[-2:]))
                df.append(tmp)
            df = pd.concat(df)
            df.to_csv(output[0], index=False)
run:
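    # plot the radially-averaged MSE, optionally restricted to an
    # eccentricity range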
    import foveated_metamers as fov
    import pandas as pd
    import numpy as np
    import matplotlib.pyplot as plt
    import contextlib
    with open(log[0], 'w', buffering=1) as log_file:
        with contextlib.redirect_stdout(log_file), contextlib.redirect_stderr(log_file):
            df = pd.read_csv(input[0])
            # sometimes the values have floating point precision issues,
            # e.g., 0.058 becoming 0.0579999999999. this prevents that, so the legend is prettier
            df.scaling = np.round(df.scaling, 3)
            style, fig_width = fov.style.plotting_style(wildcards.context)
            plt.style.use(style)
            if wildcards.ecc == 'None':
                ecc_range = None
            else:
                ecc_range = [float(e) for e in wildcards.ecc.split(',')]
            g = fov.figures.radial_mse(df, height=fig_width/6, ecc_range=ecc_range)
            for i, ax in enumerate(g.axes.flatten()):
                # still running into this issue
                # https://github.com/mwaskom/seaborn/issues/2293 with
                # things about this size, so we manually set the
                # xticklabels invisible
                if i <= 14:
                    for xticklab in ax.get_xticklabels():
                        xticklab.set_visible(False)
            g.savefig(output[0], bbox_inches='tight', transparent=True)
run:
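    # compute two MSE values per trial: one restricted to how the stimuli
    # differed in the experiment, and one over the full image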
    import foveated_metamers as fov
    import pandas as pd
    import numpy as np
    import contextlib
    with open(log[0], 'w', buffering=1) as log_file:
        with contextlib.redirect_stdout(log_file), contextlib.redirect_stderr(log_file):
            stim = np.load(input[0])
            stim_df = pd.read_csv(input[1])
            if wildcards.model_name == 'RGC_norm_gaussian' and wildcards.comp == 'met':
                # for this model and comparison, we only had 5 images
                names = stim_df.image_name.unique()[:5].tolist()
                stim_df = stim_df.query("image_name in @names")
            # create a dummy idx, which is not randomized (that's what setting seed=None does)
            idx = fov.stimuli.generate_indices_split(stim_df, None,
                                                     f'met_v_{wildcards.comp.split("-")[0]}',
                                                     12)
            # this contains all the relevant metadata we want for this comparison
            dist_df = fov.analysis.create_experiment_df_split(stim_df, idx)
            expt_mse = np.empty(len(dist_df))
            met_mse = np.empty(len(dist_df))
            # now iterate through all trials and compute the mse on each of
            # them
            for i in range(len(dist_df)):
                # unpack this to get the index of the stimulus on left and right, for first
                # and second image
                [[l1, l2], [r1, r2]] = idx[:, i]
                # need to cast these as floats else the MSE will be *way*
                # off (like 100 instead of 8600). don't do the whole stim
                # array at once because that would *drastically* increase
                # memory use; the calculate_experiment_mse function handles
                # this internally
                img1 = stim[l1].astype(float)
                if l1 == l2:
                    img2 = stim[r2].astype(float)
                else:
                    img2 = stim[l2].astype(float)
                met_mse[i] = np.square(img1-img2).mean()
                expt_mse[i] = fov.distances.calculate_experiment_mse(stim, idx[:, i])
            dist_df['experiment_mse'] = expt_mse
            dist_df['full_image_mse'] = met_mse
            dist_df['trial_structure'] = dist_df.apply(fov.distances._get_trial_structure, 1)
            dist_df['changed_side'] = dist_df.trial_structure.apply(lambda x: x[-1])
            dist_df.to_csv(output[0], index=False)
run:
    import foveated_metamers as fov
    import pandas as pd
    import contextlib
    import seaborn as sns
    with open(log[0], 'w', buffering=1) as log_file:
        with contextlib.redirect_stdout(log_file), contextlib.redirect_stderr(log_file):
            df = pd.read_csv(input[0])
            g = fov.figures.mse_heatmap(df, wildcards.mse)
            g.savefig(output[0])
            df = fov.plotting._remap_image_names(df)
            style = 'changed_side' if wildcards.mse == 'experiment_mse' else None
            g = sns.relplot(data=df, x='scaling', y=wildcards.mse,
                            hue='image_name', style=style, kind='line',
                            palette=fov.plotting.get_palette('image_name'),
                            hue_order=fov.plotting.get_order('image_name'))
            g.savefig(output[1])
run:
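    # compare synthesis loss (MSE) against behavioral performance, with
    # several facetings of the same data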
    import foveated_metamers as fov
    import pandas as pd
    import contextlib
    with open(log[0], 'w', buffering=1) as log_file:
        with contextlib.redirect_stdout(log_file), contextlib.redirect_stderr(log_file):
            mse_df = pd.read_csv(input[0])
            expt_df = pd.read_csv(input[1])
            g = fov.figures.compare_loss_and_performance_plot(expt_df,
                                                              mse_df, x=wildcards.mse)
            g.fig.savefig(output[0], bbox_inches='tight')
            g = fov.figures.compare_loss_and_performance_plot(expt_df,
                                                              mse_df, x=wildcards.mse,
                                                              col_wrap=None, row='subject_name')
            g.fig.savefig(output[1], bbox_inches='tight')
            g = fov.figures.compare_loss_and_performance_plot(expt_df,
                                                              mse_df,
                                                              x=wildcards.mse,
                                                              col=None,
                                                              plot_kind='line',
                                                              height=5)
            g.fig.savefig(output[2], bbox_inches='tight')
run:
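    # create example stimuli: the original (reference or metamer) image, and
    # a version whose left or right half is swapped with the metamer, each
    # with a bar mask overlaid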
    import foveated_metamers as fov
    import contextlib
    import numpy as np
    import imageio
    with open(log[0], 'w', buffering=1) as log_file:
        with contextlib.redirect_stdout(log_file), contextlib.redirect_stderr(log_file):
            bar_deg_size = 2
            screen_size_deg = 73.45
            screen_size_pix = 3840
            ref_image = imageio.imread(input[0])
            ref_image = ref_image / np.iinfo(ref_image.dtype).max
            metamer = imageio.imread(input[1])
            metamer = metamer / np.iinfo(metamer.dtype).max
            bar_pix_size = int(bar_deg_size * (screen_size_pix / screen_size_deg))
            bar = fov.distances._create_bar_mask(ref_image.shape[0], bar_pix_size)
            if wildcards.comp.startswith('ref'):
                orig = ref_image.copy()
            elif wildcards.comp.startswith('met'):
                orig = metamer.copy()
            orig = fov.distances._add_bar(orig, bar)
            imageio.imwrite(output[0], orig)
            half_width = ref_image.shape[-1] // 2
            if wildcards.direction == 'L':
                ref_image[..., :half_width] = metamer[..., :half_width]
            elif wildcards.direction == 'R':
                ref_image[..., half_width:] = metamer[..., half_width:]
            ref_image = fov.distances._add_bar(ref_image, bar)
            imageio.imwrite(output[1], ref_image)
run:
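    # synthesize an image whose error relative to the reference matches a
    # target MSE taken from the experimental comparisons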
    import foveated_metamers as fov
    import pandas as pd
    import numpy as np
    import contextlib
    with open(log[0], 'w', buffering=1) as log_file:
        with contextlib.redirect_stdout(log_file), contextlib.redirect_stderr(log_file):
            mse = pd.read_csv(input.mse)
            # sometimes the values have floating point precision issues,
            # e.g., 0.058 becoming 0.0579999999999. this prevents that, so
            # they match the string we're passing in
            mse.scaling = np.round(mse.scaling, 3)
            mse = mse.query(f"image_name=='{wildcards.ref_image}' & scaling=={wildcards.scaling}")
            if wildcards.direction != 'None':
                mse = mse.query(f"changed_side=='{wildcards.direction}'")
                target_err = mse.experiment_mse.min()
                direction = wildcards.direction
            else:
                direction = None
                target_err = mse.full_image_mse.min()
            if mse.empty:
                raise Exception(f"No comparisons match image {wildcards.ref_image}, scaling {wildcards.scaling},"
                                f" and direction {wildcards.direction}")
            # if the init_type corresponds to another image, we need its
            # full path. else, just the string identifying the type of
            # noise suffices
            if input.init_image:
                other_img = input.init_image
            else:
                other_img = wildcards.init_type
            fov.create_other_synth.main(input.ref_image, other_img,
                                        target_err, float(wildcards.lr),
                                        int(wildcards.max_iter),
                                        direction,
                                        int(wildcards.seed),
                                        output[0])
run:
    import foveated_metamers as fov
    import contextlib
    import imageio
    import numpy as np
    with open(log[0], 'w', buffering=1) as log_file:
        with contextlib.redirect_stdout(log_file), contextlib.redirect_stderr(log_file):
            print(f"Saving gamma-corrected image {output[0]} as np.uint8")
            im = np.load(input[0])
            # we know this lies between 0 and 255. the upper bound on the
            # check is 256 to allow for floating point overshoot like 255.001
            if im.max() < 1 or im.max() > 256:
                raise Exception(f"Expected image to lie within (0, 255), but found max {im.max()}!")
            im = (im / 255) ** (1/2.2)
            im = fov.utils.convert_im_to_int(im, np.uint8)
            imageio.imwrite(output[0], im)
run:
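    # synthesize MAD competition images for the given model, optionally on a
    # GPU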
    import foveated_metamers as fov
    import contextlib
    import matplotlib as mpl
    with open(log[0], 'w', buffering=1) as log_file:
        with contextlib.redirect_stdout(log_file), contextlib.redirect_stderr(log_file):
            # having issues with the default matplotlib backend causing
            # core dumps
            mpl.use('svg')
            if resources.gpu == 1:
                get_gid = True
            elif resources.gpu == 0:
                get_gid = False
            else:
                raise Exception("Multiple gpus are not supported!")
            # tradeoff_lambda can be a float or None
            try:
                tradeoff_lambda = float(wildcards.tradeoff_lambda)
            except ValueError:
                tradeoff_lambda = None
            with fov.utils.get_gpu_id(get_gid, on_cluster=ON_CLUSTER) as gpu_id:
                fov.create_mad_images.main('mse',
                                           wildcards.model_name,
                                           input.ref_image,
                                           wildcards.synth_target,
                                           int(wildcards.seed),
                                           float(wildcards.learning_rate),
                                           int(wildcards.max_iter),
                                           float(wildcards.stop_criterion),
                                           int(wildcards.stop_iters),
                                           output[0],
                                           input.init_image,
                                           gpu_id,
                                           wildcards.optimizer,
                                           tradeoff_lambda,
                                           float(wildcards.range_lambda),
                                           num_threads=resources.num_threads)
run:
    import foveated_metamers as fov
    import contextlib
    import imageio
    import numpy as np
    with open(log[0], 'w', buffering=1) as log_file:
        with contextlib.redirect_stdout(log_file), contextlib.redirect_stderr(log_file):
            print(f"Saving gamma-corrected image {output[0]} as np.uint8")
            im = np.load(input[0])
            # we know this lies between 0 and 255. the upper bound on the
            # check is 256 to allow for floating point overshoot like 255.001
            if im.max() < 1 or im.max() > 256:
                raise Exception(f"Expected image to lie within (0, 255), but found max {im.max()}!")
            im = (im / 255) ** (1/2.2)
            im = fov.utils.convert_im_to_int(im, np.uint8)
            imageio.imwrite(output[0], im)
run:
    import foveated_metamers as fov
    import pandas as pd
    import contextlib
    with open(log[0], 'w', buffering=1) as log_file:
        with contextlib.redirect_stdout(log_file), contextlib.redirect_stderr(log_file):
            dist_df = pd.read_csv(input[0])
            expt_df = pd.concat([pd.read_csv(f) for f in input[1:]])
            logscale_xaxis = wildcards.synthesis_model.startswith('V1')
            g = fov.figures.compare_distance_and_performance(expt_df, dist_df,
                                                             logscale_xaxis=logscale_xaxis)
            g.savefig(output[0])
run:
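    # assemble the per-scaling frames into a video, ordered by scaling value
    # in the requested direction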
    import subprocess
    import contextlib
    import re
    import tempfile
    import shutil
    import os.path as op
    with open(log[0], 'w', buffering=1) as log_file:
        with contextlib.redirect_stdout(log_file), contextlib.redirect_stderr(log_file):
            # remove duplicate files
            files = set(input)
            # sort by scaling
            if wildcards.gif_direction == 'reverse':
                sort_files = list(reversed(sorted(files, key=lambda x: float(re.findall('scaling-([0-9.]+)', x)[0]))))
            elif wildcards.gif_direction == 'forward':
                sort_files = sorted(files, key=lambda x: float(re.findall('scaling-([0-9.]+)', x)[0]))
            # assert that there's only one per scaling
            scaling_vals = [float(re.findall('scaling-([0-9.]+)', f)[0]) for f in sort_files]
            if len(scaling_vals) != len(set(scaling_vals)):
                raise Exception("Each scaling value should show up exactly once!")
            tmpdir = tempfile.mkdtemp()
            for i, f in enumerate(sort_files):
                shutil.copy(f, op.join(tmpdir, f'frame-{i:02d}.png'))
            print('\n'.join(sort_files))
            subprocess.call(['ffmpeg', '-f', 'image2', '-framerate', wildcards.fr, '-i', f'{tmpdir}/frame-%02d.png', output[0]])
SnakeMake From line 3500 of main/Snakefile
shell:
    "cd matlab; "
    "matlab -nodesktop -nodisplay -r \"addpath(genpath('{params.met_path}')); "
    "addpath(genpath('{params.matlabPyrTools_path}')); "
    "freeman_windows({wildcards.scaling}, '{params.output_dir}'); quit;\""
SnakeMake From line 3538 of main/Snakefile
run:
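    # compare the pooling windows from Freeman and Simoncelli, 2011 against
    # our gaussian windows at matched scaling values, plotting both and their
    # difference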
    import foveated_metamers as fov
    import h5py
    import contextlib
    import plenoptic as po
    import pyrtools as pt
    import sys
    sys.path.append('extra_packages/pooling-windows/')
    import pooling
    with open(log[0], 'w', buffering=1) as log_file:
        with contextlib.redirect_stdout(log_file), contextlib.redirect_stderr(log_file):
            to_plot = []
            titles = []
            for inp, sc in zip(input, [.5, .25]):
                mat_idx = {.5: -85, .25: -355}[sc]
                with h5py.File(inp, 'r') as f:
                    freeman_win = f[f['scale'][1, 0]]['maskMat'][:][..., mat_idx]
                gauss_idx = {.25: (25, 25), .5: (11, 12)}[sc]
                gauss_pw = pooling.PoolingWindows(sc, (512, 512),
                                                  max_eccentricity=13,
                                                  num_scales=5,
                                                  window_type='gaussian',
                                                  std_dev=1)
                gauss_win = po.to_numpy(gauss_pw.ecc_windows[1][gauss_idx[0]]) * po.to_numpy(gauss_pw.angle_windows[1][gauss_idx[1]])
                # normalize so that max value is 1, to match matlab implementation
                gauss_win /= gauss_win.max()
                to_plot.extend([freeman_win, gauss_win, freeman_win - gauss_win])
                titles.extend([f'{paper}, scaling {sc}' for paper in ['Freeman 2011', 'This paper', 'Difference']])
            fig = pt.imshow(to_plot, vrange='auto0', col_wrap=3)
            for ax, title in zip(fig.axes, titles):
                ax.set_title(title)
                # adjust the position to get rid of extra spacing where
                # subtitle used to be.
                bb = ax.get_position()
                if bb.y0 != 0:
                    bb.y0 = bb.y0 - bb.y0/4
                    ax.set_position(bb)
                fov.figures.add_fixation_cross(ax, cross_size=25)
            fig.savefig(output[0], bbox_inches='tight', transparent=True)
run:
    import contextlib
    import pandas as pd
    import seaborn as sns
    import foveated_metamers as fov
    import matplotlib.pyplot as plt
    with open(log[0], 'w', buffering=1) as log_file:
        with contextlib.redirect_stdout(log_file), contextlib.redirect_stderr(log_file):
            df = pd.read_csv(input[0])
            style, fig_width = fov.style.plotting_style(wildcards.context)
            plt.style.use(style)
            pal = fov.plotting.get_palette('cell_type', df.cell_type.unique())
            g = sns.relplot(data=df, x='eccentricity_deg', y='dendritic_field_diameter_min',
                            # this aspect is approximately that of the
                            # figure in the paper
                            hue='cell_type', aspect=1080/725, palette=pal)
            g.set(xscale='log', yscale='log', xlim=(.1, 100), ylim=(1, 1000),
                  xlabel='Eccentricity (degrees)',
                  ylabel='Dendritic field diameter (min of arc)')
            g.savefig(output[0])
run:
    import contextlib
    import pandas as pd
    import foveated_metamers as fov
    import arviz as az
    with open(log[0], 'w', buffering=1) as log_file:
        with contextlib.redirect_stdout(log_file), contextlib.redirect_stderr(log_file):
            dataset = fov.other_data.assemble_dacey_dataset(pd.read_csv(input[0]))
            mcmc = fov.other_data.run_phys_scaling_inference(dataset,
                                                             wildcards.line == 'offset',
                                                             float(wildcards.step_size),
                                                             int(wildcards.num_draws),
                                                             int(wildcards.num_chains),
                                                             int(wildcards.num_warmup),
                                                             int(wildcards.seed),
                                                             float(wildcards.accept_prob),
                                                             int(wildcards.tree_depth))
            # want to have a different seed for constructing the inference
            # data object than we did for inference itself
            inf_data = fov.other_data.assemble_inf_data(mcmc, dataset,
                                                        int(wildcards.seed)+1)
            inf_data.to_netcdf(output[0])
            fig = fov.figures.mcmc_diagnostics_plot(inf_data)
            fig.savefig(output[1], bbox_inches='tight')
run:
    import contextlib
    import pandas as pd
    import foveated_metamers as fov
    import arviz as az
    import seaborn as sns
    import matplotlib.pyplot as plt
    with open(log[0], 'w', buffering=1) as log_file:
        with contextlib.redirect_stdout(log_file), contextlib.redirect_stderr(log_file):
            df = pd.read_csv(input[0])
            inf_data = az.from_netcdf(input[1])
            style, fig_width = fov.style.plotting_style(wildcards.context, figsize='half')
            plt.style.use(style)
            fig = fov.figures.dacey_mcmc_plot(inf_data, df, logscale_axes='log' in wildcards.logscale,
                                              height=fig_width)
            fig.savefig(output[0], transparent=True)
shell:
    "cp {input} {output}"
SnakeMake From line 3722 of main/Snakefile
run:
    import contextlib
    import foveated_metamers as fov
    import matplotlib.pyplot as plt
    import tempfile
    with open(log[0], 'w', buffering=1) as log_file:
        with contextlib.redirect_stdout(log_file), contextlib.redirect_stderr(log_file):
            style, fig_width = fov.style.plotting_style(wildcards.context,
                                                        figsize='full')
            plt.style.use(style)
            table_fig = fov.figures.psychophysics_schematic_table((3/4*fig_width, 3/4*fig_width/2))
            name = tempfile.NamedTemporaryFile().name + '.svg'
            table_fig.savefig(name, bbox_inches='tight')
            fig = fov.compose_figures.psychophys_schematic_with_table(input[0], name,
                                                                      wildcards.context)
            fig.save(output[0])
run:
    import subprocess
    import os  # needed for os.remove below
    import shutil
    import contextlib
    import foveated_metamers as fov
    from glob import glob
    with open(log[0], 'w', buffering=1) as log_file:
        with contextlib.redirect_stdout(log_file), contextlib.redirect_stderr(log_file):
            orig_dpi = fov.figures.write_create_bitmap_resolution(input[0], wildcards.bitmap_dpi)
            ids, old_dd = fov.figures.get_image_ids(input[1], config['DATA_DIR'])
            shutil.copy(input[1], output[0])
            # if images were linked on a different machine, the data
            # directory is probably different, and this should fix it (see
            # fov.figures.grab_old_data_dir for how we figure out the old
            # data dir)
            if old_dd is not None:
                subprocess.call(['sed', '-i', f's|{old_dd}|{config["DATA_DIR"]}|g', output[0]])
            select_ids = ''.join([f'select-by-id:{id};' for id in ids])
            action_str = select_ids + "SelectionCreateBitmap;select-clear;" + select_ids + "EditDelete;"
            action_str += "FileSave;"
            print(f"inkscape action string:\n{action_str}")
            subprocess.call(['inkscape', '--batch-process', f'--actions={action_str}', output[0]])
            # the inkscape call above embeds the bitmaps but also
            # apparently creates a separate png file containing the
            # embedded bitmaps, which we want to remove. commas get
            # replaced with underscores in the paths of those files, so
            # check for those as well
            extra_files = glob(output[0] + '-*') + glob(output[0].replace(',', '_') + '-*')
            print(f"will remove the following: {extra_files}")
            for f in extra_files:
                try:
                    os.remove(f)
                except FileNotFoundError:
                    # then the file was removed by something else
                    continue
            fov.figures.write_create_bitmap_resolution(input[0], orig_dpi)
SnakeMake From line 3766 of main/Snakefile
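
The inkscape call is driven entirely by the action string assembled above: select each linked image by id, rasterize the selection, then select them again, delete the original links, and save. For two hypothetical ids, the string looks like:

# hypothetical ids; the real ones come from fov.figures.get_image_ids
ids = ['image1234', 'image5678']
select_ids = ''.join([f'select-by-id:{id};' for id in ids])
print(select_ids + 'SelectionCreateBitmap;select-clear;' + select_ids + 'EditDelete;FileSave;')
# select-by-id:image1234;select-by-id:image5678;SelectionCreateBitmap;select-clear;select-by-id:image1234;select-by-id:image5678;EditDelete;FileSave;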
run:
    import contextlib
    import numpy as np  # needed for the 16-bit grayscale fallback below
    import plenoptic as po
    import torch
    from PIL import Image
    import imageio
    with open(log[0], 'w', buffering=1) as log_file:
        with contextlib.redirect_stdout(log_file), contextlib.redirect_stderr(log_file):
            img = Image.open(input[0])
            try:
                # jpgs can't be RGBA, so drop the alpha layer, if present
                # (we're not using it anyway)
                r, g, b, a = img.split()
                img = Image.merge('RGB', (r, g, b))
            except ValueError:
                # then this was either RGB or grayscale, so our current
                # image is fine
                pass
            try:
                img.save(output[0], optimize=True)
            except OSError:
                # in this case, our input was a grayscale 16 bit integer,
                # which we can't write as JPEG. so convert it to 8 bit
                # grayscale
                img = imageio.imread(input[0])
                img = img / np.iinfo(np.uint16).max
                img = Image.fromarray((img * np.iinfo(np.uint8).max).astype(np.uint8),
                                      mode='L')
                img.save(output[0], optimize=True)
            # just for kicks, compute some metrics between the two images
            img = po.load_images(input, as_gray=False)
            # again, drop alpha channel
            if img.shape[1] == 4:
                img = img[:, :3]
            imgs = torch.stack([img, po.load_images(output, as_gray=False)])
            msssim = po.metric.ms_ssim(*imgs)
            ssim = po.metric.ssim(*imgs)
            mse = po.metric.mse(*imgs)
            print("Metrics on difference between compressed and uncompressed images:")
            print(f"MSE: {mse}")
            print(f"SSIM: {ssim}")
            print(f"MS-SSIM: {msssim}")
run:
    import contextlib
    import sys
    import os.path as op  # needed for the sys.path manipulation below
    import numpy as np  # needed to seed the random window fill below
    import matplotlib.pyplot as plt
    import foveated_metamers as fov
    import plenoptic as po
    sys.path.append(op.join(op.dirname(op.realpath(__file__)), 'extra_packages/pooling-windows'))
    import pooling
    with open(log[0], 'w', buffering=1) as log_file:
        with contextlib.redirect_stdout(log_file), contextlib.redirect_stderr(log_file):
            size = [int(i) for i in wildcards.size.split(',')]
            pw = pooling.PoolingWindows(float(wildcards.scaling), size, std_dev=1,
                                        window_type='gaussian')
            # we ignore figure size, because we are going to rescale this
            # when we move it around
            style, _ = fov.style.plotting_style(wildcards.context)
            if wildcards.bg == 'none':
                # set both axes and figure facecolor to transparent
                style['axes.facecolor'] = (0, 0, 0, 0)
                style['figure.facecolor'] = (0, 0, 0, 0)
            elif wildcards.bg == 'white':
                # want to see the border of the axis
                style['axes.edgecolor'] = (0, 0, 0, 1)
                style['axes.linewidth'] = .5*float(wildcards.lw)*style['lines.linewidth']
            else:
                raise Exception("Can only handle background none or white!")
            plt.style.use(style)
            ax = None
            if 'random' in wildcards.fill:
                seed = int(wildcards.fill.split('-')[-1])
                np.random.seed(seed)
                ax = pw.plot_window_values(subset=False)
            elif wildcards.fill != 'none':
                im = po.load_images(input[0])
                ax = pw.plot_window_values(im, subset=False)
            # since this is being shrunk, we need to make the lines thicker
            ax = pw.plot_windows(ax=ax, subset=False,
                                 linewidths=float(wildcards.lw)*style['lines.linewidth'])
            if wildcards.bg == 'none':
                # this is the background image underneath the contour lines
                # -- we want it to be invisible so we can overlay these
                # contours on another image.
                ax.images[0].set_visible(False)
            else:
                # want to see the border of the axis
                ax.set_frame_on(True)
                ax.spines['top'].set_visible(True)
                ax.spines['right'].set_visible(True)
            ax.figure.savefig(output[0], bbox_inches='tight')
run:
    import subprocess
    import shutil
    import contextlib
    with open(log[0], 'w', buffering=1) as log_file:
        with contextlib.redirect_stdout(log_file), contextlib.redirect_stderr(log_file):
            shutil.copy(input[0], output[0])
            # we add the trailing " to make sure we only replace IMAGE1, not IMAGE10
            subprocess.call(['sed', '-i', f's|IMAGE1"|{input[1]}"|', output[0]])
SnakeMake From line 3935 of main/Snakefile
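
The trailing quote in the sed pattern matters because the placeholders share a prefix. A sketch of the same substitution in Python, on a hypothetical template:

import re

# hypothetical svg template contents with two placeholders
svg = '<image href="IMAGE1"/> <image href="IMAGE10"/>'
# matching the closing quote keeps IMAGE1 from also matching inside IMAGE10
print(re.sub('IMAGE1"', 'metamer.png"', svg))
# <image href="metamer.png"/> <image href="IMAGE10"/>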
run:
    import subprocess
    import contextlib
    import foveated_metamers as fov
    import re
    with open(log[0], 'w', buffering=1) as log_file:
        with contextlib.redirect_stdout(log_file), contextlib.redirect_stderr(log_file):
            if 'model_schematic' in wildcards.fig_name:
                width = 'full' if 'full' in wildcards.fig_name else 'half'
                fig = fov.compose_figures.model_schematic(input[0],
                                                          input[1],
                                                          input[2:], width,
                                                          wildcards.context)
            elif 'metamer_comparison' in wildcards.fig_name:
                scaling = re.findall('scaling-([0-9,.]+)', wildcards.fig_name)[0]
                scaling = [float(sc) for sc in scaling.split(',')]
                if 'performance' in wildcards.fig_name:
                    # doesn't matter what we put in here, we're only using
                    # the outlier colors
                    pal = fov.plotting.get_palette('image_name_focus-outlier', ['V1'])
                    img_names = re.findall("metamer_comparison_([a-z,]+)_scaling", wildcards.fig_name)[0]
                    fig = fov.compose_figures.performance_metamer_comparison_small(input[1], input[0], scaling,
                                                                                   [pal[name] for name in img_names.split(',')])
                elif 'natural-seed' in wildcards.fig_name:
                    if 'V1' in wildcards.fig_name:
                        labels = ['Target image',
                                  'Energy model metamer init with natural image 1',
                                  'Energy model metamer init with natural image 2',
                                  'Energy model metamer init with natural image 3',
                                  'Energy model metamer init with white noise 1',
                                  'Energy model metamer init with white noise 2']
                    elif 'RGC' in wildcards.fig_name:
                        labels = ['Target image',
                                  'Luminance metamer init with natural image 1',
                                  'Luminance metamer init with natural image 2',
                                  'Luminance metamer init with natural image 3',
                                  'Luminance metamer init with white noise 1',
                                  'Luminance metamer init with white noise 2']
                    fig = fov.compose_figures.metamer_comparison(*input, labels,
                                                                 'nocutout' not in wildcards.fig_name,
                                                                 True, wildcards.context)
                else:
                    fig = fov.compose_figures.metamer_comparison(*input, scaling,
                                                                 'nocutout' not in wildcards.fig_name,
                                                                 False, wildcards.context)
            elif "all_comps_summary" in wildcards.fig_name:
                fig = fov.compose_figures.combine_one_ax_figs(input, wildcards.context)
            elif "performance_natural" in wildcards.fig_name:
                if 'sub-00' in wildcards.fig_name:
                    n = 1
                else:
                    n = None
                fig = fov.compose_figures.performance_comparison_natural(*input, n, wildcards.context)
            elif "performance_comparison" in wildcards.fig_name:
                if 'sub-00' in wildcards.fig_name:
                    n = 1
                else:
                    n = None
                fig = fov.compose_figures.performance_comparison(*input, n, wildcards.context)
            elif "radial_se" in wildcards.fig_name:
                fig = fov.compose_figures.radial_squared_error(*input, wildcards.context)
            elif "performance-all" in wildcards.fig_name:
                fig = fov.compose_figures.performance_all(*input, context=wildcards.context)
            fig.save(output[0])
SnakeMake From line 4005 of main/Snakefile
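
Most of the branching above is driven by parsing the fig_name wildcard. A sketch of the two extractions, on a hypothetical name (the image names here are made up):

import re

fig_name = 'metamer_comparison_llama,tiles_scaling-0.063,0.27'
scaling = re.findall('scaling-([0-9,.]+)', fig_name)[0]
print([float(sc) for sc in scaling.split(',')])  # [0.063, 0.27]
img_names = re.findall('metamer_comparison_([a-z,]+)_scaling', fig_name)[0]
print(img_names.split(','))  # ['llama', 'tiles']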
run:
    import subprocess
    import shutil
    import contextlib
    with open(log[0], 'w', buffering=1) as log_file:
        with contextlib.redirect_stdout(log_file), contextlib.redirect_stderr(log_file):
            shutil.copy(input[0], output[0])
            for i, im in enumerate(input[1:]):
                print(f"Copying {im} into IMAGE{i+1}")
                # we add the trailing " to make sure we only replace IMAGE1, not IMAGE10
                subprocess.call(['sed', '-i', f's|IMAGE{i+1}"|{im}"|', output[0]])
SnakeMake From line 4159 of main/Snakefile
run:
    import subprocess
    import shutil
    import contextlib
    with open(log[0], 'w', buffering=1) as log_file:
        with contextlib.redirect_stdout(log_file), contextlib.redirect_stderr(log_file):
            shutil.copy(input[0], output[0])
            for i, im in enumerate(input[1:]):
                print(f"Copying {im} into IMAGE{i+1}")
                # we add the trailing " to make sure we only replace IMAGE1, not IMAGE10
                subprocess.call(['sed', '-i', f's|IMAGE{i+1}"|{im}"|', output[0]])
SnakeMake From line 4193 of main/Snakefile
run:
    import subprocess
    import contextlib
    import foveated_metamers as fov
    import plenoptic as po
    import matplotlib.pyplot as plt
    import re
    with open(log[0], 'w', buffering=1) as log_file:
        with contextlib.redirect_stdout(log_file), contextlib.redirect_stderr(log_file):
            style, _ = fov.style.plotting_style(wildcards.context)
            # needs to be an int so we can properly use it to slice into
            # the image in the cutout_figure calls below
            style['lines.linewidth'] = int(16*style['lines.linewidth'])
            window_size = 400
            offset = [-800, -1000]
            cross_size = 150
            # change sizes of things so they look comparable to the regular
            # version when put into the comparison plot
            if 'downsample' in wildcards.image_name:
                downsample_n = float(re.findall('downsample-([0-9]+)_', wildcards.image_name)[0])
                # these need to be ints
                window_size = int(window_size // downsample_n)
                offset = [int(o // downsample_n) for o in offset]
                style['lines.linewidth'] = int(style['lines.linewidth'] // downsample_n)
                cross_size = cross_size / downsample_n
            plt.style.use(style)
            # if we're loading in the metamer with window, it will have a
            # red oval on it, which we want to preserve
            im = po.load_images(input, as_gray=False)
            # if we're loading in an image that is truly grayscale (i.e.,
            # the ref_images_preproc ones), then it will only have one
            # channel, even with as_gray=False, so we need to set as_rgb
            # correctly.
            fig = po.imshow(im, title=None, as_rgb=im.shape[1] > 1,
                            # need to make sure vrange is set, so the
                            # dynamic range is the same as everywhere else
                            vrange=(0, 1))
            # we do the periphery and fovea separately, so we can plot them
            # in separate colors
            if wildcards.fixation_cross == 'cross':
                fov.figures.add_fixation_cross(fig.axes[0], cross_size=cross_size)
            fov.figures.add_cutout_box(fig.axes[0], plot_fovea=False, colors='b',
                                       window_size=window_size,
                                       periphery_offset=offset)
            fig.savefig(output[1])
            fov.figures.add_cutout_box(fig.axes[0], plot_periphery=False,
                                       window_size=window_size,
                                       periphery_offset=offset)
            # we add an extra bit to the window size here so that the
            # addition of the cutout box doesn't cause the axes to resize
            # (and the full width of the lines are visible)
            fovea_fig = fov.figures.cutout_figure(im[0, 0], plot_periphery=False, label=False,
                                                  window_size=window_size+style['lines.linewidth'],
                                                  periphery_offset=offset)
            periphery_fig = fov.figures.cutout_figure(im[0, 0], plot_fovea=False, label=False,
                                                      window_size=window_size+style['lines.linewidth'],
                                                      periphery_offset=offset)
            fov.figures.add_cutout_box(fovea_fig.axes[0], plot_periphery=False, periphery_offset=offset,
                                       window_size=window_size)
            # note that plot_periphery=False here because the peripheral
            # cutout is centered
            fov.figures.add_cutout_box(periphery_fig.axes[0], plot_periphery=False, colors='b',
                                       periphery_offset=offset, window_size=window_size)
            if wildcards.fixation_cross == 'cross':
                fov.figures.add_fixation_cross(fovea_fig.axes[0], cross_size=cross_size)
            fig.savefig(output[0])
            fovea_fig.savefig(output[2])
            periphery_fig.savefig(output[3])
run:
    import foveated_metamers as fov
    import contextlib
    import seaborn as sns
    import pandas as pd
    import torch
    import matplotlib.pyplot as plt
    with open(log[0], 'w', buffering=1) as log_file:
        with contextlib.redirect_stdout(log_file), contextlib.redirect_stderr(log_file):
            df = []
            min_ecc = config['DEFAULT_METAMERS']['min_ecc']
            max_ecc = config['DEFAULT_METAMERS']['max_ecc']
            img = torch.rand(1, 1, 2048, 2600)
            n_pix = img.nelement()
            # don't include "norm" in name, because we don't set the normalization dict
            for model_name in ['RGC_gaussian', 'V1_s6_gaussian']:
                scaling = config[model_name.split('_')[0]]['scaling'] + config[model_name.split('_')[0]]['met_v_met_scaling']
                for sc in scaling:
                    model = fov.create_metamers.setup_model(model_name, sc, img, min_ecc, max_ecc, params.cache_dir)[0]
                    rep = model(img)
                    tmp = pd.DataFrame({'model': model_name, 'scaling': sc, 'num_stats': rep.shape[-1],
                                        'num_pixels': n_pix}, [0])
                    df.append(tmp)
            df = pd.concat(df)
            df.model = df.model.map({k.replace('_norm', ''): v for k, v in
                                     fov.plotting.MODEL_PLOT.items()})
            df.to_csv(output[0], index=False)
run:
    import foveated_metamers as fov
    import contextlib
    import pandas as pd
    import torch
    with open(log[0], 'w', buffering=1) as log_file:
        with contextlib.redirect_stdout(log_file), contextlib.redirect_stderr(log_file):
            img = torch.rand(1, 1, 2048, 2600)
            n_pix = img.nelement()
            df = pd.read_csv(input[0])
            g, popts = fov.figures.number_of_stats(df)
            g.savefig(output[0], bbox_inches='tight')
            rgc_df = df.query('model=="Luminance model"')
            v1_df = df.query('model=="Energy model"')
            result = (
                f'Luminance model number of statistics goes from {rgc_df.num_stats.max()} for scaling {rgc_df.scaling.min()} '
                f'({100 * rgc_df.num_stats.max() / n_pix:.03f}%)\n'
                f'   to {rgc_df.num_stats.min()} for scaling {rgc_df.scaling.max()} ({100 * rgc_df.num_stats.min() / n_pix:.03f}%)\n'
                f'   this is fit with the equation {popts[0][0]:.02f} * scaling ^ -2 + {popts[0][1]:.02f} * scaling ^ -1\n'
                f'Energy model number of statistics goes from {v1_df.num_stats.max()} for scaling {v1_df.scaling.min()} '
                f'({100 * v1_df.num_stats.max() / n_pix:.03f}%)\n'
                f'   to {v1_df.num_stats.min()} for scaling {v1_df.scaling.max()} ({100 * v1_df.num_stats.min() / n_pix:.03f}%)\n'
                f'   this is fit with the equation {popts[1][0]:.02f} * scaling ^ -2 + {popts[1][1]:.02f} * scaling ^ -1\n'
                f'For {n_pix} pixels\n\n'
                'The relationship between scaling and number of windows, and thus number of stats, should be exactly an '
                'inverse quadratic. At larger scaling values it breaks down and requires more windows than expected: '
                'the windows are so large that some sit almost entirely off the image but are still needed '
                'for the remaining windows to uniformly tile it.'
            )
            with open(output[-1], 'w') as f:
                f.writelines(result)
run:
    import foveated_metamers as fov
    import contextlib
    import pandas as pd
    import arviz as az
    from scipy import optimize
    import warnings
    with open(log[0], 'w', buffering=1) as log_file:
        with contextlib.redirect_stdout(log_file), contextlib.redirect_stderr(log_file):
            df = pd.read_csv(input[-1])
            crit_scaling = []
            for f in input[:-1]:
                tmp = az.from_netcdf(f)
                tmp = fov.mcmc.inf_data_to_df(tmp, 'parameter grouplevel means',
                                              "distribution == 'posterior' & level == 'all' & hdi == 50",
                                              hdi=.95)
                crit_scaling.append(tmp.query('parameter=="s0"'))
            crit_scaling = pd.concat(crit_scaling)
            crit_scaling['model'] = crit_scaling['model'].map(fov.plotting.MODEL_PLOT)
            def quadratic_dec(scaling, a2, a1):
                return a2*scaling**-2 + a1*scaling**-1
            out_str = ''
            for n, gb in df.groupby('model'):
                popt, _ = optimize.curve_fit(quadratic_dec, gb.scaling.values,
                                             gb.num_stats.values)
                for m, gb2 in crit_scaling.query(f"model=='{n}'").groupby('trial_type'):
                    assert len(gb2) == 1
                    crit = gb2.value.iloc[0]
                    n_stat = quadratic_dec(crit, *popt)
                    pct = 100 * n_stat / df.num_pixels.mean()
                    m = fov.plotting.TRIAL_TYPE_PLOT[m].replace('\n', ' ')
                    out_str += (f'For model {n} and comparison {m}, we have:\n   critical scaling: {crit:.04f}\n'
                                f'   n stats: {n_stat:.02f}\n   percentage of pixels: {pct:.02f}%\n')
            with open(output[0], 'w') as f:
                f.writelines(out_str)
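
Reading off the number of statistics at the critical scaling is just an evaluation of the inverse-quadratic defined above; a sketch with hypothetical coefficients:

def quadratic_dec(scaling, a2, a1):
    # number of statistics as a function of scaling: a2/scaling^2 + a1/scaling
    return a2 * scaling**-2 + a1 * scaling**-1

# hypothetical fit coefficients, evaluated at a hypothetical critical scaling
print(quadratic_dec(0.25, a2=100, a1=50))  # 1800.0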
run:
    import foveated_metamers as fov
    import contextlib
    import pandas as pd
    import arviz as az
    import seaborn as sns
    import matplotlib as mpl
    import warnings
    with open(log[0], 'w', buffering=1) as log_file:
        with contextlib.redirect_stdout(log_file), contextlib.redirect_stderr(log_file):
            style, fig_width = fov.style.plotting_style(wildcards.context, figsize='half')
            crit_scaling = []
            for f in input:
                tmp = az.from_netcdf(f)
                tmp = fov.mcmc.inf_data_to_df(tmp, 'parameter grouplevel means',
                                              "distribution == 'posterior' & level == 'all' & hdi == 50",
                                              hdi=.95).query("parameter=='s0'")
                tmp = tmp.drop(columns=['distribution', 'hdi', 'parameter', 'level', 'dependent_var',
                                        'mcmc_model_type'])
                tmp = tmp.rename(columns={'value': 'critical_scaling'}).reset_index(drop=True)
                crit_scaling.append(tmp)
            crit_scaling = pd.concat(crit_scaling)
            crit_scaling['model'] = crit_scaling['model'].map(fov.plotting.MODEL_PLOT)
            crit_scaling = pd.concat([crit_scaling, fov.figures.wallis_critical_scaling()])
            pal = fov.plotting.get_palette('model', fov.plotting.MODEL_PLOT.values())
            crit_scaling.model = crit_scaling.model.apply(lambda x: x.replace(' model', ''))
            pal = {k.replace(' model', ''): v for k, v in pal.items()}
            # color copied from Freeman and Simoncelli, 2011's V2 purple
            pal['Texture'] = (165/255, 109/255, 189/255)
            # put dummy data in for the Luminance met vs met comparison,
            # since we can't actually fit it
            tmp = crit_scaling.query("model=='Luminance' & trial_type == 'metamer_vs_reference'")
            tmp['trial_type'] = 'metamer_vs_metamer'
            if wildcards.norm == 'True':
                tmp['critical_scaling'] = 8*tmp.critical_scaling
                assert wildcards.logscale == 'log', "if normalized, scale must be log!"
            else:
                tmp['critical_scaling'] = 24*tmp.critical_scaling
            crit_scaling = pd.concat([crit_scaling, tmp])
            g = sns.FacetGrid(crit_scaling, hue='model', palette=pal, height=fig_width)
            g.map_dataframe(fov.plotting.vertical_pointplot, x='model', y='critical_scaling',
                            norm_y=wildcards.norm=='True')
            ylabel = 'Critical scaling'
            # bool('False') == True, so we do this to avoid that
            # situation
            if wildcards.logscale == 'log':
                if wildcards.norm == 'True':
                    ylabel += '\n(proportion of Original vs. Synth)'
                    g.ax.set_yscale('log', base=2)
                    g.ax.yaxis.set_minor_locator(mpl.ticker.LogLocator(2, subs=(.5, )))
                    g.ax.yaxis.set_major_formatter(mpl.ticker.StrMethodFormatter('{x:.0f}'))
                else:
                    g.ax.set_yscale('log', base=10)
                    g.ax.set(ylim=(.01, .5))
            g.set(xlabel='Pooling model', ylabel=ylabel, xlim=[-.5, 2.5])
            g.savefig(output[0])
run:
    import json
    import os.path as op  # needed for the path manipulation below
    import shutil
    import tarfile
    import foveated_metamers as fov
    ln_path_template = ('{model_path_name}/{target_image}/scaling-{scaling}/'
                        'seed-{random_seed}_init-{initialization_type}.png')
    # this is just a temporary directory that we'll delete once we've
    # created the .tar.gz file
    output_dir = output[0].replace('.tar.gz', '')
    metadata = fov.utils.rearrange_metamers_for_sharing(input, output_dir, ln_path_template)
    with open(op.join(output_dir, 'metadata.json'), 'w') as f:
        json.dump(metadata, f)
    with tarfile.open(output[0], 'w:gz') as tar:
        # we use the arcname argument to get just the relative directory
        # structure (e.g., 'metamers_energy_ref/' instead of
        # 'mnt/ceph/users/wbroderick/metamers/to_share/metamers_energy_ref/')
        tar.add(output_dir, arcname=op.split(output_dir)[-1])
    shutil.rmtree(output_dir)
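
Once created, the archive can be inspected without unpacking it. A sketch, assuming a hypothetical archive name along the lines of the comment above:

import json
import tarfile

with tarfile.open('metamers_energy_ref.tar.gz', 'r:gz') as tar:
    member = next(m for m in tar.getmembers() if m.name.endswith('metadata.json'))
    metadata = json.load(tar.extractfile(member))
# metadata is a list of dicts, one per metamer
print(metadata[0]['file'])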
run:
    import tarfile
    import os.path as op  # needed for the path manipulation below
    import json
    import os
    import shutil
    tmp_dirs = [op.join(op.dirname(output[0]), *d) for d in [('synthesis_input', 'ref_images_preproc'),
                                                             ('synthesis_input', 'norm_stats'),
                                                             ('tiff_images', 'ref_images')]]
    for d in tmp_dirs:
        os.makedirs(d)
    output_dir = op.join(op.dirname(output[0]), 'synthesis_input')
    metadata = []
    for inp in input.ref_images:
        outp = op.join(tmp_dirs[0], op.split(inp)[-1])
        metadata.append({'file': op.sep.join(outp.split(op.sep)[-2:]),
                         'target_image': op.split(inp)[-1].split('_')[0],
                         'gamma': False})
        os.link(inp, outp)
    with open(op.join(output_dir, 'metadata.json'), 'w') as f:
        json.dump(metadata, f)
    os.link(input.stats, op.join(tmp_dirs[1], op.split(input.stats)[-1]))
    with tarfile.open(output[0], 'w:gz') as tar:
        # we use the arcname argument to get just the relative directory
        # structure (e.g., 'synthesis_input/' instead of
        # 'mnt/ceph/users/wbroderick/metamers/to_share/synthesis_input/')
        tar.add(output_dir, arcname='synthesis_input')

    shutil.rmtree(output_dir)
    output_dir = op.join(op.dirname(output[0]), 'tiff_images')
    metadata = []
    for inp in input.tiffs:
        outp = op.join(tmp_dirs[-1], op.split(inp)[-1])
        metadata.append({'file': op.sep.join(outp.split(op.sep)[-2:]),
                         'target_image': op.split(inp)[-1].split('.')[0],
                         'gamma': False})
        os.link(inp, outp)
    with open(op.join(output_dir, 'metadata.json'), 'w') as f:
        json.dump(metadata, f)
    with tarfile.open(output[-1], 'w:gz') as tar:
        tar.add(output_dir, arcname='tiff_images')
    shutil.rmtree(output_dir)
run:
    import shutil
    import os.path as op  # needed for the path manipulation below
    import tarfile
    import os
    # this is just a temporary directory that we'll delete once we've
    # created the .tar.gz file
    output_dir = output[0].replace('.tar.gz', '')
    os.makedirs(output_dir)
    for inp in input:
        outp = op.join(output_dir, op.split(inp)[-1])
        os.link(inp, outp)
    with tarfile.open(output[0], 'w:gz') as tar:
        tar.add(output_dir, arcname=op.split(output_dir)[-1])
    shutil.rmtree(output_dir)
SnakeMake From line 4664 of main/Snakefile
run:
    import shutil
    import os.path as op  # needed for the path manipulation below
    import tarfile
    import os
    # this is just a temporary directory that we'll delete once we've
    # created the .tar.gz file
    output_dir = output[0].replace('.tar.gz', '') + os.sep
    os.makedirs(output_dir)
    for inp in input:
        outp = inp.replace(config["DATA_DIR"], output_dir)
        # clean up the name a bit
        outp_dir, outp = op.split(outp)
        outp_dir = op.join(op.sep, *outp_dir.split(op.sep)[:-1])
        outp = outp.split('_')[0] + op.splitext(outp)[-1]
        outp = op.join(outp_dir, outp)
        os.link(inp, outp)
    with tarfile.open(output[0], 'w:gz') as tar:
        tar.add(output_dir, arcname=op.split(output_dir)[-1])
    shutil.rmtree(output_dir)
SnakeMake From line 4686 of main/Snakefile
run:
    import shutil
    import os.path as op  # needed for the path manipulation below
    import tarfile
    import os
    # this is just a temporary directory that we'll delete once we've
    # created the .tar.gz file
    output_dir = output[0].replace('.tar.gz', '') + os.sep
    os.makedirs(output_dir)
    for inp in input:
        outp = inp.replace(config["DATA_DIR"], output_dir)
        outp_dir = op.split(outp)[0]
        os.makedirs(outp_dir, exist_ok=True)
        os.link(inp, outp)
    with tarfile.open(output[0], 'w:gz') as tar:
        tar.add(output_dir, arcname=op.split(output_dir)[-1])
    shutil.rmtree(output_dir)
SnakeMake From line 4716 of main/Snakefile
run:
    import shutil
    import re  # needed to parse model and comparison out of the input paths
    import os.path as op  # needed for the path manipulation below
    import tarfile
    import os
    # this is just a temporary directory that we'll delete once we've
    # created the .tar.gz file
    output_dir = output[0].replace('.tar.gz', '')
    os.makedirs(output_dir)
    for inp in input:
        model, comp = re.findall('behavioral/([A-Z0-9]+)_norm.*_comp-([a-z-0-9]+)/task', inp)[0]
        model = {'V1': 'energy', 'RGC': 'luminance'}[model]
        comp = comp.replace('-natural', '-nat').replace('-downsample-2', '_downsample')
        outp = op.join(output_dir, f'{model}_comp-{comp}_data.csv')
        os.link(inp, outp)
    with tarfile.open(output[0], 'w:gz') as tar:
        tar.add(output_dir, arcname=op.split(output_dir)[-1])
    shutil.rmtree(output_dir)
SnakeMake From line 4741 of main/Snakefile
run:
    import os
    os.link(input[0], output[0])
SnakeMake From line 4765 of main/Snakefile
run:
    import shutil
    import os.path as op  # needed for the path manipulation below
    import tarfile
    import os
    # this is just a temporary directory that we'll delete once we've
    # created the .tar.gz file
    output_dir = output[0].replace('.tar.gz', '') + os.sep
    os.makedirs(output_dir)
    for inp in input:
        outp = inp.replace(config["DATA_DIR"], output_dir)
        outp_dir = op.split(outp)[0]
        os.makedirs(outp_dir, exist_ok=True)
        os.link(inp, outp)
    with tarfile.open(output[0], 'w:gz') as tar:
        tar.add(output_dir, arcname=op.split(output_dir)[-1])
    shutil.rmtree(output_dir)
SnakeMake From line 4814 of main/Snakefile
run:
    import subprocess
    from datetime import datetime
    if wildcards.to_share.startswith('metamers_'):
        upload_path = f'metamers/{wildcards.to_share}'
    elif wildcards.to_share.startswith('stimuli_'):
        upload_path = f'stimuli/{wildcards.to_share}'
    elif wildcards.to_share.startswith('mcmc_'):
        upload_path = f'mcmc/{wildcards.to_share}'
    else:
        upload_path = wildcards.to_share
    subprocess.call(['osf', 'upload', '-Uf', input[1], f'osfstorage/{upload_path}'])
    with open(output[0], 'w') as f:
        f.write(f'Uploaded to {upload_path} at {datetime.now()}')
SnakeMake From line 4838 of main/Snakefile
run:
    import shutil
    # because these are two different filesystems, we copy rather than link
    shutil.copy(input[0], output[0])
SnakeMake From line 4862 of main/Snakefile
run:
    import shutil
    # because these are two different filesystems, we copy rather than link
    shutil.copy(input[0], output[0])
SnakeMake From line 4873 of main/Snakefile
run:
    import hashlib
    import json
    with open(input[0], 'rb') as f:
        checksum = hashlib.blake2b(f.read())
    with open(output[0], 'w') as f:
        json.dump({wildcards.filename: checksum.hexdigest()}, f)
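
The per-file digests get merged into one JSON file by the next snippet; a downloader can then verify a file against it. A verification sketch, with hypothetical paths:

import hashlib
import json

with open('checksums.json') as f:
    checksums = json.load(f)
with open('stimuli_energy.tar.gz', 'rb') as f:
    digest = hashlib.blake2b(f.read()).hexdigest()
if digest != checksums['stimuli_energy.tar.gz']:
    raise Exception('checksum mismatch -- re-download the file!')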
run:
    import json
    checksums = {}
    for inp in input:
        with open(inp) as f:
            tmp = json.load(f)
        if len(tmp) > 1:
            raise Exception(f"Expected one file but found {len(tmp)}!")
        # there's only one key, so this grabs it
        if list(tmp.keys())[0] in checksums.keys():
            raise Exception(f"{tmp.keys()[0]} already found in checksums dict!")
        checksums.update(tmp)
    with open(output[0], 'w') as f:
        json.dump(checksums, f)
run:
    import json
    import os.path as op  # needed for op.dirname below
    import foveated_metamers as fov
    ln_path_template = ('{model_path_name}/{target_image}/downsample-{downsampled}/scaling-{scaling}/'
                        'seed-{random_seed}_init-{initialization_type}_gamma-{gamma_corrected}.png')
    output_dir = op.dirname(output[0])
    metadata = fov.utils.rearrange_metamers_for_sharing(input, output_dir, ln_path_template)
    with open(output[0], 'w') as f:
        json.dump(metadata, f)
run:
    import os
    import re  # needed to parse the output filenames below
    import json
    metadata = []
    for inp, outp in zip(input, output[:-1]):
        os.link(inp, outp)
        tgt_img, gamma = re.findall('([a-z]+)_gamma-([A-Za-z]+).png', outp)[0]
        ln_path = outp.replace('/mnt/ceph/users/wbroderick/foveated_metamers_to_share/', '')
        # note: bool('False') == True for any non-empty string, so compare against 'True'
        metadata.append({'file': ln_path, 'target_image': tgt_img, 'gamma': gamma == 'True'})
    with open(output[-1], 'w') as f:
        json.dump(metadata, f)
run:
    import json
    metadata = {}
    with open(input[0]) as f:
        metadata['metamers'] = json.load(f)
    with open(input[1]) as f:
        metadata['natural_images'] = json.load(f)
    with open(output[0], 'w') as f:
        json.dump(metadata, f)
run:
    import shutil
    shutil.copy(input[0], output[0])
SnakeMake From line 5110 of main/Snakefile