Integration of AISD School Catchments and Census Tract Data Workflow


The purpose of this repository is to merge census tract and AISD shapefile data in order to approximate the number of elementary, middle, and high school students from each census tract in a given school catchment. We want to do this for all five counties in the Austin-Round Rock-San Marcos MSA:

  • Williamson

  • Bastrop

  • Travis

  • Hays

  • Caldwell

Kelly P. provided a demonstration of the workflow:

SABS_1516_SchoolLevels.zip notes

  • This directory contains the school catchments for almost every school district in Texas at all grade levels (e.g. elementary, middle, high school) for the 2015-16 school year.

  • Two districts relevant to the Austin Granular Model are missing: Burnet ISD and Austin ISD.

aisd_shapefiles/ notes

  • This directory contains the shapefiles for AISD elementary, middle and high schools.

  • These data are from the 2019-20 school year.

2020_texas_census_tracts/ notes

  • This directory contains the shapefile for all census tracts as defined in 2020.

catchment_census_tract_overlap.ipynb notes

  • This short notebook contains an example of overlapping AISD elementary school catchments with the 2020 census tracts for Texas.

Goals:

  1. Merge the AISD school catchments into the SABS school catchments by grade level to produce three shapefiles: one each for elementary, middle, and high schools across Texas (see the sketch after this list).

  2. Overlap the merged catchment shapefiles with the census tract shapefile and calculate the percentage overlap as (overlap area / census tract area). This gives us the percentage, by area, of each census tract assigned to each school.

  3. Write the results in CSV format (following the example in the notebook).
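
Below is a minimal sketch of goal 1 for the elementary level (the notebooks further down only cover goals 2 and 3). It assumes that appending the AISD catchments to the Texas subset of SABS is an acceptable way to fill the Austin ISD gap, and that reprojecting AISD into the SABS CRS is sufficient; the file paths come from the notebooks, but the merge itself is illustrative, not the repository's implementation:

import geopandas as gpd
import pandas as pd

# paths as used in the notebooks below
sabs_elem = gpd.read_file('data/SABS_1516_SchoolLevels/SABS_1516_Primary.shp')
aisd_elem = gpd.read_file('data/aisd_shapefiles/20_21_elem_updt_v2.shp')

# keep only Texas districts and align the CRS before concatenating
sabs_elem = sabs_elem[sabs_elem['stAbbrev'] == 'TX']
aisd_elem = aisd_elem.to_crs(sabs_elem.crs)

# append the AISD catchments to the SABS layer and write the result
merged_elem = gpd.GeoDataFrame(pd.concat([sabs_elem, aisd_elem], ignore_index=True))
merged_elem.to_file('merged_elementary_catchments.shp')

Repeating the same pattern with the middle and high school layers would yield the three shapefiles named in goal 1.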

Authors

  • Ethan Ho (@eho-tacc)

  • Kelly Pierce

Snakemake Usage

If you use this workflow in a paper, don't forget to give credit to the authors by citing the URL of this (original) repository and, if available, its DOI (see above).

Step 1: Obtain a copy of this workflow

  1. Create a new GitHub repository using this workflow as a template.

  2. Clone the newly created repository to your local system, into the place where you want to perform the data analysis.

Step 2: Configure workflow

Configure the workflow according to your needs by editing the files in the config/ folder. Adjust config.yaml to configure the workflow execution, and samples.tsv to specify your sample setup.
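
The template does not document the specific keys it expects, so the following config.yaml is purely illustrative; every key below is a hypothetical placeholder showing how the input paths and counties from this README might be wired into the workflow:

# config/config.yaml -- all keys are hypothetical illustrations
samples: config/samples.tsv
data_dir: data/
census_tracts: data/2020_texas_census_tracts/2020_texas_census_tracts.shp
sabs_dir: data/SABS_1516_SchoolLevels/
aisd_dir: data/aisd_shapefiles/
msa_counties:
  - Williamson
  - Bastrop
  - Travis
  - Hays
  - Caldwell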

Step 3: Install Snakemake

Install Snakemake using conda:

conda create -c bioconda -c conda-forge -n snakemake snakemake

For installation details, see the instructions in the Snakemake documentation.

Step 4: Execute workflow

Activate the conda environment:

conda activate snakemake

Test your configuration by performing a dry-run via

snakemake --use-conda -n

Execute the workflow locally via

snakemake --use-conda --cores $N

using $N cores or run it in a cluster environment via

snakemake --use-conda --cluster qsub --jobs 100

or

snakemake --use-conda --drmaa --jobs 100

If you want to fix not only the software stack but also the underlying OS, use

snakemake --use-conda --use-singularity

in combination with any of the modes above. See the Snakemake documentation for further details.

Step 5: Investigate results

After successful execution, you can create a self-contained interactive HTML report with all results via:

snakemake --report report.html

This report can, e.g., be forwarded to your collaborators. An example (using some trivial test data) can be seen here.

Step 6: Commit changes

Whenever you change something, don't forget to commit the changes back to your GitHub copy of the repository:

git commit -a
git push

Step 7: Obtain updates from upstream

Whenever you want to synchronize your workflow copy with new developments from upstream, do the following.

  1. Once, register the upstream repository in your local copy: git remote add -f upstream git@github.com:snakemake-workflows/aisd_shapefile_integration.git, or git remote add -f upstream https://github.com/snakemake-workflows/aisd_shapefile_integration.git if you have not set up SSH keys.

  2. Update the upstream version: git fetch upstream.

  3. Create a diff with the current version: git diff HEAD upstream/master workflow > upstream-changes.diff.

  4. Investigate the changes: vim upstream-changes.diff.

  5. Apply the modified diff via: git apply upstream-changes.diff.

  6. Carefully check whether you need to update the config files: git diff HEAD upstream/master config. If so, do it manually, and only where necessary, since you would otherwise likely overwrite your settings and samples.

Step 8: Contribute back

In case you have also changed or added steps, please consider contributing them back to the original repository:

  1. Fork the original repo to a personal or lab account.

  2. Clone the fork to your local system, to a different place than where you ran your analysis.

  3. Copy the modified files from your analysis to the clone of your fork, e.g., cp -r workflow path/to/fork. Make sure not to accidentally copy config file contents or sample sheets. Instead, manually update the example config files if necessary.

  4. Commit and push your changes to your fork.

  5. Create a pull request against the original repository.

Testing

Test cases are in the subfolder .test. They are automatically executed via continuous integration with GitHub Actions.

Code Snippets

Snippet 1

import geopandas as gpd
import descartes
import pandas as pd
import os
import matplotlib.pyplot as plt
from mpl_toolkits.axes_grid1 import make_axes_locatable
import datetime
import numpy as np
import contextily as ctx

data_dir = '../../data/'
atx_ct_shp = gpd.read_file(os.path.join(data_dir, '2020_texas_census_tracts/2020_texas_census_tracts.shp'))
es_shp = gpd.read_file(os.path.join(data_dir, 'aisd_shapefiles/20_21_elem_updt_v2.shp'))
sabs_shp = gpd.read_file(os.path.join(data_dir, 'SABS_1516_SchoolLevels/SABS_1516_Primary.shp'))

# restrict SABS to Texas districts
sabs_shp = sabs_shp[sabs_shp['stAbbrev'] == 'TX']
sabs_shp.shape

# inspect the CRS of each layer before overlaying
print(atx_ct_shp.crs.name)
print(es_shp.crs.name)
print(sabs_shp.crs.name)

# reproject the tracts and AISD catchments to Web Mercator;
# the SABS layer is left as-is because it is already in that CRS
atx_shp_osm = atx_ct_shp.to_crs(epsg=3857)
es_shp_osm = es_shp.to_crs(epsg=3857)
sabs_shp_osm = sabs_shp

assert atx_shp_osm.crs == es_shp_osm.crs == sabs_shp_osm.crs

sabs_shp_osm.head()
es_shp_osm.head()
atx_shp_osm.head()

es_shp_osm.plot(figsize=(10, 7))

es_shp_osm['es_area'] = es_shp_osm.area
atx_shp_osm['cbg_area'] = atx_shp_osm.area

es_atx = gpd.overlay(atx_shp_osm, es_shp_osm, how='intersection')

es_atx['overlap_area'] = es_atx.area
# what percent of each census tract falls within the elementary catchment?
es_atx['pct_overlap'] = (es_atx['overlap_area'] / es_atx['cbg_area']) * 100

overlaps = es_atx[['GEOID', 'ISD', 'CAMPUS', 'SCHL_YEAR', 'pct_overlap']].copy()
overlaps['level'] = 'elementary'
overlaps.to_csv('AISD_elem_census_tract_overlaps.csv')
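
As a quick sanity check (not part of the original notebook), the per-tract percentages in the CSV should sum to roughly 100 for tracts that fall entirely inside AISD, and to less than 100 for tracts only partially covered:

import pandas as pd

df = pd.read_csv('AISD_elem_census_tract_overlaps.csv')
# distribution of summed pct_overlap per census tract
print(df.groupby('GEOID')['pct_overlap'].sum().describe())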
Snippet 2 (IPython session history)

cd /Users/TASethanho/tacc/projects/meyers/covid/aisd_shapefile_integration
ls data/2018_msas/

from glob import glob as g

msa_shps = g('./data/2018_msas/*.shp')
msas = [gpd.read_file(fp) for fp in msa_shps]

# geopandas has no concat of its own; concatenate with pandas
# and re-wrap the result as a GeoDataFrame
gpd.GeoDataFrame(pd.concat(msas))
gpd.GeoDataFrame(pd.concat(msas)).plot()
gpd.GeoDataFrame(pd.concat(msas)).plot(figsize=(20, 10))

tx_msas = gpd.GeoDataFrame(pd.concat(msas))
tx_msas = tx_msas[tx_msas['NAME'].str.contains(', TX', case=False)]
tx_msas.plot(figsize=(20, 10))

tx_msas[tx_msas['NAME'].str.contains('Austin')]
tx_msas[tx_msas['NAME'].str.contains('Austin')].plot(figsize=(20, 10))
Snippet 3

import geopandas as gpd
import descartes
import pandas as pd
import os
from glob import glob
import matplotlib.pyplot as plt
from mpl_toolkits.axes_grid1 import make_axes_locatable
import datetime
import numpy as np
import contextily as ctx

data_dir = '../../data/'

def find_fp_with_atx(query: str) -> gpd.GeoDataFrame:
    """Find the census shp that contains 'large' MSAs like Austin.
    Should be 25 in TX."""
    for fp in glob(query):
        gdf = gpd.read_file(fp)
        # Austin MSA
        has_atx = gdf['NAME'].str.contains('Austin', case=False)
        if has_atx.any():
            print(fp)
            return gdf[has_atx]
    raise FileNotFoundError(f"no shapefile matching {query} contains an Austin MSA")

find_fp_with_atx(os.path.join(data_dir, '2018_msas/*.shp'))

atx_ct_shp = gpd.read_file(os.path.join(data_dir, '2020_texas_census_tracts/2020_texas_census_tracts.shp'))
es_shp = gpd.read_file(os.path.join(data_dir, 'aisd_shapefiles/20_21_elem_updt_v2.shp'))
sabs_shp = gpd.read_file(os.path.join(data_dir, 'SABS_1516_SchoolLevels/SABS_1516_Primary.shp'))

msa_query = os.path.join(data_dir, '2018_msas/*cbsa*.shp')

def get_msas(query: str) -> gpd.GeoDataFrame:
    """Concatenate all GDFs in glob `query`."""
    gdfs = [gpd.read_file(fp) for fp in glob(query)]
    assert gdfs, "query returned null"
    return gpd.GeoDataFrame(pd.concat(gdfs))

msas_shp = get_msas(msa_query)

sabs_shp = sabs_shp[sabs_shp['stAbbrev'] == 'TX']
tx_msas = msas_shp[msas_shp['NAME'].str.contains('TX', case=False)]
print(f"shape of MSAs GDF: {tx_msas.shape}")

print(
    tx_msas['NAME'].unique(),
    len(tx_msas['NAME'].unique()))

atx_msas = tx_msas[tx_msas['NAME'].str.contains('Austin', case=False)]
atx_msas

print(atx_ct_shp.crs.name)
print(es_shp.crs.name)
print(sabs_shp.crs.name)
print(atx_msas.crs.name)

# reproject to Web Mercator; the SABS layer is already in that CRS
atx_shp_osm = atx_ct_shp.to_crs(epsg=3857)
es_shp_osm = es_shp.to_crs(epsg=3857)
msa_shp_osm = atx_msas.to_crs(epsg=3857)
sabs_shp_osm = sabs_shp

assert atx_shp_osm.crs == es_shp_osm.crs == sabs_shp_osm.crs == msa_shp_osm.crs

sabs_shp_osm.head()
es_shp_osm.head()
atx_shp_osm.head()

es_shp_osm.plot(figsize=(10, 7))

gpd.overlay(es_shp_osm, msa_shp_osm, how="intersection").plot()

sabs_shp_osm.shape
sabs_shp_osm['ncessch'].unique().shape

# NCES school IDs for SABS catchments that intersect the Austin MSA
ix_keys = gpd.overlay(sabs_shp_osm, msa_shp_osm, how="intersection")['ncessch'].unique()
ix_keys

sabs_shp_osm[sabs_shp_osm['ncessch'].isin(ix_keys)].plot()

msa_shp_osm.plot()

gpd.overlay(msa_shp_osm, sabs_shp_osm, how="intersection").plot()

gpd.overlay(
    sabs_shp_osm[sabs_shp_osm['ncessch'].isin(ix_keys)],
    msa_shp_osm, how="intersection").plot()

# parts of the MSA not covered by any SABS catchment (the missing-district gap)
gpd.overlay(
    msa_shp_osm,
    sabs_shp_osm[sabs_shp_osm['ncessch'].isin(ix_keys)],
    # how="symmetric_difference"
    how="difference"
).plot()

es_shp_osm['es_area'] = es_shp_osm.area
atx_shp_osm['cbg_area'] = atx_shp_osm.area

es_atx = gpd.overlay(atx_shp_osm, es_shp_osm, how='intersection')
es_atx.plot()

gpd.overlay(
    gpd.overlay(
        sabs_shp_osm[sabs_shp_osm['ncessch'].isin(ix_keys)],
        msa_shp_osm, how="intersection"),
    es_atx,
    how="union"
).plot()

es_atx['overlap_area'] = es_atx.area
# what percent of each census tract falls within the elementary catchment?
es_atx['pct_overlap'] = (es_atx['overlap_area'] / es_atx['cbg_area']) * 100

overlaps = es_atx[['GEOID', 'ISD', 'CAMPUS', 'SCHL_YEAR', 'pct_overlap']].copy()
overlaps['level'] = 'elementary'
overlaps.to_csv('AISD_elem_census_tract_overlaps.csv')
Snippet 4

import geopandas as gpd
import pandas as pd
import os
from glob import glob

data_dir = '../../data/'
sabs_shp_usa = gpd.read_file(os.path.join(data_dir, 'SABS_1516_SchoolLevels/SABS_1516_Primary.shp'))

msa_query = os.path.join(data_dir, '2018_msas/*cbsa*.shp')

def get_msas(query: str) -> gpd.GeoDataFrame:
    """Concatenate all GDFs in glob `query`."""
    gdfs = [gpd.read_file(fp) for fp in glob(query)]
    assert gdfs, "query returned null"
    return gpd.GeoDataFrame(pd.concat(gdfs))

msas_shp = get_msas(msa_query)

atx_msas = msas_shp[msas_shp['NAME'].str.contains('Austin-Round', case=False)]
atx_msas.head()

sabs_shp = sabs_shp_usa[sabs_shp_usa['stAbbrev'] == 'TX']

print(sabs_shp.crs.name)
print(atx_msas.crs.name)

# reproject the MSAs to Web Mercator; the SABS layer is already in that CRS
atx_msas_osm = atx_msas.to_crs(epsg=3857)
sabs_osm = sabs_shp

assert atx_msas_osm.crs == sabs_osm.crs

atx_msas_osm.plot()

# uncomment to view all SABS data in TX
# sabs_osm.plot()

olap_sabs_msa = gpd.overlay(atx_msas_osm, sabs_osm, how="intersection")
olap_sabs_msa.head()

olap_sabs_msa.plot()

atx_msas_osm.plot()

aisd_shp_osm = (
    gpd
    .read_file(os.path.join(data_dir, 'aisd_shapefiles/20_21_elem_updt_v2.shp'))
    .to_crs(epsg=3857)
)
assert atx_msas_osm.crs == sabs_osm.crs == aisd_shp_osm.crs
aisd_shp_osm.plot()

# union the MSA-clipped SABS catchments with the AISD elementary catchments
with_aisd = gpd.overlay(olap_sabs_msa, aisd_shp_osm, how="union")
with_aisd.plot()
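
The snippets stop at area-percentage CSVs. To reach the stated purpose of the repository, approximating student counts per tract per catchment, one would join tract-level enrollment onto the overlap table and apportion by area. The sketch below assumes a hypothetical enrollment file tract_enrollment.csv with columns GEOID and students_5_17; both the file and the column name are illustrative:

import pandas as pd

# hypothetical enrollment table: one row per tract, columns GEOID and students_5_17
overlaps = pd.read_csv('AISD_elem_census_tract_overlaps.csv')
enrollment = pd.read_csv('tract_enrollment.csv')

merged = overlaps.merge(enrollment, on='GEOID', how='left')
# apportion each tract's students to catchments in proportion to areal overlap
merged['est_students'] = merged['students_5_17'] * merged['pct_overlap'] / 100
print(merged.groupby('CAMPUS')['est_students'].sum().sort_values(ascending=False).head())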
Repository: https://github.com/eho-tacc/aisd_shapefile_integration
License: MIT License