PISM Experiment Workflow for Data Preprocessing, Model Runs, Tuning, and Postprocessing Using Snakemake


Snakemake workflow for my PISM experiments. It handles input data preprocessing (download, unit conversion, remapping, merging), model runs (which can be submitted directly to SLURM on HLRN), tuning (expanding parameter combinations via itertools.product), and postprocessing (visualization).
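The tuning step expands parameter combinations with itertools.product; a minimal sketch of that expansion (the parameter names and values here are illustrative, not the workflow's actual configuration):

```python
from itertools import product

# hypothetical tuning ranges; the real workflow takes these from its config
sia_e_values = [1.0, 2.0, 5.0]
ppq_values = [0.25, 0.4]

# one run name per combination, e.g. to feed into expand() in a Snakemake rule
runs = [f"run_sia{s}_ppq{p}" for s, p in product(sia_e_values, ppq_values)]
# -> 6 run names, starting with "run_sia1.0_ppq0.25"
```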

Note: pynco must be installed from GitHub. Add this line to the environment:

git+https://github.com/nco/pynco/

Examples

Prepare a dataset

snakemake --cores 4 results/PISM_file/MillenialScaleOscillations_climatology_NHEM_20km.nc

Submit to SLURM

snakemake --profile slurmsimple gi_heinrich_first

Show the graph

snakemake --forceall --dag results/PISM_file/MillenialScaleOscillations_climatology_NHEM_20km.nc | dot -Tpng > dag.png

Plots

Heinrich events


Code Snippets

shell:
    "python3 workflow/scripts/prepare_CESM_atmo.py {input.grid} {input.atmo} {input.stddev} {output.main} {output.refheight}"
shell:
    "python3 workflow/scripts/prepare_CESM_atmo.py {input.grid} {input.atmo} {input.stddev} {output.main} {output.refheight} --yearmean"
shell:
    "python3 workflow/scripts/prepare_CESM_ocean.py {input.grid} {input.ocean} {output.main}"
shell:
    "python3 workflow/scripts/prepare_CESM_ocean.py {input.grid} {input.ocean} {output.main} --yearmean"
shell:
  "python3 workflow/scripts/prepare_glacialindex.py {input.index} {output.main}"
shell:
  "python3 workflow/scripts/generate_fake_glacialindex.py {output.main}"
shell:
    "cdo remapbil,{input.grid} {input.atmo} {output.main}"
shell:
    "cdo remapbil,{input.grid} {input.ocean} {output.main}"
shell:
    "cdo remapbil,{input.grid} {input.refheight} {output.refheight}"
shell:
    """
    cdo remapbil,{input.grid} {input.atmo} {output.main}
    python3 workflow/scripts/set_climatology_time.py {output.main}
    """
shell:
    """
    python3 workflow/scripts/generate_delta.py {input.time_series} {input.climatology} {output.main} {params.varname} --smoothing {wildcards.smoothing}
    ncap2 -O -s 'defdim("nv",2);time_bnds=make_bounds(time,$nv,"time_bnds");' {output.main} tmp_with_bnds.nc
    mv tmp_with_bnds.nc {output.main}
    """
shell:
    """
    python3 workflow/scripts/generate_delta.py {input.time_series} {input.climatology} {output.main} air_temp --smoothing {wildcards.smoothing}
    ncap2 -O -s 'defdim("nv",2);time_bnds=make_bounds(time,$nv,"time_bnds");' {output.main} tmp_with_bnds.nc
    mv tmp_with_bnds.nc {output.main}
    """
shell:
    """
    python3 workflow/scripts/generate_delta.py {input.time_series} {input.climatology} {output.main} {params.varname} --flattenall
    ncap2 -O -s 'defdim("nv",2);time_bnds=make_bounds(time,$nv,"time_bnds");' {output.main} tmp_with_bnds.nc
    mv tmp_with_bnds.nc {output.main}
    """
shell:
    """
    ncks {input.atmo} {output.main}
    ncks -A {input.ocean} {output.main}
    ncks -A {input.heatflux} {output.main}
    ncks -A -v topg {input.topg} {output.main}
    ncks -A -v thk {input.thk} {output.main}
    ncks -A {input.oceankill} {output.main}

    cp {input.refheight} {output.refheight}
    """
shell:
    "workflow/scripts/assemble_model_glacialindex.sh {input.atmo0} {input.atmo1} {input.ocean0} {input.ocean1} {input.index} {input.heatflux} {input.topg} {input.thk} {input.oceankill} none {output.main}"
shell:
    "workflow/scripts/assemble_model_glacialindex.sh {input.atmo0} {input.atmo1} {input.ocean0} {input.ocean1} {input.index} {input.heatflux} {input.topg} {input.thk} {input.oceankill} none {output.main}"
shell:
    "workflow/scripts/assemble_model_glacialindex.sh {input.atmo0} {input.atmo1} {input.ocean0} {input.ocean1} {input.refheight0} {input.refheight1} {input.index} {input.heatflux} {input.topg} {input.thk} {input.oceankill} {input.tillphi} {output.main}"
shell:
    "workflow/scripts/assemble_model_glacialindex.sh {input.atmo0} {input.atmo1} {input.ocean0} {input.ocean1} {input.refheight0} {input.refheight1} {input.index} {input.heatflux} {input.topg} {input.thk} {input.oceankill} {input.tillphi} {output.main}"
shell:
    """
    ncks {input.atmo} {output.main}
    ncks -A {input.ocean} {output.main}
    ncks -A -v bheatflx {input.heatflux} {output.main}
    ncks -A -v topg {input.topg} {output.main}
    ncks -A -v thk {input.thk} {output.main}
    ncks -A {input.oceankill} {output.main}

    cp {input.refheight} {output.refheight}
    """
shell:
    """
    ncks {input.atmo} {output.main}
    ncks -A {input.ocean} {output.main}
    ncks -A -v bheatflx {input.heatflux} {output.main}
    ncks -A -v topg {input.topg} {output.main}
    ncks -A -v thk {input.thk} {output.main}
    ncks -A {input.oceankill} {output.main}

    # delete history
    ncatted -a history,global,d,, {output.main}
    ncatted -a history_of_appended_files,global,d,, {output.main}

    cp {input.refheight} {output.refheight}
    """
shell:
    """
    ncks {input.atmo} {output.main}
    ncks -A {input.ocean} {output.main}
    ncks -A {input.heatflux} {output.main}
    ncks -A -v topg {input.topg} {output.main}
    ncks -A -v thk {input.thk} {output.main}
    ncks -A {input.oceankill} {output.main}
    ncks -A {input.tillphi} {output.main}

    cp {input.refheight} {output.refheight}
    """
shell:
    """
    ncks {input.atmo} {output.main}
    # don't add ocean, it has a different time base
    # ncks -A {input.ocean} {output.main}
    ncks -A {input.heatflux} {output.main}
    ncks -A -v topg {input.topg} {output.main}
    ncks -A -v thk {input.thk} {output.main}
    ncks -A {input.oceankill} {output.main}
    ncks -A {input.tillphi} {output.main}

    cp {input.refheight} {output.refheight}
    """
shell:
    """
    ncks {input.atmo} {output.main}
    # don't add ocean, it has a different time base
    ncks -A {input.heatflux} {output.main}
    ncks -A -v topg {input.topg} {output.main}
    ncks -A -v thk {input.thk} {output.main}
    ncks -A {input.oceankill} {output.main}
    ncks -A {input.tillphi} {output.main}

    cp {input.refheight} {output.refheight}
    """
shell:
    """
    ncks {input.atmo} {output.main}
    # don't add ocean, it has a different time base
    ncks -A {input.heatflux} {output.main}
    ncks -A -v topg {input.topg} {output.main}
    ncks -A -v thk {input.thk} {output.main}
    ncks -A {input.oceankill} {output.main}
    ncks -A {input.tillphi} {output.main}

    cp {input.refheight} {output.refheight}
    """
  shell:
    """
{params.header} \\
-time_stepping.maximum_time_step {resources.max_dt} \\
-bootstrap True \\
-ocean.th.periodic True \\
-atmosphere.given.periodic True \\
-surface.pdd.std_dev.periodic True \\
-i {input.main} \\
-o {output.main} \\
-ts_file {output.ts} \\
-extra_file {output.ex} \\
-ys {params.start} \\
-ye {params.stop} \\
-ts_times {params.ts_times} \\
-extra_times {params.ex_times} \\
-front_retreat_file {input.main} \\
-bed_def lc \\
-sea_level constant,delta_sl \\
-ocean_delta_sl_file {input.sealevel} \\
-ocean th \\
-ocean_th_file {input.main} \\
{params.grid} \\
{params.extra_vars} \\
{params.climate} \\
{params.dynamics} \\
{params.calving} \\
{params.always_on} \\
{params.marine_ice_sheets} \\

    """
  shell:
    """
{params.header} \\
-time_stepping.maximum_time_step {resources.max_dt} \\
-bootstrap False \\
-ocean.th.periodic True \\
-atmosphere.given.periodic True \\
-surface.pdd.std_dev.periodic True \\
-i {input.restart} \\
-o {output.main} \\
-ts_file {output.ts} \\
-extra_file {output.ex} \\
-y {params.duration} \\
-ts_times {params.ts_times} \\
-extra_times {params.ex_times} \\
-front_retreat_file {input.main} \\
-bed_def lc \\
-sea_level constant,delta_sl \\
-ocean_delta_sl_file {input.sealevel} \\
-ocean th \\
-ocean_th_file {input.main} \\
{params.grid} \\
{params.extra_vars} \\
{params.climate} \\
{params.dynamics} \\
{params.calving} \\
{params.always_on} \\
{params.marine_ice_sheets} \\

    """
shell:
    """
    cdo -O mergetime {input} {output}
    """
  shell:
    """
{params.header} \\
-bootstrap True \\
-ocean.th.periodic True \\
-atmosphere.given.periodic True \\
-surface.pdd.std_dev.periodic True \\
-i {input.main} \\
-o {output.main} \\
-ts_file {output.ts} \\
-extra_file {output.ex} \\
-ys {params.start} \\
-ye {params.stop} \\
-ts_times {params.ts_times} \\
-extra_times {params.ex_times} \\
-front_retreat_file {input.main} \\
-bed_def lc \\
-sea_level constant,delta_sl \\
-ocean_delta_sl_file {input.sealevel} \\
-ocean th \\
-ocean_th_file {input.main} \\
{params.grid} \\
{params.extra_vars} \\
{params.climate} \\
{params.dynamics} \\
{params.calving} \\
{params.always_on} \\
{params.marine_ice_sheets} \\

    """
  shell:
    """
{params.header} \\
-bootstrap False \\
-ocean.th.periodic True \\
-atmosphere.given.periodic True \\
-surface.pdd.std_dev.periodic True \\
-i {input.restart} \\
-o {output.main} \\
-ts_file {output.ts} \\
-extra_file {output.ex} \\
-y {params.duration} \\
-ts_times {params.ts_times} \\
-extra_times {params.ex_times} \\
-front_retreat_file {input.main} \\
-bed_def lc \\
-sea_level constant,delta_sl \\
-ocean_delta_sl_file {input.sealevel} \\
-ocean th \\
-ocean_th_file {input.main} \\
{params.grid} \\
{params.extra_vars} \\
{params.climate} \\
{params.dynamics} \\
{params.calving} \\
{params.always_on} \\
{params.marine_ice_sheets} \\

    """
  shell:
    """
{params.header} \\
-bootstrap True \\
-atmosphere.given.periodic True \\
-atmosphere.file {input.main} \\
-surface.pdd.std_dev.periodic True \\
-i {input.main} \\
-o {output.main} \\
-ts_file {output.ts} \\
-extra_file {output.ex} \\
-ys {params.start} \\
-ye {params.stop} \\
-ts_times {params.ts_times} \\
-extra_times {params.ex_times} \\
-front_retreat_file {input.main} \\
-ocean pik \\
{params.grid} \\
{params.extra_vars} \\
{params.climate} \\
{params.dynamics} \\
{params.calving} \\
{params.always_on} \\
{params.marine_ice_sheets} \\

    """
shell:
    """
    ncks {input.atmo} {output.main}
    ncks -A {input.ocean} {output.main}
    ncks -A {input.heatflux} {output.main}
    ncks -A -v topg {input.topg} {output.main}
    ncks -A -v thk {input.thk} {output.main}
    ncks -A {input.oceankill} {output.main}
    ncks -A {input.tillphi} {output.main}

    cp {input.refheight} {output.refheight}
    """
  shell:
    """
{params.header} \\
-bootstrap True \\
-atmosphere.given.periodic True \\
-surface.pdd.std_dev.periodic True \\
-i {input.main} \\
-o {output.main} \\
-ts_file {output.ts} \\
-extra_file {output.ex} \\
-ys {params.start} \\
-ye {params.stop} \\
-ts_times {params.ts_times} \\
-extra_times {params.ex_times} \\
-front_retreat_file {input.main} \\
-ocean th \\
-ocean.th.periodic True \\
-ocean_th_file {input.main} \\
{params.grid} \\
{params.extra_vars} \\
{params.climate} \\
{params.dynamics} \\
{params.calving} \\
{params.always_on} \\
{params.marine_ice_sheets} \\

    """
  shell:
    """
{params.header} \\
-time_stepping.maximum_time_step {resources.max_dt} \\
-bootstrap True \\
-sea_level.constant.value -120 \\
-atmosphere.given.periodic True \\
-atmosphere.file {input.main} \\
-surface.pdd.std_dev.periodic True \\
-i {input.main} \\
-o {output.main} \\
-ts_file {output.ts} \\
-extra_file {output.ex} \\
-ys {params.start} \\
-ye {params.stop} \\
-ts_times {params.ts_times} \\
-extra_times {params.ex_times} \\
-front_retreat_file {input.main} \\
-ocean pik \\
{params.grid} \\
{params.extra_vars} \\
{params.climate} \\
{params.dynamics} \\
{params.calving} \\
{params.always_on} \\
{params.marine_ice_sheets} \\

    """
  shell:
    """
{params.header} \\
-time_stepping.maximum_time_step {resources.max_dt} \\
-bootstrap False \\
-sea_level.constant.value -120 \\
-atmosphere.given.periodic True \\
-atmosphere.given.file {input.main} \\
-surface.pdd.std_dev.periodic True \\
-i {input.restart} \\
-o {output.main} \\
-ts_file {output.ts} \\
-extra_file {output.ex} \\
-y {params.duration} \\
-ts_times {params.ts_times} \\
-extra_times {params.ex_times} \\
-front_retreat_file {input.main} \\
-ocean pik \\
{params.grid} \\
{params.extra_vars} \\
{params.climate} \\
{params.dynamics} \\
{params.calving} \\
{params.always_on} \\
{params.marine_ice_sheets} \\

    """
shell:
    """
    cdo -O mergetime {input} {output}
    """
shell:
    "python3 workflow/scripts/prepare_shapiro.py {input.grid} {input.heatflux} {output} "
shell:
    "./workflow/scripts/pism_example_preprocess.sh"
  shell:
    """
spack load pism-sbeyer@current

srun pismr \\
-i {input} \\
-bootstrap \\
-o {output.main}.nc \\
-Mx 76 -My 141 -Mz 101 -Mbz 11 \\
-z_spacing equal -Lz 4000 -Lbz 2000 \\
-skip -skip_max 10 \\
-grid.recompute_longitude_and_latitude false \\
-grid.registration corner \\
-ys -10000 \\
-ye 0 \\
-surface given \\
-surface_given_file pism_Greenland_5km_v1.1.nc \\
-front_retreat_file pism_Greenland_5km_v1.1.nc \\
-sia_e 3.0 \\
-ts_file {output.ts} \\
-ts_times -10000:yearly:0 \\
-extra_file {output.ex} \\
-extra_times -10000:100:0 \\
-extra_vars diffusivity,temppabase,tempicethk_basal,bmelt,tillwat,velsurf_mag,mask,thk,topg,usurf \\

"""
  shell:
    """
spack load pism-sbeyer@master

srun pismr \
  -bootstrap True \
  -timestep_hit_multiples 1 \
  -options_left True \
  -tauc_slippery_grounding_lines True \
  -stress_balance ssa+sia \
  -pseudo_plastic True \
  -Mx 625 \
  -My 625 \
  -Mz 101 \
  -Mbz 11 \
  -Lz 5000 \
  -Lbz 2000 \
  -calving eigen_calving,thickness_calving \
  -thickness_calving_threshold 200 \
  -ocean th \
  -ocean_th_period 1 \
  -part_grid True \
  -cfbc True \
  -kill_icebergs True \
  -eigen_calving_K 1e16 \
  -subgl True \
  -ocean.th.periodic True \
  -pdd_sd_period 1 \
  -atmosphere given,elevation_change \
  -atmosphere_given_period 1 \
  -temp_lapse_rate 0 \
  -surface pdd \
  -surface.pdd.factor_ice 0.019 \
  -surface.pdd.factor_snow 0.005 \
  -surface.pdd.refreeze 0.1 \
  -surface.pdd.std_dev.periodic True \
  -atmosphere.given.periodic True \
  -sia_e 2 \
  -ssa_e 1 \
  -pseudo_plastic_q 0.25 \
  -till_effective_fraction_overburden 0.02 \
  -ys 0 \
  -ye 100 \
  -ts_times 10 \
  -extra_times 100 \
  -extra_vars thk,velsurf_mag,tillwat,velbase_mag,mask,climatic_mass_balance,temppabase,ice_surface_temp,air_temp_snapshot,topg,tauc,velsurf,surface_runoff_flux,tendency_of_ice_amount_due_to_basal_mass_flux,tendency_of_ice_amount_due_to_discharge \
  -o {output.main} \
  -ts_file {output.ts} \
  -extra_file {output.ex} \
  -i {input.main} \
  -front_retreat_file {input.main} \
  -pdd_sd_file {input.main} \
  -atmosphere_lapse_rate_file {input.refheight} \
    """
  shell:
    """
#spack load pism-sbeyer@master

mpirun -np 4 ~/pism-sbeyer/bin/pismr \
  -test_climate_models \
  -bootstrap True \
  -timestep_hit_multiples 1 \
  -options_left True \
  -stress_balance ssa+sia \
  -Mx 76 \
  -My 141 \
  -Mz 101 \
  -Mbz 11 \
  -Lz 4000 \
  -Lbz 2000 \
  -ocean pik \
  -atmosphere given \
  -surface pdd \
  -atmosphere.given.periodic True \
  -ys 0 \
  -ye 1 \
  -ts_times 1 \
  -extra_times daily \
  -extra_vars thk,climatic_mass_balance,ice_surface_temp,air_temp_snapshot,effective_air_temp,effective_precipitation,pdd_fluxes,pdd_rates \
  -o {output.main} \
  -ts_file {output.ts} \
  -extra_file {output.ex} \
  -i {input.main} \
  -front_retreat_file {input.main} \

    """
  shell:
    """
#spack load pism-sbeyer@master

mpirun -np 4 ~/pism-sbeyer/bin/pismr \
  -test_climate_models \
  -bootstrap True \
  -timestep_hit_multiples 1 \
  -options_left True \
  -stress_balance ssa+sia \
  -Mx 76 \
  -My 141 \
  -Mz 101 \
  -Mbz 11 \
  -Lz 4000 \
  -Lbz 2000 \
  -ocean pik \
  -atmosphere given \
  -surface pdd \
  -ys 0 \
  -ye 1 \
  -ts_times 1 \
  -extra_times daily \
  -extra_vars thk,climatic_mass_balance,ice_surface_temp,air_temp_snapshot,effective_air_temp,effective_precipitation,pdd_fluxes,pdd_rates \
  -o {output.main} \
  -ts_file {output.ts} \
  -extra_file {output.ex} \
  -i {input.main} \
  -front_retreat_file {input.main} \

    """
  shell:
    """

mpirun -np 4 ~/pism-sbeyer/bin/pismr \
  -bootstrap True \
  -timestep_hit_multiples 1 \
  -options_left True \
  -tauc_slippery_grounding_lines True \
  -Mx 625 \
  -My 625 \
  -Mz 101 \
  -Mbz 11 \
  -Lz 5000 \
  -Lbz 2000 \
  -calving eigen_calving,thickness_calving \
  -thickness_calving_threshold 200 \
  -ocean th \
  -ocean_th_period 1 \
  -ocean.th.periodic True \
  -part_grid True \
  -cfbc True \
  -kill_icebergs True \
  -eigen_calving_K 1e16 \
  -subgl True \
  -atmosphere_given_file {input.main} \
  -atmosphere index_forcing \
  -atmosphere_given_period 1 \
  -temp_lapse_rate 5 \
  -surface pdd \
  -surface.pdd.air_temp_all_precip_as_rain 275.15 \
  -surface.pdd.air_temp_all_precip_as_snow 271.15 \
  -surface.pdd.factor_ice 0.019 \
  -surface.pdd.factor_snow 0.005 \
  -surface.pdd.refreeze 0.1 \
  -surface.pdd.std_dev.periodic True \
  -atmosphere.given.periodic True \
  -atmosphere.index.periodic True \
  -sia_e 5 \
  -ssa_e 1 \
  -pseudo_plastic_q 0.4 \
  -till_effective_fraction_overburden 0.02 \
  -topg_to_phi 5,40,-300,700 \
  -ys -120000 \
  -ye -119900 \
  -ts_times 10 \
  -extra_times 100 \
  -extra_vars thk,usurf,velsurf_mag,bwat,velbase_mag,mask,climatic_mass_balance,effective_precipitation,effective_air_temp,temppabase \
  -atmosphere_index_file {input.main} \
  -bed_def lc \
  -o {output.main} \
  -ts_file {output.ts} \
  -extra_file {output.ex} \
  -i {input.main} \
  -front_retreat_file {input.main} \
    """
shell:
      "bash workflow/scripts/plot/make_NHEM_regions_for_plot.sh {input.grid} {input.shapefile} {output.main}"
shell:
  "bash workflow/scripts/plot/plot_surges_1d.py {input.main} {input.regions} {output.main}"
shell:
  "python3 workflow/scripts/plot/plot_surges_1d.py {input.main} {input.regions} {output.main}"
shell:
  "python3 workflow/scripts/plot/plot_sealevel.py --observed datasets/sealevel/pism_dSL_Imbrie2006.nc {input.main} {output.main}"
shell:
  "python3 workflow/scripts/plot/plot_nhem_nice.py {input.main} {output.main} --timestep {resources.timestep}"
  shell:
    """
DATAVERSION=1.1
DATAURL=http://websrv.cs.umt.edu/isis/images/a/a5/
DATANAME=Greenland_5km_v$DATAVERSION.nc

echo "fetching master file ... "
wget ${{DATAURL}}${{DATANAME}} -O {output}
echo "  ... done."
    """
  shell:
    """
# extract paleo-climate time series into files suitable for option
# -sea_level ...,delta_SL
DATANAME={input}
SLSERIES={output}
echo -n "creating paleo-sea-level file $SLSERIES from $DATANAME ... "
ncks -O -v sealeveltimes,sealevel_time_series $DATANAME $SLSERIES
ncrename -O -d sealeveltimes,time $SLSERIES
ncrename -O -v sealeveltimes,time $SLSERIES
ncrename -O -v sealevel_time_series,delta_SL $SLSERIES
# reverse time dimension
ncpdq -O --rdr=-time $SLSERIES $SLSERIES
# make times follow same convention as PISM
ncap2 -O -s "time=-time" $SLSERIES $SLSERIES
ncatted -O -a units,time,m,c,"years since 1-1-1" $SLSERIES
ncatted -O -a calendar,time,c,c,"365_day" $SLSERIES
echo "done."

# add time bounds (https://www.pism.io/docs/climate_forcing/time-dependent.html)
ncap2 -O -s 'defdim("nv",2);time_bnds=make_bounds(time,$nv,"time_bnds");' \
      $SLSERIES $SLSERIES
echo

    """
shell:
    """
    wget -O {output} https://www.ncei.noaa.gov/pub/data/paleo/contributions_by_author/spratt2016/spratt2016.txt
    """
shell:
  "python3 workflow/scripts/prepare_LaskeMasters.py {input.grid} {input.sediment} {output} "
shell:
    """
    python3 workflow/scripts/prepare_tillphi.py {input.topg} {input.sediment} none {output} \
        {params.sediment_threshold} \
        {params.phi_min} \
        {params.phi_max} \
        {params.topg_min} \
        {params.topg_max} \
        {params.tauc_factor} \

    """
shell:
    """
    python3 workflow/scripts/prepare_ETOPO1.py {input.grid} {input.topg} {input.thk} output_tmp.nc
    ncatted -O -a _FillValue,,d,, output_tmp.nc
    ncatted -O -a missing_value,,d,, output_tmp.nc
    ncatted -O -a actual_range,,d,, output_tmp.nc
    cdo copy output_tmp.nc {output}
    rm output_tmp.nc
    """
shell:
    "python3 workflow/scripts/prepare_ICE-7G.py {input.grid} {input.main} {output} "
shell:
    """
    wget -O {output} https://www.physics.mun.ca/~lev/GLAC1Dnn9927NAGrB120kto30k.nc
    """
shell:
    """
    wget -O {output} https://www.physics.mun.ca/~lev/GLAC1Dnn9927NAGrB30kto0k.nc
    """
shell:
    """
    wget -O {output} https://www.physics.mun.ca/~lev/GLAC1Dnn9894NAGrB120kto30k.nc
    """
shell:
    """
    wget -O {output} https://www.physics.mun.ca/~lev/GLAC1Dnn9894NAGrB30kto0k.nc
    """
shell:
    """
    wget -O {output} https://www.physics.mun.ca/~lev/GLAC1Dnn4041ANT30kto0k.nc
    """
shell:
    """
    wget -O {output} https://www.physics.mun.ca/~lev/GLAC1Dnn4041ANT120kto30k.nc
    """
shell:
    """
    ncks -O -3 {input.main} {output}_tmp # because ncrename does not work with netcdf4
    ncrename -O -v {params.timevar},time -d {params.timevar},time {output}_tmp
    ncatted -O -a units,time,o,c,"years since 1-1-1" {output}_tmp
    ncatted -O -a calendar,time,o,c,"365_day" {output}_tmp
    ncap2 -O -s 'time=time*1000-1' {output}_tmp {output}_tmp
    cdo -setmissval,0 -chname,HICE,thk -remapbil,{input.grid} -selvar,HICE {output}_tmp {output}
    ncatted -O -a _FillValue,,d,, {output}
    ncatted -O -a missing_value,,d,, {output}
    ncatted -O -a long_name,thk,d,, {output}
    ncatted -O -a source,thk,o,c,"Tarasov GLAC1Dnn{wildcards.glacversion}NAGrB120kto30k" {output}
    ncatted -O -a units,thk,o,c,"m" {output}
    ncatted -O -a standard_name,thk,o,c,"land_ice_thickness" {output}
    rm {output}_tmp
    """
shell:
    """
    # need timmean, because selyear does not accept --reduce_dim
    cdo --reduce_dim -timmean -selyear,{params.year} {input} {output}
    """
shell:
    "python3 workflow/scripts/prepare_oceankillmask.py {input} {output} --remove_himalaya"
set -euo pipefail

atmo0=$1
atmo1=$2
ocean0=$3
ocean1=$4
refheight0=$5
refheight1=$6
index=$7
heatflux=$8
topg=$9
thk=${10}
oceankill=${11}
till_phi=${12}
output=${13}

# convert to netcdf3 because in 4 the renaming of dimensions does not
# work
ncks -O -3 "$atmo0" tmp_atmo0_netcdf3.nc
ncrename -O -v time,time_periodic \
         -v time_bnds,time_bnds_periodic \
         -v air_temp,airtemp_0 \
         -v precipitation,precip_0 \
         -v air_temp_sd,airtempsd_0 \
         -d time,time_periodic \
         tmp_atmo0_netcdf3.nc atmo0_tmp.nc
ncks -O --fix_rec_dmn time_periodic atmo0_tmp.nc atmo0_tmp2.nc

ncks -O -3 "$atmo1" tmp_atmo1_netcdf3.nc
ncrename -O -v time,time_periodic \
         -v time_bnds,time_bnds_periodic \
         -v air_temp,airtemp_1 \
         -v precipitation,precip_1 \
         -v air_temp_sd,airtempsd_1 \
         -d time,time_periodic \
         tmp_atmo1_netcdf3.nc atmo1_tmp.nc
ncks -O --fix_rec_dmn time_periodic atmo1_tmp.nc atmo1_tmp2.nc


ncks atmo0_tmp2.nc "$output"
ncks -A atmo1_tmp2.nc "$output"

rm tmp_atmo0_netcdf3.nc atmo0_tmp.nc atmo0_tmp2.nc
rm tmp_atmo1_netcdf3.nc atmo1_tmp.nc atmo1_tmp2.nc

## reference height
ncrename -O -v referenceHeight,usurf_0 "$refheight0" refheight0_tmp.nc
ncrename -O -v referenceHeight,usurf_1 "$refheight1" refheight1_tmp.nc

ncatted -O -a standard_name,usurf_0,d,, refheight0_tmp.nc
ncatted -O -a standard_name,usurf_1,d,, refheight1_tmp.nc

ncks -A refheight0_tmp.nc "$output"
ncks -A refheight1_tmp.nc "$output"

rm refheight0_tmp.nc
rm refheight1_tmp.nc

# currently no index for the ocean is possible, so just use present day
ncks -3 "$ocean0" tmp_ocean0_netcdf3.nc
ncrename -v time,time_periodic \
         -v time_bnds,time_bnds_periodic \
         -d time,time_periodic \
         tmp_ocean0_netcdf3.nc ocean0_tmp.nc
ncks --fix_rec_dmn time_periodic ocean0_tmp.nc ocean0_tmp2.nc
ncks -A ocean0_tmp2.nc "${output}"

rm tmp_ocean0_netcdf3.nc ocean0_tmp.nc ocean0_tmp2.nc

ncks -A "$index" "$output"

ncks -A "$heatflux" "$output"
ncks -A -v topg  "$topg" "$output"
ncks -A -v thk  "$thk" "$output"
ncks -A  "$oceankill" "$output"

if [ "$till_phi" != "none" ]; then
    ncks -A "$till_phi" "$output"
fi
# make time use time bounds periodic
ncatted -O -a bounds,time_periodic,o,c,"time_bnds_periodic" "$output"

ncatted -a _FillValue,glac_index,d,, "$output"
ncatted -a _FillValue,time_bnds,d,, "$output"

# delete history
ncatted -a history,global,d,, "$output"
ncatted -a history_of_appended_files,global,d,, "$output"
from netCDF4 import Dataset

import numpy as np
import argparse
from scipy.signal import savgol_filter


def read_timeseries(file, variable):

    rootgrp = Dataset(file, "r")
    time = rootgrp.variables['time'][:]
    temp = rootgrp.variables[variable][:]

    rootgrp.close()
    return time, temp


def get_mean(file, variable):
    """
    read climatology and compute total mean
    """
    rootgrp = Dataset(file, "r")
    temp = rootgrp.variables[variable][:]
    mean = np.mean(temp)
    rootgrp.close()
    return mean


def write_delta_var_file(file, time, delta_var, varname):
    rootgrp = Dataset(file, "w")
    nc_time = rootgrp.createDimension("time", None)
    nc_times = rootgrp.createVariable("time", "f4", ("time",))
    nc_times.units = 'months since 1-1-1'
    nc_times.calendar = '360_day'
    nc_delta_T = rootgrp.createVariable(
        varname, "f4", ("time", ))
    if varname == "delta_T":
        nc_delta_T.units = "Kelvin"
    if varname == "delta_P":
        nc_delta_T.units = "kg m-2 yr-1"
    nc_times[:] = time
    nc_delta_T[:] = delta_var
    rootgrp.close()


if __name__ == "__main__":
    parser = argparse.ArgumentParser(
        description="Generate the delta_T or delta_P for PISM from time mean and climatology")
    parser.add_argument("timeseries")
    parser.add_argument("climatology")
    parser.add_argument("output")
    parser.add_argument("variable", help="variable for which to generate delta")
    parser.add_argument("--smoothing", type=float, default=0.0)
    parser.add_argument("--flattenall", action="store_true", help="make a file which is always zero")
    parser.add_argument("--skip", type=int, default=0)

    args = parser.parse_args()
    time, variable = read_timeseries(args.timeseries, args.variable)
    mean = get_mean(args.climatology, args.variable)
    print(f"Mean of climatology is {mean}.")

    delta_var = variable - np.mean(variable)
    time = np.arange(len(delta_var))

    if args.variable == "air_temp":
        varname = "delta_T"
    elif args.variable == "precipitation":
        varname = "delta_P"
    else:
        raise ValueError("only air_temp or precipitation are allowed so far")

    print(varname)

    if args.smoothing != 0:
        # savgol_filter expects an integer window length
        delta_var = savgol_filter(delta_var, int(args.smoothing), 3, mode="wrap")

    if args.flattenall:
        delta_var[:] = 0

    if args.skip != 0:
        delta_var = delta_var[::args.skip]
        time = time[::args.skip]
    write_delta_var_file(args.output, time, delta_var, varname)
import xarray as xr
import numpy as np
import matplotlib.pyplot as plt
import argparse


def generate_fake_index():
    """
    Prepare glacial index file
    """
    secPerYear = 60 * 60 * 24 * 365

    # 23 consecutive one-year intervals
    time_bnds = np.array([[i, i + 1] for i in range(23)])
    time_bnds_secs = time_bnds * secPerYear

    # compute time from bounds
    time = np.mean(time_bnds_secs, axis=1)
    index = (np.sin(time/secPerYear) + 1) / 2

    return index, time, time_bnds_secs


def generate_index_file(index, time, time_bnds, outputfile):

    time_bnds_xr = xr.DataArray(name="time_bnds",
                                data=time_bnds,
                                dims=["time", "bnds"])

    index_xr = xr.DataArray(name="glac_index",
                            data=index,
                            dims=["time"],
                            coords={
                                "time": time,
                            },
                            attrs=dict(
                                long_name="Glacial Index",
                                units="1",
                            ),
                            )

    output = xr.Dataset(
        dict(
            glac_index=index_xr,
            time_bnds=time_bnds_xr,
        )
    )
    output.time.attrs["standard_name"] = "time"
    output.time.attrs["long_name"] = "time"
    output.time.attrs["bounds"] = "time_bnds"
    output.time.attrs["units"] = "seconds since 1-1-1"
    output.time.attrs["calendar"] = "365_day"
    output.time.attrs["axis"] = "T"

    # set the encoding, otherwise we get a _FillValue even on the coordinates
    # and time, and that's not CF compliant.
    encoding = {
        'time': {'dtype': 'float32', 'zlib': False, '_FillValue': None},
        'time_bnds': {'dtype': 'float32', 'zlib': False, '_FillValue': None},
    }
    # default netCDF4 engine; the scipy backend does not accept these encodings
    output.to_netcdf(outputfile, encoding=encoding)


if __name__ == "__main__":
    parser = argparse.ArgumentParser(
        description="Generate glacial index forcing")
    parser.add_argument("outputfile")
    args = parser.parse_args()

    index, time, time_bnds_sec = generate_fake_index()

    generate_index_file(
        index, time, time_bnds_sec,
        args.outputfile)
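A quick numpy-only sanity check of the fake index constructed above (same formula, standalone; `sec_per_year` mirrors `secPerYear` in the script):

```python
import numpy as np

sec_per_year = 60 * 60 * 24 * 365

# 23 one-year bounds in seconds, as in generate_fake_index()
time_bnds = np.array([[i, i + 1] for i in range(23)]) * sec_per_year
time = np.mean(time_bnds, axis=1)              # midpoints of the bounds
index = (np.sin(time / sec_per_year) + 1) / 2  # sine mapped onto [0, 1]

# the glacial index stays normalized by construction
assert index.shape == (23,)
assert index.min() >= 0.0 and index.max() <= 1.0
```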
set -e  # exit on error

echo "# =================================================================================="
echo "# PISM std Greenland example: preprocessing"
echo "# =================================================================================="
echo

# get file; see page http://websrv.cs.umt.edu/isis/index.php/Present_Day_Greenland
DATAVERSION=1.1
DATAURL=http://websrv.cs.umt.edu/isis/images/a/a5/
DATANAME=results/PISM_example/Greenland_5km_v$DATAVERSION.nc

mkdir -p results/PISM_example/

echo "fetching master file ... "
wget -nc -O $DATANAME ${DATAURL}Greenland_5km_v1.1.nc   # -nc is "no clobber"
echo "  ... done."
echo

PISMVERSION=results/PISM_example/pism_Greenland_5km_v1.1.nc
echo -n "creating bootstrappable $PISMVERSION from $DATANAME ... "
# copy the vars we want, and preserve history and global attrs
ncks -O -v mapping,lat,lon,bheatflx,topg,thk,presprcp,smb,airtemp2m $DATANAME $PISMVERSION
# convert from water equivalent thickness rate ("m year-1") to "kg m-2 year-1".
# Assumes water density of 1000.0 kg m-3
ncap2 -O -s "precipitation=presprcp*1000.0" $PISMVERSION $PISMVERSION
ncatted -O -a units,precipitation,m,c,"kg m-2 year-1" $PISMVERSION
ncatted -O -a long_name,precipitation,c,c,"mean annual precipitation rate" $PISMVERSION
# delete incorrect standard_name attribute from bheatflx; there is no known standard_name
ncatted -a standard_name,bheatflx,d,, $PISMVERSION
# use pism-recognized name for 2m air temp
ncrename -O -v airtemp2m,ice_surface_temp $PISMVERSION
ncatted -O -a units,ice_surface_temp,c,c,"Celsius" $PISMVERSION
# use pism-recognized name and standard_name for surface mass balance, after
# converting from liquid water equivalent thickness per year to [kg m-2 year-1]
ncap2 -O -s "climatic_mass_balance=1000.0*smb" $PISMVERSION $PISMVERSION
# Note: The RACMO field smb has value 0 as a missing value, unfortunately,
# everywhere the ice thickness is zero.  Here we replace with 1 m a-1 ablation.
# This is a *choice* of the model of surface mass balance in thk==0 areas.
ncap2 -O -s "where(thk <= 0.0){climatic_mass_balance=-1000.0;}" $PISMVERSION $PISMVERSION
ncatted -O -a standard_name,climatic_mass_balance,m,c,"land_ice_surface_specific_mass_balance_flux" $PISMVERSION
ncatted -O -a units,climatic_mass_balance,m,c,"kg m-2 year-1" $PISMVERSION
# de-clutter by only keeping vars we want
ncks -O -v mapping,lat,lon,bheatflx,topg,thk,precipitation,ice_surface_temp,climatic_mass_balance \
  $PISMVERSION $PISMVERSION
# add projection information
ncatted -O -a proj,global,c,c,"+proj=stere +lat_0=90 +lat_ts=71 +lon_0=-39 +k=1 +x_0=0 +y_0=0 +ellps=WGS84 +towgs84=0,0,0,0,0,0,0 +units=m +no_defs" $PISMVERSION
ncap2 -A \
      -s 'land_ice_area_fraction_retreat=0 * thk' \
      -s 'where(thk > 0 || topg > 0) land_ice_area_fraction_retreat=1' \
      -s 'land_ice_area_fraction_retreat@units="1"' \
      $PISMVERSION $PISMVERSION
ncatted -a standard_name,land_ice_area_fraction_retreat,d,, $PISMVERSION

ncatted -O -a calendar,time,c,c,"365_day" $PISMVERSION

echo "done."
echo

# extract paleo-climate time series into files suitable for option
# -atmosphere ...,delta_T
TEMPSERIES=results/PISM_example/pism_dT.nc
echo -n "creating paleo-temperature file $TEMPSERIES from $DATANAME ... "
ncks -O -v oisotopestimes,temp_time_series $DATANAME $TEMPSERIES
ncrename -O -d oisotopestimes,time      $TEMPSERIES
ncrename -O -v temp_time_series,delta_T $TEMPSERIES
ncrename -O -v oisotopestimes,time      $TEMPSERIES
# reverse time dimension
ncpdq -O --rdr=-time $TEMPSERIES $TEMPSERIES
# make times follow same convention as PISM
ncap2 -O -s "time=-time" $TEMPSERIES $TEMPSERIES
ncatted -O -a units,time,m,c,"years since 1-1-1" $TEMPSERIES
ncatted -O -a calendar,time,c,c,"365_day" $TEMPSERIES
ncatted -O -a units,delta_T,m,c,"Kelvin" $TEMPSERIES
echo "done."
echo

# extract paleo-climate time series into files suitable for option
# -sea_level ...,delta_SL
SLSERIES=results/PISM_example/pism_dSL.nc
echo -n "creating paleo-sea-level file $SLSERIES from $DATANAME ... "
ncks -O -v sealeveltimes,sealevel_time_series $DATANAME $SLSERIES
ncrename -O -d sealeveltimes,time $SLSERIES
ncrename -O -v sealeveltimes,time $SLSERIES
ncrename -O -v sealevel_time_series,delta_SL $SLSERIES
# reverse time dimension
ncpdq -O --rdr=-time $SLSERIES $SLSERIES
# make times follow same convention as PISM
ncap2 -O -s "time=-time" $SLSERIES $SLSERIES
ncatted -O -a units,time,m,c,"years since 1-1-1" $SLSERIES
ncatted -O -a calendar,time,c,c,"365_day" $SLSERIES
echo "done."
echo
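The `ncap2` steps above scale water-equivalent rates by an assumed water density of 1000 kg m-3 and replace the RACMO missing-value zeros in ice-free cells with 1 m a-1 of ablation. A numpy sketch of the same logic on toy values (not the actual RACMO field):

```python
import numpy as np

water_density = 1000.0                    # kg m-3, as assumed in the script
smb = np.array([0.0, 0.3, -0.1])          # m a-1 water equivalent (toy values)
thk = np.array([0.0, 500.0, 120.0])       # ice thickness in m (toy values)

# climatic_mass_balance = 1000.0 * smb  -> kg m-2 year-1
climatic_mass_balance = water_density * smb

# where(thk <= 0.0): replace the smb==0 missing values with -1000 kg m-2 year-1,
# i.e. 1 m a-1 of ablation in ice-free cells
climatic_mass_balance[thk <= 0.0] = -1000.0
```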
set -e
# set -x


if [[ $# -ne 3 ]] ; then
    echo "usage: $0 <template_grid> <shapefile> <output_netcdf>"
    exit 1
fi

template=$1
shapefile=$2
ncout=$3

reg_XY=$(gmt grdinfo -I- ${template})
dx=$(gmt grdinfo -Cn -o7 ${template})


# bamber="+proj=stere +lat_0=90 +lat_ts=71 +lon_0=-39 +k=1 +x_0=0 +y_0=0 +ellps=WGS84 +datum=WGS84 +units=m +no_defs"
# epsg3413="+proj=stere +lat_0=90 +lat_ts=70 +lon_0=-45 +k=1 +x_0=0 +y_0=0 +ellps=WGS84 +datum=WGS84 +units=m +no_defs"
projNHEM="+ellps=WGS84 +datum=WGS84 +lat_ts=71.0 +proj=stere +x_0=0.0 +units=m +lon_0=-44.0 +lat_0=90.0"


ogr2ogr -f "GMT" ${ncout}.temp.gmt ${shapefile} -t_srs "$projNHEM"


nPolygons=$(cat ${ncout}.temp.gmt |grep -c "@P")
echo "Found $nPolygons polygons."

polyStrings=( "other" )
polyNumbers=( 0 )

for i in $(seq 1 $nPolygons)
do
    sed -n "/D${i}/,/>/p" ${ncout}.temp.gmt |\
        gmt grdmask $reg_XY \
                -I${dx} -Gpoly${i}.nc -Np${i}

    # extract polygon name:
    pName=$(sed -n "/D${i}/,/#/p" ${ncout}.temp.gmt | head -n1 | cut -d '|' -f 2)
    polyStrings+=("${pName}")
    polyNumbers+=("${i}")
    echo $i $pName
done

# function to join array values
function join_by { local IFS="$1"; shift; echo "$*"; }
polydoc=$(join_by , "${polyStrings[@]}")
polynums=$(join_by , "${polyNumbers[@]}")


# combine all polygon masks into one file; loop instead of hard-coding
# poly1 + poly2 + poly3 so any number of polygons works
cp poly1.nc ${ncout}
for i in $(seq 2 $nPolygons)
do
    gmt grdmath ${ncout} poly${i}.nc ADD = ${ncout}.tmp.nc
    mv ${ncout}.tmp.nc ${ncout}
done

# remove single files
for i in $(seq 1 $nPolygons)
do
    rm -f poly${i}.nc
done

# metadata
ncrename -v z,polygon $ncout

ncatted -O -a title,global,o,c,"NHEM regions" -h $ncout
ncatted -O -a flag_meanings,polygon,c,c,"$polydoc" -h $ncout
ncatted -O -a flag_values,polygon,c,s,"$polynums" -h $ncout
ncatted -O -a projection,global,c,c,"$projNHEM" -h $ncout


# clean up
rm -f gmt.history
rm -f ${ncout}.temp.gmt
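The `join_by` helper builds the comma-separated `flag_meanings` and `flag_values` attribute strings; the same bookkeeping in Python, with hypothetical region names instead of the shapefile's:

```python
# hypothetical region names; the real ones come from the shapefile
poly_names = ["other", "Laurentide", "Fennoscandia"]
poly_numbers = range(len(poly_names))

flag_meanings = ",".join(poly_names)
flag_values = ",".join(str(n) for n in poly_numbers)

print(flag_meanings)  # other,Laurentide,Fennoscandia
print(flag_values)    # 0,1,2
```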
import matplotlib.pyplot as plt
import hyoga

import argparse

parser = argparse.ArgumentParser(description='Plot ice data.')
parser.add_argument('inputfile')
parser.add_argument('outputfile')
parser.add_argument('--timestep', type=int, default=0)

args = parser.parse_args()


fig, axes = plt.subplots()  # basis for the plot

gdf = hyoga.open.paleoglaciers('bat19').to_crs("+ellps=WGS84 +datum=WGS84 +lat_ts=71.0 +proj=stere +x_0=0.0 +units=m +lon_0=-44.0 +lat_0=90.0")

# plot example data
#with hyoga.open.dataset('./gi_cold_mprange_clemdyn_-40000_-35000.nc') as ds:
# with hyoga.open.dataset('./ex_gi_cold_mprange_clemdyn_-35000_-30000.nc') as dsall:
with hyoga.open.dataset(args.inputfile) as dsall:
    cond = (dsall.x < 3.5e6) & (dsall.y < 2.5e6)
    dssmall = dsall.where(cond, drop=True)
    # dssmall = dsall

    ds = dssmall.isel(age=int(args.timestep))
    ds.hyoga.plot.bedrock_altitude(center=False, ax=axes)
    # margin = ds.hyoga.plot.ice_margin(facecolor='white',zorder=-10)
    margin = ds.hyoga.plot.ice_margin(facecolor='white',)
    ds.hyoga.plot.surface_velocity(vmin=1e1, vmax=1e3, ax=axes)
    cont = ds.hyoga.plot.surface_altitude_contours(ax=axes)
    ds.hyoga.plot.surface_velocity_streamplot(
        cmap='Blues', vmin=1e1, vmax=1e3, density=(7, 7))


    # paleo glacier extent
    gdf.plot(ax=axes, alpha=0.2, zorder=0)


    # ds.hyoga.plot.scale_bar()
    axes.text(0.95, 0.02, '{} years ago'.format(ds.age.data),
        verticalalignment='bottom', horizontalalignment='right',
        transform=axes.transAxes,
        color='black', fontsize=11)

    axes.set_xlim(-6.24e6, 3.5e6)
    axes.set_ylim(-6.24e6, 2.5e6)

    plt.savefig(args.outputfile, dpi=200)

    # plt.show()



    # ds.hyoga.plot.ice_margin(facecolor='tab:blue')
    # ds.hyoga.plot.surface_hillshade()
    # ds.hyoga.plot.surface_velocity_streamplot(
    #     cmap='Blues', vmin=1e0, vmax=1e3, density=(15, 15))
    # ds.hyoga.plot.natural_earth()
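The coordinate crop above keeps everything with x < 3.5e6 and y < 2.5e6; on plain numpy arrays the same selection is boolean masking (toy coordinates, not the model grid):

```python
import numpy as np

# toy projected coordinates in metres
x = np.linspace(-6e6, 6e6, 7)
y = np.linspace(-6e6, 6e6, 7)

x_small = x[x < 3.5e6]  # analogous to dsall.where(cond, drop=True) along x
y_small = y[y < 2.5e6]

assert x_small.max() < 3.5e6
assert y_small.max() < 2.5e6
```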
import numpy as np
from netCDF4 import Dataset
import netCDF4 as nc
import argparse
import matplotlib.pyplot as plt
import matplotlib


params = {
    # 'text.latex.preamble': ['\\usepackage{gensymb}'],
    'image.origin': 'lower',
    'image.interpolation': 'nearest',
    'image.cmap': 'gray',
    'axes.grid': False,
    'savefig.dpi': 150,  # to adjust notebook inline plot size
    'axes.labelsize': 8,  # fontsize for x and y labels (was 10)
    'axes.titlesize': 8,
    'font.size': 8,  # was 10
    'legend.fontsize': 6,  # was 10
    'xtick.labelsize': 8,
    'ytick.labelsize': 8,
    # 'text.usetex': True,
    'figure.figsize': [3.39, 2.10],
    'font.family': 'serif',
}
matplotlib.rcParams.update(params)

parser = argparse.ArgumentParser(
    description='',)

parser.add_argument('pism_ts_file')
parser.add_argument('output')
parser.add_argument('--observed', help="observed sea level change, e.g. Imbrie 2006", default="none")
parser.add_argument('--usetex', action="store_true", help="use latex to improve some font rendering")
args = parser.parse_args()

if args.usetex:
    # update rcParams directly; changing the params dict after the
    # rcParams.update above would have no effect
    matplotlib.rcParams['text.usetex'] = True
    matplotlib.rcParams['text.latex.preamble'] = '\\usepackage{gensymb}'

rootgrp = nc.MFDataset(args.pism_ts_file, "r")
time = rootgrp.variables['time'][:]
volume = rootgrp.variables['ice_volume'][:]
sealevel_pism = rootgrp.variables['sea_level_rise_potential'][:]
rootgrp.close()

    # "/home/sebastian/palmod/datasets/automaticIceData/input/sealevel/pism_dSL_Imbrie2006.nc", "r")
if args.observed != "none":
    rootgrp = Dataset(args.observed, "r")
    time_imbrie = rootgrp.variables['time'][:]
    sealevel_imbrie = rootgrp.variables['delta_SL'][:]
    rootgrp.close()
    time_imbrie = time_imbrie / 1000

# time_imbrie = time_imbrie * 60 * 60 * 24 * 365
time = time / (60 * 60 * 24 * 365) / 1000

plt.figure(figsize=(5, 2.5))
plt.plot(time, -sealevel_pism, label="PISM", color="darkorange")
if args.observed != "none":
    plt.plot(time_imbrie, sealevel_imbrie, label="Imbrie 2006", color="grey")
plt.xlabel("time (ka)")
plt.ylabel("sea level (m)")
plt.legend(frameon=False)

plt.gca().spines["top"].set_alpha(0.0)
plt.gca().spines["bottom"].set_alpha(0.3)
plt.gca().spines["right"].set_alpha(0.0)
plt.gca().spines["left"].set_alpha(0.3)

plt.tight_layout()
plt.savefig(args.output, dpi=300)
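PISM time series store time in seconds; the division above maps it to thousands of years (ka). A one-liner check with a toy value:

```python
sec_per_year = 60 * 60 * 24 * 365

time_seconds = -10000 * sec_per_year       # 10 ka before present (toy value)
time_ka = time_seconds / sec_per_year / 1000

print(time_ka)  # -10.0
```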
from tqdm import tqdm
from typing import List
from dataclasses import dataclass
import argparse
import matplotlib.pyplot as plt
import numpy as np
import matplotlib

from netCDF4 import Dataset

params = {
    'text.latex.preamble': '\\usepackage{gensymb}',
    'image.origin': 'lower',
    'image.interpolation': 'nearest',
    'image.cmap': 'gray',
    'axes.grid': False,
    'savefig.dpi': 150,  # to adjust notebook inline plot size
    'axes.labelsize': 8,  # fontsize for x and y labels (was 10)
    'axes.titlesize': 8,
    'font.size': 8,  # was 10
    'legend.fontsize': 6,  # was 10
    'xtick.labelsize': 8,
    'ytick.labelsize': 8,
    'text.usetex': True,
    'figure.figsize': [3.39, 2.10],
    'font.family': 'serif',
}
matplotlib.rcParams.update(params)


parser = argparse.ArgumentParser()
parser.add_argument('netcdf', type=str)
parser.add_argument('regionfile',
                    type=str)
parser.add_argument('outfile',
                    type=str)
args = parser.parse_args()

ncName = args.netcdf
expName = ncName.split('/')[-1]


print("loading file...")
data = Dataset(ncName, mode='r')
x = data.variables['x'][:]
y = data.variables['y'][:]
time = data.variables['time'][:]

# convert to years
time = time / (60*60*24*365) / 1000

thk = data.variables['thk'][:]
# todo: fix this as soon as i have yield output
basal_yield = data.variables['tauc'][:] / 1000 #kPa
velocity = data.variables['velsurf_mag'][:]
tillwat = data.variables['tillwat'][:]
# acc = data.variables['surface_accumulation_flux'][:]
# run = data.variables['surface_runoff_flux'][:]
maskPISM = data.variables['mask'][:]
data.close()
print("loaded experiment file")

startpoint = 100
stoppoint = 350
time = time[startpoint:stoppoint]
thk = thk[startpoint:stoppoint, :, :]
basal_yield = basal_yield[startpoint:stoppoint, :, :]
velocity = velocity[startpoint:stoppoint, :, :]
tillwat = tillwat[startpoint:stoppoint, :, :]
maskPISM = maskPISM[startpoint:stoppoint, :, :]

dx = x[1] - x[0]
nTimesteps = thk.shape[0]

# print(thk.shape)

# load region file:
data_regions = Dataset(args.regionfile, mode='r')
regions = data_regions.variables['polygon'][:]
regi = data_regions.variables['polygon']
regionNames = regi.flag_meanings.split(",")
data_regions.close()

print("loaded region file")

print(regionNames)
# nRegions = len(regionNames)
nRegions = 1


def sea_level_rise_potential(volume):
    # volume = ice_volume_not_displacing_seawater(geometry, thickness_threshold)
    water_density = 1000   # kg m-3
    ice_density = 910      # kg m-3
    ocean_area = 362.5e12  # m^2, cogley2010

    additional_water_volume = (ice_density / water_density) * volume
    sea_level_change = additional_water_volume / ocean_area
    return sea_level_change

################################


@dataclass
class RegionData:
    name: str
    id: int
    icemass: List
    icethickness: List
    icevolume: List
    icevelocity: List
    basal_yield: List
    tillwat: List
    mask: np.ndarray
    surfaceArea: float


allRegions = []
mask = regions == 2
surfaceArea = np.sum(mask) * dx**2 * 1e-6  # in km^2
allRegions.append(
    RegionData(
        name=regionNames[2],
        id=2,
        icemass=[],
        icevolume=[],
        icethickness=[],
        icevelocity=[],
        basal_yield=[],
        tillwat=[],
        mask=mask,
        surfaceArea=surfaceArea,
    )
)

# print(allRegions)

for t in tqdm(range(nTimesteps)):
    for i in range(nRegions):
        # print(i)
        currentRegion = allRegions[i]
        mask = currentRegion.mask
        current_iceMask = maskPISM[t, :, :] != 2
        # region_thk = mask*thk
        current_thk = thk[t, :, :]*mask
        current_basal_yield = basal_yield[t, :, :]*mask
        current_velocity = velocity[t, :, :]*mask
        current_tillwat = tillwat[t, :, :]*mask
        # print(np.sum(current_thk))
        volume = np.sum(current_thk) * dx**2
        currentRegion.icevolume.append(volume)
        mass = np.sum(current_thk) * dx**2 * 910/1e12
        currentRegion.icemass.append(mass)

        # mask with pism mask to get proper means
        current_thk = np.ma.masked_array(current_thk, mask=current_iceMask)
        current_tillwat = np.ma.masked_array(
            current_tillwat, mask=current_iceMask)
        #
        currentRegion.icethickness.append(current_thk.mean())
        currentRegion.tillwat.append(current_tillwat.mean())
        currentRegion.basal_yield.append(np.mean(current_basal_yield))
        currentRegion.icevelocity.append(np.mean(current_velocity))

        # allRegions[i].icemass.append(
        #     np.sum(region_thk[i, :, :]) * dx * dx * 910 / 1e12)
        # iceVolume.append(np.sum(region_thk[i, :, :]) * dx * dx)


# iceVolume = np.array(iceVolume) / 1e15
#
#

fig, (ax1, ax2, ax3, ax4) = plt.subplots(4, 1, sharex=True, figsize=(6, 6))

# adjust time to start from 0
time = time+50

# points to indicate phases
surgeindex = 202
surgex = time[surgeindex]
buildupindex = 178
buildupx = time[buildupindex]
stabindex = 206
stabx = time[stabindex]

for ax in [ax1, ax2, ax3, ax4]:
    ax.axvspan(24, 31, alpha=0.2, color='black')
    ax.axvline(x=buildupx, color="gray", linewidth=0.8)
    ax.axvline(x=surgex, color="firebrick", linewidth=0.8)
    ax.axvline(x=stabx, color="royalblue", linewidth=0.8)

for i in range(nRegions):
    currentRegion = allRegions[i]
    mass = currentRegion.icemass
    volume = currentRegion.icevolume
    velocity = currentRegion.icevelocity
    yield_stress = currentRegion.basal_yield
    thickness = currentRegion.icethickness
    tillwat = currentRegion.tillwat

    sealevel = sea_level_rise_potential(np.array(volume))
    surgey = thickness[surgeindex]
    buildupy = thickness[buildupindex]
    staby = thickness[stabindex]

    ax1.plot(time, thickness, label=currentRegion.name, color='k', lw=0.6)
    ax2.plot(time, tillwat, label=currentRegion.name, color='k', lw=0.6)
    ax3.plot(time, yield_stress, label=currentRegion.name, color='k', lw=0.6)
    ax4.plot(time, velocity, label=currentRegion.name, color='k', lw=0.6)

    ax1.plot(surgex, surgey, ls="", marker="o",
             label="points", color='firebrick', markersize=3)
    ax1.plot(buildupx, buildupy, ls="", marker="o",
             label="points", color='gray', markersize=3)
    ax1.plot(stabx, staby, ls="", marker="o",
             label="points", color='royalblue', markersize=3)

ax1.set_ylabel("Ice thickness(m)")
ax2.set_ylabel("Till water(m)")
ax3.set_ylabel("Basal yield stress\n (kPa)")
ax4.set_ylabel("Surface velocity\n (m/yr)")

ax4.set_xlabel("Time in ka")
# ax2.plot(trendline, label='trend')
#


# ax1.axhline(y=2.93)


plt.savefig(args.outfile, dpi=300)
plt.show()
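The `sea_level_rise_potential` constants above imply roughly 7.3 m of sea level for a Greenland-scale ice volume; a standalone check with a toy volume of 2.9e15 m^3:

```python
def sea_level_rise_potential(volume):
    # same constants as in the script
    water_density = 1000.0  # kg m-3
    ice_density = 910.0     # kg m-3
    ocean_area = 362.5e12   # m^2, cogley2010

    return (ice_density / water_density) * volume / ocean_area

rise = sea_level_rise_potential(2.9e15)  # toy ice volume in m^3
print(round(rise, 2))  # 7.28
```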
import socket
from pathlib import Path
import tempfile
import os
import getpass
import numpy as np
from netCDF4 import Dataset
import argparse
import shutil

from nco import Nco
from nco import custom as c
from cdo import Cdo
nco = Nco()

# todo: make this a config thing!
# on k19 I should use /work/sbeyer/tmp for tmp files
if socket.gethostname() == "k19":
    tempdir = "/work/"+getpass.getuser() + "/tmp"
    cdo = Cdo(tempdir=tempdir)  # python
else:
    cdo = Cdo()
    # cdo.debug = True


def set_climatology_time(file):
    """
    For climatology forcing the time needs to start from 0
    @TODO: use with for dataset
    @TODO: middle of month?
    """

    rootgrp = Dataset(file, "r+")
    if "nv" not in rootgrp.dimensions:
        print("creating nv dimension for time bounds")
        rootgrp.createDimension("nv", 2)

    nc_time = rootgrp.variables['time']
    nc_time[:] = [15, 45, 75, 105, 135, 165, 195, 225, 255, 285, 315, 345]

    if "time_bnds" not in rootgrp.variables:
        print("creating time bounds var")
        nc_time_bounds = rootgrp.createVariable(
            "time_bnds", "f8", ("time", "nv"))
    nc_time_bounds = rootgrp.variables['time_bnds']
    nc_time_bounds[:] = [[0, 30], [30, 60], [60, 90], [90, 120], [120, 150], [150, 180],
                         [180, 210], [210, 240], [240, 270], [270, 300], [300, 330], [330, 360]]
    print("done")
    nc_time.bounds = 'time_bnds'
    nc_time.units = 'days since 1-1-1'
    nc_time.calendar = '360_day'

    rootgrp.close()


def extract_variables(inputfile, variables):
    """
    pynco thinks that ncks is a SingleFileOperator and therefore
    the temporary output does not work there. This is a workaround for that.
    And also a wrapper for extracting variables.
    It returns the path of the resulting temp file
    @TODO: does it work with single vars?
    """
    # create a temporary file, use this as the output
    file_name_prefix = "nco_" + inputfile.split(os.sep)[-1]
    tmp_file = tempfile.NamedTemporaryFile(
        mode="w+b", prefix=file_name_prefix, suffix=".tmp", delete=False
    )
    output = tmp_file.name
    # cmd.append("--output={0}".format(output))

    options = "-v " + ','.join(variables)
    nco.ncks(input=inputfile, output=output, options=[
        options])
    return output


def remap(cdostring, gridfile, input, options, interpolationMethod):
    if interpolationMethod == "bilinear":
        remapped = cdo.remapbil(
            cdostring.format(gridfile), input=input, options=options)
    elif interpolationMethod == "conservative":
        remapped = cdo.remapycon(
            "{} ".format(gridfile), input=input, options=options)
    else:
        raise ValueError(
            'interpolationMethod {} unknown. Stopping.'.format(interpolationMethod))
    return remapped


def prepare_CESM_atmo(gridfile, inputfile, stddevfile, outputfile, outputfile_ref_Height, interpolationMethod, compute_yearmean):
    """
    Prepare CESM for use with PISM
    """
    # create a temporary file, use this as the output
    file_name_prefix = "cdoCESMatmo_" + inputfile.split(os.sep)[-1]
    tmp_file = tempfile.NamedTemporaryFile(
        mode="w+b", prefix=file_name_prefix, suffix=".tmp", delete=False
    )
    tmpfile = tmp_file.name

    extract_vars = ["PRECL", "PRECC", "TREFHT", "PHIS", "lat", "lon"]
    reduced_cesm = extract_variables(inputfile, extract_vars)

    remapped = remap("{} ", gridfile, reduced_cesm,
                     "-f nc4c", interpolationMethod)

    precipitation = cdo.expr(
        "'precipitation=(PRECL+PRECC)*60*60*24*365*1000'", input=remapped)  # convert to PISM units

    opt = [
        c.Atted(mode="o", att_name="units",
                var_name="precipitation", value="kg m-2 yr-1"),
        c.Atted(mode="o", att_name="long_name",
                var_name="precipitation", value="mean monthly precipitation rate"),
    ]
    nco.ncatted(input=precipitation, options=opt)
    nco.ncks(input=precipitation,
             output=tmpfile)

    # temperature
    temperature = cdo.expr(
        "'air_temp=TREFHT'", input=remapped)  #

    opt = [
        c.Atted(mode="o", att_name="units",
                var_name="air_temp", value="Kelvin"),
        c.Atted(mode="o", att_name="standard_name",
                var_name="air_temp", value="air_temp"),
    ]
    nco.ncatted(input=temperature, options=opt)

    # this should fix the recurring issues with segfaults when assembling the
    # model file. I have no idea why this happens, but this might fix it
    # I'm sorry, future Sebi!!
    reduced_temp = extract_variables(temperature, ["air_temp"])
    nco.ncks(input=reduced_temp,
             output=tmpfile, append=True)

    # standard deviation
    stddevremapped = remap("{} ", gridfile, stddevfile,
                           "-f nc4c", interpolationMethod)

    temperature_stddev = cdo.expr(
        "'air_temp_sd=TREFHT'", input=stddevremapped)  #

    opt = [
        c.Atted(mode="o", att_name="units",
                var_name="air_temp_sd", value="Kelvin"),
        c.Atted(mode="o", att_name="long_name",
                var_name="air_temp_sd", value="air temperature standard deviation"),
    ]
    nco.ncatted(input=temperature_stddev, options=opt)
    nco.ncks(input=temperature_stddev,
             output=tmpfile, append=True)
    set_climatology_time(tmpfile)

    if compute_yearmean:
        annualmean = cdo.yearmean(input=tmpfile)
        nco.ncks(input=annualmean,
                 output=outputfile)

    else:
        shutil.move(tmpfile, outputfile)

    # reference height
    ref_height = cdo.expr(
        "'referenceHeight=PHIS/9.81'", input=remapped)
    opt = [
        c.Atted(mode="o", att_name="units",
                var_name="referenceHeight", value="m"),
        c.Atted(mode="o", att_name="standard_name",
                var_name="referenceHeight", value="surface_altitude"),
    ]
    nco.ncatted(input=ref_height, options=opt)
    cdo.timmean(input=ref_height, output=outputfile_ref_Height,
                options='--reduce_dim')


if __name__ == "__main__":
    parser = argparse.ArgumentParser(
        description="Generate CESM atmo file on icemodel grid")
    parser.add_argument("gridfile")
    parser.add_argument("inputfile")
    parser.add_argument("stddevfile")
    parser.add_argument("outputfile")
    parser.add_argument("outputfile_ref_Height")
    parser.add_argument("--interpolationMethod", default="bilinear",
                        choices=["bilinear", "conservative"])
    parser.add_argument("--yearmean", action="store_true",
                        help="compute annual mean of data")
    args = parser.parse_args()
    prepare_CESM_atmo(args.gridfile,
                      args.inputfile,
                      args.stddevfile,
                      args.outputfile,
                      args.outputfile_ref_Height,
                      args.interpolationMethod,
                      args.yearmean)
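The `cdo.expr` precipitation step multiplies the CESM rate (PRECL + PRECC, in m s-1) by seconds per year and water density to reach PISM's kg m-2 yr-1; the arithmetic, isolated with toy rates:

```python
sec_per_year = 60 * 60 * 24 * 365          # as in the cdo expression
water_density = 1000.0                     # kg m-3

precl, precc = 1e-8, 5e-9                  # m s-1 (toy CESM values)
precipitation = (precl + precc) * sec_per_year * water_density

print(round(precipitation, 1))  # 473.0 kg m-2 yr-1
```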
from gridfill import fill
import socket
from pathlib import Path
import tempfile
import os
import getpass
import numpy as np
from netCDF4 import Dataset
import argparse
import sys

from nco import Nco
from nco import custom as c
from cdo import Cdo
nco = Nco()

# todo: make this a config thing!
# on k19 I should use /work/sbeyer/tmp for tmp files
if socket.gethostname() == "k19":
    tempdir = "/work/"+getpass.getuser() + "/tmp"
    cdo = Cdo(tempdir=tempdir)  # python
else:
    cdo = Cdo()
    # cdo.debug = True


def fill_NaN(file, variable, setcornerstomean=False):
    """
    Fill NaN values with extrapolated values.
    Later this should use a better extrapolation, maybe as in ISMIP6
    https://zenodo.org/record/3997257
    @TODO: use with for dataset
    """
    # from gridfill import fill
    kw = dict(eps=1e-4,
              relax=0.6,
              itermax=1e3,
              initzonal=False,
              cyclic=False,
              verbose=True)

    rootgrp = Dataset(file, "r+")

    nc_var = rootgrp.variables[variable]

    if nc_var.ndim == 3:
        for step in range(nc_var.shape[0]):
            current = nc_var[step, :, :]
            if setcornerstomean:
                # set corner values to mean to avoid extrapolating into too low values
                mean = np.mean(current)
                current[0, 0] = mean
                current[-1, -1] = mean
                current[0, -1] = mean
                current[-1, 0] = mean

            filled, converged = fill(current, 1, 0, **kw)

            nc_var[step, :, :] = filled
    elif nc_var.ndim == 2:
        current = nc_var[:]
        filled, converged = fill(current, 1, 0, **kw)
        nc_var[:] = filled
    else:
        print("variable has to be 2d or 3d!")
        sys.exit()

    rootgrp.close()


def set_climatology_time(file):
    """
    For climatology forcing the time needs to start from 0
    @TODO: use with for dataset
    @TODO: middle of month?
    """

    rootgrp = Dataset(file, "r+")
    if "nv" not in rootgrp.dimensions:
        print("creating nv dimension for time bounds")
        rootgrp.createDimension("nv", 2)

    nc_time = rootgrp.variables['time']
    nc_time[:] = [15, 45, 75, 105, 135, 165, 195, 225, 255, 285, 315, 345]

    if "time_bnds" not in rootgrp.variables:
        print("creating time bounds var")
        nc_time_bounds = rootgrp.createVariable(
            "time_bnds", "f8", ("time", "nv"))
    nc_time_bounds = rootgrp.variables['time_bnds']
    nc_time_bounds[:] = [[0, 30], [30, 60], [60, 90], [90, 120], [120, 150], [150, 180],
                         [180, 210], [210, 240], [240, 270], [270, 300], [300, 330], [330, 360]]
    print("done")
    nc_time.bounds = 'time_bnds'
    nc_time.units = 'days since 1-1-1'
    nc_time.calendar = '360_day'

    rootgrp.close()


def extract_variables(inputfile, variables):
    """
    pynco thinks that ncks is a SingleFileOperator and therefore
    the temporary output does not work there. This is a workaround for that.
    And also a wrapper for extracting variables.
    It returns the path of the resulting temp file
    @TODO: does it work with single vars?
    """
    # create a temporary file, use this as the output
    file_name_prefix = "nco_" + inputfile.split(os.sep)[-1]
    tmp_file = tempfile.NamedTemporaryFile(
        mode="w+b", prefix=file_name_prefix, suffix=".tmp", delete=False
    )
    output = tmp_file.name
    # cmd.append("--output={0}".format(output))

    options = "-v " + ','.join(variables)
    nco.ncks(input=inputfile, output=output, options=[
        options])
    return output


def remap(cdostring, gridfile, input, options, interpolationMethod):
    if interpolationMethod == "bilinear":
        remapped = cdo.remapbil(
            cdostring.format(gridfile), input=input, options=options)
    elif interpolationMethod == "conservative":
        remapped = cdo.remapycon(
            cdostring.format(gridfile), input=input, options=options)
    else:
        raise ValueError(
            'interpolationMethod {} unknown. Stopping.'.format(interpolationMethod))
    return remapped


def prepare_CESM_ocean(gridfile, inputfile, outputfile, interpolationMethod, compute_yearmean):

    # these were found using cdo showlevel, there is probably a better way
    # @TODO
    # also, a comment from Jorge: better to use the upper 400 m or 200-1000 m because that
    # is where shelfs are at around Antarctica
    upper200mlevels = "500,1500,2500,3500,4500,5500,6500,7500,8500,9500,10500,11500,12500,13500,14500,15500,16509.8398,17547.9043,18629.127,19766.0273"
    upper = cdo.vertmean(
        input="-setattribute,theta_ocean@units='Celsius' -chname,TEMP,theta_ocean -chname,SALT,salinity_ocean -sellevel,{} {}".format(upper200mlevels, inputfile))

    # using negative indexing to get lowest level
    # bottom_salt = cdo.sellevidx(-1, input=remapped)
    # ncks -d depth,-1 in.nc out.nc
    # opt = [
    #     c.LimitSingle("z_t", -1)
    # ]
    # # @TODO make wrapper for this tempfile stuff as well, I guess
    # nco.ncks(input=inputfile, output="bottom.nc", options=opt)

    remapped = remap("{} ", gridfile, upper,
                     "-f nc4c", interpolationMethod)

    opt = [
        c.Atted(mode="o", att_name="standard_name",
                var_name="theta_ocean", value="theta_ocean"),
        c.Atted(mode="o", att_name="long_name",
                var_name="theta_ocean", value="potential temperature of the adjacent ocean"),
        c.Atted(mode="o", att_name="standard_name",
                var_name="salinity_ocean", value="salinity_ocean"),
        c.Atted(mode="o", att_name="long_name",
                var_name="salinity_ocean", value="ocean salinity"),
    ]
    nco.ncatted(input=remapped, options=opt)

    fill_NaN(remapped, "theta_ocean")
    fill_NaN(remapped, "salinity_ocean")

    set_climatology_time(remapped)
    if compute_yearmean:
        annualmean = cdo.yearmean(input=remapped)
        nco.ncks(input=annualmean, output=outputfile)
    else:
        nco.ncks(input=remapped, output=outputfile)


if __name__ == "__main__":
    parser = argparse.ArgumentParser(
        description="Generate CESM ocean file on icemodel grid")
    parser.add_argument("gridfile")
    parser.add_argument("inputfile")
    parser.add_argument("outputfile")
    parser.add_argument("--interpolationMethod", default="bilinear",
                        choices=["bilinear", "conservative"])
    parser.add_argument("--yearmean", action="store_true",
                        help="compute annual mean of data")
    args = parser.parse_args()
    prepare_CESM_ocean(args.gridfile,
                       args.inputfile,
                       args.outputfile,
                       args.interpolationMethod,
                       args.yearmean)
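The magic numbers in `upper200mlevels` are CESM/POP `z_t` layer midpoints in centimetres (an assumption about the input file, matching the POP convention); converted to metres they span the upper ~200 m, consistent with the variable name. A quick sanity check:

```python
# CESM/POP z_t layer midpoints selected in prepare_CESM_ocean, in centimetres
# (assumed unit; this matches the POP convention)
levels_cm = [500, 1500, 2500, 3500, 4500, 5500, 6500, 7500, 8500, 9500,
             10500, 11500, 12500, 13500, 14500, 15500,
             16509.8398, 17547.9043, 18629.127, 19766.0273]

levels_m = [z / 100.0 for z in levels_cm]  # cm -> m
print(min(levels_m), max(levels_m))  # 5.0 ... ~197.7
```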
import socket
from pathlib import Path
import tempfile
import os
import getpass
import numpy as np
from netCDF4 import Dataset
import argparse
import shutil

from nco import Nco
from nco import custom as c
from cdo import Cdo
nco = Nco()

# todo: make this a config thing!
# on k19 I should use /work/sbeyer/tmp for tmp files
if socket.gethostname() == "k19":
    tempdir = "/work/"+getpass.getuser() + "/tmp"
    cdo = Cdo(tempdir=tempdir)  # python
else:
    cdo = Cdo()
    # cdo.debug = True


def remap(cdostring, gridfile, input, options, interpolationMethod):
    if interpolationMethod == "bilinear":
        remapped = cdo.remapbil(
            cdostring.format(gridfile), input=input, options=options)
    elif interpolationMethod == "conservative":
        remapped = cdo.remapycon(
            cdostring.format(gridfile), input=input, options=options)
    else:
        raise ValueError(
            'interpolationMethod {} unknown. Stopping.'.format(interpolationMethod))
    return remapped


def prepare_ETOPO1(gridfile, input_bed, input_usurf, outputfile, interpolationMethod):

    remapped_bed = remap("{} ", gridfile, input_bed,
                         "-f nc4c", interpolationMethod)

    rDict = {
        'z': 'topg',
    }
    nco.ncrename(input=remapped_bed, options=[c.Rename("variable", rDict)])

    opt = [
        c.Atted(mode="o", att_name="units",
                var_name="topg", value="m"),
        c.Atted(mode="o", att_name="standard_name",
                var_name="topg", value="bedrock_altitude"),
        c.Atted(mode="o", att_name="long_name",
                var_name="topg", value=""),
        c.Atted(mode="o", att_name="source",
                var_name="topg", value="ETOPO1 1 Arc-Minute Global Relief Model. http://www.ngdc.noaa.gov/mgg/global/global.html"),
    ]
    nco.ncatted(input=remapped_bed, options=opt)
    nco.ncks(input=remapped_bed,
             output=outputfile)

    remapped_usurf = remap("{} ", gridfile, input_usurf,
                           "-f nc4c", interpolationMethod)

    rDict = {
        'z': 'usurf',
    }
    nco.ncrename(input=remapped_usurf, options=[c.Rename("variable", rDict)])

    opt = [
        c.Atted(mode="o", att_name="units",
                var_name="usurf", value="m"),
        c.Atted(mode="o", att_name="standard_name",
                var_name="usurf", value="surface_altitude"),
        c.Atted(mode="o", att_name="long_name",
                var_name="usurf", value=""),
        c.Atted(mode="o", att_name="source",
                var_name="usurf", value="ETOPO1 1 Arc-Minute Global Relief Model. http://www.ngdc.noaa.gov/mgg/global/global.html"),
    ]
    nco.ncatted(input=remapped_usurf, options=opt)

    nco.ncks(input=remapped_usurf,
             output=outputfile, append=True)

    # thickness is computed from usurf and topg
    # need to make sure that it's always positive
    thk = cdo.expr(
        "'thk = ((usurf-topg) > 0) ? (usurf-topg) : (0.0)'", input=outputfile, options="-f nc4c")
    opt = [
        c.Atted(mode="o", att_name="units",
                var_name="thk", value="m"),
        c.Atted(mode="o", att_name="standard_name",
                var_name="thk", value="land_ice_thickness"),
        c.Atted(mode="o", att_name="long_name",
                var_name="thk", value=""),
        c.Atted(mode="o", att_name="source",
                var_name="thk", value="ETOPO1 1 Arc-Minute Global Relief Model. http://www.ngdc.noaa.gov/mgg/global/global.html"),
    ]
    nco.ncatted(input=thk, options=opt)

    nco.ncks(input=thk,
             output=outputfile, append=True)


if __name__ == "__main__":
    parser = argparse.ArgumentParser(
        description="Generate ETOPO1 file on icemodel grid")
    parser.add_argument("gridfile")
    parser.add_argument("input_bed")
    parser.add_argument("input_usurf")
    parser.add_argument("outputfile")
    parser.add_argument("--interpolationMethod", default="bilinear",
                        choices=["bilinear", "conservative"])
    args = parser.parse_args()
    prepare_ETOPO1(args.gridfile,
                   args.input_bed,
                   args.input_usurf,
                   args.outputfile,
                   args.interpolationMethod,
                   )
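The `cdo expr` ternary used for `thk` above clamps negative thickness to zero; in NumPy terms (a sketch, not part of the workflow) it is simply:

```python
import numpy as np

# thk = ((usurf - topg) > 0) ? (usurf - topg) : 0.0  -- the cdo expression
usurf = np.array([100.0, 50.0, -10.0])  # hypothetical surface elevations, m
topg = np.array([80.0, 60.0, -5.0])     # hypothetical bedrock elevations, m

thk = np.maximum(usurf - topg, 0.0)     # negative thickness clamped to zero
print(thk)  # [20.  0.  0.]
```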
import xarray as xr
import numpy as np
import matplotlib.pyplot as plt
import argparse
from tqdm import tqdm


def check_monotonicity(vector):
    return np.all(vector[1:] >= vector[:-1])


def read_csv_and_compute_index(csvfile):
    """
    Prepare glacial index file
    """
    # input = "./input/glacialIndex/41586_2004_BFnature02805_MOESM1_ESM.csv"
    secPerYear = 60 * 60 * 24 * 365

    deltaO18 = np.genfromtxt(csvfile, delimiter=',')
    # the csv uses inverval notation, so every year is double
    # lets only take every second one

    time = deltaO18[::2, 0] / 1000  # not used; recomputed from bounds below
    delta = deltaO18[::2, 1]

    # compute bounds
    time_bnds = deltaO18[::-1, 0] * -1  # reverse direction
    time_bnds = time_bnds.reshape((len(time_bnds)//2, 2))
    time_bnds = time_bnds / 1000  # now it's years

    time_bnds_secs = time_bnds * secPerYear

    # compute time from bounds
    time = np.mean(time_bnds_secs, axis=1)

    # convert time to seconds and invert direction
    # time = time[::-1] * secPerYear * -1

    # also need to reverse the delta
    delta = delta[::-1]

    LGMindex = len(delta) - 421  # at 21k years before present (year 2000)
    IGindex = len(delta) - 1  # interglacial is now

    print('LGM time is {} ka'.format(time[LGMindex] / secPerYear / 1000))
    print('IG time is {} ka'.format(time[IGindex] / secPerYear / 1000))

    def gen_index(delta, deltaPD, deltaLGM):
        """Eq. 1 in Niu et al 2017"""
        return (delta - deltaPD) / (deltaLGM - deltaPD)

    index = gen_index(delta, delta[IGindex], delta[LGMindex])
    time_ka = time / secPerYear/1000

    return index, time, time_ka, time_bnds_secs


def generate_index_file(index, time, time_bnds, outputfile):

    time_bnds_xr = xr.DataArray(name="time_bnds",
                                data=time_bnds,
                                dims=["time", "bnds"])

    index_xr = xr.DataArray(name="glac_index",
                            data=index,
                            dims=["time"],
                            coords={
                                "time": time,
                            },
                            attrs=dict(
                                long_name="Glacial Index",
                                units="1",
                            ),
                            )

    output = xr.Dataset(
        dict(
            glac_index=index_xr,
            time_bnds=time_bnds_xr,
        )
    )
    output.time.attrs["standard_name"] = "time"
    output.time.attrs["long_name"] = "time"
    output.time.attrs["bounds"] = "time_bnds"
    output.time.attrs["units"] = "seconds since 1-1-1"
    output.time.attrs["calendar"] = "365_day"
    output.time.attrs["axis"] = "T"

    # set the encoding explicitly, otherwise we get a _FillValue even on the
    # coordinates and time, which is not CF compliant
    encoding = {
        'time': {'dtype': 'float32', 'zlib': False, '_FillValue': None},
        'time_bnds': {'dtype': 'float32', 'zlib': False, '_FillValue': None},
    }
    # note: the 'zlib' encoding key requires the netCDF4 engine (not "scipy")
    output.to_netcdf(outputfile, encoding=encoding)


if __name__ == "__main__":
    parser = argparse.ArgumentParser(
        description="Generate glacial index forcing")
    parser.add_argument("indexfile")
    parser.add_argument("outputfile")
    args = parser.parse_args()

    index, time, time_ka, time_bnds_sec = read_csv_and_compute_index(
        args.indexfile)

    generate_index_file(
        index, time, time_bnds_sec,
        args.outputfile)
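`gen_index` implements Eq. 1 of Niu et al. (2017): the index is 0 when δ18O equals the present-day value and 1 at the LGM value, interpolating linearly in between. A minimal check with hypothetical δ18O values:

```python
import numpy as np

def gen_index(delta, deltaPD, deltaLGM):
    """Eq. 1 in Niu et al. 2017."""
    return (delta - deltaPD) / (deltaLGM - deltaPD)

# hypothetical delta-18O values (per mil): present day, halfway, LGM
deltaPD, deltaLGM = -34.0, -42.0
index = gen_index(np.array([deltaPD, -38.0, deltaLGM]), deltaPD, deltaLGM)
print(index)
```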
import socket
from pathlib import Path
import tempfile
import os
import getpass
import numpy as np
from netCDF4 import Dataset
import argparse
import shutil

from nco import Nco
from nco import custom as c
from cdo import Cdo
nco = Nco()

# todo: make this a config thing!
# on k19 I should use /work/sbeyer/tmp for tmp files
if socket.gethostname() == "k19":
    tempdir = "/work/"+getpass.getuser() + "/tmp"
    cdo = Cdo(tempdir=tempdir)  # python
else:
    cdo = Cdo()
    # cdo.debug = True


def remap(cdostring, gridfile, input, options, interpolationMethod):
    if interpolationMethod == "bilinear":
        remapped = cdo.remapbil(
            cdostring.format(gridfile), input=input, options=options)
    elif interpolationMethod == "conservative":
        remapped = cdo.remapycon(
            cdostring.format(gridfile), input=input, options=options)
    else:
        raise ValueError(
            'interpolationMethod {} unknown. Stopping.'.format(interpolationMethod))
    return remapped


def prepare_ice7g(gridfile, input, outputfile, interpolationMethod):

    remapped = remap("{} -selvar,stgit,Topo", gridfile, input,
                     "-f nc4c -b F32", interpolationMethod)

    rDict = {
        'stgit': 'thk',
        'Topo': 'usurf',
    }
    nco.ncrename(input=remapped, options=[c.Rename("variable", rDict)])

    opt = [
        c.Atted(mode="o", att_name="units",
                var_name="thk", value="m"),
        c.Atted(mode="o", att_name="standard_name",
                var_name="thk", value="land_ice_thickness"),
        c.Atted(mode="o", att_name="long_name",
                var_name="thk", value=""),
        c.Atted(mode="o", att_name="source",
                var_name="thk", value="ICE-7G_NA https://www.atmosp.physics.utoronto.ca/~peltier/data.php"),
    ]
    nco.ncatted(input=remapped, options=opt)
    nco.ncks(input=remapped,
             output=outputfile)

    # topg is computed as difference
    topg = cdo.expr(
        "'topg = (usurf-thk)'", input=outputfile)
    opt = [
        c.Atted(mode="o", att_name="units",
                var_name="topg", value="m"),
        c.Atted(mode="o", att_name="standard_name",
                var_name="topg", value="bedrock_altitude"),
        c.Atted(mode="o", att_name="long_name",
                var_name="topg", value=""),
        c.Atted(mode="o", att_name="source",
                var_name="topg", value="ICE-7G_NA https://www.atmosp.physics.utoronto.ca/~peltier/data.php"),
    ]
    nco.ncatted(input=topg, options=opt)

    nco.ncks(input=topg,
             output=outputfile, append=True)


if __name__ == "__main__":
    parser = argparse.ArgumentParser(
        description="Generate ICE-7G file on icemodel grid")
    parser.add_argument("gridfile")
    parser.add_argument("inputfile")
    parser.add_argument("outputfile")
    parser.add_argument("--interpolationMethod", default="bilinear",
                        choices=["bilinear", "conservative"])
    args = parser.parse_args()
    prepare_ice7g(args.gridfile,
                  args.inputfile,
                  args.outputfile,
                  args.interpolationMethod,
                  )
import socket
from pathlib import Path
import tempfile
import os
import getpass
import numpy as np
from netCDF4 import Dataset
import argparse
import shutil

from nco import Nco
from nco import custom as c
from cdo import Cdo
nco = Nco()

# todo: make this a config thing!
# on k19 I should use /work/sbeyer/tmp for tmp files
if socket.gethostname() == "k19":
    tempdir = "/work/"+getpass.getuser() + "/tmp"
    cdo = Cdo(tempdir=tempdir)  # python
else:
    cdo = Cdo()
    # cdo.debug = True


def remap(cdostring, gridfile, input, options, interpolationMethod):
    if interpolationMethod == "bilinear":
        remapped = cdo.remapbil(
            cdostring.format(gridfile), input=input, options=options)
    elif interpolationMethod == "conservative":
        remapped = cdo.remapycon(
            cdostring.format(gridfile), input=input, options=options)
    else:
        raise ValueError(
            'interpolationMethod {} unknown. Stopping.'.format(interpolationMethod))
    return remapped


def prepare_Laske_Masters(gridfile, input, outputfile, interpolationMethod):

    remapped_file = remap("{} ", gridfile, input,
                          "-f nc4c", interpolationMethod)

    nco.ncks(input=remapped_file,
             output=outputfile,)


if __name__ == "__main__":
    parser = argparse.ArgumentParser(
        description="Generate Laske & Masters sediment file on icemodel grid")
    parser.add_argument("gridfile")
    parser.add_argument("inputfile")
    parser.add_argument("outputfile")
    parser.add_argument("--interpolationMethod", default="bilinear",
                        choices=["bilinear", "conservative"])
    args = parser.parse_args()
    prepare_Laske_Masters(args.gridfile,
                          args.inputfile,
                          args.outputfile,
                          args.interpolationMethod,
                          )
import sys
import numpy as np
import cv2 as cv
from netCDF4 import Dataset
import argparse


def generate_ocean_kill_mask(topgfile, outputfile, level=-2000, filter_length=25, remove_himalaya=False):

    topg_level = level

    src = Dataset(topgfile, "r")
    topg = src.variables["topg"][:]
    x = src.variables["x"][:]
    y = src.variables["y"][:]

    # new_thk = empty_like(topg)
    okill = np.zeros(topg.shape, dtype="uint8")

    # smoothing
    topg_smooth = cv.blur(topg, (filter_length, filter_length))

    #okill[topg_smooth>=topg_level] = 1.0
    np.putmask(okill, topg_smooth >= topg_level,
               1.0)  # a little faster than the previous method

    # second method using the longest contour
    # currently not used here
    #okill2 = np.zeros(topg.shape, dtype="uint8")
    # start image processing
    # https://stackoverflow.com/questions/44588279/find-and-draw-the-largest-contour-in-opencv-on-a-specific-color-python

    # work on initial data
    # im2, contours, hierarchy = cv.findContours(okill, cv.RETR_EXTERNAL,
    contours, hierarchy = cv.findContours(okill, cv.RETR_EXTERNAL,
                                          cv.CHAIN_APPROX_NONE)
    if len(contours) != 0:
        # find largest contour by area
        cnt = max(contours, key=cv.contourArea)
    else:
        print("ERROR: no contour found for given contour level")
        sys.exit(2)

    #print ("max area %f" % cv.contourArea(cnt))
    # mask based on selected contour
    # currently not used
    #cv.drawContours(okill2, [cnt], 0, 1, -1)

    if remove_himalaya:
        # option to remove ice in himalaya because the climate model is not so
        # great there
        okill[440:, 440:] = 0

    dst = Dataset(outputfile, "w")

    # Copy dimensions
    for dname, the_dim in src.dimensions.items():
        if dname in ['y', 'x']:
            # print(dname, len(the_dim))
            dst.createDimension(
                dname,
                len(the_dim) if not the_dim.isunlimited() else None)

    # Copy variables
    for vname, varin in src.variables.items():
        if vname in [
                'y',
                'x',
                'lat',
                'lon',
                'mapping',
        ]:
            out_var = dst.createVariable(vname, varin.datatype,
                                         varin.dimensions)
            # Copy variable attributes
            for att in varin.ncattrs():
                # print att
                if att not in ['_FillValue', 'name']:
                    setattr(out_var, att, getattr(varin, att))
            # Write data
            out_var[:] = varin[:]

    nc_mask = dst.createVariable('land_ice_area_fraction_retreat', 'd',
                                 ('y', 'x'))
    nc_mask[:] = okill
    nc_mask.units = "1"
    nc_mask.coordinates = "lat lon"
    nc_mask.long_name = "mask specifying fixed calving front locations"
    nc_mask.doc = ('ocean kill mask from topg_level {:.3f} m after smoothing '
                   'with filter length {:d} pixel'.format(level, filter_length))
    src.close()
    dst.close()


if __name__ == "__main__":
    parser = argparse.ArgumentParser(
        description="Generate ocean kill file based on topography")
    parser.add_argument("inputfile")
    parser.add_argument("outputfile")
    parser.add_argument("--level", type=float, default=-2000)
    parser.add_argument("--filter_length", type=int, default=25)
    parser.add_argument("--remove_himalaya", action="store_true")
    args = parser.parse_args()
    generate_ocean_kill_mask(args.inputfile,
                             args.outputfile,
                             level=args.level,
                             filter_length=args.filter_length,
                             remove_himalaya=args.remove_himalaya,
                             )
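`np.putmask` above is just a (slightly faster) spelling of boolean-mask assignment; a sketch of the thresholding step with a tiny hand-made field, skipping the `cv.blur` smoothing:

```python
import numpy as np

topg_smooth = np.array([[-3000.0, -1500.0],
                        [-2500.0, 100.0]])  # hypothetical smoothed bed, m
topg_level = -2000.0

# as in the script: mask = 1 wherever the smoothed bed is above the level
okill = np.zeros(topg_smooth.shape, dtype="uint8")
np.putmask(okill, topg_smooth >= topg_level, 1)

# equivalent boolean-indexing form
okill2 = np.zeros(topg_smooth.shape, dtype="uint8")
okill2[topg_smooth >= topg_level] = 1

print(okill.tolist())  # [[0, 1], [0, 1]]
```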
import socket
from pathlib import Path
import tempfile
import os
import getpass
import numpy as np
from netCDF4 import Dataset
import argparse
import shutil

from nco import Nco
from nco import custom as c
from cdo import Cdo
nco = Nco()

# todo: make this a config thing!
# on k19 I should use /work/sbeyer/tmp for tmp files
if socket.gethostname() == "k19":
    tempdir = "/work/"+getpass.getuser() + "/tmp"
    cdo = Cdo(tempdir=tempdir)  # python
else:
    cdo = Cdo()
    # cdo.debug = True


def remap(cdostring, gridfile, input, options, interpolationMethod):
    if interpolationMethod == "bilinear":
        remapped = cdo.remapbil(
            cdostring.format(gridfile), input=input, options=options)
    elif interpolationMethod == "conservative":
        remapped = cdo.remapycon(
            cdostring.format(gridfile), input=input, options=options)
    else:
        raise ValueError(
            'interpolationMethod {} unknown. Stopping.'.format(interpolationMethod))
    return remapped


def prepare_SHAPIRO(gridfile, input, outputfile, interpolationMethod):

    remapped = remap("{} ", gridfile, input,
                     "-f nc4c -b F32", interpolationMethod)

    positive = cdo.expr(
        "'bheatflx = (bheatflx < 0) ? 0.0 : bheatflx'", input=remapped)
    nco.ncks(input=positive,
             output=outputfile)


if __name__ == "__main__":
    parser = argparse.ArgumentParser(
        description="Generate geothermal heat flux (Shapiro) file on icemodel grid")
    parser.add_argument("gridfile")
    parser.add_argument("inputfile")
    parser.add_argument("outputfile")
    parser.add_argument("--interpolationMethod", default="bilinear",
                        choices=["bilinear", "conservative"])
    args = parser.parse_args()
    prepare_SHAPIRO(args.gridfile,
                    args.inputfile,
                    args.outputfile,
                    args.interpolationMethod,
                    )
import sys
import numpy as np
from netCDF4 import Dataset
import argparse


def generate_tillphi(topgfile, sedimentfile, maskfile, outfile,
                     sediment_threshold,
                     phi_min, phi_max, topg_min, topg_max, tauc_factor):

    # compute tillphi from topg:
    src = Dataset(topgfile, mode='r')
    topg = src.variables['topg'][:]

    print(sediment_threshold, phi_min, phi_max,
          topg_min, topg_max, tauc_factor)
    print(topg)

    tillphi = phi_min + (topg - topg_min) * \
        (phi_max-phi_min)/(topg_max-topg_min)
    tillphi[topg < topg_min] = phi_min
    tillphi[topg > topg_max] = phi_max

    # handle sediment map
    # first divide everything by 100 after a tan has been applied to it
    tillphi_rad = tillphi * np.pi / 180
    # tanphi = np.tan(tillphi_rad)  # for testing only

    # ratio between the reduced angle and the original angle
    m = np.arctan(tauc_factor * np.tan(tillphi_rad)) / tillphi_rad
    smallphi_rad = tillphi_rad * m
    smallphi_deg = smallphi_rad / np.pi * 180
    # tanphi_small = np.tan(smallphi_rad)  # for testing only

    # determine where to reduce tillphi
    if maskfile != "none":
        data = Dataset(maskfile, mode="r")
        sedimask = data.variables['sedimentmask'][:]
        data.close()
        tillphi_sediment = np.where(
            sedimask > 0, tillphi, smallphi_deg)
    else:
        data = Dataset(sedimentfile, mode='r')
        thk_sediment = data.variables['thk_sediment'][:]
        data.close()
        tillphi_sediment = np.where(
            thk_sediment < sediment_threshold, tillphi, smallphi_deg)

    # write to the output file
    dst = Dataset(outfile, mode='w')

    ##############################################################
    # Copy dimensions
    for dname, the_dim in src.dimensions.items():
        if dname in ['y', 'x']:
            # print(dname, len(the_dim))
            dst.createDimension(
                dname,
                len(the_dim) if not the_dim.isunlimited() else None)

    # Copy variables
    for vname, varin in src.variables.items():
        if vname in [
                'y',
                'x',
                'lat',
                'lon',
                'mapping',
        ]:
            out_var = dst.createVariable(vname, varin.datatype,
                                         varin.dimensions)
            # Copy variable attributes
            for att in varin.ncattrs():
                # print att
                if att not in ['_FillValue', 'name']:
                    setattr(out_var, att, getattr(varin, att))
            # Write data
            out_var[:] = varin[:]

    nc_tillphi = dst.createVariable(
        'tillphi', 'f4', ('y', 'x'), zlib=True)
    nc_tillphi.units = "degrees"
    nc_tillphi.long_name = "till friction angle computed from topg and sediment mask"
    nc_tillphi[:] = tillphi_sediment

    src.close()
    dst.close()


if __name__ == "__main__":
    parser = argparse.ArgumentParser(
        description="Generate till friction angle file from topography and sediment data")
    parser.add_argument("topgfile")
    parser.add_argument("sedimentfile")
    parser.add_argument("maskfile")
    parser.add_argument("outfile")
    parser.add_argument("sediment_threshold", type=float)
    parser.add_argument("phi_min", type=float)
    parser.add_argument("phi_max", type=float)
    parser.add_argument("topg_min", type=float)
    parser.add_argument("topg_max", type=float)
    parser.add_argument("tauc_factor", type=float)
    args = parser.parse_args()

    generate_tillphi(args.topgfile,
                     args.sedimentfile,
                     args.maskfile,
                     args.outfile,
                     args.sediment_threshold,
                     args.phi_min,
                     args.phi_max,
                     args.topg_min,
                     args.topg_max,
                     args.tauc_factor)
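The `arctan(tauc_factor * tan(phi))` construction in `generate_tillphi` picks a reduced friction angle whose tangent — and hence the basal yield stress, which is proportional to `tan(phi)` — is scaled by exactly `tauc_factor`. A quick numerical check:

```python
import numpy as np

tillphi_deg = np.array([10.0, 20.0, 30.0])  # hypothetical friction angles
tauc_factor = 0.1

phi_rad = np.deg2rad(tillphi_deg)
smallphi_rad = np.arctan(tauc_factor * np.tan(phi_rad))  # as in the script

# tan of the reduced angle is exactly tauc_factor times tan of the original
print(np.allclose(np.tan(smallphi_rad), tauc_factor * np.tan(phi_rad)))  # True
```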
from netCDF4 import Dataset
import argparse


def set_climatology_time(file):
    """
    For climatology forcing the time needs to start from 0
    @TODO: use with for dataset
    @TODO: middle of month?
    """

    rootgrp = Dataset(file, "r+")
    if "nv" not in rootgrp.dimensions:
        print("creating nv dimension for time bounds")
        rootgrp.createDimension("nv", 2)

    nc_time = rootgrp.variables['time']
    nc_time[:] = [15, 45, 75, 105, 135, 165, 195, 225, 255, 285, 315, 345]

    if "time_bnds" not in rootgrp.variables:
        print("creating time bounds var")
        nc_time_bounds = rootgrp.createVariable(
            "time_bnds", "f8", ("time", "nv"))
    nc_time_bounds = rootgrp.variables['time_bnds']
    nc_time_bounds[:] = [[0, 30], [30, 60], [60, 90], [90, 120], [120, 150], [150, 180],
                         [180, 210], [210, 240], [240, 270], [270, 300], [300, 330], [330, 360]]
    print("done")
    nc_time.bounds = 'time_bnds'
    nc_time.units = 'days since 1-1-1'
    nc_time.calendar = '360_day'

    rootgrp.close()


if __name__ == "__main__":
    parser = argparse.ArgumentParser(
        description="set simple climatology time")
    parser.add_argument("inputfile")
    args = parser.parse_args()
    set_climatology_time(args.inputfile)
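The hard-coded day values in `set_climatology_time` are consistent by construction: on a `360_day` calendar every month spans 30 days, and each midpoint is the mean of its bounds:

```python
import numpy as np

# month bounds on a 360_day calendar, in days since the start of the year
bounds = np.array([[30 * i, 30 * (i + 1)] for i in range(12)])
midpoints = bounds.mean(axis=1)

print(midpoints.tolist())  # [15.0, 45.0, 75.0, ..., 345.0]
```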

Created: 1yr ago
Updated: 1yr ago
Maintainers: public
URL: https://github.com/sebastianbeyer/glaciaconda
Name: glaciaconda
Version: 1

Downloaded: 0
Copyright: Public Domain
License: None