
From the PO.DAAC Cookbook; to access the GitHub version of this notebook, follow this link.

SWOT Simulated North American Continent Hydrology Dataset Exploration in the Cloud

Accessing and Visualizing SWOT Simulated Datasets

Requirement:

This tutorial can only be run in an AWS cloud instance running in us-west-2: NASA Earthdata Cloud data in S3 can be directly accessed via the earthaccess Python library, and this access is limited to requests made from within the US West (Oregon) (us-west-2) AWS region.
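If you are unsure which region your compute instance is running in, one quick check is to query the EC2 instance metadata service. This is a minimal sketch that assumes an EC2-backed environment where the metadata endpoint is reachable; instances enforcing IMDSv2 will additionally require a session token.

import requests
#query the EC2 instance metadata service (only reachable from inside an AWS instance)
region = requests.get("http://169.254.169.254/latest/meta-data/placement/region", timeout=2).text
print(region)  #expect 'us-west-2'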

Learning Objectives:

  • Access all 5 products of SWOT HR sample data (archived in NASA Earthdata Cloud) within the AWS cloud, without downloading to a local machine
  • Visualize accessed data

SWOT Simulated Level 2 North America Continent KaRIn High Rate Version 1 Datasets:

  1. River Vector Shapefile - SWOT_SIMULATED_NA_CONTINENT_L2_HR_RIVERSP_V1
     DOI: https://doi.org/10.5067/KARIN-2RSP1
  2. Lake Vector Shapefile - SWOT_SIMULATED_NA_CONTINENT_L2_HR_LAKESP_V1
     DOI: https://doi.org/10.5067/KARIN-2LSP1
  3. Water Mask Pixel Cloud NetCDF - SWOT_SIMULATED_NA_CONTINENT_L2_HR_PIXC_V1
     DOI: https://doi.org/10.5067/KARIN-2PIX1
  4. Water Mask Pixel Cloud Vector Attribute NetCDF - SWOT_SIMULATED_NA_CONTINENT_L2_HR_PIXCVEC_V1
     DOI: https://doi.org/10.5067/KARIN-2PXV1
  5. Raster NetCDF - SWOT_SIMULATED_NA_CONTINENT_L2_HR_RASTER_V1
     DOI: https://doi.org/10.5067/KARIN-2RAS1

Notebook Author: Cassie Nickles, NASA PO.DAAC (Aug 2022)

Libraries Needed

import glob
import os
import requests
import s3fs
import fiona
import netCDF4 as nc
import h5netcdf
import xarray as xr
import pandas as pd
import geopandas as gpd
import numpy as np
import matplotlib.pyplot as plt
import hvplot.xarray
import earthaccess
from earthaccess import Auth, DataCollections, DataGranules, Store

Earthdata Login

An Earthdata Login account is required to access data, as well as to discover restricted data, from the NASA Earthdata system. Please visit https://urs.earthdata.nasa.gov to register and manage your Earthdata Login account; it is free to create and only takes a moment to set up. We use earthaccess to authenticate with your login credentials below.

#auth = earthaccess.login(strategy="interactive", persist=True) #if you do not have a netrc created, this line will do so with your credentials
auth = earthaccess.login(strategy="netrc")

Set up an s3fs session for Direct Access

s3fs sessions are used for authenticated access to S3 buckets and allow for typical file-system style operations. Below we create a session by passing in the data access information.

fs_s3 = earthaccess.get_s3fs_session(daac='PODAAC', provider='POCLOUD')

Single File Access

The s3 access link can be found using earthaccess data search. Since this collection consists of Reach and Node files, we need to extract only the granule for the Reach file. We do this by filtering for the ‘Reach’ title in the data link.

Alternatively, Earthdata Search (see tutorial) can be used to manually search for a single file.

1. River Vector Shapefiles

#retrieves granule from the day we want
river_results = earthaccess.search_data(short_name = 'SWOT_SIMULATED_NA_CONTINENT_L2_HR_RIVERSP_V1', temporal = ('2022-08-22 19:24:41', '2022-08-22 19:30:37'))
#finds the s3 link of the one granule we want (The collection contains both Reaches and Nodes, but here we want only the Reach)
river_data_urls = []
for g in river_results:
    for l in earthaccess.results.DataGranule.data_links(g, access='direct'):
        if "Reach" in l:
            river_data_urls.append(l)
print(river_data_urls[0])
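Because fs_s3 behaves like a regular file system, you can optionally confirm the granule object exists and check its size before reading it. This is a small optional check, not part of the original workflow:

#optional file-system style check on the granule object before reading it
print(fs_s3.info(river_data_urls[0])["size"])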

The native format for this data is a .zip file, and we want the .shp file within it, so we will create a Fiona AWS session using the credentials from the s3fs session set up above to read the shapefiles directly out of the zipped granules. The alternative would be to download the data to the cloud environment and extract the .zip file there (a sketch of that route follows the plot below).

fiona_session=fiona.session.AWSSession(
        aws_access_key_id=fs_s3.storage_options["key"],
        aws_secret_access_key=fs_s3.storage_options["secret"],
        aws_session_token=fs_s3.storage_options["token"]
    )
# We use the zip+ prefix so fiona knows that we are operating on a zip file
river_shp_url = f"zip+{river_data_urls[0]}"

with fiona.Env(session=fiona_session):
    SWOT_HR_shp1 = gpd.read_file(river_shp_url) 

#view the attribute table
SWOT_HR_shp1 
fig, ax = plt.subplots(figsize=(11,7))
SWOT_HR_shp1.plot(ax=ax, color='black')
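As mentioned above, the alternative to streaming the shapefile through Fiona is to copy the .zip into the cloud workspace and extract it there. A minimal sketch of that route follows; the local file and folder names are illustrative, not part of the original notebook.

import zipfile

#copy the zipped granule from S3 into the local workspace of the cloud instance
fs_s3.get(river_data_urls[0], "river_reach.zip")
#extract the archive and read the shapefile it contains
with zipfile.ZipFile("river_reach.zip") as zf:
    zf.extractall("river_reach")
SWOT_HR_reach_local = gpd.read_file(glob.glob("river_reach/*.shp")[0])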

2. Lake Vector Shapefiles

The lake vector shapefiles can be accessed in the same way as the river shapefiles above.

lake_results = earthaccess.search_data(short_name = 'SWOT_SIMULATED_NA_CONTINENT_L2_HR_LAKESP_V1', temporal = ('2022-08-22 19:24:18', '2022-08-22 19:30:50'))
#find the s3 link of the desired granule (This collection has three options: Obs, Unassigned, and Prior - we want Obs)
lake_data_urls = []
for g in lake_results:
    for l in earthaccess.results.DataGranule.data_links(g, access='direct'):
        if "Obs" in l:
            lake_data_urls.append(l)
print(lake_data_urls[0])

As with the river shapefiles above, the native format is a .zip file containing the .shp file we want, so we again create a Fiona AWS session with the s3fs credentials and read the shapefile directly from the zipped granule.

fiona_session=fiona.session.AWSSession(
        aws_access_key_id=fs_s3.storage_options["key"],
        aws_secret_access_key=fs_s3.storage_options["secret"],
        aws_session_token=fs_s3.storage_options["token"]
    )
# We use the zip+ prefix so fiona knows that we are operating on a zip file
lake_shp_url = f"zip+{lake_data_urls[0]}"

with fiona.Env(session=fiona_session):
    SWOT_HR_shp2 = gpd.read_file(lake_shp_url) 

#view the attribute table
SWOT_HR_shp2
fig, ax = plt.subplots(figsize=(7,12))
SWOT_HR_shp2.plot(ax=ax, color='black')

3. Water Mask Pixel Cloud NetCDF

Accessing the remaining files is different from accessing the shapefiles above. We do not need to unzip these files because they are stored natively as netCDF files in the cloud. For the rest of the products, we will open them via xarray.

watermask_results = earthaccess.search_data(short_name = 'SWOT_SIMULATED_NA_CONTINENT_L2_HR_PIXC_V1', temporal = ('2022-08-22 19:29:00', '2022-08-22 19:29:11'), point = ('-90', '35'))

The pixel cloud netCDF files are formatted with three groups titled “pixel_cloud”, “tvp”, and “noise” (more detail here). In order to access the coordinates and variables within the file, a group must be specified when calling xarray's open_dataset.
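If you would like to confirm the group names before opening the dataset below, one option is to open the granule as a file object with the h5netcdf library imported earlier and list its groups. This is a minimal sketch, not part of the original notebook:

#open the granule as a file-like object and list its top-level netCDF groups
pixc_file = earthaccess.open([watermask_results[0]])[0]
with h5netcdf.File(pixc_file, 'r') as f:
    print(list(f.groups))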

ds_PIXC = xr.open_mfdataset(earthaccess.open([watermask_results[0]]), group = 'pixel_cloud', engine='h5netcdf')
ds_PIXC
plt.scatter(x=ds_PIXC.longitude, y=ds_PIXC.latitude, c=ds_PIXC.height)
plt.colorbar().set_label('Height (m)')

4. Water Mask Pixel Cloud Vector Attribute NetCDF

vector_results = earthaccess.search_data(short_name = 'SWOT_SIMULATED_NA_CONTINENT_L2_HR_PIXCVEC_V1', temporal = ('2022-08-22 19:29:00', '2022-08-22 19:29:11'), point = ('-90', '35'))
ds_PIXCVEC = xr.open_mfdataset(earthaccess.open([vector_results[0]]), decode_cf=False,  engine='h5netcdf')
ds_PIXCVEC
pixcvec_htvals = ds_PIXCVEC.height_vectorproc.compute()
pixcvec_latvals = ds_PIXCVEC.latitude_vectorproc.compute()
pixcvec_lonvals = ds_PIXCVEC.longitude_vectorproc.compute()

#Before plotting, we set all fill values to nan so that the graph shows up better spatially
pixcvec_htvals[pixcvec_htvals > 15000] = np.nan
pixcvec_latvals[pixcvec_latvals > 80] = np.nan
pixcvec_lonvals[pixcvec_lonvals > 180] = np.nan
plt.scatter(x=pixcvec_lonvals, y=pixcvec_latvals, c=pixcvec_htvals)
plt.colorbar().set_label('Height (m)')
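Because the file was opened with decode_cf=False, the fill values remain in the arrays. An equivalent approach to the hard-coded thresholds above, assuming the variables carry a standard _FillValue attribute, is to mask with xarray's where:

#mask fill values using the variable's own _FillValue attribute, if present
fill = ds_PIXCVEC.height_vectorproc.attrs.get("_FillValue")
heights_masked = ds_PIXCVEC.height_vectorproc.where(ds_PIXCVEC.height_vectorproc != fill)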

5. Raster NetCDF

raster_results = earthaccess.search_data(short_name = 'SWOT_SIMULATED_NA_CONTINENT_L2_HR_RASTER_V1', temporal = ('2022-08-22 19:28:50', '2022-08-22 19:29:11'), point = ('-90', '35'))
#this collection has 100m and 250m granules, but we only want 100m
raster_data = []
for g in raster_results:
    for l in earthaccess.results.DataGranule.data_links(g, access='direct'):
        if "100m" in l:
            raster_data.append(l)
print(raster_data)
ds_raster = xr.open_mfdataset(earthaccess.open([raster_data[0]], provider = 'POCLOUD'), engine='h5netcdf')
ds_raster

It’s easy to analyze and plot the data with packages such as hvplot!

ds_raster.wse.hvplot.image(y='y', x='x')
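If you prefer a static figure, the same water surface elevation variable can also be drawn with xarray's built-in matplotlib plotting. A quick sketch, assuming wse is the 2D y/x grid opened above:

#static matplotlib plot of the water surface elevation raster via xarray
ds_raster.wse.plot(x='x', y='y', cmap='viridis')
plt.show()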