API reference

High-level functions

CloudnetPy’s high-level functions provide a simple mechanism to process cloud remote sensing measurements into Cloudnet products. A full processing run proceeds in steps, and each step produces a file which is used as input for the next step.
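
A complete chain from raw instrument files to a classification product might, for example, look like the following sketch using the functions documented below (all file names are placeholders):

>>> from cloudnetpy.instruments import mira2nc, ceilo2nc
>>> from cloudnetpy.categorize import generate_categorize
>>> from cloudnetpy.products import generate_classification
>>> site_meta = {'name': 'Hyytiala', 'altitude': 174}
>>> mira2nc('raw_radar.mmclx', 'radar.nc', site_meta)
>>> ceilo2nc('raw_lidar.nc', 'lidar.nc', site_meta)
>>> input_files = {'radar': 'radar.nc', 'lidar': 'lidar.nc',
...                'model': 'model.nc', 'mwr': 'mwr.nc'}
>>> generate_categorize(input_files, 'categorize.nc')
>>> generate_classification('categorize.nc', 'classification.nc')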

Raw data conversion

Different Cloudnet instruments provide raw data in various formats (netCDF, binary, text) that first need to be converted into homogeneous Cloudnet netCDF files containing harmonized units and other metadata. This initial processing step is necessary to ensure that the subsequent processing steps work with all supported instrument combinations.

instruments.mira2nc(raw_mira: str | list[str], output_file: str, site_meta: dict, uuid: str | None = None, date: str | None = None) str[source]

Converts METEK MIRA-35 cloud radar data into Cloudnet Level 1b netCDF file.

This function converts raw MIRA file(s) into a much smaller file that contains only the relevant data and can be used in further processing steps.

Parameters:
  • raw_mira – Filename of a daily MIRA .mmclx or .znc file. Can also be a folder containing several non-concatenated .mmclx or .znc files from one day, or a list of files. The .znc files take precedence because they are the newer file type.

  • output_file – Output filename.

  • site_meta – Dictionary containing information about the site. Required key value pair is name.

  • uuid – Set specific UUID for the file.

  • date – Expected date as YYYY-MM-DD of all profiles in the file.

Returns:

UUID of the generated file.

Raises:
  • ValidTimeStampError – No valid timestamps found.

  • FileNotFoundError – No suitable input files found.

  • ValueError – Wrong suffix in input file(s).

  • TypeError – Mixed mmclx and znc files.

Examples

>>> from cloudnetpy.instruments import mira2nc
>>> site_meta = {'name': 'Vehmasmaki'}
>>> mira2nc('raw_radar.mmclx', 'radar.nc', site_meta)
>>> mira2nc('raw_radar.znc', 'radar.nc', site_meta)
>>> mira2nc('/one/day/of/mira/mmclx/files/', 'radar.nc', site_meta)
>>> mira2nc('/one/day/of/mira/znc/files/', 'radar.nc', site_meta)
instruments.rpg2nc(path_to_l1_files: str, output_file: str, site_meta: dict, uuid: str | None = None, date: str | None = None) tuple[str, list][source]

Converts RPG-FMCW-94 cloud radar data into Cloudnet Level 1b netCDF file.

This function reads one day of RPG Level 1 cloud radar binary files, concatenates the data and writes a netCDF file.

Parameters:
  • path_to_l1_files – Folder containing one day of RPG LV1 files.

  • output_file – Output file name.

  • site_meta – Dictionary containing information about the site. Required key value pairs are altitude (metres above mean sea level) and name.

  • uuid – Set specific UUID for the file.

  • date – Expected date in the input files. If not set, all files will be used. This might cause unexpected behavior if there are files from several days. If date is set as ‘YYYY-MM-DD’, only files that match the date will be used.

Returns:

2-element tuple containing

  • UUID of the generated file.

  • Files used in the processing.

Raises:

ValidTimeStampError – No valid timestamps found.

Examples

>>> from cloudnetpy.instruments import rpg2nc
>>> site_meta = {'name': 'Hyytiala', 'altitude': 174}
>>> rpg2nc('/path/to/files/', 'test.nc', site_meta)
instruments.basta2nc(basta_file: str, output_file: str, site_meta: dict, uuid: str | None = None, date: str | None = None) str[source]

Converts BASTA cloud radar data into Cloudnet Level 1b netCDF file.

This function converts daily BASTA file into a much smaller file that contains only the relevant data and can be used in further processing steps.

Parameters:
  • basta_file – Filename of a daily BASTA .nc file.

  • output_file – Output filename.

  • site_meta – Dictionary containing information about the site. Required key is name.

  • uuid – Set specific UUID for the file.

  • date – Expected date of the measurements as YYYY-MM-DD.

Returns:

UUID of the generated file.

Raises:

ValueError – Timestamps do not match the expected date.

Examples

>>> from cloudnetpy.instruments import basta2nc
>>> site_meta = {'name': 'Palaiseau', 'latitude': 48.718, 'longitude': 2.207}
>>> basta2nc('basta_file.nc', 'radar.nc', site_meta)
instruments.galileo2nc(raw_files: str, output_file: str, site_meta: dict, uuid: str | None = None, date: str | None = None) str[source]

Converts ‘Galileo’ cloud radar data into Cloudnet Level 1b netCDF file.

Parameters:
  • raw_files – Input file name or folder containing multiple input files.

  • output_file – Output filename.

  • site_meta – Dictionary containing information about the site. Required key value pair is name. Optional are latitude, longitude, altitude and snr_limit (default = 3).

  • uuid – Set specific UUID for the file.

  • date – Expected date as YYYY-MM-DD of all profiles in the file.

Returns:

UUID of the generated file.

Raises:

ValidTimeStampError – No valid timestamps found.

Examples

>>> from cloudnetpy.instruments import galileo2nc
>>> site_meta = {'name': 'Chilbolton'}
>>> galileo2nc('raw_radar.nc', 'radar.nc', site_meta)
>>> galileo2nc('/one/day/of/galileo/files/', 'radar.nc', site_meta)
instruments.copernicus2nc(raw_files: str, output_file: str, site_meta: dict, uuid: str | None = None, date: str | None = None) str[source]

Converts ‘Copernicus’ cloud radar data into Cloudnet Level 1b netCDF file.

Parameters:
  • raw_files – Input file name or folder containing multiple input files.

  • output_file – Output filename.

  • site_meta – Dictionary containing information about the site. Required key value pair is name. Optional are latitude, longitude, altitude and ‘calibration_offset’ (default = -146.8).

  • uuid – Set specific UUID for the file.

  • date – Expected date as YYYY-MM-DD of all profiles in the file.

Returns:

UUID of the generated file.

Raises:

ValidTimeStampError – No valid timestamps found.

Examples

>>> from cloudnetpy.instruments import copernicus2nc
>>> site_meta = {'name': 'Chilbolton'}
>>> copernicus2nc('raw_radar.nc', 'radar.nc', site_meta)
>>> copernicus2nc('/one/day/of/copernicus/files/', 'radar.nc', site_meta)
instruments.ceilo2nc(full_path: str, output_file: str, site_meta: dict, uuid: str | None = None, date: str | None = None) str[source]

Converts Vaisala, Lufft and Campbell Scientific ceilometer data into Cloudnet Level 1b netCDF file.

This function reads raw Vaisala (CT25k, CL31, CL51, CL61), Lufft (CHM 15k, CHM 15k-x) and Campbell Scientific (CS135) ceilometer files and writes the data into netCDF file. Three variants of the backscatter are saved:

  1. Raw backscatter, beta_raw

  2. Signal-to-noise screened backscatter, beta

  3. SNR-screened backscatter with smoothed weak background, beta_smooth

With CL61 two additional depolarisation parameters are saved:

  1. Signal-to-noise screened depolarisation, depolarisation

  2. SNR-screened depolarisation with smoothed weak background, depolarisation_smooth

With CL61, the screened backscatter is masked using the beta_smooth mask to improve detection of weak aerosol layers and supercooled liquid clouds.

Parameters:
  • full_path – Ceilometer file name.

  • output_file – Output file name, e.g. ‘ceilo.nc’.

  • site_meta – Dictionary containing information about the site and instrument. Required key value pairs are name and altitude (metres above mean sea level). Also, ‘calibration_factor’ is recommended because the default value is probably incorrect. If the background noise is not range-corrected, you must define: {‘range_corrected’: False}. You can also explicitly set the instrument model with e.g. {‘model’: ‘cl61d’}.

  • uuid – Set specific UUID for the file.

  • date – Expected date as YYYY-MM-DD of all profiles in the file.

Returns:

UUID of the generated file.

Raises:

RuntimeError – Failed to read or process raw ceilometer data.

Examples

>>> from cloudnetpy.instruments import ceilo2nc
>>> site_meta = {'name': 'Mace-Head', 'altitude': 5}
>>> ceilo2nc('vaisala_raw.txt', 'vaisala.nc', site_meta)
>>> site_meta = {'name': 'Juelich', 'altitude': 108,
...              'calibration_factor': 2.3e-12}
>>> ceilo2nc('chm15k_raw.nc', 'chm15k.nc', site_meta)
instruments.pollyxt2nc(input_folder: str, output_file: str, site_meta: dict, uuid: str | None = None, date: str | None = None) str[source]

Converts PollyXT Raman lidar data into Cloudnet Level 1b netCDF file.

Parameters:
  • input_folder – Path to pollyxt netCDF files.

  • output_file – Output filename.

  • site_meta

    Dictionary containing information about the site with keys:

    • name: Name of the site (mandatory)

    • altitude: Site altitude in [m] (mandatory).

    • latitude (optional).

    • longitude (optional).

    • zenith_angle: If not the default 5 degrees (optional).

    • snr_limit: If not the default 2 (optional).

  • uuid – Set specific UUID for the file.

  • date – Expected date of the measurements as YYYY-MM-DD.

Returns:

UUID of the generated file.

Examples

>>> from cloudnetpy.instruments import pollyxt2nc
>>> site_meta = {'name': 'Mindelo', 'altitude': 13, 'zenith_angle': 6,
...              'snr_limit': 3}
>>> pollyxt2nc('/path/to/files/', 'pollyxt.nc', site_meta)
instruments.hatpro2nc(path_to_files: str, output_file: str, site_meta: dict, uuid: str | None = None, date: str | None = None) tuple[str, list][source]

Converts RPG HATPRO microwave radiometer data into Cloudnet Level 1b netCDF file.

This function reads one day of RPG HATPRO .LWP and .IWV binary files, concatenates the data and writes it into netCDF file.

Parameters:
  • path_to_files – Folder containing one day of RPG HATPRO files.

  • output_file – Output file name.

  • site_meta

    Dictionary containing information about the site with keys:

    • name: Name of the site (required)

    • altitude: Site altitude in [m] (optional).

    • latitude (optional).

    • longitude (optional).

  • uuid – Set specific UUID for the file.

  • date – Expected date in the input files. If not set, all files will be used. This might cause unexpected behavior if there are files from several days. If date is set as ‘YYYY-MM-DD’, only files that match the date will be used.

Returns:

2-element tuple containing

  • UUID of the generated file.

  • Files used in the processing.

Raises:

ValidTimeStampError – No valid timestamps found.

Examples

>>> from cloudnetpy.instruments import hatpro2nc
>>> site_meta = {'name': 'Hyytiala', 'altitude': 174}
>>> hatpro2nc('/path/to/files/', 'hatpro.nc', site_meta)
instruments.radiometrics2nc(full_path: str, output_file: str, site_meta: dict, uuid: str | None = None, date: str | date | None = None) str[source]

Converts Radiometrics .csv file into Cloudnet Level 1b netCDF file.

Parameters:
  • full_path – Input file name or folder containing multiple input files.

  • output_file – Output file name, e.g. ‘radiometrics.nc’.

  • site_meta – Dictionary containing information about the site and instrument. Required key value pairs are name and altitude (metres above mean sea level).

  • uuid – Set specific UUID for the file.

  • date – Expected date as YYYY-MM-DD of all profiles in the file.

Returns:

UUID of the generated file.

Examples

>>> from cloudnetpy.instruments import radiometrics2nc
>>> site_meta = {'name': 'Soverato', 'altitude': 21}
>>> radiometrics2nc('radiometrics.csv', 'radiometrics.nc', site_meta)
instruments.parsivel2nc(disdrometer_file: str | PathLike | Iterable[str | PathLike], output_file: str, site_meta: dict, uuid: str | None = None, date: str | date | None = None, telegram: Sequence[int | None] | None = None, timestamps: Sequence[datetime] | None = None) str[source]

Converts OTT Parsivel-2 disdrometer data into Cloudnet Level 1b netCDF file.

Parameters:
  • disdrometer_file – Filename of disdrometer file or list of filenames.

  • output_file – Output filename.

  • site_meta – Dictionary containing information about the site. Required key is name.

  • uuid – Set specific UUID for the file.

  • date – Expected date of the measurements as YYYY-MM-DD.

  • telegram – List of measured value numbers as specified in section 11.2 of the instrument’s operating instructions. Unknown values are indicated with None. Telegram is required if the input file doesn’t contain a header.

  • timestamps – Specify list of timestamps if they are missing in the input file.

Returns:

UUID of the generated file.

Raises:

DisdrometerDataError – Timestamps do not match the expected date, or unable to read the disdrometer file.

Examples

>>> from cloudnetpy.instruments import parsivel2nc
>>> site_meta = {'name': 'Lindenberg', 'altitude': 104, 'latitude': 52.2,
...              'longitude': 14.1}
>>> uuid = parsivel2nc('parsivel.log', 'parsivel.nc', site_meta)
instruments.thies2nc(disdrometer_file: str, output_file: str, site_meta: dict, uuid: str | None = None, date: str | date | None = None) str[source]

Converts Thies-LNM disdrometer data into Cloudnet Level 1b netCDF file.

Parameters:
  • disdrometer_file – Filename of disdrometer .log file.

  • output_file – Output filename.

  • site_meta – Dictionary containing information about the site. Required key is name.

  • uuid – Set specific UUID for the file.

  • date – Expected date of the measurements as YYYY-MM-DD.

Returns:

UUID of the generated file.

Raises:

DisdrometerDataError – Timestamps do not match the expected date, or unable to read the disdrometer file.

Examples

>>> from cloudnetpy.instruments import thies2nc
>>> site_meta = {'name': 'Lindenberg', 'altitude': 104, 'latitude': 52.2,
...              'longitude': 14.1}
>>> uuid = thies2nc('thies-lnm.log', 'thies-lnm.nc', site_meta)
instruments.ws2nc(weather_station_file: str | list[str], output_file: str, site_meta: dict, uuid: str | None = None, date: str | None = None) str[source]

Converts weather station data into Cloudnet Level 1b netCDF file.

Parameters:
  • weather_station_file – Filename of weather-station ASCII file.

  • output_file – Output filename.

  • site_meta – Dictionary containing information about the site. Required key is name.

  • uuid – Set specific UUID for the file.

  • date – Expected date of the measurements as YYYY-MM-DD.

Returns:

UUID of the generated file.

Raises:
  • WeatherStationDataError – Unable to read the file.

  • ValidTimeStampError – No valid timestamps found.
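
Examples

A minimal usage sketch, analogous to the other converters (the input file name is a placeholder):

>>> from cloudnetpy.instruments import ws2nc
>>> site_meta = {'name': 'Palaiseau'}
>>> uuid = ws2nc('weather_station.txt', 'ws.nc', site_meta)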

The categorize file

The categorize file concatenates all input data into a common time / height grid.

categorize.generate_categorize(input_files: dict, output_file: str, uuid: str | None = None, options: dict | None = None) str[source]

Generates a Cloudnet Level 1c categorize file.

This function rebins measurements into a common height/time grid and classifies them into different scatterer types, such as ice, liquid, insects, etc. The radar signal is corrected for atmospheric attenuation, and error estimates are computed. The results are saved in output_file, a compressed netCDF4 file.

Parameters:
  • input_files (dict) – Contains filenames for calibrated radar, lidar, and model files. Optionally, it can also include disdrometer, mwr (containing the LWP variable), and lv0_files (a list of RPG Level 0 files).

  • output_file (str) – The full path of the output file.

  • uuid (str) – Specific UUID to assign to the generated file.

  • options (dict) – Dictionary containing optional parameters.

Returns:

UUID of the generated file.

Return type:

str

Raises:

RuntimeError – Raised if the categorize file creation fails.

Notes

A separate MWR file is not required when using an RPG cloud radar that measures liquid water path (LWP). In this case, the radar file can also serve as the MWR file (e.g., {‘mwr’: ‘radar.nc’}). If no MWR file is provided, liquid attenuation correction cannot be performed.

If RPG L0 files are included as additional input, the Voodoo method is used to detect liquid droplets.

Examples

>>> from cloudnetpy.categorize import generate_categorize
>>> input_files = {
...     'radar': 'radar.nc',
...     'lidar': 'lidar.nc',
...     'model': 'model.nc',
...     'mwr': 'mwr.nc'
... }
>>> generate_categorize(input_files, 'output.nc')
>>> input_files['lv0_files'] = ['file1.LV0', 'file2.LV0']  # Add RPG LV0 files
>>> generate_categorize(input_files, 'output.nc')  # Use the Voodoo method
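>>> # As noted above, with an RPG cloud radar that measures LWP,
>>> # the radar file can also serve as the MWR input:
>>> input_files['mwr'] = 'radar.nc'
>>> generate_categorize(input_files, 'output.nc')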

Product generation

Starting from the categorize file, several geophysical products can be generated.

products.generate_classification(categorize_file: str, output_file: str, uuid: str | None = None) str[source]

Generates Cloudnet classification product.

This function reads the initial classification masks from a categorize file and creates a more comprehensive classification for different atmospheric targets. The results are written in a netCDF file.

Parameters:
  • categorize_file – Categorize file name.

  • output_file – Output file name.

  • uuid – Set specific UUID for the file.

Returns:

UUID of the generated file.

Return type:

str

Examples

>>> from cloudnetpy.products import generate_classification
>>> generate_classification('categorize.nc', 'classification.nc')
products.generate_iwc(categorize_file: str, output_file: str, uuid: str | None = None) str[source]

Generates Cloudnet ice water content product.

This function calculates ice water content using the so-called Z-T method. In this method, ice water content is calculated from attenuation-corrected radar reflectivity and model temperature. The results are written in a netCDF file.

Parameters:
  • categorize_file – Categorize file name.

  • output_file – Output file name.

  • uuid – Set specific UUID for the file.

Returns:

UUID of the generated file.

Examples

>>> from cloudnetpy.products import generate_iwc
>>> generate_iwc('categorize.nc', 'iwc.nc')

References

Hogan, R.J., M.P. Mittermaier, and A.J. Illingworth, 2006: The Retrieval of Ice Water Content from Radar Reflectivity Factor and Temperature and Its Use in Evaluating a Mesoscale Model. J. Appl. Meteor. Climatol., 45, 301–317, https://doi.org/10.1175/JAM2340.1

products.generate_lwc(categorize_file: str, output_file: str, uuid: str | None = None) str[source]

Generates Cloudnet liquid water content product.

This function calculates cloud liquid water content using the so-called adiabatic-scaled method. In this method, the liquid water path measured by the microwave radiometer is used to constrain the theoretical liquid water content of observed liquid clouds. The results are written in a netCDF file.

Parameters:
  • categorize_file – Categorize file name.

  • output_file – Output file name.

  • uuid – Set specific UUID for the file.

Returns:

UUID of the generated file.

Return type:

str

Examples

>>> from cloudnetpy.products import generate_lwc
>>> generate_lwc('categorize.nc', 'lwc.nc')

References

Illingworth, A.J., R.J. Hogan, E. O’Connor, D. Bouniol, M.E. Brooks, J. Delanoé, D.P. Donovan, J.D. Eastment, N. Gaussiat, J.W. Goddard, M. Haeffelin, H.K. Baltink, O.A. Krasnov, J. Pelon, J. Piriou, A. Protat, H.W. Russchenberg, A. Seifert, A.M. Tompkins, G. van Zadelhoff, F. Vinit, U. Willén, D.R. Wilson, and C.L. Wrench, 2007: Cloudnet. Bull. Amer. Meteor. Soc., 88, 883–898, https://doi.org/10.1175/BAMS-88-6-883

products.generate_drizzle(categorize_file: str, output_file: str, uuid: str | None = None) str[source]

Generates Cloudnet drizzle product.

This function calculates different drizzle properties from cloud radar and lidar measurements. The results are written in a netCDF file.

Parameters:
  • categorize_file – Categorize file name.

  • output_file – Output file name.

  • uuid – Set specific UUID for the file.

Returns:

UUID of the generated file.

Return type:

str

Examples

>>> from cloudnetpy.products import generate_drizzle
>>> generate_drizzle('categorize.nc', 'drizzle.nc')

References

O’Connor, E.J., R.J. Hogan, and A.J. Illingworth, 2005: Retrieving Stratocumulus Drizzle Parameters Using Doppler Radar and Lidar. J. Appl. Meteor., 44, 14–27, https://doi.org/10.1175/JAM-2181.1

products.generate_der(categorize_file: str, output_file: str, uuid: str | None = None, parameters: Parameters | None = None) str[source]

Generates Cloudnet effective radius of liquid water droplets product according to Frisch et al. 2002.

This function calculates the liquid droplet effective radius using the Frisch method. In this method, the effective radius is calculated from the radar reflectivity factor and the microwave radiometer liquid water path. The results are written in a netCDF file.

Parameters:
  • categorize_file – Categorize file name.

  • output_file – Output file name.

  • uuid – Set specific UUID for the file.

  • parameters – Tuple of specific fixed parameters (ddBZ, N, dN, sigma_x, dsigma_x, dQ) used in the Frisch approach.

Returns:

UUID of the generated file.

Examples

>>> from cloudnetpy.products import generate_der
>>> generate_der('categorize.nc', 'der.nc')
>>>
>>> from cloudnetpy.products.der import Parameters
>>> params = Parameters(2.0, 100.0e6, 200.0e6, 0.25, 0.1, 5.0e-3)
>>> generate_der('categorize.nc', 'der.nc', parameters=params)

References

Frisch, S., Shupe, M., Djalalova, I., Feingold, G., & Poellot, M. (2002). The Retrieval of Stratus Cloud Droplet Effective Radius with Cloud Radars, Journal of Atmospheric and Oceanic Technology, 19(6), 835-842. Retrieved May 10, 2022, from https://doi.org/10.1175/1520-0426(2002)019%3C0835:TROSCD%3E2.0.CO;2

products.generate_ier(categorize_file: str, output_file: str, uuid: str | None = None) str[source]

Generates Cloudnet ice effective radius product.

This function calculates ice particle effective radius using the Griesche et al. 2020 method, which uses Hogan et al. 2006 to estimate ice water content and alpha from Delanoë et al. 2007. In this method, the effective radius of ice particles is calculated from attenuation-corrected radar reflectivity and model temperature. The results are written in a netCDF file.

Parameters:
  • categorize_file – Categorize file name.

  • output_file – Output file name.

  • uuid – Set specific UUID for the file.

Returns:

UUID of the generated file.

Examples

>>> from cloudnetpy.products import generate_ier
>>> generate_ier('categorize.nc', 'ier.nc')

References

Hogan, R. J., Mittermaier, M. P., & Illingworth, A. J. (2006). The Retrieval of Ice Water Content from Radar Reflectivity Factor and Temperature and Its Use in Evaluating a Mesoscale Model, Journal of Applied Meteorology and Climatology, 45(2), 301-317. from https://journals.ametsoc.org/view/journals/apme/45/2/jam2340.1.xml

Delanoë, J., Protat, A., Bouniol, D., Heymsfield, A., Bansemer, A., & Brown, P. (2007). The Characterization of Ice Cloud Properties from Doppler Radar Measurements, Journal of Applied Meteorology and Climatology, 46(10), 1682-1698. from https://journals.ametsoc.org/view/journals/apme/46/10/jam2543.1.xml

Griesche, H. J., Seifert, P., Ansmann, A., Baars, H., Barrientos Velasco, C., Bühl, J., Engelmann, R., Radenz, M., Zhenping, Y., and Macke, A. (2020): Application of the shipborne remote sensing supersite OCEANET for profiling of Arctic aerosols and clouds during Polarstern cruise PS106, Atmos. Meas. Tech., 13, 5335–5358. from https://doi.org/10.5194/amt-13-5335-2020,

Visualizing results

CloudnetPy offers an easy-to-use plotting interface:

plotting.generate_figure(filename: PathLike | str, variables: list[str], *, show: bool = True, output_filename: PathLike | str | None = None, options: PlotParameters | None = None) Dimensions[source]

Generate a figure based on the given filename and variables.

Parameters:
  • filename – The path to the input file.

  • variables – A list of variable names to plot.

  • show – Whether to display the figure. Defaults to True.

  • output_filename – The path to save the figure. Defaults to None.

  • options – Additional plot parameters. Defaults to None.

Returns:

Dimensions of a generated figure in pixels.

Return type:

Dimensions
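
Examples

A usage sketch (the file name, variable name and output path are placeholders; any variables present in the input file can be plotted):

>>> from cloudnetpy.plotting import generate_figure
>>> generate_figure('classification.nc', ['target_classification'],
...                 show=False, output_filename='classification.png')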

class plotting.PlotParameters(dpi: float = 120, max_y: int = 12, title: bool = True, subtitle: bool = True, mark_data_gaps: bool = True, grid: bool = False, edge_tick_labels: bool = False, show_sources: bool = False, footer_text: str | None = None, plot_meta: PlotMeta | None = None, raise_on_empty: bool = False)[source]

Class representing the parameters for plotting.

dpi

The resolution of the plot in dots per inch.

Type:

float

max_y

Maximum y-axis value (km) in 2D time / height plots.

Type:

int

title

Whether to display the title of the plot.

Type:

bool

subtitle

Whether to display the subtitle of the plot.

Type:

bool

mark_data_gaps

Whether to mark data gaps in the plot.

Type:

bool

grid

Whether to display grid lines in the plot.

Type:

bool

edge_tick_labels

Whether to display tick labels on the edges of the plot.

Type:

bool

show_sources

Whether to display the sources of plotted data (i.e. instruments and model).

Type:

bool

footer_text

The text to display in the footer of the plot.

Type:

str | None

plot_meta

Additional metadata for the plot.

Type:

cloudnetpy.plotting.plot_meta.PlotMeta | None

raise_on_empty

Whether to raise an error if no data is found for a plotted variable.

Type:

bool
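
A sketch of passing plotting options to generate_figure (the field values are arbitrary examples):

>>> from cloudnetpy.plotting import PlotParameters, generate_figure
>>> options = PlotParameters(dpi=200, max_y=10, grid=True,
...                          footer_text='Processed with CloudnetPy')
>>> generate_figure('categorize.nc', ['Z', 'v'], options=options, show=False)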

class plotting.PlotMeta(cmap: str = 'viridis', clabel: str | Sequence[tuple[str, str]] | None = None, plot_range: tuple[float, float] | None = None, log_scale: bool = False, moving_average: bool = True, contour: bool = False, zero_line: bool = False, time_smoothing_duration: int = 0)[source]

A class representing the metadata for plotting.

cmap

The colormap to be used for the plot.

Type:

str

clabel

The label for the colorbar. It can be a single string, a sequence of tuples containing the label and units for each colorbar, or None if no colorbar is needed.

Type:

str | collections.abc.Sequence[tuple[str, str]] | None

plot_range

The range of values to be plotted. It can be a tuple containing the minimum and maximum values, or None if the range should be automatically determined.

Type:

tuple[float, float] | None

log_scale

Whether to plot data values in a logarithmic scale.

Type:

bool

moving_average

Whether to plot a moving average in a 1d plot.

Type:

bool

contour

Whether to plot contours on top of a filled colormap.

Type:

bool

zero_line

Whether to plot a zero line in a 1d plot.

Type:

bool

time_smoothing_duration

The duration of the time smoothing window (in 2d plots) in minutes.

Type:

int
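
A sketch of overriding the plot metadata through PlotParameters (the values are arbitrary and assume the metadata applies to the plotted variable):

>>> from cloudnetpy.plotting import PlotMeta, PlotParameters
>>> meta = PlotMeta(cmap='plasma', plot_range=(1e-7, 1e-4), log_scale=True)
>>> options = PlotParameters(plot_meta=meta)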

class plotting.Dimensions(fig, axes, pad_inches: float | None = None)[source]

Dimensions of a generated figure in pixels. Elements such as the figure title, labels, colorbar and legend are excluded from the margins.

width

Figure width in pixels.

Type:

int

height

Figure height in pixels.

Type:

int

margin_top

Space between top edge of image and plotted data in pixels.

Type:

int

margin_right

Space between right edge of image and plotted data in pixels.

Type:

int

margin_bottom

Space between bottom edge of image and plotted data in pixels.

Type:

int

margin_left

Space between left edge of image and plotted data in pixels.

Type:

int

Categorize modules

Categorize is CloudnetPy’s subpackage. It contains several modules that are used when creating the Cloudnet categorize file.

radar

Radar module, containing the Radar class.

class categorize.radar.Radar(full_path: str)[source]

Radar class, child of DataSource.

Parameters:

full_path – Cloudnet Level 1 radar netCDF file.

radar_frequency

Radar frequency (GHz).

Type:

float

folding_velocity

Radar’s folding velocity (m/s).

Type:

float

location

Location of the radar, copied from the global attribute location of the input file.

Type:

str

sequence_indices

Indices denoting the different altitude regimes of the radar.

Type:

list

source_type

Type of the radar, copied from the global attribute source of the radar_file. Can be free form string but must include either ‘rpg’ or ‘mira’ denoting one of the two supported radars.

Type:

str

rebin_to_grid(time_new: ndarray) list[source]

Rebins radar data in time using mean.

Parameters:

time_new – Target time array as fraction hour. Updates time attribute.

remove_incomplete_pixels() None[source]

Mask radar pixels where one or more required quantities are missing.

All valid radar pixels must contain proper values for Z and v, and also for width if it exists. Otherwise there is some kind of problem with the data and the pixel should not be used in any further analysis.

filter_speckle_noise() None[source]

Removes speckle noise from radar data.

Any isolated radar pixel, i.e. “hot pixel”, is assumed to exist due to speckle noise. This is a crude approach and a more sophisticated method could be implemented here later.

filter_1st_gate_artifact() None[source]

Removes 1st range gate velocity artifact.

filter_stripes(variable: str) None[source]

Filters vertical and horizontal stripe-shaped artifacts from radar data.

correct_atten(attenuations: RadarAttenuation) None[source]

Corrects radar echo for liquid and gas attenuation.

Parameters:

attenuations – Radar attenuation object.

References

The method is based on Hogan R. and O’Connor E., 2004, https://bit.ly/2Yjz9DZ and the original Cloudnet Matlab implementation.

calc_errors(attenuations: RadarAttenuation, is_clutter: ndarray) None[source]

Calculates uncertainties of radar echo.

Calculates and adds Z_error, Z_sensitivity and Z_bias CloudnetArray instances to data attribute.

Parameters:
  • attenuations – 2-D attenuations due to atmospheric gases.

  • is_clutter – 2-D boolean array denoting pixels contaminated by clutter.

References

The method is based on Hogan R. and O’Connor E., 2004, https://bit.ly/2Yjz9DZ and the original Cloudnet Matlab implementation.

add_meta() None[source]

Copies misc. metadata from the input file.

lidar

Lidar module, containing the Lidar class.

class categorize.lidar.Lidar(full_path: str)[source]

Lidar class, child of DataSource.

Parameters:

full_path – Cloudnet Level 1 lidar netCDF file.

interpolate_to_grid(time_new: ndarray, height_new: ndarray) list[int][source]

Interpolate beta using nearest neighbor.

mwr

Mwr module, containing the Mwr class.

class categorize.mwr.Mwr(full_path: str)[source]

Microwave radiometer class, child of DataSource.

Parameters:

full_path – Cloudnet Level 1b mwr file.

rebin_to_grid(time_grid: ndarray) None[source]

Approximates lwp and its error in a grid using mean.

Parameters:

time_grid – 1D target time grid.

model

Model module, containing the Model class.

class categorize.model.Model(model_file: str, alt_site: float, options: dict | None = None)[source]

Model class, child of DataSource.

Parameters:
  • model_file – File name of the NWP model file.

  • alt_site – Altitude of the site above mean sea level (m).

  • options – Dictionary containing optional parameters.

source_type

Model type, e.g. ‘gdas1’ or ‘ecmwf’.

Type:

str

model_heights

2-D array of model heights (one for each time step).

Type:

ndarray

mean_height

Mean of model_heights.

Type:

ndarray

data_sparse

Model variables in common height grid but without interpolation in time.

Type:

dict

data_dense

Model variables interpolated to Cloudnet’s dense time / height grid.

Type:

dict

interpolate_to_common_height() None[source]

Interpolates model variables to common height grid.

interpolate_to_grid(time_grid: ndarray, height_grid: ndarray) list[source]

Interpolates model variables to Cloudnet’s dense time / height grid.

Parameters:
  • time_grid – The target time array (fraction hour).

  • height_grid – The target height array (m).

Returns:

Indices of fully masked profiles.

calc_wet_bulb() None[source]

Calculates wet-bulb temperature in dense grid.

screen_sparse_fields() None[source]

Removes model fields that we don’t want to write in the output.

classify

categorize.classify.classify_measurements(data: Observations) ClassificationResult[source]

Classifies radar/lidar observations.

This function classifies atmospheric scatterers from the input data. The input data needs to be averaged or interpolated to the common time / height grid before calling this function.

Parameters:

data – A Observations instance.

Returns:

A ClassificationResult instance.

References

The Cloudnet classification scheme is based on methodology proposed by Hogan R. and O’Connor E., 2004, https://bit.ly/2Yjz9DZ and its proprietary Matlab implementation.

Notes

Some individual classification methods are changed in this Python implementation compared to the original Cloudnet methodology. Especially methods classifying insects, melting layer and liquid droplets.

melting

Functions to find melting layer from data.

categorize.melting.find_melting_layer(obs: ClassData, *, smooth: bool = True) ndarray[source]

Finds melting layer from model temperature, ldr, and velocity.

Melting layer is detected using linear depolarization ratio, ldr, Doppler velocity, v, and wet-bulb temperature, Tw.

The algorithm is based on ldr having a clear Gaussian peak around the melting layer. This signature is caused by the growth of ice crystals into snowflakes that are much larger. In addition, when snow and ice melt, emerging heavy water droplets start to drop rapidly towards ground. Thus, there is also a similar positive peak in the first difference of v.

The peak in ldr is the primary parameter we analyze. If ldr has a proper peak and v < -1 m/s at the base, a melting layer has been found. If ldr is missing, only the behaviour of v, which is always present, is analyzed to detect the melting layer.

Model temperature is used to limit the melting layer search to a certain temperature range around 0 °C. For ECMWF the range is -4..+3 °C, and for the other models -8..+6 °C.

Notes

This melting layer detection method is novel and needs to be validated. Also note that there might be some detection problems with strong updrafts of air. In these cases the absolute values for speed do not make sense (rain drops can even move upwards instead of down).

Parameters:
  • obs – The ClassData instance.

  • smooth – If True, apply a small Gaussian smoother to the melting layer. Default is True.

Returns:

2-D boolean array denoting the melting layer.

freezing

Module to find freezing region from data.

categorize.freezing.find_freezing_region(obs: ClassData, melting_layer: ndarray) ndarray[source]

Finds freezing region using the model temperature and melting layer.

In every profile that contains a melting layer, the sub-zero region starts from the mean melting layer height. If there are (long) time windows where no melting layer is present, model temperature is used in the middle of the time window. Finally, the sub-zero altitudes are linearly interpolated for all profiles.

Parameters:
  • obs – The ClassData instance.

  • melting_layer – 2-D boolean array denoting melting layer.

Returns:

2-D boolean array denoting the sub-zero region.

Notes

It is not clear how model temperature and melting layer should be ideally combined to determine the sub-zero region. This current method differs slightly from the original Matlab code and should be validated more carefully later.

categorize.freezing.find_t0_alt(temperature: ndarray, height: ndarray) ndarray[source]

Interpolates altitudes where temperature goes below freezing.

Parameters:
  • temperature – 2-D temperature (K).

  • height – 1-D altitude grid (m).

Returns:

1-D array denoting altitudes where the temperature drops below 0 deg C.

falling

Module to find falling hydrometeors from data.

categorize.falling.find_falling_hydrometeors(obs: ClassData, is_liquid: ndarray, is_insects: ndarray) ndarray[source]

Finds falling hydrometeors.

Falling hydrometeors are radar signals that are a) not insects b) not clutter. Furthermore, falling hydrometeors are strong lidar pixels excluding liquid layers (thus these pixels are ice or rain). They are also weak radar signals in very cold temperatures.

Parameters:
  • obs – The ClassData instance.

  • is_liquid – 2-D boolean array of liquid droplets.

  • is_insects – 2-D boolean array of insects.

Returns:

2-D boolean array containing falling hydrometeors.

References

Hogan R. and O’Connor E., 2004, https://bit.ly/2Yjz9DZ.

insects

Module to find insects from data.

categorize.insects.find_insects(obs: ClassData, melting_layer: ndarray, liquid_layers: ndarray, prob_lim: float = 0.8) tuple[ndarray, ndarray][source]

Returns insect probability and boolean array of insect presence.

Insects are classified by estimating heuristic probability of insects from various individual radar parameters and combining these probabilities. Insects typically yield small echo and spectral width but high linear depolarization ratio (ldr), and they are present in warm temperatures.

The combination of echo, ldr and temperature is generally the best proxy for insects. If ldr is not available, we use other radar parameters.

Finally, insects are screened from liquid layers, from the melting layer, and from altitudes above the melting layer.

Parameters:
  • obs – The ClassData instance.

  • melting_layer – 2D array denoting melting layer.

  • liquid_layers – 2D array denoting liquid layers.

  • prob_lim – Probability higher than this will lead to positive detection. Default is 0.8.

Returns:

2-element tuple containing

  • 2-D boolean flag of insects presence.

  • 2-D probability of pixel containing insects.

Return type:

tuple

Notes

This insect detection method is novel and needs to be validated.

atmos

droplet

This module has functions for liquid layer detection.

categorize.droplet.correct_liquid_top(obs: ClassData, is_liquid: ndarray, is_freezing: ndarray, limit: float = 200) ndarray[source]

Corrects lidar detected liquid cloud top using radar data.

Parameters:
  • obs – The ClassData instance.

  • is_liquid – 2-D boolean array denoting liquid clouds from lidar data.

  • is_freezing – 2-D boolean array of sub-zero temperature, derived from the model temperature and melting layer based on radar data.

  • limit – The maximum correction distance (m) above liquid cloud top.

Returns:

Corrected liquid cloud array.

References

Hogan R. and O’Connor E., 2004, https://bit.ly/2Yjz9DZ.

categorize.droplet.find_liquid(obs: ClassData, peak_amp: float = 1e-06, max_width: float = 300, min_points: int = 3, min_top_der: float = 1e-07, min_lwp: float = 0, min_alt: float = 100) ndarray[source]

Estimate liquid layers from SNR-screened attenuated backscatter.

Parameters:
  • obs – The ClassData instance.

  • peak_amp – Minimum value of peak. Default is 1e-6.

  • max_width – Maximum width of peak. Default is 300 (m).

  • min_points – Minimum number of valid points in peak. Default is 3.

  • min_top_der – Minimum derivative above peak, defined as (beta_peak-beta_top) / (alt_top-alt_peak). Default is 1e-7.

  • min_lwp – Minimum value from linearly interpolated lwp (kg m-2) measured by the mwr. Default is 0.

  • min_alt – Minimum altitude of the peak from the ground. Default is 100 (m).

Returns:

2-D boolean array denoting liquid layers.

References

The method is based on Tuononen, M. et.al, 2019, https://acp.copernicus.org/articles/19/1985/2019/.

categorize.droplet.ind_base(dprof: ndarray, ind_peak: int, dist: int, lim: float) int[source]

Finds base index of a peak in profile.

Return the lowermost index of profile where 1st order differences below the peak exceed a threshold value.

Parameters:
  • dprof – 1-D array of 1st discrete difference. Masked values should be 0, e.g. dprof = np.diff(masked_prof).filled(0)

  • ind_peak – Index of (possibly local) peak in the original profile. Note that the peak must be found with some other method before calling this function.

  • dist – Number of elements investigated below p. If (p - dist) < 0, search starts from index 0.

  • lim – Parameter for base index. Values greater than 1.0 are valid. Values close to 1 most likely return the point right below the maximum 1st order difference (within dist points below p). Values larger than 1 more likely accept some other point, lower in the profile.

Returns:

Base index of the peak.

Raises:

IndexError – Can’t find proper base index (probably too many masked values in the profile).

Examples

Consider a profile

>>> x = np.array([0, 0.5, 1, -99, 4, 8, 5])

that contains one bad, masked value

>>> mx = ma.masked_array(x, mask=[0, 0, 0, 1, 0, 0, 0])
    [0, 0.5, 1.0, --, 4.0, 8.0, 5.0]

The 1st order difference is now

>>> dx = np.diff(mx).filled(0)
    [0.5, 0.5, 0, 0, 4, -3]

From the original profile we see that the peak index is 5. Let’s assume our base can’t be more than 4 elements below peak and the threshold value is 2. Thus we call

>>> ind_base(dx, 5, 4, 2)
    4

Here, x[4] is the lowermost point that satisfies the condition. Changing the threshold value would alter the result:

>>> ind_base(dx, 5, 4, 10)
    1

See also

droplet.ind_top()

categorize.droplet.ind_top(dprof: ndarray, ind_peak: int, nprof: int, dist: int, lim: float) int[source]

Finds top index of a peak in profile.

Return the uppermost index of profile where 1st order differences above the peak exceed a threshold value.

Parameters:
  • dprof – 1-D array of 1st discrete difference. Masked values should be 0, e.g. dprof = np.diff(masked_prof).filled(0)

  • nprof – Length of the profile. Top index can’t be higher than this.

  • ind_peak – Index of (possibly local) peak in the profile. Note that the peak must be found with some other method before calling this function.

  • dist – Number of elements investigated above p. If (p + dist) > nprof, the search ends at nprof.

  • lim – Parameter for top index. Values greater than 1.0 are valid. Values close to 1 most likely return the point right above the maximum 1st order difference (within dist points above p). Values larger than 1 more likely accept some other point, higher in the profile.

Returns:

Top index of the peak.

Raises:

IndexError – Can not find proper top index (probably too many masked values in the profile).

See also

droplet.ind_base()

categorize.droplet.interpolate_lwp(obs: ClassData) ndarray[source]

Linear interpolation of liquid water path to fill masked values.

Parameters:

obs – The ClassData instance.

Returns:

Liquid water path where the masked values are filled by interpolation.

Products modules

Products is CloudnetPy’s subpackage. It contains several modules that correspond to different Cloudnet products.

classification

Module for creating classification file.

products.classification.generate_classification(categorize_file: str, output_file: str, uuid: str | None = None) str[source]

Generates Cloudnet classification product.

This function reads the initial classification masks from a categorize file and creates a more comprehensive classification for different atmospheric targets. The results are written in a netCDF file.

Parameters:
  • categorize_file – Categorize file name.

  • output_file – Output file name.

  • uuid – Set specific UUID for the file.

Returns:

UUID of the generated file.

Return type:

str

Examples

>>> from cloudnetpy.products import generate_classification
>>> generate_classification('categorize.nc', 'classification.nc')

iwc

Module for creating Cloudnet ice water content file using Z-T method.

products.iwc.generate_iwc(categorize_file: str, output_file: str, uuid: str | None = None) str[source]

Generates Cloudnet ice water content product.

This function calculates ice water content using the so-called Z-T method. In this method, ice water content is calculated from attenuation-corrected radar reflectivity and model temperature. The results are written in a netCDF file.

Parameters:
  • categorize_file – Categorize file name.

  • output_file – Output file name.

  • uuid – Set specific UUID for the file.

Returns:

UUID of the generated file.

Examples

>>> from cloudnetpy.products import generate_iwc
>>> generate_iwc('categorize.nc', 'iwc.nc')

References

Hogan, R.J., M.P. Mittermaier, and A.J. Illingworth, 2006: The Retrieval of Ice Water Content from Radar Reflectivity Factor and Temperature and Its Use in Evaluating a Mesoscale Model. J. Appl. Meteor. Climatol., 45, 301–317, https://doi.org/10.1175/JAM2340.1

class products.iwc.IwcSource(categorize_file: str, product: str)[source]

Data container for ice water content calculations.

append_sensitivity() None[source]

Calculates iwc sensitivity.

append_bias() None[source]

Calculates iwc bias.

append_error(ice_classification: IceClassification) tuple[source]

Estimates error of ice water content.

lwc

Module for creating Cloudnet liquid water content file using scaled-adiabatic method.

products.lwc.generate_lwc(categorize_file: str, output_file: str, uuid: str | None = None) str[source]

Generates Cloudnet liquid water content product.

This function calculates cloud liquid water content using the so-called adiabatic-scaled method. In this method, the liquid water path measured by the microwave radiometer is used to constrain the theoretical liquid water content of observed liquid clouds. The results are written in a netCDF file.

Parameters:
  • categorize_file – Categorize file name.

  • output_file – Output file name.

  • uuid – Set specific UUID for the file.

Returns:

UUID of the generated file.

Return type:

str

Examples

>>> from cloudnetpy.products import generate_lwc
>>> generate_lwc('categorize.nc', 'lwc.nc')

References

Illingworth, A.J., R.J. Hogan, E. O’Connor, D. Bouniol, M.E. Brooks, J. Delanoé, D.P. Donovan, J.D. Eastment, N. Gaussiat, J.W. Goddard, M. Haeffelin, H.K. Baltink, O.A. Krasnov, J. Pelon, J. Piriou, A. Protat, H.W. Russchenberg, A. Seifert, A.M. Tompkins, G. van Zadelhoff, F. Vinit, U. Willén, D.R. Wilson, and C.L. Wrench, 2007: Cloudnet. Bull. Amer. Meteor. Soc., 88, 883–898, https://doi.org/10.1175/BAMS-88-6-883

class products.lwc.LwcSource(categorize_file: str)[source]

Data container for liquid water content calculations. Child of DataSource.

This class reads input data from a categorize file and provides data structures and methods for holding the results.

Parameters:

categorize_file – Categorize file name.

lwp

1D liquid water path.

Type:

ndarray

lwp_error

1D error of liquid water path.

Type:

ndarray

is_rain

1D array denoting presence of rain.

Type:

ndarray

path_lengths

1D array of path lengths.

Type:

ndarray

atmosphere

Dictionary containing interpolated fields temperature and pressure.

Type:

dict

categorize_bits

The CategorizeBits instance.

Type:

CategorizeBits

class products.lwc.Lwc(lwc_source: LwcSource)[source]

Class handling the actual LWC calculations.

Parameters:

lwc_source – The LwcSource instance.

lwc_source

The LwcSource instance.

Type:

LwcSource

is_liquid

2D array denoting liquid.

Type:

ndarray

lwc_adiabatic

2D array storing adiabatic lwc.

Type:

ndarray

lwc

2D array of liquid water content (scaled with lwp).

Type:

ndarray

class products.lwc.CloudAdjustor(lwc_source: LwcSource, lwc: Lwc)[source]

Adjusts clouds (where possible) so that theoretical and measured LWP agree.

Parameters:
  • lwc_source – The LwcSource instance.

  • lwc – The Lwc instance.

lwc_source

The LwcSource instance.

Type:

LwcSource

lwc

Liquid water content data.

Type:

ndarray

is_liquid

2D array denoting liquid.

Type:

ndarray

lwc_adiabatic

2D array storing adiabatic lwc.

Type:

ndarray

echo

Dictionary storing radar and lidar echoes.

Type:

dict

status

2D array storing the lwc status classification.

Type:

ndarray

class products.lwc.LwcError(lwc_source: LwcSource, lwc: Lwc)[source]

Calculates liquid water content error.

Parameters:
  • lwc_source – The LwcSource instance.

  • lwc – The Lwc instance.

lwc_source

The LwcSource instance.

Type:

LwcSource

lwc

Liquid water content data.

Type:

ndarray

error

2D array storing lwc_error.

Type:

ndarray

drizzle

Module for creating Cloudnet drizzle product.

products.drizzle.generate_drizzle(categorize_file: str, output_file: str, uuid: str | None = None) str[source]

Generates Cloudnet drizzle product.

This function calculates different drizzle properties from cloud radar and lidar measurements. The results are written in a netCDF file.

Parameters:
  • categorize_file – Categorize file name.

  • output_file – Output file name.

  • uuid – Set specific UUID for the file.

Returns:

UUID of the generated file.

Return type:

str

Examples

>>> from cloudnetpy.products import generate_drizzle
>>> generate_drizzle('categorize.nc', 'drizzle.nc')

References

O’Connor, E.J., R.J. Hogan, and A.J. Illingworth, 2005: Retrieving Stratocumulus Drizzle Parameters Using Doppler Radar and Lidar. J. Appl. Meteor., 44, 14–27, https://doi.org/10.1175/JAM-2181.1

class products.drizzle.DrizzleProducts(drizzle_source: DrizzleSource, drizzle_solver: DrizzleSolver)[source]

Calculates additional quantities from the drizzle properties.

Parameters:
  • drizzle_source – The DrizzleSource instance.

  • drizzle_solver – The DrizzleSolver instance.

derived_products

Dictionary containing derived drizzle products: ‘drizzle_N’, ‘drizzle_lwc’, ‘drizzle_lwf’, ‘v_drizzle’, ‘v_air’.

Type:

dict

class products.drizzle.RetrievalStatus(drizzle_class: DrizzleClassification)[source]

Estimates the status of drizzle retrievals.

Parameters:

drizzle_class – The DrizzleClassification instance.

drizzle_class

The DrizzleClassification instance.

retrieval_status

2D array containing drizzle retrieval status information.

Type:

ndarray

der

Module for creating Cloudnet droplet effective radius using the Frisch et al. 2002 method.

class products.der.Parameters(ddBZ, N, dN, sigma_x, dsigma_x, dQ)[source]
ddBZ: float

Alias for field number 0

N: float

Alias for field number 1

dN: float

Alias for field number 2

sigma_x: float

Alias for field number 3

dsigma_x: float

Alias for field number 4

dQ: float

Alias for field number 5

products.der.generate_der(categorize_file: str, output_file: str, uuid: str | None = None, parameters: Parameters | None = None) str[source]

Generates Cloudnet effective radius of liquid water droplets product according to Frisch et al. 2002.

This function calculates the liquid droplet effective radius using the Frisch method. In this method, the effective radius is calculated from the radar reflectivity factor and the microwave radiometer liquid water path. The results are written in a netCDF file.

Parameters:
  • categorize_file – Categorize file name.

  • output_file – Output file name.

  • uuid – Set specific UUID for the file.

  • parameters – Tuple of specific fixed parameters (ddBZ, N, dN, sigma_x, dsigma_x, dQ) used in the Frisch approach.

Returns:

UUID of the generated file.

Examples

>>> from cloudnetpy.products import generate_der
>>> generate_der('categorize.nc', 'der.nc')
>>>
>>> from cloudnetpy.products.der import Parameters
>>> params = Parameters(2.0, 100.0e6, 200.0e6, 0.25, 0.1, 5.0e-3)
>>> generate_der('categorize.nc', 'der.nc', parameters=params)

References

Frisch, S., Shupe, M., Djalalova, I., Feingold, G., & Poellot, M. (2002). The Retrieval of Stratus Cloud Droplet Effective Radius with Cloud Radars, Journal of Atmospheric and Oceanic Technology, 19(6), 835-842. Retrieved May 10, 2022, from https://doi.org/10.1175/1520-0426(2002)019%3C0835:TROSCD%3E2.0.CO;2

class products.der.DropletClassification(categorize_file: str)[source]

Class storing the information about different ice types. Child of ProductClassification().

class products.der.DerSource(categorize_file: str, parameters: Parameters | None = None)[source]

Data container for effective radius calculations.

append_der() None[source]

Estimate liquid droplet effective radius using Frisch et al. 2002.

append_retrieval_status(droplet_classification: DropletClassification) None[source]

Returns information about the status of der retrieval.

ier

Module for creating Cloudnet ice effective radius file using Z-T method.

products.ier.generate_ier(categorize_file: str, output_file: str, uuid: str | None = None) str[source]

Generates Cloudnet ice effective radius product.

This function calculates ice particle effective radius using the Griesche et al. 2020 method, which uses Hogan et al. 2006 to estimate ice water content and alpha from Delanoë et al. 2007. In this method, the effective radius of ice particles is calculated from attenuation-corrected radar reflectivity and model temperature. The results are written in a netCDF file.

Parameters:
  • categorize_file – Categorize file name.

  • output_file – Output file name.

  • uuid – Set specific UUID for the file.

Returns:

UUID of the generated file.

Examples

>>> from cloudnetpy.products import generate_ier
>>> generate_ier('categorize.nc', 'ier.nc')

References

Hogan, R. J., Mittermaier, M. P., & Illingworth, A. J. (2006). The Retrieval of Ice Water Content from Radar Reflectivity Factor and Temperature and Its Use in Evaluating a Mesoscale Model, Journal of Applied Meteorology and Climatology, 45(2), 301-317. from https://journals.ametsoc.org/view/journals/apme/45/2/jam2340.1.xml

Delanoë, J., Protat, A., Bouniol, D., Heymsfield, A., Bansemer, A., & Brown, P. (2007). The Characterization of Ice Cloud Properties from Doppler Radar Measurements, Journal of Applied Meteorology and Climatology, 46(10), 1682-1698. from https://journals.ametsoc.org/view/journals/apme/46/10/jam2543.1.xml

Griesche, H. J., Seifert, P., Ansmann, A., Baars, H., Barrientos Velasco, C., Bühl, J., Engelmann, R., Radenz, M., Zhenping, Y., and Macke, A. (2020): Application of the shipborne remote sensing supersite OCEANET for profiling of Arctic aerosols and clouds during Polarstern cruise PS106, Atmos. Meas. Tech., 13, 5335–5358. from https://doi.org/10.5194/amt-13-5335-2020,

class products.ier.IerSource(categorize_file: str, product: str)[source]

Data container for ice effective radius calculations.

convert_units() None[source]

Convert um to m.

product_tools

General helper classes and functions for all products.

class products.product_tools.IceCoefficients(K2liquid0: float, ZT: float, T: float, Z: float, c: float)[source]

Coefficients for ice effective radius retrieval.

K2liquid0: float

Alias for field number 0

ZT: float

Alias for field number 1

T: float

Alias for field number 2

Z: float

Alias for field number 3

c: float

Alias for field number 4

class products.product_tools.CategoryBits(droplet: numpy.ndarray[Any, numpy.dtype[numpy.bool]], falling: numpy.ndarray[Any, numpy.dtype[numpy.bool]], freezing: numpy.ndarray[Any, numpy.dtype[numpy.bool]], melting: numpy.ndarray[Any, numpy.dtype[numpy.bool]], aerosol: numpy.ndarray[Any, numpy.dtype[numpy.bool]], insect: numpy.ndarray[Any, numpy.dtype[numpy.bool]])[source]
class products.product_tools.QualityBits(radar: numpy.ndarray[Any, numpy.dtype[numpy.bool]], lidar: numpy.ndarray[Any, numpy.dtype[numpy.bool]], clutter: numpy.ndarray[Any, numpy.dtype[numpy.bool]], molecular: numpy.ndarray[Any, numpy.dtype[numpy.bool]], attenuated_liquid: numpy.ndarray[Any, numpy.dtype[numpy.bool]], corrected_liquid: numpy.ndarray[Any, numpy.dtype[numpy.bool]], attenuated_rain: numpy.ndarray[Any, numpy.dtype[numpy.bool]], corrected_rain: numpy.ndarray[Any, numpy.dtype[numpy.bool]], attenuated_melting: numpy.ndarray[Any, numpy.dtype[numpy.bool]], corrected_melting: numpy.ndarray[Any, numpy.dtype[numpy.bool]])[source]
class products.product_tools.ProductClassification(categorize_file: str)[source]

Base class for creating different classifications in the child classes of various Cloudnet products. Child of CategorizeBits class.

Parameters:

categorize_file (str) – Categorize file name.

is_rain

1D array denoting rainy profiles.

Type:

ndarray

class products.product_tools.IceClassification(categorize_file: str)[source]

Class storing the information about different ice types. Child of ProductClassification().

class products.product_tools.IceSource(categorize_file: str, product: str)[source]

Base class for different ice products.

append_icy_data(ice_classification: IceClassification) None[source]

Adds the main variable (including ice above rain).

append_status(ice_classification: IceClassification) None[source]

Adds the status of retrieval.

products.product_tools.interpolate_model(cat_file: str, names: str | list) dict[str, ndarray][source]

Interpolates 2D model field into dense Cloudnet grid.

Parameters:
  • cat_file – Categorize file name.

  • names – Model variable to be interpolated, e.g. ‘temperature’ or [‘temperature’, ‘pressure’].

Returns:

Interpolated variables.

Return type:

dict
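
Examples

A small usage sketch (the categorize file name is a placeholder):

>>> from cloudnetpy.products import product_tools
>>> fields = product_tools.interpolate_model('categorize.nc',
...                                          ['temperature', 'pressure'])
>>> temperature = fields['temperature']  # 2-D array on the dense Cloudnet grid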

Misc

Documentation for various modules with low-level functionality.

concat_lib

Module for concatenating netCDF files.

concat_lib.truncate_netcdf_file(filename: str, output_file: str, n_profiles: int, dim_name: str = 'time') None[source]

Truncates a netCDF file along the dim_name dimension, keeping only n_profiles profiles. Useful for creating small files for tests.
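
For example, to keep only 100 profiles of a radar file for testing (the file names are placeholders; the module is assumed to be importable as cloudnetpy.concat_lib):

>>> from cloudnetpy import concat_lib
>>> concat_lib.truncate_netcdf_file('radar.nc', 'radar_small.nc', 100)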

concat_lib.update_nc(old_file: str, new_file: str) int[source]

Appends data to existing netCDF file.

Parameters:
  • old_file – Filename of an existing netCDF file.

  • new_file – Filename of a new file whose data will be appended to the end.

Returns:

1 = success, 0 = failed to add new data.

Notes

Requires ‘time’ variable with unlimited dimension.

concat_lib.concatenate_files(filenames: list, output_file: str, concat_dimension: str = 'time', variables: list | None = None, new_attributes: dict | None = None, ignore: list | None = None, allow_difference: list | None = None) None[source]

Concatenate netCDF files in one dimension.

Parameters:
  • filenames – List of files to be concatenated.

  • output_file – Output file name.

  • concat_dimension – Dimension name for concatenation. Default is ‘time’.

  • variables – List of variables with the ‘concat_dimension’ to be concatenated. Default is None, in which case all variables with ‘concat_dimension’ are saved.

  • new_attributes – Optional new global attributes as {‘attribute_name’: value}.

  • ignore – List of variables to be ignored.

  • allow_difference – Names of scalar variables that can differ from one file to another (value from the first file is saved).

Notes

Arrays without ‘concat_dimension’, scalars, and global attributes will be taken from the first file. Groups, possibly present in a NETCDF4 formatted file, are ignored.
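
A sketch of typical use (the file names here are hypothetical):

>>> from cloudnetpy import concat_lib
>>> chunks = ['chunk_00.nc', 'chunk_01.nc', 'chunk_02.nc']
>>> concat_lib.concatenate_files(chunks, 'day.nc', concat_dimension='time')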

concat_lib.concatenate_text_files(filenames: list, output_filename: str | PathLike) None[source]

Concatenates text files.

concat_lib.bundle_netcdf_files(files: list, date: str, output_file: str, concat_dimensions: tuple[str, ...] = ('time', 'profile'), variables: list | None = None) list[source]

Concatenates several netCDF files into a daily file, with some extra data manipulation.

utils

This module contains general helper functions.

utils.seconds2hours(time_in_seconds: ndarray) ndarray[source]

Converts seconds since some epoch to fraction hour.

Parameters:

time_in_seconds – 1-D array of seconds since some epoch that starts at midnight.

Returns:

Time as fraction hour.

Notes

Excludes leap seconds.
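
For example (a sketch; the returned values are fraction hours of the day):

>>> import numpy as np
>>> from cloudnetpy import utils
>>> utils.seconds2hours(np.array([0, 1800, 45000]))
    [0.0, 0.5, 12.5]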

utils.seconds2time(time_in_seconds: float) list[source]

Converts seconds since some epoch to time of day.

Parameters:

time_in_seconds – seconds since some epoch.

Returns:

[hours, minutes, seconds] formatted as ‘05’ etc.

Return type:

list

utils.seconds2date(time_in_seconds: float, epoch: tuple[int, int, int] = (2001, 1, 1)) list[source]

Converts seconds since some epoch to datetime (UTC).

Parameters:
  • time_in_seconds – Seconds since some epoch.

  • epoch – Epoch, default is (2001, 1, 1) (UTC).

Returns:

[year, month, day, hours, minutes, seconds] formatted as ‘05’ etc (UTC).
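
For example, with the default epoch the components are returned as zero-padded strings (a sketch):

>>> from cloudnetpy import utils
>>> utils.seconds2date(86400, epoch=(2001, 1, 1))
    ['2001', '01', '02', '00', '00', '00']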

utils.datetime2decimal_hours(data: ndarray | list) ndarray[source]

Converts array of datetime to decimal_hours.

utils.time_grid(time_step: int = 30) ndarray[source]

Returns decimal hour array between 0 and 24.

Computes fraction hour time vector 0-24 with user-given resolution (in seconds).

Parameters:

time_step – Time resolution in seconds, greater than 1. Default is 30.

Returns:

Time vector between 0 and 24.

Raises:

ValueError – Bad resolution as input.

utils.binvec(x: ndarray | list) ndarray[source]

Converts 1-D center points to bins with even spacing.

Parameters:

x – 1-D array of N real values.

Returns:

N + 1 edge values.

Return type:

ndarray

Examples

>>> binvec([1, 2, 3])
    [0.5, 1.5, 2.5, 3.5]
utils.rebin_2d(x_in: ndarray, array: MaskedArray, x_new: ndarray, statistic: Literal['mean', 'std'] = 'mean', n_min: int = 1, *, mask_zeros: bool = True) tuple[MaskedArray, list][source]

Rebins 2-D data in one dimension.

Parameters:
  • x_in – 1-D array with shape (n,).

  • array – 2-D input data with shape (n, m).

  • x_new – 1-D target vector (center points) with shape (N,).

  • statistic – Statistic to be calculated. Possible statistics are ‘mean’, ‘std’. Default is ‘mean’.

  • n_min – Minimum number of points to have good statistics in a bin. Default is 1.

  • mask_zeros – Whether to mask 0 values in the returned array. Default is True.

Returns:

Rebinned data with shape (N, m) and indices of bins without enough data.

Return type:

tuple
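
A minimal sketch rebinning ten profiles into two time bins (values are illustrative):

>>> import numpy as np
>>> import numpy.ma as ma
>>> from cloudnetpy import utils
>>> time = np.linspace(0, 1, 10)
>>> data = ma.MaskedArray(np.random.rand(10, 5))
>>> binned, empty_bins = utils.rebin_2d(time, data, np.array([0.25, 0.75]))
>>> binned.shape
    (2, 5)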

utils.rebin_1d(x_in: ndarray, array: ndarray | MaskedArray, x_new: ndarray, statistic: str = 'mean', *, mask_zeros: bool = True) MaskedArray[source]

Rebins 1D array.

Parameters:
  • x_in – 1-D array with shape (n,).

  • array – 1-D input data with shape (m,).

  • x_new – 1-D target vector (center points) with shape (N,).

  • statistic – Statistic to be calculated. Possible statistics are ‘mean’, ‘std’. Default is ‘mean’.

  • mask_zeros – Whether to mask 0 values in the returned array. Default is True.

Returns:

Re-binned data with shape (N,).

utils.filter_isolated_pixels(array: ndarray) ndarray[source]

From a 2D boolean array, remove completely isolated single cells.

Parameters:

array – 2-D boolean array containing isolated values.

Returns:

Cleaned array.

Examples

>>> filter_isolated_pixels([[0, 0, 0], [0, 1, 0], [0, 0, 0]])
    array([[0, 0, 0],
           [0, 0, 0],
           [0, 0, 0]])
utils.filter_x_pixels(array: ndarray) ndarray[source]

From a 2D boolean array, remove cells isolated in x-direction.

Parameters:

array – 2-D boolean array containing isolated pixels in x-direction.

Returns:

Cleaned array.

Notes

Stronger cleaning than filter_isolated_pixels().

Examples

>>> filter_x_pixels([[1, 0, 0], [0, 1, 0], [0, 1, 1]])
    array([[0, 0, 0],
           [0, 1, 0],
           [0, 1, 0]])
utils.isbit(array: ndarray, nth_bit: int) ndarray[source]

Tests if nth bit (0,1,2,…) is set.

Parameters:
  • array – Integer array.

  • nth_bit – Investigated bit.

Returns:

Boolean array denoting values where nth_bit is set.

Raises:

ValueError – negative bit as input.

Examples

>>> isbit(4, 1)
    False
>>> isbit(4, 2)
    True

See also

utils.setbit()

utils.setbit(array: ndarray, nth_bit: int) ndarray[source]

Sets nth bit (0, 1, 2, …) on number.

Parameters:
  • array – Integer array.

  • nth_bit – Bit to be set.

Returns:

Integer where nth bit is set.

Raises:

ValueError – negative bit as input.

Examples

>>> setbit(1, 1)
    3
>>> setbit(0, 2)
    4

See also

utils.isbit()

utils.interpolate_2d(x: ndarray, y: ndarray, z: ndarray, x_new: ndarray, y_new: ndarray) ndarray[source]

Linear interpolation of gridded 2d data.

Parameters:
  • x – 1-D array.

  • y – 1-D array.

  • z – 2-D array at points (x, y).

  • x_new – 1-D array.

  • y_new – 1-D array.

Returns:

Interpolated data.

Notes

Does not work with NaNs. Ignores the mask of masked data. Does not extrapolate.
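
A brief sketch of regridding a small field (values are illustrative):

>>> import numpy as np
>>> from cloudnetpy import utils
>>> x, y = np.arange(5, dtype=float), np.arange(4, dtype=float)
>>> z = np.random.rand(5, 4)
>>> z_new = utils.interpolate_2d(x, y, z, np.linspace(0, 4, 9), np.linspace(0, 3, 7))
>>> z_new.shape
    (9, 7)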

utils.interpolate_2d_mask(x: ndarray, y: ndarray, z: MaskedArray, x_new: ndarray, y_new: ndarray) MaskedArray[source]

2D linear interpolation preserving the mask.

Parameters:
  • x – 1D array, x-coordinates.

  • y – 1D array, y-coordinates.

  • z – 2D masked array, data values.

  • x_new – 1D array, new x-coordinates.

  • y_new – 1D array, new y-coordinates.

Returns:

Interpolated 2D masked array.

Notes

Points outside the original range will be nans (and masked). Uses linear interpolation. Input data may contain nan-values.

utils.interpolate_2d_nearest(x: ndarray, y: ndarray, z: ndarray, x_new: ndarray, y_new: ndarray) MaskedArray[source]

2D nearest neighbor interpolation preserving mask.

Parameters:
  • x – 1D array, x-coordinates.

  • y – 1D array, y-coordinates.

  • z – 2D masked array, data values.

  • x_new – 1D array, new x-coordinates.

  • y_new – 1D array, new y-coordinates.

Returns:

Interpolated 2D masked array.

Notes

Points outside the original range will be interpolated but masked.

utils.calc_relative_error(reference: ndarray, array: ndarray) ndarray[source]

Calculates relative error (%).

utils.db2lin(array: float | ndarray, scale: int = 10) ndarray[source]

dB to linear conversion.

utils.lin2db(array: ndarray, scale: int = 10) ndarray[source]

Linear to dB conversion.
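
For instance, the two conversions are inverses of each other (a sketch):

>>> import numpy as np
>>> from cloudnetpy import utils
>>> z_lin = utils.db2lin(np.array([-30.0, 0.0, 10.0]))
>>> utils.lin2db(z_lin)
    [-30.0, 0.0, 10.0]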

utils.mdiff(array: ndarray) float[source]

Returns median difference of 1-D array.

utils.l2norm(*args) MaskedArray[source]

Returns l2 norm.

Parameters:

*args – Variable number of data (array_like) with the same shape.

Returns:

The l2 norm.

utils.l2norm_weighted(values: tuple, overall_scale: float, term_weights: tuple) MaskedArray[source]

Calculates scaled and weighted Euclidean distance.

The calculated distance is of the form: scale * sqrt((a1*a)**2 + (b1*b)**2 + …), where a, b, … are the terms to be summed and a1, b1, … are optional weights for the terms.

Parameters:
  • values – Tuple containing the values.

  • overall_scale – Scale factor for the calculated Euclidean distance.

  • term_weights – Weights for the terms. Must be a single float or a list of numbers (one per term).

Returns:

Scaled and weighted Euclidean distance.

TODO: Use masked arrays instead of tuples.
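
For example, combining two error terms with unit weights and no overall scaling gives the plain Euclidean norm (a sketch):

>>> from cloudnetpy import utils
>>> utils.l2norm_weighted((3.0, 4.0), overall_scale=1.0, term_weights=(1.0, 1.0))
    5.0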

utils.cumsumr(array: ndarray, axis: int = 0) ndarray[source]

Finds cumulative sum that resets on 0.

Parameters:
  • array – Input array.

  • axis – Axis where the sum is calculated. Default is 0.

Returns:

Cumulative sum, restarted at 0.

Examples

>>> x = np.array([0, 0, 1, 1, 0, 0, 0, 1, 1, 1])
>>> cumsumr(x)
    [0, 0, 1, 2, 0, 0, 0, 1, 2, 3]
utils.ffill(array: ndarray, value: int = 0) ndarray[source]

Forward fills an array.

Parameters:
  • array – 1-D or 2-D array.

  • value – Value to be filled. Default is 0.

Returns:

Forward-filled array.

Return type:

ndarray

Examples

>>> x = np.array([0, 5, 0, 0, 2, 0])
>>> ffill(x)
    [0, 5, 5, 5, 2, 2]

Notes

Works only in axis=1 direction.

utils.init(n_vars: int, shape: tuple, dtype: type = <class 'float'>, *, masked: bool = True) Iterator[ndarray | MaskedArray][source]

Initializes several numpy arrays.

Parameters:
  • n_vars – Number of arrays to be generated.

  • shape – Shape of the arrays, e.g. (2, 3).

  • dtype – The desired data-type for the arrays, e.g., int. Default is float.

  • masked – If True, generated arrays are masked arrays, else ordinary numpy arrays. Default is True.

Yields:

Iterator containing the empty arrays.

Examples

>>> a, b = init(2, (2, 3))
>>> a
    masked_array(
      data=[[0., 0., 0.],
            [0., 0., 0.]],
      mask=False,
      fill_value=1e+20)
utils.n_elements(array: ndarray, dist: float, var: str | None = None) int[source]

Returns the number of elements that cover certain distance.

Parameters:
  • array – Input array with arbitrary units, or time as fraction hour. The array should be evenly spaced, or at least approximately so.

  • dist – Distance to be covered. If the array is fraction hour, dist is in minutes. Otherwise, array and dist should have the same units.

  • var – If ‘time’, the input is fraction hour and the distance is in minutes; otherwise the inputs have the same units. Default is None (same units).

Returns:

Number of elements in the input array that cover dist.

Examples

>>> x = np.array([2, 4, 6, 8, 10])
>>> n_elements(x, 6)
    3

The result is rounded to the closest integer, so:

>>> n_elements(x, 6.9)
    3
>>> n_elements(x, 7)
    4

With fraction hour time vector:

>>> x = np.linspace(0, 1, 61)
>>> n_elements(x, 10, 'time')
    10
utils.isscalar(array: ndarray | float | list | Variable) bool[source]

Tests if input is scalar.

By “scalar” we mean that array has a single value.

Examples

>>> isscalar(1)
    True
>>> isscalar([1])
    True
>>> isscalar(np.array(1))
    True
>>> isscalar(np.array([1]))
    True
utils.get_time() str[source]

Returns the current UTC time.

utils.date_range(start_date: date, end_date: date) Iterator[date][source]

Returns an iterator over the dates between start_date and end_date.

utils.get_uuid() str[source]

Returns unique identifier.

utils.get_wl_band(radar_frequency: float) int[source]

Returns integer corresponding to radar frequency.

Parameters:

radar_frequency – Radar frequency (GHz).

Returns:

0 = 35 GHz radar, 1 = 94 GHz radar.

utils.get_frequency(wl_band: int) str[source]

Returns radar frequency string corresponding to wl band.

utils.transpose(data: ndarray) ndarray[source]

Transposes a numpy array of shape (n,) to (n, 1).

utils.del_dict_keys(data: dict, keys: tuple | list) dict[source]

Deletes multiple keys from dictionary.

Parameters:
  • data – A dictionary.

  • keys – Keys to be deleted.

Returns:

Dictionary without the deleted keys.

utils.array_to_probability(array: ndarray, loc: float, scale: float, *, invert: bool = False) ndarray[source]

Converts continuous variable into 0-1 probability.

Parameters:
  • array – Numpy array.

  • loc – Center of the distribution. Values smaller than this will have small probability. Values greater than this will have large probability.

  • scale – Width of the distribution, i.e., how fast the probability drops or increases from the peak.

  • invert – If True, large values have small probability and vice versa. Default is False.

Returns:

Probability with the same shape as the input data.
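
A sketch of mapping temperature values to a 0-1 probability centered at loc (the values here are illustrative):

>>> import numpy as np
>>> from cloudnetpy import utils
>>> temperature = np.array([250.0, 270.0, 273.0, 280.0])
>>> prob = utils.array_to_probability(temperature, loc=273.0, scale=2.0)
>>> prob  # values well below loc get a probability near 0, values well above near 1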

utils.range_to_height(range_los: ndarray, tilt_angle: float) ndarray[source]

Converts distances from a tilted instrument to height above the ground.

Parameters:
  • range_los – Distances along the line of sight from the instrument.

  • tilt_angle – Angle in degrees from the zenith (0 = zenith).

Returns:

Altitudes of the LOS points.

Notes

Uses plane parallel Earth approximation.
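
Under the plane parallel approximation the conversion is simply height = range_los * cos(tilt_angle), which gives roughly 866 m and 1732 m for ranges of 1000 m and 2000 m at a 30 degree tilt (a sketch):

>>> import numpy as np
>>> from cloudnetpy import utils
>>> utils.range_to_height(np.array([1000.0, 2000.0]), tilt_angle=30.0)
    [866.0, 1732.1]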

utils.is_empty_line(line: str) bool[source]

Tests if a line (of a text file) is empty.

utils.is_timestamp(timestamp: str) bool[source]

Tests if the input string is formatted as -yyyy-mm-dd hh:mm:ss.

utils.get_sorted_filenames(file_path: str, extension: str) list[source]

Returns full paths of files with some extension, sorted by filename.

utils.str_to_numeric(value: str) int | float[source]

Converts string to number (int or float).

utils.get_epoch(units: str) tuple[int, int, int][source]

Finds epoch from units string.

utils.screen_by_time(data_in: dict, epoch: tuple[int, int, int], expected_date: str) dict[source]

Screens data by time.

Parameters:
  • data_in – Dictionary containing at least ‘time’ key and other numpy arrays.

  • epoch – Epoch of the time array, e.g., (1970, 1, 1)

  • expected_date – Expected date in yyyy-mm-dd

Returns:

Data dictionary screened and sorted by the time vector.

Return type:

dict

Notes

  • Requires ‘time’ key

  • Works for dimensions 1, 2, 3 (time has to be at 0-axis)

  • Does nothing for scalars

utils.find_valid_time_indices(time: ndarray, epoch: tuple[int, int, int], expected_date: str) list[source]

Finds valid time array indices for the given date.

Parameters:
  • time – Time in seconds from some epoch.

  • epoch – Epoch of the time array, e.g., (1970, 1, 1)

  • expected_date – Expected date in yyyy-mm-dd

Returns:

Valid indices for the given date in sorted order.

Return type:

list

Raises:

RuntimeError – No valid timestamps.

Examples

>>> time = [1, 5, 1e6, 3]
>>> find_valid_time_indices(time, (1970, 1, 1), '1970-01-01')
    [0, 3, 1]
utils.append_data(data_in: dict, key: str, array: ndarray) dict[source]

Appends data to a dictionary field (creates the field if not yet present).

Parameters:
  • data_in – Dictionary where data will be appended.

  • key – Key of the field.

  • array – Numpy array to be appended to data_in[key].

utils.edges2mid(data: ndarray, reference: Literal['upper', 'lower']) ndarray[source]

Shifts values by half a bin up or down.

Parameters:
  • data – 1D numpy array (e.g. range)

  • reference – If ‘lower’, increase values by half bin. If ‘upper’, decrease values.

Returns:

Shifted values.

utils.get_file_type(filename: str) str[source]

Returns Cloudnet file type from new and legacy files.

utils.get_files_with_common_range(filenames: list) list[source]

Returns files with the same (most common) number of range gates.

utils.get_files_with_variables(filenames: list, variables: list[str]) list[source]

Returns files where all variables exist.

utils.is_all_masked(array: ndarray) bool[source]

Tests if all values are masked.

utils.find_masked_profiles_indices(array: MaskedArray) list[source]

Finds indices of masked profiles in a 2-D array.

utils.remove_masked_blocks(array: MaskedArray, limit: int = 50) ndarray[source]

Filters out large blocks of completely masked profiles.

utils.sha256sum(filename: str | PathLike) str[source]

Calculates hash of file using sha-256.

utils.md5sum(filename: str | PathLike, *, is_base64: bool = False) str[source]

Calculates hash of file using md5.

cloudnetarray

CloudnetArray class.

class cloudnetarray.CloudnetArray(variable: Variable | ndarray | float, name: str, units_from_user: str | None = None, dimensions: Sequence[str] | None = None, data_type: str | None = None)[source]

Stores netCDF4 variables, numpy arrays and scalars as CloudnetArrays.

Parameters:
  • variable – The netCDF4 Variable instance, numpy array (masked or regular), or scalar (float, int).

  • name – Name of the variable.

  • units_from_user – Explicit units, optional.

  • dimensions – Explicit dimension names, optional.

  • data_type – Explicit data type, optional.

lin2db() None[source]

Converts linear units to log.

db2lin() None[source]

Converts log units to linear.

mask_indices(ind: list) None[source]

Masks data from given indices.

rebin_data(time: ndarray, time_new: ndarray, *, mask_zeros: bool = True) list[source]

Rebins data in time.

Parameters:
  • time – 1D time array.

  • time_new – 1D new time array.

  • mask_zeros – Whether to mask 0 values in the returned array. Default is True.

Returns:

Time indices without data.

fetch_attributes() list[source]

Returns list of user-defined attributes.

set_attributes(attributes: MetaData) None[source]

Overwrites existing instance attributes.

filter_isolated_pixels() None[source]

Filters hot pixels from radar data.

filter_vertical_stripes() None[source]

Filters vertical artifacts from radar data.

calc_linear_std(time: ndarray, time_new: ndarray) None[source]

Calculates std of radar velocity.

Parameters:
  • time – 1D time array.

  • time_new – 1D new time array.

Notes

The result is masked if the bin contains masked values.

rebin_velocity(time: ndarray, time_new: ndarray, folding_velocity: float | ndarray, sequence_indices: list) None[source]

Rebins Doppler velocity in polar coordinates.

Parameters:
  • time – 1D time array.

  • time_new – 1D new time array.

  • folding_velocity – Folding velocity (m/s). Can be a float when it’s the same for all altitudes, or np.ndarray when it matches different altitude regions (defined in sequence_indices).

  • sequence_indices – List containing indices of different folding regions, e.g. [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9, 10]].
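
A minimal sketch of wrapping a plain numpy array as a CloudnetArray and converting its units (the variable name and units are illustrative):

>>> import numpy as np
>>> from cloudnetpy.cloudnetarray import CloudnetArray
>>> z = CloudnetArray(np.array([0.001, 1.0, 10.0]), name='Z', units_from_user='mm6 m-3')
>>> z.lin2db()  # converts the stored data from linear units to dB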

datasource

Datasource module, containing the DataSource class.

class datasource.DataSource(full_path: PathLike | str, *, radar: bool = False)[source]

Base class for all Cloudnet measurements and model data.

Parameters:
  • full_path – Calibrated instrument / model NetCDF file.

  • radar – Indicates if data is from cloud radar. Default is False.

filename

Filename of the input file.

Type:

str

dataset

A netCDF4 Dataset instance.

Type:

netCDF4.Dataset

source

Global attribute source read from the input file.

Type:

str

time

Time array of the instrument.

Type:

ndarray

altitude

Altitude of instrument above mean sea level (m).

Type:

float

data

Dictionary containing CloudnetArray instances.

Type:

dict

getvar(*args) ndarray[source]

Returns data array from the source file variables.

Returns just the data (and no attributes) from the original variables dictionary, fetched from the input netCDF file.

Parameters:

*args – possible names of the variable. The first match is returned.

Returns:

The actual data.

Return type:

ndarray

Raises:

RuntimeError – The variable is not found.
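
For instance (a sketch, assuming 'radar.nc' is a calibrated instrument file; the variable names are illustrative):

>>> from cloudnetpy.datasource import DataSource
>>> source = DataSource('radar.nc')
>>> height = source.getvar('height', 'range')  # returns the first matching variable
>>> source.close()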

append_data(variable: Variable | ndarray | float, key: str, name: str | None = None, units: str | None = None, dtype: str | None = None) None[source]

Adds new CloudnetVariable or RadarVariable into data attribute.

Parameters:
  • variable – netCDF variable or data array to be added.

  • key – Key used with variable when added to data attribute (dictionary).

  • name – CloudnetArray.name attribute. Default value is key.

  • units – CloudnetArray.units attribute.

  • dtype – CloudnetArray.data_type attribute.

get_date() list[source]

Returns date components.

Returns:

Date components [YYYY, MM, DD].

Return type:

list

Raises:

RuntimeError – Not found or invalid date.

close() None[source]

Closes the open file.

static to_m(var: Variable) ndarray[source]

Converts km to m.

static to_km(var: Variable) ndarray[source]

Converts m to km.

output

Functions for file writing.

output.save_level1b(obj, output_file: PathLike | str, uuid: UUID | str | None = None) str[source]

Saves Cloudnet Level 1b file.

output.save_product_file(short_id: str, obj: DataSource, file_name: str, uuid: str | None = None, copy_from_cat: tuple = ()) str[source]

Saves a standard Cloudnet product file.

Parameters:
  • short_id – Short file identifier, e.g. ‘lwc’, ‘iwc’, ‘drizzle’, ‘classification’.

  • obj – Instance containing product specific attributes: time, dataset, data.

  • file_name – Name of the output file to be generated.

  • uuid – Set specific UUID for the file.

  • copy_from_cat – Variables to be copied from the categorize file.

output.get_l1b_source(instrument: Instrument) str[source]

Returns level 1b file source.

output.get_l1b_history(instrument: Instrument) str[source]

Returns level 1b file history.

output.get_l1b_title(instrument: Instrument, location: str) str[source]

Returns level 1b file title.

output.get_references(identifier: str | None = None, extra: list | None = None) str[source]

Returns references.

Parameters:
  • identifier – Cloudnet file type, e.g., ‘iwc’.

  • extra – List of additional references to include.

output.get_source_uuids(data: Observations | list[Dataset | DataSource]) str[source]

Returns file_uuid attributes of objects.

Parameters:

data – Observations instance or a list of Dataset / DataSource objects.

Returns:

UUIDs separated by comma.

Return type:

str

output.merge_history(nc: Dataset, file_type: str, data: Observations | DataSource) None[source]

Merges history fields from one or several files and creates a new record.

Parameters:
  • nc – The netCDF Dataset instance.

  • file_type – Long description of the file.

  • data – Dictionary of objects with history attribute.

output.add_source_instruments(nc: Dataset, data: Observations) None[source]

Adds source attribute to categorize file.

output.init_file(file_name: PathLike | str, dimensions: dict, cloudnet_arrays: dict, uuid: UUID | str | None = None) Dataset[source]

Initializes a Cloudnet file for writing.

Parameters:
  • file_name – File name to be generated.

  • dimensions – Dictionary containing the dimensions for this file.

  • cloudnet_arrays – Dictionary containing CloudnetArray instances.

  • uuid – Set specific UUID for the file.

output.copy_variables(source: Dataset, target: Dataset, keys: tuple) None[source]

Copies variables (and their attributes) from one file to another.

Parameters:
  • source – Source object.

  • target – Target object.

  • keys – Variable names to be copied.
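
A sketch of copying a couple of variables between two open files (file and variable names are hypothetical):

>>> import netCDF4
>>> from cloudnetpy import output
>>> source = netCDF4.Dataset('categorize.nc')
>>> target = netCDF4.Dataset('product.nc', 'a')
>>> output.copy_variables(source, target, ('altitude', 'latitude'))
>>> source.close()
>>> target.close()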

output.copy_global(source: Dataset, target: Dataset, attributes: tuple) None[source]

Copies global attributes from one file to another.

Parameters:
  • source – Source object.

  • target – Target object.

  • attributes – List of attributes to be copied.

output.add_time_attribute(attributes: dict, date: list[str] | date, key: str = 'time') dict[source]

Adds time attribute with correct units.

output.add_source_attribute(attributes: dict, data: Observations) dict[source]

Adds source attribute to variables.

output.update_attributes(cloudnet_variables: dict, attributes: dict) None[source]

Overrides existing CloudnetArray-attributes.

Overrides existing attributes using hard-coded values. New attributes are added.

Parameters:
  • cloudnet_variables – CloudnetArray instances.

  • attributes – Product-specific attributes.

output.fix_attribute_name(nc: Dataset) None[source]

Changes the incorrect ‘unit’ variable attribute to the correct ‘units’.

This occurs at least for the ‘drg’ variable in raw MIRA files.