Orchestration & data

PowerEnv is the orchestration façade over a grid and attached resources. DataLoader loads time-series traces for grids and renewables.

powerzoo.envs.power_env.PowerEnv(config, reward_fn=None)

Bases: BaseEnv

Unified scenario environment built from one grid plus attached resources.

from_yaml(yaml_path) classmethod

reset(*, seed=None, options=None, day_id=None)

step(action)

Apply a Dict action and return (obs, reward, terminated, truncated, info).

Action structure

The native action space is gymnasium.spaces.Dict keyed by resource ID (e.g. {'battery_0': {'charge_rate_mw': 0.5}}), plus an optional 'unit_power_mw' key for direct generator setpoints.

.. note:: Standard RL libraries that do not support Dict action spaces (e.g. Stable-Baselines3) require an ActionWrapper such as :class:powerzoo.wrappers.FlattenActionWrapper to flatten the Dict into a contiguous array and unflatten it back before calling this method. Passing a flat array directly will raise TypeError.
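To illustrate what such a wrapper has to do, here is a minimal, self-contained sketch of flattening a nested Dict action into a contiguous array and rebuilding it. It does not use powerzoo or gymnasium; the action layout mirrors the example above, and the helper names (`flatten_action`, `unflatten_action`) are illustrative, not part of the library API.

```python
import numpy as np

# Hypothetical Dict action mirroring the example above.
action = {
    "battery_0": {"charge_rate_mw": 0.5},
    "unit_power_mw": np.array([10.0, 4.0]),
}

def flatten_action(action):
    """Flatten a nested dict of scalars/arrays into one array plus a layout spec."""
    parts, spec = [], []
    def walk(path, value):
        if isinstance(value, dict):
            for key in sorted(value):          # fixed key order for a stable layout
                walk(path + (key,), value[key])
        else:
            arr = np.atleast_1d(np.asarray(value, dtype=float))
            spec.append((path, arr.shape))
            parts.append(arr.ravel())
    walk((), action)
    return np.concatenate(parts), spec

def unflatten_action(flat, spec):
    """Rebuild the nested dict from a flat array using the recorded spec."""
    out, i = {}, 0
    for path, shape in spec:
        n = int(np.prod(shape))
        value = flat[i:i + n].reshape(shape)
        i += n
        node = out
        for key in path[:-1]:
            node = node.setdefault(key, {})
        node[path[-1]] = value if value.size > 1 else value.item()
    return out

flat, spec = flatten_action(action)
rebuilt = unflatten_action(flat, spec)
```

A real ActionWrapper would apply the equivalent of `unflatten_action` inside its `action()` hook, so the policy sees a flat Box while `step()` still receives the native Dict.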

render(mode='human')

close()

get_resource_metadata(resource_id)

get_resource_status()


powerzoo.data.data_loader.DataLoader(data_dir=None, manifest_dir=None)

Benchmark data facade backed by parquet files and manifest metadata.

Features:

- Semantic signal loading via :meth:load_signals
- Calendar / profile time-alignment across heterogeneous sources
- Forecast-panel loading with issue_time / target_time
- Legacy column-based loading via :meth:load_data
- Resampling, date filtering, multi-dataset merge

Initialize DataLoader.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| data_dir | Optional[Union[str, Path]] | Directory containing parquet files. If None, defaults to powerzoo/data/parquet. | None |
| manifest_dir | Optional[Union[str, Path]] | Directory containing manifest JSON files. If None, defaults to powerzoo/data/manifests. | None |

load_signals(signals, *, source=None, region=None, start_date=None, end_date=None, resample=None, time_alignment=None, interpolation='linear')

Load one or more semantic signals into a single DataFrame.

This is the primary public entry-point for data access.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| signals | List[str] | Semantic signal names (e.g. ["load.actual_mw", "solar.available_mw"]). | required |
| source | Optional[str] | Restrict lookup to a specific data source ("gb", "aemo", "alibaba", …). | None |
| region | Optional[str] | Filter by region (e.g. "NSW1"). | None |
| start_date, end_date | | Simulation time window. | required |
| resample | Optional[str] | Target frequency ("5min", "30min", …). | None |
| time_alignment | Optional[Dict[str, str]] | Per-signal calendar-shift overrides. {"solar.available_mw": "2024-01-01"} means "read solar data starting from 2024-01-01 and map it onto start_date". | None |
| interpolation | str | Interpolation method when resampling. | 'linear' |

Returns:

| Type | Description |
| --- | --- |
| DataFrame | DataFrame indexed by datetime (simulation timeline). |
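The time_alignment semantics above can be sketched with plain pandas: data recorded from one calendar date is remapped onto the simulation start by shifting its index. This illustrates the intended behaviour only, not the DataLoader internals; the dates and the synthetic solar trace are made up for the example.

```python
import pandas as pd

# Assumed example: simulation starts 2023-06-01, but solar data should be
# read from 2024-01-01 and mapped onto the simulation timeline.
sim_start = pd.Timestamp("2023-06-01")
solar_read_start = pd.Timestamp("2024-01-01")

# Synthetic half-hourly solar trace starting at its own calendar date.
solar = pd.Series(
    range(48),
    index=pd.date_range(solar_read_start, periods=48, freq="30min"),
    name="solar.available_mw",
)

# Calendar shift: move the whole index so the first sample lands on sim_start.
shifted = solar.copy()
shifted.index = solar.index + (sim_start - solar_read_start)
```

The shifted series now lines up with other signals loaded on the simulation timeline, which is what a `time_alignment={"solar.available_mw": "2024-01-01"}` override asks for.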

load_actual_series(signals, **kwargs)

Convenience wrapper: load only actual_series data.

load_forecast_panel(signals, *, source=None, region=None, start_date=None, end_date=None)

Load forecast-panel data with issue_time + target_time.

Returns a DataFrame with columns region, issue_time, target_time, and the requested signal columns. The panel is not flattened to a single datetime axis.
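A common downstream step with such a panel is selecting, for each target_time, the most recent forecast issued at or before a cutoff. Here is a hedged pandas sketch of that pattern; the column names follow the description above, while the data and the signal column `load.forecast_mw` are synthetic.

```python
import pandas as pd

# Synthetic forecast panel: two issue times, each forecasting two targets.
panel = pd.DataFrame({
    "region": ["NSW1"] * 4,
    "issue_time": pd.to_datetime(
        ["2024-01-01 00:00", "2024-01-01 06:00",
         "2024-01-01 00:00", "2024-01-01 06:00"]),
    "target_time": pd.to_datetime(
        ["2024-01-01 12:00", "2024-01-01 12:00",
         "2024-01-01 13:00", "2024-01-01 13:00"]),
    "load.forecast_mw": [100.0, 105.0, 98.0, 102.0],
})

# For each target_time, keep the latest forecast issued by the cutoff.
cutoff = pd.Timestamp("2024-01-01 06:00")
latest = (
    panel[panel["issue_time"] <= cutoff]
    .sort_values("issue_time")
    .groupby("target_time", as_index=False)
    .last()
)
```

This keeps the panel's two-axis structure intact until the caller decides which forecast vintage to use, which is why the method does not flatten to a single datetime axis.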

load_data(dataset_name=None, columns=None, start_date=None, end_date=None, validate_dates=True, merge_how='inner', resample=None, interpolation='linear')

Load data from parquet file(s) with optional resampling.

If dataset_name is not provided, automatically finds datasets based on columns. If columns span multiple datasets, loads and merges them on datetime.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| dataset_name | Optional[str] | Name of dataset (without extension). If None, auto-detect from columns. | None |
| columns | Optional[List[str]] | List of columns to load. Required if dataset_name is None. | None |
| start_date | Optional[Union[str, datetime, date]] | Start date for filtering (inclusive). | None |
| end_date | Optional[Union[str, datetime, date]] | End date for filtering (inclusive). | None |
| validate_dates | bool | If True, validate dates against metadata date_ranges. | True |
| merge_how | str | How to merge multiple datasets. Default 'inner': keep only timestamps present in all requested datasets (avoids NaNs when UK settlement half-hours differ slightly between NGESO exports or around DST boundaries). Use 'outer' for the union of timelines if you will handle missing values yourself. | 'inner' |
| resample | Optional[str] | Resample frequency ('5min', '15min', '30min', '60min'). If None, uses the original data frequency. | None |
| interpolation | str | Interpolation method for resampling. Options: 'linear', 'quadratic', 'cubic', 'spline', 'nearest', 'zero', 'slinear', 'pchip', 'akima'. | 'linear' |

Returns:

| Type | Description |
| --- | --- |
| DataFrame | DataFrame with loaded data. |
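The merge_how trade-off described above can be demonstrated with plain pandas: two synthetic half-hourly datasets whose timelines are offset by one settlement period. This is an illustration of the inner-vs-outer behaviour, not of load_data's internals, and the column names are invented for the example.

```python
import pandas as pd

# Two datasets whose datetime indexes overlap but do not coincide exactly,
# as can happen between separate NGESO exports or around DST boundaries.
idx_a = pd.date_range("2024-01-01 00:00", periods=4, freq="30min")
idx_b = pd.date_range("2024-01-01 00:30", periods=4, freq="30min")
a = pd.DataFrame({"demand_mw": [30.0, 31.0, 32.0, 33.0]}, index=idx_a)
b = pd.DataFrame({"wind_mw": [5.0, 6.0, 7.0, 8.0]}, index=idx_b)

inner = a.join(b, how="inner")   # only the shared timestamps, no NaNs
outer = a.join(b, how="outer")   # union of timelines, NaNs at the edges
```

With 'inner' the three shared timestamps survive and every cell is populated; with 'outer' the result covers all five timestamps but the caller must fill or drop the resulting NaNs.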